The common behavior on S3 implementations is to force the user to empty the bucket before deleting it. If you want to remove a bucket, you have to delete all the objects in it first.
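For instance, with boto3 you could do something along these lines (the bucket name, endpoint and credentials are placeholders for your own setup; port 3900 is just the usual Garage S3 API port, adjust to your config):

```python
import boto3

# placeholder endpoint and credentials; replace with your own deployment's values
s3 = boto3.resource(
    "s3",
    endpoint_url="http://localhost:3900",
    aws_access_key_id="GK_EXAMPLE_KEY_ID",
    aws_secret_access_key="example_secret",
)

bucket = s3.Bucket("my-bucket")

# delete every object first (for versioned buckets you would also need
# bucket.object_versions.all().delete())...
bucket.objects.all().delete()

# ...then the now-empty bucket can be removed
bucket.delete()
```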
To give you an example, in a cluster with a couple of TiB of data the metadata folder is usually around 10 GiB or less
@Promotion1877 can you share some data on what garage is logging in response to these requests?
If I remember correctly, this could be due to the node ID being different
To clarify, the actual data (in almost all cases; there are some optimizations for small objects) is not written in this folder. Only references are kept here, while the actual data is stored in chunks…
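A rough sketch of the idea, just to picture the split between the two folders (the names, block size and hash choice here are made up for illustration, not what garage actually uses internally):

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # illustrative block size

def put_object(key, payload, metadata_store, block_store):
    """Toy model: the metadata store keeps only a list of block hashes per key,
    while the block contents live in a separate content-addressed data store."""
    refs = []
    for i in range(0, len(payload), BLOCK_SIZE):
        block = payload[i:i + BLOCK_SIZE]
        h = hashlib.blake2b(block).hexdigest()  # hash choice is illustrative
        block_store[h] = block   # the "data" folder: actual bytes
        refs.append(h)           # the "meta" folder: just references
    metadata_store[key] = refs

meta, data = {}, {}
put_object("photos/cat.jpg", b"\x00" * (3 * BLOCK_SIZE + 10), meta, data)
print(len(meta["photos/cat.jpg"]), "references,", len(data), "stored blocks")
```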
Thank you for clarifying the durability model. It seems that GarageHQ’s durability approach resembles a RAID-10 configuration, with each datacenter functioning like a RAID-0 set while the…
To be clear, we don't discourage you from running garage on a RAID data pool (be it mdadm, Btrfs, ZFS or hardware RAID), we simply feel like you should give a thought to the resiliency afforded by the…
It all depends on your durability target. For a simple 3-site deployment with the replication factor set to 3, we feel like it's interesting to use the disks as-is, so you can lose any disks in one zone without…
This could cause an accuracy issue with your logging manager, as the actual time of the event might be different from the time the log was written to the log file. Out of curiosity, what's the…
I don't think there will be; there is no specific prefix sharing depending on the object path, it depends on the hash of the actual data. The only thing that might make a difference is the block size you…
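Here's a quick sketch of what I mean (the fixed-size chunking and the blake2 hash are just for illustration, not necessarily what garage does internally):

```python
import hashlib

def block_hashes(key, payload, block_size):
    """Only the payload bytes are hashed; the object key is deliberately ignored."""
    return [
        hashlib.blake2b(payload[i:i + block_size]).hexdigest()
        for i in range(0, len(payload), block_size)
    ]

data = b"same bytes, different paths " * 50_000

a = block_hashes("backups/2024/db.dump", data, 1 << 20)
b = block_hashes("tmp/copy-of-db.dump", data, 1 << 20)
assert a == b  # identical content -> identical block hashes, whatever the prefix

# A different block size shifts the chunk boundaries, so the hashes no longer match.
c = block_hashes("backups/2024/db.dump", data, 1 << 18)
assert a != c
```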