forked from Deuxfleurs/garage
S3-compatible object store for small self-hosted geo-distributed deployments
Alex Auvolat
fa13cf6996
With the previous behaviour, repairing could see some data as absent and decide that the object or version was deleted, thus going on to delete the version and blocks. However, in the case where read_quorum + write_quorum <= replication_factor, entries may not yet be returned by the get, so data would have been deleted that should not have been. The new behaviour is more cautious and simply skips the entry when the warning is emitted.
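The quorum condition in the commit message follows from the pigeonhole principle: a read set and a write set over the same replicas are guaranteed to intersect only if their sizes sum to more than the replication factor. A minimal sketch (illustrative, not Garage's actual code):

```rust
// Illustration of why repair could wrongly conclude "deleted": a read
// is only guaranteed to overlap the most recent write when the two
// quorum sets must share at least one replica (pigeonhole principle).
fn read_sees_latest_write(replication_factor: u32, read_quorum: u32, write_quorum: u32) -> bool {
    read_quorum + write_quorum > replication_factor
}

fn main() {
    // With 3 replicas, R=2/W=2 always intersect.
    assert!(read_sees_latest_write(3, 2, 2));
    // R=1/W=2 may miss the latest write: exactly the case where an
    // entry can look absent to repair even though it exists.
    assert!(!read_sees_latest_write(3, 1, 2));
    println!("ok");
}
```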
src
.gitignore
Cargo.lock
Cargo.toml
genkeys.sh
LICENSE
Makefile
README.md
rustfmt.toml
test_delete.sh
test_read.sh
test_write.sh
TODO
Garage
THIS IS ALL WORK IN PROGRESS. NOTHING TO SEE YET BUT THANKS FOR YOUR INTEREST.
Garage implements an S3-compatible object store with high resiliency to network failures, machine failures, and sysadmin failures.
To log:
RUST_LOG=garage=debug cargo run --release -- server -c config_file.toml
What to repair

tables: do a full sync of metadata; should not be necessary because it is done every hour by the system
versions and block_refs: very time consuming; useful if deletions have not been propagated, improves garbage collection
blocks: very useful to resync/rebalance blocks between nodes
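These repair passes are where the cautious behaviour from the commit message applies. A minimal sketch (hypothetical names, not Garage's actual code) of the decision for an entry that the get does not return:

```rust
#[derive(Debug, PartialEq)]
enum RepairAction {
    DeleteVersionAndBlocks, // old behaviour on an absent entry
    SkipWithWarning,        // new, cautious behaviour
}

// What repair does when the get did not return the entry. Under
// read_quorum + write_quorum <= replication_factor the entry may
// simply not have propagated yet, so deleting would lose data.
fn on_absent_entry(cautious: bool) -> RepairAction {
    if cautious {
        RepairAction::SkipWithWarning
    } else {
        RepairAction::DeleteVersionAndBlocks
    }
}

fn main() {
    assert_eq!(on_absent_entry(true), RepairAction::SkipWithWarning);
    assert_eq!(on_absent_entry(false), RepairAction::DeleteVersionAndBlocks);
}
```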