Garage is a lightweight S3-compatible distributed object store. Our main use case is to provide a distributed storage layer for small-scale self-hosted services such as Deuxfleurs.
- `cargo build` to build the project
- `RUST_BACKTRACE=1 RUST_LOG=garage=debug ./target/debug/garage server -c ./config.dev.toml` to launch a Garage test instance (data will be saved in `/tmp`, no encryption, only one instance)
Use the `genkeys.sh` script to generate TLS keys for encrypting communications between Garage nodes. The script takes no arguments and generates the keys in a `pki/` directory. It creates a certificate authority, `garage-ca`, which signs certificates for individual Garage nodes. Garage nodes from the same cluster authenticate one another by verifying that their certificates are signed by the same certificate authority.
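For reference, the kind of PKI that `genkeys.sh` sets up can be sketched with plain `openssl` commands. This is an illustrative sketch, not the script's actual contents; the file names follow the `pki/` paths used elsewhere in this README:

```shell
mkdir -p pki
# 1. Create the garage-ca certificate authority: a private key and a
#    self-signed certificate that will act as the root of trust.
openssl genrsa -out pki/garage-ca.key 4096
openssl req -x509 -new -key pki/garage-ca.key -subj "/CN=garage-ca" \
  -days 3650 -out pki/garage-ca.crt
# 2. Create a private key and a certificate signing request for one node.
openssl genrsa -out pki/garage.key 4096
openssl req -new -key pki/garage.key -subj "/CN=garage" -out pki/garage.csr
# 3. Sign the node certificate with the garage-ca authority; nodes then
#    trust each other because their certificates share this signer.
openssl x509 -req -in pki/garage.csr -CA pki/garage-ca.crt \
  -CAkey pki/garage-ca.key -CAcreateserial -days 365 -out pki/garage.crt
```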
Garage requires two locations to store its data: a metadata directory, and a data directory. The metadata directory is used to store metadata such as object lists, and should ideally be located on an SSD drive. The data directory is used to store the chunks of data of the objects stored in Garage. In a typical deployment the data directory is stored on a standard HDD.
Garage does not handle TLS for its S3 API endpoint. This should be handled by adding a reverse proxy.
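Since Garage serves its S3 API over plain HTTP, TLS termination can be delegated to any reverse proxy. As a hedged illustration (nginx, the hostname, and the certificate paths here are assumptions, not part of Garage), such a proxy might look like:

```nginx
# Hypothetical nginx reverse proxy terminating TLS in front of the S3 API.
server {
    listen 443 ssl;
    server_name s3.example.com;                           # assumed hostname
    ssl_certificate     /etc/ssl/certs/s3.example.com.crt;
    ssl_certificate_key /etc/ssl/private/s3.example.com.key;
    location / {
        proxy_pass http://[::1]:3900;    # api_bind_addr from the [s3_api] section
        proxy_set_header Host $host;
    }
}
```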
Create a configuration file with the following structure:
```toml
block_size = 1048576  # objects are split in blocks of maximum this number of bytes

metadata_dir = "/path/to/ssd/metadata/directory"
data_dir = "/path/to/hdd/data/directory"

rpc_bind_addr = "[::]:3901"  # the port other Garage nodes will use to talk to this node

bootstrap_peers = [
  # Ideally this list should contain the IP addresses of all other Garage nodes of the cluster.
  # Use Ansible or any kind of configuration templating to generate this automatically.
  "10.0.0.1:3901",
  "10.0.0.2:3901",
  "10.0.0.3:3901",
]

# optional: Garage can find cluster nodes automatically using a Consul server.
# Garage only does lookups but does not register itself; registration should be
# handled externally, e.g. by Nomad.
consul_host = "localhost:8500"  # optional: host name of a Consul server for automatic peer discovery
consul_service_name = "garage"  # optional: service name to look up on Consul

max_concurrent_rpc_requests = 12
data_replication_factor = 3
meta_replication_factor = 3
meta_epidemic_fanout = 3

[rpc_tls]
# NOT RECOMMENDED: you can skip this section if you don't want to encrypt intra-cluster traffic.
# Thanks to genkeys.sh, generating the keys and certificates is easy, so there is NO REASON NOT TO DO IT.
ca_cert = "/path/to/garage/pki/garage-ca.crt"
node_cert = "/path/to/garage/pki/garage.crt"
node_key = "/path/to/garage/pki/garage.key"

[s3_api]
api_bind_addr = "[::1]:3900"  # the S3 API port, HTTP without TLS. Add a reverse proxy for the TLS part.
s3_region = "garage"  # set this to anything. S3 API calls will fail if they are not made against the region set here.

[s3_web]
web_bind_addr = "[::1]:3902"
```
Build Garage using `cargo build --release`. Then, run it using either `./target/release/garage server -c path/to/config_file.toml` or `cargo run --release -- server -c path/to/config_file.toml`.
Set the `RUST_LOG` environment variable to `garage=debug` to dump some debug information. Set it to `garage=trace` to dump even more debug information. Set it to `garage=warn` to show nothing except warnings and errors.
Once all your `garage` nodes are running, you will need to perform some administrative tasks. To run them, use the `garage` command-line tool to connect to any of the cluster's nodes on the RPC port. The `garage` CLI also needs TLS keys and certificates of its own to authenticate and be authenticated in the cluster.
A typical invocation will be as follows:

```
./target/release/garage --ca-cert=pki/garage-ca.crt --client-cert=pki/garage-client.crt --client-key=pki/garage-client.key <...>
```
- `tables`: does a full sync of metadata; should not be necessary because it is done every hour by the system
- `block_refs`: very time consuming; useful if deletions have not been propagated, improves garbage collection
- `blocks`: very useful to resync/rebalance blocks between nodes
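As an illustration only (the `repair` subcommand name and its exact syntax are assumptions, not confirmed by this README), launching one of these tasks against a running cluster might look like:

```
./target/release/garage \
  --ca-cert=pki/garage-ca.crt \
  --client-cert=pki/garage-client.crt \
  --client-key=pki/garage-client.key \
  repair blocks
```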