3. Run `./script/dev-cluster.sh` to launch a test cluster (feel free to read the script)
4. Run `./script/dev-configure.sh` to configure your test cluster with default values (same datacenter, 100 tokens)
5. Run `./script/dev-bucket.sh` to create a bucket named `éprouvette` and an API key stored in `/tmp/garage.s3`
6. Run `source ./script/dev-env.sh` to configure your environment:
   - `garage` to manage the cluster. Try `garage --help`.
   - `s3grg` to add, list, and remove files. Try `s3grg --help`, `s3grg put /proc/cpuinfo s3://éprouvette/cpuinfo.txt`, or `s3grg ls s3://éprouvette`. `s3grg` is a wrapper around `s3cmd`, configured with the API key created above (the one in `/tmp/garage.s3`).
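Putting these steps together, a typical session might look like the sketch below (this assumes `dev-cluster.sh` keeps running in the foreground, so the remaining commands go in a second terminal):

```bash
# Terminal 1: launch the local test cluster (keeps running)
./script/dev-cluster.sh

# Terminal 2: configure the cluster, create the bucket and API key, load the helpers
./script/dev-configure.sh
./script/dev-bucket.sh
source ./script/dev-env.sh

# Try the helpers out
garage --help
s3grg put /proc/cpuinfo s3://éprouvette/cpuinfo.txt
s3grg ls s3://éprouvette
```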
Then, run the Garage server using either `./target/release/garage server -c path/to/config_file.toml` or `cargo run --release -- server -c path/to/config_file.toml`.
Set the `RUST_LOG` environment variable to `garage=debug` to dump some debug information.
Set it to `garage=trace` to dump even more debug information.
Set it to `garage=warn` to show nothing except warnings and errors.
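For example, to launch the compiled server with debug logging (re-using the placeholder config path from above):

```bash
# Run the Garage server with debug-level logs from the garage crate
RUST_LOG=garage=debug ./target/release/garage server -c path/to/config_file.toml
```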
## Setting up cluster nodes
Once all your `garage` nodes are running, you will need to:
1. check that they are correctly talking to one another;
2. configure them with their physical location (in the case of a multi-dc deployment) and a number of "ring tokens" proportional to the storage space available on each node;
3. create some S3 API keys and buckets;
4. ???;
5. profit!
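For step 1, the quickest sanity check is to ask a node which peers it currently sees. A minimal sketch using the `garage` CLI (see `garage --help` for the full list of administrative commands):

```bash
# List the cluster members known to the contacted node;
# every node you started should appear here before going further.
garage status
```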
To run these administrative tasks, you will need to use the `garage` command-line tool and have it connect to one of the cluster's nodes on the RPC port.
The `garage` CLI also needs TLS keys and certificates of its own to authenticate and be authenticated in the cluster.
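As a hedged sketch only, pointing the CLI at a node could look like the command below. The flag names (`--rpc-host`, `--ca-cert`, `--client-cert`, `--client-key`) and the file paths are assumptions for illustration; check `garage --help` for the options your version actually accepts. Port 3901 is Garage's default RPC port.

```bash
# Hypothetical invocation: connect to one node's RPC port with the CLI's own
# TLS material (flag names and paths are assumptions; see `garage --help`).
garage \
  --rpc-host 127.0.0.1:3901 \
  --ca-cert /path/to/garage-ca.crt \
  --client-cert /path/to/garage-client.crt \
  --client-key /path/to/garage-client.key \
  status
```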