S3-compatible object store for small self-hosted geo-distributed deployments https://garagehq.deuxfleurs.fr/
Alex 5768bf3622
First implementation of K2V (#293)
**Specification:**

View spec at [this URL](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/k2v/doc/drafts/k2v-spec.md)

- [x] Specify the structure of K2V triples
- [x] Specify the DVVS format used for causality detection
- [x] Specify the K2V index (just a counter of the number of values per partition key)
- [x] Specify single-item endpoints: ReadItem, InsertItem, DeleteItem (see the sketch after this list)
- [x] Specify index endpoint: ReadIndex
- [x] Specify multi-item endpoints: InsertBatch, ReadBatch, DeleteBatch
- [x] Move to JSON objects instead of tuples
- [x] Specify endpoints for polling for updates on single values (PollItem)
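
To give a feel for the single-item endpoints, here is a minimal sketch in Python. The URL shape, `sort_key` parameter and `X-Garage-Causality-Token` header are assumptions taken from the draft spec linked above; the endpoint address, bucket and keys are made-up placeholders, and AWS-signature authentication is left out for brevity.

```python
import requests  # third-party HTTP client, used here purely for illustration

# Hypothetical endpoint, bucket and keys; a real deployment would also
# require signed requests (AWS signature v4), omitted here.
K2V = "http://localhost:3904"
BUCKET, PK, SK = "mybucket", "users", "alice"

# InsertItem: PUT a value under (partition key, sort key).
requests.put(f"{K2V}/{BUCKET}/{PK}", params={"sort_key": SK}, data=b"hello")

# ReadItem: GET it back. Per the draft spec, a causality token comes back
# in a response header and must be echoed on later writes and deletes so
# that concurrent versions can be detected.
r = requests.get(f"{K2V}/{BUCKET}/{PK}", params={"sort_key": SK},
                 headers={"Accept": "application/octet-stream"})
token = r.headers.get("X-Garage-Causality-Token")

# DeleteItem: pass the token so the delete supersedes the version just read.
requests.delete(f"{K2V}/{BUCKET}/{PK}", params={"sort_key": SK},
                headers={"X-Garage-Causality-Token": token})
```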

**Implementation:**

- [x] Table for K2V items, causal contexts
- [x] Indexing mechanism and table for K2V index
- [x] Make API handlers a bit more generic
- [x] K2V API endpoint
- [x] K2V API router
- [x] ReadItem
- [x] InsertItem
- [x] DeleteItem
- [x] PollItem
- [x] ReadIndex
- [x] InsertBatch (batch request shapes are sketched after this list)
- [x] ReadBatch
- [x] DeleteBatch
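
For the batch endpoints, requests carry JSON objects rather than tuples. The sketch below is a rough illustration only: the `pk`/`sk`/`v` item fields, the `?search` marker and the query field names are assumptions based on the draft spec, the endpoint is the same hypothetical one as above, and authentication is again omitted.

```python
import base64
import requests

K2V = "http://localhost:3904"   # hypothetical K2V endpoint, as above
BUCKET = "mybucket"

# InsertBatch: one POST carrying several items as JSON objects; values
# are base64-encoded in the JSON representation.
items = [
    {"pk": "users", "sk": "alice", "v": base64.b64encode(b"hello").decode()},
    {"pk": "users", "sk": "bob",   "v": base64.b64encode(b"world").decode()},
]
requests.post(f"{K2V}/{BUCKET}", json=items)

# ReadBatch: POST a list of range queries, one per partition key.
queries = [{"partitionKey": "users", "prefix": "a", "limit": 10}]
resp = requests.post(f"{K2V}/{BUCKET}?search", json=queries)
print(resp.json())
```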

**Testing:**

- [x] Just a simple Python script that issues some requests so that results can be checked visually (it does not parse responses or assert on returned values)
- [x] Actual tests:
  - [x] Adapt testing framework
  - [x] Simple test with InsertItem + ReadItem (illustrated after this list)
  - [x] Test with several Insert/Read/DeleteItem + ReadIndex
  - [x] Test all combinations of return formats for ReadItem
  - [x] Test with ReadBatch, InsertBatch, DeleteBatch
  - [x] Test with PollItem
  - [x] Test error codes
- [ ] Fix most broken stuff
  - [x] PollItem test failing randomly
  - [x] Return 4xx (not 5xx) errors when invalid causality tokens are given
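
The actual tests live in the project's integration-test framework; purely as an illustration of the kind of assertions involved (InsertItem + ReadItem, plus the 4xx check above), here is a hedged Python sketch reusing the hypothetical endpoint and header names from the earlier snippets.

```python
import requests

K2V = "http://localhost:3904"   # hypothetical K2V endpoint, as above
BUCKET, PK, SK = "mybucket", "test", "item0"

def test_insert_then_read():
    # Write a value, read it back, and check both the payload and the
    # presence of a causality token on the response.
    put = requests.put(f"{K2V}/{BUCKET}/{PK}", params={"sort_key": SK}, data=b"v1")
    assert put.ok

    got = requests.get(f"{K2V}/{BUCKET}/{PK}", params={"sort_key": SK},
                       headers={"Accept": "application/octet-stream"})
    assert got.status_code == 200
    assert got.content == b"v1"
    assert "X-Garage-Causality-Token" in got.headers

def test_invalid_causality_token_is_client_error():
    # Per the fix above: a malformed causality token should yield a 4xx
    # client error, not a 5xx server error.
    bad = requests.put(f"{K2V}/{BUCKET}/{PK}", params={"sort_key": SK}, data=b"v2",
                       headers={"X-Garage-Causality-Token": "not-a-valid-token"})
    assert 400 <= bad.status_code < 500
```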

**Improvements:**

- [x] Descending range queries (sketched after this list)
  - [x] Specify
  - [x] Implement
  - [x] Add test
- [x] Batch updates to index counter
- [x] Put K2V behind `k2v` feature flag
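
Descending range queries are exposed on the range endpoints (ReadIndex, ReadBatch) through a reverse flag. The sketch below shows what a descending ReadIndex call might look like; the parameter names are assumptions from the draft spec, not an authoritative reference.

```python
import requests

K2V = "http://localhost:3904"   # hypothetical K2V endpoint, as above
BUCKET = "mybucket"

# ReadIndex over partition keys matching a prefix, enumerated in
# descending order when reverse=true; the response lists partition keys
# together with their value counters.
r = requests.get(f"{K2V}/{BUCKET}",
                 params={"prefix": "user", "limit": "100", "reverse": "true"})
print(r.json())
```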

Co-authored-by: Alex Auvolat <alex@adnab.me>
Reviewed-on: #293
Co-authored-by: Alex <alex@adnab.me>
Co-committed-by: Alex <alex@adnab.me>

| Path | Last commit | Date |
|------|-------------|------|
| doc | First implementation of K2V (#293) | 2022-05-10 13:16:57 +02:00 |
| nix | Patch cargo2nix openssl override | 2022-03-17 12:17:38 +01:00 |
| script | Add/Fix OpenTelemetry | 2022-04-07 16:12:35 +02:00 |
| src | First implementation of K2V (#293) | 2022-05-10 13:16:57 +02:00 |
| .dockerignore | Build Docker image | 2020-06-30 17:18:42 +02:00 |
| .drone.yml | Add integration tests to Drone | 2022-02-10 17:55:50 +01:00 |
| .gitattributes | Add FOSDEM talk and move all .pdf files to Git LFS | 2022-02-16 20:01:36 +01:00 |
| .gitignore | Work on API | 2020-04-28 10:18:14 +00:00 |
| Cargo.lock | First implementation of K2V (#293) | 2022-05-10 13:16:57 +02:00 |
| Cargo.nix | First implementation of K2V (#293) | 2022-05-10 13:16:57 +02:00 |
| Cargo.toml | Add missing src/block to toplevel cargo.toml | 2022-03-23 10:26:10 +01:00 |
| default.nix | Compile kuberetes-discovery only when release=true | 2022-03-24 16:57:43 +01:00 |
| Dockerfile | Extract toolchain build from the CI | 2021-10-29 11:34:01 +02:00 |
| k2v_test.py | First implementation of K2V (#293) | 2022-05-10 13:16:57 +02:00 |
| LICENSE | Switch to AGPL | 2021-03-16 16:35:46 +01:00 |
| Makefile | First implementation of K2V (#293) | 2022-05-10 13:16:57 +02:00 |
| README.md | Improve how node roles are assigned in Garage | 2021-11-16 16:05:53 +01:00 |
| rustfmt.toml | Fix the Sync issue. Details: | 2020-04-10 22:01:48 +02:00 |
| shell.nix | Remove strum crate dependency; add protobuf nix dependency | 2022-03-14 10:53:00 +01:00 |

Garage

*Garage logo*

[ Website and documentation | Binary releases | Git repository | Matrix channel ]

Garage is a lightweight S3-compatible distributed object store, with the following goals:

  • As self-contained as possible
  • Easy to set up
  • Highly resilient to network failures, network latency, disk failures, and sysadmin failures
  • Relatively simple
  • Made for multi-datacenter deployments

Non-goals include:

  • Extremely high performance
  • Complete implementation of the S3 API
  • Erasure coding (our replication model is simply to copy the data as is on several nodes, in different datacenters if possible)

Our main use case is to provide a distributed storage layer for small-scale self-hosted services such as Deuxfleurs.