Commit graph

505 commits

SHA1 Message Date
fc427b0b66 Merge branch 'master' into feature/website 2020-11-19 14:39:30 +01:00
6076d869b1 Build error 2020-11-11 21:17:34 +01:00
2765291796 Build path correctly 2020-11-11 19:48:01 +01:00
7d7b9e95a9 Simplify and_then(Some) as map() and remove move 2020-11-11 16:36:48 +01:00
5a5592c176 Replace with option syntactic sugar 2020-11-11 16:12:42 +01:00
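The two refactoring commits above describe a common Rust cleanup. A minimal illustration of that kind of rewrite (hypothetical functions, not the actual Garage code):

    // Hypothetical example of the rewrite: and_then(|x| Some(..)) is just map(),
    // and the closure no longer needs `move`.
    fn bucket_from_host(host: &str, root_domain: &str) -> Option<String> {
        host.strip_suffix(root_domain)
            .map(|b| b.trim_end_matches('.').to_string())
    }

    // "Option syntactic sugar": the ? operator replaces an explicit match on None.
    fn first_label(host: &str) -> Option<&str> {
        let dot = host.find('.')?;
        Some(&host[..dot])
    }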
d445c4ef9c WIP fetch object 2020-11-11 15:24:25 +01:00
3cb3994cd2 Add documentation to host_to_bucket 2020-11-10 17:05:10 +01:00
cacf8ddf2d Panic when it is a logical error 2020-11-10 15:52:20 +01:00
d1b2fcc1e7 Rewrite for clarity 2020-11-10 15:48:40 +01:00
ab62c59acb Fix indent again 2020-11-10 15:40:33 +01:00
8797eed0ab Fixes due to integration tests 2020-11-10 15:32:04 +01:00
1e52ee9f5b Rewrite authority to host while staying on stack 2020-11-10 15:26:48 +01:00
27795a390c Fix formatting 2020-11-10 09:59:52 +01:00
4093833ae8 Extract bucket 2020-11-10 09:57:07 +01:00
09137fd6b5 Log host 2020-11-08 16:06:52 +01:00
c78df603d7 Add some documentation 2020-11-08 16:02:16 +01:00
71721f5bcf Merge branch 'master' into feature/website 2020-11-08 15:53:33 +01:00
0791e7164e Parse host header 2020-11-08 15:47:25 +01:00
a50fa70d45 Refactor error management in API part 2020-11-08 15:05:28 +01:00
6a8b972f3a Modify parse_bucket_key to fail when bucket is "" 2020-11-08 13:39:44 +01:00
e1415f5976 Add a second test 2020-11-07 15:34:53 +01:00
9a50ce12a8 Fix formatting 2020-11-07 13:59:30 +01:00
8f4ada1965 Add a test for parse_bucket_key 2020-11-07 13:53:32 +01:00
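The parse_bucket_key commits above (the empty-bucket fix and its tests) suggest a parser along these lines; this is a hypothetical sketch, not the code from the repository:

    // Hypothetical sketch: split "/bucket/key" into (bucket, Option<key>),
    // rejecting an empty bucket name as the commit above requires.
    fn parse_bucket_key(path: &str) -> Result<(&str, Option<&str>), String> {
        let path = path.strip_prefix('/').unwrap_or(path);
        let (bucket, key) = match path.find('/') {
            Some(i) => (&path[..i], Some(&path[i + 1..])),
            None => (path, None),
        };
        if bucket.is_empty() {
            return Err("no bucket specified".to_string());
        }
        Ok((bucket, key))
    }

    #[test]
    fn test_parse_bucket_key() {
        assert_eq!(parse_bucket_key("/mybucket/a/b").unwrap(), ("mybucket", Some("a/b")));
        assert!(parse_bucket_key("/").is_err());
    }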
0d3bc169ee It compiles! 2020-11-03 12:37:16 +01:00
b3caa3628d Fix description of the crate 2020-11-02 15:57:23 +01:00
cea871d944 Skeleton to the new web API 2020-11-02 15:48:39 +01:00
104e2ce0a2 Add "web" configuration entry 2020-10-31 17:28:56 +01:00
5faf069e33 trace test 2020-07-15 15:41:49 +02:00
27a0d0d859 pretty 2020-07-15 15:37:35 +02:00
1c70552f95 Validate content MD5 and SHA256 sums for PutObject and UploadPart 2020-07-15 15:31:13 +02:00
6c7f9704ea Implement correct ETag for objects created with PutObject 2020-07-13 16:51:30 +02:00
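For context on the ETag commit above: for a non-multipart PutObject, S3 clients expect the ETag to be the quoted hex MD5 of the body. A tiny illustrative sketch (using the `md5` crate; not necessarily how the repository computes it):

    // The ETag of a simple (non-multipart) PutObject is the MD5 of the body,
    // hex-encoded and wrapped in double quotes.
    fn etag_for_put_object(body: &[u8]) -> String {
        format!("\"{:x}\"", md5::compute(body))
    }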
9305e5e87f More headers taken into account 2020-07-09 17:04:43 +02:00
44dba0e53c cargo fmt 2020-07-08 17:34:37 +02:00
64a6eda0d2 Migrate S3 api to use new model 2020-07-08 17:33:24 +02:00
84bbbfaa7b Add multiple headers to object model 2020-07-08 17:33:14 +02:00
a5fa2a136b (WIP) New object table model, TODO: update API calls to use it 2020-07-08 16:46:47 +02:00
86bf4dedac Add support for model migrations 2020-07-08 16:10:53 +02:00
86fb7bbba5 Apply cargo fmt; add trace output when request signature is bad 2020-07-08 13:33:02 +02:00
f22ecb60a8 Update to Hyper 0.13.6, which accepts non-Sync streams in wrap_stream.
This simplifies the code and makes it possible to publish on crates.io
2020-07-07 17:15:53 +02:00
3b0b11085e Add versions to dependencies 2020-07-07 14:18:47 +02:00
cc65cdc0fe Add license, description and repository to .toml files 2020-07-07 14:14:58 +02:00
bec26a1312 Rename garage_core to garage_model 2020-07-07 13:59:22 +02:00
fbe8fe81f2 Add automatic peer discovery from Consul 2020-06-30 18:33:14 +02:00
db6f1f35a8 Rename epidemic_factor to epidemic_fanout (that's what it is); complete conf example in readme 2020-06-30 15:03:34 +02:00
fa13cf6996 Repair: do not mark deleted when upstream object is not found
With the previous behaviour, repairing could see some data as absent
and decide that the object or version was deleted,
thus going on to delete the version and blocks.
In the case where read_quorum + write_quorum <= replication_factor
however, entries may not yet be returned by the get, thus data would
have been deleted that should not have been. The new behavior is more
cautious and just skips the entry when the warning is emitted.
2020-05-04 13:30:42 +00:00
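A small runnable illustration of the quorum arithmetic behind this commit message (the function name is ours, not the repository's):

    // A "not found" answer is only proof of deletion when read and write quorums
    // overlap, i.e. read_quorum + write_quorum > replication_factor.
    fn read_is_authoritative(replication_factor: u32, read_quorum: u32, write_quorum: u32) -> bool {
        read_quorum + write_quorum > replication_factor
    }

    fn main() {
        // N = 3, R = 1, W = 2: a read may be answered by the one node that missed
        // the write, so repair must skip the entry instead of deleting it.
        assert!(!read_is_authoritative(3, 1, 2));
        // N = 3, R = 2, W = 2: quorums overlap, a missing entry can be trusted.
        assert!(read_is_authoritative(3, 2, 2));
    }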
b46a7788d1 Implement HTTP ranges in get 2020-05-04 13:09:23 +00:00
16fbb32fd3 Rate limit requests a bit more seriously
dropping the slot later (after reading the request's response)
means that we aren't freeing our quota slot too early,
so the maximum number of simultaneous requests now also counts the
response reading phase

TODO next: quotas per rpc destination node, or maybe per datacenter (?)
2020-05-01 19:18:54 +00:00
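A minimal sketch of the slot behaviour this message describes, written against current tokio (1.x) APIs; the request/response functions are placeholders:

    use std::sync::Arc;
    use std::time::Duration;
    use tokio::sync::Semaphore;

    // The permit is dropped only at the end of the function, after the response
    // has been read, so the concurrency limit covers the whole exchange.
    async fn rate_limited_call(quota: Arc<Semaphore>) {
        let _permit = quota.acquire_owned().await.expect("semaphore closed");
        send_request().await;
        read_response().await; // still holding the permit here
        // _permit is dropped here, freeing the slot only now
    }

    // Placeholders standing in for the actual RPC client code.
    async fn send_request() { tokio::time::sleep(Duration::from_millis(10)).await }
    async fn read_response() { tokio::time::sleep(Duration::from_millis(10)).await }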
d867bbcfb5 Implement DeleteObjects 2020-05-01 15:52:35 +00:00
3324971701 Slightly improved S3 compatibility
- ListBucket does not require any of the parameters (delimiter,
    prefix, max-keys, etc)
- URLs are properly percent_decoded
- PutObject and DeleteObject calls now answer correctly
    (empty body, version id in the x-amz-version-id: header)
2020-05-01 14:30:50 +00:00
3686f100b7 Compatibility fixes 2020-04-28 10:35:04 +00:00
0957d0fdfa Work on API 2020-04-28 10:18:14 +00:00
0877a5500c Abort multipart upload 2020-04-26 20:46:33 +00:00
81ecc4999e Implement multipart uploads 2020-04-26 20:39:32 +00:00
1999c0ae51 Update delete code 2020-04-26 19:11:19 +00:00
0a283e4e70 Fix deletion propagation 2020-04-26 18:59:17 +00:00
9cb870f950 Prepare for multipart uploads 2020-04-26 18:55:13 +00:00
ea7e4748ed S3 compatibility: fix bucket listing and HEAD and PUT on bucket 2020-04-26 16:22:33 +00:00
0e49e0c8b5 Add key table to repair procedure 2020-04-26 16:22:22 +00:00
e3203f998b Remove leading / in keys; better delimiter handling 2020-04-24 22:28:15 +02:00
be4831d768 Less verbosity 2020-04-24 19:27:27 +00:00
a52db67954 xml escape 2020-04-24 18:56:00 +00:00
91b2d1fcc1 Some basic S3 functionality 2020-04-24 18:47:11 +00:00
f2e05986c4 Starting to be S3 compatible 2020-04-24 17:46:52 +00:00
d8f5e643bc Split code for modular compilation 2020-04-24 10:10:01 +00:00
51fb3799a1 Key management admin commands 2020-04-23 20:25:45 +00:00
4ef84a0558 Move repair to separate file 2020-04-23 18:36:12 +00:00
44a1089d95 Make table objects slightly more fool-proof; add key table 2020-04-23 18:16:52 +00:00
c9c6b0dbd4 Reorganize code 2020-04-23 17:05:46 +00:00
01a8acdeec Better error reporting 2020-04-23 16:23:06 +00:00
82f4cd8719 Continue pinging nodes when they are down ; overall better handling of down nodes 2020-04-23 16:06:11 +00:00
2fe82be3bc RPCs to ourself do not pass through serialization + HTTPS 2020-04-23 14:40:59 +00:00
37f880bc09 RequestStrategy with possible interruption or not 2020-04-23 13:37:10 +00:00
73574ab43e Fix in rpc_client (see comment in code) 2020-04-22 20:42:23 +00:00
897fafa8db Improvements to block resync queue & worker 2020-04-22 20:32:58 +00:00
2556a1e383 I'm stupid though 2020-04-22 20:06:12 +00:00
231cb32955 Do not delete block if just a single replication error. Write TODO stuff. 2020-04-22 19:25:15 +00:00
8971f34c81 Well they still have to exit when we're exiting though 2020-04-22 17:04:33 +00:00
e8214cb180 Better concurrency:
Use Notify instead of stupid sleep in background worker
Use Semaphore to limit concurrent requests in rpc_client
Make more background tasks cancellable
2020-04-22 16:51:52 +00:00
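An illustrative sketch of the Notify pattern this commit mentions (tokio 1.x API; the worker and queue are placeholders):

    use std::sync::Arc;
    use tokio::sync::Notify;

    // The background worker blocks on notified() instead of sleeping and polling.
    async fn background_worker(work_available: Arc<Notify>) {
        loop {
            work_available.notified().await; // woken only when work is enqueued
            // ... pop and process queued work here ...
        }
    }

    // Producers wake the worker when they enqueue something.
    fn enqueue_and_wake(work_available: &Notify) {
        // ... push an item onto the queue ...
        work_available.notify_one();
    }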
ec59e896c6 Make UUID & Hash Copy and remove some .clone() noise 2020-04-21 17:08:42 +00:00
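For context, making these identifier types Copy might look like the sketch below; the real definitions in the repository may differ (32-byte identifiers are an assumption here):

    // A thin wrapper over a fixed-size byte array can derive Copy, which removes
    // the .clone() noise at call sites.
    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub struct Uuid(pub [u8; 32]);

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub struct Hash(pub [u8; 32]);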
8915224966 Return BadRequest codes for some admin_rpc failure cases 2020-04-21 16:45:32 +00:00
b1ddb933b0 Make the repair command accept subcommands to not do everything all the time 2020-04-21 16:40:17 +00:00
a04218047e Do full sync on node (re)start 2020-04-21 16:15:32 +00:00
2a84d965ab Improve table sync 2020-04-21 16:05:55 +00:00
0226561035 Do not insert deletion marker if there is no object to delete 2020-04-21 14:33:12 +00:00
be0a2bae81 Add node tags in configuration 2020-04-21 14:08:28 +00:00
cc4f2f1cfb Pretty logging 2020-04-21 12:54:55 +00:00
53cf4d1baa Log which workers are doing what 2020-04-19 21:33:38 +00:00
ec7f9f07e2 Implement repair object->version and version->block ref 2020-04-19 21:27:08 +00:00
04acaea231 Don't do version & block_ref updates in background on deletion 2020-04-19 20:52:20 +00:00
5ae32972ef Implement repair command 2020-04-19 20:36:36 +00:00
a54f3158f1 Less output 2020-04-19 19:38:45 +00:00
ea75564851 More aggressive sync timings & improve other stuff 2020-04-19 17:59:59 +00:00
e325c7f47a Add hostname to node info 2020-04-19 19:08:48 +02:00
a6129d8626 Begin implement bucket management & admin commands 2020-04-19 17:15:48 +02:00
302502f4c1 Add support for fully replicated tables with epidemic dissemination of updates 2020-04-19 15:14:23 +02:00
7131553c53 Refactor sharding logic; coming next: full replication with epidemic dissemination 2020-04-19 13:22:28 +02:00
4ba54ccfca Reorder imports.
Trying to separate:
1. Stuff for handling the swarm of nodes and generic table data replication
2. Stuff for the object store core application: metadata tables and block management
3. Stuff for the S3 API
2020-04-18 19:39:57 +02:00
bd1618e78e Remove proto.rs & move some definitions out of data.rs 2020-04-18 19:30:05 +02:00
f41583e1b7 Massive RPC refactoring 2020-04-18 19:21:34 +02:00
3f40ef149f Fix sync: use max root checksum level 2020-04-17 21:59:07 +02:00
f62b54f1df Fix add to resync on incref 2020-04-17 21:14:06 +02:00
ace07da7c1 Fix walk_ring_from 2020-04-17 21:08:43 +02:00
40c48e6a59 Several resync workers; add delay on retry resync 2020-04-17 20:58:10 +02:00
29a1e94f23 Implement missing handler for read_range 2020-04-17 19:38:47 +02:00
b4e96bdcf0 Fix paths :o 2020-04-17 19:20:17 +02:00
4abfb75509 Implement sending blocks to nodes that need them 2020-04-17 19:16:08 +02:00
db1c4222ce Don't send items...
...if the syncer doesn't need them because it's going to delete the partition anyway.
Also, fix block resync queue
2020-04-17 18:51:29 +02:00
4bacaaf53f Resync block on read error 2020-04-17 18:38:11 +02:00
b780f6485d Make sync send data both ways 2020-04-17 18:27:29 +02:00
69f1d8fef2 WIP
TODOs:
- ensure sync goes both way
- finish sending blocks to other nodes when they need them before deleting
2020-04-17 17:09:57 +02:00
e41ce4d815 Implement getting missing blocks when RC increases
Issue: the RC also increases when the block ref entry is first put by the actual client.
At that point the client is probably already sending us the block content,
so we don't need to do a get...
We should add a delay before the task is added, or find something else to do.
2020-04-17 15:40:13 +02:00
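A sketch of the mitigation suggested at the end of this message, delaying the fetch so the client's own upload has time to arrive (tokio 1.x; the queue is a placeholder):

    use std::time::Duration;
    use tokio::sync::mpsc;

    // Hypothetical: wait before scheduling a block fetch, so that if the client is
    // already uploading the block to us, the fetch is usually unnecessary.
    async fn schedule_block_fetch(queue: mpsc::Sender<[u8; 32]>, block_hash: [u8; 32]) {
        tokio::time::sleep(Duration::from_secs(10)).await;
        let _ = queue.send(block_hash).await;
    }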
867646093b Table range deletion 2020-04-17 14:49:10 +02:00
6ce14e2c9e Make all requests continue in the background even after we got enough responses. 2020-04-16 23:13:15 +02:00
768d22ccdb Better prints again, and a great question 2020-04-16 19:57:13 +02:00
2a05fd135a Change debug prints a bit 2020-04-16 19:37:08 +02:00
2f3b1a072f WIP 2020-04-16 19:28:02 +02:00
2832be4396 WIP 2020-04-16 18:41:10 +02:00
e8d750175d Implement ring comparison algorithm 2020-04-16 17:04:28 +02:00
f01c1e71b5 Begin work on sync... 2020-04-16 14:50:49 +02:00
43ce5e4ab4 Fix table RPC to not be interruptible 2020-04-12 23:05:53 +02:00
2bea76ce16 Small refactorings 2020-04-12 22:24:53 +02:00
9f8b3b5a18 TLS for command line client 2020-04-12 19:41:19 +02:00
c788fc9f9e Cleanup 2020-04-12 19:18:31 +02:00
d2814b5c33 TLS works \o/
So, the issues were:
- webpki does not support IP addresses as DNS names in URLs,
  so I hacked the HttpsConnector to always provide a fixed string
  as the DNS name for server certificate validation
- the certificate required a SAN section, which was complicated to build
  but eventually the solution is there in genkeys.sh
2020-04-12 19:00:30 +02:00
d1e8f78b2c Trying to do TLS 2020-04-12 15:51:19 +02:00
5967c5a5af Refactor a bit 2020-04-12 13:03:55 +02:00
419c70e506 fixes
- make block_put call uninterruptible by client
- used meta_replication_factor instead of data_replication_factor
- listen on ipv6
2020-04-12 12:10:33 +02:00
9c931f5eda Keep network status & ring in a tokio::sync::watch
advantages
- reads don't prevent preparing writes
- can be followed from other parts of the system by cloning the receiver
2020-04-11 23:53:32 +02:00
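A minimal sketch of the watch pattern described here (tokio 1.x API; Ring is a stand-in type):

    use std::time::Duration;
    use tokio::sync::watch;

    #[derive(Clone, Debug)]
    struct Ring { version: u64 }

    #[tokio::main]
    async fn main() {
        let (tx, rx) = watch::channel(Ring { version: 0 });

        // Any part of the system can follow updates by cloning the receiver.
        let mut follower = rx.clone();
        tokio::spawn(async move {
            while follower.changed().await.is_ok() {
                println!("ring updated: {:?}", *follower.borrow());
            }
        });

        // Readers only take a short borrow; the next ring can be prepared
        // entirely outside the channel and swapped in with a single send().
        println!("current ring: {:?}", *rx.borrow());
        tx.send(Ring { version: 1 }).unwrap();
        tokio::time::sleep(Duration::from_millis(50)).await; // let the follower print
    }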
5dd59e437d Local refcounting of blocks 2020-04-11 23:00:26 +02:00
dcf58499a4 table::insert_many, version_table::updated 2020-04-11 19:43:29 +02:00
53289b69e5 Background task runner that replaces tokio::spawn 2020-04-11 18:51:11 +02:00
4a2624b76a We don't want the debugging to fail, actually. 2020-04-10 23:43:35 +02:00
cf8fd948fc Add block ref table 2020-04-10 23:11:52 +02:00
ff4fb97568 (Try to) disable LTO ? 2020-04-10 22:55:01 +02:00
a50f07dfdc Refactor 2020-04-10 22:26:48 +02:00
3477864142 Fix the Sync issue. Details:
So the HTTP client future of Hyper is not Sync, thus the stream
that read blocks wasn't either. However Hyper's default Body type
requires a stream to be Sync for wrap_stream. Solution: reimplement
a custom HTTP body type.
2020-04-10 22:01:48 +02:00
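This workaround was later made unnecessary by the Hyper 0.13.6 upgrade higher up in this log, but the idea, a body type whose inner stream only needs to be Send, could be sketched like this (hyper 0.13-era HttpBody trait; details assumed, not the repository's actual type):

    use bytes::Bytes;
    use futures::Stream;
    use hyper::body::HttpBody;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // A custom body wrapping a boxed stream that is Send but not necessarily Sync.
    struct StreamBody {
        stream: Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>,
    }

    impl HttpBody for StreamBody {
        type Data = Bytes;
        type Error = std::io::Error;

        fn poll_data(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
        ) -> Poll<Option<Result<Self::Data, Self::Error>>> {
            // Delegate to the inner stream; no Sync bound is needed here, unlike
            // Body::wrap_stream in hyper before 0.13.6.
            self.stream.as_mut().poll_next(cx)
        }

        fn poll_trailers(
            self: Pin<&mut Self>,
            _cx: &mut Context<'_>,
        ) -> Poll<Result<Option<hyper::HeaderMap>, Self::Error>> {
            Poll::Ready(Ok(None))
        }
    }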
d66c0d6833 Why is it not Sync?? 2020-04-09 23:45:07 +02:00
a3eb88e601 Locally, transactions 2020-04-09 20:58:39 +02:00
1d786c2c66 Something works 2020-04-09 18:43:53 +02:00
101444abb3 Some progress 2020-04-09 17:32:28 +02:00
4c1aee42d5 Reorganize table API 2020-04-09 16:16:27 +02:00
a450103ed0 Work & TODO 2020-04-08 23:47:34 +02:00
cc580da0ae Some work 2020-04-08 23:01:49 +02:00
bacc76a057 Some work in actually storing things 2020-04-08 22:00:41 +02:00
d50edcdb4f Basic workflow for client PUT, next is do actual storage 2020-04-08 00:39:07 +02:00
90cdffb425 custom data type for hashes and identifiers 2020-04-07 18:10:20 +02:00
82b7fcd280 Config management & ring 2020-04-07 17:00:48 +02:00
061e676136 Refactor; ability to update network config 2020-04-07 16:26:22 +02:00
46d5b896e8 Fix pinging 2020-04-07 00:00:43 +02:00
a7b85146fe Forgot to recalculate hash at strategic locations 2020-04-06 23:10:54 +02:00
a09f019cc5 Seems to be fixed 2020-04-06 22:54:03 +02:00
87f2b4d2fc Incomprehensible error 2020-04-06 22:27:51 +02:00
3c36b449a3 Some work 2020-04-06 21:02:15 +02:00
1a5e6e39af Some more basic work 2020-04-06 19:55:39 +02:00
7102db1d54 First commit: skeleton for something great 2020-04-05 23:33:42 +02:00