Commit graph

30 commits

SHA1 Message Date
c9c6b0dbd4 Reorganize code 2020-04-23 17:05:46 +00:00
2fe82be3bc RPCs to ourself do not pass through serialization + HTTPS 2020-04-23 14:40:59 +00:00
37f880bc09 RequestStrategy with optional interruption 2020-04-23 13:37:10 +00:00
897fafa8db Improvements to block resync queue & worker 2020-04-22 20:32:58 +00:00
2556a1e383 I'm stupid though 2020-04-22 20:06:12 +00:00
231cb32955 Do not delete a block after just a single replication error. Write TODO stuff. 2020-04-22 19:25:15 +00:00
ec59e896c6 Make UUID & Hash Copy and remove some .clone() noise 2020-04-21 17:08:42 +00:00
cc4f2f1cfb Pretty logging 2020-04-21 12:54:55 +00:00
53cf4d1baa Log which workers are doing what 2020-04-19 21:33:38 +00:00
ec7f9f07e2 Implement repair object->version and version->block ref 2020-04-19 21:27:08 +00:00
5ae32972ef Implement repair command 2020-04-19 20:36:36 +00:00
ea75564851 More aggressive sync timings & improve other stuff 2020-04-19 17:59:59 +00:00
a6129d8626 Begin implementing bucket management & admin commands 2020-04-19 17:15:48 +02:00
4ba54ccfca Reorder imports.
Trying to separate:
1. Stuff for handling the swarm of nodes and generic table data replication
2. Stuff for the object store core application: metadata tables and block management
3. Stuff for the S3 API
2020-04-18 19:39:57 +02:00
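
The three concerns listed in this commit map naturally onto separate Rust modules. A hypothetical layout sketch (illustrative names only, not necessarily the repository's actual files):

```rust
// Hypothetical module layout for the three layers described in the
// commit above; all names are illustrative.

// 1. Swarm of nodes & generic table data replication
mod membership { /* node discovery, ring management, health checks */ }
mod table { /* generic replicated table abstraction */ }

// 2. Object store core: metadata tables & block management
mod object_table { /* S3 object metadata */ }
mod version_table { /* object versions */ }
mod block { /* block storage, refcounting, resync */ }

// 3. S3 API
mod api_server { /* HTTP endpoints implementing the S3 protocol */ }

fn main() {}
```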
bd1618e78e Remove proto.rs & move some definitions out of data.rs 2020-04-18 19:30:05 +02:00
f41583e1b7 Massive RPC refactoring 2020-04-18 19:21:34 +02:00
3f40ef149f Fix sync: use max root checksum level 2020-04-17 21:59:07 +02:00
f62b54f1df Fix add to resync on incref 2020-04-17 21:14:06 +02:00
ace07da7c1 Fix walk_ring_from 2020-04-17 21:08:43 +02:00
40c48e6a59 Several resync workers; add delay on retry resync 2020-04-17 20:58:10 +02:00
b4e96bdcf0 Fix paths :o 2020-04-17 19:20:17 +02:00
4abfb75509 Implement sending blocks to nodes that need them 2020-04-17 19:16:08 +02:00
db1c4222ce Don't send items...
...if the syncer doesn't need them because it's going to delete the partition anyway.
Also, fix block resync queue
2020-04-17 18:51:29 +02:00
4bacaaf53f Resync block on read error 2020-04-17 18:38:11 +02:00
69f1d8fef2 WIP
TODOs:
- ensure sync goes both ways
- finish sending blocks to other nodes that need them before deleting
2020-04-17 17:09:57 +02:00
e41ce4d815 Implement getting missing blocks when RC increases
Issue: the RC also increases when the block ref entry is first put by the actual client.
At that point the client is probably already sending us the block content,
so we don't need to do a get...
We should add a delay before the task is added, or find something else to do.
2020-04-17 15:40:13 +02:00
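
One way to implement the delay suggested here is to key the resync queue by a "run at" time, so workers only pick a block up once its scheduled instant has passed; a refcount increase caused by the client's own upload then usually resolves itself before the task fires. A minimal std-only sketch with hypothetical names (not Garage's actual queue):

```rust
use std::collections::BTreeMap;
use std::time::{Duration, Instant};

type Hash = [u8; 32];

/// Resync queue keyed by the time at which the resync should run;
/// BTreeMap order yields the earliest-due entry first.
struct ResyncQueue {
    queue: BTreeMap<(Instant, Hash), ()>,
}

impl ResyncQueue {
    /// Schedule a block for resync `delay` from now instead of
    /// immediately.
    fn put_to_resync(&mut self, hash: Hash, delay: Duration) {
        self.queue.insert((Instant::now() + delay, hash), ());
    }

    /// Called by a resync worker: pop a block whose time has come,
    /// if any.
    fn pop_due(&mut self) -> Option<Hash> {
        let key = *self.queue.keys().next()?;
        if key.0 <= Instant::now() {
            self.queue.remove(&key);
            Some(key.1)
        } else {
            None
        }
    }
}
```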
9c931f5eda Keep network status & ring in a tokio::sync::watch
Advantages:
- reads don't prevent preparing writes
- can be followed from other parts of the system by cloning the receiver
2020-04-11 23:53:32 +02:00
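
The pattern reads roughly as below: writers publish a whole new value through the channel while any number of cloned receivers read the current one lock-free. A minimal sketch against the current tokio watch API (tokio 0.2, current when this commit was made, differed slightly); `Ring` is a stand-in for the real types:

```rust
use tokio::sync::watch;

#[derive(Clone, Debug, Default)]
struct Ring {
    epoch: u64, // hypothetical field
}

#[tokio::main]
async fn main() {
    let (tx, rx) = watch::channel(Ring::default());

    // Other parts of the system follow updates from their own
    // clone of the receiver.
    let mut follower = rx.clone();
    let task = tokio::spawn(async move {
        while follower.changed().await.is_ok() {
            println!("ring updated: {:?}", *follower.borrow());
        }
    });

    // Preparing a new ring does not block readers: they keep
    // borrowing the old value until send() publishes the new one.
    tx.send(Ring { epoch: 1 }).unwrap();

    drop(tx); // closing the channel ends the follower loop
    task.await.unwrap();
}
```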
5dd59e437d Local refcounting of blocks 2020-04-11 23:00:26 +02:00
3477864142 Fix the Sync issue. Details:
Hyper's HTTP client future is not Sync, so the stream that
read blocks wasn't either. However, Hyper's default Body type
requires a stream to be Sync for wrap_stream. Solution: reimplement
a custom HTTP body type.
2020-04-10 22:01:48 +02:00
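
The fix can be sketched as follows against the hyper 0.13-era HttpBody trait: wrap a Send-but-not-Sync byte stream in a custom body type, so that Body::wrap_stream, with its Sync bound, is never needed. All names here are illustrative, not the actual code from this commit:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use bytes::Bytes;
use futures::stream::Stream;
use http::HeaderMap;
use hyper::body::HttpBody;

/// A body over a stream that is Send but *not* Sync: exactly the
/// case that Body::wrap_stream rejects.
pub struct StreamBody {
    stream: Pin<Box<dyn Stream<Item = Result<Bytes, hyper::Error>> + Send>>,
}

impl HttpBody for StreamBody {
    type Data = Bytes;
    type Error = hyper::Error;

    fn poll_data(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Option<Result<Self::Data, Self::Error>>> {
        // Forward to the inner stream; no Sync bound is required.
        self.get_mut().stream.as_mut().poll_next(cx)
    }

    fn poll_trailers(
        self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
    ) -> Poll<Result<Option<HeaderMap>, Self::Error>> {
        Poll::Ready(Ok(None)) // this body carries no trailers
    }
}
```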
d66c0d6833 Why is it not Sync?? 2020-04-09 23:45:07 +02:00