Commit graph

1133 commits

SHA1  Message  Date
73574ab43e Fix in rpc_client (see comment in code) 2020-04-22 20:42:23 +00:00
897fafa8db Improvements to block resync queue & worker 2020-04-22 20:32:58 +00:00
2556a1e383 I'm stupid though 2020-04-22 20:06:12 +00:00
231cb32955 Do not delete block if just a single replication error. Write TODO stuff. 2020-04-22 19:25:15 +00:00
8971f34c81 Well they still have to exit when we're exiting though 2020-04-22 17:04:33 +00:00
e8214cb180 Better concurrency:
    Use Notify instead of stupid sleep in background worker
    Use Semaphore to limit concurrent requests in rpc_client
    Make more background tasks cancellable
    2020-04-22 16:51:52 +00:00
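A minimal sketch of the two tokio primitives this commit mentions, written against a recent tokio with made-up names and limits (not the actual Garage code): a `Semaphore` caps concurrent RPC calls, and a `Notify` wakes the background worker instead of a fixed sleep.

```rust
use std::sync::{Arc, Mutex};
use tokio::sync::{Notify, Semaphore};

struct RpcClient {
    // Illustrative limit; a real client would size this from configuration.
    rpc_semaphore: Arc<Semaphore>,
}

impl RpcClient {
    async fn call(&self, msg: Vec<u8>) -> Result<Vec<u8>, ()> {
        // Wait for a free slot; the permit is released when it is dropped.
        let _permit = self.rpc_semaphore.acquire().await.map_err(|_| ())?;
        // ... perform the actual network request here ...
        Ok(msg)
    }
}

async fn resync_worker(queue: Arc<Mutex<Vec<[u8; 32]>>>, notify: Arc<Notify>) {
    loop {
        let next = queue.lock().unwrap().pop();
        match next {
            Some(_hash) => { /* ... resync the block with this hash ... */ }
            // Instead of sleeping for a fixed interval, block until whoever
            // pushes to the queue calls notify.notify_one().
            None => notify.notified().await,
        }
    }
}
```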
ec59e896c6 Make UUID & Hash Copy and remove some .clone() noise 2020-04-21 17:08:42 +00:00
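A hedged illustration of the idea behind this commit (names are only suggestive): a newtype over a fixed-size `[u8; 32]` can derive `Copy`, so identifiers and hashes are passed by value rather than `.clone()`d.

```rust
/// 32 bytes of data, used both for node/object identifiers and for hashes.
/// Because the inner array has a fixed size, the type can be Copy,
/// which removes .clone() noise at call sites.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Default)]
pub struct FixedBytes32([u8; 32]);

pub type UUID = FixedBytes32;
pub type Hash = FixedBytes32;
```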
8915224966 Return BadRequest codes for some admin_rpc failure cases 2020-04-21 16:45:32 +00:00
b1ddb933b0 Make the repair command accept subcommands to not do everything all the time 2020-04-21 16:40:17 +00:00
a04218047e Do full sync on node (re)start 2020-04-21 16:15:32 +00:00
2a84d965ab Improve table sync 2020-04-21 16:05:55 +00:00
0226561035 Do not insert deletion marker if there is no object to delete 2020-04-21 14:33:12 +00:00
be0a2bae81 Add node tags in configuration 2020-04-21 14:08:28 +00:00
cc4f2f1cfb Pretty logging 2020-04-21 12:54:55 +00:00
53cf4d1baa Log which workers are doing what 2020-04-19 21:33:38 +00:00
ec7f9f07e2 Implement repair object->version and version->block ref 2020-04-19 21:27:08 +00:00
04acaea231 Don't do version & block_ref updates in background on deletion 2020-04-19 20:52:20 +00:00
5ae32972ef Implement repair command 2020-04-19 20:36:36 +00:00
a54f3158f1 Less output 2020-04-19 19:38:45 +00:00
ea75564851 More aggressive sync timings & improve other stuff 2020-04-19 17:59:59 +00:00
e325c7f47a Add hostname to node info 2020-04-19 19:08:48 +02:00
a6129d8626 Begin implementing bucket management & admin commands 2020-04-19 17:15:48 +02:00
302502f4c1 Add support for fully replicated tables with epidemic dissemination of updates 2020-04-19 15:14:23 +02:00
7131553c53 Refactor sharding logic; coming next: full replication with epidemic dissemination 2020-04-19 13:22:28 +02:00
4ba54ccfca Reorder imports.
    Trying to separate:
    1. Stuff for handling the swarm of nodes and generic table data replication
    2. Stuff for the object store core application: metadata tables and block management
    3. Stuff for the S3 API
    2020-04-18 19:39:57 +02:00
bd1618e78e Remove proto.rs & move some definitions out of data.rs 2020-04-18 19:30:05 +02:00
f41583e1b7 Massive RPC refactoring 2020-04-18 19:21:34 +02:00
3f40ef149f Fix sync: use max root checksum level 2020-04-17 21:59:07 +02:00
f62b54f1df Fix add to resync on incref 2020-04-17 21:14:06 +02:00
ace07da7c1 Fix walk_ring_from 2020-04-17 21:08:43 +02:00
40c48e6a59 Several resync workers; add delay on retry resync 2020-04-17 20:58:10 +02:00
29a1e94f23 Implement missing handler for read_range 2020-04-17 19:38:47 +02:00
b4e96bdcf0 Fix paths :o 2020-04-17 19:20:17 +02:00
4abfb75509 Implement sending blocks to nodes that need them 2020-04-17 19:16:08 +02:00
db1c4222ce Don't send items...
    ...if the syncer doesn't need them because it's going to delete the partition anyway.
    Also, fix block resync queue
    2020-04-17 18:51:29 +02:00
4bacaaf53f Resync block on read error 2020-04-17 18:38:11 +02:00
b780f6485d Make sync send data both ways 2020-04-17 18:27:29 +02:00
69f1d8fef2 WIP
    TODOs:
    - ensure sync goes both ways
    - finish sending blocks to other nodes when they need them before deleting
    2020-04-17 17:09:57 +02:00
e41ce4d815 Implement getting missing blocks when RC increases
    Issue: RC also increases when the block ref entry is first put by the actual client.
    At that point the client is probably already sending us the block content,
    so we don't need to do a get...
    We should add a delay before the task is added, or find some other way to handle this.
    2020-04-17 15:40:13 +02:00
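A rough sketch of the delay idea raised in the commit above, with illustrative names and an arbitrary grace period: the refcount-triggered check is queued with a `not_before` timestamp, so a block the client is currently uploading is not fetched a second time.

```rust
use std::collections::VecDeque;
use std::time::Duration;
use tokio::time::Instant;

struct ResyncEntry {
    hash: [u8; 32],
    not_before: Instant,
}

fn schedule_resync(queue: &mut VecDeque<ResyncEntry>, hash: [u8; 32]) {
    queue.push_back(ResyncEntry {
        hash,
        // Arbitrary grace period before checking whether the block is really missing.
        not_before: Instant::now() + Duration::from_secs(60),
    });
}

async fn process_one(queue: &mut VecDeque<ResyncEntry>) {
    if let Some(entry) = queue.pop_front() {
        // Wait until the grace period has elapsed before checking local storage.
        tokio::time::sleep_until(entry.not_before).await;
        // ... if the block is still missing locally, fetch it from another node ...
        let _ = entry.hash;
    }
}
```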
867646093b Table range deletion 2020-04-17 14:49:10 +02:00
6ce14e2c9e Make all requests continue in the background even after we got enough responses. 2020-04-16 23:13:15 +02:00
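A sketch of that pattern (not the actual RPC code; the error type and names are placeholders): once a quorum of answers has arrived, the remaining request futures are handed to a background task instead of being dropped, so the slower nodes still receive and apply the request.

```rust
use futures::stream::{FuturesUnordered, StreamExt};

async fn try_call_many<F>(requests: Vec<F>, quorum: usize) -> Result<Vec<Vec<u8>>, String>
where
    F: std::future::Future<Output = Result<Vec<u8>, String>> + Send + 'static,
{
    let mut pending: FuturesUnordered<_> = requests.into_iter().collect();
    let mut results = Vec::new();

    while let Some(res) = pending.next().await {
        if let Ok(value) = res {
            results.push(value);
        }
        if results.len() >= quorum {
            break;
        }
    }

    if results.len() >= quorum {
        // Keep driving the remaining requests to completion in the background.
        tokio::spawn(async move { while pending.next().await.is_some() {} });
        Ok(results)
    } else {
        Err(format!("could not get {} responses", quorum))
    }
}
```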
768d22ccdb Better prints again, and a great question 2020-04-16 19:57:13 +02:00
2a05fd135a Change debug prints a bit 2020-04-16 19:37:08 +02:00
2f3b1a072f WIP 2020-04-16 19:28:02 +02:00
2832be4396 WIP 2020-04-16 18:41:10 +02:00
e8d750175d Implement ring comparison algorithm 2020-04-16 17:04:28 +02:00
f01c1e71b5 Begin work on sync... 2020-04-16 14:50:49 +02:00
43ce5e4ab4 Fix table RPC to not be interruptible 2020-04-12 23:05:53 +02:00
2bea76ce16 Small refactorings 2020-04-12 22:24:53 +02:00
9f8b3b5a18 TLS for command line client 2020-04-12 19:41:19 +02:00
c788fc9f9e Cleanup 2020-04-12 19:18:31 +02:00
d2814b5c33 TLS works \o/
    So, the issues were:
    - webpki does not support IP addresses as DNS names in URLs,
      so I hacked the HttpsConnector to always provide a fixed string
      as the DNS name for server certificate validation
    - the certificate required a SAN section, which was complicated to build,
      but eventually the solution is there in genkeys.sh
    2020-04-12 19:00:30 +02:00
d1e8f78b2c Trying to do TLS 2020-04-12 15:51:19 +02:00
5967c5a5af Refactor a bit 2020-04-12 13:03:55 +02:00
419c70e506 fixes
    - make block_put call uninterruptible by client
    - use meta_replication_factor instead of data_replication_factor
    - listen on ipv6
    2020-04-12 12:10:33 +02:00
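For the "listen on ipv6" item, a minimal illustration (function name is made up): binding to the IPv6 unspecified address typically also accepts IPv4 connections on dual-stack hosts, subject to OS configuration.

```rust
use std::net::{Ipv6Addr, SocketAddr};

fn rpc_bind_addr(port: u16) -> SocketAddr {
    // "[::]:port" instead of "0.0.0.0:port"
    SocketAddr::from((Ipv6Addr::UNSPECIFIED, port))
}
```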
9c931f5eda Keep network status & ring in a tokio::sync::watch
    Advantages:
    - reads don't prevent preparing writes
    - can be followed from other parts of the system by cloning the receiver
    2020-04-11 23:53:32 +02:00
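A hedged sketch of this pattern with a recent tokio and invented names: the ring lives in a `watch` channel, so a writer prepares a new value and swaps it in, while readers never block the writer.

```rust
use std::sync::Arc;
use tokio::sync::watch;

#[derive(Default)]
struct Ring {
    // ... partition-to-node assignments ...
}

struct System {
    ring_tx: watch::Sender<Arc<Ring>>,
    pub ring_rx: watch::Receiver<Arc<Ring>>,
}

impl System {
    fn new() -> Self {
        let (ring_tx, ring_rx) = watch::channel(Arc::new(Ring::default()));
        Self { ring_tx, ring_rx }
    }

    fn update_ring(&self, new_ring: Ring) {
        // Build the new value without holding any lock, then publish it;
        // readers keep using the old Arc until they next look at the channel.
        let _ = self.ring_tx.send(Arc::new(new_ring));
    }
}
```

Other parts of the system clone `ring_rx` and call `borrow()` for the current value, or `changed().await` to follow updates, which matches the second advantage listed in the commit.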
5dd59e437d Local refcounting of blocks 2020-04-11 23:00:26 +02:00
dcf58499a4 table::insert_many, version_table::updated 2020-04-11 19:43:29 +02:00
53289b69e5 Background task runner that replaces tokio::spawn 2020-04-11 18:51:11 +02:00
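A minimal sketch of what such a runner might look like (illustrative API, not Garage's): tasks are spawned through a single place and receive a shutdown signal so they can stop cleanly instead of being detached fire-and-forget tasks.

```rust
use std::future::Future;
use tokio::sync::watch;

pub struct BackgroundRunner {
    stop_tx: watch::Sender<bool>,
    stop_rx: watch::Receiver<bool>,
}

impl BackgroundRunner {
    pub fn new() -> Self {
        let (stop_tx, stop_rx) = watch::channel(false);
        Self { stop_tx, stop_rx }
    }

    /// Spawn a cancellable worker: it receives a watch::Receiver<bool>
    /// that flips to true when the system is shutting down.
    pub fn spawn_worker<F, Fut>(&self, f: F)
    where
        F: FnOnce(watch::Receiver<bool>) -> Fut,
        Fut: Future<Output = ()> + Send + 'static,
    {
        let stop_rx = self.stop_rx.clone();
        tokio::spawn(f(stop_rx));
    }

    pub fn shutdown(&self) {
        let _ = self.stop_tx.send(true);
    }
}
```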
4a2624b76a We don't want the debugging to fail, actually. 2020-04-10 23:43:35 +02:00
cf8fd948fc Add block ref table 2020-04-10 23:11:52 +02:00
ff4fb97568 (Try to) disable LTO? 2020-04-10 22:55:01 +02:00
a50f07dfdc Refactor 2020-04-10 22:26:48 +02:00
3477864142 Fix the Sync issue. Details:
    So the HTTP client future of Hyper is not Sync, thus the stream
    that reads blocks wasn't either. However, Hyper's default Body type
    requires a stream to be Sync for wrap_stream. Solution: reimplement
    a custom HTTP body type.
    2020-04-10 22:01:48 +02:00
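A hedged sketch of that kind of workaround, written against hyper 0.14 / http-body 0.4 rather than the 2020 crate versions (where `Body::wrap_stream` still demanded `Sync`): wrap a stream that is `Send` but not `Sync` in a type that implements the body trait directly.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use bytes::Bytes;
use futures::Stream;
use http::HeaderMap;

type ByteStream = Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>;

/// A body type that only requires the inner stream to be Send.
pub struct BytesBody {
    stream: ByteStream,
}

impl BytesBody {
    pub fn new(stream: ByteStream) -> Self {
        Self { stream }
    }
}

impl http_body::Body for BytesBody {
    type Data = Bytes;
    type Error = std::io::Error;

    fn poll_data(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Option<Result<Self::Data, Self::Error>>> {
        // Delegate directly to the inner stream: each item is one body chunk.
        self.stream.as_mut().poll_next(cx)
    }

    fn poll_trailers(
        self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
    ) -> Poll<Result<Option<HeaderMap>, Self::Error>> {
        // This body carries no trailers.
        Poll::Ready(Ok(None))
    }
}
```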
d66c0d6833 Why is it not Sync?? 2020-04-09 23:45:07 +02:00
a3eb88e601 Locally, transactions 2020-04-09 20:58:39 +02:00
1d786c2c66 Something works 2020-04-09 18:43:53 +02:00
101444abb3 Some progress 2020-04-09 17:32:28 +02:00
4c1aee42d5 Reorganize table API 2020-04-09 16:16:27 +02:00
a450103ed0 Work & TODO 2020-04-08 23:47:34 +02:00
cc580da0ae Some work 2020-04-08 23:01:49 +02:00
bacc76a057 Some work in actually storing things 2020-04-08 22:00:41 +02:00
d50edcdb4f Basic workflow for client PUT, next is do actual storage 2020-04-08 00:39:07 +02:00
90cdffb425 custom data type for hashes and identifiers 2020-04-07 18:10:20 +02:00
82b7fcd280 Config management & ring 2020-04-07 17:00:48 +02:00
061e676136 Refactor; ability to update network config 2020-04-07 16:26:22 +02:00
46d5b896e8 Fix pinging 2020-04-07 00:00:43 +02:00
a7b85146fe Forgot to recalculate hash at strategic locations 2020-04-06 23:10:54 +02:00
a09f019cc5 Seems to be fixed 2020-04-06 22:54:03 +02:00
87f2b4d2fc Ununderstandable error 2020-04-06 22:27:51 +02:00
3c36b449a3 Some work 2020-04-06 21:02:15 +02:00
1a5e6e39af Some more basic work 2020-04-06 19:55:39 +02:00
7102db1d54 First commit: skeleton for something great 2020-04-05 23:33:42 +02:00