Determine file size of mount
We can also keep this for later as an open issue if you want this PR to be merged as is.
Fix permissions of generated key file
I think we can ignore this on Windows for now. The "correct" solution would probably be to use NT ACLs to restrict read access to the secret key, but I…
One obvious thing I'm noticing across the code base is that IDs are fully random; something like UUIDv7, ULID, etc. is much better for every KV store, as this will implicitly order new keys…
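To illustrate the point, here is a minimal Python sketch of a ULID-style ID (the layout is illustrative, not a proposal for Garage's actual encoding): a millisecond timestamp prefix followed by random bytes, so that newly generated keys sort after older ones and inserts land near the end of the keyspace instead of at random positions.

```python
import os
import time

def time_ordered_id() -> bytes:
    """Generate a ULID-like ID: a 6-byte big-endian millisecond
    timestamp followed by 10 random bytes. Because the timestamp
    comes first, byte-wise comparison orders IDs roughly by
    creation time, which is friendlier to LSM/B-tree KV stores
    than fully random keys."""
    ts_ms = int(time.time() * 1000)
    return ts_ms.to_bytes(6, "big") + os.urandom(10)

a = time_ordered_id()
time.sleep(0.002)  # make sure b gets a later millisecond timestamp
b = time_ordered_id()
assert a < b  # lexicographic order follows creation order
```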
I'm not sure exactly how the following interact together:
- `manual_journal_persist` at the level of the TxKeyspace, which is set in the Config
- `manual_journal_persist` at the level of the…
Why create a new `source_keyspace` and not use `self.keyspace`? Does this not cause consistency issues?
We need the snapshots we take to be consistent across partitions, i.e. we need to snapshot all partitions at once and then copy them to the target keyspace. Here we are snapshotting each partition one after the other, so there will be changes in between that will cause inconsistencies when trying to restore from the snapshot.
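The difference can be shown with a toy Python model (not fjall's API): two partitions are kept in sync by a writer, and copying them one after the other while writes continue can capture a combined state that never existed, whereas copying all of them at a single point in time cannot.

```python
import copy

# Toy invariant: both partitions must hold the same value for "x".
p1 = {"x": 0}
p2 = {"x": 0}

def write(v):
    """A write that touches both partitions atomically."""
    p1["x"] = v
    p2["x"] = v

# Sequential snapshot: a write lands between the two copies, so the
# snapshot pair (x=0, x=1) is a state that never existed.
snap1 = copy.deepcopy(p1)
write(1)
snap2 = copy.deepcopy(p2)
assert snap1["x"] != snap2["x"]  # inconsistent snapshot

# Consistent snapshot: copy all partitions at one point in time
# (in a real engine: under a lock, or from a single read transaction).
write(2)
snap1, snap2 = copy.deepcopy(p1), copy.deepcopy(p2)
write(3)
assert snap1["x"] == snap2["x"]  # both reflect the same state
```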
Concerning performance, my suspicion is that fjall does an fsync to the journal after each transaction. With other db engines we never do an fsync if `metadata_fsync = false` in the config. To obtain the corresponding level of performance with fjall, it would need to be configured accordingly.
I think the correct implementation of `opt.fsync == false` would be to disable all fsync operations in fjall, in particular setting `manual_journal_persist` to `true` so that transactions would not do an fsync call. This is the meaning of that option for other db engines. Even with `opt.fsync == false` we can set `fsync_ms` to something reasonable like 1000, because if I understand correctly, the fsyncs will then be done by background threads at a regular interval and will not interfere with interactive operations. @marvinj97 please correct me if I'm wrong.
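The pattern being described, if I understand it right, looks roughly like this (a generic sketch of interval-based persistence, not fjall internals): commits only append to a buffer, while a background thread persists every `fsync_ms` milliseconds, so data loss is bounded by the interval and the fsync is off the commit path.

```python
import threading
import time

class JournalSketch:
    """Toy journal: commits append without fsync; a background
    thread persists at a fixed interval (the fsync_ms idea)."""

    def __init__(self, fsync_ms: int = 1000):
        self.buffer = []      # entries not yet persisted
        self.persisted = []   # entries "on disk"
        self.lock = threading.Lock()
        self.fsync_ms = fsync_ms
        threading.Thread(target=self._flusher, daemon=True).start()

    def commit(self, entry):
        # No fsync here: the commit path stays fast.
        with self.lock:
            self.buffer.append(entry)

    def _flusher(self):
        while True:
            time.sleep(self.fsync_ms / 1000)
            with self.lock:
                # Stand-in for a real os.fsync() on the journal file.
                self.persisted.extend(self.buffer)
                self.buffer.clear()

j = JournalSketch(fsync_ms=50)
j.commit("tx1")
j.commit("tx2")
time.sleep(0.3)  # give the background thread time to flush
assert j.persisted == ["tx1", "tx2"]
```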
It would be nice if you could open PRs to include fjall and rocksdb support in Garage. Even if they are janky, we might want to include them so that users could try them out and try to tune them to…
Another point I forgot:
- We need to think of the behaviour if Garage crashes after popping an element from a queue but before completing the corresponding action. With the current implementatio…
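One common way to handle this, sketched below (illustrative, not Garage code), is to make removal a separate acknowledgement: the worker peeks at the head of the queue, performs the action, and only then deletes the entry, so a crash in between leaves the item queued to be retried (at-least-once semantics, at the cost of possible duplicate execution).

```python
from collections import deque

queue = deque(["task-a", "task-b"])
done = []

def process_next(crash_before_ack: bool = False):
    """Peek, act, then ack (remove). If we crash after acting but
    before acking, the task stays queued and is retried later."""
    if not queue:
        return
    task = queue[0]    # peek; do not pop yet
    done.append(task)  # perform the corresponding action
    if crash_before_ack:
        return         # simulated crash: the ack never happens
    assert queue.popleft() == task  # ack: remove only after completion

process_next(crash_before_ack=True)
assert queue[0] == "task-a"          # still queued, will be retried
process_next()                       # retry succeeds and acks
assert list(queue) == ["task-b"]
assert done == ["task-a", "task-a"]  # ran twice: at-least-once
```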
This is interesting work for sure, but I think there are many things to be careful of:
- First, I think we would want to have an adapter that keeps the current behavior of using the DB engine…
Concerning the "index of all objects":
- For each bucket, there are three servers out of your entire Garage cluster that store the list of objects in this bucket. These three servers are chosen…
In all cases, you must configure your network (including your router) so that connections to `rpc_public_addr` are directed to your Garage node, otherwise your Garage cluster will not work. The…