@maximilien (I assume you meant Zstd) apparently it can, but does not by default. The helper currently used does not allow enabling checksums, but a <10-line function could
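For reference, a minimal sketch of what such a function could look like, assuming the `zstd` crate's streaming `Encoder` (the function name, level parameter, and error handling here are illustrative, not the project's actual helper):

```rust
use std::io::Write;

// Sketch: compress a block with a zstd frame checksum enabled.
// Hypothetical helper; `level` and error handling are illustrative.
fn compress_with_checksum(data: &[u8], level: i32) -> std::io::Result<Vec<u8>> {
    let mut encoder = zstd::stream::write::Encoder::new(Vec::new(), level)?;
    encoder.include_checksum(true)?; // add a checksum to the zstd frame
    encoder.write_all(data)?;
    encoder.finish()
}
```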
In case the node already has an uncompressed block and receives a compressed version of it, it will store and use that compressed version, but it will not delete the uncompressed one
I don't have numbers to quantify how much better it is (if it is at all). What I do know is that Borg (backup software) uses chunks with a minimum size of 512 KiB, an average of 2 MiB, and a maximum of 8 MiB…
As explained above, I feel like a default implementation should be what we want in most cases. Of the 5 implementations of this trait, only 2 actually use an empty updated, so being explicit about it…
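To illustrate the pattern under discussion (the trait and method signature below are hypothetical, not the project's actual code): with a default no-op body, implementations that don't need the hook simply omit it, whereas dropping the default forces every implementation to spell out the empty body.

```rust
// Hypothetical trait for illustration only.
trait SchemaHooks {
    type Entry;

    // Default implementation: do nothing when an entry changes.
    // Implementations that don't care about this hook can just omit it.
    fn updated(&self, _old: Option<&Self::Entry>, _new: Option<&Self::Entry>) {}
}
```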
Hmm, it might double-count pings, as it counts once in rpc_clients, and once more in this file in ping_nodes
It's sorted in recalculate_hash. The node list is short anyway, so sorting it is pretty much free
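For context, a sketch of why the sort matters (names and the hasher choice are illustrative, using std only): hashing the node list only gives a stable result if the list is in a canonical order, and sorting a handful of entries costs essentially nothing.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: hash a small node list deterministically by sorting it first.
// Illustrative only, not the project's actual recalculate_hash.
fn recalculate_hash(node_ids: &[String]) -> u64 {
    let mut sorted: Vec<&String> = node_ids.iter().collect();
    sorted.sort(); // canonical order, so the hash doesn't depend on insertion order
    let mut hasher = DefaultHasher::new();
    for id in sorted {
        id.hash(&mut hasher);
    }
    hasher.finish()
}
```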
OK, I've read the S3 documentation on multipart upload and I hate it. The next comment appears to be false too: it's the offset within the part, not within the whole file
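A quick worked example of the distinction (part sizes here are just for illustration): with 5 MiB parts, an offset of 100 inside part 3 is not byte 100 of the object, it's 2 × 5 MiB + 100 from the start. A hypothetical helper for the conversion:

```rust
// Sketch: translate (part_number, offset_in_part) into an absolute object offset,
// assuming the sizes of all previous parts are known. Names are illustrative.
// S3 part numbers start at 1.
fn absolute_offset(part_sizes: &[u64], part_number: usize, offset_in_part: u64) -> u64 {
    let before: u64 = part_sizes[..part_number - 1].iter().sum();
    before + offset_in_part
}
```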
I would not necessarily call it bad practice. It conveyed to me that "doing nothing" is a good default behavior, and while doing nothing might be a valid behavior, I think wanting to do nothing…