Separate replication modes for metadata/data #720
An idea for a replication mode that would not compromise too much on complexity (no erasure coding or anything), but would still allow a reliable Garage cluster with somewhat less disk usage.
Basically, I would split the replication count of metadata and data, so that there are three copies of the metadata but only two copies of the data. Since the metadata includes a hash of the block, the third metadata node could act as a tie-breaker, and could also confirm a read when one of the two nodes holding the data is offline (preserving read-after-write consistency), without requiring the whole block to be stored a third time. I wonder if you have already considered something like this.
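To make the tie-breaker idea concrete, here is a minimal sketch of the proposed read path. All names are hypothetical, not Garage's actual API, and `DefaultHasher` stands in for a real cryptographic hash: the point is only that a hash held by the 3-way-replicated metadata can confirm a block fetched from a single data replica.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash; a real implementation would use a cryptographic hash.
fn block_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Metadata entry, replicated on 3 nodes; stores the block's hash.
struct BlockMeta {
    hash: u64,
}

/// Confirm a read when only one of the two data nodes answered:
/// the hash from the metadata quorum acts as the tie-breaker, so a
/// single matching copy is enough for a consistent read.
fn read_with_one_replica(meta: &BlockMeta, candidate: &[u8]) -> Option<Vec<u8>> {
    if block_hash(candidate) == meta.hash {
        Some(candidate.to_vec()) // confirmed by metadata hash
    } else {
        None // stale or corrupt copy; must reach the other data node
    }
}

fn main() {
    let data = b"hello block";
    let meta = BlockMeta { hash: block_hash(data) };
    assert!(read_with_one_replica(&meta, data).is_some());
    assert!(read_with_one_replica(&meta, b"stale data").is_none());
}
```

The sketch shows why no third full copy is needed: the third metadata node contributes only the hash (a few bytes) rather than the block itself.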
I think once we have #550 this would not be too hard. Supporting multiple replication modes / multiple layouts in a single cluster is the really hard part.