Zone-aware data migration #483
I recently set up a `garage` cluster with a single node in each of two zones and, of course, a replication count of 2. These two nodes have a small data pipe between them.

I then added a single node to one of the zones (shown as `ccc` below), and garage presented this new layout:

As an admin, I thought: "OK, it knows that the data is in `aaa` and will move it to `ccc` in the same zone; they have a full gigabit link, so that should take no time."

However, looking at the Prometheus metrics, garage instead decided to take half the data from `aaa` and the other half from `bbb`, which took longer due to the smaller pipe.

I think garage should prioritize moving the data from within the zone itself when that is available.