Slow GC #839
On a 3-node Garage cluster, GC is very slow: it processes around 100K GC items per day (metadata is on NVMe).
Is there any tunable?
Is the GC too slow, or is your Garage cluster too slow?
Do you want to increase the speed of the GC, or decrease it?
What kind of workload do you have?
Can you describe your deployment in more depth? Especially CPU, RAM, virtualization, shared environment, etc.
What makes you think it's a Garage issue and not your servers being too slow? (This is not meant to dismiss your issue, but it helps to understand where the strange behavior is.)
My underlying filesystem is ZFS, which perhaps makes it slow. After my vacation the GC finished and it no longer seems to be an issue.
Thanks,
Items in the GC queue are processed after a 24h delay, so it is normal that the queue is never zero.
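To illustrate why the queue length stays non-zero, here is a minimal, hypothetical Rust sketch of a time-delayed deletion queue. It is not Garage's actual implementation; the names and structure are made up, and only the 24h grace period comes from the comment above. Under a steady write workload, entries younger than the delay always remain in the queue even when the worker keeps up.

```rust
use std::collections::VecDeque;
use std::time::{Duration, SystemTime};

/// Grace period before a queued item becomes eligible for deletion
/// (illustrative value matching the 24h delay described above).
const GC_DELAY: Duration = Duration::from_secs(24 * 60 * 60);

/// A queued GC entry: the key to delete and the time it was enqueued.
struct GcItem {
    key: String,
    enqueued_at: SystemTime,
}

/// A minimal time-delayed GC queue: items are only drained once they
/// have aged past GC_DELAY, so the queue is rarely empty.
struct GcQueue {
    items: VecDeque<GcItem>,
}

impl GcQueue {
    fn new() -> Self {
        Self { items: VecDeque::new() }
    }

    /// Record an item for deferred deletion.
    fn push(&mut self, key: String) {
        self.items.push_back(GcItem {
            key,
            enqueued_at: SystemTime::now(),
        });
    }

    /// Pop the oldest item only if it has waited at least GC_DELAY.
    fn pop_ready(&mut self) -> Option<GcItem> {
        let ready = self
            .items
            .front()
            .map(|item| {
                item.enqueued_at
                    .elapsed()
                    .map(|age| age >= GC_DELAY)
                    .unwrap_or(false)
            })
            .unwrap_or(false);
        if ready {
            self.items.pop_front()
        } else {
            None
        }
    }
}

fn main() {
    let mut queue = GcQueue::new();
    queue.push("block:abc123".to_string());

    // Freshly enqueued items are not yet eligible, so nothing is deleted.
    match queue.pop_ready() {
        Some(item) => println!("deleting {}", item.key),
        None => println!("{} item(s) still waiting out the delay", queue.items.len()),
    }
}
```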