0.8.1: Heavy data directory reading after upgrade #470
Reference: Deuxfleurs/garage#470
I upgraded my cluster from `0.7.2` to `0.8.1` and followed all the steps at https://garagehq.deuxfleurs.fr/documentation/working-documents/migration-08/. There is currently zero S3/web activity on the cluster, and nothing in the queues, but I'm observing a lot of disk reads.

If I probe with DTrace for files being opened, it looks like `garage` is crawling its own data directory. Is there something else I can try to debug where this is coming from?
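For reference, a minimal sketch of how such probing can be done (assuming a DTrace-capable system; the exact `opensnoop` invocation and the `pgrep` pattern are illustrative, not taken from the original report):

```shell
# Watch file opens by the running garage process (DTrace-based systems,
# e.g. macOS or illumos; on Linux, bcc's opensnoop works similarly).
sudo opensnoop -p "$(pgrep -x garage)"

# Equivalent raw DTrace one-liner: print every path passed to open(2)
# by processes named "garage".
sudo dtrace -n 'syscall::open*:entry /execname == "garage"/ { printf("%s", copyinstr(arg0)); }'
```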
Your node is probably doing a scrub of the stored data to check for corruption. It does this once a month. It is meant to be a background process that limits its own I/O so that interactive requests are served first. You can check the progress of the scrub using `garage worker list` and `garage worker info`. You can change the speed of the scrub using `garage worker set scrub-tranquility` (zero is the fastest possible; larger values mean a longer interval between iterations, and therefore a smaller proportion of I/O time used by the scrub).

Very interesting, that was indeed it.
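The commands mentioned above can be combined roughly like this (a sketch; the worker id `1` and the tranquility value `4` are illustrative, not prescribed values):

```shell
# List background workers to find the scrub worker's id and progress.
garage worker list

# Inspect a specific worker in detail (replace 1 with the id from the list).
garage worker info 1

# Slow the scrub down: a higher tranquility means longer pauses between
# iterations, so less I/O time is spent scrubbing. 0 = full speed.
garage worker set scrub-tranquility 4
```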