Use SRPT scheduling to improve liveness when sending messages between peers #361
Currently, based on discussions and current research, messages between Garage peers are scheduled with the following strategy:
Round-Robin is not ideal in the second case, because when there are many concurrent messages, all messages will experience a high completion time.
A better strategy for messages with identical priorities is SRPT (Shortest Remaining Processing Time). It is designed to minimize the number of jobs in the system (under certain hypotheses), and it also minimizes the average completion time.
For Garage it would look like the following:
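As an illustration only (a hypothetical sketch, not the scheduler actually proposed or shipped in Garage), an SRPT send queue can be as simple as a min-heap keyed on each stream's remaining byte count; the `SrptQueue` type and its methods below are made-up names:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Minimal SRPT queue: among all streams that have data to send, always pick
/// the one with the fewest remaining bytes. Entries are (remaining_bytes, id)
/// wrapped in `Reverse` so the binary heap behaves as a min-heap.
struct SrptQueue {
    heap: BinaryHeap<Reverse<(u64, u64)>>,
}

impl SrptQueue {
    fn new() -> Self {
        Self { heap: BinaryHeap::new() }
    }

    /// Register a stream with a known total of bytes left to send.
    fn push(&mut self, stream_id: u64, remaining_bytes: u64) {
        self.heap.push(Reverse((remaining_bytes, stream_id)));
    }

    /// Pick the stream that should send the next chunk (the one with the
    /// shortest remaining size), account for the chunk, and re-queue the
    /// stream if it still has data left.
    fn next_chunk(&mut self, chunk_size: u64) -> Option<u64> {
        let Reverse((remaining, stream_id)) = self.heap.pop()?;
        let left = remaining.saturating_sub(chunk_size);
        if left > 0 {
            self.heap.push(Reverse((left, stream_id)));
        }
        Some(stream_id)
    }
}
```

Each time the transport is ready to send one chunk, `next_chunk` picks the stream with the fewest bytes left, which is exactly the SRPT rule.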
Here are some papers on the subject (in different theoretical settings but the ideas should apply in a similar way):
Question: We are planning to move Netapp to a streaming-based API, in which clients and servers could use async streams to send and receive data. This means that Netapp would not necessarily know how much data remains to be sent on each stream it is currently processing, as that data may not have been produced yet. Do we know what the optimal scheduling policy is when the remaining size is unknown?
The netapp streaming PR that was merged recently (#343) prevents the use of SRPT because we don't know the length of streams that will be sent ahead of time. Instead it uses LAS scheduling (Least Attained Service). TODO: benchmark this.
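For comparison, LAS can be sketched in the same way, except that streams are ordered by the number of bytes they have already sent (their attained service) rather than by what remains, so no knowledge of the stream length is needed. Again, this is only an illustrative sketch with made-up names, not netapp's actual implementation:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Minimal LAS (Least Attained Service) queue: always serve the stream that
/// has sent the fewest bytes so far. Entries are (bytes_sent_so_far, id).
struct LasQueue {
    heap: BinaryHeap<Reverse<(u64, u64)>>,
}

impl LasQueue {
    fn new() -> Self {
        Self { heap: BinaryHeap::new() }
    }

    /// Register a new stream; it starts with zero attained service.
    fn push(&mut self, stream_id: u64) {
        self.heap.push(Reverse((0, stream_id)));
    }

    /// Pop the stream with the least attained service. The caller sends one
    /// chunk on it, then calls `requeue` with the updated byte count if the
    /// stream is not finished.
    fn pop_least_served(&mut self) -> Option<(u64, u64)> {
        let Reverse((sent, stream_id)) = self.heap.pop()?;
        Some((stream_id, sent))
    }

    fn requeue(&mut self, stream_id: u64, bytes_sent_so_far: u64) {
        self.heap.push(Reverse((bytes_sent_so_far, stream_id)));
    }
}
```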
As far as I know, LAS is a good idea for long-tailed distributions of flow sizes (roughly speaking: many small flows, a few very big flows) when the actual flow size is not known in advance.
Do you have an idea of the typical distribution of stream sizes in Garage? If there are many streams of the same size, I have the feeling that LAS would be similar to Round-Robin.
After discussing with LX, it seems that in practice we always know the length of a stream ahead of time. Either the user is pushing an object of unknown size, in which case we need to buffer it to compute its hash and choose a server; or the user is fetching an object, in which case we already have it in our system and can get its size.
The algorithm I have converged on for Garage v1.0 is the following (a rough sketch is given after the list):
- As a first priority, poll and send packets for new streams, as these streams have a chance of being short messages that can be sent successfully in a single chunk.
- For streams that require more than one chunk (this applies almost exclusively to data blocks), we use the basic round-robin strategy.
- The block manager has a semaphore (`buffer_kb_semaphore`) that limits the maximum number of data chunks that can be transferred concurrently. This avoids memory exhaustion from buffering too many chunks in RAM when network throughput is too slow (#788, #792).

For the time being, this solves the issue quite well.
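Purely as an illustration of the strategy described above (made-up names, not the actual netapp/Garage code), the two-level policy can be sketched like this; the `buffer_kb_semaphore` back-pressure is left out and would be handled by a counting semaphore (e.g. `tokio::sync::Semaphore`) gating how many chunk buffers are in flight:

```rust
use std::collections::VecDeque;

/// Hypothetical per-stream state: an opaque id and how many chunks it still
/// has to send (assumed to be known, as discussed above).
struct Stream {
    id: u64,
    chunks_left: usize,
}

/// Two-level scheduler sketch:
/// - `new_streams`: streams that have not sent anything yet; served first,
///   since many of them fit in a single chunk.
/// - `long_streams`: streams that needed more than one chunk (mostly data
///   blocks); served round-robin among themselves.
struct Scheduler {
    new_streams: VecDeque<Stream>,
    long_streams: VecDeque<Stream>,
}

impl Scheduler {
    fn new() -> Self {
        Self {
            new_streams: VecDeque::new(),
            long_streams: VecDeque::new(),
        }
    }

    fn add_stream(&mut self, id: u64, chunks_left: usize) {
        debug_assert!(chunks_left >= 1);
        self.new_streams.push_back(Stream { id, chunks_left });
    }

    /// Decide which stream sends the next chunk.
    fn next_chunk(&mut self) -> Option<u64> {
        // 1. New streams first: their first chunk may complete them outright.
        if let Some(mut s) = self.new_streams.pop_front() {
            s.chunks_left -= 1;
            let id = s.id;
            if s.chunks_left > 0 {
                // More than one chunk needed: demote to the round-robin queue.
                self.long_streams.push_back(s);
            }
            return Some(id);
        }
        // 2. Otherwise, round-robin among multi-chunk streams.
        let mut s = self.long_streams.pop_front()?;
        s.chunks_left -= 1;
        let id = s.id;
        if s.chunks_left > 0 {
            self.long_streams.push_back(s);
        }
        Some(id)
    }
}
```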