Use SRPT scheduling to improve liveness when sending messages between peers #361
Currently, as far as I can tell from discussions and from reading the code, messages between Garage peers are scheduled according to the following strategy:
- higher priority messages get scheduled first
- messages with the same priority are sent with a Round-Robin strategy
Round-Robin is not ideal in the second case: when there are many concurrent messages, every message experiences a high completion time.
A better strategy for messages with identical priorities is SRPT (Shortest Remaining Processing Time). Under certain hypotheses it minimizes the number of objects in the system, and it also minimizes the average completion time.
For Garage it would look like the following:
- order all messages in the queue according to their remaining size (e.g. if we already sent 800 KB of a 1 MB block, the remaining size is 200 KB)
- send messages one after the other according to this order (no parallelization)
- whenever a new message needs to be sent, it is inserted in the ordered queue; if it comes first in the queue (i.e. it is smaller than the remaining size of all other messages), it pre-empts the message that we were currently sending
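The queue described above can be sketched with a min-heap keyed on remaining size. This is a hypothetical illustration, not Garage's actual netapp code; `Message`, `SrptQueue`, and the field names are made up for the example:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// A message with a known remaining size (hypothetical type for
/// illustration; not Garage's actual message representation).
#[derive(Debug, PartialEq, Eq)]
struct Message {
    remaining: u64, // bytes left to send
    id: u32,        // hypothetical message identifier
}

/// Min-heap keyed on remaining size: `pop()` always yields the
/// message with the Shortest Remaining Processing Time.
struct SrptQueue {
    heap: BinaryHeap<Reverse<(u64, u32)>>,
}

impl SrptQueue {
    fn new() -> Self {
        SrptQueue { heap: BinaryHeap::new() }
    }

    /// Insert a message; if it is smaller than everything already
    /// queued it becomes the next one popped, which models
    /// pre-empting the message currently being sent.
    fn push(&mut self, msg: Message) {
        self.heap.push(Reverse((msg.remaining, msg.id)));
    }

    /// Take the message with the fewest remaining bytes.
    fn pop(&mut self) -> Option<Message> {
        self.heap
            .pop()
            .map(|Reverse((remaining, id))| Message { remaining, id })
    }
}

fn main() {
    let mut q = SrptQueue::new();
    // A 1 MB block of which 800 KB were already sent: 200 KB remain.
    q.push(Message { remaining: 200_000, id: 1 });
    q.push(Message { remaining: 1_000_000, id: 2 });
    // A small 10 KB message arrives and jumps ahead of both.
    q.push(Message { remaining: 10_000, id: 3 });

    assert_eq!(q.pop().unwrap().id, 3);
    assert_eq!(q.pop().unwrap().id, 1);
    assert_eq!(q.pop().unwrap().id, 2);
    println!("SRPT order ok");
}
```

Note that re-inserting a partially-sent message with its updated `remaining` value is what makes pre-emption cheap: the heap ordering takes care of everything.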
Here are some papers on the subject (in different theoretical settings but the ideas should apply in a similar way):
Question: We are planning to move Netapp to a streaming-based API, in which clients and servers could use async streams to send and receive data. This means that Netapp would not necessarily know how much data remains to be sent on each stream it is currently processing, as such data has not been produced yet. Do we know what the optimal scheduling policy is when the remaining size is unknown?
The netapp streaming PR that was merged recently (#343) prevents the use of SRPT, because we don't know the length of a stream ahead of time. Instead it uses LAS (Least Attained Service) scheduling. TODO: benchmark this.
As far as I know, LAS is a good choice for long-tailed distributions of flow sizes (roughly speaking: many small flows and a few very big ones) when the actual flow size is not known in advance.
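To make the comparison concrete, here is a hypothetical sketch of LAS (again, not the actual netapp implementation): among active streams, always serve the one that has received the fewest bytes of service so far. `LasScheduler` and its method names are invented for this example:

```rust
use std::collections::HashMap;

/// Least Attained Service: always send the next chunk of the stream
/// that has been served the fewest bytes so far. Stream lengths need
/// not be known in advance.
struct LasScheduler {
    /// stream id -> bytes sent so far (attained service)
    attained: HashMap<u32, u64>,
}

impl LasScheduler {
    fn new() -> Self {
        LasScheduler { attained: HashMap::new() }
    }

    fn add_stream(&mut self, id: u32) {
        self.attained.entry(id).or_insert(0);
    }

    /// Pick the stream with the least attained service (ties broken
    /// by stream id for determinism), then account for `chunk` bytes
    /// of service on it.
    fn next(&mut self, chunk: u64) -> Option<u32> {
        let id = *self
            .attained
            .iter()
            .min_by_key(|(id, served)| (**served, **id))?
            .0;
        *self.attained.get_mut(&id).unwrap() += chunk;
        Some(id)
    }
}

fn main() {
    let mut s = LasScheduler::new();
    s.add_stream(1);
    // Stream 1 gets two chunks while it is alone.
    assert_eq!(s.next(64), Some(1));
    assert_eq!(s.next(64), Some(1));
    // A new stream arrives with 0 attained service: it is served
    // until it catches up with stream 1, so fresh (potentially
    // short) flows pre-empt long-running ones.
    s.add_stream(2);
    assert_eq!(s.next(64), Some(2));
    assert_eq!(s.next(64), Some(2));
    // Both streams now have 128 bytes attained: service alternates,
    // i.e. LAS degenerates to Round-Robin among equally-served streams.
    println!("LAS order ok");
}
```

The last step of the example is exactly the concern raised below: when all streams have the same attained service, LAS behaves like Round-Robin.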
Do you have an idea of the typical distribution of stream sizes in Garage? If there are many streams of the same size, I suspect LAS would behave much like Round-Robin.
After discussing with LX, it seems that in practice we always know the length of a stream ahead of time. Either the user is pushing an object of unknown size, in which case we need to buffer it anyway to compute its hash and choose a server; or a user is fetching an object, in which case we already have it in our system and can look up its size.