Compare commits

2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Alex | 85f580cbde | [fix-buffering] change request sending strategy and fix priorities: remove LAS, prioritize new requests but otherwise just do standard queuing | 2024-03-27 16:22:40 +01:00 |
| Alex | 0d3e285d13 | [fix-buffering] implement `block_ram_buffer_max` to avoid excessive RAM usage | 2024-03-27 16:22:40 +01:00 |
1 changed file with 16 additions and 4 deletions

@@ -424,10 +424,22 @@ might use more storage space than is optimally possible.
#### `block_ram_buffer_max` (since v0.9.4) {#block_ram_buffer_max}
-A limit on the total size of data blocks kept in RAM as they are awaiting
-to be sent to storage nodes. This helps avoid unbounded RAM usage growth
-if a storage node has slower networking than the others and is not able
-to receive and store data blocks fast enough.
+A limit on the total size of data blocks kept in RAM by S3 API nodes while
+they wait to be sent to storage nodes asynchronously.
+
+Explanation: since Garage wants to tolerate node failures, it uses quorum
+writes to send data blocks to storage nodes: it tries to write the block to
+three nodes, and returns OK as soon as two writes complete. So even if all
+three nodes are online, the third write always completes asynchronously. In
+general, a cluster does not receive many writes, and the third asynchronous
+write can terminate early enough to avoid unbounded RAM growth. However, if
+the S3 API node is continuously receiving large quantities of data and the
+third node is never able to catch up, many data blocks will be kept buffered
+in RAM while they await transfer to the third node.
+
+The `block_ram_buffer_max` parameter limits the total size of the buffers
+that can be kept in RAM by this process. When the limit is reached,
+backpressure is applied to the S3 client.
+
+Note that this only counts buffers that have reached a certain stage of
+processing (received from the client + encrypted and/or compressed as
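
To make the quorum-write behavior described in the added text concrete, here is a minimal sketch in Rust (Garage's implementation language), using the `tokio` and `futures` crates. It is not Garage's actual code: `send_to_node` and `quorum_write` are hypothetical names, and the fixed replication factor of three with a quorum of two simply mirrors the explanation above.

```rust
use std::sync::Arc;

use futures::stream::{FuturesUnordered, StreamExt};

// Hypothetical placeholder for the RPC that writes one block replica to a
// storage node; in Garage this would be a network call.
async fn send_to_node(node: usize, block: Arc<Vec<u8>>) -> Result<(), ()> {
    let _ = (node, &block); // pretend the bytes are sent over the network
    Ok(())
}

// Write `block` to three nodes and return Ok as soon as two writes complete.
async fn quorum_write(block: Arc<Vec<u8>>) -> Result<(), ()> {
    let mut writes: FuturesUnordered<_> =
        (0..3).map(|node| send_to_node(node, block.clone())).collect();

    // Wait for a quorum of two successful writes.
    let mut successes = 0;
    while successes < 2 {
        match writes.next().await {
            Some(Ok(())) => successes += 1,
            Some(Err(())) => {} // one replica failed; keep waiting on the rest
            None => return Err(()), // all writes done, quorum not reached
        }
    }

    // Quorum reached: let the remaining write finish in the background. The
    // block stays in RAM (kept alive by the Arc) until that task completes.
    tokio::spawn(async move {
        while writes.next().await.is_some() {}
    });
    Ok(())
}
```

The key point is the final `tokio::spawn`: once the quorum is reached the caller returns, but the `Arc<Vec<u8>>` is still held by the detached write, so the block's bytes remain in RAM until the slowest node has received them.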
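Similarly, here is one plausible way to implement the RAM cap and its backpressure, sketched under the assumption of a byte-granular semaphore. Again, these names are illustrative rather than Garage's API, and the 256 MiB value and the TOML syntax in the comment are examples, not confirmed defaults.

```rust
use std::sync::Arc;

use tokio::sync::{OwnedSemaphorePermit, Semaphore};

// Example budget only; in garage.toml this would be something like
// `block_ram_buffer_max = "256MiB"` (exact syntax and default not
// confirmed here).
const RAM_BUDGET_BYTES: u32 = 256 * 1024 * 1024;

fn new_budget() -> Arc<Semaphore> {
    // One permit per byte of allowed buffered data.
    Arc::new(Semaphore::new(RAM_BUDGET_BYTES as usize))
}

// Reserve RAM budget for a block before buffering it for the asynchronous
// write. If the budget is exhausted, this await does not complete, and the
// caller stops reading from the S3 client's connection: that is the
// backpressure.
async fn reserve_for_block(
    budget: Arc<Semaphore>,
    block_len: u32,
) -> OwnedSemaphorePermit {
    budget
        .acquire_many_owned(block_len)
        .await
        .expect("budget semaphore was closed")
}
```

A caller would store the returned permit next to the buffered block, so the budget is released exactly when the block leaves RAM; and because `acquire_many_owned` does not resolve until enough permits are free, a client uploading faster than the slowest storage node simply sees its connection stop being read.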