Add info about logs and client to the writeup

Quentin 2022-08-11 16:56:54 +02:00
parent e83f059675
commit 86ab9d7c00
Signed by: quentin
GPG key ID: E9602264D639FF68


@@ -6,7 +6,9 @@ On our production cluster that runs without pressure, we don't really observe this error.
But when I wanted to start a benchmark created by Minio developers, I hit the same limit.
So I wanted to reproduce this behavior in a more controlled environment.
## Reproducing the error in mknet
I used mknet to emulate a simple network with close to zero latency but with a very small bandwidth: 1 Mbit/s. The idea is that the network, and not the CPU, the memory or the disk, will be the bottleneck, even on my low-powered laptop.
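mknet builds the whole emulated topology for me, so its actual configuration is not reproduced here. As a sketch of what such an emulation boils down to (an assumption about the underlying mechanism; `veth-node1` is a hypothetical interface name), the snippet below caps a link at 1 Mbit/s with Linux traffic control while leaving latency untouched:

```python
import subprocess

def limit_bandwidth(iface: str, rate: str = "1mbit") -> None:
    """Cap egress bandwidth on `iface` with a token bucket filter.

    Latency stays near zero; only throughput is constrained.
    Must be run as root (or with CAP_NET_ADMIN).
    """
    # Remove any previous qdisc so the function can be re-run safely.
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=False)
    # Token Bucket Filter: 1 Mbit/s rate, small burst, bounded queue.
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root",
         "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical interface name for one emulated node.
    limit_bandwidth("veth-node1")
```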
After a short while, we observe that the cluster is not reacting very well:
@@ -24,12 +26,25 @@ warp: <ERROR> upload error: Put "http://[fc00:9a7a:9e::1]:3900/warp-benchmark-bu
warp: <ERROR> Error preparing server: upload error: Internal error: Could not reach quorum of 2. 1 of 3 request succeeded, others returned errors: ["Timeout", "Timeout"].
```
We observe many different types of error that I categorize as follows:
- [RPC] Timeout quorum errors, probably generated by a ping between nodes
- [RPC] Not connected errors: after a timeout, a reconnection is not triggered directly
- [S3 Gateway] The gateway took too much time to answer and a timeout was triggered in the client
- [S3 Gateway] The S3 gateway closes the TCP connection before answering
As a first conclusion, we have clearly narrowed the scope of the problem by identifying that this undesirable behavior is triggered by a network bottleneck.
Looking at Garage logs, we see that:
- node1, which is our S3 gateway, has many quorum errors / netapp not connected errors, the same as the ones reported earlier
- node2 and node3, which are only used as storage nodes, have no errors or warnings in their logs
It really starts to look like a congestion control / flow control / scheduler issue: our S3 gateway seems to receive more data than it can send over the network, which in turn triggers timeouts, which trigger disconnections, and breaks everything.
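A quick back-of-the-envelope computation makes this hypothesis plausible. The object size, replica count and timeout below are illustrative assumptions, not values taken from Garage; only the 1 Mbit/s link comes from our setup:

```python
# Back-of-the-envelope estimate: how long must the gateway keep an RPC open
# to forward one object to the storage nodes over the constrained link?
LINK_MBIT_PER_S = 1    # emulated bandwidth of the bottleneck link
OBJECT_MB = 100        # size of one uploaded object (assumption)
REMOTE_COPIES = 2      # copies pushed to other nodes to reach a quorum of 2 (assumption)
RPC_TIMEOUT_S = 30     # hypothetical RPC timeout (assumption)

megabits_to_send = OBJECT_MB * 8 * REMOTE_COPIES
transfer_time_s = megabits_to_send / LINK_MBIT_PER_S

print(f"time to forward one object: {transfer_time_s:.0f} s")                     # 1600 s
print(f"timeout exceeded by a factor of {transfer_time_s / RPC_TIMEOUT_S:.0f}")   # ~53
# Any fixed timeout fires long before the data is through, and every
# concurrent upload only makes the queueing on the gateway worse.
```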
## Write a custom client exhibiting the issue
We know how to trigger the issue with `warp`, Minio's benchmark tool, but we don't yet understand what kind of load it puts on the cluster, except that it sends PUT and multipart requests concurrently. So, before investigating the issue more in depth, we want to know:
- Can a single large PUT request trigger this issue?
- How many parallel requests are needed to trigger this issue?
- Are multipart transfers more impacted by this issue?
To answer these questions, we will write a specific benchmark.
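The benchmark itself comes later; as a sketch of what it needs to do, a minimal Python client along the following lines (boto3 is assumed; the endpoint, credentials, bucket name and object sizes are placeholders) can already exercise the three scenarios above: one large PUT, N parallel PUTs, and a multipart upload.

```python
import concurrent.futures
import boto3

# Placeholder endpoint and credentials for the emulated cluster (assumptions).
s3 = boto3.client(
    "s3",
    endpoint_url="http://[fc00:9a7a:9e::1]:3900",
    aws_access_key_id="GK-placeholder",
    aws_secret_access_key="secret-placeholder",
)
BUCKET = "benchmark-bucket"  # hypothetical bucket name

def single_put(size_mb: int = 100) -> None:
    """Question 1: does one large PUT alone trigger the timeouts?"""
    s3.put_object(Bucket=BUCKET, Key="single-put", Body=b"\0" * size_mb * 1024 * 1024)

def parallel_puts(n: int, size_mb: int = 10) -> None:
    """Question 2: how many concurrent PUTs are needed to trigger the issue?"""
    body = b"\0" * size_mb * 1024 * 1024
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(s3.put_object, Bucket=BUCKET, Key=f"put-{i}", Body=body)
                   for i in range(n)]
        for f in futures:
            f.result()  # re-raise timeouts / connection errors

def multipart_put(parts: int = 10, part_size_mb: int = 5) -> None:
    """Question 3: are multipart transfers more impacted?"""
    up = s3.create_multipart_upload(Bucket=BUCKET, Key="multipart")
    etags = []
    for i in range(1, parts + 1):
        r = s3.upload_part(Bucket=BUCKET, Key="multipart", UploadId=up["UploadId"],
                           PartNumber=i, Body=b"\0" * part_size_mb * 1024 * 1024)
        etags.append({"PartNumber": i, "ETag": r["ETag"]})
    s3.complete_multipart_upload(Bucket=BUCKET, Key="multipart",
                                 UploadId=up["UploadId"],
                                 MultipartUpload={"Parts": etags})
```

Running each scenario separately, while watching the gateway's logs, should tell us which minimal load is enough to reproduce the timeouts observed with `warp`.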