Digging deeper into the crate dependencies, we learn from the [aws-smithy-runtime] documentation that:

> [Constructing] a Hyper client with the default TLS implementation (rustls) [...] can be useful when you want to share a Hyper connector between multiple generated Smithy clients.

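To make that suggestion concrete, here is a minimal sketch of what sharing one Hyper-based connector between several generated clients can look like. It assumes the aws-smithy-runtime 1.x `HyperClientBuilder` and `aws-config` APIs, and uses `aws-sdk-s3` purely as a stand-in for "some Smithy-generated client"; it illustrates the idea, it is not Aerogramme's actual code.

```rust
use aws_config::BehaviorVersion;
use aws_smithy_runtime::client::http::hyper_014::HyperClientBuilder;

#[tokio::main]
async fn main() {
    // One rustls-backed Hyper client for the whole process (assumed API from
    // aws-smithy-runtime; the exact module path may differ between SDK versions).
    let http_client = HyperClientBuilder::new().build_https();

    // Every client built from this config reuses the same connector instead of
    // spinning up its own connection pool and worker threads.
    let shared_config = aws_config::defaults(BehaviorVersion::latest())
        .http_client(http_client)
        .load()
        .await;

    // Example only: two Smithy-generated clients sharing the connector.
    let s3_a = aws_sdk_s3::Client::new(&shared_config);
    let s3_b = aws_sdk_s3::Client::new(&shared_config);
    let _ = (s3_a, s3_b);
}
```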
It seems to be exactly what we want to do: to the best of my knowledge and my high-level understanding of the Rust aws-sdk ecosystem, the thread pool referenced earlier
is in fact the thread pool created by the Hyper Client. Looking at the [Hyper 0.14 client documentation](https://docs.rs/hyper/0.14.28/hyper/client/index.html), we indeed learn that:

> The default Client provides these things on top of the lower-level API: [...] A pool of existing connections, allowing better performance when making multiple requests to the same hostname.

That's exactly what we want: we are making requests to a single hostname, so we could have a single TCP connection instead of *n* connections, where *n* is the number of connected users! However, it also means sharing a single Hyper client among multiple threads. Before Hyper 0.11.2, [it was even impossible](https://stackoverflow.com/questions/44866366/how-can-i-use-hyperclient-from-another-thread). Starting from 0.11.3, the Client's pool is behind an Arc reference, which allows sharing it between threads, but that is not necessarily desirable: we now have synchronization on this object. Given our workload (a high number of users, with the load expected to be evenly spread between them), a share-nothing architecture is possible. So ideally we want one thread per core, and as little communication as possible between these threads. Like all the other design changes, though, this is long-term planning; for now, a bit more synchronization could be an acceptable trade-off.

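To illustrate the shared-pool approach, here is a minimal sketch (not Aerogramme's code) with a bare hyper 0.14 `Client`: clones of the client are cheap and all draw from the same connection pool, so requests from many users to the one hostname reuse connections instead of each user maintaining their own. The endpoint URL below is a placeholder.

```rust
use hyper::{Client, Uri};

#[tokio::main]
async fn main() {
    // One client for the whole process; clones share its connection pool
    // (which lives behind an Arc, hence the synchronization mentioned above).
    let client = Client::new();

    let mut handles = Vec::new();
    for user in 0..4u32 {
        let client = client.clone(); // cheap: same pool, no new connections
        handles.push(tokio::spawn(async move {
            // Placeholder endpoint; in Aerogramme this would be the storage API.
            let uri: Uri = "http://127.0.0.1:3900/".parse().unwrap();
            match client.get(uri).await {
                Ok(resp) => println!("user {user}: HTTP {}", resp.status()),
                Err(e) => eprintln!("user {user}: request failed: {e}"),
            }
        }));
    }
    for h in handles {
        let _ = h.await;
    }
}
```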
Sharing the HTTP client, then, is exactly what we want to do, but to be effective, we must do the same thing for K2V.

After implementing these features, I got the following plot:

![Idle resource usage for Aerogramme 0.2.1 & 0.2.2](idle.png)

First, the spikes are more spread out because I am typing the commands by hand, not because Aerogramme is slower!

Another irrelevant artifact is the end of the memory plot: memory is not released on the left plot because I cut the recording before typing the LOGOUT command.

What's interesting is the memory usage range: on the left, it's ~20MB, on the right it's ~10MB.

By sharing the HTTP client, we thus use half as much memory per user, down to around ~650kB/user for Aerogramme 0.2.2.

Back to our 1k users: we get 650MB of RAM, 6.5GB for 10k, and thus 65GB for 100k. So *in theory* it seems OK,
but our sample and our methodology are too dubious to confirm that such memory usage will be observed *in practice*.

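For reference, the projection above is just a straight multiplication of the ~650kB/user figure; a quick sketch to reproduce the numbers:

```rust
// Back-of-the-envelope projection using the ~650kB/user figure quoted above;
// this is an extrapolation, not a benchmark.
fn main() {
    const PER_USER_BYTES: f64 = 650_000.0; // ~650kB per idle user (Aerogramme 0.2.2)
    for users in [1_000u64, 10_000, 100_000] {
        let gb = users as f64 * PER_USER_BYTES / 1e9;
        println!("{users} idle users -> ~{gb} GB of RAM");
    }
}
```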
In the end, I think IDLE RAM usage in Aerogramme is acceptable for now, and thus we can move
on to other aspects without fearing that IDLE will make the software unusable.

## User feedback