New article: Bringing theoretical design and observed performances face to face #12
1 changed file with 8 additions and 7 deletions
@@ -319,8 +319,8 @@ metadata engine and thus focus only on 16-byte objects.
 It appears that the performances of our metadata engine are acceptable, as we
 have a comfortable margin compared to Minio (Minio is between 3 and 4 times
 slower per batch). We also note that, past 200k objects, Minio batch
-completion time is constant as Garage's one remains linear: it could be
-interesting to know if Garage batch's completion time would cross Minio's one
+completion time is constant while Garage's is still increasing in the observed range:
+it could be interesting to know whether Garage's batch completion time would cross Minio's
 for a very large number of objects. If we reason per object, both Minio and
 Garage performances remain very good: it takes around 20ms and
 5ms respectively to create an object. At 100 Mbps, if you upload a 10MB file, the
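To make the per-object figures concrete, here is a back-of-the-envelope sketch of the comparison this paragraph starts. The 100 Mbps link, the 10MB file, and the 20ms/5ms per-object times come from the text; the percentage framing is our own illustration:

```python
# Rough estimate: how does per-object metadata overhead compare with the
# raw transfer time of a 10MB upload over a 100 Mbps link?
link_mbps = 100                       # link throughput in megabits per second
file_mb = 10                          # file size in megabytes
transfer_s = file_mb * 8 / link_mbps  # 80 Mbit / 100 Mbps = 0.8 s

for engine, overhead_s in [("Minio", 0.020), ("Garage", 0.005)]:
    share = 100 * overhead_s / (transfer_s + overhead_s)
    print(f"{engine}: {overhead_s * 1000:.0f} ms overhead, "
          f"{share:.1f}% of the total upload time")
# Minio: 20 ms overhead, 2.4% of the total upload time
# Garage: 5 ms overhead, 0.6% of the total upload time
```

In other words, at this file size the metadata engine accounts for only a few percent of the end-to-end upload time for either system.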
@@ -333,10 +333,11 @@ Next, we focus on Garage's data only to better see its specific behavior:
 ![Showing the time to send 128 batches of 8192 objects for Garage only](1million.png)

-Two effects are now more visible: 1. batch completion time is linear with the
+Two effects are now more visible: 1. batch completion time increasing with the
 number of objects in the bucket and 2. measurements are dispersed, at least
-more than Minio. We discussed the first point previously but not the second
-one on measurement dispersion. This instability could be an issue as it could
+more than Minio's. We don't know for sure whether this increase in batch completion
+time is linear or logarithmic, as we don't have enough datapoints; additional
+measurements are needed. Concerning the observed instability, it could
 be a symptom of what we saw with some other experiments on this machine:
 sometimes it freezes under heavy I/O operations. Such freezes could lead to
 request timeouts and failures. If it occurs on our testing computer, it will
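As a side note, the figure above summarizes 128 batches of 8192 objects of 16 bytes each. A minimal sketch of that kind of batch benchmark, assuming a boto3 client; the endpoint URL, credentials, and bucket name below are placeholders, and boto3 is not necessarily the tool the article used:

```python
import time

import boto3

# Placeholder endpoint and credentials; point these at your own cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:3900",
    aws_access_key_id="PLACEHOLDER_KEY_ID",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

BUCKET = "benchmark"    # assumed to already exist
PAYLOAD = b"\x00" * 16  # 16-byte objects, as in the article

for batch in range(128):
    start = time.monotonic()
    for i in range(8192):
        # A freeze under heavy I/O would surface here as a timeout error.
        s3.put_object(Bucket=BUCKET, Key=f"batch{batch:03d}/obj{i:04d}", Body=PAYLOAD)
    print(f"batch {batch}: {time.monotonic() - start:.1f} s")
```

Timing each batch separately is what makes the growth with bucket size visible: every batch starts with 8192 more objects in the bucket than the previous one.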
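Once the additional measurements mentioned above are collected, telling linear growth apart from logarithmic growth can be done by fitting both models and comparing residuals. A minimal sketch with numpy; the numbers below are made up for illustration, not real measurements:

```python
import numpy as np

# Hypothetical datapoints: n = objects already in the bucket,
# t = batch completion time in seconds. Replace with real measurements.
n = np.array([8192, 65536, 131072, 262144, 524288, 1048576], dtype=float)
t = np.array([1.2, 2.0, 2.6, 3.8, 6.1, 10.5])

# Least-squares fits of t = a*n + b and t = a*log(n) + b.
lin_fit = np.polyfit(n, t, 1)
log_fit = np.polyfit(np.log(n), t, 1)

res_lin = np.sum((np.polyval(lin_fit, n) - t) ** 2)
res_log = np.sum((np.polyval(log_fit, np.log(n)) - t) ** 2)

# The model with the smaller residual describes the observed range better.
print(f"linear residual: {res_lin:.2f}, logarithmic residual: {res_log:.2f}")
```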
@@ -351,8 +352,8 @@ cluster at [deuxfleurs.fr](https://deuxfleurs.fr) smoothly manages a bucket with
 116k objects. This bucket contains real data: it is used by our Matrix instance
 to store people's media files (profile pictures, shared pictures, videos,
 audio files, documents...). Thanks to this benchmark, we have identified two points
-of vigilance: putting object duration seems linear with the number of existing
-objects in the cluster, and we have some volatility in our measured data that
+of vigilance: batch duration increases with the number of existing
+objects in the cluster in the observed range, and we have some volatility in our measured data that
 could be a symptom of our system freezing under the load. Despite these two
 points, we are confident that Garage could scale way above 1M objects, but it
 remains to be proven!