Proposal: make capacity easier to approach #357

Closed
opened 2022-08-11 10:36:30 +00:00 by quentin · 5 comments
Owner

Understanding the concept of "capacity" is hard for newcomers (in `garage layout assign --zone x --capacity 10 <uid>`).

We could add a layer on top of it to help newcomers.

The idea is that it should still be possible to configure the capacity in terms of weights, as now (we could even rename the `--capacity` flag to `--weight`, or keep it as is).

But we could add a second flag named `--storage` that lets people pass a storage capacity such as `--storage 500GB`. We should find a way to display it nicely in the various outputs.

Finally, we could even make this parameter optional and instead probe the disk that holds the `data` folder. I know that other things can be stored there, so the result does not map exactly to the space available to Garage, but I think it will be good enough for most people. And in the end, Garage is easy to rebalance, so it should not be that hard.
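
For illustration, here is a minimal sketch of what parsing such a `--storage` value into bytes could look like. This is only a sketch under assumptions (the `parse_storage` helper, the accepted suffixes, and the decimal units are made up for the example), not Garage's actual CLI code:

```rust
// Sketch only: turn a human-readable size such as "500GB" into a byte count.
// Helper name, accepted suffixes and decimal (SI) units are assumptions.
fn parse_storage(s: &str) -> Option<u64> {
    let s = s.trim();
    // Split the numeric part from the unit suffix.
    let split = s
        .find(|c: char| !c.is_ascii_digit() && c != '.')
        .unwrap_or(s.len());
    let (num, unit) = s.split_at(split);
    let value: f64 = num.parse().ok()?;
    let multiplier: u64 = match unit.trim().to_ascii_uppercase().as_str() {
        "" | "B" => 1,
        "KB" => 1_000,
        "MB" => 1_000_000,
        "GB" => 1_000_000_000,
        "TB" => 1_000_000_000_000,
        _ => return None,
    };
    Some((value * multiplier as f64) as u64)
}

fn main() {
    // "--storage 500GB" would translate to 500_000_000_000 bytes.
    assert_eq!(parse_storage("500GB"), Some(500_000_000_000));
    assert_eq!(parse_storage("1.5TB"), Some(1_500_000_000_000));
    assert_eq!(parse_storage("plenty"), None);
}
```

The flag would then only need to turn the parsed byte count into a weight, or (as discussed further down) use it directly as the capacity value.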

quentin added the kind/improvement and kind/ideas labels 2022-08-11 10:36:30 +00:00

Currently, does `--capacity 10` mean 1TB?

> Capacity values must be integers but can be given any signification. Here we chose that 1 unit of capacity = 100 GB.

[from here](https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/)

But how do you set this with integers for sizes of less than 100GB? 50GB, for example?

Author
Owner

Currently, this value is only a weight; it does not represent a real capacity.
If you set capacity=2 on server A and capacity=1 on server B, server A will receive twice as much data as server B. If, in addition, server A has a 2TB hard drive and server B a 1TB hard drive (or a 50GB and a 25GB drive, or any other combination where server A's drive is twice as big as server B's), both servers will be filled up by Garage at the same time, maximizing the usage of your drives. That's why we refer to this "weight" parameter as capacity!

I said capacity=2 and capacity=1 in my previous example, but I could have said capacity=30 and capacity=15, or capacity=140 and capacity=70. In the end, what matters is the ratio between server A and server B.

So to make management easier, we could hide this complexity, ask people to enter their available disk space, and compute the ratios for them.
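
To make the ratio argument concrete, here is a small standalone sketch (the numbers and the `shares` helper are just for illustration, not Garage code) showing that all three weight pairs above lead to the same data distribution:

```rust
// Compute the fraction of data each node would receive from its weight.
fn shares(weights: &[u64]) -> Vec<f64> {
    let total: u64 = weights.iter().sum();
    weights.iter().map(|&w| w as f64 / total as f64).collect()
}

fn main() {
    for weights in [[2u64, 1], [30, 15], [140, 70]] {
        // Each case prints roughly [0.667, 0.333]: the same distribution,
        // because only the ratio between the weights matters.
        println!("{:?} -> {:?}", weights, shares(&weights));
    }
}
```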


Yes, this would be critical for me as well. I'm planning on setting this up across several nodes in my homelab. Only a couple of the nodes are focused on data storage; the other nodes run various applications. Those nodes do, however, have a fair amount of spare capacity, so I was going to use them for replication. However, I'd like to be able to cap the available capacity to some fixed value on a per-node basis. I could technically cap it using ZFS quotas, but even though I'm nowhere near using up the total available capacity, I don't know how Garage would handle a disk space allocation denial caused by hitting the ZFS quota. Having it built into Garage is the ideal option.

Also, I'm running this on FreeBSD 13.1 stable after getting it compiled there. I have a port (for FreeBSD's ports system) that I've written, and so far it has been working fine. The current package in the repo is outdated and also lacks a few packaging steps (e.g. stripping the binary for release, disabling OpenSSL vendoring in favor of the base system's, a startup service script, etc.). I don't mind submitting it here once I've done a review of it. Let me know where it should go.

Owner

This is linked to the work going on in #296. It would be nice to land everything together: the new layout assignation algorithm and a reorganization of capacity values so that they are in a meaningful unit. This is mostly a UI problem, as today there is no issue in defining 1 capacity unit = 1 byte (the simplest choice), so let's just do that and adapt the CLI code everywhere.
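
As a rough sketch of the display side under the "1 capacity unit = 1 byte" convention, raw byte counts could be formatted back into human-readable values for the CLI outputs. The `format_bytes` helper below and its decimal rounding are assumptions for illustration; the actual CLI might rely on an existing crate instead:

```rust
// Sketch: format a byte count (= capacity value) for display in CLI output.
fn format_bytes(bytes: u64) -> String {
    const UNITS: [&str; 5] = ["B", "kB", "MB", "GB", "TB"];
    let mut value = bytes as f64;
    let mut unit = 0;
    while value >= 1000.0 && unit < UNITS.len() - 1 {
        value /= 1000.0;
        unit += 1;
    }
    format!("{:.1} {}", value, UNITS[unit])
}

fn main() {
    // A node registered with 500_000_000_000 capacity units (bytes)
    // would display as "500.0 GB".
    println!("{}", format_bytes(500_000_000_000));
    println!("{}", format_bytes(1_500_000_000_000)); // "1.5 TB"
}
```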

lx removed the kind/ideas label 2022-09-14 11:27:12 +00:00
Owner

(removing the Ideas tag as we are pretty committed to doing this; it's not speculative anymore)

lx added this to the v1.0 milestone 2022-10-16 19:12:34 +00:00
lx modified the milestone from v1.0 to v0.9 2022-10-16 19:12:42 +00:00
lx closed this issue 2023-09-11 10:53:34 +00:00