Merge branch 'main' into next-v2
Commit c9156f6828
6 changed files with 517 additions and 526 deletions
Cargo.lock (generated, 967 lines changed): file diff suppressed because it is too large.
@@ -135,8 +135,8 @@ opentelemetry-contrib = "0.9"
 prometheus = "0.13"
 
 # used by the k2v-client crate only
-aws-sigv4 = { version = "1.1" }
-hyper-rustls = { version = "0.26", features = ["http2"] }
+aws-sigv4 = { version = "1.1", default-features = false }
+hyper-rustls = { version = "0.26", default-features = false, features = ["http1", "http2", "ring", "rustls-native-certs"] }
 log = "0.4"
 thiserror = "1.0"
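To confirm what a `default-features = false` requirement actually leaves enabled, the resolved feature set can be inspected from the workspace root with stock cargo (a quick sketch):

```bash
# list the features of hyper-rustls that end up enabled, and which
# dependents pull them in (-i inverts the tree onto that package)
cargo tree -e features -i hyper-rustls
```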
@@ -144,8 +144,9 @@ thiserror = "1.0"
 assert-json-diff = "2.0"
 rustc_version = "0.4.0"
 static_init = "1.0"
-aws-sdk-config = "1.62"
-aws-sdk-s3 = "=1.68"
+aws-smithy-runtime = { version = "1.8", default-features = false, features = ["tls-rustls"] }
+aws-sdk-config = { version = "1.62", default-features = false }
+aws-sdk-s3 = { version = "1.79", default-features = false, features = ["rt-tokio"] }
 
 [profile.dev]
 #lto = "thin" # disabled for now, adds 2-4 min to each CI build
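Note that `aws-sdk-s3` moves from the exact pin `=1.68` to the caret requirement `1.79` (any `1.x >= 1.79.0`), so the lockfile now decides the exact build. If a specific version ever needs to be reproduced, it can still be forced at update time (a sketch; the patch version is illustrative):

```bash
# pin the locked aws-sdk-s3 to one exact version without editing Cargo.toml
cargo update -p aws-sdk-s3 --precise 1.79.0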
@@ -86,3 +86,62 @@ helm delete --namespace garage garage

Note that this will leave behind the custom CRD `garagenodes.deuxfleurs.fr`, which must be removed manually if desired.

## Increase PVC size on running Garage instances

Since the Garage Helm chart creates the data and meta PVCs from `StatefulSet` templates, increasing their size can be a bit tricky.
### Confirm the `StorageClass` used for Garage supports volume expansion

First, confirm which storage class the Garage PVCs use:

```bash
kubectl -n garage get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
data-garage-0   Bound    pvc-080360c9-8ce3-4acf-8579-1701e57b7f3f   30Gi       RWO            longhorn-local   <unset>                 77d
data-garage-1   Bound    pvc-ab8ba697-6030-4fc7-ab3c-0d6df9e3dbc0   30Gi       RWO            longhorn-local   <unset>                 5d8h
data-garage-2   Bound    pvc-3ab37551-0231-4604-986d-136d0fd950ec   30Gi       RWO            longhorn-local   <unset>                 5d5h
meta-garage-0   Bound    pvc-3b457302-3023-4169-846e-c928c5f2ea65   3Gi        RWO            longhorn-local   <unset>                 77d
meta-garage-1   Bound    pvc-49ace2b9-5c85-42df-9247-51c4cf64b460   3Gi        RWO            longhorn-local   <unset>                 5d8h
meta-garage-2   Bound    pvc-99e2e50f-42b4-4128-ae2f-b52629259723   3Gi        RWO            longhorn-local   <unset>                 5d5h
```

In this case, the storage class is `longhorn-local`. Next, check that `ALLOWVOLUMEEXPANSION` is `true` for that `StorageClass`:

```bash
kubectl get storageclasses.storage.k8s.io longhorn-local
NAME             PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn-local   driver.longhorn.io   Delete          Immediate           true                   103d
```
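The same flag can also be read directly, which is handier in scripts than eyeballing the table (using the class name from above):

```bash
kubectl get storageclass longhorn-local -o jsonpath='{.allowVolumeExpansion}'
# prints: true
```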

If your `StorageClass` does not support volume expansion, check whether it can be enabled; otherwise, your only real option is to spin up a new Garage cluster with larger volumes and migrate all data over.

If your `StorageClass` supports expansion, you are free to continue.
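If the flag is off but the underlying provisioner does support resizing, it can usually be switched on with a one-line patch (a sketch; substitute your class name, and confirm provisioner support first):

```bash
kubectl patch storageclass longhorn-local \
  -p '{"allowVolumeExpansion": true}'
```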

### Increase the size of the PVCs

Increase the size of all PVCs to your desired size:

```bash
kubectl -n garage edit pvc data-garage-0
kubectl -n garage edit pvc data-garage-1
kubectl -n garage edit pvc data-garage-2
kubectl -n garage edit pvc meta-garage-0
kubectl -n garage edit pvc meta-garage-1
kubectl -n garage edit pvc meta-garage-2
```
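Editing six PVCs interactively is easy to get wrong; the same change can be applied non-interactively (a sketch, with `60Gi`/`6Gi` standing in for your target sizes):

```bash
for pvc in data-garage-0 data-garage-1 data-garage-2; do
  kubectl -n garage patch pvc "$pvc" --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"60Gi"}}}}'
done
for pvc in meta-garage-0 meta-garage-1 meta-garage-2; do
  kubectl -n garage patch pvc "$pvc" --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"6Gi"}}}}'
done
```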

### Increase the size of the `StatefulSet` PVC template

This step is optional, but if it is skipped, future Garage instances will be created with the original size from the template.

```bash
kubectl -n garage delete sts --cascade=orphan garage
statefulset.apps "garage" deleted
```

This removes the Garage `StatefulSet` but leaves the pods running. It may seem destructive, but it has to be done this way: Kubernetes does not allow editing the volume size in an existing `StatefulSet` PVC template.
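Before redeploying, it is worth a quick sanity check that the `StatefulSet` is gone while its pods survived:

```bash
kubectl -n garage get sts garage   # should now return NotFound
kubectl -n garage get pods         # the garage-* pods should still be Running
```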

### Redeploy the `StatefulSet`

With the old `StatefulSet` gone, the PVC template can now carry the increased size: upgrade the Garage Helm chart, and the newly created `StatefulSet` should take ownership of the orphaned pods again.
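A minimal sketch of that redeploy, assuming the release is named `garage` as in the `helm delete` example earlier; the chart reference and value keys are assumptions, so check the chart's `values.yaml` for the real names:

```bash
# re-render the chart so the new StatefulSet template carries the larger sizes
helm upgrade --namespace garage garage garage/garage \
  --set persistence.data.size=60Gi \
  --set persistence.meta.size=6Gi
```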
@@ -71,7 +71,7 @@ The entire procedure would look something like this:
 
 2. Take each node offline individually to back up its metadata folder, bring them back online once the backup is done.
    You can do all of the nodes in a single zone at once as that won't impact global cluster availability.
-   Do not try to make a backup of the metadata folder of a running node.
+   Do not try to manually copy the metadata folder of a running node.
 
    **Since Garage v0.9.4,** you can use the `garage meta snapshot --all` command
    to take a simultaneous snapshot of the metadata database files of all your
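For reference, the snapshot command named in that hunk is run against any live node and coordinates the whole cluster:

```bash
# Garage >= 0.9.4: snapshot the metadata database of every node at once
garage meta snapshot --all
```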
@@ -6,7 +6,8 @@ metadata:
   labels:
     {{- include "garage.labels" . | nindent 4 }}
 spec:
-  type: {{ .Values.service.type }}
+  type: ClusterIP
+  clusterIP: None
   ports:
     - port: {{ .Values.service.s3.api.port }}
       targetPort: 3900
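Hardcoding `type: ClusterIP` together with `clusterIP: None` turns this into a headless service, so DNS resolves directly to the pod IPs instead of a virtual IP. A quick post-upgrade check (a sketch; assumes the service object is named `garage`):

```bash
kubectl -n garage get svc garage -o jsonpath='{.spec.clusterIP}'
# prints: None
```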
@@ -18,4 +19,4 @@ spec:
       name: s3-web
   selector:
     {{- include "garage.selectorLabels" . | nindent 4 }}
-{{- end }}
+{{- end }}
@@ -64,6 +64,7 @@ syslog-tracing = { workspace = true, optional = true }
 garage_api_common.workspace = true
 
 aws-sdk-s3.workspace = true
+aws-smithy-runtime.workspace = true
 chrono.workspace = true
 http.workspace = true
 hmac.workspace = true
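The `.workspace = true` shorthand inherits the version and features declared in the workspace dependency table updated above. A quick way to confirm the member crate still resolves after the change (sketch; assumes the crate is named `garage`):

```bash
cargo check -p garage
```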