Blog post introducing Garage v0.7 #6
1 changed file with 25 additions and 153 deletions
@@ -9,7 +9,7 @@ date=2022-04-04

---

Two months ago, we were impressed by the success of our open beta launch at FOSDEM and on Hacker News: [our initial post](https://garagehq.deuxfleurs.fr/blog/2022-introducing-garage/) led to more than 40k views in 10 days, peaking at 100 views/minute, and all requests were served by Garage without cache!

Since this event, we have continued to improve Garage, and - two months after the initial release - we are happy to announce version 0.7.0.

But first, we would like to thank the contributors that made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a.

@@ -26,171 +26,40 @@ Besides bugfixes, there are two new features: a better Kubernetes integration and support for OpenTelemetry.

## Kubernetes integration

Before Garage v0.7.0, you had to deploy a Consul cluster or spawn a "coordinating" pod to deploy Garage on [Kubernetes](https://kubernetes.io) (K8S).
In this new version, Garage integrates a method to discover other peers by using Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR) to simplify cluster discovery.
Garage can self-apply the [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD) to your cluster, or you can manage it manually.

CR discovery can be quickly enabled by configuring which namespace to look in (`kubernetes_namespace`) and the name of the desired service (`kubernetes_service_name`) in your Garage configuration file:

```toml
kubernetes_namespace = "default"
kubernetes_service_name = "garage-daemon"
```

Custom Resources must be defined *a priori* with a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD).
If the CRD does not exist, Garage can create it for you. This behavior is enabled by default, but it requires giving Garage some additional permissions.
If you prefer limiting access to your K8S cluster, you can create the resource manually and prevent Garage from creating it automatically:

```toml
kubernetes_skip_crd = true
```

Let's see practically how it works with a minimalistic example (neither secured nor suitable for production).
You can run it on [minikube](https://minikube.sigs.k8s.io) if you want a more interactive reading.

Start by creating a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) containing Garage's configuration (let's name it `config.yaml`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: garage-config
  namespace: default
data:
  garage.toml: |-
    metadata_dir = "/mnt/fast"
    data_dir = "/mnt/slow"

    replication_mode = "3"

    rpc_bind_addr = "[::]:3901"
    rpc_secret = "<secret>"

    bootstrap_peers = []

    kubernetes_namespace = "default"
    kubernetes_service_name = "garage-daemon"
    kubernetes_skip_crd = false

    [s3_api]
    s3_region = "garage"
    api_bind_addr = "[::]:3900"
    root_domain = ".s3.garage.tld"

    [s3_web]
    bind_addr = "[::]:3902"
    root_domain = ".web.garage.tld"
    index = "index.html"
```

The three important parameters here are `kubernetes_namespace`, `kubernetes_service_name`, and `kubernetes_skip_crd`.
Configure them according to your planned deployment.
The last one controls whether you want to create the CRD manually or allow Garage to create it automatically on startup.
In this example, we keep it set to `false`, which means we allow Garage to create the CRD automatically.

Apply this configuration on your cluster:

```bash
kubectl apply -f config.yaml
```

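If you had set `kubernetes_skip_crd = true` instead, you would need to register the CRD yourself before starting Garage. The following is a rough illustration only: the `deuxfleurs.fr` group and `GarageNode` kind are assumptions on our part, so check the CRD that a test instance self-applies (for example with `kubectl get crd`) for the exact names and schema.

```yaml
# Hypothetical hand-written CRD registration; group, kind, and schema are
# placeholders for illustration, not the authoritative definition.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: garagenodes.deuxfleurs.fr
spec:
  group: deuxfleurs.fr
  scope: Namespaced
  names:
    kind: GarageNode
    singular: garagenode
    plural: garagenodes
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```
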
Allowing Garage to create the CRD is not enough: the process must also have sufficient permissions.
A quick but insecure way to add them is to create a [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) that gives admin rights to our local user, effectively breaking Kubernetes' security model (let's name this file `admin.yaml`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: garage-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:serviceaccount:default:default
```

Apply it:

```bash
kubectl apply -f admin.yaml
```

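Granting `cluster-admin` is the quick-and-dirty route for a local test. A less permissive alternative would only allow managing the discovery CRD and its custom resources; this is a sketch, and the API group and resource names below are assumptions to be aligned with the CRD Garage actually applies:

```yaml
# Least-privilege sketch: allow managing only the discovery CRD and its
# custom resources, bound to the default ServiceAccount used by the pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: garage-discovery
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "create"]
- apiGroups: ["deuxfleurs.fr"]
  resources: ["garagenodes"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: garage-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: garage-discovery
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```
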
Finally, we create a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) to run our service (`service.yaml`):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: garage
spec:
  selector:
    matchLabels:
      app: garage
  serviceName: "garage"
  replicas: 3
  template:
    metadata:
      labels:
        app: garage
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: garage
        image: dxflrs/amd64_garage:v0.7.0
        ports:
        - containerPort: 3900
          name: s3-api
        - containerPort: 3902
          name: web-api
        volumeMounts:
        - name: fast
          mountPath: /mnt/fast
        - name: slow
          mountPath: /mnt/slow
        - name: etc
          mountPath: /etc/garage.toml
          subPath: garage.toml
      volumes:
      - name: etc
        configMap:
          name: garage-config
  volumeClaimTemplates:
  - metadata:
      name: fast
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
  - metadata:
      name: slow
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
```

Garage is a stateful program, so it needs a stable place to store its data and metadata.
This is provided by Kubernetes' [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), which can only be claimed from a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/), hence the choice of this K8S object to deploy our service.

Kubernetes has many "drivers" for Persistent Volumes; for production use we recommend **only** the `local` driver.
Using other drivers may lead to huge performance issues or data corruption, probably both in practice.

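If you follow that recommendation, the volume claims above need a storage class backed by statically provisioned local volumes. Here is a sketch, with a name of our choosing (`garage-local`), that the `volumeClaimTemplates` would then reference through `storageClassName`:

```yaml
# Sketch: StorageClass for the recommended `local` driver. There is no dynamic
# provisioner for local volumes: you create the PersistentVolumes yourself on
# each node (with a local path and node affinity) and bind them through this class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: garage-local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
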
In this example, we claim two volumes of 100 MB each.
We use two volumes instead of one because Garage stores its metadata separately from its data.
Having two volumes lets you reserve a smaller capacity on an SSD for the metadata and a larger capacity on a regular HDD for the data.
Do not forget to change the reserved capacity: 100 MB is only suitable for testing.

*Note how we mount our ConfigMap: we set the `subPath` property to mount only the `garage.toml` file and not the whole `/etc` folder, which would prevent K8S from writing its own files in `/etc` and make the pod fail.*

You can apply this file with:

```bash
kubectl apply -f service.yaml
```

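The StatefulSet above references `serviceName: "garage"`. Kubernetes expects a matching [headless Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) to give each pod a stable DNS name; the CR-based discovery used here works without it, but it is usually worth creating. A minimal sketch reusing the labels from the manifest above:

```yaml
# Headless Service matching the StatefulSet's `serviceName: "garage"`.
# `clusterIP: None` means no virtual IP is allocated; instead each pod gets
# a stable DNS record such as garage-0.garage.default.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: garage
  namespace: default
spec:
  clusterIP: None
  selector:
    app: garage
  ports:
  - name: rpc
    port: 3901
```
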
If you do not want to write these manifests yourself, we also provide some basic [example files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/7e1ac51b580afa8e900206e7cc49791ed0a00d94/script/k8s) that register a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/), a [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding), and a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) with [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).

Once these resources are deployed, you are ready to interact with your cluster; each instance must have discovered the other ones:

```bash
kubectl exec -it garage-0 --container garage -- /garage status
# ==== HEALTHY NODES ====
# ID                Hostname  Address                   Tags  Zone  Capacity
# e6284331c321a23c  garage-0  172.17.0.5:3901                       NO ROLE ASSIGNED
# 570ff9b0ed3648a7  garage-2  [::ffff:172.17.0.7]:3901              NO ROLE ASSIGNED
# e1990a2069429428  garage-1  [::ffff:172.17.0.6]:3901              NO ROLE ASSIGNED
```

You can then follow the [regular documentation](https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/#creating-a-cluster-layout) to complete the configuration of your cluster.

If you target a production deployment, you should avoid binding admin rights to your cluster just to create Garage's CRD. You will also need to expose some [Services](https://kubernetes.io/docs/concepts/services-networking/service/), and probably a reverse proxy, to make your cluster reachable. Also keep in mind that Garage is a stateful service, so you must be very careful about how you handle your data in Kubernetes in order not to lose it. In the near future, we plan to release a proper Helm chart and write down best practices in our documentation.

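To give an idea of what exposing the S3 API can look like, a plain Service selecting the same pods is a reasonable starting point. This is only a sketch with names of our choosing; you would typically put an Ingress or reverse proxy in front of it for TLS and host routing:

```yaml
# Sketch: expose the S3 API port declared in the StatefulSet above.
# Add a similar Service for port 3902 if you serve static websites.
apiVersion: v1
kind: Service
metadata:
  name: garage-s3-api
  namespace: default
spec:
  selector:
    app: garage
  ports:
  - name: s3-api
    port: 3900
    targetPort: 3900
```
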
If Kubernetes is not your thing, know that we are running Garage on a Nomad+Consul cluster.
We have not documented it yet, but you can take a look at [our Nomad service](https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/src/commit/1e5e4af35c073d04698bb10dd4ad1330d6c62a0d/app/garage/deploy/garage.hcl).

@@ -265,5 +134,8 @@ In all cases, your feedback is welcome on our Matrix channel.

## Conclusion

This is only the first iteration of the Kubernetes and OpenTelemetry integrations into Garage, so things are still a bit rough.
We plan to polish these integrations in the coming months based on our experience and your feedback.

You may also be wondering what other work we plan to conduct: stay tuned, we will release a roadmap soon!
In the meantime, we hope you will enjoy Garage v0.7!