
+++
title="Garage v0.7: Kubernetes and OpenTelemetry"
date=2022-04-04
+++

We just published Garage v0.7, our second public beta release. In this post, we take a quick tour of its two new features: Kubernetes integration and OpenTelemetry support.


Two months ago, we were impressed by the success of our open beta launch at FOSDEM and on Hacker News: our initial post led to more than 40k views in 10 days, peaking at 100 views/minute. Since then, we have kept improving Garage, and, two months after the initial release, we are happy to announce version 0.7.0.

But first, we would like to thank the contributors who made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a. This is also the first time we welcome contributors from outside the core team, and since we wish for Garage to be a community-driven project, we warmly encourage it.

Also new with this release: you can get Garage using our binaries or a package provided by your distribution. We ship statically compiled binaries for most Linux architectures (amd64, i386, aarch64, and armv6) along with the associated Docker containers. Garage is now also packaged by third parties for some OSes and distributions; we are currently aware of FreeBSD and the AUR for Arch Linux. Feel free to reach out to us if you are packaging (or planning to package) Garage: we welcome maintainers and will upstream specific patches if that can help. If you have already packaged Garage, tell us and we'll add it to the documentation.

As for the changes in this new version, it obviously includes many bug fixes. They are listed in our changelog; take a look, we might have fixed something that annoyed you! Besides bug fixes, there are two new features: better Kubernetes integration and support for OpenTelemetry.

## Kubernetes integration

Before Garage v0.7.0, deploying Garage on Kubernetes required a Consul cluster or a "coordinating" pod. In this new version, Garage can discover its peers through Kubernetes Custom Resources, which simplifies cluster discovery. Garage can self-apply the required Custom Resource Definition (CRD) to your cluster, or you can manage it manually.

Let's see in practice how it works with a minimalistic example (neither secured nor suitable for production). You can run it on minikube if you want a more interactive reading.
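If you want to try it locally, a throwaway minikube cluster is enough; the commands below are just the standard minikube workflow and are not specific to Garage:

```bash
# Start a small single-node Kubernetes cluster locally
minikube start
# Make sure kubectl talks to the minikube cluster
kubectl config use-context minikube
```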

Start by creating a ConfigMap containing Garage's configuration (let's name it config.yaml):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: garage-config
  namespace: default
data:
  garage.toml: |-
    metadata_dir = "/mnt/fast"
    data_dir = "/mnt/slow"

    replication_mode = "3"

    rpc_bind_addr = "[::]:3901"
    rpc_secret = "<secret>"

    bootstrap_peers = []

    kubernetes_namespace = "default"
    kubernetes_service_name = "garage-daemon"
    kubernetes_skip_crd = false

    [s3_api]
    s3_region = "garage"
    api_bind_addr = "[::]:3900"
    root_domain = ".s3.garage.tld"

    [s3_web]
    bind_addr = "[::]:3902"
    root_domain = ".web.garage.tld"
    index = "index.html"
```

The three important parameters are kubernetes_namespace, kubernetes_service_name, and kubernetes_skip_crd. Configure them according to your planned deployment. The last one controls whether you want to create the CRD manually or allow Garage to create it automatically on startup. In this example, we keep it set to false, which means we allow Garage to automatically create the CRD.

Apply this configuration to your cluster:

```bash
kubectl apply -f config.yaml
```

Allowing Garage to create the CRD is not enough: the process must also have sufficient permissions. A quick and insecure way to grant them is to create a ClusterRoleBinding that gives admin rights to the pod's default service account, effectively breaking Kubernetes' security model (we name this file admin.yaml):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: garage-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:serviceaccount:default:default
```

Apply it:

```bash
kubectl apply -f admin.yaml
```
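For anything beyond a throwaway test, you would rather grant only the permissions Garage needs. The manifest below is our own illustration, not taken from the Garage documentation: in particular, the API group and resource name of Garage's custom resource (deuxfleurs.fr / garagenodes) are assumptions, so check the CRD that Garage actually creates before reusing it.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: garage-crd
rules:
# Let Garage create and read the CRD itself (cluster-scoped)
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "create"]
# Let Garage read and write its peer entries (group/resource names are assumptions)
- apiGroups: ["deuxfleurs.fr"]
  resources: ["garagenodes"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: garage-crd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: garage-crd
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```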

Finally, we create a StatefulSet to run our service (service.yaml):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: garage
spec:
  selector:
    matchLabels:
      app: garage
  serviceName: "garage"
  replicas: 3
  template:
    metadata:
      labels:
        app: garage
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: garage
        image: dxflrs/amd64_garage:v0.7.0
        ports:
        - containerPort: 3900
          name: s3-api
        - containerPort: 3902
          name: web-api
        volumeMounts:
        - name: fast
          mountPath: /mnt/fast
        - name: slow
          mountPath: /mnt/slow
        - name: etc
          mountPath: /etc/garage.toml
          subPath: garage.toml
      volumes:
      - name: etc
        configMap:
          name: garage-config
  volumeClaimTemplates:
  - metadata:
      name: fast
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
  - metadata:
      name: slow
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
```

Garage is a stateful program, so it needs a stable place to store its data and metadata. Kubernetes provides this through Persistent Volumes, and a StatefulSet is the object that gives each replica a stable identity and its own persistent volumes (via volumeClaimTemplates), hence our choice of this object to deploy our service.

Kubernetes has many "drivers" for Persistent Volumes; for production use, we recommend only the local driver. Using other drivers may lead to huge performance issues or data corruption, and probably both in practice.
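As an illustration on our side (not taken from Garage's documentation), a local-volume setup usually starts with a StorageClass like the one below; the PersistentVolumes themselves then have to be created per node, manually or with a local-volume provisioner, and the volumeClaimTemplates above would reference it via storageClassName.

```yaml
# Hypothetical StorageClass for local volumes: no dynamic provisioner,
# and binding is delayed until a pod is scheduled on a node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```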

In the example, we claim two volumes of 100Mi. We use two volumes instead of one because Garage stores its metadata separately from its data. With two volumes, you can reserve a smaller capacity on an SSD for the metadata and a larger capacity on a regular HDD for the data. Do not forget to change the reserved capacity: 100Mi is only suitable for testing.

Note how we mount our ConfigMap: we set the subPath property to mount only the garage.toml file and not the whole /etc folder, which would prevent Kubernetes from writing its own files in /etc and would make the pod fail.

You can apply this file with:

```bash
kubectl apply -f service.yaml
```
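The pods can take a moment to be scheduled and to get their volumes; out of habit (this is not required for the walkthrough), we watch the rollout with:

```bash
# Wait until all 3 replicas of the StatefulSet are up
kubectl rollout status statefulset/garage
# List the pods of our deployment
kubectl get pods -l app=garage
```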

Now you are ready to interact with your cluster; each instance should have discovered the others:

```bash
kubectl exec -it garage-0 --container garage -- /garage status
# ==== HEALTHY NODES ====
# ID                Hostname  Address                   Tags              Zone  Capacity
# e6284331c321a23c  garage-0  172.17.0.5:3901           NO ROLE ASSIGNED
# 570ff9b0ed3648a7  garage-2  [::ffff:172.17.0.7]:3901  NO ROLE ASSIGNED
# e1990a2069429428  garage-1  [::ffff:172.17.0.6]:3901  NO ROLE ASSIGNED
```
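As the output shows, the nodes have found each other but no role is assigned yet. The next step, not detailed further in this post, is to define a cluster layout with the layout commands introduced in Garage v0.6; the zone name and capacities below are only placeholders:

```bash
# Assign a zone and a relative capacity to each node, using ID prefixes
kubectl exec -it garage-0 --container garage -- /garage layout assign -z dc1 -c 1 e628
kubectl exec -it garage-0 --container garage -- /garage layout assign -z dc1 -c 1 570f
kubectl exec -it garage-0 --container garage -- /garage layout assign -z dc1 -c 1 e199
# Review and apply the new cluster layout
kubectl exec -it garage-0 --container garage -- /garage layout show
kubectl exec -it garage-0 --container garage -- /garage layout apply --version 1
```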

Of course, to have a full deployment, you will probably want to deploy a Service in front of your cluster and/or a reverse proxy.
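As an illustration on our side (not an official manifest), a simple ClusterIP Service exposing the S3 and web ports could look like this; pair it with an Ingress or your favorite reverse proxy to expose it outside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: garage
spec:
  selector:
    app: garage
  ports:
  - name: s3-api
    port: 3900
    targetPort: 3900
  - name: web-api
    port: 3902
    targetPort: 3902
```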

If Kubernetes is not your thing, know that we run Garage ourselves on a Nomad+Consul cluster. We have not documented it yet, but you can take a look at our Nomad service.

## OpenTelemetry support
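This release also ships the first version of our OpenTelemetry integration: Garage's internals are instrumented, and traces and metrics can be exported to an OpenTelemetry collector and, from there, to the observability stack of your choice. As a rough sketch only, with parameter names to be double-checked against the reference documentation, enabling it amounts to pointing Garage's new admin section at an OTLP endpoint in garage.toml:

```toml
[admin]
# Bind address of the new administration/metrics endpoint (parameter name assumed)
api_bind_addr = "[::]:3903"
# OTLP/gRPC endpoint of an OpenTelemetry collector (parameter name assumed)
trace_sink = "http://localhost:4317"
```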

## And next?

Our roadmap for the next months includes K2V, an allocation simulator, improved S3 compatibility, gathering community feedback, and a whitepaper.