Kubernetes? #241
Reference: Deuxfleurs/garage#241
hello,
I'm trying out garage and tried to make it work inside Kubernetes, but I can't wrap my head around it. Does it really work there?
Most of the time Kubernetes works via DNS, because in k8s all IP addresses are volatile.
It looks like garage is built with IP addresses in mind. Is there any way to use it on k8s at all?
Hi schmitch,
At Deuxfleurs, we are not using Kubernetes but Nomad, which has far fewer abstractions over the network. So, honestly, I don't know what the idiomatic way to integrate Garage with Kubernetes would be.
At least, if you have a Consul server on your network, you can configure your Garage nodes to register themselves in Consul, and they will be able to discover their other peers.
You may also try to expose your Garage instances' ports on the server IP address (NodePort?). Then, in the `bootstrap_peers` section, you could put the IP address of one of your instances to trigger the autodiscovery.
Hopefully, someone more knowledgeable about Kubernetes will be able to help you in a better way. I will try to ping the people I know who operate K8S clusters.
Garage supports connecting to a node using an address of the form `<hex blob>@my-domain.tld:1234`. You can use that in `bootstrap_peers` in the config, or when doing `garage node connect <peer-id>`.
Internally, Garage will resolve this domain as soon as it learns about it (at each restart for `bootstrap_peers`, or when you run the `garage node connect` command).
I believe that if a peer changes IP (keeping the same domain or not, but keeping the same public key), and if it manages to connect to any other peer, it should be able to learn the IPs of all other peers and connect to them, allowing them to learn its new IP in return.
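To make that concrete, a DNS-based `bootstrap_peers` entry could look like the sketch below. The hex node key, the hostname, and the headless-Service naming are all placeholders; 3901 is used here as the RPC port:

```toml
# garage.toml (sketch): a peer addressed by a DNS name instead of a fixed IP.
# Garage re-resolves the name at each restart, per the explanation above.
bootstrap_peers = [
  "563e...bf1c@garage-0.garage-headless.default.svc.cluster.local:3901",
]
```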
This seems like something that would live outside of k8s so if your cluster goes away you don't lose your data.
ok, so from what I understand: if I correctly set `bootstrap_peers` to a valid DNS name and run `garage node connect` after a reboot, it should work?
I think the biggest problem with `garage`, then, is that some processes are hard to automate. Kubernetes is mostly a declarative system. I think for garage to work, somebody would need to write a wrapper so that `garage` can basically be put into a StatefulSet, which would be impossible at the moment (especially since there is no way of specifying a public key upfront). The whole thing looks really hard to me.
basically, at the moment I think something like this would need to be automated by some wrapper inside k8s:
- `garage server`
- `garage server` on all nodes with the new config
- `garage connect`
this means the wrapper would probably also need to use k8s to have some kind of leader election.
btw, it's basically impossible to fill out `rpc_public_addr` upfront or keep STABLE IP addresses in k8s (some networks allow that, but most won't, for a reason). (correct me if I am wrong about the initial bootstrap process though!)
when I'm rebooting a single node I guess I need to do the same, except that I already have a valid config, so I can skip steps 2, 3, 4 and 5 (as long as at least one other node is online)
(maybe I will try to look into it and make a simple wrapper to check if that works)
You also have to include the public key of the node, i.e. use the `<public key>@host:port` form shown above.
But maybe in your case something you could try is autodiscovery with Consul (I think Consul can probably run in kubernetes?).
Yes, it's true that's an issue. As @withinboredom said, Garage should probably be set up manually outside of your Kubernetes cluster. What we do at Deuxfleurs is run it in a Nomad cluster, but using host networking, so that the IP addresses of Garage nodes are the IP addresses of the machines they are running on (which don't change).
So basically I see two relatively easy solutions for your use case:
- have your nodes fill in `rpc_public_addr` when they start (even if it's a new IP each time, that's not a problem, just fill in the configuration dynamically), and let Garage nodes advertise themselves in the Consul catalog.

We could probably implement other autodiscovery mechanisms (maybe using the Kubernetes equivalent of the Consul catalog? I don't really know what that would be). Basically the following would be required:
- a way for each node to learn its own address (`rpc_public_addr`)

If you have information on how to do this with Kubernetes, we could probably work towards a prototype.
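The "fill in the configuration dynamically" part could be sketched as a container entrypoint along these lines. Everything here is an assumption for illustration (the file path, the `POD_IP` variable that Kubernetes' Downward API would inject, the port); only the `rpc_bind_addr` / `rpc_public_addr` keys are actual Garage configuration:

```shell
# Sketch: write the pod's current IP into rpc_public_addr before starting
# Garage. The default value lets the snippet run standalone; in a real pod,
# POD_IP would come from the Downward API (status.podIP).
POD_IP="${POD_IP:-10.42.0.7}"
cat > /tmp/garage.toml <<EOF
rpc_bind_addr = "[::]:3901"
rpc_public_addr = "${POD_IP}:3901"
EOF
grep rpc_public_addr /tmp/garage.toml
```

A real entrypoint would then exec the Garage server against the generated file instead of just printing the line back.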
I searched for "Kubernetes Autodiscovery", as this is the property we want and could integrate into Garage, but it does not seem to be a common idiom.
K8S often uses etcd, and we could use it as a discovery service, similarly to Consul, but again, it seems that we would break K8S's abstractions by doing so.
I also found a documentation page about Cassandra + StatefulSet. As you mentioned, the difference is that we need to pre-generate a key.
This key is part of Garage's security features and we do not want to drop it. We need to understand how people manage clustered applications on K8S that use TLS client auth between nodes of the cluster, as we would then be able to translate their pattern to Garage.
I know that K8S has Custom Resources and/or Custom Resource Definitions, but they seem to be targeted at "human management", so not designed for programmatic access. Do you know if K8S defines some way for an app to push data about its state?
Edit: we might need to write some code so that Garage gets a ServiceAccount; then we could have Garage register some Secrets / ConfigMaps. We may take inspiration from Akka, a Java library that eases distributed/clustered application development; they have a K8S section in their docs.
btw, another good example of autodiscovery via k8s is Patroni from Zalando, used to bootstrap PostgreSQL: https://github.com/zalando/patroni/blob/master/docs/kubernetes.rst
However, their implementation might not be straightforward: https://github.com/zalando/patroni/blob/master/patroni/dcs/kubernetes.py
Basically, they built what somebody would need to build to run `garage` on k8s at the moment: a wrapper "script" that registers everything needed for service discovery and puts everything onto an Endpoint. The same thing would be a little more work for garage, since it would probably need an endpoint for each garage server and would query them via k8s labels.
But Kubernetes autodiscovery is something that basically everybody does a little differently, depending on their needs.
Edit:
I will look into whether I can come up with a good solution, also based on something somebody wrote inside Element:
https://matrix.to/#/!BCLqSdZmZvTrJncZYQ:zinz.dev/$DZ7hOBZAPi7GfjTiybwH6U1hqK0e3a5Q2FVrdUK6V68?via=zinz.dev&via=matrix.org&via=deuxfleurs.fr
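As a rough sketch of the core of such a wrapper (all names here are hypothetical, not a real Garage or Kubernetes API): it would pair each node's public key with the per-pod DNS name a headless Service provides, and render the `<key>@<host>:<port>` peer strings discussed earlier in the thread. The pairs are hard-coded here; a real wrapper would fetch them from the Kubernetes API, as Patroni does:

```shell
# Sketch: render a bootstrap_peers line from (node key, pod DNS name) pairs.
# Keys and hostnames below are placeholders for illustration only.
port=3901
peers=""
for pair in \
  "aaaa01 garage-0.garage-headless.default.svc.cluster.local" \
  "bbbb02 garage-1.garage-headless.default.svc.cluster.local"
do
  set -- $pair                              # $1 = node key, $2 = DNS name
  peers="${peers:+$peers, }\"$1@$2:$port\"" # append one quoted peer entry
done
printf 'bootstrap_peers = [%s]\n' "$peers"
```

The generated line could then be written into each pod's garage.toml before the server starts.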