Helm installation - Pods don't auto-discover, no CRD #522
Hi all,
I wanted to start using Garage and install it into my Kubernetes cluster.
I used the Helm Chart from the main branch to install it and started with 2 replicas (as for testing I currently only have 2 nodes).
But the 2 pods aren't automatically discovering each other, as I thought they would.
From my understanding Garage should deploy a CRD that is then used for discovery.
But I don't see any CRD from garage having been deployed to my cluster, so I think that might be the culprit.
Also, it seems like the Helm chart is still deploying the older version 0.7 instead of 0.8; is that intentional?
For reference, I am using k3s and these are my values:
Here are some logs from the garage container, if those are of help:
I reproduced the issue and I am looking into it. The reason the Helm chart still has the old 0.7.x version is that a migration step is required and the documentation was not quite ready at the time. We're considering raising the version.
Just to let you (and all people who will encounter this issue) know: bumping the version to `v0.8.2` (`appVersion: "v0.8.2"` in `Chart.yaml`) makes node autodiscovery work out of the box.

The remaining manual step is creating the layout. Is there a way to automate it? For example with the same `capacity` (de facto `weight`, as explained in depth in #357), since those nodes are identical?
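For reference, the workaround amounts to a one-line change in the chart's `Chart.yaml`. The file path and the surrounding fields below are assumptions based on a typical Helm chart layout, not copied from the repository, and this assumes the chart follows the common pattern of defaulting the image tag to `.Chart.AppVersion`:

```yaml
# script/helm/garage/Chart.yaml  (path is an assumption; locate the chart in your checkout)
apiVersion: v2
name: garage
# Bumping appVersion changes the container image tag the chart deploys by default,
# which is what makes Kubernetes node autodiscovery work out of the box.
appVersion: "v0.8.2"
```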
Hi @elwin013, thanks for your workaround. And sorry, we are a bit slow on this issue, as Kubernetes is not the scheduler we use (we chose Nomad some years ago). If someone wants to open a PR to update Garage's Helm chart, it will be welcome.
Currently, we don't provide a way to configure the layout automatically, as it handles sensitive data; based on the use cases we are aware of, it seems better to have a human manually make changes to it, to prevent data loss.

If you have a specific use case where configuring the layout automatically makes sense and does not create dangerous situations, you could open an issue so we can keep an eye on it. But of course we can't make any promise as to whether or not the feature will be implemented.
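For illustration, here is roughly what the manual layout step looks like on a Kubernetes deployment. The pod name, zone, and capacity are placeholders, and the path to the garage binary inside the container may differ:

```sh
# List known nodes and their IDs (pod name depends on your Helm release name)
kubectl exec -it garage-0 -- ./garage status

# Assign a zone and a capacity (a relative weight in 0.8.x) to each node,
# using a prefix of the node ID printed by `garage status`
kubectl exec -it garage-0 -- ./garage layout assign -z dc1 -c 1 <node-id-1>
kubectl exec -it garage-0 -- ./garage layout assign -z dc1 -c 1 <node-id-2>

# Review the staged layout, then apply it
kubectl exec -it garage-0 -- ./garage layout show
kubectl exec -it garage-0 -- ./garage layout apply --version 1
```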
Hi @quentin, thank you for the quick reply!

I fully understand that applying the layout automatically could be dangerous and that it is better to do this manually. And as long as automatic discovery of nodes works (with `v0.8.2`), I can live with that (which means doing it manually or writing a not-so-fancy script that does it). :-)

I've created a small PR updating the Helm chart versions (linked above).
Thanks @elwin013 for the info that changing the version works.
Regarding the migration step:
I understand the concern, and normally I would say that this should be covered by incrementing the major version of the chart, to indicate that upgrading without manual intervention is a breaking change.
But as the chart is only hosted in Git, that increment would probably go unseen by most users and likely not be picked up by tools like Flux or ArgoCD.
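To make the intent concrete, the convention would be to bump the chart's own `version` field to a new SemVer major, separately from `appVersion`; the numbers below are purely illustrative:

```yaml
# Chart.yaml - the chart version signals packaging/upgrade changes, appVersion tracks Garage itself
version: 1.0.0        # was e.g. 0.x.y; a major bump signals the breaking upgrade path
appVersion: "v0.8.2"
```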
This issue has not seen activity for a year, and I have the feeling that several questions were mixed up in one thread, so I will close it for now. Feel free to create new issues if there are still specific things to discuss.