[k8s] Error: Internal error: Remote error: Could not reach quorum of 1 #478
Hi, after installing garage via the helm chart, it started successfully and `garage status` works fine, for example. But when I want to create a bucket or a key, it fails with the error in the title (`Error: Internal error: Remote error: Could not reach quorum of 1`) and prints warnings. Are there any additional steps needed to make this work in k8s?
So, the problem was in `replicaCount: 1`. It seems like there should be more than one copy of garage. I'll leave this issue open until there's a fix for single-node clusters.
I am not the author of the helm chart, and I don't know how Kubernetes does things, but can you confirm that your node has a zone and capacity assigned in `garage status`? I.e., that a layout has been created and applied, even with just one node.
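For reference, a minimal sketch of checking and applying a single-node layout with the garage CLI. The pod name and zone are illustrative, and the capacity syntax depends on the garage version:

```sh
# A pod name like garage-0 assumes the chart's StatefulSet;
# adjust to whatever `kubectl get pods` shows.
kubectl exec -it garage-0 -- /garage status

# If the node has no zone/capacity, assign one and apply the layout.
# Replace <node-id> with the ID printed by `garage status`.
# Note: capacity syntax differs across versions (an integer weight in
# v0.8, a byte size like 1G from v0.9 on).
kubectl exec -it garage-0 -- /garage layout assign -z dc1 -c 1G <node-id>
kubectl exec -it garage-0 -- /garage layout apply --version 1
```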
@lx No, it wasn't. Also, garage didn't offer to create a layout with a single node, but after changing `replicaCount` to 2, garage offered the creation of a layout, which successfully fixed that issue.

The default configuration for the garage helm chart specifies 3 replicas for the pods, and 3 for the replication mode as well. If you change the number of replicas, you have to adjust the config provided to garage, changing `replication_mode` to something compatible with the number of pods that you are running.

@maximilien yeah, but I set `replicaCount` and `replication_mode` to 1 from the first launch of the helm chart: fresh install, single replica, `replication_mode = 1`.
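For a single-node install, the two settings have to agree. A hedged sketch, assuming you install from the chart directory in the garage repository and that the chart exposes the replication mode under `garage.replicationMode` (key names may differ between chart versions, so check the chart's values.yaml):

```sh
# Run one pod and tell garage to keep only one copy of each object.
# --set-string keeps the value a string ("1"), matching garage's
# replication_mode config key.
helm install garage ./script/helm/garage \
  --set replicaCount=1 \
  --set-string garage.replicationMode=1
```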
I don't understand this issue. Are the helm chart files wrong? If so, can you fix them and make a PR?
Thanks, I'll see if I can reproduce this on my side.
Closing for inactivity