Add helm chart #331

Merged
lx merged 10 commits from chemicstry/garage:helm_chart into main 2022-10-02 14:40:55 +00:00
2 changed files with 54 additions and 35 deletions
Showing only changes of commit db0c8b3980

Chart.yaml

@@ -21,4 +21,4 @@ version: 0.1.0
 # incremented each time you make changes to the application. Versions are not expected to
 # follow Semantic Versioning. They should reflect the version the application is using.
 # It is recommended to use it with quotes.
-appVersion: "v0.7.2"
+appVersion: "v0.7.2.1"

values.yaml

@@ -6,10 +6,13 @@
 garage:
   metadataDir: "/mnt/meta"
   dataDir: "/mnt/data"
+  # Default to 3 replicas, see the replication_mode section at
+  # https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/
+  replicationMode: "3"
maximilien marked this conversation as resolved

Does it mean that if people deploy this helm chart without overriding this value, they will have a working but vulnerable cluster?

We have some discussions about adding some defense in depth mechanisms to Garage (here: #310) in case this secret leaks, but for now, an attacker knowing this secret could join the cluster as long as the RPC port is accessible.

I think it could be better to replace this field by something that will make the cluster crash if not overridden, like "CHANGE ME!!!!"
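
A minimal sketch of that fail-fast idea using Helm's built-in `required` function (the template path and config layout here are illustrative, not necessarily the chart's actual files):

```yaml
# templates/configmap.yaml (illustrative): rendering aborts with the given
# message when garage.rpcSecret is left unset, instead of shipping a default
rpc_secret = "{{ required "garage.rpcSecret must be set, e.g. openssl rand -hex 32" .Values.garage.rpcSecret }}"
```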

Good point. I think it would be best to store rpcSecret as a Kubernetes Secret object, which is randomly generated if not provided, but then there is the problem of how to inject that into the container configuration. It would be easier if garage accepted configuration through env vars. Otherwise I think the only option is to fire up an init container and patch up the configuration TOML.
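
A rough sketch of that approach, assuming a hypothetical templates/rpc-secret.yaml; it reuses any previously generated value via Helm's `lookup` function so that upgrades don't rotate the secret:

```yaml
# templates/rpc-secret.yaml (hypothetical)
{{- $name := printf "%s-rpc-secret" .Release.Name }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace $name }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $name }}
data:
  {{- if .Values.garage.rpcSecret }}
  rpcSecret: {{ .Values.garage.rpcSecret | b64enc | quote }}
  {{- else if $existing }}
  # keep the value generated on a previous install
  rpcSecret: {{ index $existing.data "rpcSecret" }}
  {{- else }}
  # sha256sum yields 64 hex chars, the length garage expects for rpc_secret
  rpcSecret: {{ randAlphaNum 32 | sha256sum | b64enc | quote }}
  {{- end }}
```

Note that `lookup` returns an empty result under `helm template` and `--dry-run`, and callers can always bypass generation entirely, e.g. `helm install garage ./garage --set garage.rpcSecret=$(openssl rand -hex 32)`.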
   rpcBindAddr: "[::]:3901"
-  # If not given, a random secret will be generated
+  # If not given, a random secret will be generated and stored in a Secret object
   rpcSecret: ""
   # This is not required if you use the integrated kubernetes discovery
   bootstrapPeers: []
   kubernetesSkipCrd: false
   s3:
@@ -24,17 +27,19 @@ garage:
 persistence:
   enabled: true
   meta:
-    # storageClass: ""
+    # storageClass: "fast-storage-class"
     size: 100Mi
   data:
-    # storageClass: ""
+    # storageClass: "slow-storage-class"
     size: 100Mi
-# Number of StatefulSet replicas to start
+# Number of StatefulSet replicas/garage nodes to start
 replicaCount: 3
 image:
   repository: dxflrs/amd64_garage
+  # please prefer using the chart version and not this tag
   tag: ""
   pullPolicy: IfNotPresent
 imagePullSecrets: []
@@ -55,66 +60,80 @@ podAnnotations: {}
 podSecurityContext: {}
   # fsGroup: 2000
-securityContext: {}
-  # capabilities:
-  #   drop:
-  #   - ALL
-  # readOnlyRootFilesystem: true
-  # runAsNonRoot: true
-  # runAsUser: 1000
+securityContext:
+  # The default security context is heavily restricted
+  # feel free to tune it to your requirements
+  capabilities:
+    drop:
+    - ALL
+  readOnlyRootFilesystem: true
+  runAsNonRoot: true
+  runAsUser: 1000
 service:
+  # You can rely on any service to expose your cluster
+  # - ClusterIP (+ Ingress)
+  # - NodePort (+ Ingress)
+  # - LoadBalancer
   type: ClusterIP
   s3:
     api:
       port: 3900
     web:
       port: 3902
+  # NOTE: the admin API is excluded for now as it is not consistent across nodes
 ingress:
   s3:
     api:
-      enabled: false
-      className: ""
-      annotations: {}
-        # kubernetes.io/ingress.class: nginx
+      enabled: true
+      # Rely either on the className or the annotation below but not both
+      # replace "nginx" by an Ingress controller
+      # you can find examples here https://kubernetes.io/docs/concepts/services-networking/ingress-controllers
+      className: "nginx"
+      annotations:
+        # kubernetes.io/ingress.class: "nginx"
         # kubernetes.io/tls-acme: "true"
       hosts:
-        - host: chart-example.local
+        - host: "s3.garage.tld" # garage S3 API endpoint
           paths:
             - path: /
-              pathType: ImplementationSpecific
+              pathType: Prefix
+        - host: "*.s3.garage.tld" # garage S3 API endpoint, DNS style bucket access
+          paths:
+            - path: /
+              pathType: Prefix
       tls: []
-      #  - secretName: chart-example-tls
+      #  - secretName: my-garage-cluster-tls
       #    hosts:
-      #      - chart-example.local
+      #      - kubernetes.docker.internal
     web:
-      enabled: false
-      className: ""
+      enabled: true
+      className: "nginx"
       annotations: {}
         # kubernetes.io/ingress.class: nginx
         # kubernetes.io/tls-acme: "true"
       hosts:
-        - host: chart-example.local
-          paths:
-            - path: /
-              pathType: ImplementationSpecific
+        - host: "*.web.garage.tld" # wildcard website access with bucket name prefix
+          paths:
+            - path: /
+              pathType: Prefix
+        - host: "mywebpage.example.com" # specific bucket access with FQDN bucket
+          paths:
+            - path: /
+              pathType: Prefix
       tls: []
-      #  - secretName: chart-example-tls
+      #  - secretName: my-garage-cluster-tls
      #    hosts:
-      #      - chart-example.local
+      #      - kubernetes.docker.internal
 resources: {}
   # We usually recommend not to specify default resources and to leave this as a conscious
   # choice for the user. This also increases chances charts run on environments with little
   # resources, such as Minikube. If you do want to specify resources, uncomment the following
   # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+  # The following are indicative for a small-size deployment, for anything serious double them.
   # limits:
   #   cpu: 100m
-  #   memory: 128Mi
+  #   memory: 1024Mi
   # requests:
   #   cpu: 100m
-  #   memory: 128Mi
+  #   memory: 512Mi
 nodeSelector: {}