Compare commits

...

19 commits

| Author | SHA1 | Message | Checks | Date |
|:---|:---|:---|:---|:---|
| forgejo-actions | 4a91201865 | chore(release): 0.0.43 | All checks were successful | 2024-08-17 17:42:29 +00:00 |
| | 723ccf5a19 | fix: maildev cron | Some checks failed | 2024-08-17 19:41:58 +02:00 |
| forgejo-actions | af5de94208 | chore(release): 0.0.42 | All checks were successful | 2024-07-21 14:05:40 +00:00 |
| | 9940847608 | fix: up | Some checks failed | 2024-07-21 16:04:44 +02:00 |
| forgejo-actions | 3f4f183792 | chore(release): 0.0.41 | All checks were successful | 2024-07-18 22:47:52 +00:00 |
| | 103cc957ca | feat: cnpg-cluster | Some checks failed | 2024-07-19 00:47:26 +02:00 |
| forgejo-actions | bef54f201e | chore(release): 0.0.40 | All checks were successful | 2024-07-18 18:37:12 +00:00 |
| | 921e731c81 | fix: probes | Some checks failed | 2024-07-18 20:36:46 +02:00 |
| forgejo-actions | ccc683ebba | chore(release): 0.0.39 | All checks were successful | 2024-07-18 17:50:45 +00:00 |
| | a6dfb33bd7 | fix: trigger ci | Some checks failed | 2024-07-18 19:50:19 +02:00 |
| forgejo-actions | 4e922704bc | chore(release): 0.0.38 | All checks were successful | 2024-07-18 17:46:19 +00:00 |
| | 38a66768ad | fix: trigger ci | Some checks failed | 2024-07-18 19:45:49 +02:00 |
| forgejo-actions | 56d3a06a3b | chore(release): 0.0.37 | All checks were successful | 2024-07-18 17:43:47 +00:00 |
| | 200bee21cb | fix: cicd | Some checks failed | 2024-07-18 19:43:19 +02:00 |
| | 24ae004694 | fix: probes | Some checks failed | 2024-07-18 19:36:25 +02:00 |
| forgejo-actions | 4c94033a00 | chore(release): 0.0.36 | All checks were successful | 2024-07-12 21:07:47 +00:00 |
| | 9b67090f9d | feat: add keydb chart | Some checks failed | 2024-07-12 23:07:19 +02:00 |
| forgejo-actions | aec10b4554 | chore(release): 0.0.35 | All checks were successful | 2024-07-04 17:31:37 +00:00 |
| | d1ef35b12b | fix: trigger ci | Some checks failed | 2024-07-04 19:31:01 +02:00 |
31 changed files with 1663 additions and 11 deletions

@@ -10,7 +10,7 @@ jobs:
   codeberg:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v4.1.7
         with:
           fetch-depth: 0
       - uses: https://codeberg.org/devthefuture/repository-mirroring-action.git@v1

@@ -9,7 +9,7 @@ jobs:
   release:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v4.1.7
       - name: Publish Helm charts
         # uses: https://git.devthefuture.org/devthefuture/helm-pages.git@master
         uses: https://git.devthefuture.org/devthefuture/helm-pages.git@67ae29485d9312f224f5b188bc2a0ed9f2f4a4f2

@@ -18,7 +18,7 @@ jobs:
     runs-on: ubuntu-latest
     name: create release using commit-and-tag-version
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v4.1.7
         with:
          token: ${{ secrets.M8A_ORG_BOT_REPO_TOKEN }}

@@ -2,6 +2,69 @@
All notable changes to this project will be documented in this file. See [commit-and-tag-version](https://github.com/absolute-version/commit-and-tag-version) for commit guidelines.
## 0.0.43 (2024-08-17)
### Bug Fixes
* maildev cron ([723ccf5](https://codeberg.org/devthefuture/helm-charts/commit/723ccf5a190b763107814327f6f4121eae278e9c))
## 0.0.42 (2024-07-21)
### Bug Fixes
* up ([9940847](https://codeberg.org/devthefuture/helm-charts/commit/99408476084068c8f91b449952ecef07afd24bfa))
## 0.0.41 (2024-07-18)
### Features
* cnpg-cluster ([103cc95](https://codeberg.org/devthefuture/helm-charts/commit/103cc957ca84e799f4a950718b9624867ec2e326))
## 0.0.40 (2024-07-18)
### Bug Fixes
* probes ([921e731](https://codeberg.org/devthefuture/helm-charts/commit/921e731c815394ccdd9391a14f8be307da3b0972))
## 0.0.39 (2024-07-18)
### Bug Fixes
* trigger ci ([a6dfb33](https://codeberg.org/devthefuture/helm-charts/commit/a6dfb33bd74e72e75d9a4844ec789bf60c3c75a4))
## 0.0.38 (2024-07-18)
### Bug Fixes
* trigger ci ([38a6676](https://codeberg.org/devthefuture/helm-charts/commit/38a66768ad86ab37ccf7c7f719c2e8f1a0f31f56))
## 0.0.37 (2024-07-18)
### Bug Fixes
* cicd ([200bee2](https://codeberg.org/devthefuture/helm-charts/commit/200bee21cb7d4749c5e5e6dadd35bd9936a018d6))
## 0.0.36 (2024-07-12)
### Features
* add keydb chart ([9b67090](https://codeberg.org/devthefuture/helm-charts/commit/9b67090f9d25142722a442161848c8e0e05f31d7))
## 0.0.35 (2024-07-04)
### Bug Fixes
* trigger ci ([d1ef35b](https://codeberg.org/devthefuture/helm-charts/commit/d1ef35b12bf51d9f1182e14f5a24bed1c3bcb2c9))
## 0.0.34 (2024-07-04)

@@ -0,0 +1,6 @@
apiVersion: v2
name: cnpg-cluster
description: A Helm chart to create cloudnative-pg.io clusters
type: application
version: 0.0.43
appVersion: '15'

@@ -0,0 +1,8 @@
# cnpg-cluster
A Helm chart to create cloudnative-pg.io clusters
Originally based on [enix's cnpg-cluster Helm chart](https://github.com/enix/helm-charts/tree/master/charts/cnpg-cluster), then on https://socialgouv.github.io/helm-charts.

@@ -0,0 +1,68 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "cnpg-cluster.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "cnpg-cluster.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "cnpg-cluster.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "cnpg-cluster.labels" -}}
helm.sh/chart: {{ include "cnpg-cluster.chart" . }}
{{ include "cnpg-cluster.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Common annotations
*/}}
{{- define "cnpg-cluster.annotations" -}}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.annotations }}
{{ toYaml .Values.annotations }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "cnpg-cluster.selectorLabels" -}}
app.kubernetes.io/name: {{ include "cnpg-cluster.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Backup secret name
*/}}
{{- define "cnpg-cluster.backupSecretName" -}}
{{ or .Values.backup.secretName (print (include "cnpg-cluster.fullname" .) `-backup`) }}
{{- end }}
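The naming helpers above implement a common Helm convention. As a rough illustration only, the same resolution logic can be sketched in plain Python (the function and the sample names below are hypothetical, not part of the chart):

```python
def fullname(release_name, chart_name, name_override="", fullname_override=""):
    """Rough Python rendering of the "cnpg-cluster.fullname" helper:
    fullnameOverride wins; otherwise reuse the release name when it already
    contains the chart name, else join "<release>-<chart>". Names are cut to
    63 chars (Kubernetes DNS label limit). Note: Helm's trimSuffix removes at
    most one trailing "-"; rstrip is close enough for illustration."""
    if fullname_override:
        name = fullname_override
    else:
        name = name_override or chart_name
        name = release_name if name in release_name else f"{release_name}-{name}"
    return name[:63].rstrip("-")

print(fullname("db", "cnpg-cluster"))               # db-cnpg-cluster
print(fullname("my-cnpg-cluster", "cnpg-cluster"))  # my-cnpg-cluster
```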

@@ -0,0 +1,166 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: {{ include "cnpg-cluster.fullname" . }}
labels:
{{- include "cnpg-cluster.labels" . | nindent 4 }}
annotations:
{{- include "cnpg-cluster.annotations" . | nindent 4 }}
spec:
logLevel: {{ .Values.logLevel }}
instances: {{ .Values.instances }}
{{- if .Values.image.repository }}
imageName: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if or .Values.imagePullSecrets .Values.registryCredentials }}
imagePullSecrets:
{{- with .Values.imagePullSecrets }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- range $name, $settings := .Values.registryCredentials }}
- name: "{{ include "cnpg-cluster.fullname" $ }}-{{ $name }}"
{{- end }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if or .Values.nodeSelector .Values.tolerations .Values.extraAffinity }}
affinity:
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.extraAffinity }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
storage:
size: {{ .Values.persistence.size | quote }}
{{- with .Values.persistence.resizeInUseVolumes }}
resizeInUseVolumes: {{ . | quote }}
{{- end }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClass: ""
{{- else }}
storageClass: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- with .Values.persistence.pvcTemplate }}
pvcTemplate:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if .Values.backup.enabled }}
backup:
retentionPolicy: "{{ .Values.backup.retentionPolicy }}"
barmanObjectStore:
{{- toYaml .Values.backup.barmanObjectStore | nindent 6 }}
{{- end }}
minSyncReplicas: {{ .Values.minSyncReplicas }}
maxSyncReplicas: {{ .Values.maxSyncReplicas }}
postgresql:
pg_hba:
{{- with .Values.pg_hba }}
{{- toYaml . | nindent 8 }}
{{- end }}
parameters:
{{- with .Values.postgresqlParameters }}
{{- toYaml . | nindent 8 }}
{{- end }}
shared_preload_libraries:
{{- with .Values.sharedPreloadLibraries }}
{{- toYaml . | nindent 8 }}
{{- end }}
monitoring:
enablePodMonitor: {{ .Values.monitoring.enablePodMonitor }}
{{ if .Values.superuserSecretName }}
superuserSecret:
name: {{ .Values.superuserSecretName }}
{{ end }}
{{- if .Values.replica.enabled }}
replica:
enabled: true
source: {{ .Values.replica.source }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
bootstrap:
{{- if .Values.recovery.enabled }}
recovery:
source: {{ .Values.recovery.externalClusterName | default "recovery-cluster" }}
{{ if .Values.recovery.database }}
database: {{ .Values.recovery.database }}
{{- end }}
{{ if .Values.recovery.owner }}
owner: {{ .Values.recovery.owner }}
{{- end }}
{{ if .Values.recovery.secretName }}
secret:
name: {{ .Values.recovery.secretName }}
{{ end }}
{{- if .Values.recovery.targetTime }}
recoveryTarget:
targetTime: "{{ .Values.recovery.targetTime }}"
{{- end }}
{{- else if (and .Values.pg_basebackup.enabled .Values.pg_basebackup.source) }}
pg_basebackup:
source: {{ .Values.pg_basebackup.source }}
{{- else }}
initdb:
database: {{ .Values.dbName }}
owner: {{ .Values.dbOwner }}
{{ if .Values.dbSecretName }}
secret:
name: {{ .Values.dbSecretName }}
{{ end }}
# postgis configuration plugins
postInitTemplateSQL:
{{- range $cmd := .Values.postgresqlInitCommandsBeforeExtensions }}
- {{ $cmd | quote }}
{{- end }}
{{- range $name := .Values.extensions }}
- CREATE EXTENSION IF NOT EXISTS "{{ $name }}";
{{- end }}
{{- range $cmd := .Values.postgresqlInitCommands }}
- {{ $cmd | quote }}
{{- end }}
{{ if .Values.postInitApplicationSQL }}
postInitApplicationSQL:
{{- toYaml .Values.postInitApplicationSQL | nindent 8 }}
{{ end }}
{{ if .Values.postInitApplicationSQLRefs }}
postInitApplicationSQLRefs:
{{- toYaml .Values.postInitApplicationSQLRefs | nindent 8 }}
{{ end }}
{{- end }}
externalClusters:
{{- if .Values.recovery.enabled }}
- name: {{ .Values.recovery.externalClusterName | default "recovery-cluster" }}
barmanObjectStore:
{{- toYaml .Values.recovery.barmanObjectStore | nindent 8 }}
{{- end }}
{{- if .Values.externalClusters }}
{{- toYaml .Values.externalClusters | nindent 4 }}
{{- end }}
{{- with .Values.clusterExtraSpec }}
{{- toYaml . | nindent 2 }}
{{- end }}
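The bootstrap block above selects exactly one method, in order of precedence: recovery, then pg_basebackup, then initdb. A minimal values fragment enabling point-in-time recovery might look like this (the bucket, endpoint, secret, and cluster names are hypothetical placeholders):

```yaml
recovery:
  enabled: true
  externalClusterName: old-cluster           # hypothetical external cluster
  targetTime: "2024-07-01T00:00:00+00:00"    # PITR target, RFC3339
  barmanObjectStore:
    destinationPath: s3://my-backups/cnpg    # hypothetical bucket
    endpointURL: https://s3.example.org      # hypothetical endpoint
    serverName: old-cluster
    s3Credentials:
      accessKeyId:
        name: cnpg-backup-s3                 # hypothetical secret name
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: cnpg-backup-s3
        key: SECRET_ACCESS_KEY
```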

@@ -0,0 +1,16 @@
{{- range $name, $spec := .Values.poolers }}
---
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
name: {{ include "cnpg-cluster.fullname" $ }}-{{ $name }}
labels:
{{- include "cnpg-cluster.labels" $ | nindent 4 }}
cnpg.io/poolerName: {{ include "cnpg-cluster.fullname" $ }}-{{ $name }}
spec:
cluster:
name: {{ include "cnpg-cluster.fullname" $ }}
{{- toYaml $spec | nindent 2 }}
monitoring:
enablePodMonitor: {{ $.Values.monitoring.enablePodMonitor }}
{{- end }}

@@ -0,0 +1,12 @@
{{- range $name, $settings := .Values.registryCredentials }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "cnpg-cluster.fullname" $ }}-{{ $name }}
labels:
{{- include "cnpg-cluster.labels" $ | nindent 4 }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: "{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" $settings.registry $settings.username $settings.password $settings.email (printf "%s:%s" $settings.username $settings.password | b64enc) | b64enc }}"
---
{{- end }}
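The `.dockerconfigjson` expression above double-encodes the credentials: the `auth` field is base64 of `username:password`, and the whole JSON document is base64-encoded again. A sketch of the same encoding in Python, for illustration (the function name is hypothetical):

```python
import base64
import json

def dockerconfigjson(registry, username, password, email):
    """Builds the same payload the template renders for each
    registryCredentials entry: auth = base64("username:password"),
    then the whole JSON is base64-encoded for the Secret data field."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    payload = {"auths": {registry: {
        "username": username,
        "password": password,
        "email": email,
        "auth": auth,
    }}}
    # separators=(",", ":") matches the compact JSON the template prints
    return base64.b64encode(
        json.dumps(payload, separators=(",", ":")).encode()
    ).decode()
```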

@@ -0,0 +1,14 @@
{{- if .Values.backup.enabled }}
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
name: {{ include "cnpg-cluster.fullname" $ }}-scheduledbackup
labels:
{{- include "cnpg-cluster.labels" $ | nindent 4 }}
spec:
backupOwnerReference: self
cluster:
name: {{ include "cnpg-cluster.fullname" $ }}
schedule: "{{ .Values.backup.schedule }}"
immediate: {{ .Values.backup.immediate }}
{{- end }}

@@ -0,0 +1,239 @@
# yaml-language-server: $schema=./values.schema.json
# Default values for cnpg-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# @param {number} [instances] Number of Postgres instances in the cluster
instances: 1
# @param {string} [logLevel] The instances log level, one of the following values: error, warning, info (default), debug, trace
logLevel: info
# @param {object} [annotations] CNPG cluster annotations
annotations: {}
# @param {object} [image] Docker image for the PG instances
image:
# @param {string} [repository] CNPG compatible Postgres image. see https://github.com/cloudnative-pg/postgres-containers
repository: "ghcr.io/cloudnative-pg/postgis"
# @param {string} [pullPolicy] Docker image pull policy. see https://kubernetes.io/docs/concepts/containers/images#updating-images
pullPolicy: IfNotPresent
# @param {string} [tag] Docker image tag
tag: "15"
# @param {array} [imagePullSecrets] docker image pull secrets. see https://kubernetes.io/fr/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# @param {object,null} [registryCredentials] Registry credentials from which dockerconfigjson pull secrets are generated
registryCredentials:
# Eg:
# mygitlab:
# registry: gitlab-registry.example.org
# email: foo@example.org
# username: foobar
# password: secret
# @param {string} [nameOverride] String to partially override cnpg-cluster.fullname template with a string (will prepend the release name)
nameOverride: ""
# @param {string} [fullnameOverride] String to fully override cnpg-cluster.fullname template with a string
fullnameOverride: ""
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# @param {https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/v1.24.0/_definitions.json#/definitions/io.k8s.api.core.v1.ResourceRequirements} [resources] CPU/Memory resource requests/limits
resources:
{}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# @param {object} [nodeSelector] Postgres instances labels for pod assignment
nodeSelector: {}
# If the specified PriorityClass does not exist, the pod will not be scheduled. See https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
# @param {string} [priorityClassName] Name of the priority class to use in every generated Pod
priorityClassName: ""
# @param {array} [tolerations] Postgres instances labels for tolerations pod assignment
tolerations: []
# @param {object} [extraAffinity] Extra configuration for Cluster's affinity resource, see: https://cloudnative-pg.io/documentation/1.17/api_reference/#AffinityConfiguration
extraAffinity: {}
# @param {object} [persistence] Data persistence configuration
persistence:
# @param {string} [size] Size of each instance storage volume
size: 8Gi
# @param {boolean,null} [resizeInUseVolumes] Resize existent PVCs, defaults to true
resizeInUseVolumes:
# Applied after evaluating the PVC template, if available.
# If not specified, generated PVCs will be satisfied by the default storage class
# @param {string} [storageClass] StorageClass to use for database data
storageClass: ""
# @param {object} [pvcTemplate] Template to be used to generate the Persistent Volume Claim
pvcTemplate: {}
# @param {object} [backup] Backup configuration
backup:
# @param {boolean} [enabled] Enable backups
enabled: false
# Note: this cron format has the seconds field on the left (six fields)
# @param {string} [schedule] Schedule for the backups; the default runs weekly at midnight on Sunday
schedule: "0 0 0 * * 0"
# The retention policy is expressed in the form of XXu where XX is a positive integer and
# u is in [dwm] - days, weeks, months.
# @param {string} [retentionPolicy] RetentionPolicy is the retention policy to be used for backups and WALs (i.e. '60d').
retentionPolicy: 30d
# @param {boolean} [immediate] Whether the first backup starts immediately after the ScheduledBackup is created
immediate: true
# See: https://cloudnative-pg.io/documentation/1.20/backup_recovery/
# @param {object,null} [barmanObjectStore] Object store credentials and access config
barmanObjectStore:
# destinationPath:
# endpointURL:
# s3Credentials:
# accessKeyId:
# name:
# key:
# secretAccessKey:
# name:
# key:
# region:
# name:
# key:
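Tying the backup options together, a hedged example of a complete configuration (the bucket, endpoint, and secret names are hypothetical placeholders):

```yaml
backup:
  enabled: true
  schedule: "0 0 0 * * 0"        # seconds-first cron: weekly, Sunday midnight
  retentionPolicy: 30d
  barmanObjectStore:
    destinationPath: s3://my-backups/cnpg    # hypothetical bucket
    endpointURL: https://s3.example.org      # hypothetical endpoint
    s3Credentials:
      accessKeyId:
        name: cnpg-backup-s3                 # hypothetical secret name
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: cnpg-backup-s3
        key: SECRET_ACCESS_KEY
```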
# @param {object} [clusterExtraSpec] Extra configuration for Cluster resource. See: https://cloudnative-pg.io/documentation/1.17/api_reference/#clusterspec
clusterExtraSpec: {}
# @param {object} [scheduledBackups] ScheduledBackup resources to create for this Cluster resource. See: https://cloudnative-pg.io/documentation/1.17/api_reference/#ScheduledBackupSpec
scheduledBackups: {}
# Eg:
# daily:
# schedule: "0 0 0 * * *"
# @param {object} [poolers] Pooler resources to create for this Cluster resource. See: https://cloudnative-pg.io/documentation/1.17/api_reference/#PoolerSpec
poolers: {}
# Eg:
# rw:
# instances: 3
# type: rw
# pgbouncer:
# poolMode: session
# parameters:
# max_client_conn: "1000"
# default_pool_size: "10"
# @param {number} [minSyncReplicas] Minimum number of synchronous replicas. See https://cloudnative-pg.io/documentation/current/replication/#synchronous-replication
minSyncReplicas: 0
# @param {number} [maxSyncReplicas] Maximum number of synchronous replicas. See https://cloudnative-pg.io/documentation/current/replication/#synchronous-replication
maxSyncReplicas: 0
# @param {array} [pg_hba] pg_hba entries. See https://www.postgresql.org/docs/current/auth-pg-hba-conf.html
pg_hba: []
# Define your parameters on https://pgtune.leopard.in.ua
# @param {https://raw.githubusercontent.com/SocialGouv/json-schemas/main/postgres/parameters.json} [postgresqlParameters] PostgreSQL parameters. See https://www.postgresql.org/docs/current/runtime-config.html
postgresqlParameters: {}
# @param {array} [sharedPreloadLibraries] PostgreSQL shared preload libraries. See https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-PostgresConfiguration
sharedPreloadLibraries: []
# @param {array} [externalClusters] define external clusters for recovery/replication see https://cloudnative-pg.io/documentation/current/api_reference/#externalcluster
externalClusters: []
# @param {object} [replica] Replica mode
replica:
# @param {boolean} [enabled] Enable replica mode
enabled: false
# @param {object} [pg_basebackup] Enable pg_basebackup on bootstrap, see https://cloudnative-pg.io/documentation/current/bootstrap/#bootstrap-from-a-live-cluster-pg_basebackup
pg_basebackup:
# @param {boolean} [enabled] Enable pg_basebackup bootstrap, see https://cloudnative-pg.io/documentation/current/bootstrap/#bootstrap-from-a-live-cluster-pg_basebackup
enabled: false
# @param {string,null} [source] externalCluster cluster name for the pg_basebackup
source:
# @param {https://raw.githubusercontent.com/SocialGouv/json-schemas/main/postgres/extensions.json} [extensions] Postgres extensions to create in the template database at initdb
extensions: []
# @param {string[]} [postgresqlInitCommandsBeforeExtensions] SQL commands run at initdb before the extensions are created
postgresqlInitCommandsBeforeExtensions: []
# @param {string[]} [postgresqlInitCommands] SQL commands run at initdb after the extensions are created
postgresqlInitCommands: []
# @param {string} [dbName] Name of the default database to create
dbName: app
# @param {string} [dbOwner] Name of the default user to create
dbOwner: app
# @param {object} [monitoring] Monitoring. see https://cloudnative-pg.io/documentation/current/monitoring/
monitoring:
# @param {boolean} [enablePodMonitor] Enable metrics monitoring. see https://cloudnative-pg.io/documentation/current/monitoring/
enablePodMonitor: false
# @param {string,null} [superuserSecretName] To force the super user secret name
superuserSecretName:
# @param {string,null} [dbSecretName] To force the DB secret name
dbSecretName:
# @param {object} [recovery] Recovery. see https://cloudnative-pg.io/documentation/current/backup_recovery/#recovery
recovery:
# @param {boolean} [enabled] Enable recovery
enabled: false
# Relative to Postgres server timezone
# @param {string} [targetTime] Time to restore from, in RFC3339 format https://datatracker.ietf.org/doc/html/rfc3339
targetTime: ""
# @param {string,null} [database] Database to restore to
database:
# @param {string,null} [owner] Database owner to restore to
owner:
# @param {string,null} [secretName] Secret where owner password is set
secretName:
# @param {string,null} [externalClusterName] Name for the external cluster to recover from
externalClusterName:
# See: https://cloudnative-pg.io/documentation/current/backup_recovery/
# @param {object,null} [barmanObjectStore] Object store credentials and access config
barmanObjectStore:
# destinationPath:
# endpointURL:
# name of the recovery server on the s3 backups
# serverName:
# s3Credentials:
# accessKeyId:
# name:
# key:
# secretAccessKey:
# name:
# key:
# region:
# name:
# key:
# @param {string[]} [postInitApplicationSQL] List of SQL queries to be executed as a superuser in the application database right after is created - to be used with extreme care (by default empty)
postInitApplicationSQL: []
# Points to ConfigMaps or Secrets that contain SQL files. All Secrets are applied before all ConfigMaps, and within each kind the entries are applied in array order (by default empty).
# See https://cloudnative-pg.io/documentation/current/api_reference/#postinitapplicationsqlrefs
# @param {object,null} [postInitApplicationSQLRefs]
postInitApplicationSQLRefs:
# configMapRefs:
# - name: post-init-sql-configmap
# key: configmap.sql
# secretRefs:
# - name: post-init-sql-secret
# key: secret.sql

charts/keydb/Chart.yaml
@@ -0,0 +1,5 @@
apiVersion: v2
name: keydb
description: A Helm chart for KeyDB multimaster setup
type: application
version: 0.0.43

charts/keydb/README.md
@@ -0,0 +1,134 @@
# KeyDB
[KeyDB](https://keydb.dev) is a popular drop-in Redis alternative that benchmarks up to 5X faster than Redis (node vs node). KeyDB is multithreaded, can use several storage mediums natively, and scales vertically, which lets it consolidate much of the operational complexity associated with Redis. This architecture positions KeyDB as a bridge between the cache layer and traditional databases, offering both performance and durability.
## Introduction
This chart bootstraps a [KeyDB](https://keydb.dev) highly available multi-master statefulset in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager.
forked from https://github.com/Enapter/charts
### Config Example:
```yaml
configExtraArgs:
  - client-output-buffer-limit: ["normal", "0", "0", "0"]
  - client-output-buffer-limit: ["replica", "268435456", "67108864", "60"]
  - client-output-buffer-limit: ["pubsub", "33554432", "8388608", "60"]
  - save: ~
  - tcp-backlog: ["1024"]
```
### Resulting File:
```sh
...
exec keydb-server /etc/keydb/redis.conf \
...
--client-output-buffer-limit "normal" "0" "0" "0" \
--client-output-buffer-limit "replica" "268435456" "67108864" "60" \
--client-output-buffer-limit "pubsub" "33554432" "8388608" "60" \
--save \
--tcp-backlog "1024" \
...
```
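The mapping from `configExtraArgs` entries to the flags in the resulting file can be sketched roughly as follows (`to_flags` is a hypothetical helper for illustration, not part of the chart):

```python
def to_flags(entries):
    """Each entry is a single-key mapping: a null value (save: ~) emits the
    bare flag, and a list value emits the flag followed by quoted args."""
    flags = []
    for entry in entries:
        for key, value in entry.items():
            if value is None:
                flags.append(f"--{key}")
            else:
                flags.append(f"--{key} " + " ".join(f'"{v}"' for v in value))
    return flags

print(to_flags([{"save": None}, {"tcp-backlog": ["1024"]}]))
# ['--save', '--tcp-backlog "1024"']
```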
## Prerequisites
- PV provisioner support in the underlying infrastructure if you want to enable persistence
## Configuration
The following table lists the configurable parameters of the KeyDB chart and their default values.
| Parameter | Description | Default |
|:--------------------------------|:---------------------------------------------------|:------------------------------------------|
| `imageRepository` | KeyDB docker image | `eqalpha/keydb` |
| `imageTag` | KeyDB docker image tag | `x86_64_v6.3.2` |
| `imagePullPolicy` | K8s imagePullPolicy | `IfNotPresent` |
| `imagePullSecrets` | KeyDB Pod imagePullSecrets | `[]` |
| `nodes` | Number of KeyDB master pods | `3` |
| `password` | If enabled KeyDB servers are password-protected | `""` |
| `existingSecret` | If enabled password is taken from secret | `""` |
| `existingSecretPasswordKey` | Secret key name. | `"password"` |
| `port` | KeyDB service port clients connect to | `6379` |
| `portName` | KeyDB service port name in the Service spec | `server` |
| `threads` | KeyDB server-threads per node | `2` |
| `multiMaster` | KeyDB multi-master setup | `yes` |
| `activeReplicas` | KeyDB active replication setup | `yes` |
| `protectedMode` | KeyDB protection mode | `no` |
| `appendonly` | KeyDB appendonly setting | `no` |
| `configExtraArgs` | Additional configuration arguments for KeyDB | `[]` |
| `annotations` | KeyDB StatefulSet annotations | `{}` |
| `podAnnotations` | KeyDB pods annotations | `{}` |
| `tolerations` | KeyDB tolerations setting | `{}` |
| `nodeSelector` | KeyDB nodeSelector setting | `{}` |
| `topologySpreadConstraints` | KeyDB topologySpreadConstraints setting | `[]` |
| `affinity` | StatefulSet Affinity rules | See values.yaml |
| `extraInitContainers` | Additional init containers for StatefulSet | `[]` |
| `extraContainers` | Additional sidecar containers for StatefulSet | `[]` |
| `extraVolumes` | Additional volumes for init and sidecar containers | `[]` |
| `livenessProbe.custom` | Custom LivenessProbe for KeyDB pods | `{}` |
| `readinessProbe.custom` | Custom ReadinessProbe for KeyDB pods | `{}` |
| `readinessProbeRandomUuid` | Random UUIDv4 for readiness GET probe | `90f717dd-0e68-43b8-9363-fddaad00d6c9` |
| `startupProbe.custom` | Custom StartupProbe for KeyDB pods | `{}` |
| `persistentVolume.enabled` | Should PVC be created via volumeClaimTemplates | `true` |
| `persistentVolume.accessModes` | Volume access modes | `[ReadWriteOnce]` |
| `persistentVolume.selector` | PVC selector. (In order to match existing PVs) | `{}` |
| `persistentVolume.size` | Size of the volume | `1Gi` |
| `persistentVolume.storageClass` | StorageClassName for volume | `` |
| `podDisruptionBudget` | podDisruptionBudget for KeyDB pods | See values.yaml |
| `resources` | Resources for KeyDB containers | `{}` |
| `scripts.enabled` | Turn on health util scripts | `false` |
| `scripts.cleanupCoredumps` | Coredumps cleanup scripts | See values.yaml |
| `scripts.cleanupTempfiles` | Tempfiles cleanup scripts | See values.yaml |
| `scripts.securityContext` | SecurityContext for scripts container | `{}` |
| `keydb.securityContext` | SecurityContext for KeyDB container | `{}` |
| `securityContext` | SecurityContext for KeyDB pods | `{}` |
| `service.annotations` | Service annotations | `{}` |
| `service.appProtocol.enabled` | Turn on appProtocol fields in port specs | `false` |
| `loadBalancer.enabled` | Create LoadBalancer service | `false` |
| `loadBalancer.annotations` | Annotations for LB | `{}` |
| `loadBalancer.extraSpec` | Additional spec for LB | `{}` |
| `serviceAccount.enabled` | Use a dedicated ServiceAccount (SA) | `false` |
| `serviceAccount.create` | Create the SA (rather than use an existing one) | `true` |
| `serviceAccount.name` | Set the name of an existing SA or override created | `` |
| `serviceAccount.extraSpec` | Additional spec for the created SA | `{}` |
| `serviceMonitor.enabled` | Prometheus operator ServiceMonitor | `false` |
| `serviceMonitor.labels` | Additional labels for ServiceMonitor | `{}` |
| `serviceMonitor.annotations` | Additional annotations for ServiceMonitor | `{}` |
| `serviceMonitor.interval` | ServiceMonitor scrape interval | `30s` |
| `serviceMonitor.scrapeTimeout` | ServiceMonitor scrape timeout | `nil` |
| `exporter.enabled` | Prometheus Exporter sidecar container | `false` |
| `exporter.imageRepository` | Exporter Image | `oliver006/redis_exporter` |
| `exporter.imageTag` | Exporter Image Tag | `v1.48.0-alpine` |
| `exporter.pullPolicy` | Exporter imagePullPolicy | `IfNotPresent` |
| `exporter.port` | `prometheus.io/port` | `9121` |
| `exporter.portName` | Exporter service port name in the Service spec | `redis-exporter` |
| `exporter.scrapePath` | `prometheus.io/path` | `/metrics` |
| `exporter.livenessProbe` | LivenessProbe for sidecar Prometheus exporter | See values.yaml |
| `exporter.readinessProbe` | ReadinessProbe for sidecar Prometheus exporter | See values.yaml |
| `exporter.startupProbe` | StartupProbe for sidecar Prometheus exporter | See values.yaml |
| `exporter.resources` | Resources for sidecar Prometheus container | `{}` |
| `exporter.securityContext` | SecurityContext for Prometheus exporter container | `{}` |
| `exporter.extraArgs` | Additional arguments for exporter | `[]` |
## Using existingSecret
When `existingSecret` is set (it defaults to `""`), the `password` value is ignored: the password is read from that secret instead of being provided as plain text in values.yaml. \
The secret key is configured via `existingSecretPasswordKey` (*password* by default). \
Example of such a secret:
```bash
kubectl create secret generic keydb-password --from-literal=password=KEYDB_PASSWORD
```
The corresponding `existingSecret` definition:
```yaml
password: ""
existingSecret: keydb-password
existingSecretPasswordKey: password
```
Use only one way of providing the password: either plain text in values.yaml or an already existing secret, never both.
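The `kubectl create secret` command above stores the value base64-encoded in the Secret's `data` field. A quick local sketch of that round trip (the password here is just a placeholder, not a real credential):

```shell
# kubectl base64-encodes literal secret values; the container receives
# the decoded value via the env var wired up by the chart.
plain="KEYDB_PASSWORD"
encoded="$(printf '%s' "${plain}" | base64)"
decoded="$(printf '%s' "${encoded}" | base64 -d)"
echo "${decoded}"
```

Note that base64 is an encoding, not encryption: anyone who can read the Secret can recover the password.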

@@ -0,0 +1,68 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "keydb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "keydb.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "keydb.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "keydb.labels" -}}
helm.sh/chart: {{ include "keydb.chart" . }}
{{ include "keydb.selectorLabels" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "keydb.selectorLabels" -}}
app.kubernetes.io/name: {{ include "keydb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "keydb.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "keydb.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- define "common.tplvalues.render" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}
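The `common.tplvalues.render` helper above runs a value through `tpl`, so values may themselves contain template expressions. For instance, an `affinity` value like the following (a sketch, mirroring the chart's default) gets `{{ .Release.Name }}` expanded at render time:

```yaml
# Plain YAML passes through unchanged; template strings are expanded by tpl.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/instance
                operator: In
                values:
                  - '{{ .Release.Name }}'   # expanded per release
          topologyKey: kubernetes.io/hostname
```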

@@ -0,0 +1,80 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "keydb.fullname" . }}-health
labels:
{{- include "keydb.labels" . | nindent 4 }}
data:
ping_readiness_local.sh: |-
#!/bin/bash
set -e
loading_response="LOADING KeyDB is loading the dataset in memory"
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 15 "${1}" \
keydb-cli \
-h localhost \
-p "${REDIS_PORT}" \
GET {{ .Values.readinessProbeRandomUuid }}
)"
if [ "${response}" = "${loading_response}" ]; then
echo "${response}"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 15 "${1}" \
keydb-cli \
-h localhost \
-p "${REDIS_PORT}" \
PING
)"
if [ "${response}" != "PONG" ]; then
echo "${response}"
exit 1
fi
{{- if .Values.scripts.enabled }}
scripts_local.sh: |-
#!/bin/bash
set -e
script_dir="$(dirname "$0")"
while true; do
{{- if .Values.scripts.cleanupCoredumps.enabled }}
"${script_dir}/cleanup_coredumps.sh"
{{- end }}
{{- if .Values.scripts.cleanupTempfiles.enabled }}
"${script_dir}/cleanup_tempfiles.sh"
{{- end }}
sleep 60
done
{{- end }}
{{- if .Values.scripts.cleanupCoredumps.enabled }}
cleanup_coredumps.sh: |-
#!/bin/bash
set -e
find /data/ -type f -name "core.*" -mmin +{{ .Values.scripts.cleanupCoredumps.minutes }} -delete
{{- end }}
{{- if .Values.scripts.cleanupTempfiles.enabled }}
cleanup_tempfiles.sh: |-
#!/bin/bash
set -e
find /data/ -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -mmin +{{ .Values.scripts.cleanupTempfiles.minutes }} -delete
{{- end }}
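The cleanup scripts rely on `find … -delete` with name patterns. A standalone sketch of the tempfile matching (using a throwaway directory in place of `/data`):

```shell
# Scratch directory with files that do and do not match the patterns
# used by cleanup_tempfiles.sh.
dir="$(mktemp -d)"
touch "${dir}/temp-123.aof" "${dir}/temp-456.rdb" "${dir}/dump.rdb" "${dir}/appendonly.aof"
# The chart also adds -mmin +<minutes> so only files older than the
# threshold are removed; omitted here so the fresh files match.
find "${dir}" -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -delete
ls "${dir}"
```

Only the `temp-*` files are removed; the live `dump.rdb` and `appendonly.aof` survive.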

@@ -0,0 +1,18 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ include "keydb.fullname" . }}
labels:
{{- include "keydb.labels" . | nindent 4 }}
spec:
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
selector:
matchLabels:
{{- include "keydb.selectorLabels" . | nindent 6 }}
{{- end }}

@@ -0,0 +1,11 @@
{{- if and .Values.serviceAccount.enabled .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "keydb.serviceAccountName" . | quote }}
labels:
{{- include "keydb.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.extraSpec }}
{{ toYaml . }}
{{- end }}
{{- end }}

@@ -0,0 +1,46 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "keydb.fullname" . }}-utils
labels:
{{- include "keydb.labels" . | nindent 4 }}
type: Opaque
stringData:
server.sh: |
#!/bin/bash
set -euxo pipefail
host="$(hostname)"
replicas=()
for node in {0..{{ (sub (.Values.nodes | int) 1) }}}; do
if [ "${host}" != "{{ include "keydb.fullname" . }}-${node}" ]; then
replicas+=("--replicaof {{ include "keydb.fullname" . }}-${node}.{{ include "keydb.fullname" . }}-headless {{ .Values.port }}")
fi
done
exec keydb-server /etc/keydb/redis.conf \
--active-replica {{ .Values.activeReplicas | quote }} \
--multi-master {{ .Values.multiMaster | quote }} \
--appendonly {{ .Values.appendonly | quote }} \
--bind "0.0.0.0" \
--port "{{ .Values.internalPort }}" \
--protected-mode {{ .Values.protectedMode | quote }} \
--server-threads {{ .Values.threads | quote }} \
{{- if .Values.existingSecret }}
--masterauth "${REDIS_PASSWORD}" \
--requirepass "${REDIS_PASSWORD}" \
{{- else if .Values.password }}
--masterauth {{ .Values.password | quote }} \
--requirepass {{ .Values.password | quote }} \
{{- end }}
{{- range $item := .Values.configExtraArgs }}
{{- range $key, $value := $item }}
{{- if kindIs "invalid" $value }}
--{{ $key }} \
{{- else if kindIs "slice" $value }}
--{{ $key }}{{ range $value }} {{ . | quote }}{{ end }} \
{{- else }}
--{{ $key }} {{ $value | quote }} \
{{- end }}
{{- end }}
{{- end }}
"${replicas[@]}"
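To see what the replica loop in `server.sh` produces, here is the same logic run standalone with assumed values (release fullname `keydb`, `nodes: 3`, `port: 6379`), as if executed on pod `keydb-0`:

```shell
host="keydb-0"       # in the pod this comes from hostname
fullname="keydb"     # stand-in for {{ include "keydb.fullname" . }}
port="6379"
replicas=()
for node in 0 1 2; do          # {0..nodes-1}
  if [ "${host}" != "${fullname}-${node}" ]; then
    replicas+=("--replicaof ${fullname}-${node}.${fullname}-headless ${port}")
  fi
done
# Each pod ends up with one --replicaof flag per peer, pointing at the
# peer's stable DNS name under the headless service.
printf '%s\n' "${replicas[@]}"
```

With `multi-master` and `active-replica` both set to `yes`, every node replicates from every other node, which is what makes the all-peers loop correct.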

@@ -0,0 +1,31 @@
{{- if and .Values.exporter.enabled .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "keydb.fullname" . }}
labels:
{{- include "keydb.labels" . | nindent 4 }}
{{- if .Values.serviceMonitor.labels }}
{{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
{{- end }}
{{- if .Values.serviceMonitor.annotations }}
annotations:
{{- toYaml .Values.serviceMonitor.annotations | nindent 4 }}
{{- end }}
spec:
selector:
matchLabels:
{{- include "keydb.labels" . | nindent 6 }}
namespaceSelector:
matchNames:
- {{.Release.Namespace }}
endpoints:
- port: redis-exporter
path: {{ .Values.exporter.scrapePath }}
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
{{- if .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
{{- end }}
{{- end }}

@@ -0,0 +1,311 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "keydb.fullname" . }}
{{- if .Values.annotations }}
annotations:
{{- toYaml .Values.annotations | nindent 4 }}
{{- end }}
labels:
{{- include "keydb.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.nodes }}
serviceName: {{ include "keydb.fullname" . }}-headless
selector:
matchLabels:
{{- include "keydb.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/secret-utils: {{ include (print $.Template.BasePath "/secret-utils.yaml") . | sha256sum }}
{{- if .Values.exporter.enabled }}
prometheus.io/scrape: "true"
prometheus.io/path: "{{ .Values.exporter.scrapePath }}"
prometheus.io/port: "{{ .Values.exporter.port }}"
{{- end }}
{{- if .Values.podAnnotations }}
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
labels:
{{- include "keydb.labels" . | nindent 8 }}
spec:
affinity:
{{- include "common.tplvalues.render" (dict "value" .Values.affinity "context" $) | nindent 8 }}
{{- if .Values.extraInitContainers }}
initContainers:
{{- toYaml .Values.extraInitContainers | nindent 6 }}
{{- end }}
containers:
- name: keydb
{{- if .Values.image }}
image: {{ .Values.image }}
{{- else }}
image: {{ .Values.imageRepository }}:{{ .Values.imageTag }}
{{- end }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
command:
- /utils/server.sh
env:
- name: REDIS_PORT
value: {{ .Values.internalPort | quote }}
{{- if .Values.existingSecret }}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.existingSecret }}
key: {{ .Values.existingSecretPasswordKey }}
{{- else if .Values.password }}
- name: REDIS_PASSWORD
value: "{{ .Values.password }}"
{{- end }}
ports:
- name: {{ .Values.internalPortName }}
containerPort: {{ .Values.internalPort | int }}
protocol: TCP
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
{{- if .Values.livenessProbe.custom }}
{{- toYaml .Values.livenessProbe.custom | nindent 10 }}
{{- else }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: {{ add1 .Values.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
exec:
command:
- sh
- -c
- /health/ping_liveness_local.sh {{ .Values.livenessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
{{- if .Values.readinessProbe.custom }}
{{- toYaml .Values.readinessProbe.custom | nindent 10 }}
{{- else }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: {{ add1 .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh {{ .Values.readinessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.startupProbe.enabled }}
startupProbe:
{{- if .Values.startupProbe.custom }}
{{- toYaml .Values.startupProbe.custom | nindent 10 }}
{{- else }}
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: {{ add1 .Values.startupProbe.timeoutSeconds }}
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh {{ .Values.startupProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
{{- toYaml .Values.lifecycle | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 10 }}
securityContext:
{{- toYaml .Values.keydb.securityContext | nindent 10 }}
volumeMounts:
- name: health
mountPath: /health
- name: keydb-data
mountPath: /data
- name: utils
mountPath: /utils
readOnly: true
{{- if .Values.exporter.enabled }}
- name: redis-exporter
{{- if .Values.exporter.image }}
image: {{ .Values.exporter.image }}
{{- else }}
image: {{ .Values.exporter.imageRepository }}:{{ .Values.exporter.imageTag }}
{{- end }}
imagePullPolicy: {{ .Values.exporter.pullPolicy }}
args:
{{- range $item := .Values.exporter.extraArgs }}
{{- range $key, $value := $item }}
{{- if kindIs "invalid" $value }}
- --{{ $key }}
{{- else if kindIs "slice" $value }}
- --{{ $key }}
{{- range $value }}
- {{ . | quote }}
{{- end }}
{{- else }}
- --{{ $key }}
- {{ $value | quote }}
{{- end }}
{{- end }}
{{- end }}
env:
- name: REDIS_EXPORTER_WEB_LISTEN_ADDRESS
value: "0.0.0.0:{{ .Values.exporter.port }}"
- name: REDIS_EXPORTER_WEB_TELEMETRY_PATH
value: {{ .Values.exporter.scrapePath | quote }}
- name: REDIS_ADDR
value: "redis://localhost:{{ .Values.internalPort }}"
{{- if .Values.existingSecret }}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.existingSecret }}
key: {{ .Values.existingSecretPasswordKey }}
{{- else if .Values.password }}
- name: REDIS_PASSWORD
value: "{{ .Values.password }}"
{{- end }}
{{- if .Values.exporter.livenessProbe }}
livenessProbe:
{{- toYaml .Values.exporter.livenessProbe | nindent 10 }}
{{- end }}
{{- if .Values.exporter.readinessProbe }}
readinessProbe:
{{- toYaml .Values.exporter.readinessProbe | nindent 10 }}
{{- end }}
{{- if .Values.exporter.startupProbe }}
startupProbe:
{{- toYaml .Values.exporter.startupProbe | nindent 10 }}
{{- end }}
resources:
{{- toYaml .Values.exporter.resources | nindent 10 }}
securityContext:
{{- toYaml .Values.exporter.securityContext | nindent 10 }}
ports:
- name: {{ .Values.exporter.portName | quote }}
containerPort: {{ .Values.exporter.port }}
protocol: TCP
{{- end }}
{{- if .Values.scripts.enabled }}
- name: scripts
{{- if .Values.image }}
image: {{ .Values.image }}
{{- else }}
image: {{ .Values.imageRepository }}:{{ .Values.imageTag }}
{{- end }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
command:
- /health/scripts_local.sh
env:
- name: REDIS_PORT
value: {{ .Values.internalPort | quote }}
{{- if .Values.existingSecret }}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.existingSecret }}
key: {{ .Values.existingSecretPasswordKey }}
{{- else if .Values.password }}
- name: REDIS_PASSWORD
value: "{{ .Values.password }}"
{{- end }}
resources:
{{- toYaml .Values.scripts.resources | nindent 10 }}
securityContext:
{{- toYaml .Values.scripts.securityContext | nindent 10 }}
volumeMounts:
- name: health
mountPath: /health
- name: keydb-data
mountPath: /data
{{- end }}
{{- if .Values.extraContainers }}
{{- toYaml .Values.extraContainers | nindent 6 }}
{{- end }}
imagePullSecrets:
{{- toYaml .Values.imagePullSecrets | nindent 8 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 8 }}
{{- if .Values.serviceAccount.enabled }}
serviceAccountName: {{ include "keydb.serviceAccountName" . | quote }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{- toYaml .Values.tolerations | nindent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{- toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- range .Values.topologySpreadConstraints }}
- labelSelector:
matchLabels:
{{- include "keydb.selectorLabels" $ | nindent 14 }}
topologyKey: {{ default "topology.kubernetes.io/zone" .topologyKey }}
maxSkew: {{ .maxSkew }}
{{- if .minDomains }}
minDomains: {{ .minDomains }}
{{- end }}
whenUnsatisfiable: {{ default "DoNotSchedule" .whenUnsatisfiable }}
{{- if .nodeAffinityPolicy }}
nodeAffinityPolicy: {{ .nodeAffinityPolicy }}
{{- end }}
{{- if .nodeTaintsPolicy }}
nodeTaintsPolicy: {{ .nodeTaintsPolicy }}
{{- end }}
{{- end }}
{{- end }}
volumes:
- name: health
configMap:
name: {{ include "keydb.fullname" . }}-health
defaultMode: 0755
- name: utils
secret:
secretName: {{ include "keydb.fullname" . }}-utils
defaultMode: 0755
items:
- key: server.sh
path: server.sh
{{- if not .Values.persistentVolume.enabled }}
- name: keydb-data
emptyDir: {{- toYaml .Values.persistentVolume.emptyDir | nindent 10 }}
{{- end }}
{{- if .Values.extraVolumes }}
{{- toYaml .Values.extraVolumes | nindent 6 }}
{{- end }}
{{- if .Values.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
name: keydb-data
annotations:
{{- if .Values.persistentVolume.annotations }}
{{- toYaml .Values.persistentVolume.annotations | nindent 8 }}
{{- end }}
spec:
accessModes:
{{- toYaml .Values.persistentVolume.accessModes | nindent 8 }}
resources:
requests:
storage: {{ .Values.persistentVolume.size }}
{{- if .Values.persistentVolume.storageClass }}
{{- if (eq "-" .Values.persistentVolume.storageClass) }}
storageClassName: ""
{{ else }}
storageClassName: {{ .Values.persistentVolume.storageClass }}
{{- end }}
{{- end }}
{{- if .Values.persistentVolume.selector }}
selector:
{{- toYaml .Values.persistentVolume.selector | nindent 8 }}
{{- end }}
{{- end }}

@@ -0,0 +1,22 @@
# Headless service
apiVersion: v1
kind: Service
metadata:
name: {{ include "keydb.fullname" . }}-headless
labels:
{{- include "keydb.labels" . | nindent 4 }}
annotations:
{{- toYaml .Values.service.annotations | nindent 4 }}
spec:
type: ClusterIP
clusterIP: None
ports:
- name: {{ .Values.portName | quote }}
port: {{ .Values.port | int }}
protocol: TCP
targetPort: {{ .Values.internalPortName | quote }}
{{- if .Values.service.appProtocol.enabled }}
appProtocol: redis
{{- end }}
selector:
{{- include "keydb.selectorLabels" . | nindent 4 }}

@@ -0,0 +1,26 @@
{{- if .Values.loadBalancer.enabled }}
# Load balancer service
apiVersion: v1
kind: Service
metadata:
name: {{ include "keydb.fullname" . }}-lb
labels:
{{- include "keydb.labels" . | nindent 4 }}
annotations:
{{- toYaml .Values.loadBalancer.annotations | nindent 4 }}
spec:
type: LoadBalancer
{{- if .Values.loadBalancer.extraSpec }}
{{- toYaml .Values.loadBalancer.extraSpec | nindent 2 }}
{{- end }}
ports:
- name: {{ .Values.portName | quote }}
port: {{ .Values.port | int }}
protocol: TCP
targetPort: {{ .Values.internalPortName | quote }}
{{- if .Values.service.appProtocol.enabled }}
appProtocol: redis
{{- end }}
selector:
{{- include "keydb.selectorLabels" . | nindent 4 }}
{{- end }}

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "keydb.fullname" . }}
labels:
{{- include "keydb.labels" . | nindent 4 }}
annotations:
{{- toYaml .Values.service.annotations | nindent 4 }}
spec:
type: ClusterIP
ports:
- name: {{ .Values.portName | quote }}
port: {{ .Values.port | int }}
protocol: TCP
targetPort: {{ .Values.internalPortName | quote }}
{{- if .Values.service.appProtocol.enabled }}
appProtocol: redis
{{- end }}
- name: {{ .Values.exporter.portName | quote }}
port: {{ .Values.exporter.port | int }}
protocol: TCP
targetPort: {{ .Values.exporter.portName | quote }}
{{- if .Values.service.appProtocol.enabled }}
appProtocol: http
{{- end }}
selector:
{{- include "keydb.selectorLabels" . | nindent 4 }}
sessionAffinity: ClientIP

charts/keydb/values.yaml
@@ -0,0 +1,279 @@
nameOverride: ""
fullnameOverride: ""
imageRepository: eqalpha/keydb
imageTag: x86_64_v6.3.4
imagePullPolicy: IfNotPresent
imagePullSecrets: []
nodes: 3
password: ""
existingSecret: ""
existingSecretPasswordKey: "password"
port: 6379
portName: server
internalPort: 6379
internalPortName: keydb
threads: 2
multiMaster: "yes"
activeReplicas: "yes"
protectedMode: "no"
appendonly: "no"
annotations: {}
configExtraArgs: []
# - somesimple: "argument"
# - client-output-buffer-limit: ["normal", "0", "0", "0"]
# - client-output-buffer-limit: ["replica", "268435456", "67108864", "60"]
# - client-output-buffer-limit: ["pubsub", "33554432", "8388608", "60"]
podAnnotations: {}
tolerations: []
# - effect: NoSchedule
# key: key
# operator: Equal
# value: value
nodeSelector: {}
# topology.kubernetes.io/region: some-region
topologySpreadConstraints: []
# - maxSkew: 1
# ## Optional keys
# # whenUnsatisfiable: DoNotSchedule
# # topologyKey: "topology.kubernetes.io/zone"
# # minDomains: 1
# # nodeAffinityPolicy: Honor
# # nodeTaintsPolicy: Honor
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- '{{ include "keydb.name" . }}'
- key: app.kubernetes.io/instance
operator: In
values:
- '{{ .Release.Name }}'
topologyKey: "kubernetes.io/hostname"
additionalAffinities: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: node_pool
# operator: In
# values: somenodepool
podDisruptionBudget:
enabled: true
maxUnavailable: 1
# Additional init containers
extraInitContainers: []
# Additional sidecar containers
extraContainers: []
# - name: backup
# image: minio/mc:latest
# Volumes that can be used in init and sidecar containers
extraVolumes: []
# - name: volume-from-secret
# secret:
# secretName: secret-to-mount
# - name: empty-dir-volume
# emptyDir: {}
# Liveness Probe
livenessProbe:
enabled: true
custom: {}
# tcpSocket:
# port: keydb
# initialDelaySeconds: 30
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
# Readiness Probe
readinessProbe:
enabled: true
custom: {}
# tcpSocket:
# port: keydb
# initialDelaySeconds: 30
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
# Random UUID for readiness GET probe
readinessProbeRandomUuid: "90f717dd-0e68-43b8-9363-fddaad00d6c9"
# Startup Probe
startupProbe:
enabled: true
custom: {}
# tcpSocket:
# port: keydb
periodSeconds: 5
timeoutSeconds: 1
failureThreshold: 24
# Lifecycle Hooks
lifecycle: {}
# preStop:
# exec:
# command:
# - sh
# - -c
# - "sleep 15; kill 1"
persistentVolume:
enabled: true
accessModes:
- ReadWriteOnce
selector: {}
# matchLabels:
# release: "stable"
# matchExpressions:
# - {key: environment, operator: In, values: [dev]}
size: 1Gi
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
  # If persistentVolume is disabled, use this to configure the emptyDir
emptyDir: {}
resources: {}
# Please read https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls
# before configuring sysctls
securityContext: {}
# sysctls:
# - name: net.core.somaxconn
# value: "512"
# - name: vm.overcommit_memory
# value: "1"
keydb:
# Container security context
securityContext: {}
service:
annotations: {}
appProtocol:
enabled: false
serviceAccount:
enabled: false
create: true
name: ""
# extraSpec:
# automountServiceAccountToken: false
# imagePullSecrets:
# - name: pull-secret
extraSpec: {}
loadBalancer:
enabled: false
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-type: nlb
annotations: {}
# extraSpec:
# loadBalancerIP: "1.2.3.4"
# loadBalancerSourceRanges:
# - 1.2.3.4/32
extraSpec: {}
# Prometheus-operator ServiceMonitor
serviceMonitor:
# Redis exporter must also be enabled
enabled: false
labels:
annotations:
interval: 30s
# scrapeTimeout: 20s
# Redis exporter
exporter:
enabled: false
imageRepository: oliver006/redis_exporter
imageTag: v1.48.0-alpine
pullPolicy: IfNotPresent
# Prometheus port & scrape path
port: 9121
portName: redis-exporter
scrapePath: /metrics
# Liveness Probe
livenessProbe:
httpGet:
path: /health
port: redis-exporter
# Readiness Probe
readinessProbe:
httpGet:
path: /health
port: redis-exporter
# Startup Probe
startupProbe:
httpGet:
path: /health
port: redis-exporter
failureThreshold: 24
periodSeconds: 5
# CPU/Memory resource limits/requests
resources: {}
# Container security context
securityContext: {}
# Additional args for redis exporter
extraArgs: []
# - somesimple: "argument"
# - client-output-buffer-limit: ["normal", "0", "0", "0"]
# - client-output-buffer-limit: ["replica", "268435456", "67108864", "60"]
# - client-output-buffer-limit: ["pubsub", "33554432", "8388608", "60"]
scripts:
enabled: false
# CPU/Memory resource limits/requests
resources: {}
# Container security context
securityContext: {}
cleanupCoredumps:
enabled: false
minutes: 1440
cleanupTempfiles:
enabled: true
minutes: 60
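A minimal override file pulling several of these knobs together (all values below are illustrative, not recommendations):

```yaml
# my-values.yaml -- illustrative overrides for this chart
nodes: 3
existingSecret: keydb-password   # secret created beforehand, see README
persistentVolume:
  size: 10Gi
exporter:
  enabled: true
serviceMonitor:
  enabled: true                  # requires the Prometheus operator CRDs
podDisruptionBudget:
  enabled: true
  maxUnavailable: 1
```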

@@ -1,4 +1,4 @@
apiVersion: v2
description: A Helm chart for maildev
name: maildev
version: 0.0.34
version: 0.0.43

@@ -42,7 +42,7 @@ spec:
- name: reload-mail
image: "{{ .Values.cron.image.repository }}:{{ .Values.cron.image.tag }}"
imagePullPolicy: {{ .Values.cron.image.pullPolicy }}
args: ["wget","{{ include "common.names.fullname" . }}:1080/reloadMailsFromDirectory"]
args: ["wget","{{ include "common.names.fullname" . }}:1080/reloadMailsFromDirectory","-q","-O","/dev/null"]
securityContext:
runAsUser: 1000
runAsGroup: 1000

@@ -1,3 +1,3 @@
apiVersion: v2
name: modjo-microservice
version: 0.0.34
version: 0.0.43

@@ -70,6 +70,7 @@ spec:
- name: http
containerPort: {{ .Values.httpContainerPort }}
protocol: TCP
{{- end }}
{{- if .Values.customLivenessProbe }}
livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }}
{{- else if .Values.livenessProbe.enabled }}
@@ -109,7 +110,6 @@ spec:
successThreshold: {{ .Values.startupProbe.successThreshold }}
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
{{- end }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.lifecycleHooks }}

@@ -64,9 +64,9 @@ startupProbe:
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 60
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
customLivenessProbe:
customReadinessProbe:
customStartupProbe:
lifecycleHooks: {}
initContainers: []
sidecars: []
@@ -112,6 +112,7 @@ ingress:
className: ""
annotations: {}
hostname: api.local
tls: true
tlsSecretname: alerte-secours-tls
resources: {}

@@ -1,5 +1,5 @@
{
"version": "0.0.34",
"version": "0.0.43",
"repository": "git@codeberg.org:devthefuture/helm-charts.git",
"license": "MIT",
"private": true,