PostgreSQL-HA Helm Chart
High-availability, cloud-native PostgreSQL based on Stolon
- Chart Version: 0.4.3
Prerequisites
- Kubernetes 1.21+
- Helm 3+
- PV provisioner support in the underlying infrastructure (when using volumes)
How to use this chart?
Installing the Chart
Add the Helm repository first:
helm repo add kubit-packs https://repo.sabz.dev/artifactory/kubit-packs
To install the chart with the release name my-postgresql-ha, create a my-postgresql-ha.values.yaml file with the following contents:
# my-postgresql-ha.values.yaml
clusterName: ...
debug: ...
superuserUsername: ...
superuserPassword: ...
superuserPasswordFile: ...
#...
and then run the following commands:
kubectl create ns test-postgresql-ha || true
helm upgrade --install -n test-postgresql-ha my-postgresql-ha kubit-packs/postgresql-ha -f my-postgresql-ha.values.yaml
The command deploys postgresql-ha on the Kubernetes cluster with the given parameters. The Parameters section lists the parameters that can be configured during installation.
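To verify the rollout, you can watch the pods come up; keeper, sentinel, and proxy pods should all become Ready (pod names derive from the release name):
kubectl -n test-postgresql-ha get pods -w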
Tip: List all releases using
helm list
Uninstalling the Chart
To uninstall/delete the my-postgresql-ha release:
helm delete -n test-postgresql-ha my-postgresql-ha
The command removes all the Kubernetes components associated with the chart and deletes the release.
Install via Pack
Create a my-postgresql-ha.pack.yaml file with the following content:
# my-postgresql-ha.pack.yaml
apiVersion: k8s.kubit.ir/v1alpha1
kind: Pack
metadata:
  name: my-postgresql-ha
  namespace: test-postgresql-ha
spec:
  chart:
    repository:
      kind: ClusterPackRepository
      name: kubit-packs
    name: postgresql-ha
    version: ~=0.4.3
  values:
    clusterName: ...
    debug: ...
    superuserUsername: ...
    superuserPassword: ...
    superuserPasswordFile: ...
    #...
and then run the following commands:
kubectl create ns test-postgresql-ha || true
kubectl create -f my-postgresql-ha.pack.yaml
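To confirm the Pack was accepted, you can query it like any other Kubernetes resource (the reported status columns depend on the Pack controller):
kubectl -n test-postgresql-ha get pack my-postgresql-ha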
Uninstalling via Pack
To uninstall/delete the my-postgresql-ha pack:
kubectl -n test-postgresql-ha delete pack my-postgresql-ha
The command removes all the Kubernetes components associated with the chart and deletes the pack.
Parameters
The following table lists the configurable parameters of the postgresql-ha chart and their default values.
Parameter | Type | Description | Default |
---|---|---|---|
clusterName | string | stolon cluster name override | "" |
debug | bool | Enable debug mode | false |
global.commonImageRegistry | string | Common image registry for all images | "" |
image.registry | string | Postgres/Stolon image registry | "" |
image.repository | string | Postgres/Stolon image repository | "sabzco/postgres" |
image.tag | string | Postgres/Stolon image tag | "15" |
image.keeperTag | string | Keeper-specific image tag; set it to prevent keeper pod restarts when image.tag changes | "" |
image.pullPolicy | string | Postgres/Stolon image pull policy | "IfNotPresent" |
dockerize.image.repository | string | Dockerize image repository | "jwilder/dockerize" |
dockerize.image.pullPolicy | string | Dockerize image pull policy | "IfNotPresent" |
shmVolume.enabled | bool | Enable emptyDir volume for /dev/shm on keeper pods | false |
persistence.enabled | bool | Enable data persistence for keepers | true |
persistence.size | string | Keepers volume size | "" |
persistence.storageClass | string | Storage class name of backing keepers PVC | "" |
persistence.accessModes | list | Keeper persistent volumes access modes | ["ReadWriteOnce"] |
rbac.create | bool | Specifies if RBAC resources should be created | true |
serviceAccount.create | bool | Specifies if ServiceAccount should be created | true |
serviceAccount.name | string | The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template | "" |
serviceAccount.imagePullSecrets | list | List of pull secrets added to ServiceAccount | [] |
serviceAccount.imagePullSecrets[].name | string | Name of Secret used to pull images | "" |
superuserUsername | string | Postgresql superuser username | "postgres" |
superuserPassword | string | Password for the superuser (REQUIRED if superuserSecret and superuserPasswordFile are not set) | "" |
superuserPasswordFile | string | File from which to read the PostgreSQL superuser password | "" |
superuserSecret.name | string | Postgresql superuser credential secret name | "" |
superuserSecret.usernameKey | string | Username key of Postgresql superuser in secret | "pg_su_username" |
superuserSecret.passwordKey | string | Password key of Postgresql superuser in secret | "pg_su_password" |
replicationUsername | string | Replication username | "replica" |
replicationPassword | string | Password for the replication user (REQUIRED if replicationSecret and replicationPasswordFile are not set) | "" |
replicationPasswordFile | string | File from which to read the replication password | "" |
replicationSecret.name | string | Postgresql replication credential secret name | "" |
replicationSecret.usernameKey | string | Username key of Postgresql replication in secret | "pg_repl_username" |
replicationSecret.passwordKey | string | Password key of Postgresql replication in secret | "pg_repl_password" |
store.backend | string | Stolon store backend; one of consul, etcdv2, etcdv3, or kubernetes. If set to kubernetes or consul, set etcd.enabled to false | "etcdv2" |
store.endpoints | string | Store backend endpoints (e.g. http://stolon-etcd:2379) | nil |
store.kubeResourceKind | string | Kubernetes resource kind, one of configmap or secret (only for the kubernetes backend) | nil |
pgParameters | object | postgresql.conf options used during cluster creation | see below |
pgParameters.shared_buffers | string | Sets the number of shared memory buffers used by the server | 0.25 * resources.requests.memory |
pgParameters.log_checkpoints | on/off | Logs each checkpoint | "on" |
pgParameters.log_lock_waits | on/off | Logs long lock waits | "on" |
pgParameters.checkpoint_completion_target | string | Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval | "0.9" |
pgParameters.wal_keep_size | string | Sets the size of WAL files held for standby servers | "1GB" |
serviceMonitor.enabled | bool | If true, use a ServiceMonitor to collect metrics | true |
serviceMonitor.labels | object | Custom labels to use in the ServiceMonitor to be matched with a specific Prometheus | {} |
serviceMonitor.namespace | string | Set the namespace the ServiceMonitor should be deployed to | "default" |
serviceMonitor.interval | string | How frequently Prometheus should scrape | "30s" |
serviceMonitor.scrapeTimeout | string | How long Prometheus waits for a scrape before timing out (scrapeTimeout must be lower than interval) | "10s" |
forceInitCluster | bool | Force Stolon cluster initialization | false |
databases | array | Array of databases to create | [] |
databases[].database | string | Name of database | "" |
databases[].databaseCreationExtraArguments | string | Extra arguments appended to the CREATE DATABASE SQL command | "" |
databases[].username | string | User to create and grant access to the database | "" |
databases[].password | string | Password of user | "" |
databases[].extensions | list of string | List of extensions to create for this database | [] |
mode | string | Stolon mode; the default creates a standalone cluster, set to standby to follow another PostgreSQL instance | "standalone" |
standbyConfig | object | Specification of the master PostgreSQL when mode is standby | {"certs":{"enabled":false,"files":{"ca.crt":"","tls.crt":"","tls.key":""},"path":"certs"},"host":"","port":"","sslmode":"disable"} |
standbyConfig.host | string | Host of master postgresql | "" |
standbyConfig.port | string | Port of master postgresql | "" |
standbyConfig.sslmode | string | sslmode used when connecting to the master PostgreSQL | "disable" |
standbyConfig.certs | object | Certificate properties in connecting to master postgresql | {"enabled":false,"files":{"ca.crt":"","tls.crt":"","tls.key":""},"path":"certs"} |
standbyConfig.certs.enabled | bool | If enabled, the given certificates are mounted in keeper pods | false |
standbyConfig.certs.path | string | Path to mount certificates | "certs" |
standbyConfig.certs.files | object | Certificate files | {} |
standbyConfig.certs.files."ca.crt" | string | Content of ca.crt file | "" |
standbyConfig.certs.files."tls.crt" | string | Content of tls.crt file | "" |
standbyConfig.certs.files."tls.key" | string | Content of tls.key file | "" |
clusterSpec | object | Stolon cluster spec reference | {} |
tls | object | Enable SSL support in PostgreSQL; you must specify the certs | {} |
tls.enabled | bool | Enable TLS support for PostgreSQL | false |
tls."ca.crt" | string | Content of ca.crt file | "" |
tls."tls.crt" | string | Content of tls.crt file | "" |
tls."tls.key" | string | Content of tls.key file | "" |
tls.existingSecret | string | Existing Secret containing the certificate content for Stolon | nil |
keeper.uid_prefix | string | Keeper prefix name | "keeper" |
keeper.replicaCount | int | Number of keeper nodes | 2 |
keeper.annotations | object | Keeper pod annotations | {} |
keeper.resources | object | Keeper resource requests/limits | {} |
keeper.priorityClassName | string | Keeper priorityClassName | "" |
keeper.podSecurityContext.fsGroup | int | Keeper securityContext fsGroup; do not set for PostgreSQL 9 or 10 | 1000 |
keeper.podSecurityContext.fsGroupChangePolicy | string | Keeper securityContext fsGroupChangePolicy | "OnRootMismatch" |
keeper.updateStrategy | object | Keeper update strategy | {} |
keeper.service.type | string | Keeper service type | "ClusterIP" |
keeper.service.annotations | object | Keeper service annotations | {} |
keeper.affinity | object | Affinity settings for keeper pod assignment | {} |
keeper.antiAffinityMode | string | Keeper anti-affinity mode | "required" |
keeper.nodeSelector | object | Node labels for keeper pod assignment | {} |
keeper.tolerations | list | Toleration labels for keeper pod assignment | [] |
keeper.volumes | list | Keeper Additional volumes | [] |
keeper.volumeMounts | list | Mount paths for keeper.volumes | [] |
keeper.hooks.failKeeper.enabled | bool | Enable failkeeper pre-stop hook | false |
keeper.podDisruptionBudget.enabled | bool | If true, create a pod disruption budget for keeper pods. | true |
keeper.podDisruptionBudget.minAvailable | int | Minimum number / percentage of pods that should remain scheduled | 1 if keeper.replicaCount >= 2 and no PDB values are set, otherwise nil |
keeper.podDisruptionBudget.maxUnavailable | int | Maximum number / percentage of pods that may be made unavailable | 1 if keeper.replicaCount == 1 and no PDB values are set, otherwise nil |
keeper.terminationGracePeriodSeconds | int | Optional duration in seconds the keeper pod needs to terminate gracefully. | 10 |
keeper.extraEnv | list | Extra environment variables for keeper | [] |
keeper.networkPolicy.enabled | bool | Enable NetworkPolicy for keeper pods | true |
keeper.networkPolicy.metricsExtraFrom | list | Extra NetworkPolicy from rules for the metrics port | [] |
keeper.readinessProbe.enabled | bool | Enable keeper readiness probe | true |
keeper.readinessProbe.port | int | Keeper readiness probe port | 10101 |
keeper.readinessProbe.path | string | Keeper readiness probe path | "/readiness" |
keeper.readinessProbe.initialDelaySeconds | int | Keeper readiness probe initial delay | 2 |
keeper.readinessProbe.periodSeconds | int | Keeper readiness probe period | 10 |
keeper.readinessProbe.timeoutSeconds | int | Keeper readiness probe timeout | 1 |
keeper.readinessProbe.successThreshold | int | Keeper readiness probe success threshold | 1 |
keeper.readinessProbe.failureThreshold | int | Keeper readiness probe failure threshold | 3 |
proxy.replicaCount | int | Number of proxy pods | 2 |
proxy.annotations | object | Proxy pod annotations | {} |
proxy.resources | object | Proxy resource requests/limits | {"requests":{"cpu":"20m","memory":"200Mi"}} |
proxy.priorityClassName | string | Proxy priorityClassName | "" |
proxy.service.type | string | Proxy service type | "ClusterIP" |
proxy.service.annotations | object | Proxy service annotations | {} |
proxy.service.ports.proxy.port | int | Proxy service port | 5432 |
proxy.service.ports.proxy.targetPort | int | Proxy service target port | 5432 |
proxy.service.ports.proxy.protocol | string | Proxy service protocol | "TCP" |
proxy.affinity | object | Affinity settings for proxy pod assignment | {} |
proxy.antiAffinityMode | string | Proxy anti-affinity mode | "required" |
proxy.nodeSelector | object | Node labels for proxy pod assignment | {} |
proxy.tolerations | list | Toleration labels for proxy pod assignment | [] |
proxy.podDisruptionBudget.enabled | bool | If true, create a pod disruption budget for proxy pods. | false |
proxy.podDisruptionBudget.minAvailable | int | Minimum number / percentage of pods that should remain scheduled | 1 if proxy.replicaCount >= 2 and no PDB values are set, otherwise nil |
proxy.podDisruptionBudget.maxUnavailable | int | Maximum number / percentage of pods that may be made unavailable | 1 if proxy.replicaCount == 1 and no PDB values are set, otherwise nil |
proxy.extraEnv | list | Extra environment variables for proxy | [] |
proxy.networkPolicy.enabled | bool | Enable NetworkPolicy for proxy pods | false |
proxy.networkPolicy.sameNamespace | bool | Restrict NetworkPolicy to clients in the same namespace | true |
proxy.networkPolicy.extraFrom | list | Extra NetworkPolicy from rules | [] |
proxy.readinessProbe.port | int | Proxy readiness probe port | 5432 |
proxy.readinessProbe.initialDelaySeconds | int | Proxy readiness probe initial delay | 10 |
proxy.readinessProbe.timeoutSeconds | int | Proxy readiness probe timeout | 5 |
slaveProxy.enabled | bool | Enable creation of slave-proxy deployment to connect to slave keepers | false |
slaveProxy.replicaCount | int | Number of slave-proxy pods | 2 |
slaveProxy.annotations | object | Slave-proxy pod annotations | {} |
slaveProxy.resources.requests.memory | string | Slave-proxy memory request | "200Mi" |
slaveProxy.resources.requests.cpu | string | Slave-proxy CPU request | "20m" |
slaveProxy.priorityClassName | string | Slave-proxy priorityClassName | "" |
slaveProxy.service.type | string | Slave-proxy service type | "ClusterIP" |
slaveProxy.service.annotations | object | Slave-proxy service annotations | {} |
slaveProxy.service.ports.proxy.port | int | Slave-proxy service port | 5432 |
slaveProxy.service.ports.proxy.targetPort | int | Slave-proxy service target port | 5432 |
slaveProxy.service.ports.proxy.protocol | string | Slave-proxy service protocol | "TCP" |
slaveProxy.affinity | object | Affinity settings for slave-proxy pod assignment | {} |
slaveProxy.antiAffinityMode | string | Slave-proxy anti-affinity mode | "required" |
slaveProxy.nodeSelector | object | Node labels for slave-proxy pod assignment | {} |
slaveProxy.tolerations | list | Toleration labels for slave-proxy pod assignment | [] |
slaveProxy.podDisruptionBudget.enabled | bool | If true, create a pod disruption budget for slaveProxy pods. | false |
slaveProxy.podDisruptionBudget.minAvailable | int | Minimum number / percentage of pods that should remain scheduled | 1 if slaveProxy.replicaCount >= 2 and no PDB values are set, otherwise nil |
slaveProxy.podDisruptionBudget.maxUnavailable | int | Maximum number / percentage of pods that may be made unavailable | 1 if slaveProxy.replicaCount == 1 and no PDB values are set, otherwise nil |
slaveProxy.extraEnv | list | Extra environment variables for slave-proxy | [] |
slaveProxy.networkPolicy.enabled | bool | Enable NetworkPolicy for slave-proxy pods | false |
slaveProxy.networkPolicy.sameNamespace | bool | Restrict NetworkPolicy to clients in the same namespace | true |
slaveProxy.networkPolicy.extraFrom | list | Extra NetworkPolicy from rules | [] |
slaveProxy.readinessProbe.port | int | Slave-proxy readiness probe port | 5432 |
slaveProxy.readinessProbe.initialDelaySeconds | int | Slave-proxy readiness probe initial delay | 10 |
slaveProxy.readinessProbe.timeoutSeconds | int | Slave-proxy readiness probe timeout | 5 |
sentinel.replicaCount | int | Number of sentinel pods | 3 |
sentinel.annotations | object | Sentinel pod annotations | {} |
sentinel.resources | object | Sentinel resource requests/limits | {"requests":{"cpu":"10m","memory":"50Mi"}} |
sentinel.priorityClassName | string | Sentinel priorityClassName | "" |
sentinel.affinity | object | Affinity settings for sentinel pod assignment | {} |
sentinel.antiAffinityMode | string | Sentinel anti-affinity mode | "required" |
sentinel.nodeSelector | object | Node labels for sentinel pod assignment | {} |
sentinel.tolerations | list | Toleration labels for sentinel pod assignment | [] |
sentinel.podDisruptionBudget.enabled | bool | If true, create a pod disruption budget for sentinel pods. | false |
sentinel.podDisruptionBudget.minAvailable | int | Minimum number / percentage of pods that should remain scheduled | 1 if sentinel.replicaCount >= 2 and no PDB values are set, otherwise nil |
sentinel.podDisruptionBudget.maxUnavailable | int | Maximum number / percentage of pods that may be made unavailable | 1 if sentinel.replicaCount == 1 and no PDB values are set, otherwise nil |
sentinel.extraEnv | list | Extra environment variables for sentinel | [] |
sentinel.livenessProbe.enabled | bool | Enable sentinel liveness probe | false |
sentinel.livenessProbe.command[0] | string | Sentinel liveness probe command | "/tmp/job-scripts/sentinel-cluster-has-leader.sh" |
sentinel.livenessProbe.initialDelaySeconds | int | Sentinel liveness probe initial delay | 5 |
sentinel.livenessProbe.periodSeconds | int | Sentinel liveness probe period | 10 |
sentinel.livenessProbe.timeoutSeconds | int | Sentinel liveness probe timeout | 1 |
sentinel.livenessProbe.successThreshold | int | Sentinel liveness probe success threshold | 1 |
sentinel.livenessProbe.failureThreshold | int | Sentinel liveness probe failure threshold | 5 |
postgresqlUpgrade.enabled | bool | Enable the PostgreSQL upgrade mechanism (note: PostgreSQL will be down during the upgrade) | false |
postgresqlUpgrade.oldVersion | string | PostgreSQL version to upgrade from | "11" |
postgresqlUpgrade.newVersion | string | PostgreSQL version to upgrade to | "13" |
postgresqlUpgrade.image.registry | string | Postgresql upgrade specific image registry | "" |
postgresqlUpgrade.image.repository | string | Postgresql upgrade specific image repository | "tianon/postgres-upgrade" |
postgresqlUpgrade.image.tag | string | Postgresql upgrade specific image tag | [oldVersion]-to-[newVersion] |
metrics.image.registry | string | Metrics exporter image registry | "" |
metrics.image.repository | string | Metrics exporter image repository | "prometheuscommunity/postgres-exporter" |
metrics.image.tag | string | Metrics exporter image tag | "v0.12.0" |
metrics.image.pullPolicy | string | Metrics exporter image pull policy | "IfNotPresent" |
metrics.database | string | Database the exporter connects to | "postgres" |
metrics.port | int | Metrics exporter port | 9187 |
metrics.defaultCustomMetrics.pg_replication | bool | Enable the pg_replication default custom metric | false |
metrics.defaultCustomMetrics.pg_postmaster | bool | Enable the pg_postmaster default custom metric | false |
metrics.defaultCustomMetrics.pg_stat_user_tables | bool | Enable the pg_stat_user_tables default custom metric | false |
metrics.defaultCustomMetrics.pg_statio_user_tables | bool | Enable the pg_statio_user_tables default custom metric | false |
metrics.defaultCustomMetrics.pg_stat_statements | bool | Enable the pg_stat_statements default custom metric | false |
metrics.defaultCustomMetrics.pg_process_idle | bool | Enable the pg_process_idle default custom metric | false |
metrics.postgres_exporter_yml.auth_modules | object | auth_modules section of postgres_exporter.yml | {} |
metrics.customMetrics | object | Additional custom metrics | {} |
adminer.enabled | bool | Enable adminer deployment, a full-featured database management tool | false |
adminer.replicaCount | int | Number of adminer pods | 1 |
adminer.image.registry | string | Adminer image registry | "" |
adminer.image.repository | string | Adminer image repository | "adminer" |
adminer.image.tag | string | Adminer image tag | "4.8.1" |
adminer.image.pullPolicy | string | Adminer image pull policy | "IfNotPresent" |
adminer.ingress.annotations | object | Adminer ingress annotations | {} |
adminer.ingress.secretName | string | Adminer ingress TLS secret name | "" |
adminer.ingress.host | string | Adminer's ingress host | "" |
adminer.resources | object | Adminer resource requests/limits | {} |
adminer.theme | string | Adminer theme name | "pappu687" |
backup.enabled | bool | Enable backup mechanism by creating a CronJob | false |
backup.schedule | string | CronJob schedule | "0 0 * * *" |
backup.activeDeadlineSeconds | int | Maximum time until the job is treated as dead | 14400 |
backup.strategy | string | Determines which keeper to back up from; one of only-standby, prefer-standby, or exclusive-standby. Selecting exclusive-standby creates a dedicated keeper to back up from | "only-standby" |
backup.maxBackups | int | Maximum number of successful backups to retain; older ones are removed | 0 |
backup.provider | string | Backup storage provider, one of s3 or local | "s3" |
backup.s3 | object | Backup s3 parameters | {"accessKey":"","bucket":"","endpoint":"","existingSecret":"","pathPrefix":"","region":"","secretKey":""} |
backup.s3.existingSecret | string | Existing Secret name containing s3 parameters | "" |
backup.s3.endpoint | string | S3 endpoint | "" |
backup.s3.region | string | S3 region | "" |
backup.s3.accessKey | string | S3 AccessKey | "" |
backup.s3.secretKey | string | S3 SecretKey | "" |
backup.s3.bucket | string | S3 bucket name | "" |
backup.s3.pathPrefix | string | S3 path prefix | "" |
backup.extraArgs | list | Extra args appended to the stolonctl backup command | [] |
backup.image.registry | string | Backup image registry | "" |
backup.image.repository | string | Backup image repository | "" |
backup.image.tag | string | Backup image tag | "" |
backup.image.pullPolicy | string | Backup image pull policy | "Always" |
backup.persistence.enabled | bool | Enable persistence for local backups | false |
backup.persistence.size | string | Backup volume size | "50Gi" |
backup.persistence.accessMode | string | Backup volume access mode | "ReadWriteMany" |
backup.persistence.existingPVC | string | Existing PVC to store backups in | "" |
backup.persistence.mountPath | string | Backup volume mount path | "/backups" |
backup.resources | object | Backup job resource requests/limits | {} |
backup.affinity | object | Affinity settings for backup pod assignment | {} |
backup.antiAffinityMode | string | Backup anti-affinity mode | "required" |
Examples
Hello world
image:
  tag: v0.17.0-pg13
mode: standalone
superuserUsername: postgres
superuserPassword: my-postgres-password
replicationUsername: replica
replicationPassword: my-replica-password
persistence:
  storageClass: my-storage-class
  size: 5Gi
etcd:
  enabled: true
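Once the release is up, clients reach the current master through the proxy Service on port 5432. A quick way to test from a workstation (a sketch, assuming the proxy Service is named my-postgresql-ha-proxy and psql is installed locally; check kubectl -n test-postgresql-ha get svc for the actual name):
kubectl -n test-postgresql-ha port-forward svc/my-postgresql-ha-proxy 5432:5432
PGPASSWORD=my-postgres-password psql -h 127.0.0.1 -p 5432 -U postgres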
External etcd
etcd:
  enabled: false # default
store:
  backend: etcdv2 # default
  endpoints: http://my-etcd-endpoint:port
Custom registry
global:
  commonImageRegistry: my-docker.io
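Credentials from an existing Secret
Instead of putting passwords in values, superuserSecret and replicationSecret can point at a pre-created Secret. A minimal sketch using the default key names from the Parameters table (pg_su_username, pg_su_password, pg_repl_username, pg_repl_password); the Secret name my-pg-credentials is a placeholder:
kubectl -n test-postgresql-ha create secret generic my-pg-credentials \
  --from-literal=pg_su_username=postgres \
  --from-literal=pg_su_password=my-postgres-password \
  --from-literal=pg_repl_username=replica \
  --from-literal=pg_repl_password=my-replica-password
and then reference it in the values:
superuserSecret:
  name: my-pg-credentials
replicationSecret:
  name: my-pg-credentials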
Database creation
databases:
  - database: my-database
    username: my-username
    password: my-password
    extensions:
      - my-extension-1
      - my-extension-2
    databaseCreationExtraArguments: my-arguments
Standby mode
mode: standby
replicationUsername: my-master-pg-replication-username
replicationPassword: my-master-pg-replication-password
standbyConfig:
  host: my-master-pg-host
  port: 5432
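If the master requires TLS, the standbyConfig.certs parameters from the table above can be added to the same values file (a sketch; the certificate content is a placeholder):
standbyConfig:
  host: my-master-pg-host
  port: 5432
  sslmode: verify-ca
  certs:
    enabled: true
    files:
      ca.crt: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----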
Promotion to standalone mode
Just apply with this value:
mode: standalone # default
Resources
keeper:
  resources:
    requests:
      memory: 1Gi
      cpu: 100m
proxy:
  resources:
    requests:
      memory: 50Mi
      cpu: 50m
sentinel:
  resources:
    requests:
      memory: 10Mi
      cpu: 10m
Postgresql upgrading
Apply with these values:
postgresqlUpgrade:
  enabled: true
  oldVersion: "11"
  newVersion: "13"
then apply with these values:
postgresqlUpgrade:
  enabled: false # default
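As plain Helm commands, the two-step flow could look like this (a sketch, reusing the release from the install section; note that PostgreSQL is down while the upgrade job runs):
helm upgrade -n test-postgresql-ha my-postgresql-ha kubit-packs/postgresql-ha \
  --reuse-values \
  --set postgresqlUpgrade.enabled=true \
  --set-string postgresqlUpgrade.oldVersion=11 \
  --set-string postgresqlUpgrade.newVersion=13
# after the upgrade job completes successfully:
helm upgrade -n test-postgresql-ha my-postgresql-ha kubit-packs/postgresql-ha \
  --reuse-values \
  --set postgresqlUpgrade.enabled=false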
Adminer
A web-based PostgreSQL viewer.
adminer:
  enabled: true
  ingress:
    host: my-adminer.com
    secretName: my-secret-tls
  theme: pappu687 # default
Recommended pg parameters
pgParameters:
  shared_buffers: '0.5GB' # half of keeper.resources.requests.memory
  log_checkpoints: 'on'
  log_lock_waits: 'on'
  checkpoint_completion_target: '0.9'
  min_wal_size: '2GB'
  shared_preload_libraries: 'pg_stat_statements'
  pg_stat_statements.track: 'all'
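S3 backups
A minimal sketch enabling the backup CronJob with the s3 provider, built from the backup.* parameters above; endpoint, region, bucket, and credentials are placeholders, and for production backup.s3.existingSecret is preferable to inline keys:
backup:
  enabled: true
  schedule: '0 1 * * *'
  strategy: only-standby
  maxBackups: 7
  provider: s3
  s3:
    endpoint: https://s3.my-provider.example
    region: my-region
    bucket: my-pg-backups
    accessKey: my-access-key
    secretKey: my-secret-key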