Auto-generated derived type for ClusterSpec via CustomResource
Affinity/Anti-affinity rules for Pods
AdditionalPodAffinity allows specifying pod affinity terms to be passed to all the cluster’s pods.
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
Required. A pod affinity term, associated with the corresponding weight.
A label query over a set of resources, in this case pods.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.
A label query over a set of resources, in this case pods.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
AdditionalPodAntiAffinity allows specifying pod anti-affinity terms to be added to the ones generated by the operator if EnablePodAntiAffinity is set to true (the default), or to be used exclusively if it is set to false.
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
Required. A pod affinity term, associated with the corresponding weight.
A label query over a set of resources, in this case pods.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.
A label query over a set of resources, in this case pods.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
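For illustration, a minimal sketch of how these affinity knobs might combine in a Cluster manifest; the label key and values are hypothetical:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  affinity:
    enablePodAntiAffinity: true          # operator-generated anti-affinity (default)
    topologyKey: kubernetes.io/hostname  # spread instances across nodes
    additionalPodAntiAffinity:           # extra terms, added to the generated ones
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app               # hypothetical label
                  operator: In
                  values: ["noisy-neighbor"]
            topologyKey: kubernetes.io/hostname
  storage:
    size: 1Gi
```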
NodeAffinity describes node affinity scheduling rules for the pod. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it’s a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
A node selector term, associated with the corresponding weight.
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
A null or empty node selector term matches no objects. Its requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.
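As a short, hedged example, tolerations follow the standard Kubernetes shape; the taint key and value below are placeholders:

```yaml
spec:
  affinity:
    tolerations:
      - key: dedicated        # hypothetical taint key
        operator: Equal       # matches the <key,value,effect> triple exactly
        value: postgres
        effect: NoSchedule
```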
The configuration to be used for backups
The configuration for the barman-cloud tool suite
The credentials to use to upload data to Azure Blob Storage
The connection string to be used
The storage account where to upload data
The storage account key to be used in conjunction with the storage account name
A shared-access-signature to be used in conjunction with the storage account name
The configuration to be used to back up the data files. When not defined, base backup files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy.
EndpointCA stores the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with the certificate issuer and barman-cloud-wal-archive.
The credentials to use to upload data to Google Cloud Storage
The secret containing the Google Cloud Storage JSON file with the credentials
The credentials to use to upload data to S3
The reference to the access key id
The reference to the secret containing the region name
The reference to the secret access key
The reference to the session key
The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy.
VolumeSnapshot provides the configuration for the execution of volume snapshot backups.
Configuration parameters to control the online/hot backup with volume snapshots
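For illustration, a sketch of a backup configuration combining the barman-cloud object store and volume snapshots; the bucket, secret, and snapshot class names are hypothetical:

```yaml
spec:
  backup:
    barmanObjectStore:
      destinationPath: s3://backups/cluster-example   # hypothetical bucket
      s3Credentials:
        accessKeyId:
          name: aws-creds            # hypothetical secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip            # avoid storing WAL files uncompressed
      data:
        compression: gzip            # avoid storing base backups uncompressed
    volumeSnapshot:
      className: csi-snapclass       # hypothetical VolumeSnapshotClass
      online: true                   # online/hot backup with volume snapshots
```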
Instructions to bootstrap this cluster
Bootstrap the cluster via initdb
Bootstraps the new cluster by importing data from an existing PostgreSQL instance using logical backup (pg_dump and pg_restore)
The source of the import
PostInitApplicationSQLRefs references ConfigMaps or Secrets containing SQL files. They are applied in a fixed order: all Secrets first, then all ConfigMaps; within each kind, entries run in the order of the array (empty by default)
ConfigMapKeySelector contains enough information to let you locate the key of a ConfigMap
SecretKeySelector contains enough information to let you locate the key of a Secret
Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch
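A hedged sketch of an initdb bootstrap that imports via pg_dump/pg_restore and applies post-init SQL references; the Secret and ConfigMap names are hypothetical:

```yaml
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
      import:
        type: microservice          # logical import via pg_dump/pg_restore
        databases: [app]
        source:
          externalCluster: origin   # must match an entry in spec.externalClusters
      postInitApplicationSQLRefs:
        secretRefs:                 # all Secrets run first...
          - name: init-sql-secret   # hypothetical
            key: schema.sql
        configMapRefs:              # ...then all ConfigMaps, in array order
          - name: init-sql-cm       # hypothetical
            key: seed.sql
```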
Bootstrap the cluster taking a physical backup of another compatible PostgreSQL instance
Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch
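For illustration, a minimal pg_basebackup bootstrap sketch; the source must match an externalClusters entry and the secret name is hypothetical:

```yaml
spec:
  bootstrap:
    pg_basebackup:
      source: origin              # must match an entry in spec.externalClusters
      secret:
        name: app-user-secret     # hypothetical; omit to have one generated
```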
Bootstrap the cluster from a backup
The backup object containing the physical base backup from which to initiate the recovery procedure. Mutually exclusive with source and volumeSnapshots.
EndpointCA stores the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with the certificate issuer and barman-cloud-wal-archive.
By default, the recovery process applies all the available WAL files in the archive (full recovery). However, you can also end the recovery as soon as a consistent state is reached or recover to a point-in-time (PITR) by specifying a RecoveryTarget object, as expected by PostgreSQL (i.e., timestamp, transaction ID, LSN, …). More info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET
Name of the secret containing the initial credentials for the owner of the user database. If empty, a new secret will be created from scratch
The static PVC data source(s) from which to initiate the recovery procedure. Currently supports VolumeSnapshot and PersistentVolumeClaim resources that map an existing PVC group, compatible with CloudNativePG, and taken with a cold backup copy on a fenced Postgres instance (a limitation that will be removed once online backup is implemented). Mutually exclusive with backup.
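A minimal recovery sketch with a point-in-time target; the Backup object name and timestamp are hypothetical:

```yaml
spec:
  bootstrap:
    recovery:
      backup:
        name: backup-example      # hypothetical Backup object
      recoveryTarget:
        targetTime: "2023-09-01 10:00:00.00000+00"   # stop recovery at this point
```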
Configuration of the storage of the instances
Configuration of the storage for PostgreSQL tablespaces
Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)
The configuration for the CA and related certificates
EnvVar represents an environment variable present in a Container.
EnvFromSource represents the source of a set of ConfigMaps
The ConfigMap to select from
The Secret to select from
Source for the environment variable’s value. Cannot be used if value is not empty.
Selects a key of a ConfigMap.
Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
Selects a key of a secret in the pod’s namespace
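For illustration, a sketch of the env and envFrom fields described above; the ConfigMap name is hypothetical:

```yaml
spec:
  env:
    - name: TZ
      value: Europe/Rome
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # downward API field
  envFrom:
    - configMapRef:
        name: extra-env             # hypothetical ConfigMap
```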
EphemeralVolumeSource allows the user to configure the source of ephemeral volumes.
Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).
An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.
Required, must not be nil.
May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.
The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
dataSource field can be used to specify either:
* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
* An existing PVC (PersistentVolumeClaim)
If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
* While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
* While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
* While dataSource only allows local objects, dataSourceRef allows objects in any namespaces.
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
ResourceClaim references one entry in PodSpec.ResourceClaims.
selector is a label query over volumes to consider for binding.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral volumes
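A hedged sketch of the ephemeral volume size limits; the quantities are arbitrary examples:

```yaml
spec:
  ephemeralVolumesSizeLimit:
    shm: 256Mi            # limit for the shared-memory volume
    temporaryData: 1Gi    # limit for the temporary-data volume
```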
ExternalCluster represents the connection parameters to an external cluster which is used in the other sections of the configuration
The configuration for the barman-cloud tool suite
The credentials to use to upload data to Azure Blob Storage
The connection string to be used
The storage account where to upload data
The storage account key to be used in conjunction with the storage account name
A shared-access-signature to be used in conjunction with the storage account name
The configuration to be used to back up the data files. When not defined, base backup files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy.
EndpointCA stores the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with the certificate issuer and barman-cloud-wal-archive.
The credentials to use to upload data to Google Cloud Storage
The secret containing the Google Cloud Storage JSON file with the credentials
The credentials to use to upload data to S3
The reference to the access key id
The reference to the secret containing the region name
The reference to the secret access key
The reference to the session key
The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy.
The reference to the password to be used to connect to the server. If a password is provided, CloudNativePG creates a PostgreSQL passfile at /controller/external/NAME/pass (where “NAME” is the cluster’s name). This passfile is automatically referenced in the connection string when establishing a connection to the remote PostgreSQL server from the current PostgreSQL Cluster. This ensures secure and efficient password management for external clusters.
The reference to an SSL certificate to be used to connect to this instance
The reference to an SSL private key to be used to connect to this instance
The reference to an SSL CA public key to be used to connect to this instance
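For illustration, a sketch of an externalClusters entry using password and TLS references; the host and secret names are hypothetical:

```yaml
spec:
  externalClusters:
    - name: origin
      connectionParameters:
        host: origin.example.com    # hypothetical host
        user: postgres
        dbname: app
      password:
        name: origin-password       # hypothetical secret
        key: password
      sslRootCert:
        name: origin-ca             # hypothetical secret with the CA public key
        key: ca.crt
```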
LocalObjectReference contains enough information to let you locate a local object with a known type inside the same namespace
Metadata that will be inherited by all objects related to the Cluster
The configuration that is used by the portions of PostgreSQL that are managed by the instance manager
RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role with the additional field Ensure specifying whether to ensure the presence or absence of the role in the database
The defaults of the CREATE ROLE command are applied. Reference: https://www.postgresql.org/docs/current/sql-createrole.html
Secret containing the password of the role (if present). If null, the password will be ignored unless DisablePassword is set
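A short sketch of a managed role declaration; the role and secret names are hypothetical:

```yaml
spec:
  managed:
    roles:
      - name: reporting              # hypothetical role
        ensure: present              # or "absent" to drop it
        login: true
        passwordSecret:
          name: reporting-password   # hypothetical; if null, the password is ignored
```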
The configuration of the monitoring infrastructure of this cluster
ConfigMapKeySelector contains enough information to let you locate the key of a ConfigMap
SecretKeySelector contains enough information to let you locate the key of a Secret
RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
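For illustration, a minimal monitoring configuration sketch; the ConfigMap name and key are hypothetical:

```yaml
spec:
  monitoring:
    enablePodMonitor: true
    customQueriesConfigMap:
      - name: custom-queries    # hypothetical ConfigMap
        key: queries.yaml
```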
Define a maintenance window for the Kubernetes nodes
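A hedged sketch of a node maintenance window declaration:

```yaml
spec:
  nodeMaintenanceWindow:
    inProgress: true    # signal that node maintenance is ongoing
    reusePVC: true      # reuse the existing PVC when the node comes back
```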
Configuration of the PostgreSQL server
Options to specify LDAP configuration
Bind as authentication configuration
Bind+Search authentication configuration
Secret with the password for the user to bind to the directory
Requirements to be met by sync replicas. This will affect how the “synchronous_standby_names” parameter will be set up.
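For illustration, a sketch combining PostgreSQL parameters, LDAP bind+search authentication, and the sync replica election constraint; the server and DN values are hypothetical:

```yaml
spec:
  minSyncReplicas: 1
  maxSyncReplicas: 1
  postgresql:
    parameters:
      max_connections: "200"
    ldap:
      server: ldap.example.com        # hypothetical directory server
      bindSearchAuth:
        baseDN: ou=people,dc=example,dc=com
        bindDN: cn=admin,dc=example,dc=com
        bindPassword:
          name: ldap-bind-password    # hypothetical secret
          key: password
        searchAttribute: uid
    syncReplicaElectionConstraint:
      enabled: true
      nodeLabelsAntiAffinity:
        - topology.kubernetes.io/zone   # prefer sync replicas in different zones
```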
Template to be used to define projected volumes; projected volumes will be mounted under the /projected base folder
Projection that may be projected along with other supported volume types
configMap information about the configMap data to project
Maps a string key to a path within a volume.
downwardAPI information about the downwardAPI data to project
DownwardAPIVolumeFile represents information to create the file containing the pod field
Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
secret information about the secret data to project
Maps a string key to a path within a volume.
serviceAccountToken is information about the serviceAccountToken data to project
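A hedged sketch of a projected volume template, mounted under the /projected base folder; the ConfigMap and Secret names are hypothetical:

```yaml
spec:
  projectedVolumeTemplate:
    sources:
      - configMap:
          name: app-config      # hypothetical
          items:
            - key: settings.ini
              path: settings.ini
      - secret:
          name: app-secret      # hypothetical
          items:
            - key: token
              path: token
```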
Replica cluster configuration
Replication slots management configuration
Replication slots for high availability configuration
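For illustration, a sketch of a replica cluster with HA replication slots enabled; the source must match an externalClusters entry:

```yaml
spec:
  replica:
    enabled: true        # this cluster replicates from an external source
    source: origin       # must match an entry in spec.externalClusters
  replicationSlots:
    highAvailability:
      enabled: true      # physical slots for each HA standby
      slotPrefix: _cnpg_
    updateInterval: 30
```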
Resources requirements of every generated Pod. Please refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more information.
ResourceClaim references one entry in PodSpec.ResourceClaims.
The SeccompProfile applied to every Pod and Container. Defaults to: RuntimeDefault
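A short sketch of pod resources together with the default seccomp profile; the quantities are arbitrary:

```yaml
spec:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  seccompProfile:
    type: RuntimeDefault    # the default applied to every Pod and Container
```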
Configure the generation of the service account
Metadata are the metadata to be used for the generated service account
Specification of the desired behavior of the cluster. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
The configuration for the CA and related certificates, initialized with defaults.
Condition contains details for one aspect of the current state of this API Resource. This struct is intended for direct use as an array at the field path .status.conditions. For example:

```go
type FooStatus struct {
	// Represents the observations of a foo's current state.
	// Known .status.conditions.type are: "Available", "Progressing", and "Degraded"
	// +patchMergeKey=type
	// +patchStrategy=merge
	// +listType=map
	// +listMapKey=type
	Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`

	// other fields
}
```
The list of resource versions of the configmaps, managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the configmap data
The reported state of the instances during the last reconciliation loop
ManagedRolesStatus reports the state of the managed roles in the cluster
PasswordStatus gives the last transaction id and password secret version for each managed role
The integration needed by poolers referencing the cluster
PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster
The list of resource versions of the secrets managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the secret data
TablespaceState represents the state of a tablespace in a cluster
Instances topology.
Configuration of the storage of the instances
Template to be used to generate the Persistent Volume Claim
dataSource field can be used to specify either:
* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
* An existing PVC (PersistentVolumeClaim)
If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
* While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
* While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
* While dataSource only allows local objects, dataSourceRef allows objects in any namespaces.
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
ResourceClaim references one entry in PodSpec.ResourceClaims.
selector is a label query over volumes to consider for binding.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
The secret containing the superuser password. If not defined, a new secret will be created with a randomly generated password
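For illustration, a sketch of the storage configuration with an explicit PVC template and superuser secret; the class and secret names are hypothetical:

```yaml
spec:
  storage:
    size: 10Gi
    storageClass: standard       # hypothetical StorageClass
    pvcTemplate:                 # optional full PVC template
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
  superuserSecret:
    name: superuser-secret       # omit to get a randomly generated password
```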
TablespaceConfiguration is the configuration of a tablespace, and includes the storage specification for the tablespace
Owner is the PostgreSQL user owning the tablespace
The storage configuration for the tablespace
Template to be used to generate the Persistent Volume Claim
dataSource field can be used to specify either:
* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
* An existing PVC (PersistentVolumeClaim)
If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
* While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
* While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
* While dataSource only allows local objects, dataSourceRef allows objects in any namespaces.
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
ResourceClaim references one entry in PodSpec.ResourceClaims.
selector is a label query over volumes to consider for binding.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
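A hedged sketch of a tablespace declaration; the names and sizes are hypothetical:

```yaml
spec:
  tablespaces:
    - name: analytics            # hypothetical tablespace
      owner:
        name: app                # PostgreSQL user owning the tablespace
      storage:
        size: 5Gi
        storageClass: standard   # hypothetical
```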
TopologySpreadConstraint specifies how to spread matching pods among the given topology.
LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
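For illustration, a topology spread constraint sketch; the cnpg.io/cluster selector label is an assumption about the labels applied to instance pods:

```yaml
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          cnpg.io/cluster: cluster-example   # assumed instance label
```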
Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)
Template to be used to generate the Persistent Volume Claim
dataSource field can be used to specify either:
* An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
* An existing PVC (PersistentVolumeClaim)
If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
* While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
* While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
* While dataSource only allows local objects, dataSourceRef allows objects in any namespaces.
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
ResourceClaim references one entry in PodSpec.ResourceClaims.
selector is a label query over volumes to consider for binding.
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
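A short sketch of a dedicated WAL volume; the storage class is hypothetical:

```yaml
spec:
  walStorage:
    size: 5Gi
    storageClass: fast-ssd    # hypothetical; dedicated volume for WAL files
```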