Custom resource definition parameters

The following lists describe the available settings for Vertica custom resource definitions (CRDs).

VerticaDB

annotations
Custom annotations added to all of the objects that the operator creates. Each annotation is encoded as an environment variable in the Vertica server container. The following values are accepted:
  • Letters
  • Numbers
  • Underscores

Invalid character values are converted to underscore characters. For example:

vertica.com/git-ref: 1234abcd

Is converted to:

VERTICA_COM_GIT_REF=1234abcd

autoRestartVertica
Whether the operator restarts the Vertica process when the process is not running.

Set this parameter to false when performing manual maintenance that requires a DOWN database. This prevents the operator from interfering with the database state.

Default: true
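
For example, before you take the database down for manual maintenance, a minimal sketch that disables automatic restart:

```yaml
spec:
  autoRestartVertica: false
```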

certSecrets
A list of Secrets for custom TLS certificates.

Each certificate is mounted in the container at /certs/cert-name/key. For example, a PEM-encoded CA bundle named root_cert.pem that is stored in a Secret named aws-cert is mounted in /certs/aws-cert/root_cert.pem.

If you update the certificate after you add it to a custom resource, the operator updates the value automatically. If you add or delete a certificate, the operator reschedules the pod with the new configuration.

For implementation details, see VerticaDB CRD.
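
A sketch of a certSecrets list, assuming Secrets named aws-cert and internal-cert already exist in the same namespace as the CR:

```yaml
spec:
  certSecrets:
    - name: aws-cert
    - name: internal-cert
```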

communal.additionalConfig
Sets one or more configuration parameters in the CR:
      
spec:
  communal:
    additionalConfig:
      config-param: "value"
      ...
  

Configuration parameters are set only when the database is initialized. After the database is initialized, changes to this parameter have no effect in the server.

communal.caFile
The mount path in the container filesystem to a CA certificate file that validates HTTPS connections to a communal storage endpoint.

Typically, the certificate is stored in a Secret and included in certSecrets. For details, see VerticaDB CRD.

communal.credentialSecret
The name of the Secret that stores the credentials for the communal storage endpoint.

For implementation details for each supported communal storage location, see Configuring communal storage.

This parameter is optional when you authenticate to an S3-compatible endpoint with an Identity and Access Management (IAM) profile.

communal.endpoint
A communal storage endpoint URL. The endpoint must begin with either the http:// or https:// protocol. For example:

https://path/to/endpoint

You cannot change this value after you create the custom resource instance.

This setting is required when initPolicy is set to Create or Revive.

communal.s3ServerSideEncryption
Server-side encryption type used when reading from or writing to S3. The value depends on which type of encryption at rest is configured for S3.

This parameter accepts the following values:

  • SSE-S3
  • SSE-KMS: Requires that you pass the key identifier with the communal.additionalConfig parameter.
  • SSE-C: Requires that you pass the client key with the communal.s3SSECustomerKeySecret parameter.

You cannot change this value after you create the custom resource instance.

For implementation examples of all encryption types, see Configuring communal storage.

For details about each encryption type, see S3 object store.

Default: Empty string (""), no encryption
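
For example, a sketch of an SSE-KMS configuration; S3SseKmsKeyId is assumed to be the configuration parameter name for the key identifier, and the key ID value is hypothetical:

```yaml
spec:
  communal:
    s3ServerSideEncryption: SSE-KMS
    additionalConfig:
      S3SseKmsKeyId: "kms-key-id"
```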

communal.s3SSECustomerKeySecret
If s3ServerSideEncryption is set to SSE-C, a Secret containing the client key for S3 access with the following requirements:
  • The Secret must be in the same namespace as the CR.
  • You must set the client key contents with the clientKey field.

The client key must use one of the following formats:

  • 32-character plaintext
  • 44-character base64-encoded

For additional implementation details, see Configuring communal storage.
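
A sketch of such a Secret as a manifest; the Secret name is hypothetical, and the placeholder value must be replaced with a 32-character plaintext or 44-character base64-encoded key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sse-c-key    # hypothetical name; must be in the same namespace as the CR
stringData:
  clientKey: "<32-character-plaintext-key>"
```

You would then reference the Secret name in communal.s3SSECustomerKeySecret.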

communal.hadoopConfig
A ConfigMap that contains the contents of the /etc/hadoop directory.

This is mounted in the container to configure connections to a Hadoop Distributed File System (HDFS) communal path.
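
A sketch, assuming a ConfigMap named hadoop-conf was created from the contents of /etc/hadoop; the HDFS path is hypothetical:

```yaml
spec:
  communal:
    path: "webhdfs://mycluster/vertica/db"
    hadoopConfig: hadoop-conf
```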

communal.includeUIDInPath
When set to true, the operator includes the unique identifier (UID) that Kubernetes assigns to the VerticaDB object in the communal storage path. Including the UID creates a unique database path so that you can reuse the communal path in the same endpoint.

Default: false

communal.kerberosRealm
The realm portion of the Vertica Kerberos principal. This value is set in the KerberosRealm database parameter during bootstrapping.

communal.kerberosServiceName
The service name portion of the Vertica Kerberos principal. This value is set in the KerberosServiceName database parameter during bootstrapping.

communal.path
The path to the communal storage bucket. For example:

s3://bucket-name/key-name

You must create this bucket before you create the Vertica database.

The following initPolicy values determine how to set this value:

  • Create: The path must be empty.

  • Revive: The path cannot be empty.

You cannot change this value after you create the custom resource.
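
Together with communal.endpoint and communal.credentialSecret, a minimal S3 communal storage sketch; the bucket and Secret names are hypothetical:

```yaml
spec:
  communal:
    endpoint: https://s3.amazonaws.com
    path: s3://bucket-name/key-name
    credentialSecret: s3-creds
```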

communal.region
The geographic location where the communal storage resources are located.

If you do not set the correct region, the configuration fails. You might experience a delay because Vertica retries several times before failing.

This setting is valid for Amazon Web Services (AWS) and Google Cloud Platform (GCP) only. Vertica ignores this setting for other communal storage providers.

Default:

  • AWS: us-east-1

  • GCP: US-EAST1

dbName
The database name. When initPolicy is set to Revive or ScheduleOnly, this must match the name of the source database.

Default: vertdb

encryptSpreadComm
Sets the EncryptSpreadComm security parameter to configure Spread encryption for a new Vertica database. The VerticaDB operator ignores this parameter unless you set initPolicy to Create.

This parameter accepts the following values:

  • vertica: Enables Spread encryption. Vertica generates the Spread encryption key for the database cluster.

  • Empty string (""): Clears encryption.

Default: Empty string ("")

httpServerMode
Controls the Vertica HTTP server. The HTTP server provides a REST interface that you can use to manage and monitor the server. The following values are accepted:
  • Enabled
  • Disabled
  • Auto or empty string (""). This setting starts the server.

Default: empty string ("")

ignoreClusterLease
Ignore the cluster lease when executing a revive or start_db.

Default: false

image
The image that defines the Vertica server container's runtime environment. If the container is hosted in a private container repository, this name must include the path to the repository.

When you update the image, the operator stops and restarts the cluster.

Default: vertica/vertica-k8s:latest

imagePullPolicy
How often Kubernetes pulls the image for an object. For details, see Updating Images in the Kubernetes documentation.

Default: If the image tag is latest, the default is Always. Otherwise, the default is IfNotPresent.

imagePullSecrets
List of Secrets that store credentials for authentication to a private container repository. For details, see Specifying imagePullSecrets in the Kubernetes documentation.
initPolicy
How to initialize the Vertica database in Kubernetes. This parameter accepts the following values:
  • Create: Forces the creation of a new database for the custom resource.

  • CreateSkipPackageInstall: Same as Create, but skips installing the default packages so that the database is created more quickly.

    To install default packages, see the admintools install_packages command.

  • Revive: Initializes an existing Eon Mode database as a StatefulSet with the revive command. For information about Revive, see Generating a custom resource from an existing Eon Mode database.

  • ScheduleOnly: Schedules a subcluster for a Hybrid Kubernetes cluster.

kerberosSecret
The Secret that stores the following values for Kerberos authentication to Hadoop Distributed File System (HDFS):
  • krb5.conf: Contains Kerberos configuration information.

  • krb5.keytab: Contains credentials for the Vertica Kerberos principal. This file must be readable by the file owner that is running the process.

The default location for each of these files is the /etc directory.
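
A sketch that ties the Kerberos settings together, assuming a Secret named krb5-secret holds krb5.conf and krb5.keytab; the realm and service name values are hypothetical:

```yaml
spec:
  kerberosSecret: krb5-secret
  communal:
    kerberosRealm: EXAMPLE.COM
    kerberosServiceName: vertica
```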

kSafety
Sets the fault tolerance for the cluster. The operator supports setting this value to 0 or 1 only. For details, see K-safety.

You cannot change this value after you create the custom resource.

Default: 1

labels
Custom labels added to all of the objects that the operator creates.
licenseSecret
The Secret that contains the contents of license files. The Secret must share a namespace with the custom resource (CR). Each of the keys in the Secret is mounted as a file in /home/dbadmin/licensing/mnt.

If this value is set when the CR is created, the operator installs one of the licenses automatically, choosing the first one alphabetically.

If you update this value after you create the custom resource, you must manually install the Secret in each Vertica pod.

livenessProbeOverride
Overrides default livenessProbe settings that indicate whether the container is running. The VerticaDB operator sets or updates the liveness probe in the StatefulSet.

For example, the following object overrides the default initialDelaySeconds, periodSeconds, and failureThreshold settings:

      
spec:
...
  livenessProbeOverride:
    initialDelaySeconds: 120
    periodSeconds: 15
    failureThreshold: 8
  

For a detailed list of the available probe settings, see the Kubernetes documentation.

local.catalogPath
Optional parameter that sets a custom path in the container filesystem for the catalog, if your environment requires that the catalog is stored in a location separate from the local data.

If initPolicy is set to Revive or ScheduleOnly, local.catalogPath for the new database must match local.catalogPath for the source database.

local.dataPath
The path in the container filesystem for the local data. If local.catalogPath is not set, the catalog is stored in this location.

If initPolicy is set to Revive or ScheduleOnly, the dataPath for the new database must match the dataPath for the source database.

Default: /data

local.depotPath
The path in the container filesystem that stores the depot.

If initPolicy is set to Revive or ScheduleOnly, the depotPath for the new database must match the depotPath for the source database.

Default: /depot

local.depotVolume
The type of volume to use for the depot. This parameter accepts the following values:
  • PersistentVolume: A PersistentVolume is used to store the depot data. This volume type persists depot data between pod lifecycles.
  • EmptyDir: A volume of type emptyDir is used to store the depot data. When the pod is removed from a node, the contents of the volume are deleted. If a container crashes, the depot data is unaffected.

For details about each volume type, see the Kubernetes documentation.

Default: PersistentVolume

local.requestSize
The minimum size of the local data volume when selecting a PersistentVolume (PV).

If local.storageClass allows volume expansion, the operator automatically increases the size of the PV when you change this setting. It expands the size of the depot if the following conditions are met:

  • local.depotVolume is set to PersistentVolume.
  • Depot storage is allocated using a percentage of the total disk space rather than a unit, such as a gigabyte.

If you decrease this value, the operator does not decrease the size of the PV or the depot.

Default: 500Gi
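
A sketch of the local storage settings together; the storage class name is hypothetical and must exist in your cluster:

```yaml
spec:
  local:
    dataPath: /data
    depotPath: /depot
    depotVolume: PersistentVolume
    requestSize: 500Gi
    storageClass: managed-premium
```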

local.storageClass
The StorageClass for the PersistentVolumes that persist local data between pod lifecycles. Select this value when defining the persistent volume claim (PVC).

By default, this parameter is not set. The PVC in the default configuration uses the default storage class set by Kubernetes.

podSecurityContext
Overrides any pod-level security context. This setting is merged with the default context for the pods in the cluster.

For details about the available settings for this parameter, see the Kubernetes documentation.

readinessProbeOverride
Overrides default readinessProbe settings that indicate whether the Vertica pod is ready to accept traffic. The VerticaDB operator sets or updates the readiness probe in the StatefulSet.

For example, the following object overrides the default timeoutSeconds and periodSeconds settings:

      
spec:
...
  readinessProbeOverride:
    initialDelaySeconds: 0
    periodSeconds: 10
    failureThreshold: 3
  

For a detailed list of the available probe settings, see the Kubernetes documentation.

reviveOrder
The order of nodes during a revive operation. Each entry contains the subcluster index and the number of pods to include from that subcluster.

For example, consider a database with the following setup:

      
- v_db_node0001: subcluster A
- v_db_node0002: subcluster A
- v_db_node0003: subcluster B
- v_db_node0004: subcluster A
- v_db_node0005: subcluster B
- v_db_node0006: subcluster B
  

If the subclusters[] list is defined as {'A', 'B'}, the revive order is as follows:

      
- {subclusterIndex:0, podCount:2} # 2 pods from subcluster A
- {subclusterIndex:1, podCount:1} # 1 pod from subcluster B
- {subclusterIndex:0, podCount:1} # 1 pod from subcluster A
- {subclusterIndex:1, podCount:2} # 2 pods from subcluster B
  

This parameter is used only when initPolicy is set to Revive.

restartTimeout
When restarting pods, the number of seconds before admintools times out.

Default: 0, which instructs the operator to use the default admintools timeout of 20 minutes.

securityContext
Sets any additional security context for the Vertica server container. This setting is merged with the security context value set for the VerticaDB Operator.

For example, if you need a core file for the Vertica server process, you can set the privileged property to true to elevate the server privileges on the host node:

      
spec:
  ...
  securityContext:
    privileged: true
  

For additional information about generating a core file, see Metrics gathering. For details about this parameter, see the Kubernetes documentation.

shardCount
The number of shards in the database. You cannot update this value after you create the custom resource.

For more information about database shards and Eon Mode, see Configuring your Vertica cluster for Eon Mode.

sidecars[]
One or more optional utility containers that complete tasks for the Vertica server container. Each sidecar entry is a fully-formed container spec, similar to the container that you add to a Pod spec.

The following example adds a sidecar named vlogger to the custom resource:

      
spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:1.0.0
      volumeMounts:
        - name: my-custom-vol
          mountPath: /path/to/custom-volume
  

volumeMounts.name is the name of a custom volume. This value must match volumes.name to mount the custom volume in the sidecar container filesystem. See volumes for additional details.

For implementation details, see VerticaDB CRD.

sidecars[i].volumeMounts
List of custom volumes and mount paths that persist sidecar container data. Each volume element requires a name value and a mountPath.

To mount a volume in the Vertica sidecar container filesystem, volumeMounts.name must match the volumes.name value for the corresponding sidecar definition, or the webhook returns an error.

For implementation details, see VerticaDB CRD.

sshSecret
A Secret that contains SSH credentials that authenticate connections to a Vertica server container. Example use cases include the following:
  • Authenticate communication between an Eon Mode database and custom resource in a hybrid architecture.

  • For environments that run the Vertica Kubernetes (No keys) image, pass the custom resource user-provided SSH keys for internal communication between Vertica pods.

The Secret requires the following values:

  • id_rsa

  • id_rsa.pub

  • authorized_keys

For details, see Hybrid Kubernetes clusters.

startupProbeOverride
Overrides the default startupProbe settings that indicate whether the Vertica process is started in the container. The VerticaDB operator sets or updates the startup probe in the StatefulSet.

For example, the following object overrides the default initialDelaySeconds, periodSeconds, and failureThreshold settings:

      
spec:
...
  startupProbeOverride:
    initialDelaySeconds: 30
    periodSeconds: 10
    failureThreshold: 117
    timeoutSeconds: 5
  

For a detailed list of the available probe settings, see the Kubernetes documentation.

subclusters[i].affinity
Applies rules that constrain the Vertica server pod to specific nodes. It is more expressive than nodeSelector. If this parameter is not set, then the pods use no affinity setting.

In production settings, it is a best practice to configure affinity to run one server pod per host node. For configuration details, see VerticaDB CRD.

subclusters[i].externalIPs
Enables the service object to attach to a specified external IP.

If not set, the external IP is empty in the service object.

subclusters[i].httpNodePort
When subclusters[i].serviceType is set to NodePort, sets the port on each node that listens for external connections to the HTTPS service. The port must be within the defined range allocated by the control plane (ports 30000-32767).

If you do not manually define a port number, Kubernetes chooses the port automatically.

subclusters[i].isPrimary
Indicates whether the subcluster is primary or secondary. Each database must have at least one primary subcluster.

Default: true

subclusters[i].loadBalancerIP
When subcluster[i].serviceType is set to LoadBalancer, assigns a static IP to the load balancing service.

Default: Empty string ("")

subclusters[i].name
The subcluster name. This is a required setting. If you change the name of an existing subcluster, the operator deletes the old subcluster and creates a new one with the new name.

Kubernetes derives names for the subcluster Statefulset, service object, and pod from the subcluster name. For additional details about Kubernetes and subcluster naming conventions, see Subclusters on Kubernetes.

subclusters[i].nodePort
When subclusters[i].serviceType is set to NodePort, sets the port on each node that listens for external client connections. The port must be within the defined range allocated by the control plane (ports 30000-32767).

If you do not manually define a port number, Kubernetes chooses the port automatically.
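
A sketch that pins the client port, assuming port 30100 is unused within the 30000-32767 range; the subcluster name and size are hypothetical:

```yaml
spec:
  subclusters:
    - name: defaultsubcluster
      size: 3
      serviceType: NodePort
      nodePort: 30100
```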

subclusters[i].nodeSelector
Provides control over which nodes are used to schedule each pod. If this is not set, the node selector is left off the pod when it is created. To set this parameter, provide a list of key/value pairs.

The following example schedules server pods only at nodes that have the disktype=ssd and region=us-east labels:

      
subclusters:
  - name: defaultsubcluster
    nodeSelector:
      disktype: ssd
      region: us-east
  
subclusters[i].priorityClassName
The PriorityClass name assigned to pods in the StatefulSet. This affects where the pod gets scheduled.
subclusters[i].resources.limits
The resource limits for pods in the StatefulSet, which sets the maximum amount of CPU and memory that each server pod can consume.

Vertica recommends that you set these values equal to subclusters[i].resources.requests to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out of memory (OOM) Killer.

For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.

subclusters[i].resources.requests
The resource requests for pods in the StatefulSet, which sets the amount of CPU and memory that each server pod requests from its host node.

Vertica recommends that you set these values equal to subclusters[i].resources.limits to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out of memory (OOM) Killer.

For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.

subclusters[i].serviceAnnotations
Custom annotations added to implementation-specific services. Managed Kubernetes providers use service annotations to configure services such as network load balancers, virtual private cloud (VPC) subnets, and loggers.

subclusters[i].serviceName
Identifies the service object that directs client traffic to the subcluster. Assign a single service object to multiple subclusters to process client data with one or more subclusters. For example:
      
spec:
  ...
  subclusters:
    - name: subcluster-1
      size: 3
      serviceName: connections
    - name: subcluster-2
      size: 3
      serviceName: connections
  

The previous example creates a service object named metadata.name-connections that load balances client traffic among its assigned subclusters.

For implementation details, see VerticaDB CRD.

subclusters[i].serviceType
Identifies the type of Kubernetes service to use for external client connectivity. The default type is ClusterIP, which sets a stable IP and port that is accessible only from within Kubernetes itself.

Depending on the service type, you might need to set nodePort or externalIPs in addition to this configuration parameter.

Default: ClusterIP

subclusters[i].size
The number of pods in the subcluster. This determines the number of Vertica nodes in the subcluster. Changing this number deletes existing pods or schedules new ones.

The minimum size of a subcluster is 1. The kSafety setting determines the minimum and maximum size of the cluster.

subclusters[i].tolerations
Any taints and tolerations used to influence where a pod is scheduled.
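
For example, a sketch that lets server pods schedule onto nodes tainted for Vertica use; the taint key and value are hypothetical:

```yaml
subclusters:
  - name: defaultsubcluster
    tolerations:
      - key: vertica-only
        operator: Equal
        value: "true"
        effect: NoSchedule
```
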
superuserPasswordSecret
The Secret that contains the database superuser password. Create this Secret before deployment.

If you do not create this Secret before deployment, there is no password authentication for the database.

The Secret must use a key named password:

kubectl create secret generic su-passwd --from-literal=password=secret-password

The following text adds this Secret to the custom resource:

      
spec:
  superuserPasswordSecret: su-passwd
  
temporarySubclusterRouting.names
The existing subcluster that accepts traffic during an online upgrade. The operator routes traffic to the first subcluster that is online. For example:
      
spec:
  ...
  temporarySubclusterRouting:
    names:
      - subcluster-2
      - subcluster-1
  

In the previous example, the operator selects subcluster-2 during the upgrade, and then routes traffic to subcluster-1 when subcluster-2 is down. As a best practice, use secondary subclusters when rerouting traffic.

temporarySubclusterRouting.template
Instructs the operator to create a new secondary subcluster during an online upgrade. The operator creates the subcluster when the upgrade begins and deletes it when the upgrade completes.

To define a temporary subcluster, provide a name and size value. For example:

      
spec:
  ...
  temporarySubclusterRouting:
    template:
      name: transient
      size: 1
  
upgradePolicy
Determines how the operator upgrades Vertica server versions. This parameter accepts the following values:
  • Offline: The operator stops the cluster to prevent multiple versions from running simultaneously.
  • Online: The cluster continues to operate during a rolling update. The data is in read-only mode while the operator upgrades the image for the primary subcluster.

    The Online setting has the following restrictions:

    • The cluster must currently run Vertica server version 11.1.0 or higher.

    • If you have only one subcluster, you must configure temporarySubclusterRouting.template to create a new secondary subcluster during the Online upgrade. Otherwise, the operator performs an Offline upgrade, regardless of the setting.

  • Auto: The operator selects either Offline or Online depending on the configuration. The operator selects Online if all of the following are true:

    • A license Secret exists.

    • K-Safety is 1.

    • The cluster is currently running Vertica version 11.1.0 or higher.

Default: Auto

upgradeRequeueTime
During an online upgrade, the number of seconds that the operator waits to complete work for any resource that was requeued during the reconciliation loop.

Default: 30 seconds

volumeMounts
List of custom volumes and mount paths that persist Vertica server container data. Each volume element requires a name value and a mountPath.

To mount a volume in the Vertica server container filesystem, volumeMounts.name must match the volumes.name value defined in the spec definition, or the webhook returns an error.

For implementation details, see VerticaDB CRD.

volumes
List of custom volumes that persist Vertica server container data. Each volume element requires a name value and a volume type. volumes accepts any Kubernetes volume type.

To mount a volume in a filesystem, volumes.name must match the volumeMounts.name value for the corresponding volume mount, or the webhook returns an error.

For implementation details, see VerticaDB CRD.
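
A sketch showing a matching volumes and volumeMounts pair; the volume name and mount path are hypothetical:

```yaml
spec:
  volumes:
    - name: audit-vol
      emptyDir: {}
  volumeMounts:
    - name: audit-vol
      mountPath: /audit
```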

VerticaAutoScaler

verticaDBName
Required. Name of the VerticaDB CR that the VerticaAutoscaler CR scales resources for.
scalingGranularity
Required. The scaling strategy. This parameter accepts one of the following values:
  • Subcluster: Create or delete entire subclusters. To create a new subcluster, the operator uses a template or an existing subcluster with the same serviceName.
  • Pod: Increase or decrease the size of an existing subcluster.

Default: Subcluster

serviceName
Required. Refers to the subclusters[i].serviceName for the VerticaDB CR.

VerticaAutoscaler uses this value as a selector when scaling subclusters together.

template
When scalingGranularity is set to Subcluster, you can use this parameter to define the new subcluster that VerticaAutoscaler creates. The following is an example:
      
spec:
  verticaDBName: dbname
  scalingGranularity: Subcluster
  serviceName: service-name
  template:
    name: autoscaler-name
    size: 2
    serviceName: service-name
    isPrimary: false
  

If you set template.size to 0, VerticaAutoscaler selects as a template an existing subcluster that uses service-name.

This setting is ignored when scalingGranularity is set to Pod.

EventTrigger

matches[].condition.status
The status portion of the status condition match. The operator watches the condition specified by matches[].condition.type on the EventTrigger reference object. When that condition changes to the status specified in this parameter, the operator runs the task defined in the EventTrigger.
matches[].condition.type
The condition portion of the status condition match. The operator watches this condition on the EventTrigger reference object. When this condition changes to the status specified with matches[].condition.status, the operator runs the task defined in the EventTrigger.
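
A sketch of a status condition match, assuming DBInitialized is the condition type to watch on the reference object:

```yaml
matches:
  - condition:
      type: DBInitialized
      status: "True"
```
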
references[].object.apiVersion
Kubernetes API version of the object that the EventTrigger watches.
references[].object.kind
The type of object that the EventTrigger watches.
references[].object.name
The name of the object that the EventTrigger watches.
references[].object.namespace
Optional. The namespace of the object that the EventTrigger watches. The object and the EventTrigger CR must exist within the same namespace.

If omitted, the operator uses the same namespace as the EventTrigger.

template
Full spec for the Job that EventTrigger runs when matches[].condition.type and matches[].condition.status are found for a reference object.

For implementation details, see EventTrigger CRD.