Custom resource definition parameters

The following lists describe the available settings for Vertica custom resource definitions (CRDs).

VerticaDB

Parameters

annotations

Custom annotations added to all of the objects that the operator creates. Each annotation is encoded as an environment variable in the Vertica server container. Annotations accept the following characters:

  • Letters
  • Numbers
  • Underscores

Invalid character values are converted to underscore characters. For example:

vertica.com/git-ref: 1234abcd

Is converted to:

VERTICA_COM_GIT_REF=1234abcd

autoRestartVertica
Whether the operator restarts the Vertica process when the process is not running.

Set this parameter to false when performing manual maintenance that requires a DOWN database. This prevents the operator from interfering with the database state.

Default: true

certSecrets
A list of Secrets for custom TLS certificates.

Each certificate is mounted in the container at /certs/cert-name/key. For example, a PEM-encoded CA bundle named root_cert.pem and contained in a Secret named aws-cert is mounted in /certs/aws-cert/root_cert.pem.

If you update the certificate after you add it to a custom resource, the operator updates the value automatically. If you add or delete a certificate, the operator reschedules the pod with the new configuration.

For implementation details, see VerticaDB custom resource definition.
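
For example, a CR might mount two certificate Secrets. This is a minimal sketch; aws-cert and ldap-cert are placeholder Secret names:

```yaml
spec:
  certSecrets:
    - name: aws-cert    # mounted at /certs/aws-cert/
    - name: ldap-cert   # mounted at /certs/ldap-cert/
```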

communal.additionalConfig
Sets one or more configuration parameters in the CR:

spec:
  communal:
    additionalConfig:
      config-param: "value"
      ...
  

Configuration parameters are set only when the database is initialized. After the database is initialized, changes to this parameter have no effect on the server.

communal.caFile
The mount path in the container filesystem to a CA certificate file that validates HTTPS connections to a communal storage endpoint.

Typically, the certificate is stored in a Secret and included in certSecrets. For details, see VerticaDB custom resource definition.

communal.credentialSecret
The name of the Secret that stores the credentials for the communal storage endpoint. This parameter is optional when you authenticate to an S3-compatible endpoint with an Identity and Access Management (IAM) profile.

You can store this value as a secret in AWS Secrets Manager or Google Secret Manager. For implementation details, see Secrets management.

For implementation details for each supported communal storage location, see Configuring communal storage.

communal.endpoint
A communal storage endpoint URL. The endpoint must begin with either the http:// or https:// protocol. For example:

https://path/to/endpoint

You cannot change this value after you create the custom resource instance.

If you omit this setting, Vertica selects one of the following endpoints based on your communal storage provider:

  • AWS: https://s3.amazonaws.com
  • GCS: https://storage.googleapis.com
communal.s3ServerSideEncryption
Server-side encryption type used when reading from or writing to S3. The value depends on which type of encryption at rest is configured for S3.

This parameter accepts the following values:

  • SSE-S3
  • SSE-KMS: Requires that you pass the key identifier with the communal.additionalConfig parameter.
  • SSE-C: Requires that you pass the client key with the communal.s3SSECustomerKeySecret parameter.

You cannot change this value after you create the custom resource instance.

For implementation examples of all encryption types, see Configuring communal storage.

For details about each encryption type, see S3 object store.

Default: Empty string (""), no encryption

communal.s3SSECustomerKeySecret
If s3ServerSideEncryption is set to SSE-C, a Secret containing the client key for S3 access with the following requirements:
  • The Secret must be in the same namespace as the CR.
  • You must set the client key contents with the clientKey field.

The client key must use one of the following formats:

  • 32-character plaintext
  • 44-character base64-encoded

For additional implementation details, see Configuring communal storage.
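
As an illustration, the client key Secret and the corresponding CR settings might look like the following. The Secret name s3-sse-c-key and the key value are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-sse-c-key    # must be in the same namespace as the CR
stringData:
  clientKey: "0123456789abcdef0123456789abcdef"   # 32-character plaintext format
---
spec:
  communal:
    s3ServerSideEncryption: SSE-C
    s3SSECustomerKeySecret: s3-sse-c-key
```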

communal.path
The path to the communal storage bucket. For example:

s3://bucket-name/key-name

You must create this bucket before you create the Vertica database.

The following initPolicy values determine how to set this value:

  • Create: The path must be empty.

  • Revive: The path cannot be empty.

You cannot change this value after you create the custom resource.
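
Taken together, a minimal communal section for an S3-compatible bucket might look like the following sketch. The bucket name and Secret name are placeholders:

```yaml
spec:
  communal:
    path: s3://bucket-name/key-name
    endpoint: https://s3.amazonaws.com
    credentialSecret: s3-creds   # placeholder Secret with the endpoint credentials
```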

communal.region
The geographic location where the communal storage resources are located.

If you do not set the correct region, the configuration fails. You might experience a delay because Vertica retries several times before failing.

This setting is valid for Amazon Web Services (AWS) and Google Cloud Platform (GCP) only. Vertica ignores this setting for other communal storage providers.

Default:

  • AWS: us-east-1

  • GCP: US-EAST1

dbName
The database name. When initPolicy is set to Revive or ScheduleOnly, this must match the name of the source database.

Default: vertdb

encryptSpreadComm
Sets the EncryptSpreadComm security parameter to configure Spread encryption for a new Vertica database. The VerticaDB operator ignores this parameter unless you set initPolicy to Create.

Spread encryption is enabled by default. This parameter accepts the following values:

  • vertica or Empty string (""): Enables Spread encryption. Vertica generates the Spread encryption key for the database cluster.

  • disabled: Disables Spread encryption.

Default: Empty string ("")

hadoopConfig
A ConfigMap that contains the contents of the /etc/hadoop directory.

This is mounted in the container to configure connections to a Hadoop Distributed File System (HDFS) communal path.
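
For example, you might create the ConfigMap from an existing /etc/hadoop directory and reference it in the CR. The name hadoop-conf is a placeholder:

```yaml
# Create the ConfigMap first, for example:
#   kubectl create configmap hadoop-conf --from-file=/etc/hadoop
spec:
  hadoopConfig: hadoop-conf
```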

image
The image that defines the Vertica server container's runtime environment. If the container is hosted in a private container repository, this name must include the path to the repository.

When you update the image, the operator stops and restarts the cluster.

imagePullPolicy
How often Kubernetes pulls the image for an object. For details, see Updating Images in the Kubernetes documentation.

Default: If the image tag is latest, the default is Always. Otherwise, the default is IfNotPresent.

imagePullSecrets
List of Secrets that store credentials for authentication to a private container repository. For details, see Specifying imagePullSecrets in the Kubernetes documentation.
initPolicy
How to initialize the Vertica database in Kubernetes. This parameter accepts the following values:
  • Create: Create a new database. communal.path must be empty.

  • Revive: Revive an existing database from the communal storage path. communal.path must not be empty, and dbName must match the name of the source database.

  • ScheduleOnly: Schedule the pods only. The operator does not create or revive the database.
kerberosSecret
The Secret that stores the following values for Kerberos authentication to Hadoop Distributed File System (HDFS):
  • krb5.conf: Contains Kerberos configuration information.

  • krb5.keytab: Contains credentials for the Vertica Kerberos principal. This file must be readable by the file owner that is running the process.

The default location for each of these files is the /etc directory.
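
For example, you might create the Secret from existing Kerberos files and reference it in the CR. The Secret name krb5-creds is a placeholder:

```yaml
# Create the Secret first, for example:
#   kubectl create secret generic krb5-creds \
#     --from-file=krb5.conf=/etc/krb5.conf \
#     --from-file=krb5.keytab=/etc/krb5.keytab
spec:
  kerberosSecret: krb5-creds
```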

labels
Custom labels added to all of the objects that the operator creates.
licenseSecret
The Secret that contains the contents of license files. The Secret must share a namespace with the custom resource (CR). Each of the keys in the Secret is mounted as a file in /home/dbadmin/licensing/mnt.

If this value is set when the CR is created, the operator installs one of the licenses automatically, choosing the first one alphabetically.

If you update this value after you create the custom resource, you must manually install the Secret in each Vertica pod.
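
For example, you might create the Secret from a license file and reference it in the CR. The Secret name and file name are placeholders:

```yaml
# Create the Secret first, for example:
#   kubectl create secret generic vertica-license --from-file=license.dat
spec:
  licenseSecret: vertica-license
```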

livenessProbeOverride
Overrides default livenessProbe settings that indicate whether the container is running. The VerticaDB operator sets or updates the liveness probe in the StatefulSet.

For example, the following object overrides the default initialDelaySeconds, periodSeconds, and failureThreshold settings:

      
spec:
...
  livenessProbeOverride:
    initialDelaySeconds: 120
    periodSeconds: 15
    failureThreshold: 8
  

For a detailed list of the available probe settings, see the Kubernetes documentation.

local.catalogPath
Optional parameter that sets a custom path in the container filesystem for the catalog, if your environment requires that the catalog is stored in a location separate from the local data.

If initPolicy is set to Revive or ScheduleOnly, local.catalogPath for the new database must match local.catalogPath for the source database.

local.dataPath
The path in the container filesystem for the local data. If local.catalogPath is not set, the catalog is stored in this location.

If initPolicy is set to Revive or ScheduleOnly, the dataPath for the new database must match the dataPath for the source database.

Default: /data

local.depotPath
The path in the container filesystem that stores the depot.

If initPolicy is set to Revive or ScheduleOnly, the depotPath for the new database must match the depotPath for the source database.

Default: /depot
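
Taken together, a local storage section might look like the following sketch. The /catalog path is illustrative:

```yaml
spec:
  local:
    dataPath: /data
    depotPath: /depot
    catalogPath: /catalog   # optional; omit to store the catalog under dataPath
```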

local.depotVolume
The type of volume to use for the depot. This parameter accepts the following values:
  • PersistentVolume: A PersistentVolume is used to store the depot data. This volume type persists depot data between pod lifecycles.
  • EmptyDir: A volume of type emptyDir is used to store the depot data. When the pod is removed from a node, the contents of the volume are deleted. If a container crashes, the depot data is unaffected.

For details about each volume type, see the Kubernetes documentation.

Default: PersistentVolume

local.requestSize
The minimum size of the local data volume when selecting a PersistentVolume (PV).

If local.storageClass allows volume expansion, the operator automatically increases the size of the PV when you change this setting. It expands the size of the depot if the following conditions are met:

  • local.depotVolume is set to PersistentVolume.
  • Depot storage is allocated using a percentage of the total disk space rather than a unit, such as a gigabyte.

If you decrease this value, the operator does not decrease the size of the PV or the depot.

Default: 500Gi

local.storageClass
The StorageClass for the PersistentVolumes that persist local data between pod lifecycles. Select this value when defining the persistent volume claim (PVC).

By default, this parameter is not set. The PVC in the default configuration uses the default storage class set by Kubernetes.

nmaTLSSecret
Adds custom Node Management Agent (NMA) certificates to the CR. The value must include the tls.key, tls.crt, and ca.crt encoded in base64 format.

You can store this value as a secret in AWS Secrets Manager or Google Secret Manager. For implementation details, see Secrets management.

If you omit this setting, the operator generates self-signed certificates for the NMA.
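
For example, you might create the Secret from existing certificate files and reference it in the CR. The Secret name nma-certs and the file names are placeholders:

```yaml
# Create the Secret first, for example:
#   kubectl create secret generic nma-certs \
#     --from-file=tls.key=server.key \
#     --from-file=tls.crt=server.crt \
#     --from-file=ca.crt=root_ca.crt
spec:
  nmaTLSSecret: nma-certs
```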

passwordSecret
The Secret that contains the database superuser password. Create this Secret before deployment.

If you do not create this Secret before deployment, there is no password authentication for the database.

The Secret must use a key named password:

$ kubectl create secret generic su-passwd --from-literal=password=secret-password

Add this Secret to the custom resource:

      
spec:
  passwordSecret: su-passwd
  

You can store this value as a secret in AWS Secrets Manager or Google Secret Manager. For implementation details, see Secrets management.

podSecurityContext
Overrides any pod-level security context. This setting is merged with the default context for the pods in the cluster.

vclusterops deployments can use this parameter to set a custom UID or GID:

...
spec:
  ...
  podSecurityContext:
    runAsUser: 3500
    runAsGroup: 3500
  ...

For details about the available settings for this parameter, see the Kubernetes documentation.

readinessProbeOverride
Overrides default readinessProbe settings that indicate whether the Vertica pod is ready to accept traffic. The VerticaDB operator sets or updates the readiness probe in the StatefulSet.

For example, the following object overrides the default timeoutSeconds and periodSeconds settings:

      
spec:
...
  readinessProbeOverride:
    initialDelaySeconds: 0
    periodSeconds: 10
    failureThreshold: 3
  

For a detailed list of the available probe settings, see the Kubernetes documentation.

restorePoint.archive
Archive that contains the restore points that you want to use in a restore operation. When you revive a database with a restore point, this parameter is required.
restorePoint.id
Unique identifier for the restore point. When you revive a database with a restore point, you must provide either restorePoint.id or restorePoint.index.
restorePoint.index
Identifier that describes the restore point's chronological position in the archive. Restore points are ordered by descending timestamp, where the most recent index is 1.
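
For example, reviving from the most recent restore point in an archive might look like the following sketch. The archive name db_backups is a placeholder:

```yaml
spec:
  initPolicy: Revive
  restorePoint:
    archive: db_backups
    index: 1              # the most recent restore point; alternatively, set id
```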
reviveOrder
The order of nodes during a revive operation. Each entry contains the subcluster index, and the number of pods to include from the subcluster.

For example, consider a database with the following setup:

      
- v_db_node0001: subcluster A
- v_db_node0002: subcluster A
- v_db_node0003: subcluster B
- v_db_node0004: subcluster A
- v_db_node0005: subcluster B
- v_db_node0006: subcluster B
  

If the subclusters[] list is defined as {'A', 'B'}, the revive order is as follows:

      
- {subclusterIndex:0, podCount:2} # 2 pods from subcluster A
- {subclusterIndex:1, podCount:1} # 1 pod from subcluster B
- {subclusterIndex:0, podCount:1} # 1 pod from subcluster A
- {subclusterIndex:1, podCount:2} # 2 pods from subcluster B
  

This parameter is used only when initPolicy is set to Revive.

sandboxes[i].image
Name of the image to use for the sandbox. If omitted, the image from the main cluster is used. Changing this value forces an upgrade of the sandbox where it is defined.
sandboxes[i].name
Name of the sandbox.
sandboxes[i].subclusters[i].name
Name of the secondary subcluster to be added to the sandbox. The sandbox must include at least one secondary subcluster.

The following example adds a sandbox named sandbox1 with subclusters sc2 and sc3 to the custom resource:

      
spec:
...
  sandboxes:
  - name: sandbox1
    subclusters:
      - name: sc2
      - name: sc3
  
securityContext
Sets any additional security context for the Vertica server container. This setting is merged with the security context value set for the VerticaDB Operator.

For example, if you need a core file for the Vertica server process, you can set the privileged property to true to elevate the server privileges on the host node:

      
spec:
  ...
  securityContext:
    privileged: true
  

For additional information about generating a core file, see Metrics gathering. For details about this parameter, see the Kubernetes documentation.

serviceAccountName
Sets the name of the ServiceAccount. This lets you create a service account independently of an operator or VerticaDB instance so that you can add it to the CR as needed.

If you omit this setting, the operator uses the default service account. If you specify a service account that does not exist, the operator creates that service account and then uses it.

shardCount
The number of shards in the database. You cannot update this value after you create the custom resource.

For more information about database shards and Eon Mode, see Configuring your Vertica cluster for Eon Mode.

sidecars[]
One or more optional utility containers that complete tasks for the Vertica server container. Each sidecar entry is a fully-formed container spec, similar to the container that you add to a Pod spec.

The following example adds a sidecar named vlogger to the custom resource:

      
  spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:1.0.0
      volumeMounts:
        - name: my-custom-vol
          mountPath: /path/to/custom-volume
  

volumeMounts.name is the name of a custom volume. This value must match volumes.name to mount the custom volume in the sidecar container filesystem. See volumes for additional details.

For implementation details, see VerticaDB custom resource definition.

sidecars[i].volumeMounts
List of custom volumes and mount paths that persist sidecar container data. Each volume element requires a name value and a mountPath.

To mount a volume in the Vertica sidecar container filesystem, volumeMounts.name must match the volumes.name value for the corresponding sidecar definition, or the webhook returns an error.

For implementation details, see VerticaDB custom resource definition.

startupProbeOverride
Overrides the default startupProbe settings that indicate whether the Vertica process is started in the container. The VerticaDB operator sets or updates the startup probe in the StatefulSet.

For example, the following object overrides the default initialDelaySeconds, periodSeconds, and failureThreshold settings:

      
spec:
...
  startupProbeOverride:
    initialDelaySeconds: 30
    periodSeconds: 10
    failureThreshold: 117
    timeoutSeconds: 5
  

For a detailed list of the available probe settings, see the Kubernetes documentation.

subclusters[i].affinity
Applies rules that constrain the Vertica server pod to specific nodes. It is more expressive than nodeSelector. If this parameter is not set, then the pods use no affinity setting.

In production settings, it is a best practice to configure affinity to run one server pod per host node. For configuration details, see VerticaDB custom resource definition.

subclusters[i].externalIPs
Enables the service object to attach to a specified external IP.

If not set, the external IP is empty in the service object.

subclusters[i].verticaHTTPNodePort
When subclusters[i].serviceType is set to NodePort, sets the port on each node that listens for external connections to the HTTPS service. The port must be within the defined range allocated by the control plane (ports 30000-32767).

If you do not manually define a port number, Kubernetes chooses the port automatically.
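
For example, a subcluster that exposes both the HTTPS service and client connections through NodePort might look like the following sketch. The port numbers are illustrative:

```yaml
spec:
  subclusters:
    - name: sc1
      size: 3
      serviceType: NodePort
      verticaHTTPNodePort: 30443   # must fall within 30000-32767
      clientNodePort: 30433        # must fall within 30000-32767
```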

subclusters[i].type
Indicates the subcluster type. Valid values include the following:
  • primary
  • secondary
  • sandboxprimary: Subcluster type automatically assigned by the VerticaDB operator when a subcluster is sandboxed and cannot be manually selected in the VerticaDB CRD.

The admission controller's webhook verifies that each database has at least one primary subcluster.

Default: primary

subclusters[i].loadBalancerIP
When subcluster[i].serviceType is set to LoadBalancer, assigns a static IP to the load balancing service.

Default: Empty string ("")

subclusters[i].name
The subcluster name. This is a required setting. If you change the name of an existing subcluster, the operator deletes the old subcluster and creates a new one with the new name.

Kubernetes derives names for the subcluster Statefulset, service object, and pod from the subcluster name. For additional details about Kubernetes and subcluster naming conventions, see Subclusters on Kubernetes.

subclusters[i].clientNodePort
When subclusters[i].serviceType is set to NodePort, sets the port on each node that listens for external client connections. The port must be within the defined range allocated by the control plane (ports 30000-32767).

If you do not manually define a port number, Kubernetes chooses the port automatically.

subclusters[i].nodeSelector

List of label key/value pairs that restrict Vertica pod scheduling to nodes with matching labels. For details, see the Kubernetes documentation.

The following example schedules server pods only at nodes that have the disktype=ssd and region=us-east labels:

      
subclusters:
  - name: defaultsubcluster
    nodeSelector:
      disktype: ssd
      region: us-east
  
subclusters[i].priorityClassName

The PriorityClass name assigned to pods in the StatefulSet. This affects where the pod gets scheduled.

For details, see the Kubernetes documentation.

subclusters[i].resources.limits
The resource limits for pods in the StatefulSet, which sets the maximum amount of CPU and memory that the server pod can consume from its host.

Vertica recommends that you set these values equal to subclusters[i].resources.requests to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out of memory (OOM) Killer.

For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.

subclusters[i].resources.requests
The resource requests for pods in the StatefulSet, which sets the amount of CPU and memory that the server pod requests during pod scheduling.

Vertica recommends that you set these values equal to subclusters[i].resources.limits to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out of memory (OOM) Killer.

For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.
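
For example, setting requests and limits to equal values, as recommended above, might look like the following sketch. The CPU and memory values are illustrative:

```yaml
spec:
  subclusters:
    - name: sc1
      size: 3
      resources:
        requests:
          cpu: "8"
          memory: 32Gi
        limits:
          cpu: "8"        # equal to requests for the guaranteed QoS class
          memory: 32Gi
```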

subclusters[i].serviceAnnotations

Custom annotations added to implementation-specific services. Managed Kubernetes offerings use service annotations to configure services such as network load balancers, virtual private cloud (VPC) subnets, and loggers.

subclusters[i].serviceName
Identifies the service object that directs client traffic to the subcluster. Assign a single service object to multiple subclusters to process client data with one or more subclusters. For example:
      
spec:
  ...
  subclusters:
    - name: subcluster-1
      size: 3
      serviceName: connections
    - name: subcluster-2
      size: 3
      serviceName: connections
  

The previous example creates a service object named metadata.name-connections that load balances client traffic among its assigned subclusters.

For implementation details, see VerticaDB custom resource definition.

subclusters[i].serviceType
Identifies the type of Kubernetes service to use for external client connectivity. The default type is ClusterIP, which sets a stable IP and port that is accessible only from within Kubernetes itself.

Depending on the service type, you might need to set nodePort or externalIPs in addition to this configuration parameter.

Default: ClusterIP

subclusters[i].size
The number of pods in the subcluster. This determines the number of Vertica nodes in the subcluster. Changing this number deletes or schedules new pods.

The minimum size of a subcluster is 1. The kSafety setting determines the minimum and maximum size of the cluster.

subclusters[i].tolerations

Any tolerations and taints that aid in determining where to schedule a pod.

temporarySubclusterRouting.names
The existing subcluster that accepts traffic during an online upgrade. The operator routes traffic to the first subcluster that is online. For example:
      
spec:
  ...
  temporarySubclusterRouting:
    names:
      - subcluster-2
      - subcluster-1
  

In the previous example, the operator selects subcluster-2 during the upgrade, and then routes traffic to subcluster-1 when subcluster-2 is down. As a best practice, use secondary subclusters when rerouting traffic.

temporarySubclusterRouting.template
Instructs the operator to create a new secondary subcluster during an online upgrade. The operator creates the subcluster when the upgrade begins and deletes it when the upgrade completes.

To define a temporary subcluster, provide a name and size value. For example:

      
spec:
  ...
  temporarySubclusterRouting:
    template:
      name: transient
      size: 1
  
upgradePolicy
Determines how the operator upgrades Vertica server versions. This parameter accepts the following values:

  • Offline: The operator stops the cluster to prevent multiple versions from running simultaneously.

  • Online: The cluster continues to operate during a rolling update. The data is in read-only mode while the operator upgrades the image for the primary subcluster.

    The Online setting has the following restrictions:

    • The cluster must currently run Vertica server version 11.1.0 or higher.

    • If you have only one subcluster, you must configure temporarySubclusterRouting.template to create a new secondary subcluster during the online upgrade. Otherwise, the operator performs an Offline upgrade, regardless of the setting.

  • Auto: The operator selects either Offline or Online depending on the configuration. The operator selects Online if all of the following are true:

    • A license Secret exists.

    • K-Safety is 1.

    • The cluster is currently running Vertica version 11.1.0 or higher.

Default: Auto

volumeMounts
List of custom volumes and mount paths that persist Vertica server container data. Each volume element requires a name value and a mountPath.

To mount a volume in the Vertica server container filesystem, volumeMounts.name must match the volumes.name value defined in the spec definition, or the webhook returns an error.

For implementation details, see VerticaDB custom resource definition.

volumes
List of custom volumes that persist Vertica server container data. Each volume element requires a name value and a volume type. volumes accepts any Kubernetes volume type.

To mount a volume in a filesystem, volumes.name must match the volumeMounts.name value for the corresponding volume mount, or the webhook returns an error.

For implementation details, see VerticaDB custom resource definition.
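
For example, pairing a custom volume with a matching volume mount might look like the following sketch. The volume and PVC names are placeholders:

```yaml
spec:
  volumes:
    - name: tenants-vol
      persistentVolumeClaim:
        claimName: tenants-pvc
  volumeMounts:
    - name: tenants-vol       # must match volumes.name
      mountPath: /tenants
```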

Annotations

Apply each of the following annotations to the metadata.annotations section in the CR:

vertica.com/https-tls-conf-generation
Determines whether the Vertica pod stores a plain text configuration file used to generate default certificates for the HTTPS service.

Set this to false to hide the configuration file when you are certain that the HTTPS service can start in its current configuration. For example:

The presence of this configuration file does not interfere with either of these certificate configurations.

Default: true

vertica.com/ignore-cluster-lease
Ignore the cluster lease when starting or reviving the database.

Default: false

vertica.com/ignore-upgrade-path
When set to false, the operator ensures that you do not downgrade to an earlier release.

Default: true

vertica.com/include-uid-in-path
When set to true, the operator includes in the path the unique identifier (UID) that Kubernetes assigns to the VerticaDB object. Including the UID creates a unique database path so that you can reuse the communal path in the same endpoint.

Default: false

vertica.com/restart-timeout
When restarting pods, the number of seconds before the operation times out.

Default: 0 (the operator uses a 20-minute timeout)

vertica.com/superuser-name
For vclusterops deployments, sets a custom superuser name. All admintools deployments use the default superuser name, dbadmin.
vertica.com/vcluster-ops
Determines whether the VerticaDB CR installs the vclusterops library to manage the cluster. When this annotation is omitted, API version v1 treats it as true, and API version v1beta1 treats it as false.

API version v1 must install vclusterops. You can omit this setting to use the default empty string, or explicitly set this to true.

For deprecated API version v1beta1, you must set this to true for vclusterops deployments. For admintools deployments, you can omit this setting or set it to false.

Default: Empty string ("")
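
For example, a CR might combine several of these annotations. The values shown are illustrative:

```yaml
metadata:
  annotations:
    vertica.com/include-uid-in-path: "true"
    vertica.com/restart-timeout: "1800"
```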

EventTrigger

For implementation details, see EventTrigger custom resource definition.

matches[].condition.status
The status portion of the status condition match. The operator watches the condition specified by matches[].condition.type on the EventTrigger reference object. When that condition changes to the status specified in this parameter, the operator runs the task defined in the EventTrigger.
matches[].condition.type
The condition portion of the status condition match. The operator watches this condition on the EventTrigger reference object. When this condition changes to the status specified with matches[].condition.status, the operator runs the task defined in the EventTrigger.
references[].object.apiVersion
Kubernetes API version of the object that the EventTrigger watches.
references[].object.kind
The type of object that the EventTrigger watches.
references[].object.name
The name of the object that the EventTrigger watches.
references[].object.namespace
Optional. The namespace of the object that the EventTrigger watches. The object and the EventTrigger CR must exist within the same namespace.

If omitted, the operator uses the same namespace as the EventTrigger.

template
Full spec for the Job that EventTrigger runs when references[].condition.type and references[].condition.status are found for a reference object.
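
The following sketch shows how these parameters fit together in an EventTrigger CR. The API versions, condition type, object names, and Job image are assumptions for illustration only:

```yaml
apiVersion: vertica.com/v1beta1
kind: EventTrigger
metadata:
  name: my-trigger                    # placeholder name
spec:
  references:
    - object:
        apiVersion: vertica.com/v1
        kind: VerticaDB
        name: verticadb-sample        # placeholder VerticaDB name
  matches:
    - condition:
        type: DBInitialized           # illustrative condition type
        status: "True"
  template:                           # Job spec to run when the condition matches
    metadata:
      generateName: my-job-
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: main
              image: my-task-image:latest   # placeholder image
```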

For implementation details, see EventTrigger custom resource definition.

VerticaAutoScaler

verticaDBName
Required. Name of the VerticaDB CR that the VerticaAutoscaler CR scales resources for.
scalingGranularity
Required. The scaling strategy. This parameter accepts one of the following values:
  • Subcluster: Create or delete entire subclusters. To create a new subcluster, the operator uses a template or an existing subcluster with the same serviceName.
  • Pod: Increase or decrease the size of an existing subcluster.

Default: Subcluster

serviceName
Required. Refers to the subclusters[i].serviceName for the VerticaDB CR.

VerticaAutoscaler uses this value as a selector when scaling subclusters together.

template
When scalingGranularity is set to Subcluster, you can use this parameter to define how VerticaAutoscaler scales the new subcluster. The following is an example:
      
spec:
    verticaDBName: dbname
    scalingGranularity: Subcluster
    serviceName: service-name
    template:
        name: autoscaler-name
        size: 2
        serviceName: service-name
        isPrimary: false
  

If you set template.size to 0, VerticaAutoscaler selects as a template an existing subcluster that uses service-name.

This setting is ignored when scalingGranularity is set to Pod.

VerticaReplicator

source.passwordSecret
Stores the password secret for the specified username. If this field and username are omitted, the default is set to the superuser password secret found in the VerticaDB. An empty value indicates no password. By default, the secret is assumed to be a Kubernetes (k8s) secret unless a secret path reference is specified, in which case it is retrieved from an external secret storage manager.
source.sandboxName
Specify the sandbox name to establish a connection. If no sandbox name is provided, the system defaults to the main database cluster.
source.userName
The username to connect to Vertica with. If no username is specified, the database defaults to the superuser. A custom username for the source database is not yet supported.
source.verticaDB
Required. Name of an existing VerticaDB.
target.passwordSecret
Stores the password secret for the specified username. If this field and username are omitted, the default is set to the superuser password secret found in the VerticaDB. An empty value indicates no password. By default, the secret is assumed to be a Kubernetes (k8s) secret unless a secret path reference is specified, in which case it is retrieved from an external secret storage manager.
target.sandboxName
Specify the sandbox name to establish a connection. If no sandbox name is provided, the system defaults to the main database cluster.
target.userName
The username to connect to Vertica with. If no username is specified, the database defaults to the superuser. A custom username for the target database is not yet supported.
target.verticaDB
Required. Name of an existing VerticaDB.
tlsConfig
Optional. TLS configurations to use when connecting from the source database to the target database. It refers to an existing TLS configuration in the source. Using TLS configuration for target database authentication requires the same username for both source and target. Additionally, the security config parameter EnableConnectCredentialForwarding must be enabled on the source database. Custom username for source and target databases is not yet supported when using TLS configuration.
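
The following sketch shows a minimal VerticaReplicator CR. The API version, CR name, VerticaDB names, and Secret name are assumptions for illustration only:

```yaml
apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: replicator-sample         # placeholder name
spec:
  source:
    verticaDB: source-db          # placeholder VerticaDB names
  target:
    verticaDB: target-db
    passwordSecret: target-su-passwd
```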

VerticaRestorePointsQuery

verticaDBName
The VerticaDB CR instance that you want to retrieve restore points for.
filterOptions.archive
The archive that contains the restore points that you want to retrieve. If omitted, the query returns all restore points from all archives.
filterOptions.startTimestamp
Limits the query results to restore points with a UTC timestamp that is equal to or later than this value. This parameter accepts the following UTC formats:
  • YYYY-MM-DD
  • YYYY-MM-DD HH:MM:ss
  • YYYY-MM-DD HH:MM:ss.SSSSSSSSS
filterOptions.endTimestamp
Limits the query results to restore points with a UTC timestamp that is equal to or earlier than this value. This parameter accepts the following UTC date and time formats:
  • YYYY-MM-DD. When you use this format, the VerticaDB operator populates the time portion with 23:59:59.999999999.
  • YYYY-MM-DD HH:MM:ss.
  • YYYY-MM-DD HH:MM:ss.SSSSSSSSS.
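
The following sketch shows these parameters in a VerticaRestorePointsQuery CR. The API version, CR name, VerticaDB name, and archive name are assumptions for illustration only:

```yaml
apiVersion: vertica.com/v1beta1
kind: VerticaRestorePointsQuery
metadata:
  name: restore-points-query      # placeholder name
spec:
  verticaDBName: verticadb-sample
  filterOptions:
    archive: db_backups
    startTimestamp: "2024-01-01"
    endTimestamp: "2024-06-30 23:59:59"
```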

VerticaScrutinize

affinity
Applies rules that constrain the VerticaScrutinize pod to specific nodes. It is more expressive than nodeSelector. If this parameter is not set, then the scrutinize pod uses no affinity setting.

For details, see the Kubernetes documentation.

annotations
Custom annotations added to all objects created to run scrutinize.
initContainers
A list of custom init containers to run after the init container collects diagnostic information with scrutinize. You can use an init container to perform additional processing on the scrutinize tar file, such as uploading it to an external storage location.
labels
Custom labels added to the scrutinize pod.
nodeSelector

List of label key/value pairs that restrict Vertica pod scheduling to nodes with matching labels. For details, see the Kubernetes documentation.

priorityClassName
The PriorityClass name assigned to the scrutinize pod. This affects where the pod gets scheduled.

For details, see the Kubernetes documentation.

resources.limits
The resource limits for the scrutinize pod, which sets the maximum amount of CPU and memory that the pod can consume from its host.
resources.requests
The resource requests for the scrutinize pod, which sets the amount of CPU and memory that the pod requests during pod scheduling.
tolerations

Any tolerations and taints that aid in determining where to schedule a pod.

verticaDBName
Required. Name of the VerticaDB CR that the VerticaScrutinize CR collects diagnostic information for. The VerticaDB CR must exist in the same namespace as the VerticaScrutinize CR.
volume
Custom volume that stores the finalized scrutinize tar file and any intermediate files. The volume must have enough space to store the scrutinize data. The volume is mounted in /tmp/scrutinize.

If this setting is omitted, an emptyDir volume is created to store the scrutinize data.

Default: emptyDir