The custom resource definition (CRD) is a shared global object that extends the Kubernetes API beyond the standard resource types. The CRD serves as a blueprint for custom resource (CR) instances. You create CRs that specify the desired state of your environment, and the operator monitors the CR to maintain state for the objects within its namespace.
Custom resource definitions
- 1: VerticaDB custom resource definition
- 2: EventTrigger custom resource definition
- 3: VerticaAutoscaler custom resource definition
- 4: VerticaReplicator custom resource definition
- 5: VerticaRestorePointsQuery custom resource definition
- 6: VerticaScrutinize custom resource definition
1 - VerticaDB custom resource definition
Important
Beginning with version 24.1.0, the VerticaDB operator version 2.0.0 manages deployments with vclusterops, a Go library that uses a high-level REST interface to leverage the Node Management Agent and HTTPS service. The vclusterops library replaces Administration tools (admintools), so you cannot access a shell within a container and execute admintools commands.
Version 24.1.0 also introduces API version v1. All examples in this section use API version v1 and the vcluster deployment type. API version v1beta1 is deprecated, and Vertica recommends that you migrate to API version v1. For migration details, see Upgrading Vertica on Kubernetes.
The Kubernetes API server stores only one format version of the custom resource. If you migrate to API version v1 and then create a custom resource with API version v1beta1, the conversion webhook automatically converts the custom resource to API version v1.
If you migrated to API version v1, you can view the v1beta1 equivalent of your custom resource with the following command:
$ kubectl get verticadbs.v1beta1.vertica.com cr-name -o yaml
The VerticaDB custom resource definition (CRD) deploys an Eon Mode database. Each subcluster is a StatefulSet, a workload resource type that persists data with ephemeral Kubernetes objects.
A VerticaDB custom resource (CR) requires a primary subcluster and a connection to a communal storage location to persist its data. The VerticaDB operator monitors the CR to maintain its desired state and validate state changes.
The following sections provide a YAML-formatted manifest that defines the minimum required fields to create a VerticaDB CR, and each subsequent section implements a production-ready recommendation or best practice using custom resource parameters. For a comprehensive list of all parameters and their definitions, see custom resource parameters.
Prerequisites
- Complete Installing the VerticaDB operator.
- Configure a dynamic volume provisioner.
- Confirm that you have the resources to deploy objects you plan to create.
- Optionally, acquire a Vertica license. By default, the Helm chart deploys the free Community Edition license. This license limits you to a three-node cluster and 1TB data.
- Configure a supported communal storage location with an empty communal path bucket.
- Understand Kubernetes Secrets and how Vertica manages Secrets. Secrets conceal sensitive information in your custom resource.
Minimal manifest
At minimum, a VerticaDB CR requires a connection to an empty communal storage bucket and a primary subcluster definition. The operator is namespace-scoped, so make sure that you apply the CR manifest in the same namespace as the operator.
The following VerticaDB CR connects to S3 communal storage and deploys a three-node primary subcluster. This manifest serves as the starting point for all implementations detailed in the subsequent sections:
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: cr-name
spec:
  licenseSecret: vertica-license
  passwordSecret: su-password
  communal:
    path: "s3://bucket-name/key-name"
    endpoint: "https://path/to/s3-endpoint"
    credentialSecret: s3-creds
    region: region
  subclusters:
    - name: primary
      size: 3
  shardCount: 6
The following sections detail the minimal manifest's CR parameters, and how to create the CR in the current namespace.
Required fields
Each VerticaDB manifest begins with required fields that describe the version, resource type, and metadata:
- apiVersion: The API group and Kubernetes API version in api-group/version format.
- kind: The resource type. VerticaDB is the name of the Vertica custom resource type.
- metadata: Data that identifies objects in the namespace.
- metadata.name: The name of this CR object. Provide a unique metadata.name value so that you can identify the CR and its resources in its namespace.
spec definition
The spec field defines the desired state of the CR. The operator control loop compares the spec definition to the current state and reconciles any differences. Nest all fields that define your StatefulSet under the spec field.
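For reference, the required fields and the spec wrap every setting described in the rest of this section. The following skeleton repeats only those pieces of the minimal manifest above:
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: cr-name
spec:
  subclusters:
    - name: primary
      size: 3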
Add a license
By default, the Helm chart pulls the free Vertica Community Edition (CE) image. The CE image has a restricted license that limits you to a three-node cluster and 1TB of data.
To add your license so that you can deploy more nodes and use more data, store your license in a Secret and add it to the manifest:
- Create a Secret from your Vertica license file:
  $ kubectl create secret generic vertica-license --from-file=license.dat=/path/to/license-file.dat
- Add the Secret to the licenseSecret field:
  ...
  spec:
    licenseSecret: vertica-license
  ...
The licenseSecret value is mounted in the Vertica server container in the /home/dbadmin/licensing/mnt directory.
Add password authentication
The passwordSecret field enables password authentication for the database. You must define this field when you create the CR—you cannot define a password for an existing database.
To create a database password, conceal it in a Secret before you add it to the manifest:
- Create a Secret from a literal string. You must use password as the key:
  $ kubectl create secret generic su-password --from-literal=password=password-value
- Add the Secret to the passwordSecret field:
  ...
  spec:
    ...
    passwordSecret: su-password
Connect to communal storage
Vertica on Kubernetes supports multiple communal storage locations. For implementation details for each communal storage location, see Configuring communal storage.
This CR connects to an S3 communal storage location. Define your communal storage location with the communal field:
...
spec:
  ...
  communal:
    path: "s3://bucket-name/key-name"
    endpoint: "https://path/to/s3-endpoint"
    credentialSecret: s3-creds
    region: region
...
This manifest sets the following parameters:
- credentialSecret: The Secret that contains your communal access and secret key credentials. The following command stores both your S3-compatible communal access and secret key credentials in a Secret named s3-creds:
  $ kubectl create secret generic s3-creds --from-literal=accesskey=accesskey --from-literal=secretkey=secretkey
  Note: Omit credentialSecret for environments that authenticate to S3 communal storage with Identity and Access Management (IAM) or IAM roles for service accounts (IRSA)—these methods do not require that you store your credentials in a Secret. For details, see Configuring communal storage.
- endpoint: The S3 endpoint URL.
- path: The location of the S3 storage bucket, in S3 bucket notation. This bucket must exist before you create the custom resource. After you create the custom resource, you cannot change this value.
- region: The geographic location of the communal storage resources. This field is valid for AWS and GCP only. If you set the wrong region, you cannot connect to the communal storage location.
Define a primary subcluster
Each CR requires a primary subcluster or it returns an error. At minimum, you must define the name and size of the subcluster:
...
spec:
  ...
  subclusters:
    - name: primary
      size: 3
...
This manifest sets the following parameters:
- name: The name of the subcluster.
- size: The number of pods in the subcluster.
When you define a CR with a single subcluster, the operator designates it as the primary subcluster. If your manifest includes multiple subclusters, you must use the type parameter to identify the primary subcluster. For example:
spec:
  ...
  subclusters:
    - name: primary
      size: 3
      type: primary
    - name: secondary
      size: 3
For additional details about primary and secondary subclusters, see Subclusters.
Set the shard count
shardCount specifies the number of shards in the database, which determines how subcluster nodes subscribe to communal storage data. You cannot change this value after you instantiate the CR. When you change the number of pods in a subcluster or add or remove a subcluster, the operator rebalances shards automatically.
Vertica recommends that the shard count equal double the number of nodes in the cluster. Because this manifest creates a three-node cluster with one Vertica server container per node, set shardCount to 6:
...
spec:
  ...
  shardCount: 6
For guidance on selecting the shard count, see Configuring your Vertica cluster for Eon Mode. For details about limiting each node to one Vertica server container, see Node affinity.
Apply the manifest
After you define the minimal manifest in a YAML-formatted file, use kubectl to create the VerticaDB CR. The following command creates a CR in the current namespace:
$ kubectl apply -f minimal.yaml
verticadb.vertica.com/cr-name created
After you apply the manifest, the operator creates the primary subcluster, connects to the communal storage, and creates the database. You can use kubectl wait to see when the database is ready:
$ kubectl wait --for=condition=DBInitialized=True vdb/cr-name --timeout=10m
verticadb.vertica.com/cr-name condition met
Specify an image
Each time the operator launches a container, it pulls the image for the most recently released Vertica version from the OpenText Dockerhub repository. Vertica recommends that you explicitly set the image that the operator pulls for your CR. For a list of available Vertica images, see the OpenText Dockerhub registry.
To run a specific image version, set the image parameter in docker-registry-hostname/image-name:tag format:
spec:
  ...
  image: vertica/vertica-k8s:version
When you specify an image other than latest, the operator pulls the image only when it is not available locally. You can control when the operator pulls the image with the imagePullPolicy custom resource parameter.
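For example, the following sketch pins a specific image tag and sets the pull policy explicitly. The IfNotPresent value is an assumption based on the standard Kubernetes pull policies; confirm the accepted values in the custom resource parameters reference:
spec:
  ...
  image: vertica/vertica-k8s:version
  imagePullPolicy: IfNotPresent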
Communal storage authentication
Your communal storage validates HTTPS connections with a self-signed certificate authority (CA) bundle. You must make the CA bundle's root certificate available to each Vertica server container so that the communal storage can authenticate requests from your subcluster.
This authentication requires that you set the following parameters:
- certSecrets: Adds a Secret that contains the root certificate. This parameter is a list of Secrets that encrypt internal and external communications for your CR. Each certificate is mounted in the Vertica server container filesystem in the /certs/Secret-name/cert-name directory.
- communal.caFile: Makes the communal storage location aware of the mount path that stores the certificate Secret.
Complete the following to add these parameters to the manifest:
- Create a Secret that contains the PEM-encoded root certificate. The following command creates a Secret named aws-cert:
  $ kubectl create secret generic aws-cert --from-file=root_cert.pem
- Add the certSecrets and communal.caFile parameters to the manifest:
  spec:
    ...
    communal:
      ...
      caFile: /certs/aws-cert/root_cert.pem
    certSecrets:
      - name: aws-cert
Now, the communal storage authenticates requests with the /certs/aws-cert/root_cert.pem file, whose contents are stored in the aws-cert Secret.
External client connections
Each subcluster communicates with external clients and internal pods through a service object. To configure the service object to accept external client connections, set the following parameters:
- serviceName: Assigns a custom name to the service object. A custom name lets you identify it among multiple subclusters. Service object names use the metadata.name-serviceName naming convention.
- serviceType: Defines the type of the subcluster service object. By default, a subcluster uses the ClusterIP serviceType, which sets a stable IP and port that is accessible from within Kubernetes only. In many circumstances, external client applications need to connect to a subcluster that is fine-tuned for that specific workload. For external client access, set the serviceType to NodePort or LoadBalancer.
  Note: The LoadBalancer service type is an external service type that is managed by your cloud provider. For implementation details, refer to the Kubernetes documentation and your cloud provider's documentation.
- serviceAnnotations: Assigns a custom annotation to the service object for implementation-specific services.
Add these external client connection parameters under the subclusters field:
spec:
  ...
  subclusters:
    ...
    serviceName: connections
    serviceType: LoadBalancer
    serviceAnnotations:
      service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24
This example creates a LoadBalancer service object named verticadb-connections. The serviceAnnotations parameter defines the CIDRs that can access the network load balancer (NLB). For additional details, see the AWS Load Balancer Controller documentation.
Note
If you run your CR on Amazon Elastic Kubernetes Service (EKS), Vertica recommends the AWS Load Balancer Controller. To use the AWS Load Balancer Controller, apply the following annotations:
serviceAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
  service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
For longer-running queries, you might need to configure TCP keepalive settings.
For additional details about Vertica and service objects, see Containerized Vertica on Kubernetes.
Authenticate clients
You might need to connect applications or command-line interface (CLI) tools to your VerticaDB CR. You can add TLS certificates that authenticate client requests with the certSecrets parameter:
- Create a Secret that contains your TLS certificates. The following command creates a Secret named mtls:
  $ kubectl create secret generic mtls --from-file=mtls=/path/to/mtls-cert
- Add the Secret to the certSecrets parameter:
  spec:
    ...
    certSecrets:
      ...
      - name: mtls
  This mounts the TLS certificates in the /certs/mtls/mtls-cert directory.
Sidecar logger
A sidecar is a utility container that runs in the same pod as your main application container and performs a task for that main application's process. The VerticaDB CR uses a sidecar container to handle logs for the Vertica server container. You can use the vertica-logger image to add a sidecar that sends logs from vertica.log to standard output on the host node for log aggregation.
Add a sidecar with the sidecars parameter. This parameter accepts a list of sidecar definitions, where each element specifies the following:
- name: Name of the sidecar. name indicates the beginning of a sidecar element.
- image: Image for the sidecar container.
The following example adds a single sidecar container that shares a pod with each Vertica server container:
spec:
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
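For instance, to use the vertica-logger image mentioned above instead of a generic image, the sidecar definition might look like the following sketch. The image name and tag are assumptions; verify the published name in the OpenText Dockerhub registry before use:
spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:latest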
This configuration persists logs only for the lifecycle of the container. To persist log data between pod lifecycles, you must mount a custom volume in the sidecar filesystem.
Persist logs with a volume
An external service that requires long-term access to Vertica server data should use a volume to persist that data between pod lifecycles. For details about volumes, see the Kubernetes documentation.
The following parameters add a volume to your CR and mount it in a sidecar container:
- volumes: Makes a custom volume available to the CR so that you can mount it in a container filesystem. This parameter requires a name value and a volume type.
- sidecars[i].volumeMounts: Mounts one or more volumes in the sidecar container filesystem. This parameter requires a name value and a mountPath value that defines where the volume is mounted in the sidecar container.
  Note: Vertica also provides a spec.volumeMounts parameter so you can mount volumes for other use cases. This parameter behaves like sidecars[i].volumeMounts, but it mounts volumes in the Vertica server container filesystem. For details, see Custom resource definition parameters.
The following example creates a volume of type emptyDir and mounts it in the sidecar-container filesystem:
spec:
  ...
  volumes:
    - name: sidecar-vol
      emptyDir: {}
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
      volumeMounts:
        - name: sidecar-vol
          mountPath: /path/to/sidecar-vol
Resource limits and requests
You should limit the amount of CPU and memory resources that each host node allocates for the Vertica server pod, and set the amount of resources each pod can request.
To control these values, set the following parameters under the subclusters.resources field:
- limits.cpu: Maximum number of CPUs that each server pod can consume.
- limits.memory: Maximum amount of memory that each server pod can consume.
- requests.cpu: Number of CPUs that each pod requests from the host node.
- requests.memory: Amount of memory that each pod requests from a PV.
When you change resource settings, Kubernetes restarts each pod with the updated settings.
Note
Select resource settings that your host nodes can accommodate. When a pod is started or rescheduled, Kubernetes searches for host nodes with enough resources available to start the pod. If there is not a host node with enough resources, the pod STATUS stays in Pending until the resources become available.
For guidance on setting production limits and requests, see Recommendations for Sizing Vertica Nodes and Clusters.
As a best practice, set resource.limits.* and resource.requests.* to equal values so that the pods are assigned to the Guaranteed Quality of Service (QoS) class. Equal settings also provide the best safeguard against the Out Of Memory (OOM) Killer in constrained environments.
The following example allocates 32 CPUs and 96 gigabytes of memory on the host node, and limits the requests to the same values. Because the limits.* and requests.* values are equal, the pods are assigned the Guaranteed QoS class:
spec:
  ...
  subclusters:
    ...
    resources:
      limits:
        cpu: 32
        memory: 96Gi
      requests:
        cpu: 32
        memory: 96Gi
Node affinity
Kubernetes affinity and anti-affinity settings control which resources the operator uses to schedule pods. As a best practice, you should set affinity to ensure that a single node does not serve more than one Vertica pod.
The following example creates an anti-affinity rule that schedules only one Vertica server pod per node:
spec:
  ...
  subclusters:
    ...
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - vertica
            topologyKey: "kubernetes.io/hostname"
The following provides a detailed explanation about all settings in the previous example:
- affinity: Provides control over pod and host scheduling using labels.
- podAntiAffinity: Uses pod labels to prevent scheduling on certain resources.
- requiredDuringSchedulingIgnoredDuringExecution: The rules defined under this statement must be met before a pod is scheduled on a host node.
- labelSelector: Identifies the pods affected by this affinity rule.
- matchExpressions: A list of pod selector requirements that consists of a key, operator, and values definition. This matchExpression rule checks if the host node is running another pod that uses a vertica label.
- topologyKey: Defines the scope of the rule. Because this uses the hostname topology label, this applies the rule in terms of pods and host nodes.
For additional details, see the Kubernetes documentation.
2 - EventTrigger custom resource definition
The EventTrigger custom resource definition (CRD) runs a task when the condition of a Kubernetes object changes to a specified status. EventTrigger extends the Kubernetes Job, a workload resource that creates pods, runs a task, then cleans up the pods after the task completes.
Prerequisites
- Deploy a VerticaDB operator.
- Confirm that you have the resources to deploy objects you plan to create.
Limitations
The EventTrigger CRD has the following limitations:
- It can monitor a condition status on only one VerticaDB custom resource (CR).
- You can match only one condition status.
- The EventTrigger and the object that it watches must exist within the same namespace.
Creating an EventTrigger
An EventTrigger resource defines the Kubernetes object that you want to watch, the status condition that triggers the Job, and a pod template that contains the Job logic and provides resources to complete the Job.
This example creates a YAML-formatted file named eventtrigger.yaml. When you apply eventtrigger.yaml to your VerticaDB CR, it creates a single-column database table when the VerticaDB CR's DBInitialized condition status changes to True:
$ kubectl describe vdb verticadb-name
Status:
  ...
  Conditions:
    ...
    Last Transition Time:   transition-time
    Status:                 True
    Type:                   DBInitialized
The following fields form the spec, which defines the desired state of the EventTrigger object:
- references: The Kubernetes object whose condition status you want to watch.
- matches: The condition and status that trigger the Job.
- template: Specification for the pods that run the Job after the condition status triggers an event.
The following steps create an EventTrigger CR:
- Add the apiVersion, kind, and metadata.name required fields:
  apiVersion: vertica.com/v1beta1
  kind: EventTrigger
  metadata:
    name: eventtrigger-example
- Begin the spec definition with the references field. The object field is an array whose values identify the VerticaDB CR object that you want to watch. You must provide the VerticaDB CR's apiVersion, kind, and name:
  spec:
    references:
      - object:
          apiVersion: vertica.com/v1beta1
          kind: VerticaDB
          name: verticadb-example
- Define the matches field that triggers the Job. EventTrigger can match only one condition:
  spec:
    ...
    matches:
      - condition:
          type: DBInitialized
          status: "True"
  The preceding example defines the following:
  - condition.type: The condition that the operator watches for state change.
  - condition.status: The status that triggers the Job.
- Add the template that defines the pod specifications that run the Job after matches.condition triggers an event.
  A pod template requires its own spec definition, and it can optionally have its own metadata. The following example includes metadata.generateName, which instructs the operator to generate a unique, random name for any pods that it creates for the Job. The trailing dash (-) separates the user-provided portion from the generated portion:
  spec:
    ...
    template:
      metadata:
        generateName: create-user-table-
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
              - name: main
                image: "vertica/vertica-k8s:latest"
                command: ["/opt/vertica/bin/vsql", "-h", "verticadb-sample-defaultsubcluster", "-f", "CREATE TABLE T1 (C1 INT);"]
  The remainder of the spec defines the following:
  - restartPolicy: When to restart all containers in the pod.
  - containers: The containers that run the Job.
  - name: The name of the container.
  - image: The image that the container runs.
  - command: An array that contains a command, where each element in the array combines to form a command. The final element creates the single-column SQL table.
Apply the manifest
After you create the EventTrigger, apply the manifest in the same namespace as the VerticaDB CR:
$ kubectl apply -f eventtrigger.yaml
eventtrigger.vertica.com/eventtrigger-example created
configmap/create-user-table-sql created
After you create the database, the operator runs a Job that creates a table. You can check the status with kubectl get job:
$ kubectl get job
NAME COMPLETIONS DURATION AGE
create-user-table 1/1 4s 7s
Verify that the table was created in the logs:
$ kubectl logs create-user-table-guid
CREATE TABLE
Complete file reference
apiVersion: vertica.com/v1beta1
kind: EventTrigger
metadata:
  name: eventtrigger-example
spec:
  references:
    - object:
        apiVersion: vertica.com/v1beta1
        kind: VerticaDB
        name: verticadb-example
  matches:
    - condition:
        type: DBInitialized
        status: "True"
  template:
    metadata:
      generateName: create-user-table-
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: main
              image: "vertica/vertica-k8s:latest"
              command: ["/opt/vertica/bin/vsql", "-h", "verticadb-sample-defaultsubcluster", "-f", "CREATE TABLE T1 (C1 INT);"]
Monitoring an EventTrigger
The following table describes the status fields that help you monitor an EventTrigger CR:
| Status Field | Description |
|---|---|
| references[].apiVersion | Kubernetes API version of the object that the EventTrigger CR watches. |
| references[].kind | Type of object that the EventTrigger CR watches. |
| references[].name | Name of the object that the EventTrigger CR watches. |
| references[].namespace | Namespace of the object that the EventTrigger CR watches. The EventTrigger and the object that it watches must exist within the same namespace. |
| references[].uid | Generated UID of the reference object. The operator generates this identifier when it locates the reference object. |
| references[].resourceVersion | Current resource version of the object that the EventTrigger watches. |
| references[].jobNamespace | If a Job was created for the object that the EventTrigger watches, the namespace of the Job. |
| references[].jobName | If a Job was created for the object that the EventTrigger watches, the name of the Job. |
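These fields appear under the status section of the EventTrigger object. To inspect them on a deployed CR, you can dump the full object as YAML, which works for any custom resource:
$ kubectl get eventtrigger eventtrigger-example -o yaml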
3 - VerticaAutoscaler custom resource definition
The VerticaAutoscaler custom resource (CR) is a HorizontalPodAutoscaler that automatically scales resources for existing subclusters using one of the following strategies:
- Subcluster scaling for short-running dashboard queries.
- Pod scaling for long-running analytic queries.
The VerticaAutoscaler CR scales using resource or custom metrics. Vertica manages subclusters by workload, which helps you pinpoint the best metrics to trigger a scaling event. To maintain data integrity, the operator does not scale down unless all connections to the pods are drained and sessions are closed.
For details about the algorithm that determines when the VerticaAutoscaler scales, see the Kubernetes documentation.
Additionally, the VerticaAutoscaler provides a webhook to validate state changes. By default, this webhook is enabled. You can configure this webhook with the webhook.enable Helm chart parameter.
Examples
The examples in this section use the following VerticaDB custom resource. Each example uses CPU to trigger scaling:
apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: dbname
spec:
  communal:
    path: "path/to/communal-storage"
    endpoint: "path/to/communal-endpoint"
    credentialSecret: credentials-secret
  subclusters:
    - name: primary1
      size: 3
      isPrimary: true
      serviceName: primary1
      resources:
        limits:
          cpu: "8"
        requests:
          cpu: "4"
Prerequisites
- Complete Installing the VerticaDB operator.
- Install the kubectl command line tool.
- Complete VerticaDB custom resource definition.
- Confirm that you have the resources to scale.
Note
By default, the custom resource uses the free Community Edition (CE) license. This license allows you to deploy up to three nodes with a maximum of 1TB of data. To add resources beyond these limits, you must add your Vertica license to the custom resource as described in VerticaDB custom resource definition.
- Set a value for the metric that triggers scaling. For example, if you want to scale by CPU utilization, you must set CPU limits and requests.
Subcluster scaling
Automatically adjust the number of subclusters in your custom resource to fine-tune resources for short-running dashboard queries. For example, increase the number of subclusters to increase throughput. For more information, see Improving query throughput using subclusters.
All subclusters share the same service object, so there are no required changes to external service objects. Pods in the new subcluster are load balanced by the existing service object.
The following example creates a VerticaAutoscaler custom resource that scales by subcluster when the VerticaDB uses 50% of the node's available CPU:
- Define the VerticaAutoscaler custom resource in a YAML-formatted manifest:
  apiVersion: vertica.com/v1beta1
  kind: VerticaAutoscaler
  metadata:
    name: autoscaler-name
  spec:
    verticaDBName: dbname
    scalingGranularity: Subcluster
    serviceName: primary1
- Create the VerticaAutoscaler with the kubectl autoscale command:
  $ kubectl autoscale verticaautoscaler autoscaler-name --cpu-percent=50 --min=3 --max=12
The previous command creates a HorizontalPodAutoscaler object that:
- Sets the target CPU utilization to 50%.
- Scales to a minimum of three pods in one subcluster, and 12 pods in four subclusters.
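If you prefer to manage the HorizontalPodAutoscaler declaratively instead of with kubectl autoscale, the following manifest is a rough equivalent of the previous command. It is a sketch that assumes the autoscaling/v2 API is available in your cluster; adjust the API version and metric block to match your environment:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: autoscaler-name-hpa
spec:
  scaleTargetRef:
    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    name: autoscaler-name
  minReplicas: 3
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50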
Pod scaling
For long-running, analytic queries, increase the pod count for a subcluster. For additional information about Vertica and analytic queries, see Using elastic crunch scaling to improve query performance.
When you scale pods in an Eon Mode database, you must consider the impact on database shards. For details, see Namespaces and shards.
The following example creates a VerticaAutoscaler custom resource that scales by pod when the VerticaDB uses 50% of the node's available CPU:
- Define the VerticaAutoscaler custom resource in a YAML-formatted manifest:
  apiVersion: vertica.com/v1beta1
  kind: VerticaAutoscaler
  metadata:
    name: autoscaler-name
  spec:
    verticaDBName: dbname
    scalingGranularity: Pod
    serviceName: primary1
- Create the autoscaler instance with the kubectl autoscale command:
  $ kubectl autoscale verticaautoscaler autoscaler-name --cpu-percent=50 --min=3 --max=12
The previous command creates a HorizontalPodAutoscaler object that:
- Sets the target CPU utilization to 50%.
- Scales to a minimum of three pods in one subcluster, and 12 pods in four subclusters.
Event monitoring
To view the VerticaAutoscaler object, use the kubectl describe hpa command:
$ kubectl describe hpa autoscaler-name
Name: as
Namespace: vertica
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 12 Apr 2022 15:11:28 -0300
Reference: VerticaAutoscaler/as
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (9m) / 50%
Min replicas: 3
Max replicas: 12
VerticaAutoscaler pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
When a scaling event occurs, you can view the admintools commands to scale the cluster. Use kubectl to view the StatefulSets:
$ kubectl get statefulsets
NAME READY AGE
db-name-as-instance-name-0 0/3 71s
db-name-primary1 3/3 39m
Use kubectl describe to view the executing commands:
$ kubectl describe vdb dbname | tail
Upgrade Status:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ReviveDBStart 41m verticadb-operator Calling 'admintools -t revive_db'
Normal ReviveDBSucceeded 40m verticadb-operator Successfully revived database. It took 25.255683916s
Normal ClusterRestartStarted 40m verticadb-operator Calling 'admintools -t start_db' to restart the cluster
Normal ClusterRestartSucceeded 39m verticadb-operator Successfully called 'admintools -t start_db' and it took 44.713787718s
Normal SubclusterAdded 10s verticadb-operator Added new subcluster 'as-0'
Normal AddNodeStart 9s verticadb-operator Calling 'admintools -t db_add_node' for pod(s) 'db-name-as-instance-name-0-0, db-name-as-instance-name-0-1, db-name-as-instance-name-0-2'
4 - VerticaReplicator custom resource definition
The VerticaReplicator custom resource (CR) facilitates in-database replication through the Vertica Kubernetes operator. This feature allows you to create a VerticaReplicator CR to replicate databases for copying data, testing, or performing active online upgrade. It supports replication to and from sandbox environments. Additionally, both password-based authentication and source TLS authentication are supported.
Important
- Vertica only supports replicating all data from a database or sandbox to another database or sandbox. Partial table replication is not supported.
- The version of the target database or sandbox must be the same as or higher than the version of the source database or sandbox.
- Vertica only supports replication for the source database that contains a single namespace.
The VerticaReplicator custom resource (CR) runs replicate on a VerticaDB CR, which copies table or schema data directly from one Eon Mode database's communal storage (source VerticaDB) to another (target VerticaDB).
Prerequisites
- Deploy a source and target VerticaDB CR that uses VerticaDB API version v1 with vclusterops.
- Deploy a VerticaDB operator version 2.0 and higher.
Create a VerticaReplicator CR
A VerticaReplicator CR spec only requires the names of the source and target VerticaDB CRs for which you want to perform replication. The following example defines the CR as a YAML-formatted file named vreplicator-example.yaml:
apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vreplicator-example
spec:
  source:
    verticaDB: "vertica-src"
  target:
    verticaDB: "vertica-trg"
For a complete list of parameters that you can set for a VerticaReplicator CR, see Custom resource definition parameters.
Apply the manifest
After you create the VerticaReplicator CR, apply the manifest in the same namespace as the VerticaDB CRs that it references in the source and target fields:
$ kubectl apply -f vreplicator-example.yaml
verticareplicator.vertica.com/vreplicator-example created
The operator starts the replication process and copies the table and schema data from the source VerticaDB to the target VerticaDB.
You can check the applied CRs as follows:
$ kubectl get vrep
NAME SOURCEVERTICADB TARGETVERTICADB STATE AGE
vreplicator-example vertica-src vertica-trg Replicating 2s
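Replication runs asynchronously. If you want to block until it finishes, you can wait on the ReplicationComplete status condition described below; this relies on the standard kubectl wait handling of status conditions:
$ kubectl wait --for=condition=ReplicationComplete=True vrep/vreplicator-example --timeout=10m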
Replicating to a sandboxed subcluster
You can replicate from a source database to a sandboxed subcluster. The following example defines the CR as a YAML-formatted file named vreplicator-trg-sandbox.yaml:
apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vreplicator-trg-sandbox
spec:
  source:
    verticaDB: "vertica-src"
  target:
    verticaDB: "vertica-trg"
    sandboxName: "sandbox1"
After you apply the manifest, the operator copies the table and schema data from the source VerticaDB to the sandboxed subcluster "sandbox1" on the target VerticaDB.
Replication status
You can check the replication status as follows:
$ kubectl describe vrep
Name: vreplicator-example
Namespace: vertica
Labels: <none>
Annotations: <none>
API Version: vertica.com/v1beta1
Kind: VerticaReplicator
Metadata:
Creation Timestamp: 2024-07-24T12:34:51Z
Generation: 1
Resource Version: 19058685
UID: be90db7f-3ed5-49c0-9d86-94f87d681806
Spec:
Source:
Vertica DB: vertica-src
Target:
Vertica DB: vertica-trg
Status:
Conditions:
Last Transition Time: 2024-07-24T12:34:51Z
Message:
Reason: Ready
Status: True
Type: ReplicationReady
Last Transition Time: 2024-07-24T12:35:01Z
Message:
Reason: Succeeded
Status: False
Type: Replicating
Last Transition Time: 2024-07-24T12:35:01Z
Message:
Reason: Succeeded
Status: True
Type: ReplicationComplete
State: Replication successful
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ReplicationStarted 4m3s verticadb-operator Starting replication
Normal ReplicationSucceeded 3m57s verticadb-operator Successfully replicated database in 5s
Conditions
The Conditions field summarizes each stage of the replication and contains the following fields:
- Last Transition Time: Timestamp that indicates when the status condition last changed.
- Message: This field is not in use; you can safely ignore it.
- Reason: Indicates why the replication stage is in its current Status.
- Status: Boolean, indicates whether the replication stage is currently in process.
- Type: The replication that the VerticaDB operator is executing in this stage.
The following table describes each Conditions.Type, and all possible value combinations for its Reason and Status field values:
| Type | Description | Status | Reason |
|---|---|---|---|
| ReplicationReady | The operator is ready to start the database replication. | True | Ready |
| ReplicationReady | | False | Source database or sandbox is running a version earlier than 24.3.0. Target database or sandbox version is lower than the source database or sandbox version. Source database is deployed using admintools. |
| Replicating | The operator is replicating the database. | True | Started |
| Replicating | | False | |
| ReplicationComplete | The database replication is complete. | True | Succeeded |
5 - VerticaRestorePointsQuery custom resource definition
Important
Beta Feature: For Test Environments Only
The VerticaRestorePointsQuery custom resource (CR) retrieves details about saved restore points that you can use to roll back your database to a previous state or restore specific objects in a VerticaDB CR.
A VerticaRestorePointsQuery CR defines query parameters that the VerticaDB operator uses to retrieve restore points from an archive. A restore point is a snapshot of a database at a specific point in time that can consist of an entire database or a subset of database objects. Each restore point has a unique identifier and a timestamp. An archive is a collection of chronologically organized restore points.
You specify the archive and an optional period of time, and the operator queries the archive and retrieves details about restore points saved in the archive. You can use the query results to revive a VerticaDB CR with the data saved in the restore point.
Prerequisites
- Deploy a VerticaDB CR
- Deploy a VerticaDB operator
Save restore points
You can save a restore point using the VerticaDB operator in Kubernetes or by using vsql in Vertica.
Save a restore point using VerticaDB operator
Important
- This is a Beta feature.
- To save a restore point, you must be running Vertica version 24.4.0-0 or later.
- Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named vdb for editing:
  $ kubectl edit vdb
- In the spec section of the custom resource, add an entry for the archive name. The VerticaDB operator creates the archive, using the spec.restorePoint.archive value as the archive name, to save the restore point.
  apiVersion: vertica.com/v1
  kind: VerticaDB
  metadata:
    name: vertica-db
  spec:
    ...
    restorePoint:
      archive: demo_archive
- To save a restore point, edit the status condition as follows:
  $ kubectl edit --subresource=status vdb/vertica-db
- Add the following conditions to initialize save restore point:
  apiVersion: vertica.com/v1
  kind: VerticaDB
  metadata:
    name: vertica-db
  spec:
    ...
  status:
    ...
    conditions:
    - lastTransitionTime: "2024-10-01T17:27:27Z"
      message: ""
      reason: Init
      status: "True"
      type: SaveRestorePointNeeded
  Note
  - You need to set all five fields as shown in the example when you trigger save restore point for the first time. After that, only the status field needs to be updated for subsequent triggers.
  - The lastTransitionTime, message, and reason fields can have any value. lastTransitionTime must follow the date-time format shown in the example. reason must contain at least one character.
  - After the restore point is saved, the status field changes to False. If you want to trigger another restore point in the future, you only need to update the status field to True without changing the other fields.
- You can check the status of the restore point as follows:
  $ kubectl describe vdb
  Name:         vertica-db
  ...
  Events:
    Type    Reason                     Age   From                Message
    ----    ------                     ----  ----                -------
    Normal  CreateArchiveStart         54s   verticadb-operator  Starting create archive
    Normal  CreateArchiveSucceeded     54s   verticadb-operator  Successfully create archive. It took 0s
    Normal  SaveRestorePointStart      54s   verticadb-operator  Starting save restore point
    Normal  SaveRestorePointSucceeded  33s   verticadb-operator  Successfully save restore point to archive: demo_archive. It took 20s
- You can get the new archive name, start timestamp, and end timestamp from the vdb status. To retrieve details about the most recently created restore point, use these values (archive, startTimestamp, and endTimestamp) as filter options in a VerticaRestorePointsQuery CR. See Create a VerticaRestorePointsQuery.
  $ kubectl describe vdb
  ...
  Status:
    ...
    Restore Point:
      Archive:          demo_archive
      End Timestamp:    2024-10-09 12:25:28.956094972
      Start Timestamp:  2024-10-09 12:25:19.029997424
Save a restore point using vsql in Vertica
Important
This section contains the following SQL elements that are in Beta:
- CREATE ARCHIVE statement
- SAVE RESTORE POINT TO ARCHIVE statement
- ARCHIVE_RESTORE_POINTS system table
Before the VerticaDB operator can retrieve restore points, you must create an archive and save restore points to that archive. You can leverage stored procedures and scheduled execution to save restore points to an archive on a regular schedule. In the following sections, you schedule a stored procedure to save restore points to an archive every night at 9:00 PM.
Note
The following steps demonstrate how to retrieve restore points for a VerticaDB CR named restorepoints. Each kubectl exec command executes vsql statements in a running Vertica server container in restorepoints.
For details about kubectl exec, see the Kubernetes documentation.
Create the archive and schedule restore points
Create an archive and then create a stored procedure that saves a restore point to that archive:
- Create an archive with CREATE ARCHIVE. The following statement creates an archive named nightly because it will store restore points that are saved every night:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "CREATE ARCHIVE nightly;"
  CREATE ARCHIVE
- Create a stored procedure that saves a restore point. The SAVE RESTORE POINT TO ARCHIVE statement creates a restore point and saves it to the nightly archive:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "CREATE OR REPLACE PROCEDURE take_nightly() LANGUAGE PLvSQL AS \$\$ BEGIN EXECUTE 'SAVE RESTORE POINT TO ARCHIVE nightly'; END; \$\$;"
  CREATE PROCEDURE
- To test the stored procedure, execute it with the CALL statement:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "CALL take_nightly();"
   take_nightly
  --------------
              0
  (1 row)
- To verify that the stored procedure saved the restore point, query the ARCHIVE_RESTORE_POINTS system table to return the number of restore points in the specified archive:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "SELECT COUNT(*) FROM ARCHIVE_RESTORE_POINTS WHERE ARCHIVE = 'nightly';"
   COUNT
  -------
       1
  (1 row)
Schedule the stored procedure
Schedule the stored procedure so that it saves a restore point to the nightly archive each night:
- Schedule a time to execute the stored procedure with CREATE SCHEDULE. This statement uses a cron expression to create a schedule at 9:00 PM each night:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "CREATE SCHEDULE nightly_sched USING CRON '0 21 * * *';"
  CREATE SCHEDULE
- Set CREATE TRIGGER to execute the take_nightly stored procedure with the nightly_sched schedule:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "CREATE TRIGGER trigger_nightly_sched ON SCHEDULE nightly_sched EXECUTE PROCEDURE take_nightly() AS DEFINER;"
  CREATE TRIGGER
Verify the archive automation
After you create the stored procedure and configure its schedule, test that it executes and saves a restore point at the scheduled time:
- Before the cron job is scheduled to run, verify the system time with the date shell built-in:
  $ date -u
  Thu Feb 29 20:59:15 UTC 2024
- Wait until the scheduled time elapses:
  $ date -u
  Thu Feb 29 21:00:07 UTC 2024
- To verify that the scheduled stored procedure executed on time, query the ARCHIVE_RESTORE_POINTS system table for details about the nightly archive:
  $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "SELECT COUNT(*) FROM ARCHIVE_RESTORE_POINTS WHERE ARCHIVE = 'nightly';"
   COUNT
  -------
       2
  (1 row)
COUNT is incremented by one, so the stored procedure saved the restore point on schedule.
Create a VerticaRestorePointsQuery
A VerticaRestorePointsQuery manifest specifies an archive and an optional time duration. The VerticaDB operator uses this information to retrieve details about the restore points that were saved to the archive.
Create and apply the manifest
The following manifest defines a VerticaRestorePointsQuery CR named vrpq. The vrpq CR instructs the operator to retrieve from the nightly archive all restore points saved on February 29, 2024:
- Create a file named vrpq.yaml that contains the following manifest:
  apiVersion: vertica.com/v1beta1
  kind: VerticaRestorePointsQuery
  metadata:
    name: vrpq
  spec:
    verticaDBName: restorepoints
    filterOptions:
      archiveName: "nightly"
      startTimestamp: 2024-02-29
      endTimestamp: 2024-02-29
  The spec contains the following fields:
  - verticaDBName: Name of the VerticaDB CR that you want to retrieve restore points for.
  - filterOptions.archiveName: Archive that contains the restore points that you want to retrieve.
  - filterOptions.startTimestamp: Retrieve restore points that were saved on or after this date.
  - filterOptions.endTimestamp: Retrieve restore points that were saved on or before this date.
  For additional details about these parameters, see Custom resource definition parameters.
- Apply the manifest in the current namespace with kubectl:
  $ kubectl apply -f vrpq.yaml
  verticarestorepointsquery.vertica.com/vrpq created
  After you apply the manifest, the operator begins working to retrieve the restore points.
- Verify that the query succeeded with kubectl:
  $ kubectl get vrpq
  NAME   VERTICADB       STATE              AGE
  vrpq   restorepoints   Query successful   10s
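If you script this workflow, you can also wait on the QueryComplete condition listed below instead of polling kubectl get; as with the other examples, this assumes the standard kubectl wait handling of status conditions:
$ kubectl wait --for=condition=QueryComplete=True vrpq/vrpq --timeout=5m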
View retrieved restore points
After you apply the VerticaRestorePointsQuery CR, you can view the retrieved restore points with kubectl describe. kubectl describe returns a Status section, which describes the query activity and properties for each retrieved restore point:
$ kubectl describe vrpq
Name: vrpq
...
Status:
Conditions:
Last Transition Time: 2024-03-15T17:40:39Z
Message:
Reason: Completed
Status: True
Type: QueryReady
Last Transition Time: 2024-03-15T17:40:41Z
Message:
Reason: Completed
Status: False
Type: Querying
Last Transition Time: 2024-03-15T17:40:41Z
Message:
Reason: Completed
Status: True
Type: QueryComplete
Restore Points:
Archive: nightly
Id: af8cd407-246a-4500-bc69-0b534e998cc6
Index: 1
Timestamp: 2024-02-29 21:00:00.728787
vertica_version: version
State: Query successful
...
The Status section contains relevant restore points details in the Conditions and Restore Points fields.
Conditions
The Conditions field summarizes each stage of the restore points query and contains the following fields:
- Last Transition Time: Timestamp that indicates when the status condition last changed.
- Message: This field is not in use; you can safely ignore it.
- Reason: Indicates why the query stage is in its current Status.
- Status: Boolean, indicates whether the query stage is currently in process.
- Type: The query that the VerticaDB operator is executing in this stage.
The following table describes each Conditions.Type, and all possible value combinations for its Reason and Status field values:
| Type | Description | Status | Reason |
|---|---|---|---|
| QueryReady | The operator verified that the query is executable in the environment. | True | Completed |
| QueryReady | | False | |
| Querying | The operator is running the query. | True | Started |
| Querying | | False | |
| QueryComplete | The query is complete and the restore points are available in the Restore Points array. | True | Completed |
Restore Points
The Restore Points field lists each restore point that was retrieved from the archive and contains the following fields:
- Archive: The archive that contains this restore point.
- Id: Unique identifier for the restore point.
- Index: Restore point rank ordering in the archive, by descending timestamp. 1 is the most recent restore point.
- Timestamp: Time that indicates when the restore point was created.
- vertica_version: Database version when this restore point was saved to the archive.
Restore the database
After the operator retrieves the restore points, you can restore the database with the archive name and either the restore point Index or Id. In addition, you must set initPolicy to Revive:
- Delete the existing CR:
  $ kubectl delete -f restorepoints.yaml
  verticadb.vertica.com "restorepoints" deleted
- Update the CR. Change the initPolicy to Revive, and add the restore point information. You might have to set ignore-cluster-lease to true:
  apiVersion: vertica.com/v1
  kind: VerticaDB
  metadata:
    name: restorepoints
    annotations:
      vertica.com/ignore-cluster-lease: "true"
  spec:
    initPolicy: Revive
    restorePoint:
      archive: "nightly"
      index: 1
    ...
- Apply the updated manifest:
  $ kubectl apply -f restorepoints.yaml
  verticadb.vertica.com/restorepoints created
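As with a newly created database, you can watch for the revived database to come online by waiting on the DBInitialized condition shown earlier in this section:
$ kubectl wait --for=condition=DBInitialized=True vdb/restorepoints --timeout=10m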
6 - VerticaScrutinize custom resource definition
The VerticaScrutinize custom resource (CR) runs scrutinize on a VerticaDB CR, which collects diagnostic information about the VerticaDB cluster and packages it in a tar file. This diagnostic information is commonly requested when resolving a case with Vertica Support.
When you create a VerticaScrutinize CR in your cluster, the VerticaDB operator creates a short-lived pod and runs scrutinize in two stages:
- An init container runs scrutinize on the VerticaDB CR. This produces a tar file named VerticaScrutinize.timestamp.tar that contains the diagnostic information. Optionally, you can define one or more init containers that perform additional processing after scrutinize completes.
- A main container persists the tar file in its file system in the /tmp/scrutinize/ directory. This main container lives for 30 minutes.
When resolving a support case, Vertica Support might request that you upload the tar file to a secure location, such as Vertica Advisor Report.
Prerequisites
- Deploy a VerticaDB CR that uses VerticaDB API version v1 with vclusterops.
- Deploy a VerticaDB operator version 2.0 and higher.
Create a VerticaScrutinize CR
A VerticaScrutinize CR spec requires only the name of the VerticaDB CR for which you want to collect diagnostic information. The following example defines the CR as a YAML-formatted file named vscrutinize-example.yaml:
apiVersion: vertica.com/v1beta1
kind: VerticaScrutinize
metadata:
  name: vscrutinize-example
spec:
  verticaDBName: verticadb-name
For a complete list of parameters that you can set for a VerticaScrutinize CR, see Custom resource definition parameters.
Apply the manifest
After you create the VerticaScrutinize CR, apply the manifest in the same namespace as the CR specified by verticaDBName:
$ kubectl apply -f vscrutinize-example.yaml
verticascrutinize.vertica.com/vscrutinize-example created
The operator creates an init container that runs scrutinize:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
verticadb-operator-manager-68b7d45854-22c8p 1/1 Running 0 3d17h
vscrutinize-example 0/1 Init:0/1 0 14s
After the init container completes, a new container is created, and the tar file is stored in its file system at /tmp/scrutinize. This container persists for 30 minutes:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
...
verticadb-operator-manager-68b7d45854-22c8p 1/1 Running 0 3d20h
vscrutinize-example 1/1 Running 0 21s
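While the main container is running, you can copy the tar file to your workstation so that you can upload it where Vertica Support directs you. The file name below is a placeholder; the actual name includes the timestamp of the scrutinize run:
$ kubectl cp vscrutinize-example:/tmp/scrutinize/VerticaScrutinize.<timestamp>.tar ./VerticaScrutinize.<timestamp>.tar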
Add init containers
When you apply a VerticaScrutinize CR, the VerticaDB operator creates an init container that prepares and runs the scrutinize command. You can add one or more init containers to perform additional steps after scrutinize creates a tar file and before the tar file is saved in the main container.
For example, you can define an init container that sends the tar file to another location, such as an S3 bucket. The following manifest defines an initContainers field that uploads the scrutinize tar file to an S3 bucket:
apiVersion: vertica.com/v1beta1
kind: VerticaScrutinize
metadata:
  name: vscrutinize-example-copy-to-s3
spec:
  verticaDBName: verticadb-name
  initContainers:
    - command:
        - bash
        - '-c'
        - 'aws s3 cp $(SCRUTINIZE_TARBALL) s3://k8test/scrutinize/'
      env:
        - name: AWS_REGION
          value: us-east-1
      image: 'amazon/aws-cli:2.2.24'
      name: copy-tarfile-to-s3
      securityContext:
        privileged: true
Note
Before you can send the tar file to an S3 bucket, make sure you can write to the S3 bucket. The scrutinize pod assumes the same service account as the Vertica pod.
In the previous example, initContainers.command executes a command that accesses the SCRUTINIZE_TARBALL environment variable. The operator sets this environment variable in the scrutinize pod, and it defines the location of the tar file in the main container.