Custom resource definitions

The custom resource definition (CRD) is a shared global object that extends the Kubernetes API beyond the standard resource types. The CRD serves as a blueprint for custom resource (CR) instances. You create CRs that specify the desired state of your environment, and the operator monitors the CR to maintain state for the objects within its namespace.

1 - VerticaDB custom resource definition

The VerticaDB custom resource definition (CRD) deploys an Eon Mode database. Each subcluster is a StatefulSet, a workload resource type that persists data with ephemeral Kubernetes objects.

A VerticaDB custom resource (CR) requires a primary subcluster and a connection to a communal storage location to persist its data. The VerticaDB operator monitors the CR to maintain its desired state and validate state changes.

The following sections provide a YAML-formatted manifest that defines the minimum required fields to create a VerticaDB CR, and each subsequent section implements a production-ready recommendation or best practice using custom resource parameters. For a comprehensive list of all parameters and their definitions, see custom resource parameters.

Prerequisites

Minimal manifest

At minimum, a VerticaDB CR requires a connection to an empty communal storage bucket and a primary subcluster definition. The operator is namespace-scoped, so make sure that you apply the CR manifest in the same namespace as the operator.

The following VerticaDB CR connects to S3 communal storage and deploys a three-node primary subcluster. This manifest serves as the starting point for all implementations detailed in the subsequent sections:

      
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: cr-name
spec:
  licenseSecret: vertica-license
  passwordSecret: su-password
  communal:
    path: "s3://bucket-name/key-name"
    endpoint: "https://path/to/s3-endpoint"
    credentialSecret: s3-creds
    region: region
  subclusters:
    - name: primary
      size: 3
  shardCount: 6

The following sections detail the minimal manifest's CR parameters, and how to create the CR in the current namespace.

Required fields

Each VerticaDB manifest begins with required fields that describe the version, resource type, and metadata:

  • apiVersion: The API group and Kubernetes API version in api-group/version format.
  • kind: The resource type. VerticaDB is the name of the custom resource type of the database.
  • metadata: Data that identifies objects in the namespace.
  • metadata.name: The name of this CR object. Provide a unique metadata.name value so that you can identify the CR and its resources in its namespace.

spec definition

The spec field defines the desired state of the CR. The operator control loop compares the spec definition to the current state and reconciles any differences.

Nest all fields that define your StatefulSet under the spec field.

Add a license

By default, the Helm chart pulls the free OpenText™ Analytics Database Community Edition (CE) image. The CE image has a restricted license that limits you to a three-node cluster and 1TB of data.

To add your license so that you can deploy more nodes and use more data, store your license in a Secret and add it to the manifest:

  1. Create a Secret from your database license file:
    $ kubectl create secret generic vertica-license --from-file=license.dat=/path/to/license-file.dat
    
  2. Add the Secret to the licenseSecret field:
    ...
    spec:
      licenseSecret: vertica-license
      ...
    

The licenseSecret value is mounted in the database server container in the /home/dbadmin/licensing/mnt directory.
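
To confirm that the license is mounted, you can list the directory from a running server container. The following is a quick check that assumes a pod named cr-name-primary-0, following the operator's typical metadata.name-subclusterName-index pod naming:

$ kubectl exec cr-name-primary-0 -c server -- ls /home/dbadmin/licensing/mnt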

Add password authentication

The passwordSecret field enables password authentication for the database. You must define this field when you create the CR—you cannot define a password for an existing database.

To create a database password, conceal it in a Secret before you add it to the manifest:

  1. Create a Secret from a literal string. You must use password as the key:
    $ kubectl create secret generic su-password --from-literal=password=password-value
    
  2. Add the Secret to the passwordSecret field:
    ...
    spec:
      ...
      passwordSecret: su-password
    
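To verify the Secret before you apply the manifest, you can decode the password key. A quick check, assuming the su-password Secret created in the previous step:

$ kubectl get secret su-password -o jsonpath='{.data.password}' | base64 --decode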

Connect to communal storage

OpenText™ Analytics Database on Kubernetes supports multiple communal storage locations. For implementation details for each communal storage location, see Configuring communal storage.

This CR connects to an S3 communal storage location. Define your communal storage location with the communal field:

      
...
spec:
  ...
  communal:
    path: "s3://bucket-name/key-name"
    endpoint: "https://path/to/s3-endpoint"
    credentialSecret: s3-creds
    region: region
  ...

This manifest sets the following parameters:

  • credentialSecret: The Secret that contains your communal access and secret key credentials.

    The following command stores both your S3-compatible communal access and secret key credentials in a Secret named s3-creds:

    $ kubectl create secret generic s3-creds --from-literal=accesskey=accesskey --from-literal=secretkey=secretkey
    

  • endpoint: The S3 endpoint URL.

  • path: The location of the S3 storage bucket, in S3 bucket notation. This bucket must exist before you create the custom resource. After you create the custom resource, you cannot change this value.

  • region: The geographic location of the communal storage resources. This field is valid for AWS and GCP only. If you set the wrong region, you cannot connect to the communal storage location.
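
Before you apply the CR, you can confirm that the bucket exists and that your credentials can reach the endpoint. A minimal check with the AWS CLI, assuming it is installed and configured with the same access key, secret key, and region as the s3-creds Secret:

$ aws s3 ls s3://bucket-name --endpoint-url https://path/to/s3-endpoint --region region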

Define a primary subcluster

Each CR requires a primary subcluster or it returns an error. At minimum, you must define the name and size of the subcluster:

...
spec:
  ...
  subclusters:
    - name: primary
      size: 3
  ...

This manifest sets the following parameters:

  • name: The name of the subcluster.
  • size: The number of pods in the subcluster.

When you define a CR with a single subcluster, the operator designates it as the primary subcluster. If your manifest includes multiple subclusters, you must use the type parameter to identify the primary subcluster. For example:

spec:
  ...
  subclusters:
    - name: primary
      size: 3
      type: primary
    - name: secondary
      size: 3

For additional details about primary and secondary subclusters, see Subclusters.

Set the shard count

shardCount specifies the number of shards in the database, which determines how subcluster nodes subscribe to communal storage data. You cannot change this value after you instantiate the CR. When you change the number of pods in a subcluster or add or remove a subcluster, the operator rebalances shards automatically.

As a best practice, set the shard count to twice the number of nodes in the cluster. Because this manifest creates a three-node cluster with one database server container per node, set shardCount to 6:

...
spec:
  ...
  shardCount: 6

For guidance on selecting the shard count, see Configuring your Vertica cluster for Eon Mode. For details about limiting each node to one database server container, see Node affinity.

Apply the manifest

After you define the minimal manifest in a YAML-formatted file, use kubectl to create the VerticaDB CR. The following command creates a CR in the current namespace:

$ kubectl apply -f minimal.yaml
verticadb.vertica.com/cr-name created

After you apply the manifest, the operator creates the primary subcluster, connects to the communal storage, and creates the database. You can use kubectl wait to see when the database is ready:

$ kubectl wait --for=condition=DBInitialized=True vdb/cr-name --timeout=10m 
verticadb.vertica.com/cr-name condition met
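
You can also list the objects that the operator created for the CR. The resource names are derived from metadata.name and the subcluster names, so the exact output depends on your manifest:

$ kubectl get statefulsets,pods,svc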

Specify an image

Each time the operator launches a container, it pulls the image for the most recently released database version from the OpenText Dockerhub repository. It is recommended that you explicitly set the image that the operator pulls for your CR. For a list of available database images, see the OpenText Dockerhub registry.

To run a specific image version, set the image parameter in docker-registry-hostname/image-name:tag format:

spec:
  ...
  image: vertica/vertica-k8s:version

When you specify an image other than the latest, the operator pulls the image only when it is not available locally. You can control when the operator pulls the image with the imagePullPolicy custom resource parameter.
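
For example, the following sketch forces the operator to pull the image on every container start by combining the image parameter with a standard Kubernetes pull policy value:

spec:
  ...
  image: vertica/vertica-k8s:version
  imagePullPolicy: Always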

Communal storage authentication

Your communal storage validates HTTPS connections with a self-signed certificate authority (CA) bundle. You must make the CA bundle's root certificate available to each database server container so that the communal storage can authenticate requests from your subcluster.

This authentication requires that you set the following parameters:

  • certSecrets: Adds a Secret that contains the root certificate.

    This parameter is a list of Secrets that contain certificates for internal and external communications for your CR. Each certificate is mounted in the database server container filesystem in the /certs/Secret-name/cert-name directory.

  • communal.caFile: Makes the communal storage location aware of the mount path that stores the certificate Secret.

Complete the following to add these parameters to the manifest:

  1. Create a Secret that contains the PEM-encoded root certificate. The following command creates a Secret named aws-cert:
    $ kubectl create secret generic aws-cert --from-file=root_cert.pem
    
  2. Add the certSecrets and communal.caFile parameters to the manifest:
    spec:
      ...
      communal:
        ...
        caFile: /certs/aws-cert/root_cert.pem
      certSecrets:
        - name: aws-cert
    

Now, the communal storage authenticates requests with the /certs/aws-cert/root_cert.pem file, whose contents are stored in the aws-cert Secret.
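
To confirm that the certificate is mounted where caFile expects it, you can list the mount path inside a server container. A quick check, assuming a pod named cr-name-primary-0:

$ kubectl exec cr-name-primary-0 -c server -- ls /certs/aws-cert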

External client connections

Each subcluster communicates with external clients and internal pods through a service object. To configure the service object to accept external client connections, set the following parameters:

  • serviceName: Assigns a custom name to the service object. A custom name lets you identify it among multiple subclusters.

    Service object names use the metadata.name-serviceName naming convention.

  • serviceType: Defines the type of the subcluster service object.

    By default, a subcluster uses the ClusterIP serviceType, which sets a stable IP and port that is accessible from within Kubernetes only. In many circumstances, external client applications need to connect to a subcluster that is fine-tuned for that specific workload. For external client access, set the serviceType to NodePort or LoadBalancer.

  • serviceAnnotations: Assigns a custom annotation to the service object for implementation-specific services.

Add these external client connection parameters under the subclusters field:

spec:
  ...
  subclusters:
    ...
    serviceName: connections
    serviceType: LoadBalancer
    serviceAnnotations:
      service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24

This example creates a LoadBalancer service object named cr-name-connections. The serviceAnnotations parameter defines the CIDRs that can access the network load balancer (NLB). For additional details, see the AWS Load Balancer Controller documentation.
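
After the service object is created, you can look up its external address and connect with a client. A sketch, assuming the cr-name CR from the minimal manifest and a vsql client installed on your workstation:

$ kubectl get svc cr-name-connections
$ vsql -h load-balancer-hostname -U dbadmin -w 'password-value'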

For additional details about the database and service objects, see Database container deployment on Kubernetes.

Authenticate clients

You might need to connect applications or command-line interface (CLI) tools to your VerticaDB CR. You can add TLS certificates that authenticate client requests with the certSecrets parameter:

  1. Create a Secret that contains your TLS certificates. The following command creates a Secret named mtls:
    $ kubectl create secret generic mtls --from-file=mtls=/path/to/mtls-cert
    
  2. Add the Secret to the certSecrets parameter:
    spec:
      ...
      certSecrets:
        ...
        - name: mtls
    
    This mounts the TLS certificates in the /certs/mtls/mtls-cert directory.

Sidecar logger

A sidecar is a utility container that runs in the same pod as your main application container and performs a task for that main application's process. The VerticaDB CR uses a sidecar container to handle logs for the database server container. You can use the vertica-logger image to add a sidecar that sends logs from vertica.log to standard output on the host node for log aggregation.

Add a sidecar with the sidecars parameter. This parameter accepts a list of sidecar definitions, where each element specifies the following:

  • name: Name of the sidecar. name indicates the beginning of a sidecar element.
  • image: Image for the sidecar container.

The following example adds a single sidecar container that shares a pod with each database server container:

spec:
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest

This configuration persists logs only for the lifecycle of the container. To persist log data between pod lifecycles, you must mount a custom volume in the sidecar filesystem.

Filter logs by severity level

The sidecar container enables log filtering based on severity levels (DEBUG, INFO, WARNING, and ERROR).

You can set the following environment variables:

  • LOG_LEVEL: Minimum severity level for log messages to be printed.
  • LOG_FILTER: Comma-separated list of log severity levels to be printed, for example, INFO,ERROR. Only logs matching the listed levels will be printed.

The following prints logs with severity INFO and higher (INFO, WARNING, and ERROR):

spec:
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
      env:
        - name: LOG_LEVEL
          value: "INFO"

The following prints only logs with severity levels INFO and ERROR:

spec:
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
      env:
        - name: LOG_FILTER
          value: "INFO,ERROR"

Persist logs with a volume

An external service that requires long-term access to the database server data should use a volume to persist that data between pod lifecycles. For details about volumes, see the Kubernetes documentation.

The following parameters add a volume to your CR and mount it in a sidecar container:

  • volumes: Makes a custom volume available to the CR so that you can mount it in a container filesystem. This parameter requires a name value and a volume type.
  • sidecars[i].volumeMounts: Mounts one or more volumes in the sidecar container filesystem. This parameter requires a name value and a mountPath value that defines where the volume is mounted in the sidecar container.

The following example creates a volume of type emptyDir, and mounts it in the sidecar-container filesystem:

spec:
  ...
  volumes:
    - name: sidecar-vol
      emptyDir: {}
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
      volumeMounts:
        - name: sidecar-vol
          mountPath: /path/to/sidecar-vol
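
An emptyDir volume is deleted along with the pod. To keep log data across pod restarts, you can back the same mount with a PersistentVolumeClaim instead. A sketch, assuming a claim named sidecar-logs-pvc already exists in the namespace:

spec:
  ...
  volumes:
    - name: sidecar-vol
      persistentVolumeClaim:
        claimName: sidecar-logs-pvc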

Resource limits and requests

You should limit the amount of CPU and memory resources that each host node allocates for the database server pod, and set the amount of resources each pod can request.

To control these values, set the following parameters under the subclusters.resources field:

  • limits.cpu: Maximum number of CPUs that each server pod can consume.
  • limits.memory: Maximum amount of memory that each server pod can consume.
  • requests.cpu: Number of CPUs that each pod requests from the host node.
  • requests.memory: Amount of memory that each pod requests from the host node.

When you change resource settings, Kubernetes restarts each pod with the updated settings.

As a best practice, set resource.limits.* and resource.requests.* to equal values so that the pods are assigned to the Guaranteed Quality of Service (QoS) class. Equal settings also provide the best safeguard against the Out Of Memory (OOM) Killer in constrained environments.

The following example allocates 32 CPUs and 96 gigabytes of memory on the host node, and limits the requests to the same values. Because the limits.* and requests.* values are equal, the pods are assigned the Guaranteed QoS class:

spec:
  ...
  subclusters:
    ...
    resources:
      limits:
        cpu: 32
        memory: 96Gi
      requests:
        cpu: 32
        memory: 96Gi

Node affinity

Kubernetes affinity and anti-affinity settings control which resources the operator uses to schedule pods. As a best practice, you should set affinity to ensure that a single node does not serve more than one database pod.

The following example creates an anti-affinity rule that schedules only one database server pod per node:

spec:
  ...
  subclusters:
    ...
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - vertica
          topologyKey: "kubernetes.io/hostname"

The following provides a detailed explanation about all settings in the previous example:

  • affinity: Provides control over pod and host scheduling using labels.
  • podAntiAffinity: Uses pod labels to prevent scheduling on certain resources.
  • requiredDuringSchedulingIgnoredDuringExecution: The rules defined under this statement must be met before a pod is scheduled on a host node.
  • labelSelector: Identifies the pods affected by this affinity rule.
  • matchExpressions: A list of pod selector requirements that consists of a key, operator, and values definition. This matchExpression rule checks if the host node is running another pod that uses a vertica label.
  • topologyKey: Defines the scope of the rule. Because this uses the hostname topology label, this applies the rule in terms of pods and host nodes.

For additional details, see the Kubernetes documentation.

2 - EventTrigger custom resource definition

The EventTrigger custom resource definition (CRD) runs a task when the condition of a Kubernetes object changes to a specified status. EventTrigger extends the Kubernetes Job, a workload resource that creates pods, runs a task, then cleans up the pods after the task completes.

Prerequisites

  • Deploy a VerticaDB operator.
  • Confirm that you have the resources to deploy objects you plan to create.

Limitations

The EventTrigger CRD has the following limitations:

  • It can monitor a condition status on only one VerticaDB custom resource (CR).
  • You can match only one condition status.
  • The EventTrigger and the object that it watches must exist within the same namespace.

Creating an EventTrigger

An EventTrigger resource defines the Kubernetes object that you want to watch, the status condition that triggers the Job, and a pod template that contains the Job logic and provides resources to complete the Job.

This example creates a YAML-formatted file named eventtrigger.yaml. When you apply eventtrigger.yaml to your VerticaDB CR, it creates a single-column database table when the VerticaDB CR's DBInitialized condition status changes to True:

$ kubectl describe vdb verticadb-name
Status:
 ...
  Conditions:
    ...
    Last Transition Time:   transition-time
    Status:                 True 
    Type:                   DBInitialized

The following fields form the spec, which defines the desired state of the EventTrigger object:

  • references: The Kubernetes object whose condition status you want to watch.
  • matches: The condition and status that trigger the Job.
  • template: Specification for the pods that run the Job after the condition status triggers an event.

The following steps create an EventTrigger CR:

  1. Add the apiVersion, kind, and metadata.name required fields:

    apiVersion: vertica.com/v1beta1
    kind: EventTrigger
    metadata:
        name: eventtrigger-example
    
  2. Begin the spec definition with the references field. The object field is an array whose values identify the VerticaDB CR object that you want to watch. You must provide the VerticaDB CR's apiVersion, kind, and name:

    spec:
      references:
      - object:
          apiVersion: vertica.com/v1beta1
          kind: VerticaDB
          name: verticadb-example
    
  3. Define the matches field that triggers the Job. EventTrigger can match only one condition:

    spec:
      ...
      matches:
      - condition:
          type: DBInitialized
          status: "True"
    

    The preceding example defines the following:

    • condition.type: The condition that the operator watches for state change.
    • condition.status: The status that triggers the Job.
  4. Add the template that defines the pod specifications that run the Job after matches.condition triggers an event.

    A pod template requires its own spec definition, and it can optionally have its own metadata. The following example includes metadata.generateName, which instructs the operator to generate a unique, random name for any pods that it creates for the Job. The trailing dash (-) separates the user-provided portion from the generated portion:

    spec:
      ...
      template:
        metadata:
          generateName: create-user-table-
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: main
                image: "vertica/vertica-k8s:latest"
                command: ["/opt/vertica/bin/vsql", "-h", "verticadb-sample-defaultsubcluster", "-f", "CREATE TABLE T1 (C1 INT);"]
    

    The remainder of the spec defines the following:

    • restartPolicy: When to restart all containers in the pod.
    • containers: The containers that run the Job.
      • name: The name of the container.
      • image: The image that the container runs.
      • command: An array whose elements combine to form the command that the container runs. The final element creates the single-column SQL table.

Apply the manifest

After you create the EventTrigger, apply the manifest in the same namespace as the VerticaDB CR:

$ kubectl apply -f eventtrigger.yaml

eventtrigger.vertica.com/eventtrigger-example created
configmap/create-user-table-sql created

After you create the database, the operator runs a Job that creates a table. You can check the status with kubectl get job:

$ kubectl get job
NAME                COMPLETIONS   DURATION   AGE
create-user-table   1/1           4s         7s

Verify that the table was created in the logs:

$ kubectl logs create-user-table-guid
CREATE TABLE

Complete file reference

apiVersion: vertica.com/v1beta1
kind: EventTrigger
metadata:
    name: eventtrigger-example
spec:
  references:
  - object:
      apiVersion: vertica.com/v1beta1
      kind: VerticaDB
      name: verticadb-example
  matches:
  - condition:
      type: DBInitialized
      status: "True"
  template:
    metadata:
      generateName: create-user-table-
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: "vertica/vertica-k8s:latest"
            command: ["/opt/vertica/bin/vsql", "-h", "verticadb-sample-defaultsubcluster", "-f", "CREATE TABLE T1 (C1 INT);"]

Monitoring an EventTrigger

The following status fields help you monitor an EventTrigger CR:

  • references[].apiVersion: Kubernetes API version of the object that the EventTrigger CR watches.
  • references[].kind: Type of object that the EventTrigger CR watches.
  • references[].name: Name of the object that the EventTrigger CR watches.
  • references[].namespace: Namespace of the object that the EventTrigger CR watches. The EventTrigger and the object that it watches must exist within the same namespace.
  • references[].uid: Generated UID of the reference object. The operator generates this identifier when it locates the reference object.
  • references[].resourceVersion: Current resource version of the object that the EventTrigger watches.
  • references[].jobNamespace: If a Job was created for the object that the EventTrigger watches, the namespace of the Job.
  • references[].jobName: If a Job was created for the object that the EventTrigger watches, the name of the Job.
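
To view these status fields for a deployed CR, describe the EventTrigger object:

$ kubectl describe eventtrigger eventtrigger-example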

3 - VerticaAutoscaler custom resource definition

The VerticaAutoscaler custom resource (CR) automatically scales resources for existing subclusters using one of the following strategies:

  • Subcluster scaling for short-running dashboard queries
  • Pod scaling for long-running analytic queries

The VerticaAutoscaler CR manages scaling for VerticaDB instances using resource metrics or custom Prometheus metrics. OpenText™ Analytics Database manages subclusters by workload, which helps you pinpoint the best metrics to trigger a scaling event. To maintain data integrity, the operator does not scale in until all connections to the pods are drained and sessions are closed.

Additionally, the VerticaAutoscaler provides a webhook to validate state changes. By default, this webhook is enabled. You can configure this webhook with the webhook.enable Helm chart parameter.
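
For example, you can disable the webhook when you install or upgrade the operator with Helm. A sketch, assuming a release named vdb-op and the chart reference that you used when you installed the operator:

$ helm upgrade vdb-op vertica-charts/verticadb-operator --set webhook.enable=false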

Autoscalers

An autoscaler is a Kubernetes object that dynamically adjusts resource allocation based on metrics. The VerticaAutoscaler CR utilizes two types of autoscalers:

  • Horizontal Pod Autoscaler (HPA) - a native Kubernetes object
  • Scaled Object - a custom resource (CR) owned and managed by the Kubernetes Event-Driven Autoscaling (KEDA) operator.

Horizontal Pod Autoscaler (HPA) vs Kubernetes Event-Driven Autoscaling (KEDA) and ScaledObject

In Kubernetes, both the Horizontal Pod Autoscaler (HPA) and Kubernetes Event-Driven Autoscaling (KEDA)'s ScaledObject enable automatic pod scaling based on specific metrics. However, they differ in their operation and the types of metrics they utilize for scaling.

Horizontal Pod Autoscaler

HPA is a built-in Kubernetes resource that automatically scales the number of pods in a deployment based on observed CPU utilization or custom metrics, such as memory usage or other application-specific metrics.

Key features:

  • Metrics: HPA primarily scales based on CPU utilization, memory usage, or custom metrics sourced from the metrics-server or Prometheus adapter.
  • Scaling Trigger: HPA monitors the metric values (for example, CPU utilization) and compares them to a defined target (typically a percentage, such as 50% CPU utilization). If the actual value exceeds the target, it scales up the number of pods; if it falls below the target, it scales down accordingly.
  • Limitations: HPA is effective for resource-based scaling, but may face challenges when scaling based on event-driven triggers (such as message queues or incoming requests) unless combined with custom metrics.

For more information about the algorithm that determines when the HPA scales, see the Kubernetes documentation.
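
In short, the HPA derives the desired replica count from the ratio of the current metric value to the target:

desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]

For example, with the session metric used later in this section, if three pods report an average of 75 active sessions against a target of 50, the HPA requests ceil(3 * 75 / 50) = 5 replicas.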

KEDA's ScaledObject

Kubernetes Event-Driven Autoscaler (KEDA) is designed to scale workloads based on event-driven metrics.

Key features:

  • Integrates with external event sources (like Prometheus).
  • Supports scaling to zero, ensuring no pods run when there is no demand.
  • Utilizes KEDA’s ScaledObject Custom Resource Definition (CRD).
  • Works with HPA internally, with KEDA managing the HPA configuration.

Following is a feature comparison between HPA and KEDA ScaledObject:

  • Scaling trigger: HPA scales on CPU, memory, and custom metrics. KEDA's ScaledObject also scales on external events, such as queues and databases.
  • Use of HPA: HPA is a native Kubernetes feature. KEDA uses HPA internally.
  • Complexity: HPA is simple to configure. KEDA requires a KEDA installation.
  • Flexibility: HPA focuses on resource-based scaling. KEDA provides greater flexibility by integrating with multiple external event sources.
  • Response time: HPA responses are delayed because they depend on the Metrics API. KEDA responds faster through direct event triggers.

Examples

The examples in this section use the following VerticaDB custom resource. Each example uses the number of active sessions to trigger scaling:

apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: v-test
spec:
  communal:
    path: "path/to/communal-storage"
    endpoint: "path/to/communal-endpoint"
    credentialSecret: credentials-secret
  licenseSecret: license
  subclusters:
    - name: pri1
      size: 3
      type: primary
      serviceName: primary1
      resources:
        limits:
          cpu: "8"
        requests:
          cpu: "4"
    - name: sec1
      size: 3
      type: secondary
      serviceName: secondary1
      resources:
        limits:
          cpu: "8"
        requests:
          cpu: "4"

Prerequisites

For HPA autoscaler:

  • Configure Metrics server for basic metrics along with a custom metrics provider such as the Prometheus Adapter.
  • Install Prometheus in the Kubernetes cluster to scrape metrics from your database instance.
  • Install Prometheus Adapter to make custom metrics available to HPA. This adapter exposes the custom metrics scraped by Prometheus to the Kubernetes API server (only necessary if scaling based on Prometheus metrics).
  • Configure Custom Metrics API (Prometheus Adapter) to expose custom Prometheus metrics.
  • Configure the database to expose Prometheus-compatible metrics.
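
After the Prometheus Adapter is installed and configured, you can confirm that the custom metrics API is registered and that your metric is exposed. A quick check, assuming the adapter rule shown later in this section:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"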

For KEDA's ScaledObject:

  • Install KEDA v2.15.0 in your cluster. KEDA is responsible for scaling workloads based on external metrics such as custom Prometheus metrics.
  • KEDA 2.15 is compatible with Kubernetes versions 1.28 to 1.30. While it might work on earlier Kubernetes versions, it is outside the supported range and could result in unexpected issues.
  • Install Prometheus in the Kubernetes cluster to scrape metrics from your database instance.
  • Configure the database to expose Prometheus-compatible metrics.
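
Before you create the VerticaAutoscaler, you can confirm that the KEDA operator is running. A quick check, assuming KEDA was installed into the keda namespace:

$ kubectl get pods -n keda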

Subcluster scaling

Automatically adjust the number of subclusters in your custom resource to fine-tune resources for short-running dashboard queries. For example, increase the number of subclusters to increase throughput. For more information, see Improving query throughput using subclusters.

All subclusters share the same service object, so there are no required changes to external service objects. Pods in the new subcluster are load balanced by the existing service object.

The following example creates a VerticaAutoscaler custom resource that scales by subcluster when the average number of active sessions is 50:

Horizontal Pod Autoscaler

  1. Install the Prometheus adapter and configure it to retrieve metrics from Prometheus:

    rules:
      default: false
      custom:
          # Total number of active sessions. Used for testing
        - metricsQuery: sum(vertica_sessions_running_counter{type="active", initiator="user"}) by (namespace, pod)
          resources:
            overrides:
              namespace:
                resource: namespace
              pod:
                resource: pod
          name:
            matches: "^(.*)_counter$"
            as: "${1}_total" # vertica_sessions_running_total
          seriesQuery: vertica_sessions_running_counter{namespace!="", pod!="", type="active", initiator="user"}
    
  2. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest and deploy it:

    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    metadata:
      name: v-scale
    spec:
      verticaDBName: v-test
      scalingGranularity: Subcluster
      serviceName: primary1
      customAutoscaler:
        type: HPA
        hpa:
          minReplicas: 3
          maxReplicas: 12
          metrics:
            - metric:
                type: Pods
                pods:
                  metric:
                    name: vertica_sessions_running_total
                  target:
                    type: AverageValue
                    averageValue: 50
    
  3. This creates a HorizontalPodAutoscaler object with the following configuration:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: v-scale-hpa
    spec:
      maxReplicas: 12
      metrics:
      - type: Pods
        pods:
          metric:
            name: vertica_sessions_running_total
          target:
            type: AverageValue
            averageValue: "50"
      minReplicas: 3
      scaleTargetRef:
        apiVersion: vertica.com/v1
        kind: VerticaAutoscaler
        name: v-scale
    
  • Sets the target average number of active sessions to 50.
  • Scales to a minimum of three pods in one subcluster and 12 pods in four subclusters.

ScaledObject

  1. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest and deploy it:

    apiVersion: vertica.com/v1
    kind: VerticaAutoscaler
    metadata:
      name: v-scale
    spec:
      verticaDBName: v-test
      serviceName: primary1
      scalingGranularity: Subcluster
      customAutoscaler:
        type: ScaledObject
        scaledObject:
          minReplicas: 3
          maxReplicas: 12
          metrics:
          - name: vertica_sessions_running_total
            metricType: AverageValue
            prometheus:
              serverAddress: "http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090"
              query: sum(vertica_sessions_running_counter{type="active", initiator="user", service="v-test-primary1"})
              threshold: 50
    
  2. This creates a ScaledObject object with the following configuration:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: v-scale-keda
    spec:
      maxReplicaCount: 12
      minReplicaCount: 3
      scaleTargetRef:
        apiVersion: vertica.com/v1
        kind: VerticaAutoscaler
        name: v-scale
      triggers:
      - metadata:
          query: sum(vertica_sessions_running_counter{type="active", initiator="user", service="v-test-primary1"})
          serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090
          threshold: "50"
        metricType: AverageValue
        name: vertica_sessions_running_total
        type: prometheus
    
  • Sets the target average number of active sessions to 50.
  • Scales to a minimum of three pods in one subcluster and 12 pods in four subclusters.

KEDA directly queries Prometheus using the provided address without relying on the pod selector. The scaling is determined by the result of the query, so ensure the query performs as expected.
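
You can run the query directly against the Prometheus HTTP API to confirm that it returns the value you expect. A sketch that uses the server address and query from the previous example and runs from a pod inside the cluster:

$ curl -G "http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090/api/v1/query" \
    --data-urlencode 'query=sum(vertica_sessions_running_counter{type="active", initiator="user", service="v-test-primary1"})'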

Pod scaling

For long-running, analytic queries, increase the pod count for a subcluster. For additional information about analytic queries, see Using elastic crunch scaling to improve query performance.

When you scale pods in an Eon Mode database, you must consider the impact on database shards. For details, see Namespaces and shards.

The following example creates a VerticaAutoscaler custom resource that scales by pod when the average number of active sessions is 50.

Horizontal Pod Autoscaler

  1. Install the Prometheus adapter and configure it to retrieve metrics from Prometheus:

    rules:
      default: false
      custom:
          # Total number of active sessions. Used for testing
        - metricsQuery: sum(vertica_sessions_running_counter{type="active", initiator="user"}) by (namespace, pod)
          resources:
            overrides:
              namespace:
                resource: namespace
              pod:
                resource: pod
          name:
            matches: "^(.*)_counter$"
            as: "${1}_total" # vertica_sessions_running_total
          seriesQuery: vertica_sessions_running_counter{namespace!="", pod!="", type="active", initiator="user"}
    
  2. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest and deploy it:

    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    metadata:
      name: v-scale
    spec:
      verticaDBName: v-test
      scalingGranularity: Pod
      serviceName: primary1
      customAutoscaler:
        type: HPA
        hpa:
          minReplicas: 3
          maxReplicas: 12
          metrics:
            - metric:
                type: Pods
                pods:
                  metric:
                    name: vertica_sessions_running_total
                  target:
                    type: AverageValue
                    averageValue: 50 
    
  3. This creates a HorizontalPodAutoscaler object with the following configuration:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: v-scale-hpa
    spec:
      maxReplicas: 12
      metrics:
      - type: Pods
        pods:
          metric:
            name: vertica_sessions_running_total
          target:
            type: AverageValue
            averageValue: "50"
      minReplicas: 3
      scaleTargetRef:
        apiVersion: vertica.com/v1
        kind: VerticaAutoscaler
        name: v-scale 
    
    • Sets the target average number of active sessions to 50.
    • Scales the primary1 subcluster to a minimum of three pods and a maximum of 12 pods. If multiple subclusters are selected by the serviceName, the last one is scaled.

ScaledObject

  1. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest and deploy it:

    apiVersion: vertica.com/v1
    kind: VerticaAutoscaler
    metadata:
      name: v-scale
    spec:
      verticaDBName: v-test
      serviceName: primary1
      scalingGranularity: Pod
      customAutoscaler:
        type: ScaledObject
        scaledObject:
          minReplicas: 3
          maxReplicas: 12
          metrics:
          - name: vertica_sessions_running_total
            metricType: AverageValue
            prometheus:
              serverAddress: "http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090"
              query: sum(vertica_sessions_running_counter{type="active", initiator="user", service="v-test-primary1"})
              threshold: 50
    
  2. This creates a ScaledObject object with the following configuration:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: v-scale-keda
    spec:
      maxReplicaCount: 12
      minReplicaCount: 3
      scaleTargetRef:
        apiVersion: vertica.com/v1
        kind: VerticaAutoscaler
        name: v-scale
      triggers:
      - metadata:
          query: sum(vertica_sessions_running_counter{type="active", initiator="user", service="v-test-primary1"})
          serverAddress: http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local:9090
          threshold: "50"
        metricType: AverageValue
        name: vertica_sessions_running_total
        type: prometheus
    
    • Sets the target average number of active sessions to 50.
    • Scales the primary1 subcluster to a minimum of three pods and a maximum of 12 pods. If multiple subclusters are selected by the serviceName, the last one is scaled.

Event monitoring

Horizontal Pod Autoscaler

To view the Horizontal Pod Autoscaler object, use the kubectl describe hpa command:

Name:                                                  v-scale-hpa
Namespace:                                             vertica
Reference:                                             VerticaAutoscaler/vertica.com/v1 v-scale
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Tue, 12 Feb 2024 15:11:28 -0300
Metrics:                       ( current / target )
  "vertica_sessions_running_total" on pods:  5 / 50                                            
Min replicas:                                          3
Max replicas:                                          12
VerticaAutoscaler pods:                                3 current / 3 desired
Conditions:
  Type              Status  Reason                Message
  ----              ------  ------                -------
  AbleToScale       True    ReadyForNewScale      the HPA controller was able to calculate a new replica count
  ScalingActive     True    ValidMetricFound      the HPA was able to successfully calculate a replica count from pods metric vertica_sessions_running_total
  ScalingLimited    False   DesiredWithinRange    the desired replica count is within the acceptable range

ScaledObject

To view the ScaledObject, use the kubectl describe scaledobject command:

Name:         v-scale-keda
Namespace:    default
Labels:       <none>
Annotations:  <none>
CreationTimestamp:  <unknown>
Spec:
  Cooldown Period:   30
  Max Replica Count:  12
  Min Replica Count:  3
  Polling Interval:   30
  Scale Target Ref:
    API Version:  vertica.com/v1
    Kind:         VerticaAutoscaler
    Name:         v-scale
    Metadata:
      Auth Modes:           
      Query:                 sum(vertica_sessions_running_counter{type="active", initiator="user", service="v-test-primary1"})
      Server Address:        http://prometheus-tls-kube-promet-prometheus.prometheus-tls.svc:9090
      Threshold:             50
      Unsafe Ssl:            false
    Metric Type:             AverageValue
    Name:                    vertica_sessions_running_total
    Type:                    prometheus
    Use Cached Metrics:      false
Conditions:
  Type              Status  Reason                Message
  ----              ------  ------                -------
  Active            True    ScaledObjectActive    ScaledObject is active
  Ready             True    ScaledObjectReady     ScaledObject is ready
  Fallback          False   NoFallback            No fallback was triggered
Events:            <none>  

Viewing scaling events and autoscaler actions

When a scaling event occurs, you can view the newly created pods. Use kubectl to view the StatefulSets:

$ kubectl get statefulsets
NAME                                                   READY   AGE
v-test-v-scale-0                                        0/3     71s
v-test-primary1                                         3/3     39m
v-test-secondary

Use kubectl describe to view the executing commands:

$ kubectl describe vdb v-test | tail
  Upgrade Status:
Events:
  Type    Reason                   Age   From                Message
  ----    ------                   ----  ----                -------
  Normal  SubclusterAdded          10s   verticadb-operator  Added new subcluster 'v-scale-0'
  Normal  AddNodeStart             9s    verticadb-operator  Starting add database node for pod(s) 'v-test-v-scale-0-0, v-test-v-scale-0-1, v-test-v-scale-0-2'

4 - VerticaReplicator custom resource definition

The VerticaReplicator custom resource (CR) facilitates in-database replication through the OpenText™ Analytics Database Kubernetes operator. This feature allows you to create a VerticaReplicator CR to replicate databases for copying data, testing, or performing active online upgrade. It supports replication to and from sandbox environments. Additionally, both password-based authentication and source TLS authentication are supported.

The VerticaReplicator custom resource (CR) runs replicate on a VerticaDB CR, which copies table or schema data directly from one Eon Mode database's communal storage (source VerticaDB) to another (target VerticaDB).

Prerequisites

Create a VerticaReplicator CR

A VerticaReplicator CR spec requires only the names of the source and target VerticaDB CRs for the replication. The following example defines the CR as a YAML-formatted file named vreplicator-example.yaml:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vreplicator-example
spec:
  source:
    verticaDB: "vertica-src"
  target:
    verticaDB: "vertica-trg"

For a complete list of parameters that you can set for a VerticaReplicator CR, see Custom resource definition parameters.

Apply the manifest

After you create the VerticaReplicator CR, apply the manifest in the same namespace as the VerticaDB CRs specified in its source and target fields:

$ kubectl apply -f vreplicator-example.yaml
verticareplicator.vertica.com/vreplicator-example created

The operator starts the replication process and copies the table and schema data from the source VerticaDB to the target VerticaDB.

You can check the applied CRs as follows:

$ kubectl get vrep
NAME                  SOURCEVERTICADB   TARGETVERTICADB   STATE         AGE
vreplicator-example   vertica-src       vertica-trg       Replicating   2s

Replicating to a sandboxed subcluster

You can replicate from a source database to a sandboxed subcluster in the target database.

The following example defines the CR as a YAML-formatted file named vreplicator-trg-sandbox.yaml:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vreplicator-trg-sandbox
spec:
  mode: async
  source:
    verticaDB: "vertica-src"
  target:
    verticaDB: "vertica-trg"
    sandboxName: "sandbox1"

After you apply the manifest, the operator copies the table and schema data from the source VerticaDB to the sandboxed subcluster "sandbox1" on the target VerticaDB.

Partial replication

You can replicate specific namespaces, schemas, or tables in async mode only. The following examples show different ways to perform partial replication.

The following example replicates a specific table mytable from a source cluster to a target cluster:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vrep-async-single-obj
spec:
  mode: async
  source:
    verticaDB: "vertica-src"
    objectName: "mytable"
  target:
    verticaDB: "vertica-trg"

The following example replicates all tables in the public schema that start with the letter t:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vrep-async-pattern1
spec:
  mode: async
  source:
    verticaDB: "vertica-src"
    includePattern: "public.t*"    
  target:
    verticaDB: "vertica-trg"

The following example replicates all tables in the public schema except those that start with the string customer_:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vrep-async-pattern2
spec:
  mode: async 
  source:
    verticaDB: "vertica-src"
    includePattern: "public.*"
    excludePattern: "*.customer_*"
  target:
    verticaDB: "vertica-trg"

The following example replicates the flights table in the airline schema of the airport namespace to the airport2 namespace in the target cluster:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vrep-async-namespace
spec:
  mode: async
  source:
    verticaDB: "vertica-src"
    objectName: ".airport.airline.flights"

  target:
    verticaDB: "vertica-trg"
    namespace: "airport2"

The following example replicates all schemas and tables in the default_namespace of the source cluster to the default_namespace in the target cluster:

apiVersion: vertica.com/v1beta1
kind: VerticaReplicator
metadata:
  name: vrep-pattern3
spec:
  mode: async
  source:
    verticaDB: "vertica-src"
    includePattern: ".default_namespace.*.*"    
  target:
    verticaDB: "vertica-trg"

Replication status

You can check the replication status as follows:

$ kubectl describe vrep
Name:         vreplicator-example
Namespace:    vertica
Labels:       <none>
Annotations:  <none>
API Version:  vertica.com/v1beta1
Kind:         VerticaReplicator
Metadata:
  Creation Timestamp:  2024-07-24T12:34:51Z
  Generation:          1
  Resource Version:    19058685
  UID:                 be90db7f-3ed5-49c0-9d86-94f87d681806
Spec:
  Source:
    Vertica DB:  vertica-src
  Target:
    Vertica DB:  vertica-trg
Status:
  Conditions:
    Last Transition Time:  2024-07-24T12:34:51Z
    Message:
    Reason:                Ready
    Status:                True
    Type:                  ReplicationReady
    Last Transition Time:  2024-07-24T12:35:01Z
    Message:
    Reason:                Succeeded
    Status:                False
    Type:                  Replicating
    Last Transition Time:  2024-07-24T12:35:01Z
    Message:
    Reason:                Succeeded
    Status:                True
    Type:                  ReplicationComplete
  State:                   Replication successful
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  ReplicationStarted    4m3s   verticadb-operator  Starting replication
  Normal  ReplicationSucceeded  3m57s  verticadb-operator  Successfully replicated database in 5s

Conditions

The Conditions field summarizes each stage of the replication and contains the following fields:

  • Last Transition Time: Timestamp that indicates when the status condition last changed.
  • Message: This field is not in use; you can safely ignore it.
  • Reason: Indicates why the replication stage is in its current Status.
  • Status: Boolean, indicates whether the replication stage is currently in process.
  • Type: The replication that the VerticaDB operator is executing in this stage.

The following describes each Conditions.Type and the possible combinations of its Status and Reason field values:

  • ReplicationReady: The operator is ready to start the database replication.
    • Status is True with Reason Ready.
    • Status is False when the source database or sandbox runs a version earlier than 24.3.0, the target database or sandbox version is lower than the source database or sandbox version, or the source database is deployed using admintools.
  • Replicating: The operator is replicating the database.
    • Status is True with Reason Started.
    • Status is False with Reason Failed or Succeeded.
  • ReplicationComplete: The database replication is complete.
    • Status is True with Reason Succeeded.

5 - VerticaRestorePointsQuery custom resource definition

The VerticaRestorePointsQuery custom resource (CR) retrieves details about saved restore points that you can use to roll back your database to a previous state or restore specific objects in a VerticaDB CR.

A VerticaRestorePointsQuery CR defines query parameters that the VerticaDB operator uses to retrieve restore points from an archive. A restore point is a snapshot of a database at a specific point in time that can consist of an entire database or a subset of database objects. Each restore point has a unique identifier and a timestamp. An archive is a collection of chronologically organized restore points.

You specify the archive and an optional period of time, and the operator queries the archive and retrieves details about restore points saved in the archive. You can use the query results to revive a VerticaDB CR with the data saved in the restore point.
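
For example, after you identify a restore point, one way to use it is to revive a new VerticaDB CR from that restore point. The following is only a sketch: it assumes that your operator version supports reviving from a restore point and that spec.restorePoint accepts the archive name and the restore point ID returned by the query; check the custom resource parameters for your release before using it:

apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: vertica-db-revived
spec:
  initPolicy: Revive
  communal:
    path: "s3://bucket-name/key-name"
  restorePoint:
    archive: nightly
    id: restore-point-id   # assumed field; value comes from the VerticaRestorePointsQuery results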

Prerequisites

Save restore points

You can save a restore point using the VerticaDB operator in Kubernetes or by using vsql in OpenText™ Analytics Database.

Save a restore point using the VerticaDB operator

  1. Use kubectl edit to open your default text editor and update the yaml file for the specified custom resource. The following command opens a custom resource named vdb for editing:

    $ kubectl edit vdb
    
  2. In the spec section of the custom resource, add an entry for the archive name. The VerticaDB operator creates the archive and uses spec.restorePoint.archive as the archive name when it saves the restore point.

    apiVersion: vertica.com/v1
    kind: VerticaDB
    metadata:
      name: vertica-db
    spec:
    ...
      restorePoint:
        archive: demo_archive
    
  3. To save a restore point, edit the status condition as follows:

    $ kubectl edit --subresource=status vdb/vertica-db
    
  4. Add the following condition to initiate the save restore point operation:

    apiVersion: vertica.com/v1
    kind: VerticaDB
    metadata:
      name: vertica-db
    spec:
    ...
    status:
      ...
      conditions:
      - lastTransitionTime: "2024-10-01T17:27:27Z"
        message: ""
        reason: Init
        status: "True"
        type: SaveRestorePointNeeded
    
  5. You can check the status of the restore point as follows:

    $ kubectl describe vdb
    Name:         vertica-db
    ...
    Events:
     Type    Reason                     Age   From                Message
     ----    ------                     ----  ----                -------
     Normal  CreateArchiveStart         54s   verticadb-operator  Starting create archive
     Normal  CreateArchiveSucceeded     54s   verticadb-operator  Successfully create archive. It took 0s
     Normal  SaveRestorePointStart      54s   verticadb-operator  Starting save restore point
     Normal  SaveRestorePointSucceeded  33s   verticadb-operator  Successfully save restore point to archive: demo_archive. It took 20s
    
  6. You can get the new archive name, start timestamp, and end timestamp from vdb status.

    To retrieve details about the most recently created restore point, use these values (archive, startTimestamp, and endTimestamp) as filter options in a VerticaRestorePointsQuery CR. See Create a VerticaRestorePointsQuery.

    $ kubectl describe vdb
    ...
    Status:
     ...
     Restore Point:
       Archive:          demo_archive
       End Timestamp:    2024-10-09 12:25:28.956094972
       Start Timestamp:  2024-10-09 12:25:19.029997424
    

Save a restore point using vsql in OpenText™ Analytics Database

Before the VerticaDB operator can retrieve restore points, you must create an archive and save restore points to that archive. You can leverage stored procedures and scheduled execution to save restore points to an archive on a regular schedule. In the following sections, you schedule a stored procedure to save restore points to an archive every night at 9:00 PM.

Create the archive and schedule restore points

Create an archive and then create a stored procedure that saves a restore point to that archive:

  1. Create an archive with CREATE ARCHIVE. The following statement creates an archive named nightly because it will store restore points that are saved every night:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
    -w password \
    -c "CREATE ARCHIVE nightly;"
    CREATE ARCHIVE
    
  2. Create a stored procedure that saves a restore point. The SAVE RESTORE POINT TO ARCHIVE statement creates a restore point and saves it to the nightly archive:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
      -w password \
      -c "CREATE OR REPLACE PROCEDURE take_nightly()
      LANGUAGE PLvSQL AS \$\$
      BEGIN
        EXECUTE 'SAVE RESTORE POINT TO ARCHIVE nightly';
      END;
      \$\$;"
    CREATE PROCEDURE
    
  3. To test the stored procedure, execute it with the CALL statement:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
    -w password \
    -c "CALL take_nightly();"
     take_nightly
    --------------
                0
    (1 row)
    
  4. To verify that the stored procedure saved the restore point, query the ARCHIVE_RESTORE_POINTS system table to return the number of restore points in the specified archive:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
    -w password \
    -c "SELECT COUNT(*) FROM ARCHIVE_RESTORE_POINTS
    WHERE ARCHIVE = 'nightly';"
     COUNT
    -------
         1
    (1 row)
    

Schedule the stored procedure

Schedule the stored procedure so that it saves a restore point to the nightly archive each night:

  1. Schedule a time to execute the stored procedure with CREATE SCHEDULE. This statement uses a cron expression to create a schedule that runs at 9:00 PM each night:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
    -w password \
    -c "CREATE SCHEDULE nightly_sched USING CRON '0 21 * * *';"
    CREATE SCHEDULE
    
  2. Use CREATE TRIGGER to execute the take_nightly stored procedure on the nightly_sched schedule:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
    -w password \
    -c "CREATE TRIGGER trigger_nightly_sched ON SCHEDULE nightly_sched
    EXECUTE PROCEDURE take_nightly() AS DEFINER;"
    CREATE TRIGGER
    

Verify the archive automation

After you create the stored procedure and configure its schedule, test that it executes and saves a restore point at the scheduled time:

  1. Before the cron job is scheduled to run, verify the system time with the date command:
    $ date -u
    Thu Feb 29 20:59:15 UTC 2024
    
  2. Wait until the scheduled time elapses:
    $ date -u
    Thu Feb 29 21:00:07 UTC 2024
    
  3. To verify that the scheduled stored procedure executed on time, query the ARCHIVE_RESTORE_POINTS system table for details about the nightly archive:
    $ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
    -w password \
    -c "SELECT COUNT(*) FROM ARCHIVE_RESTORE_POINTS WHERE ARCHIVE = 'nightly';"
     COUNT
    -------
         2
    (1 row)
    
    COUNT is incremented by one, so the stored procedure saved the restore point on schedule.
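
To inspect the individual restore points rather than just the count, you can also select all columns from the same system table. This is an optional check; the output is omitted here because the exact column set depends on your database version:

$ kubectl exec -it restorepoints-primary1-0 -c server -- vsql \
-w password \
-c "SELECT * FROM ARCHIVE_RESTORE_POINTS WHERE ARCHIVE = 'nightly';"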

Create a VerticaRestorePointsQuery

A VerticaRestorePointsQuery manifest specifies an archive and an optional time duration. The VerticaDB operator uses this information to retrieve details about the restore points that were saved to the archive.

Create and apply the manifest

The following manifest defines a VerticaRestorePointsQuery CR named vrpq. The vrpq CR instructs the operator to retrieve from the nightly archive all restore points saved on February 29, 2024:

  1. Create a file named vrpq.yaml that contains the following manifest:

    apiVersion: vertica.com/v1beta1
    kind: VerticaRestorePointsQuery
    metadata:
      name: vrpq
    spec:
      verticaDBName: restorepoints
      filterOptions:
        archiveName: "nightly"
        startTimestamp: 2024-02-29
        endTimestamp: 2024-02-29
    

    The spec contains the following fields:

    • verticaDBName: Name of the VerticaDB CR that you want to retrieve restore points for.
    • filterOptions.archiveName: Archive that contains the restore points that you want to retrieve.
    • filterOptions.startTimestamp: Retrieve restore points that were saved on or after this date.
    • filterOptions.endTimestamp: Retrieve restore points that were saved on or before this date.

    For additional details about these parameters, see Custom resource definition parameters.

  2. Apply the manifest in the current namespace with kubectl:

    $ kubectl apply -f vrpq.yaml
    verticarestorepointsquery.vertica.com/vrpq created
    

    After you apply the manifest, the operator begins working to retrieve the restore points.

  3. Verify that the query succeeded with kubectl:

    $ kubectl get vrpq
    NAME   VERTICADB       STATE              AGE
    vrpq   restorepoints   Query successful   10s
    

View retrieved restore points

After you apply the VerticaRestorePointsQuery CR, you can view the retrieved restore points with kubectl describe. kubectl describe returns a Status section, which describes the query activity and properties for each retrieved restore point:

$ kubectl describe vrpq
Name:         vrpq
...
Status:
  Conditions:
    Last Transition Time:  2024-03-15T17:40:39Z
    Message:
    Reason:                Completed
    Status:                True
    Type:                  QueryReady
    Last Transition Time:  2024-03-15T17:40:41Z
    Message:
    Reason:                Completed
    Status:                False
    Type:                  Querying
    Last Transition Time:  2024-03-15T17:40:41Z
    Message:
    Reason:                Completed
    Status:                True
    Type:                  QueryComplete
  Restore Points:
    Archive:          nightly
    Id:               af8cd407-246a-4500-bc69-0b534e998cc6
    Index:            1
    Timestamp:        2024-02-29 21:00:00.728787
    vertica_version:  version
  State:              Query successful
...

The Status section contains relevant restore point details in the Conditions and Restore Points fields.

Conditions

The Conditions field summarizes each stage of the restore points query and contains the following fields:

  • Last Transition Time: Timestamp that indicates when the status condition last changed.
  • Message: This field is not in use; you can safely ignore it.
  • Reason: Indicates why the query stage is in its current Status.
  • Status: Boolean that indicates whether the query stage is currently in progress.
  • Type: The query that the VerticaDB operator is executing in this stage.

The following list describes each Conditions.Type and all possible combinations of its Status and Reason field values:

  • QueryReady: The operator verified that the query is executable in the environment.
      • Status: True, Reason: Completed
      • Status: False, Reason: IncompatibleDB. The CR specified by verticaDBName is not version 24.2 or later.
      • Status: False, Reason: AdmintoolsNotSupported. The CR specified by verticaDBName does not use apiVersion v1. For details, see VerticaDB custom resource definition.
  • Querying: The operator is running the query.
      • Status: True, Reason: Started
      • Status: False, Reason: Failed or Completed
  • QueryComplete: The query is complete and the restore points are available in the Restore Points array.
      • Status: True, Reason: Completed
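
If you script against these conditions, you can wait for the query to finish instead of polling kubectl get. This is a minimal sketch that assumes the condition types listed above can be consumed by kubectl wait and that the CR is named vrpq, as in the earlier example:

$ kubectl wait --for=condition=QueryComplete vrpq/vrpq --timeout=120s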

Restore Points

The Restore Points field lists each restore point that was retrieved from the archive and contains the following fields:

  • Archive: The archive that contains this restore point.
  • Id: Unique identifier for the restore point.
  • Index: The restore point's rank in the archive, ordered by descending timestamp. Index 1 is the most recent restore point.
  • Timestamp: Time that indicates when the restore point was created.
  • vertica_version: Database version when this restore point was saved to the archive.
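
To pass a retrieved restore point to other tooling, you can extract individual fields from the CR status with a JSONPath query. This is a minimal sketch; the lowercase field names (restorePoints, id) are assumptions inferred from the kubectl describe output above, so confirm them with kubectl get vrpq vrpq -o yaml first:

$ kubectl get vrpq vrpq -o jsonpath='{.status.restorePoints[0].id}{"\n"}'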

Restore the database

After the operator retrieves the restore points, you can restore the database with the archive name and either the restore point Index or Id. In addition, you must set initPolicy to Revive:

  1. Delete the existing CR:
    $ kubectl delete -f restorepoints.yaml
    verticadb.vertica.com "restorepoints" deleted
    
  2. Update the CR. Change the initPolicy to Revive, and add the restore point information. You might also need to set the vertica.com/ignore-cluster-lease annotation to true:
    apiVersion: vertica.com/v1
    kind: VerticaDB
    metadata:
      name: restorepoints
      annotations:
        vertica.com/ignore-cluster-lease: "true"
    spec:
      initPolicy: Revive
      restorePoint:
        archive: "nightly"
        index: 1
      ...
    
  3. Apply the updated manifest:
    $ kubectl apply -f restorepoints.yaml
    verticadb.vertica.com/restorepoints created
    
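Alternatively, you can identify the restore point by its Id instead of its Index. The following manifest is a sketch that assumes the restorePoint parameter accepts an id field containing the Id value returned by the VerticaRestorePointsQuery; see Custom resource definition parameters to confirm the field name:

apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: restorepoints
  annotations:
    vertica.com/ignore-cluster-lease: "true"
spec:
  initPolicy: Revive
  restorePoint:
    archive: "nightly"
    id: "af8cd407-246a-4500-bc69-0b534e998cc6" # assumed field: accepts the Id value from the restore points query
  ...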

6 - VerticaScrutinize custom resource definition

The VerticaScrutinize custom resource (CR) runs scrutinize on a VerticaDB CR, which collects diagnostic information about the VerticaDB cluster and packages it in a tar file. This diagnostic information is commonly requested when resolving a case with OpenText™ Analytics Database Support.

When you create a VerticaScrutinize CR in your cluster, the VerticaDB operator creates a short-lived pod and runs scrutinize in two stages:

  1. An init container runs scrutinize on the VerticaDB CR. This produces a tar file named VerticaScrutinize.timestamp.tar that contains the diagnostic information. Optionally, you can define one or more init containers that perform additional processing after scrutinize completes.
  2. A main container persists the tar file in its file system in the /tmp/scrutinize/ directory. This main container lives for 30 minutes.

When resolving a support case, the support team might request that you upload the tar file to a secure location, such as vAdvisor Report.

Prerequisites

Create a VerticaScrutinize CR

A VerticaScrutinize CR spec requires only the name of the VerticaDB CR for which you want to collect diagnostic information. The following example defines the CR as a YAML-formatted file named vscrutinize-example.yaml:

apiVersion: vertica.com/v1beta1
kind: VerticaScrutinize
metadata:
  name: vscrutinize-example
spec:
  verticaDBName: verticadb-name

For a complete list of parameters that you can set for a VerticaScrutinize CR, see Custom resource definition parameters.

Apply the manifest

After you create the VerticaScrutinize CR, apply the manifest in the same namespace as the CR specified by verticaDBName:

$ kubectl apply -f vscrutinize-example.yaml
verticascrutinize.vertica.com/vscrutinize-example created

The operator creates an init container that runs scrutinize:

$ kubectl get pods
NAME                                          READY   STATUS     RESTARTS   AGE
...
verticadb-operator-manager-68b7d45854-22c8p   1/1     Running    0          3d17h
vscrutinize-example                           0/1     Init:0/1   0          14s

After the init container completes, a new container is created, and the tar file is stored in its file system at /tmp/scrutinize. This container persists for 30 minutes:

$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
...
verticadb-operator-manager-68b7d45854-22c8p   1/1     Running   0          3d20h
vscrutinize-example                           1/1     Running   0          21s
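
To retrieve the tar file before the 30-minute window elapses, you can list the directory and copy the file to your local machine with kubectl cp. This is a minimal sketch; the file name shown is illustrative because the actual name includes the collection timestamp:

$ kubectl exec vscrutinize-example -- ls /tmp/scrutinize
VerticaScrutinize.20240315173040.tar

$ kubectl cp vscrutinize-example:/tmp/scrutinize/VerticaScrutinize.20240315173040.tar \
    VerticaScrutinize.20240315173040.tar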

Add init containers

When you apply a VerticaScrutinize CR, the VerticaDB operator creates an init container that prepares and runs the scrutinize command. You can add one or more init containers to perform additional steps after scrutinize creates a tar file and before the tar file is saved in the main container.

For example, you can define an init container that sends the tar file to another location, such as an S3 bucket. The following manifest defines an initContainer field that uploads the scrutinize tar file to an S3 bucket:

apiVersion: vertica.com/v1beta1
kind: VerticaScrutinize
metadata:
  name: vscrutinize-example-copy-to-s3
spec:
  verticaDBName: verticadb-name
  initContainers:
    - command:
        - bash
        - '-c'
        - 'aws s3 cp $(SCRUTINIZE_TARBALL) s3://k8test/scrutinize/'
      env:
        - name: AWS_REGION
          value: us-east-1
      image: 'amazon/aws-cli:2.2.24'
      name: copy-tarfile-to-s3
      securityContext:
        privileged: true

In the previous example, initContainers.command executes a command that accesses the SCRUTINIZE_TARBALL environment variable. The operator sets this environment variable in the scrutinize pod, and it defines the location of the tar file in the main container.
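
To confirm that the upload step ran, you can check the logs of the added init container after the pod starts. This sketch assumes that the pod takes the name of the CR, as in the earlier example, and that the container name matches the name field in the manifest above:

$ kubectl logs vscrutinize-example-copy-to-s3 -c copy-tarfile-to-s3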