Containerized Vertica

Vertica Eon Mode leverages container technology to meet the needs of modern application development and operations workflows that must deliver software quickly and efficiently across a variety of infrastructures. Containerized Vertica supports Kubernetes with automation tools to help maintain the state of your environment with minimal disruptions and manual intervention.

Containerized Vertica provides the following benefits:

  • Performance: Eon Mode separates compute from storage, which provides the optimal architecture for stateful, containerized applications. Eon Mode subclusters can target specific workloads and scale elastically according to the current computational needs.

  • High availability: Vertica containers provide a consistent, repeatable environment that you can deploy quickly. If a database host or service fails, you can easily replace the resource.

  • Resource utilization: A container is a runtime environment that packages an application and its dependencies in an isolated process. This isolation allows containerized applications to share hardware without interference, providing granular resource control and cost savings.

  • Flexibility: Kubernetes is the de facto container orchestration platform. It is supported by a large ecosystem of public and private cloud providers.

Containerized Vertica ecosystem

Vertica provides various tools and artifacts for production and development environments. The containerized Vertica ecosystem includes the following:

  • Vertica Helm chart: Helm is a Kubernetes package manager that bundles into a single package the YAML manifests that deploy Kubernetes objects. Download Vertica Helm charts from the Vertica Helm Charts Repository.

  • Custom Resource Definition (CRD): A CRD is a shared global object that extends the Kubernetes API with your custom resource types. Use the CRD to instantiate a custom resource (CR), a deployable object that defines the state of an Eon Mode database on Kubernetes.

  • VerticaDB Operator: The operator is a custom controller that monitors the state of your CR and automates administrator tasks. If the current state differs from the declared state, the operator works to correct the current state.

  • Admission controller: The admission controller uses a webhook that the operator queries to verify changes to mutable states in a CR.

  • VerticaDB vlogger: The vlogger is a lightweight image used to deploy a sidecar utility container. The sidecar sends logs from vertica.log in the Vertica server container to standard output on the host node to simplify log aggregation.

  • Vertica Community Edition (CE) image: The CE image is a containerized, single-node Vertica installation that runs under the limited Enterprise Mode Community Edition (CE) license. The CE image provides a test environment consisting of an example database and developer tools.

    In addition to the pre-built CE image, you can build a custom CE image with the tools provided in the Vertica one-node-ce GitHub repository.

  • Communal Storage Options: Vertica supports a variety of public and private cloud storage providers. For a list of supported storage providers, see Containerized environments.

  • UDx development tools: The UDx-container GitHub repository provides the tools to build a container that packages the binaries, libraries, and compilers required to create C++ Vertica user-defined extensions. For additional details about extending Vertica in C++, see C++ SDK.

Vertica images

The following table describes images that Vertica provides for server and automation tools:

Vertica Kubernetes minimal image (without TensorFlow)

Optimized for Kubernetes. The default image included in the Vertica Helm charts. This image does not contain the TensorFlow package.

Image names:

Vertica Kubernetes (with TensorFlow)

Optimized for Kubernetes. This image has full machine learning capabilities.

Image name: vertica/vertica-k8s:11.1.1-0

Vertica Community Edition

A single-node Enterprise Mode image for test environments. For more information, see Vertica community edition (CE).

Image name: vertica/vertica-ce:11.1.1-0

VerticaDB Operator

The operator monitors the state of your custom resources and automates life cycle tasks for Vertica on Kubernetes. For installation instructions, see Installing the Vertica DB operator.

Image name: vertica/verticadb-operator:1.5.0

Vertica vlogger

Lightweight image for sidecar logging. The vlogger sends the contents of vertica.log to stdout on the host node. For implementation details, see Creating a custom resource.

Image name: vertica/vertica-logger:1.0.0

Creating a custom Vertica image

The Creating a Vertica Image tutorial in the Vertica Integrator's Guide provides a line-by-line description of the Dockerfile hosted on GitHub. You can add dependencies to replicate your development and production environments.
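
After you add your dependencies, a standard Docker workflow applies. The following sketch assumes that you cloned the repository that contains the Dockerfile; your-repo/vertica-k8s:custom is a placeholder tag for your own registry:

$ docker build -t your-repo/vertica-k8s:custom .
$ docker push your-repo/vertica-k8s:custom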

1 - Containerized Vertica on Kubernetes

Kubernetes is an open-source container orchestration platform that automatically manages infrastructure resources and schedules tasks for containerized applications at scale. Kubernetes achieves automation with a declarative model that decouples the application from the infrastructure. The administrator provides Kubernetes the desired state of an application, and Kubernetes deploys the application and works to maintain its desired state. This frees the administrator to update the application as business needs evolve, without worrying about the implementation details.

An application consists of resources, which are stateful objects that you create from Kubernetes resource types. Kubernetes provides access to resource types through the Kubernetes API, an HTTP API that exposes resource types as endpoints. The most common way to create a resource is with a YAML-formatted manifest file that defines the desired state of the resource. You use the kubectl command line tool to request a resource instance of that type from the Kubernetes API. In addition to the default resource types, you can extend the Kubernetes API and define your own resource types as a Custom Resource Definition (CRD).
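
For example, the following minimal manifest defines a built-in resource type (a ConfigMap, used here only as an illustration), and the kubectl command that follows requests an instance of that type from the Kubernetes API:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  log-level: info

$ kubectl apply -f example-config.yaml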

To manage the infrastructure, Kubernetes uses a host to run the control plane, and designates one or more hosts as worker nodes. The control plane is a collection of services and controllers that maintain the desired state of Kubernetes objects and schedule tasks on worker nodes. Worker nodes complete tasks that the control plane assigns. Just as you can create a CRD to extend the Kubernetes API, you can create a custom controller that maintains the state of your custom resources (CR) created from the CRD.

Vertica custom resource definition and custom controller

The Vertica CRD extends the Kubernetes API so that you can create custom resources that deploy an Eon Mode database as a StatefulSet. In addition, Vertica provides the VerticaDB operator, a custom controller that maintains the desired state of your CR and automates life cycle tasks. The result is a self-healing, highly-available, and scalable Eon Mode database that requires minimal manual intervention.

To simplify deployment, Vertica packages the CRD and the operator in Helm charts. A Helm chart bundles manifest files into a single package to create multiple resource type objects with a single command.
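
As an illustration, a minimal custom resource manifest might look like the following sketch. All values are placeholders, and the apiVersion shown assumes the v1beta1 API that current VerticaDB operator releases serve:

apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: example-vdb
spec:
  communal:
    credentialSecret: s3-creds
    endpoint: https://path/to/s3-endpoint
    path: s3://bucket-name/key-name
  subclusters:
    - name: defaultsubcluster
      size: 3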

Custom resource definition architecture

The Vertica CRD creates a StatefulSet, a workload resource type that persists data with ephemeral Kubernetes objects. The following diagram describes the Vertica CRD architecture:

VerticaDB operator

The operator is a namespace-scoped custom controller that maintains the state of custom objects and automates administrator tasks. The operator watches objects and compares their current state to the desired state declared in the custom resource. When the current state does not match the desired state, the operator works to restore the objects to the desired state.

In addition to state maintenance, the operator:

  • Installs Vertica

  • Creates an Eon Mode database

  • Upgrades Vertica

  • Revives an existing Eon Mode database

  • Restarts and reschedules DOWN pods

  • Scales subclusters

  • Manages services for pods

  • Monitors pod health

  • Handles load balancing for internal and external traffic

To validate changes to the custom resource, the operator queries the admission controller, a webhook that provides rules for mutable states in a custom resource.

Vertica makes the operator and admission controller available through OperatorHub.io or as a Helm chart. For details about installing the operator and the admission controller with both methods, see Installing the Vertica DB operator.

Vertica pod

A pod is essentially a wrapper around one or more logically-grouped containers. These containers consume the host node resources in a shared execution environment. In addition to sharing resources, a pod extends the container to interact with Kubernetes services. For example, you can assign labels to associate pods to other objects, and you can implement affinity rules to schedule pods on specific host nodes.

DNS names provide continuity between pod life cycles. Each pod is assigned an ordered and stable DNS name that is unique within its cluster. When a Vertica pod fails, the rescheduled pod uses the same DNS name as its predecessor. If a pod needs to persist data between life cycles, you can mount a custom volume in its filesystem.

Rescheduled pods require information about the environment to become part of the cluster. This information is provided by the Downward API. Environment information, such as the superuser password Secret, is mounted in the /etc/podinfo directory.

Sidecar container

A pod can run multiple containers when those containers must be tightly coupled and contribute to the same process. The Vertica pod allows a sidecar, a utility container that can access and perform utility tasks for the Vertica server process.

For example, logging is a common utility task. Idiomatic Kubernetes practices retrieve logs from standard output and standard error on the host node for log aggregation. To facilitate this practice, Vertica offers the vlogger sidecar image that sends the contents of vertica.log to standard output on the host node.
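
For example, a sketch of a vlogger sidecar entry in the custom resource, assuming the sidecars array accepts standard container fields such as name and image:

spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:1.0.0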

If a sidecar needs to persist data, you can mount a custom volume in the sidecar filesystem.

For implementation details, see Creating a custom resource.

Persistent storage

A pod is an ephemeral, immutable object that requires access to external storage to persist data between life cycles. To persist data, the operator uses the following API resource types:

  • StorageClass: Represents an external storage provider. You must create a StorageClass object separately from your custom resource and set this value with the local.storageClassName configuration parameter.

  • PersistentVolume (PV): A unit of storage that mounts in a pod to persist data. You dynamically or statically provision PVs. Each PV references a StorageClass.

  • PersistentVolumeClaim (PVC): The resource type that a pod uses to describe its StorageClass and storage requirements.

A pod mounts a PV in its filesystem to persist data, but a PV is not associated with a pod by default. However, the pod is associated with a PVC that includes a StorageClass in its storage requirements. When a pod requests storage with a PVC, the operator observes this request and then searches for a PV that meets the storage requirements. If the operator locates a PV, it binds the PVC to the PV and mounts the PV as a volume in the pod. If the operator does not locate a PV, it must either dynamically provision one, or the administrator must manually provision one before the operator can bind it to a pod.

PVs persist data because they exist independently of the pod life cycle. When a pod fails or is rescheduled, it has no effect on the PV. For additional details about StorageClass, PersistentVolume, and PersistentVolumeClaim, see the Kubernetes documentation.

StorageClass requirements

The StorageClass affects how the Vertica server environment and operator function. For optimum performance, consider the following:

  • If you do not set the local.storageClassName configuration parameter, the operator uses the default storage class. If you use the default storage class, confirm that it meets storage requirements for a production workload.

  • Select a StorageClass that uses a recommended storage format type as its fsType. A StorageClass sketch follows this list.

  • Use dynamic volume provisioning. The operator requires on-demand volume provisioning to create PVs as needed.
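
The following StorageClass sketch satisfies these guidelines on AWS with the EBS CSI driver. The class name, provisioner, and volume type are environment-specific assumptions, not requirements:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vertica-storage
provisioner: ebs.csi.aws.com            # CSI driver that supports dynamic provisioning
parameters:
  type: gp3
  csi.storage.k8s.io/fstype: ext4       # recommended storage format type
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true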

Local volume mounts

The operator mounts a single PVC in the /home/dbadmin/local-data/ directory of each pod to persist data. Each of the following subdirectories is a sub-path into the volume that backs the PVC:

  • /data: Stores the catalog and any temporary files. You can customize this path with the local.dataPath parameter.

  • /depot: Improves depot warming in a rescheduled pod. You can customize this path with the local.depotPath parameter.

  • /opt/vertica/config: Persists the contents of the configuration directory between restarts.

  • /opt/vertica/log: Persists log files between pod restarts.

By default, each path mounted in the /local-data directory is owned by the dbadmin user and the verticadb group. For details, see About Linux users created by Vertica and their privileges.
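
A sketch of the corresponding custom resource settings, assuming the StorageClass from the previous section; the paths are placeholder values:

spec:
  ...
  local:
    storageClassName: vertica-storage
    dataPath: /data
    depotPath: /depot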

Custom volume mounts

You might need to persist data between pod life cycles in one of the following scenarios:

  • An external process performs a task that requires long-term access to the Vertica server data.

  • Your custom resource includes a sidecar container in the Vertica pod.

You can mount a custom volume in the Vertica pod or sidecar filesystem. To mount a custom volume in the Vertica pod, add the definition in the spec section of the CR. To mount the custom volume in the sidecar, add it in an element of the sidecars array.

The CR requires that you provide the volume type and a name for each custom volume. The CR accepts any Kubernetes volume type. The volumeMounts.name value identifies the volume within the CR, and has the following requirements and restrictions:

  • It must match the volumes.name parameter setting.

  • It must be unique among all volumes in the /local-data, /podinfo, or /licensing mounted directories.
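
For example, the following sketch mounts a hypothetical PVC-backed volume in the Vertica server container. The volume name, claim name, and mount path are placeholders:

spec:
  ...
  volumeMounts:
    - name: tenant-data
      mountPath: /tenant-data
  volumes:
    - name: tenant-data
      persistentVolumeClaim:
        claimName: tenant-data-pvc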

For instructions on how to mount a custom volume in either the Vertica server container or in a sidecar, see Creating a custom resource.

Service objects

Vertica on Kubernetes provides two service objects: a headless service that requires no configuration to maintain DNS records and ordered names for each pod, and a load balancing service that manages internal traffic and external client requests for the pods in your cluster.

Load balancing services

Each subcluster uses a single load balancing service object. You can manually assign a name to a load balancing service object with the subclusters[i].serviceName parameter in the custom resource. Assigning a name is useful when you want to:

  • Direct traffic from a single client to multiple subclusters.

  • Scale subclusters by workload with more flexibility.

  • Identify subclusters by a custom service object name.

To configure the type of service object, use the subclusters[i].serviceType parameter in the custom resource to define a Kubernetes service type. Vertica supports the following service types:

  • ClusterIP: The default service type. This service provides internal load balancing, and sets a stable IP and port that is accessible from within the subcluster only.

  • NodePort: Provides external client access. You can specify a port number for each host node in the subcluster to open for client connections.

  • LoadBalancer: Uses a cloud provider load balancer to create NodePort and ClusterIP services as needed. For details about implementation, see the Kubernetes documentation and your cloud provider documentation.
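
For example, the following sketch exposes a hypothetical subcluster to external clients through a NodePort service with a custom service name:

spec:
  ...
  subclusters:
    - name: secondary-1
      size: 3
      serviceName: analytics
      serviceType: NodePort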

Because native Vertica load balancing interferes with the Kubernetes service object, Vertica recommends that you allow the Kubernetes services to manage load balancing for the subcluster. You can configure the native Vertica load balancer within the Kubernetes cluster, but you might receive unexpected results. For example, if you set the Vertica load balancing policy to ROUNDROBIN, the load balancing appears random.

For additional details about Kubernetes services, see the official Kubernetes documentation.

Security considerations

Vertica on Kubernetes supports both TLS and mTLS for communications between resource objects. You must manually configure TLS in your environment. For details, see TLS protocol.

The VerticaDB operator manages changes to the certificates. If you update an existing certificate, the operator replaces the certificate in the Vertica server container. If you add or delete a certificate, the operator reschedules the pod with the new configuration.

The subsequent sections detail internal and external connections that require TLS for secure communications.

Admission controller webhook certificates

The VerticaDB operator Helm chart includes the admission controller, a webhook that communicates with the Kubernetes API server to validate changes to a resource object. Because the API server communicates over HTTPS only, you must configure TLS certificates to authenticate communications between the API server and the webhook.

The method you use to install the VerticaDB operator determines how you manage TLS certificates for the admission controller:

  • OperatorHub.io: Runs on the Operator Lifecycle Manager (OLM) and automatically creates and mounts a self-signed certificate for the webhook. This installation method does not require additional action.
  • Helm charts: Manually manage admission TLS certificates with the webhook.certSource Helm chart parameter.

For details about each installation method, see Installing the Vertica DB operator.

Communal storage certificates

Supported storage locations authenticate requests with a self-signed certificate authority (CA) bundle. For TLS configuration details for each provider, see Configuring communal storage.

Client-server certificates

You might require multiple certificates to authenticate external client connections to the load balancing service object. You can mount one or more custom certificates in the Vertica server container with the certSecrets custom resource parameter. Each certificate is mounted in the container at /certs/cert-name/key.
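
A sketch of the certSecrets parameter, assuming a Secret named mtls-cert already exists in the namespace; the operator mounts its keys under /certs/mtls-cert/:

spec:
  ...
  certSecrets:
    - name: mtls-cert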

For details, see Creating a custom resource.

System configuration

As a best practice, make system configurations on the host node so that pods inherit those settings from the host node. This strategy eliminates the need to provide each pod a privileged security context to make system configurations on the host.

To manually configure host nodes, refer to the following sections:

The dbadmin account must use one of the authentication techniques described in Dbadmin authentication access.

2 - Vertica DB operator

The Vertica operator automates error-prone and time-consuming tasks that a Vertica on Kubernetes administrator must otherwise perform manually. The operator:

  • Installs Vertica

  • Creates an Eon Mode database

  • Upgrades Vertica

  • Revives an existing Eon Mode database

  • Restarts and reschedules DOWN pods

  • Scales subclusters

  • Manages services for pods

  • Monitors pod health

  • Handles load balancing for internal and external traffic

The Vertica operator is a Go binary that uses the operator SDK framework. It runs in its own pod, and is namespace-scoped to limit any failures to the objects in its namespace.

For details about installing and upgrading the operator, see Installing the Vertica DB operator.

Monitoring desired state

Each namespace is allowed one operator pod that acts as a custom controller and monitors the state of the custom resource objects within that namespace. The operator uses the control loop mechanism to reconcile state changes by investigating state change notifications from the custom resource instance, and periodically comparing the current state with the desired state.

If the operator detects a change in the desired state, it determines what change occurred and reconciles the current state with the new desired state. For example, if the user deletes a subcluster from the custom resource instance and successfully saves the changes, the operator deletes the corresponding subcluster objects in Kubernetes.

Validating state changes

The verticadb-operator Helm chart includes an admission controller, which uses a webhook to prevent invalid state changes to the custom resource. When you save a change to a custom resource, the admission controller webhook queries a REST endpoint that provides rules for mutable states in a custom resource. If a change violates the state rules, the admission controller prevents the change and returns an error. For example, it returns an error if you try to save a change that violates K-Safety.

Limitations

The operator has the following limitations:

  • You must manually configure TLS. For details, see Containerized Vertica on Kubernetes.

  • Vertica recommends that you do not use the Large cluster feature. If a control node fails, it might cause more than half of the database nodes to fail. This results in the database losing quorum.

  • Backup and Restore is a manual process.

  • Importing and exporting data between Vertica on Kubernetes and a cluster outside of Kubernetes requires that you expose the service with the NodePort or LoadBalancer service type and properly configure the network.

2.1 - Installing the Vertica DB operator

The custom resource definition (CRD), VerticaDB operator, and admission controller work together to maintain the state of your environment and automate tasks:

  • The CRD extends the Kubernetes API to provide custom objects. It serves as a blueprint for custom resource (CR) instances that specify the desired state of your environment.

  • The VerticaDB operator is a custom controller that monitors CR instances to maintain the desired state of VerticaDB objects. You can deploy one VerticaDB operator per namespace, and the operator monitors only the VerticaDB objects within that namespace.

  • The admission controller is a webhook that queries a REST endpoint to verify changes to mutable states in a CR instance.

Prerequisites

Installation options

Vertica provides two separate options to install the VerticaDB operator and admission controller:

OperatorHub.io

OperatorHub.io is a registry that allows vendors to share Kubernetes operators. Each vendor must adhere to packaging guidelines to simplify user adoption.

To install the VerticaDB operator from OperatorHub.io, navigate to the Vertica operator page and follow the install instructions.

Helm charts

Vertica packages the VerticaDB operator and admission controller in a Helm chart. Vertica on Kubernetes allows one operator instance per namespace.

Configuring TLS for the admission controller

Before you can install the VerticaDB Helm chart, you must configure TLS for the admission controller. The admission controller uses a webhook that requires TLS certificates for data encryption. Choose one of the following data encryption options:

  • cert-manager to generate and manage certificates. For environments that use a self-signed certificate authority (CA), Vertica recommends using cert-manager.

  • Custom certificates

By default, the Helm chart uses cert-manager unless you provide custom certificates. You cannot install the VerticaDB operator Helm chart unless you either install cert-manager or provide custom certificates.

Installing cert-manager

cert-manager is available as a YAML manifest in a GitHub repository.

  1. Use kubectl to install cert-manager:

    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
    

    Installation might take a few minutes.

  2. Verify the cert-manager installation:

    $ kubectl get pods --namespace cert-manager
    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-7dd5854bb4-skks7              1/1     Running   5          12d
    cert-manager-cainjector-64c949654c-9nm2z   1/1     Running   5          12d
    cert-manager-webhook-6bdffc7c9d-b7r2p      1/1     Running   5          12d
    

For additional details about cert-manager install verification, see the cert-manager documentation.

Defining custom certificates

Custom certificates require a TLS key that sets the Subject Alternative Name (SAN) using the admission controller webhook's fully-qualified domain name (FQDN). You can set the SAN in a configuration file with the following format:

[alt_names]
DNS.1 = verticadb-operator-webhook-service.namespace.svc
DNS.2 = verticadb-operator-webhook-service.namespace.svc.cluster.local

For more information about TLS and Vertica, see TLS protocol.

When you install the VerticaDB operator and admission controller Helm chart, you can pass parameters to customize the Helm chart. Conceal custom certificates in a Secret before you pass them as parameters. The following command creates a Secret that stores the TLS key, TLS certificate, and CA certificate:

$ kubectl create secret generic tls-secret --from-file=tls.key=/path/to/tls.key --from-file=tls.crt=/path/to/tls.crt --from-file=ca.crt=/path/to/ca.crt

Use tls-secret when you install the VerticaDB operator and admission controller Helm chart. For a detailed example, see Helm chart parameters.

Granting operator privileges

You must have cluster administrator privileges to install the operator Helm chart. In some circumstances, you might want to authorize a user with lesser privileges to install the operator in a specific namespace. You can grant these operator privileges with a preconfigured Kubernetes service account.

Vertica leverages Kubernetes RBAC to authorize service accounts with the privileges to perform operator actions. You can grant operator privileges to a Role resource type, then define a RoleBinding resource type that associates that Role with a service account. Any user can pass the service account name to the helm install command with the serviceAccountOverride parameter and install the operator.

The following steps use a YAML file, default-rbac.yaml. This sample file defines a ServiceAccount, Roles, and RoleBindings to grant the required privileges to the service account. It is available in the vertica-kubernetes GitHub repository:

  1. Apply default-rbac.yaml to the namespace:

    $ kubectl apply -n namespace -f https://github.com/vertica/vertica-kubernetes/releases/download/v1.4.0/default-rbac.yaml
    
  2. Verify the changes with kubectl get:

    • Service account:

      $ kubectl get serviceaccounts
      NAME                                    SECRETS   AGE
      default                                 1         71m
      verticadb-operator-controller-manager   1         69m
      
    • Roles in the correct namespace:

      $ kubectl get roles -n namespace
      NAME                                      CREATED AT
      verticadb-operator-leader-election-role   2022-04-14T16:26:53Z
      verticadb-operator-manager-role           2022-04-14T16:26:53Z
      
    • RoleBindings in the correct namespace:

      $ kubectl get rolebinding -n namespace
      NAME                                             ROLE                                           AGE
      verticadb-operator-leader-election-rolebinding   Role/verticadb-operator-leader-election-role   73m
      verticadb-operator-manager-rolebinding           Role/verticadb-operator-manager-role           73m
      

Installing the helm chart

Before you can install the Helm chart, you must configure TLS for the admission controller with either cert-manager or custom certificates.

The following install steps use custom certificates:

  1. Add the Vertica Helm charts to your local Helm repository. The following command adds the chart repository and names it vertica-charts for future reference:

    $ helm repo add vertica-charts https://vertica.github.io/charts
    
  2. Update your local chart repository to ensure that you have the latest version of the charts:

    $ helm repo update vertica-charts
    
  3. Install the operator Helm chart. The following examples demonstrate the most common Helm chart configurations. For details about the Helm chart options and parameters, see Helm chart parameters.

    Enter one of the following commands to customize your Helm chart installation:

    • Default configuration. The following command requires cluster administrator privileges:

      $ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator
      
    • Custom certificates. Pass custom certificates with the webhook.caBundle and webhook.tlsSecret parameters. The following command requires cluster administrator privileges, and uses the tls-secret Secret created in Defining Custom Certificates:

      $ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
          --set webhook.caBundle=$(cat /path/to/root.pem | base64 --wrap 0) \
          --set webhook.tlsSecret=tls-secret
      
    • Service account override. Use service accounts to allow users without cluster administrator privileges to install the operator. Pass the service account with the serviceAccountNameOverride parameter:

      $ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
          --set serviceAccountNameOverride=service-account-name
      

      For details, see Granting Operator Installation Privileges.

    • Do not install the admission controller webhook. Deploying the webhook requires cluster-scoped privileges that are not required to install the operator. If you use a service account that is granted the privileges required to install the operator but not the webhook, provide the service account with serviceAccountNameOverride, and set webhook.enable to false to deploy only the operator:

      $ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
          --set serviceAccountNameOverride=service-account-name \
          --set webhook.enable=false
      

For additional details about helm install, see the official documentation.

2.2 - Upgrading the Vertica DB operator

Vertica supports two separate options to upgrade the VerticaDB operator:

  • OperatorHub.io

  • Helm Charts

Prerequisites

OperatorHub.io

The Operator Lifecycle Manager (OLM) operator manages upgrades for OperatorHub.io installations. You can configure the OLM operator to upgrade the VerticaDB operator manually or automatically with the Subscription object's spec.installPlanApproval parameter.

Automatic upgrade

To configure automatic version upgrades, set spec.installPlanApproval to Automatic, or omit the setting entirely. When the OLM operator refreshes the catalog source, it installs the new VerticaDB operator automatically.
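
For reference, a sketch of the relevant portion of the Subscription object; the catalog-source fields are elided because they depend on how you subscribed to the operator:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: verticadb-operator
spec:
  ...
  installPlanApproval: Automatic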

Manual upgrade

Upgrade the VerticaDB operator manually to approve version upgrades for specific install plans. To manually upgrade, set the spec.installPlanApproval parameter to Manual and complete the following:

  1. Verify if there is an install plan that requires approval to proceed with the upgrade:

    $ kubectl get installplan
    NAME            CSV                         APPROVAL   APPROVED
    install-ftcj9   verticadb-operator.v1.7.0   Manual     false
    install-pw7ph   verticadb-operator.v1.6.0   Manual     true
    

    The command output shows that the install plan install-ftcj9 for VerticaDB operator version 1.7.0 is not approved.

  2. Approve the install plan with a patch command:

    $ kubectl patch installplan install-ftcj9 --type=merge --patch='{"spec": {"approved": true}}'
    installplan.operators.coreos.com/install-ftcj9 patched
    

After you set the approval, the OLM operator silently upgrades the VerticaDB operator. To monitor its progress, inspect the STATUS column of the Subscription object:


$ kubectl describe subscription subscription-object-name

Helm charts

The CRD is included when you install the Helm chart, but the helm install command does not overwrite an existing CRD. To upgrade the operator, you must update the CRD with the manifest from the GitHub repository. Upgrading the operator with the CRD requires the following prerequisites:

Additionally, you must upgrade the VerticaAutoscaler custom resource, even if you do not use it in your environment. The VerticaAutoscaler CR is installed with the operator and is maintained as a separate YAML manifest. Upgrade the VerticaAutoscaler CR to ensure that your operator is upgraded completely.

Use kubectl apply to upgrade the CRD for both the VerticaDB operator and the VerticaAutoscaler:

  1. Upgrade the VerticaDB operator CRD:

    $ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/verticadbs.vertica.com-crd.yaml
    
  2. Upgrade the VerticaAutoscaler CRD:

    $ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/verticaautoscalers.vertica.com-crd.yaml
    
  3. Upgrade the Helm chart:

    $ helm upgrade operator-name --wait vertica-charts/verticadb-operator
    

2.3 - Helm chart parameters

The following table describes the available settings for the VerticaDB operator and admission controller Helm chart.

image.name

The name of the image that runs the operator.

Default: vertica/verticadb-operator:version

imagePullSecrets

A list of Secrets that store credentials to authenticate to the private container repository specified by image.repo and rbac_proxy_image. For details, see Specifying ImagePullSecrets in the Kubernetes documentation.

image.repo

The server that hosts the repository that contains image.name. Use this parameter for deployments that require control over a private hosting server, such as an air-gapped operator.

Use this parameter with rbac_proxy_image.name and rbac_proxy_image.repo.

Default: Null

logging.filePath

The path to a log file in the VerticaDB operator filesystem. If this value is not specified, Vertica writes logs to standard output.

Default: Empty string (' ') that indicates standard output.

logging.level

Minimum logging level. This parameter accepts the following values:

  • debug

  • info

  • warn

  • error

Default: info

logging.maxFileSize

When logging.filePath is set, the maximum size in MB of the logging file before log rotation occurs.

Default: 500

logging.maxFileAge

When logging.filePath is set, the maximum age in days of the logging file before log rotation deletes the file.

Default: 7

logging.maxFileRotation

When logging.filePath is set, the maximum number of files that are kept in rotation before the old ones are removed.

Default: 3

prometheus.expose

Configures the operator's /metrics endpoint for the Prometheus integration. The following options are valid:

  • EnableWithAuthProxy: Creates a new service object that exposes an HTTPS /metrics endpoint. The RBAC proxy controls access to the metrics.

  • EnableWithoutAuth: Creates a new service object that exposes an HTTP /metrics endpoint that does not authorize connections. Any client with network access can read the metrics.

  • Disable: Prometheus metrics are not exposed.

Default: EnableWithAuthProxy

rbac_proxy_image.name

The name of the Kubernetes RBAC proxy image that performs authorization. Use this parameter for deployments that require authorization by a proxy server, such as an air-gapped operator.

Use this parameter with image.repo and rbac_proxy_image.repo.

Default: kubebuilder/kube-rbac-proxy:v0.11.0

rbac_proxy_image.repo

The server that hosts the repository that contains rbac_proxy_image.name. Use this parameter for deployments that perform authorization by a proxy server, such as an air-gapped operator.

Use this parameter with image.repo and rbac_proxy_image.name.

Default: gcr.io

serviceAccountNameOverride

Service account that identifies any pods in the cluster for apiserver access. A cluster administrator can create a service account that grants the privileges required to install the operator so that users without cluster administrator privileges can install the Helm chart.

To correctly control access, the service account's Roles and RoleBindings must exist before you add the service account to the CR. If these are not set, the Vertica Helm chart creates and uses a service account.

Default: Empty string ("")

webhook.caBundle

A PEM-encoded certificate authority (CA) bundle that validates the webhook's server certificate. If this is not set, the webhook uses the system trust roots on the apiserver.

webhook.enable

Determines if the Helm chart installs the admission controller webhooks for the VerticaDB custom resource and VerticaAutoscaler. If you do not have the privileges required to install the admission controller, set this value to false to deploy the operator only.

This parameter enables or disables both webhooks. You cannot enable one webhook and disable the other.

Default: true

webhook.tlsSecret

Secret that contains the following keys for the webhook.caBundle:

  • tls.key

  • ca.crt

  • tls.crt

resources.limits and resources.requests

The resource requirements for the operator pod.

resources.limits is the maximum amount of CPU and memory that an operator pod can consume from its host node.

resources.requests is the amount of CPU and memory that an operator pod requests from its host node.

Defaults:

resources:
  limits:
    cpu: 100m
    memory: 750Mi
  requests:
    cpu: 100m
    memory: 20Mi
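
For example, a hypothetical installation that overrides a few of these parameters might look like the following, where operator-name and namespace are placeholders:

$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
    --set logging.level=debug \
    --set resources.limits.memory=1Gi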

2.4 - Upgrading Vertica on Kubernetes

The operator automates Vertica server version upgrades for a custom resource (CR). Use the upgradePolicy setting in the CR to determine whether your cluster remains online or is taken offline during the version upgrade.

Prerequisites

Before you begin, complete the following:

Setting the policy

The upgradePolicy CR parameter setting determines how the operator upgrades Vertica server versions. It provides the following options:

Offline

The operator shuts down the cluster to prevent multiple versions from running simultaneously.

The operator performs all server version upgrades using the Offline setting in the following circumstances:

  • You have only one subcluster

  • You are upgrading from a Vertica server version prior to version 11.1.0

Online

The cluster continues to operate during an online upgrade. The data is in read-only mode while the operator upgrades the image for the primary subcluster.

Auto

The default setting. The operator selects either Offline or Online depending on the configuration. The operator performs an Online upgrade if all of the following are true:

  • A license Secret exists

  • K-Safety is 1

  • The cluster is currently running a Vertica version 11.1.0 or higher

If the current configuration does not meet all of the previous requirements, the operator performs an Offline upgrade.

Set the reconcile loop iteration time

During an upgrade, the operator runs the reconcile loop to compare the actual state of the objects to the desired state defined in the CR. The operator requeues any unfinished work, and the reconcile loop compares states with a set period of time between each reconcile iteration. Set the upgradeRequeueTime parameter to determine the amount of time between each reconcile loop iteration.
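
For example, the following patch sets a hypothetical requeue interval; the value is assumed to be in seconds:

$ kubectl patch verticadb cluster-name --type=merge --patch '{"spec": {"upgradeRequeueTime": 120}}'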

Routing client traffic during an online upgrade

During an online upgrade, the operator begins by upgrading the Vertica server version in the primary subcluster to form a cluster with the new version. When the operator restarts the primary nodes, it places the secondary subclusters in read-only mode. Next, the operator upgrades any secondary subclusters one at a time. During the upgrade for any subcluster, all client connections are drained, and traffic is rerouted to either an existing subcluster or a temporary subcluster.

Online upgrades require more than one subcluster so that the operator can reroute client traffic for the subcluster while it is upgrading. By default, the operator selects which subcluster receives the rerouted traffic using the following rules:

  • When rerouting traffic for the primary subcluster, the operator selects the first secondary subcluster defined in the CR.

  • When restarting the first secondary subcluster after the upgrade, the operator selects the first subcluster defined in the CR that is up.

  • If no secondary subclusters exist, you cannot perform an online upgrade. The operator selects the first primary subcluster defined in the CR and performs an offline upgrade.

Routing client traffic to an existing subcluster

You might want to control which subclusters handle rerouted client traffic due to subcluster capacity or licensing limitations. You can set the temporarySubclusterRouting.names parameter to specify an existing subcluster to receive the rerouted traffic:

spec:
  ...
  temporarySubclusterRouting:
    names:
      - subcluster-2
      - subcluster-1

In the previous example, subcluster-2 accepts rerouted traffic when another subcluster is offline during the upgrade. If subcluster-2 is down, subcluster-1 accepts its traffic.

Routing client traffic to a temporary subcluster

To create a temporary subcluster that exists for the duration of the upgrade process, use the temporarySubclusterRouting.template parameter to provide a name and size for the temporary subcluster:

spec:
  ...
  temporarySubclusterRouting:
    template:
      name: transient
      size: 3

If you choose to upgrade with a temporary subcluster, ensure that you have the necessary resources.

Upgrading the Vertica server version

After you set the upgradePolicy and optionally configure temporary subcluster routing, use the kubectl command line tool to perform the upgrade and monitor its progress.

The following steps perform an online version upgrade:

  1. Set the upgrade policy. The following command uses the kubectl patch command to set the upgradePolicy value to Online:

    $ kubectl patch verticadb cluster-name --type=merge --patch '{"spec": {"upgradePolicy": "Online"}}'
    
  2. Update the image value in the CR with kubectl patch:

    $ kubectl patch verticadb cluster-name --type=merge --patch '{"spec": {"image": "vertica/vertica-k8s:new-version"}}'
    
  3. Use kubectl wait to wait until the operator acknowledges the new image and begins upgrade mode:

    $ kubectl wait --for=condition=ImageChangeInProgress=True vdb/cluster-name --timeout=180s
    
  4. Use kubectl wait to wait until the operator leaves upgrade mode:

    $ kubectl wait --for=condition=ImageChangeInProgress=False vdb/cluster-name --timeout=800s
    

Viewing the upgrade process

To view the current phase of the upgrade process, use kubectl get to inspect the upgradeStatus status field:

$ kubectl get vdb -n namespace database-name -o jsonpath='{.status.upgradeStatus}{"\n"}'
Restarting cluster with new image

To view the entire upgrade process, use kubectl describe to list the events the operator generated during the upgrade:

$ kubectl describe vdb cluster-name

...
Events:
  Type    Reason                   Age    From                Message
  ----    ------                   ----   ----                -------
  Normal  UpgradeStart             5m10s  verticadb-operator  Vertica server upgrade has started.  New image is 'vertica-k8s:new-version'
  Normal  ClusterShutdownStarted   5m12s  verticadb-operator  Calling 'admintools -t stop_db'
  Normal  ClusterShutdownSucceeded 4m08s  verticadb-operator  Successfully called 'admintools -t stop_db' and it took 56.22132s
  Normal  ClusterRestartStarted    4m25s  verticadb-operator  Calling 'admintools -t start_db' to restart the cluster
  Normal  ClusterRestartSucceeded  25s    verticadb-operator  Successfully called 'admintools -t start_db' and it took 240s
  Normal  UpgradeSucceeded         5s     verticadb-operator  Vertica server upgrade has completed successfully

2.5 - Red Hat OpenShift integration

Red Hat OpenShift is a hybrid cloud platform that provides enhanced security features and greater control over the Kubernetes cluster. In addition, OpenShift provides the OperatorHub, a catalog of operators that meet OpenShift requirements.

For comprehensive instructions about the OpenShift platform, refer to the official Red Hat OpenShift documentation.

Enhanced security with security context constraints

OpenShift requires that each deployment uses a security context constraint (SCC) to enforce enhanced security measures. The SCC lets administrators control the privileges of the pods in a cluster. For example, you can restrict namespace access for specific users in a multi-user environment.

Default SCCs

OpenShift provides default SCCs that offer a range of security features without manual configuration. Vertica on Kubernetes supports the privileged SCC, the most permissive default SCC. The privileged SCC allows Vertica to assign user and group IDs to the Kubernetes objects in the cluster. In addition, the privileged SCC has the following Linux capabilities that enable internal SSH communication between the pods:

  • SYS_CHROOT

  • AUDIT_WRITE

Anyuid-extra custom SCC

Vertica provides anyuid-extra, a custom SCC that you can create that extends the anyuid SCC. Use the anyuid-extra SCC if you need to run Vertica in a more restrictive environment than the privileged SCC provides. For example, if you do not have the privileges to grant the privileged SCC, you can create the anyuid-extra SCC and add it to your Vertica workloads service account.

For installation details, see Creating a Custom SCC with anyuid-extra.

Installing the operator

The VerticaDB operator is a community operator that is maintained by Vertica. Each operator available in the OperatorHub must adhere to requirements defined by the Operator Lifecycle Manager (OLM). To meet these requirements, vendors must provide a cluster service version (CSV) manifest for each operator. Vertica provides a CSV for each version of the VerticaDB operator available in the OpenShift OperatorHub.

The VerticaDB operator supports OpenShift versions 4.8 and higher.

You must have cluster-admin privileges on your OpenShift account to install the VerticaDB operator. For detailed installation instructions, refer to the OpenShift documentation.

Installing the operator in multiple OpenShift namespaces

By default, the OpenShift user interface (UI) installs the VerticaDB operator in a single OpenShift namespace. In some circumstances, you might require that the operator watch and manage resource objects across multiple OpenShift namespaces.

Prerequisites:

The following steps add the VerticaDB operator to an additional namespace:

  1. Create a YAML-formatted OperatorGroup object file. The following example creates a file named operatorgroup.yaml:

    apiVersion: operators.coreos.com/v1alpha2
    kind: OperatorGroup
    metadata:
      name: vertica-operatorgroup
      namespace: $NAMESPACE
    spec:
      targetNamespaces:
      - $NAMESPACE
    

    In the previous example, $NAMESPACE is the namespace where you want to install the operator.

  2. Create the OperatorGroup object:

    $ oc apply -f operatorgroup.yaml
    
  3. Create a YAML-formatted Subscription object file to subscribe a namespace to an operator. The following example creates a file named sub.yaml:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: verticadb-operator
      namespace: $NAMESPACE
    spec:
      channel: stable
      name: verticadb-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
    
  4. Create the Subscription object:

    $ oc apply -f sub.yaml
    

    After you create the Subscription object, the OLM is aware of the operator.

  5. Use kubectl get to view the installation progress in a separate shell:

    $ kubectl get -n $NAMESPACE clusterserviceversion -w --selector operators.coreos.com/verticadb-operator.$NAMESPACE
    

When the installation is complete, you can manage the operator from the UI.

Creating a custom SCC with anyuid-extra

Before you can deploy Vertica, you must create the anyuid-extra SCC and add it to your Vertica workloads service account. The Vertica anyuid-extra SCC manifest is available on the Vertica GitHub repository.

  1. Create the custom SCC using the anyuid-extra YAML-formatted manifest:

    $ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/download/v1.4.0/custom-scc.yaml
    

    For detailed instructions, refer to the OpenShift documentation.

  2. Execute the following command to add the custom SCC to your Vertica workloads service account:

    $ oc adm policy add-scc-to-user -n $NAMESPACE -z verticadb-operator-controller-manager anyuid-extra
    

    In the previous command, $NAMESPACE is the namespace with the operator installation.

By default, the anyuid-extra has a priority setting of 10, so it is automatically selected instead of the default privileged SCC. For additional details about the priority setting, refer to the OpenShift documentation.

Deploying Vertica on OpenShift

After you install the VerticaDB operator and add a supported SCC to your Vertica workloads service account, you can deploy Vertica on OpenShift.

For details about installing OpenShift in supported environments, see the OpenShift Container Platform installation overview.

Before you deploy Vertica on OpenShift, create the required Secrets to store sensitive information. For details about Secrets and OpenShift, see the OpenShift documentation. For guidance on deploying a Vertica custom resource, see Creating a custom resource.

2.6 - Prometheus integration

Vertica on Kubernetes integrates with Prometheus to scrape time series metrics about the VerticaDB operator. These metrics create a detailed model of your application over time, which provides valuable performance and troubleshooting insights. Prometheus exposes these metrics with an HTTP endpoint to facilitate internal and external communications and service discovery in microservice and containerized architectures.

Prometheus requires that you set up targets—metrics that you want to monitor. Each target is exposed on the operator's /metrics endpoint, and Prometheus periodically scrapes that endpoint to collect target data. The operator supports the operator SDK framework, which requires that an authorization proxy impose role-based-access control (RBAC) to access operator metrics. To increase flexibility, Vertica provides the following options to access the /metrics endpoint with Prometheus:

  • Use a sidecar container as an RBAC proxy to authorize connections.

  • Expose the /metrics endpoint to external connections without RBAC.

  • Disable Prometheus entirely.

Vertica provides Helm chart parameters and YAML manifests to configure each option.

Prerequisites

Access metrics with RBAC

The operator SDK framework requires that operators use an authorization proxy for metrics access. Because the operator sends metrics to localhost only, Vertica meets these requirements with a sidecar container with localhost access that enforces RBAC.

RBAC rules are cluster-scoped, and the sidecar authorizes connections from clients associated with a service account that has the correct ClusterRole and ClusterRoleBindings. Vertica provides the following example manifests:

For additional details about ClusterRoles and ClusterRoleBindings, see the Kubernetes documentation.

Create RBAC rules

The following steps create the ClusterRole and ClusterRoleBindings objects that grant access to the /metrics endpoint to a non-Kubernetes resource such as Prometheus. Because RBAC rules are cluster-scoped, you must create or add to an existing ClusterRoleBinding:

  1. Create a ClusterRoleBinding that binds the role for the RBAC sidecar proxy with a service account:

    • Create a ClusterRoleBinding:

      $ kubectl create clusterrolebinding verticadb-operator-proxy-rolebinding \
          --clusterrole=verticadb-operator-proxy-role \
          --serviceaccount=namespace:serviceaccount
      
    • Add a service account to an existing ClusterRoleBinding:

      $ kubectl patch clusterrolebinding verticadb-operator-proxy-rolebinding \
          --type='json' \
          -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "serviceaccount","namespace": "namespace" } }]'
      
  2. Create a ClusterRoleBinding that binds the role for the non-Kubernetes object to the RBAC sidecar proxy service account:

    • Create a ClusterRoleBinding:

      $ kubectl create clusterrolebinding verticadb-operator-metrics-reader \
          --clusterrole=verticadb-operator-metrics-reader \
          --serviceaccount=namespace:serviceaccount
      
    • Bind the service account to an existing ClusterRoleBinding:

      $ kubectl patch clusterrolebinding verticadb-operator-metrics-reader \
          --type='json' \
          -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "serviceaccount","namespace": "namespace" } }]'
      

When you install the Helm chart, the ClusterRole and ClusterRoleBindings are created automatically. By default, the prometheus.expose parameter is set to EnableWithAuthProxy, which creates the service object and exposes the operator's /metrics endpoint.

For details about creating a sidecar container, see Creating a custom resource.

Service object

Vertica provides a service object verticadb-operator-metrics-service to access the Prometheus /metrics endpoint. The VerticaDB operator does not manage this service object. By default, the service object uses the ClusterIP service type to support RBAC.

Connect to the /metrics endpoint at port 8443 with the following path:

https://verticadb-operator-metrics-service.namespace.svc.cluster.local:8443/metrics

Bearer token authentication

Kubernetes authenticates requests to the API server with service account credentials. Each pod is associated with a service account and has the following credentials stored in the filesystem of each container in the pod:

  • Token at /var/run/secrets/kubernetes.io/serviceaccount/token

  • Certificate authority (CA) bundle at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

Use these credentials to authenticate to the /metrics endpoint through the service object. You must use the credentials for the service account that you used to create the ClusterRoleBindings.

For example, the following cURL request accesses the /metrics endpoint. Include the --insecure option only if you do not want to verify the serving certificate:

$ curl --insecure --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://verticadb-operator-metrics-service.vertica:8443/metrics

For additional details about service account credentials, see the Kubernetes documentation.

Prometheus operator integration (optional)

Vertica on Kubernetes integrates with the Prometheus operator, which provides custom resources (CRs) that simplify targeting metrics. Vertica supports the ServiceMonitor CR that discovers the VerticaDB operator automatically, and authenticates requests with a bearer token.

The ServiceMonitor CR is available as a release artifact in our GitHub repository. See Helm chart parameters for details about the prometheus.createServiceMonitor parameter.
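
If you author the object yourself instead of using the release artifact, a ServiceMonitor sketch might look like the following. The label selector and port name are assumptions and must match the metrics service in your deployment:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: verticadb-operator-metrics-monitor
spec:
  selector:
    matchLabels:
      control-plane: controller-manager   # assumed label on the metrics service
  endpoints:
    - port: https                         # assumed port name on the service
      scheme: https
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true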

Access metrics without RBAC

You might have an environment that does not require privileged access to Prometheus metrics. For example, you might run Prometheus outside of Kubernetes.

To allow external access to the /metrics endpoint with HTTP, set prometheus.expose to EnableWithoutAuth. For example:

$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
    --set prometheus.expose=EnableWithoutAuth

Service object

Vertica provides a service object verticadb-operator-metrics-service to access the Prometheus /metrics endpoint. The VerticaDB operator does not manage this service object. By default, the service object uses the ClusterIP service type, so you must change the serviceType for external client access. The service object's fully-qualified domain name (FQDN) is as follows:

verticadb-operator-metrics-service.namespace.svc.cluster.local

Connect to the /metrics endpoint at port 8443 with the following path:

http://verticadb-operator-metrics-service.namespace.svc.cluster.local:8443/metrics
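
How you change the serviceType depends on how you manage the Helm release. As a quick illustration, the following command patches the live service object to the NodePort type; NodePort is only an example, and changes made this way can be overwritten the next time the chart is applied:

$ kubectl patch service verticadb-operator-metrics-service --namespace namespace \
    --type merge --patch '{"spec": {"type": "NodePort"}}'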

Disable Prometheus

To disable Prometheus, set the prometheus.expose Helm chart parameter to Disable. For example:

$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
    --set prometheus.expose=Disable

For details about Helm install commands, see Installing the Vertica DB operator.

3 - Configuring communal storage

Vertica on Kubernetes supports a variety of communal storage providers to accommodate your storage requirements.

Vertica on Kubernetes supports a variety of communal storage providers to accommodate your storage requirements. Configuring each storage provider requires that you create a Secret or ConfigMap to store sensitive information so that you can declare it in your Custom Resource (CR) without exposing any literal values.

Amazon Web Services (AWS) S3 or S3-Compatible storage

Vertica on Kubernetes supports AWS communal storage locations, and private cloud S3 storage such as MinIO.

To connect to an S3-compatible storage location, create a Secret to store both your communal access and secret key credentials. Then, add the Secret, path, and S3 endpoint to the CR spec.

  1. The following command stores both your S3-compatible communal access and secret key credentials in a Secret named s3-creds:

    $ kubectl create secret generic s3-creds --from-literal=accesskey=accesskey --from-literal=secretkey=secretkey
    
  2. Add the Secret to the communal section of the CR spec:

    spec:
      ...
      communal:
        credentialSecret: s3-creds
        endpoint: https://path/to/s3-endpoint
        path: s3://bucket-name/key-name
        ...
    

For a detailed description of an S3-compatible storage implementation, see Creating a custom resource. For additional details about Vertica and AWS, see Vertica on Amazon Web Services.
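
Secret values are stored base64-encoded. If you want to confirm that s3-creds contains the expected keys before you reference it in the CR, you can decode one of them. This check is optional:

$ kubectl get secret s3-creds -o jsonpath='{.data.accesskey}' | base64 --decode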

Google Cloud Storage

Authenticating to Google Cloud Storage (GCS) requires your hash-based message authentication code (HMAC) access and secret keys, and the path to your GCS bucket. For details about HMAC keys, see Eon Mode on GCP prerequisites.

  1. The following command stores your HMAC access and secret key in a Secret named gcs-creds:

    $ kubectl create secret generic gcs-creds --from-literal=accesskey=accessKey --from-literal=secretkey=secretkey
    
  2. Add the Secret and the path to the GCS bucket that contains your Vertica database to the communal section of the CR spec:

    spec:
      ...
      communal:
        credentialSecret: gcs-creds
        path: gs://bucket-name/path/to/database-name
        ...
    

For additional details about Vertica and GCS, see Vertica on Google Cloud Platform.

Azure Blob Storage

Microsoft Azure provides a variety of options to authenticate to an Azure Blob Storage location. Depending on your environment, you can use one of the following combinations to store credentials in a Secret:

  • accountName and accountKey

  • accountName and shared access signature (SAS)

If you use an Azure storage emulator such as Azurite in a testing environment, you can authenticate with accountName and blobStorage values.

  1. The following command stores accountName and accountKey in a Secret named azb-creds:

    $ kubectl create secret generic azb-creds --from-literal=accountKey=accountKey --from-literal=accountName=accountName
    

    Alternately, you could store your accountName and your SAS credentials in azb-creds:

    $ kubectl create secret generic azb-creds --from-literal=sharedAccessSignature=sharedAccessSignature --from-literal=accountName=accountName
    
  2. Add the Secret and the path that contains your AZB storage bucket to the communal section of the CR spec:

    spec:
      ...
      communal:
        credentialSecret: azb-creds
        path: azb://accountName/bucket-name/database-name
        ...
    

For details about Vertica and authenticating to Microsoft Azure, see Eon Mode databases on Azure.

Hadoop file storage

Connect to Hadoop Distributed Filesystem (HDFS) communal storage with the standard webhdfs scheme, or the swebhdfs scheme for wire encryption. In addition, you must add your HDFS configuration files in a ConfigMap, a Kubernetes object that stores data in key-value pairs. You can optionally configure Kerberos to authenticate connections to your HDFS storage location.

The following example uses the swebhdfs wire encryption scheme that requires a certificate authority (CA) bundle in the CR spec.

  1. The following command stores a PEM-encoded CA bundle in a secret named hadoop-cert:

    $ kubectl create secret generic hadoop-cert --from-file=ca-bundle.pem
    
  2. HDFS configuration files are located in the /etc/hadoop directory. The following command creates a ConfigMap named hadoop-conf:

    $ kubectl create configmap hadoop-conf --from-file=/etc/hadoop
    
  3. Add the configuration values to the communal and certSecrets sections of the spec:

    spec:
      ...
      communal:
        path: "swebhdfs://path/to/database"
        hadoopConfig: hadoop-conf
        caFile: /certs/hadoop-cert/ca-bundle.pem
      certSecrets:
        - name: hadoop-cert
      ...
    

    The previous example defines the following:

    • communal.path: The path to the database, using the wire encryption scheme. Enclose the path in double quotes.

    • communal.hadoopConfig: The ConfigMap storing the contents of the /etc/hadoop directory.

    • communal.caFile: The mount path in the container filesystem containing the CA bundle used to create the hadoop-cert Secret.

    • certSecrets.name: The Secret containing the CA bundle.

For additional details about HDFS and Vertica, see Apache Hadoop integration.
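
To confirm that the ConfigMap contains the configuration files that Vertica needs, such as core-site.xml and hdfs-site.xml, you can optionally inspect its keys:

$ kubectl describe configmap hadoop-conf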

Kerberos authentication (optional)

Vertica authenticates connections to HDFS with Kerberos. The Kerberos configuration for Vertica on Kubernetes is the same as for a standard Eon Mode database, as described in Kerberos authentication.

  1. The following command stores the krb5.conf and krb5.keytab files in a Secret named krb5-creds:

    $ kubectl create secret generic krb5-creds --from-file=kerberos-conf=/etc/krb5.conf --from-file=kerberos-keytab=/etc/krb5.keytab
    

    Consider the following when managing the krb5.conf and krb5.keytab files in Vertica on Kubernetes:

    • Each pod uses the same krb5.keytab file, so you must update the krb5.keytab file before you begin any scaling operation.

    • When you update the contents of the krb5.keytab file, the operator updates the mounted files automatically, a process that does not require a pod restart.

    • The krb5.conf file must include a [domain_realm] section that maps the Kubernetes cluster domain to the Kerberos realm. The following example maps the default .cluster.local domain to a Kerberos realm named EXAMPLE.COM:

      [domain_realm]
        .cluster.local = EXAMPLE.COM
      
  2. Add the Secret and additional Kerberos configuration information to the CR:

    spec:
      ...
      communal:
        path: "swebhdfs://path/to/database"
        hadoopConfig: hadoop-conf
        kerberosServiceName: verticadb
        kerberosRealm: EXAMPLE.COM
      kerberosSecret: krb5-creds
      ...
    

The previous example defines the following:

  • communal.path: The path to the database, using the wire encryption scheme. Enclose the path in double quotes.

  • communal.hadoopConfig: The ConfigMap storing the contents of the /etc/hadoop directory.

  • communal.kerberosServiceName: The service name for the Vertica principal.

  • communal.kerberosRealm: The realm portion of the principal.

  • kerberosSecret: The Secret containing the krb5.conf and krb5.keytab files.

For a complete definition of each of the previous values, see Custom resource definition parameters.

4 - Creating a custom resource

The custom resource definition (CRD) is a shared global object that extends the Kubernetes API beyond the standard resource types.

The custom resource definition (CRD) is a shared global object that extends the Kubernetes API beyond the standard resource types. The CRD serves as a blueprint for custom resource (CR) instances. You create CRs that specify the desired state of your environment, and the operator monitors the CR to maintain state for the objects within its namespace.

For convenience, this example CR uses a YAML-formatted file. For details about all available CR settings, see custom resource parameters.

Prerequisites

Creating secrets

Use the kubectl command line tool to create Secrets that store sensitive information in your custom resource without exposing the values they represent.

  1. Create a secret named vertica-license for your Vertica license:

    $ kubectl create secret generic vertica-license --from-file=license.dat=/path/to/license.dat
    

    By default, the Helm chart uses the free Community Edition license. This license is limited to 3 nodes and 1 TB of data.

  2. Create a secret named su-passwd to store your superuser password. If you do not add a superuser password, there is no password associated with the database:

    $ kubectl create secret generic su-passwd --from-literal=password=secret-password
    
  3. The following command stores both your S3-compatible communal access and secret key credentials in a Secret named s3-creds:

    $ kubectl create secret generic s3-creds --from-literal=accesskey=accesskey --from-literal=secretkey=secretkey
    
  4. This tutorial configures a certificate authority (CA) bundle that authenticates the S3-compatible connections to your custom resource. Create a Secret named aws-cert:

    $ kubectl create secret generic aws-cert --from-file=root_cert.pem
    
  5. You can mount multiple certificates in the Vertica server filesystem. The following command stores your mTLS certificate in a Secret named mtls:

    $ kubectl create secret generic mtls --from-file=mtls=/path/to/mtls-cert
    
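Before you define the custom resource, you can optionally confirm that all of the Secrets exist in the namespace where you plan to deploy it:

$ kubectl get secrets vertica-license su-passwd s3-creds aws-cert mtls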

Required fields

The VerticaDB definition begins with required fields that describe the version, resource type, and metadata:

apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb

The previous example defines the following:

  • apiVersion: The API group and Kubernetes API version in api-group/version format.

  • kind: The resource type. VerticaDB is the name of the Vertica custom resource type.

  • metadata: Data that identifies objects in the namespace.

    • name: The name of this CR object.

Spec definition

The spec field defines the desired state of the CR. During the control loop, the operator compares the spec values to the current state and reconciles any differences.

The following sections nest values under the spec field to define the desired state of your custom resource object.

Image management

Each custom resource instance requires access to a Vertica server image and instructions on how often to download a new image:

spec:
  image: vertica/vertica-k8s:latest
  imagePullPolicy: Always

The previous example defines the following:

  • image: The image to run in the Vertica server container pod, defined here in docker-registry-hostname/image-name:tag format. For a full list of available Vertica images, see the Vertica Dockerhub registry.

  • imagePullPolicy: Controls when the operator pulls the image from the container registry. When you use the latest tag, set this to Always. The latest tag is overwritten with each new release, so confirm with the image registry that the expected image is in use.

Cluster description values

This section logically groups fields that configure the database and how it operates:

spec:
  ...
  initPolicy: Create
  kSafety: "1"
  licenseSecret: vertica-license
  superuserPasswordSecret: su-passwd

The previous example defines the following:

  • initPolicy: Specifies how to initialize the database. Create initializes a new database for the custom resource.

  • kSafety: Determines the fault tolerance for the subcluster. For a three-pod subcluster, set kSafety to 1.

  • licenseSecret: The Secret that contains your Vertica license key. The license is mounted in the /home/dbadmin/licensing/mnt directory.

  • superuserPasswordSecret: The Secret that contains the database superuser password.

Mounting custom TLS certificates

certSecrets is a list that contains each Secret that you created to encrypt internal and external communications for your CR. Use the name key to add each certificate:

spec:
  ...
  certSecrets:
    - name: mtls
    - name: aws-cert

certSecrets accepts an unlimited number of name values. If you update an existing certificate, the operator replaces the certificate in the Vertica server container. If you add or delete a certificate, the operator reschedules the pod with the new configuration.

Each certSecret is mounted in the Vertica server container in the /certs/certSecrets.name directory. For example, the aws-cert Secret is mounted in the /certs/aws-cert directory.
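
After the pods are running, you can optionally confirm the mount from inside a Vertica server container. The pod name below follows the metadata.name-subclusterName-index convention used elsewhere in this example; the output lists the keys of the Secret, such as root_cert.pem:

$ kubectl exec verticadb-primary-subcluster-0 -- ls /certs/aws-cert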

Configuring communal storage

The following example configures communal storage for an S3 endpoint. For a list of supported communal storage locations, see Containerized environments. For implementation details for each communal storage location, see Configuring communal storage.

Provide the location and credentials for the storage location in the communal section:

spec:
  ...
  communal:
    credentialSecret: s3-creds
    endpoint: https://path/to/s3-endpoint
    path: s3://bucket-name/key-name
    caFile: /certs/aws-cert/root_cert.pem
    region: aws-region

The previous example defines the following:

  • credentialSecret: The Secret that contains your communal access and secret key credentials.

  • endpoint: The S3 endpoint URL.

  • path: The location of the S3 storage bucket, in S3 bucket notation. This bucket must exist before you create the custom resource. After you create the custom resource, you cannot change this value.

  • caFile: Mounts in the server container filesystem the certificate file that validates S3-compatible connections to your custom resource. The CA file is mounted in the same directory as the aws-cert Secret that was added in Mounting Custom TLS Certificates.

  • region: The geographic location of the communal storage resources.

Adding a sidecar container

A sidecar is a utility container that runs in the same pod as the Vertica server container and performs a task for the Vertica server process. For example, you can use the vertica-logger image to add a sidecar that sends logs from vertica.log to standard output on the host node for log aggregation.

sidecars accepts a list of sidecar definitions, where each element defines the following values:

spec:
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest

The previous example defines the following:

  • name: The name of the sidecar. name indicates the beginning of a sidecar element.

  • image: The image for the sidecar container.

A sidecar that shares information with the Vertica server process must persist data between pod life cycles. The following section mounts a custom volume in the sidecar filesystem.

Mounting custom volumes

You might need to mount a custom volume to persist data between pod life cycles if an external service requires long-term access to your Vertica server data.

Use the volumeMounts.* parameters to mount one or more custom volumes. To mount a custom volume for the Vertica server container, add the volumeMounts.* values directly under spec. To mount a custom volume for a sidecar container, nest the volumeMounts.* values in the sidecars array as part of an individual sidecar element definition.

The volumes.* parameters make the custom volume available to the CR to mount in the appropriate container filesystem. Indent volumes to the same level as its corresponding volumeMounts entry. The following example mounts custom volumes for both the Vertica server container and the sidecar utility container:

spec:
  ...
  volumeMounts:
  - name: tenants-vol
    mountPath: /path/to/tenants-vol
  volumes:
    - name: tenants-vol
      persistentVolumeClaim:
        claimName: vertica-pvc
  ...
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
      volumeMounts:
        - name: sidecar-vol
          mountPath: /path/to/sidecar-vol
      volumes:
        - name: sidecar-vol
          emptyDir: {}

The previous example defines the following:

  • volumes: Accepts a list of custom volumes and volume types to persist data for a container.

  • volumes.name: The name of the custom volume that persists data. This value must match the corresponding volumeMounts.name value.

  • persistentVolumeClaim and emptyDir: The volume type and name. The Vertica custom resource accepts any Kubernetes volume type.

Local container information

Each container persists catalog, depot, configuration, and log data in a PersistentVolume (PV). You must provide information about the data and depot locations for operations such as pod rescheduling:

spec:
  ...
  local:
    dataPath: /data
    depotPath: /depot
    requestSize: 500Gi

The previous example defines the following:

  • dataPath: Where the /data directory is mounted in the container filesystem. The /data directory stores the local catalogs and temporary files.

  • depotPath: Where the depot is mounted in the container filesystem. Eon Mode databases cache data locally in a depot to reduce the time it takes to fetch data from communal storage to perform operations.

  • requestSize: The minimum size of local data volume available when binding a PV to the pod.

You must configure a StorageClass to bind the pods to a PersistentVolumeClaim (PVC). For details, see Containerized Vertica on Kubernetes.
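
To see which storage classes your cluster provides, and which one is marked as the default that the PVC uses when local.storageClass is not set, run:

$ kubectl get storageclass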

Shard count

The shardCount setting specifies the number of shards in the database:

spec:
  ...
  shardCount: 12

You cannot change this value after you instantiate the CR. When you change the number of pods in a subcluster, or add or remove a subcluster, the operator rebalances shards automatically.

For guidance on selecting the shard count, see Configuring your Vertica cluster for Eon Mode.

Subcluster definition

The subclusters section is a list of elements, where each element represents a subcluster and its properties. Each CR requires a primary subcluster or it returns an error:

spec:
  ...
  subclusters:
  - isPrimary: true
    name: primary-subcluster
    size: 3

The previous example defines the following:

  • isPrimary: Designates a subcluster as primary or secondary. Each CR requires a primary subcluster or it returns an error. For details, see Subclusters.

  • name: The name of the subcluster.

  • size: The number of pods in the subcluster.

Subcluster service object

Each subcluster communicates with external clients and internal pods through a service object:

spec:
  ...
  subclusters:
    ...
    serviceName: connections
    serviceType: LoadBalancer
    serviceAnnotations:
      service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24

In the previous example:

  • serviceName: Assigns a custom name to the service object so that you can use the same service object for multiple subclusters, if needed.

    Service object names use the metadata.name-serviceName naming convention. This example creates a service object named verticadb-connections.

  • serviceType: Defines the subcluster service object.

    By default, a subcluster uses the ClusterIP serviceType, which sets a stable IP and port that is accessible from within Kubernetes only. In many circumstances, external client applications need to connect to a subcluster that is fine-tuned for that specific workload. For external client access, set the serviceType to NodePort or LoadBalancer.

    The LoadBalancer service type is managed by your cloud provider. For implementation details, refer to the Kubernetes documentation and your cloud provider's documentation.

  • serviceAnnotations: Assigns a custom annotation to the service. This annotation defines the CIDRs that can access the network load balancer (NLB). For additional details, see the AWS Load Balancer Controller documentation.

For details about Vertica and service objects, see Containerized Vertica on Kubernetes.
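
After the operator creates the subcluster, you can optionally confirm the service object's name and type. With the example values above, the object is named verticadb-connections:

$ kubectl get service verticadb-connections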

Pod resource limits and requests

Set the amount of CPU and memory resources each host node allocates for the Vertica server pod, and the amount of resources each pod can request:

spec:
  ...
  subclusters:
    ...
    resources:
      limits:
        cpu: 32
        memory: 96Gi
      requests:
        cpu: 32
        memory: 96Gi

In the previous example:

  • resources: The amount of resources each pod requests from its host node. When you change resource settings, Kubernetes restarts each pod with the updated resource configuration.

  • limits: The maximum amount of CPU and memory that each server pod can consume.

  • requests: The amount of CPU and memory resources that each pod requests from its host node.

    For guidance on setting production limits and requests, see Recommendations for Sizing Vertica Nodes and Clusters.

    As a best practice, set the resource request and limit to equal values so that they are assigned to the guaranteed QoS class. Equal settings also provide the best safeguard against the Out Of Memory (OOM) Killer in constrained environments.
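
To confirm the QoS class that Kubernetes assigned to a pod, check the QoS Class field in the pod description. The pod name below is hypothetical:

$ kubectl describe pod verticadb-primary-subcluster-0 | grep "QoS Class"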

Node affinity

Kubernetes provides affinity and anti-affinity settings to control which resources the operator uses to schedule pods. As a best practice, set affinity to ensure that a single node does not serve two Vertica pods:

spec:
  ...
  subclusters:
    ...
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - vertica
          topologyKey: "kubernetes.io/hostname"

In the previous example:

  • affinity: Provides control over pod and host scheduling using labels.

  • podAntiAffinity: Uses pod labels to prevent scheduling on certain resources.

  • requiredDuringSchedulingIgnoredDuringExecution: The rules defined under this statement must be met before a pod is scheduled on a host node.

  • labelSelector: Identifies the pods affected by this affinity rule.

  • matchExpressions: A list of pod selector requirements that consists of a key, operator, and values definition. This matchExpression rule checks if the host node is running another pod that uses a vertica label.

  • topologyKey: Defines the scope of the rule. Because this uses the hostname topology label, this applies the rule in terms of pods and host nodes.

Complete file reference

As a reference, below is the complete CR YAML file:

apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb
spec:
  image: vertica/vertica-k8s:latest
  imagePullPolicy: Always
  initPolicy: Create
  kSafety: "1"
  licenseSecret: vertica-license
  superuserPasswordSecret: su-passwd
  communal:
    credentialSecret: s3-creds
    endpoint: https://path/to/s3-endpoint
    path: s3://bucket-name/key-name
    caFile: /certs/aws-cert/root_cert.pem
    region: aws-region
  volumeMounts:
  - name: tenants-vol
    mountPath: /path/to/tenants-vol
  volumes:
    - name: tenants-vol
      persistentVolumeClaim:
        claimName: vertica-pvc
  sidecars:
    - name: sidecar-container
      image: sidecar-image:latest
      volumeMounts:
        - name: sidecar-vol
          mountPath: /path/to/sidecar-vol
      volumes:
        - name: sidecar-vol
          emptyDir: {}
  certSecrets:
    - name: mtls
    - name: aws-cert
  local:
    dataPath: /data
    depotPath: /depot
    requestSize: 500Gi
  shardCount: 12
  subclusters:
  - isPrimary: true
    name: primary-subcluster
    size: 3
    serviceName: connections
    serviceType: LoadBalancer
    serviceAnnotations:
      service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24
    resources:
      limits:
        cpu: 32
        memory: 96Gi
      requests:
        cpu: 32
        memory: 96Gi
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - vertica
          topologyKey: "kubernetes.io/hostname"
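
To deploy the example, save the manifest to a file and apply it, then watch the pods start. The file name vdb.yaml is arbitrary:

$ kubectl apply -f vdb.yaml
$ kubectl get pods --watch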

5 - Custom resource definition parameters

The following table describes the available settings for the Vertica Custom Resource Definition.

The following table describes the available settings for the Vertica Custom Resource Definition.

Parameter Description
annotations

Custom annotations added to all of the objects that the operator creates. Each annotation is encoded as an environment variable in the Vertica server container. The following values are accepted:

  • Letters

  • Numbers

  • Underscores

Invalid character values are converted to underscore characters. For example:

vertica.com/git-ref: 1234abcd

Is converted to:

VERTICA_COM_GIT_REF=1234abcd

autoRestartVertica

Determines if the operator restarts the Vertica process when the process is not running.

Set this parameter to false when performing manual maintenance that requires a DOWN database. This prevents the operator from interfering with the database state.

Default: true

certSecrets

A list of Secrets for custom TLS certificates.

Each certificate is mounted in the container at /certs/cert-name/key. For example, a PEM-encoded CA bundle named root_cert.pem and contained in a Secret named aws-cert is mounted in /certs/aws-cert/root_cert.pem.

If you update the certificate after you add it to a custom resource, the operator updates the value automatically. If you add or delete a certificate, the operator reschedules the pod with the new configuration.

For implementation details, see Creating a custom resource.

communal.caFile

The mount path in the container filesystem to a CA certificate file that validates HTTPS connections to a communal storage endpoint.

Typically, the certificate is stored in a Secret and included in certSecrets. For details, see Creating a custom resource.

communal.credentialSecret

The name of the Secret that stores the credentials for the communal storage endpoint.

For implementation details for each supported communal storage location, see Configuring communal storage.

communal.endpoint

A communal storage endpoint URL. The endpoint must begin with either the http:// or https:// protocol. For example:

https://path/to/endpoint

You cannot change this value after you create the custom resource instance.

This setting is required when initPolicy is set to Create or Revive.

communal.includeUIDInPath

When set to true, the operator includes in the path the unique identifier (UID) that Kubernetes assigns to the VerticaDB object. Including the UID creates a unique database path so that you can reuse the communal path in the same endpoint.

Default: false

communal.kerberosRealm The realm portion of the Vertica Kerberos principal. This value is set in the KerberosRealm database parameter during bootstrapping.
communal.kerberosServiceName The service name portion of the Vertica Kerberos principal. This value is set in the KerberosServiceName database parameter during bootstrapping.
communal.path

The path to the communal storage bucket. For example:

s3://bucket-name/key-name

You must create this bucket before you create the Vertica database.

The following initPolicy values determine how to set this value:

  • Create: The path must be empty.

  • Revive: The path cannot be empty.

You cannot change this value after you create the custom resource.

communal.region

The geographic location where the communal storage resources are located.

If you do not set the correct region, the configuration fails. You might experience a delay because Vertica retries several times before failing.

This setting is valid for Amazon Web Services (AWS) and Google Cloud Platform (GCP) only. Vertica ignores this setting for other communal storage providers.

Default:

  • AWS: us-east-1

  • GCP: US-EAST1

dbName

The database name. When initPolicy is set to Revive or ScheduleOnly, this must match the name of the source database.

Default: vertdb

ignoreClusterLease

Ignore the cluster lease when executing a revive or start_db.

Default: false

image

The image that defines the Vertica server container's runtime environment. If the container is hosted in a private container repository, this name must include the path to the repository.

When you update the image, the operator stops and restarts the cluster.

Default: vertica/vertica-k8s:latest

imagePullPolicy

Determines how often Kubernetes pulls the image for an object. For details, see Updating Images in the Kubernetes documentation.

Default: If the image tag is latest, the default is Always. Otherwise, the default is IfNotPresent.

imagePullSecrets A list of Secrets that store credentials for authentication to a private container repository. For details, see Specifying imagePullSecrets in the Kubernetes documentation.
initPolicy

How to initialize the Vertica database in Kubernetes. Enter Create or Revive.

kerberosSecret

The Secret that stores the following values for Kerberos authentication to Hadoop Distributed File System (HDFS):

  • krb5.conf: Contains Kerberos configuration information.

  • krb5.keytab: Contains credentials for the Vertica Kerberos principal. This file must be readable by the user that runs the Vertica process.

The default location for each of these files is the /etc directory.

kSafety

Sets the fault tolerance for the cluster. The operator supports setting this value to 0 or 1 only. For details, see K-safety.

You cannot change this value after you create the custom resource.

Default: 1

labels Custom labels added to all of the objects that the operator creates.
licenseSecret

The Secret that contains the contents of license files. The Secret must share a namespace with the custom resource (CR). Each of the keys in the Secret is mounted as a file in /home/dbadmin/licensing/mnt.

If this value is set when the CR is created, the operator installs one of the licenses automatically, choosing the first one alphabetically.

If you update this value after you create the custom resource, you must manually install the Secret in each Vertica pod.

local.dataPath

The path in the container filesystem for the local data, such as the catalog.

If initPolicy is set to Revive or ScheduleOnly, the dataPath for the new database must match the dataPath for the source database.

Default: /data

local.depotPath

The path in the container filesystem that stores the depot.

If initPolicy is set to Revive or ScheduleOnly, the depotPath for the new database must match the depotPath for the source database.

Default: /depot

local.requestSize

The minimum size of the local data volume when selecting a persistent volume (PV).

Default: 500Gi

local.storageClass

The name of the StorageClass used for the local data volume that stores the local catalog, depot, and configuration files. Select this value when defining the persistent volume claim (PVC).

By default, this parameter is not set. The PVC in the default configuration uses the default storage class set by Kubernetes.

reviveOrder

The order of nodes during a revive operation. Each entry contains the subcluster index, and the number of pods to include from the subcluster.

For example, consider a database with the following setup:

- v_db_node0001: subcluster A
- v_db_node0002: subcluster A
- v_db_node0003: subcluster B
- v_db_node0004: subcluster A
- v_db_node0005: subcluster B
- v_db_node0006: subcluster B

If the subclusters[] list is defined as {'A', 'B'}, the revive order is as follows:

- {subclusterIndex:0, podCount:2} # 2 pods from subcluster A
- {subclusterIndex:1, podCount:1} # 1 pod from subcluster B
- {subclusterIndex:0, podCount:1} # 1 pod from subcluster A
- {subclusterIndex:1, podCount:2} # 2 pods from subcluster B

This parameter is used only when initPolicy is set to Revive.

restartTimeout

When restarting pods, the number of seconds before admintools times out.

Default: 0, which causes the operator to use the admintools default of 20 minutes.

shardCount

The number of shards in the database. You cannot update this value after you create the custom resource.

For more information about database shards and Eon Mode, see Configuring your Vertica cluster for Eon Mode.

sidecars[]

One or more optional utility containers that complete tasks for the Vertica server container. Each sidecar entry is a fully-formed container spec, similar to the container that you add to a Pod spec.

The following example adds a sidecar named vlogger to the custom resource:

spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:1.0.0
      volumeMounts:
        - name: my-custom-vol
          mountPath: /path/to/custom-volume

volumeMounts.name is the name of a custom volume. This value must match volumes.name to mount the custom volume in the sidecar container filesystem. See volumes for additional details.

For implementation details, see Creating a custom resource.

sidecars[i].volumeMounts

List of custom volumes and mount paths that persist sidecar container data. Each volume element requires a name value and a mountPath.

To mount a volume in the Vertica sidecar container filesystem, volumeMounts.name must match the volumes.name value for the corresponding sidecar definition, or the webhook returns an error.

For implementation details, see Creating a custom resource.

sidecars[i].volumes

List of custom volumes that persist sidecar container data. Each volume element requires a name value and a volume type. volumes accepts any Kubernetes volume type.

To mount a volume in a sidecar filesystem, volumes.name must match the volumeMounts.name value for the corresponding sidecar element volume mount, or the webhook returns an error.

For implementation details, see Creating a custom resource.

sshSecret

A Secret that contains SSH credentials that authenticate connections to a Vertica server container. For example, these credentials authenticate communication between an Eon Mode database and custom resource in a hybrid architecture.

The Secret requires the following values:

  • id_rsa

  • id_rsa.pub

  • authorized_keys

For details, see Hybrid Kubernetes clusters.

subclusters[i].affinity

Applies rules that constrain the Vertica server pod to specific nodes. It is more expressive than nodeSelector. If this parameter is not set, then the pods use no affinity setting.

In production settings, it is a best practice to configure affinity to run one server pod per host node. For configuration details, see Creating a custom resource.

subclusters[i].externalIPs Enables the service object to attach to a specified external IP. If not set, the external IP is empty in the service object.
subclusters[i].isPrimary

Indicates whether the subcluster is primary or secondary. Each database must have at least one primary subcluster.

Default: true

subclusters[i].loadBalancerIP

When subclusters[i].serviceType is set to LoadBalancer, assigns a static IP to the load balancing service.

Default: Empty string ("")

subclusters[i].name

The subcluster name. This is a required setting. If you change the name of an existing subcluster, the operator deletes the old subcluster and creates a new one with the new name.

subclusters[i].nodePort

When subclusters[i].serviceType is set to NodePort, this parameter enables you to define the port that is opened at each node for external client connections. The port must be within the defined range allocated by the control plane (ports 30000-32767).

If you do not manually define a port number, Kubernetes chooses the port automatically.

subclusters[i].nodeSelector

Provides control over which nodes are used to schedule each pod. If this is not set, the node selector is left off the pod when it is created. To set this parameter, provide a list of key/value pairs.

The following example schedules server pods only at nodes that have the disktype=ssd and region=us-east labels:

subclusters:
  - name: defaultsubcluster
    nodeSelector:
      disktype: ssd
      region: us-east
subclusters[i].priorityClassName The PriorityClass name assigned to pods in the StatefulSet. This affects where the pod gets scheduled.
subclusters[i].resources.limits

The resource limits for pods in the StatefulSet, which sets the maximum amount of CPU and memory that each server pod can consume.

Vertica recommends that you set these values equal to subclusters[i].resources.requests to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out of memory (OOM) Killer.

For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.

subclusters[i].resources.requests

The resource requests for pods in the StatefulSet, which sets the amount of CPU and memory that each server pod requests from its host node.

Vertica recommends that you set these values equal to subclusters[i].resources.limits to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out of memory (OOM) Killer.

For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.

subclusters[i].serviceAnnotations Custom annotations added to implementation-specific services. Managed Kubernetes offerings use service annotations to configure services such as network load balancers, virtual private cloud (VPC) subnets, and loggers.
subclusters[i].serviceName

Identifies the service object that directs client traffic to the subcluster. Assign a single service object to multiple subclusters to process client data with one or more subclusters. For example:

spec:
  ...
  subclusters:
    - name: subcluster-1
      size: 3
      serviceName: connections
    - name: subcluster-2
      size: 3
      serviceName: connections

The previous example creates a service object named metadata.name-connections that load balances client traffic among its assigned subclusters.

For implementation details, see Creating a custom resource.

subclusters[i].serviceType

Identifies the type of Kubernetes service to use for external client connectivity. The default type is ClusterIP, which sets a stable IP and port that is accessible only from within Kubernetes itself.

Depending on the service type, you might need to set nodePort or externalIPs in addition to this configuration parameter.

Default: ClusterIP

subclusters[i].size

The number of pods in the subcluster. This determines the number of Vertica nodes in the subcluster. Changing this number deletes or schedules new pods.

The minimum size of a subcluster is 1. The subclusters kSafety setting determines the minimum and maximum size of the cluster.

subclusters[i].tolerations Any taints and tolerations used to influence where a pod is scheduled.
superuserPasswordSecret

The Secret that contains the database superuser password. Create this Secret before deployment.

If you do not create this Secret before deployment, there is no password authentication for the database.

The Secret must use a key named password:

kubectl create secret generic su-passwd --from-literal=password=secret-password

The following text adds this Secret to the custom resource:

spec:
  ...
  superuserPasswordSecret: su-passwd
temporarySubclusterRouting.names

Specifies an existing subcluster that accepts traffic during an online upgrade. The operator routes traffic to the first subcluster that is online. For example:

spec:
  ...
  temporarySubclusterRouting:
    names:
      - subcluster-2
      - subcluster-1

In the previous example, the operator selects subcluster-2 during the upgrade, and then routes traffic to subcluster-1 when subcluster-2 is down. As a best practice, use secondary subclusters when rerouting traffic.

temporarySubclusterRouting.template

Instructs the operator to create a new secondary subcluster during an online upgrade. The operator creates the subcluster when the upgrade begins and deletes it when the upgrade completes.

To define a temporary subcluster, provide a name and size value. For example:

spec:
  ...
  temporarySubclusterRouting:
    template:
      name: transient
      size: 1
upgradePolicy

Determines how the operator upgrades Vertica server versions. Accepts the following values:

  • Offline: The operator stops the cluster to prevent multiple versions from running simultaneously.

  • Online: The cluster continues to operate during a rolling update. The data is in read-only mode while the operator upgrades the image for the primary subcluster.

    The Online setting has the following restrictions:

    • The cluster must currently run Vertica server version 11.1.0 or higher.

    • If you have only one subcluster, you must configure temporarySubclusterRouting.template to create a new secondary subcluster during the Online upgrade. Otherwise, the operator performs an Offline upgrade, regardless of the setting.

  • Auto: The operator selects either Offline or Online depending on the configuration. The operator selects Online if all of the following are true:

    • A license Secret exists.

    • K-Safety is 1.

    • The cluster is currently running Vertica version 11.1.0 or higher.

Default: Auto

upgradeRequeueTime

During an online upgrade, the number of seconds that the operator waits to complete work for any resource that was requeued during the reconciliation loop.

Default: 30 seconds

volumeMounts

List of custom volumes and mount paths that persist Vertica server container data. Each volume element requires a name value and a mountPath.

To mount a volume in the Vertica server container filesystem, volumeMounts.name must match the volumes.name value defined in the spec definition, or the webhook returns an error.

For implementation details, see Creating a custom resource.

volumes

List of custom volumes that persist Vertica server container data. Each volume element requires a name value and a volume type. volumes accepts any Kubernetes volume type.

To mount a volume in the filesystem, volumes.name must match the volumeMounts.name value for the corresponding volume mount, or the webhook returns an error.

For implementation details, see Creating a custom resource.

6 - Subclusters on Kubernetes

Eon Mode uses subclusters for workload isolation and scaling.

Eon Mode uses subclusters for workload isolation and scaling. The Vertica operator provides tools to direct external client communications to specific subclusters, and automate scaling without stopping your database.

The custom resource definition (CRD) provides parameters that allow you to fine-tune each subcluster for specific workloads. For example, you can increase the subcluster size setting for increased throughput, or adjust the resource requests and limits to manage compute power. When you create a custom resource instance, the operator deploys each subcluster as a StatefulSet. Each StatefulSet has a service object, which allows an external client to connect to a specific subcluster.

Kubernetes uses the subcluster name to derive names for the subcluster StatefulSet, service object, and pods. This naming convention tightly couples the subcluster objects to help Kubernetes effectively manage the cluster. If you want to rename a subcluster, you must delete it from the CRD and redefine it so that the operator can create new objects with a derived name.

External client connections

External clients can target specific subclusters that are fine-tuned to handle their workload. Each subcluster has a service object that handles external connections. To target multiple subclusters with a single service object, assign each subcluster the same spec.subclusters.serviceName value in the custom resource (CR). For implementation details, see Creating a custom resource.

The operator performs health monitoring that checks if the Vertica daemon is running on each pod. If it is, then the operator allows the service object to route traffic to the pod.

By default, the service object derives its name from the custom resource name and the associated subcluster and uses the customResourceName-subclusterName format. Use the subclusters[i].serviceName CR parameter to override the default naming format and use the metadata.name-serviceName format.

Vertica supports the following service object types:

  • ClusterIP: The default service type. This service provides internal load balancing, and sets a stable IP and port that is accessible from within the Kubernetes cluster only.

  • NodePort: Provides external client access. You can specify a port number for each host node in the subcluster to open for client connections.

  • LoadBalancer: Uses a cloud provider load balancer to create NodePort and ClusterIP services as needed. For details about implementation, see the Kubernetes documentation and your cloud provider documentation.

For configuration details, see Creating a custom resource.

Managing internal and external workloads

The Vertica StatefulSet is associated with an external service object. All external client requests are sent through this service object and load balanced among the pods in the cluster.

Import and export

Importing and exporting data between a cluster outside of Kubernetes requires that you expose the service with the NodePort or LoadBalancer service type and properly configure the network.

6.1 - Scaling subclusters

The operator enables you to scale the number of subclusters, and the number of pods per subcluster automatically.

The operator enables you to scale the number of subclusters, and the number of pods per subcluster automatically. This allows you to utilize or conserve resources depending on the immediate needs of your workload.

The following sections explain how to scale resources for new workloads. For details about scaling resources for existing workloads, see VerticaAutoscaler custom resource.

Prerequisites

Scaling the number of subclusters

Adjust the number of subclusters in your custom resource to fine-tune resources for short-running dashboard queries. For example, increase the number of subclusters to increase throughput. For more information, see Improving query throughput using subclusters.

  1. Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named vdb for editing:

    $ kubectl edit vdb
    
  2. In the spec section of the custom resource, locate the subclusters subsection. Add a new list element, beginning with the isPrimary field, to define a new subcluster.

    The isPrimary field accepts a boolean that specifies whether the subcluster is a primary or secondary. Because there is already a primary subcluster in our custom resource, enter false:

    spec:
    ...
      subclusters:
      ...
      - isPrimary: false
    
  3. Follow the steps in Creating a custom resource to complete the subcluster definition. The following completed example adds a secondary subcluster for dashboard queries:

    spec:
    ...
      subclusters:
      - isPrimary: true
        name: primary-subcluster
      ...
      - isPrimary: false
        name: dashboard
        nodePort: 32001
        resources:
          limits:
            cpu: 32
            memory: 96Gi
          requests:
            cpu: 32
            memory: 96Gi
        serviceType: NodePort
        size: 3
    
  4. Save and close the custom resource file. You receive a message similar to the following when you successfully update the file:

    verticadb.vertica.com/vertica-db edited

  5. Use the kubectl wait command to monitor when the new pods are ready:

    $ kubectl wait --for=condition=Ready pod --selector app.kubernetes.io/name=vertica-db --timeout 180s
    pod/vdb-dashboard-0 condition met
    pod/vdb-dashboard-1 condition met
    pod/vdb-dashboard-2 condition met
    

Scaling the pods in a subcluster

For long-running, analytic queries, increase the pod count for a subcluster. See Using elastic crunch scaling to improve query performance.

  1. Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named vdb for editing:

    $ kubectl edit vdb
    
  2. Update the subclusters.size value to 6:

    spec:
    ...
      subclusters:
      ...
      - isPrimary: false
        ...
        size: 6
    

    Shards are rebalanced automatically.

  3. Save and close the custom resource file. You receive a message similar to the following when you successfully update the file:

    verticadb.vertica.com/vertica-db edited

  4. Use the kubectl wait command to monitor when the new pods are ready:

    $ kubectl wait --for=condition=Ready pod --selector app.kubernetes.io/name=vertica-db --timeout 180s
    pod/vdb-subcluster1-3 condition met
    pod/vdb-subcluster1-4 condition met
    pod/vdb-subcluster1-5 condition met
    

Removing a subcluster

Remove a subcluster when it is no longer needed, or to preserve resources.

  1. Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named vdb for editing:

    $ kubectl edit vdb
    
  2. In the subclusters subsection nested under spec, locate the subcluster that you want to delete. Delete the element in the subclusters array that represents that subcluster. Each element is identified by a hyphen (-).

  3. After you delete the subcluster and save, you receive a message similar to the following:

    verticadb.vertica.com/vertica-db edited

6.2 - VerticaAutoscaler custom resource

The VerticaAutoscaler custom resource (CR) is a HorizontalPodAutoscaler that automatically scales resources for existing subclusters using one of the following strategies.

The VerticaAutoscaler custom resource (CR) is a HorizontalPodAutoscaler that automatically scales resources for existing subclusters using one of the following strategies:

  • Subcluster scaling for short-running dashboard queries.

  • Pod scaling for long-running analytic queries.

The VerticaAutoscaler CR scales using resource or custom metrics. Vertica manages subclusters by workload, which helps you pinpoint the best metrics to trigger a scaling event. To maintain data integrity, the operator does not scale down unless all connections to the pods are drained and sessions are closed.

For details about the algorithm that determines when the VerticaAutoscaler scales, see the Kubernetes documentation.

Additionally, the VerticaAutoscaler provides a webhook to validate state changes. By default, this webhook is enabled. You can configure this webhook with the webhook.enable Helm chart parameter.
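
For example, the following command redeploys the operator Helm chart with the webhook disabled. The release and namespace names are placeholders:

$ helm upgrade operator-name vertica-charts/verticadb-operator \
    --namespace namespace \
    --set webhook.enable=false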

Parameters

Parameter Description
verticaDBName Required. Name of the VerticaDB CR that the VerticaAutoscaler CR scales resources for.
scalingGranularity

Required. The scaling strategy. This parameter accepts one of the following values:

  • Subcluster: Create or delete entire subclusters. To create a new subcluster, the operator uses a template or an existing subcluster with the same serviceName.

  • Pod: Increase or decrease the size of an existing subcluster.

Default: Subcluster

serviceName

Required. Refers to the subclusters[i].serviceName for the VerticaDB CR.

VerticaAutoscaler uses this value as a selector when scaling subclusters together.

template

When scalingGranularity is set to Subcluster, you can use this parameter to define how VerticaAutoscaler scales the new subcluster. The following is an example:

spec:
    verticaDBName: dbname
    scalingGranularity: Subcluster
    serviceName: service-name
    template:
        name: autoscaler-name
        size: 2
        serviceName: service-name
        isPrimary: false

If you set template.size to 0, VerticaAutoscaler selects as a template an existing subcluster that uses service-name.

This setting is ignored when scalingGranularity is set to Pod.

Examples

The examples in this section use the following VerticaDB custom resource. Each example uses CPU to trigger scaling:

apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: dbname
spec:
  communal:
    path: "path/to/communal-storage"
    endpoint: "path/to/communal-endpoint"
    credentialSecret: credentials-secret
  subclusters:
    - name: primary1
      size: 3
      isPrimary: true
      serviceName: primary1
      resources:
        limits:
          cpu: "8"
        requests:
          cpu: "4"

Prerequisites

  • Set a value for the metric that triggers scaling. For example, if you want to scale by CPU utilization, you must set CPU limits and requests.

Subcluster scaling

Automatically adjust the number of subclusters in your custom resource to fine-tune resources for short-running dashboard queries. For example, increase the number of subclusters to increase throughput. For more information, see Improving query throughput using subclusters.

All subclusters share the same service object, so there are no required changes to external service objects. Pods in the new subcluster are load balanced by the existing service object.

The following example creates a VerticaAutoscaler custom resource that scales by subcluster when the VerticaDB uses 50% of the node's available CPU:

  1. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest:

    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    metadata:
      name: autoscaler-name
    spec:
      verticaDBName: dbname
      scalingGranularity: Subcluster
      serviceName: primary1
    
  2. Create the VerticaAutoscaler with the kubectl autoscale command:

    $ kubectl autoscale verticaautoscaler autoscaler-name --cpu-percent=50 --min=3 --max=12
    

    The previous command creates a HorizontalPodAutoscaler object that:

    • Sets the target CPU utilization to 50%.

    • Scales to a minimum of three pods in one subcluster, and a maximum of 12 pods across four subclusters.
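
To confirm that the HorizontalPodAutoscaler exists and is tracking the CPU metric before any scaling event occurs, you can list it by name:

$ kubectl get hpa autoscaler-name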

Pod scaling

For long-running, analytic queries, increase the pod count for a subcluster. For additional information about Vertica and analytic queries, see Using elastic crunch scaling to improve query performance.

When you scale pods in an Eon Mode database, you must consider the impact on database shards. For details, see Shards and subscriptions.

The following example creates a VerticaAutoscaler custom resource that scales by pod when the VerticaDB uses 50% of the node's available CPU:

  1. Define the VerticaAutoScaler custom resource in a YAML-formatted manifest:

    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    metadata:
      name: autoscaler-name
    spec:
      verticaDBName: dbname
      scalingGranularity: Pod
      serviceName: primary1
    
  2. Create the autoscaler instance with the kubectl autoscale command:

    $ kubectl autoscale verticaautoscaler autoscaler-name --cpu-percent=50 --min=3 --max=12
    

    The previous command creates a HorizontalPodAutoscaler object that:

    • Sets the target CPU utilization to 50%.

    • Scales to a minimum of three pods and a maximum of 12 pods in one subcluster.

Event monitoring

To view the VerticaAutoscaler object, use the kubectl describe hpa command:

$ kubectl describe hpa autoscaler-name
Name:                                                  as
Namespace:                                             vertica
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Tue, 12 Apr 2022 15:11:28 -0300
Reference:                                             VerticaAutoscaler/as
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (9m) / 50%
Min replicas:                                          3
Max replicas:                                          12
VerticaAutoscaler pods:                                3 current / 3 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range

When a scaling event occurs, you can view the admintools commands that the operator runs to scale the cluster. Use kubectl to view the StatefulSets:

$ kubectl get statefulsets
NAME                                                   READY   AGE
db-name-as-instance-name-0                             0/3     71s
db-name-primary1                                       3/3     39m

Use kubectl describe to view the executing commands:

$ kubectl describe vdb dbname | tail
  Upgrade Status:
Events:
  Type    Reason                   Age   From                Message
  ----    ------                   ----  ----                -------
  Normal  ReviveDBStart            41m   verticadb-operator  Calling 'admintools -t revive_db'
  Normal  ReviveDBSucceeded        40m   verticadb-operator  Successfully revived database. It took 25.255683916s
  Normal  ClusterRestartStarted    40m   verticadb-operator  Calling 'admintools -t start_db' to restart the cluster
  Normal  ClusterRestartSucceeded  39m   verticadb-operator  Successfully called 'admintools -t start_db' and it took 44.713787718s
  Normal  SubclusterAdded          10s   verticadb-operator  Added new subcluster 'as-0'
  Normal  AddNodeStart             9s    verticadb-operator  Calling 'admintools -t db_add_node' for pod(s) 'db-name-as-instance-name-0-0, db-name-as-instance-name-0-1, db-name-as-instance-name-0-2'
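
To follow these events as they occur instead of querying the VerticaDB afterward, you can watch the event stream filtered to VerticaDB objects:

$ kubectl get events --field-selector involvedObject.kind=VerticaDB --watch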

7 - Hybrid Kubernetes clusters

An Eon Mode database can run with some hosts outside of Kubernetes and other hosts within Kubernetes.

An Eon Mode database can run with some hosts outside of Kubernetes and other hosts within Kubernetes. This architecture is useful in scenarios where you want to:

  • Leverage Kubernetes tooling to quickly create a secondary subcluster for a database.

  • Create an isolated sandbox environment to run ad hoc queries on a communal dataset.

  • Experiment with the Vertica on Kubernetes performance overhead without migrating your primary subcluster into Kubernetes.

Define the Kubernetes portion of a hybrid architecture with a custom resource. The custom resource has no knowledge of Vertica hosts that exist separately from the custom resource. This limits the operator's functionality and requires that you manually complete some tasks that the operator automates for a standard Vertica on Kubernetes custom resource.

Requirements and restrictions

The hybrid Kubernetes architecture has the following requirements and restrictions:

  • Hybrid Kubernetes clusters require a tool that enables Border Gateway Protocol (BGP) so that pods are accessible to your on-premises subcluster for external communication. For example, you can use the Calico CNI plugin to enable BGP.

  • You cannot use network address translation (NAT) between the Kubernetes pods and the on-premises cluster.
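
Before you deploy the custom resource, it can help to confirm that pod IP addresses are routable from the on-premises hosts without NAT. The following is only a basic reachability check; <pod-ip> is a placeholder for an address reported by kubectl:

$ kubectl get pods -o wide        # note the pod IP addresses
$ ping <pod-ip>                   # run from an on-premises Vertica host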

Operator limitations

In a hybrid architecture, the operator has no visibility outside of the custom resource. This limited visibility means that the operator cannot interact with the Eon Mode database or the primary subcluster. Within the scope of the custom resource, the operator automates only the following:

  • Schedules pods based on the manifest.

  • Creates service objects for the subcluster.

  • Creates a PersistentVolumeClaim (PVC) that persists data for each pod.

  • Executes the restart_node administration tool command if the Vertica server process is not running. To override this default behavior, set autoRestartVertica to false.
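
For example, to disable the automatic restart behavior described in the last item, set autoRestartVertica in the custom resource spec. A minimal sketch of the relevant portion of the manifest:

spec:
  autoRestartVertica: false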

Defining a hybrid cluster

Before you define a hybrid cluster, you must create a Secret to store SSH credentials. In an Eon Mode database, nodes communicate through SSH. The Vertica container uses SSH, but the SSH key is regenerated each time a container is built.

The following command creates a Secret named ssh-keys that stores SSH credentials. These credentials persist across container life cycles to allow secure connections between the on-premises nodes and the pods in the CR:

$ kubectl create secret generic ssh-keys --from-file=$HOME/.ssh

Create a custom resource to define a subcluster that runs outside your standard Eon Mode database:

apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: hybrid-secondary-sc
spec:
  image: vertica/vertica-k8s:latest
  initPolicy: ScheduleOnly
  sshSecret: ssh-keys
  local:
    dataPath: /data
    depotPath: /depot
  dbName: vertdb
  subclusters:
    - name: sc1
      size: 3
    - name: sc2
      size: 3

In the previous example:

  • initPolicy: Hybrid clusters require that you set this to ScheduleOnly.

  • sshSecret: The Secret that contains SSH keys that authenticate connections to Vertica hosts outside the CR.

  • local: Required. The values persist data to the PersistentVolume (PV). These values must match the directory locations in the Eon Mode database that is associated with the Kubernetes pods.

  • dbName: This value must match the name of the standard Eon Mode database that is associated with this subcluster.

  • subclusters: Definition for each subcluster.

For complete implementation details, see Creating a custom resource. For details about each setting, see Custom resource definition parameters.

Manual tasks

Because of the limited operator functionality, the administrator must manually perform the following tasks:

  • Restart the cluster if quorum is lost. For details about maintaining quorum, see Data integrity and high availability in an Eon Mode database.

  • Execute the update_vertica script to set up the configuration directory. Vertica on Kubernetes requires the following configuration options for update_vertica:

    $ /opt/vertica/sbin/update_vertica \
        --accept-eula \
        --add-hosts host-list \
        --dba-user-password dba-user-password \
        --failure-threshold NONE \
        --no-system-configuration \
        --point-to-point \
        --data-dir /data-dir \
        --dba-user dbadmin \
        --no-package-checks
    

    After you call update_vertica, use admintools with the db_add_node option to add the nodes and complete the setup:

    $ /opt/vertica/bin/admintools \
        -t db_add_node \
        --hosts host-list \
        --database db-name \
        --subcluster sc-name \
        --noprompt
    

    For details, see Adding and removing nodes from subclusters.

8 - Generating a custom resource from an existing Eon Mode database

To simplify Vertica on Kubernetes adoption, Vertica provides the vdb-gen migration tool that revives an existing Eon Mode database as a StatefulSet in Kubernetes.

To simplify Vertica on Kubernetes adoption, Vertica provides the vdb-gen migration tool that revives an existing Eon Mode database as a StatefulSet in Kubernetes. vdb-gen generates a custom resource (CR) from an existing Eon Mode database by connecting to the database and writing to standard output.

The vdb-gen tool is available for download as a release artifact in the vertica-kubernetes GitHub repository.

Use the -h flag to view a full list of the available vdb-gen options, including options for debugging and working with environment variables. The following steps generate a CR using basic commands:

  1. Execute vdb-gen and redirect the output to a YAML-formatted file:

    $ vdb-gen --password secret --name mydb 10.20.30.40 vertdb > vdb.yaml
    

    The previous command uses the following flags and values:

    • password: The superuser password for the existing database (secret in this example).

    • name: The name of the new custom resource object.

    • 10.20.30.40: The IP address of the existing database.

    • vertdb: The name of the existing Eon Mode database.

    • vdb.yaml: The YAML-formatted file that contains the custom resource definition generated by the vdb-gen tool.

  2. Use the admintools stop_db command to stop the existing database:

    $ /opt/vertica/bin/admintools -t stop_db -d vertdb
    

    Wait for the cluster lease to expire before continuing. For details, see Reviving an Eon Mode database cluster.

  3. Apply the YAML-formatted manifest that was generated by the vdb-gen tool:

    $ kubectl apply -f vdb.yaml
    verticadb.vertica.com/mydb created
    
  4. The operator creates the StatefulSet, installs Vertica on each pod, and runs revive. To view the events generated for the new database, use kubectl describe:

    $ kubectl describe vdb mydb
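
    The revive can take several minutes. To check overall progress, you can also poll the custom resource; the UP column increases as nodes come online:

    $ kubectl get vdb mydb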
    

9 - Troubleshooting your Kubernetes cluster

These tips can help you avoid issues related to your Vertica on Kubernetes deployment and troubleshoot any problems that occur.

These tips can help you avoid issues related to your Vertica on Kubernetes deployment and troubleshoot any problems that occur.

Download the kubectl command line tool to debug your Kubernetes resources.

Helm install failure

When you install the VerticaDB operator and admission controller Helm chart, the helm install command might return the following error:

$ helm install vdb-op vertica-charts/verticadb-operator
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Certificate" in version "cert-manager.io/v1", unable to recognize "": no matches for kind "Issuer" in version "cert-manager.io/v1"]

The error indicates that you have not met the TLS prerequisite for the admission controller webhook. To resolve this issue, install cert-manager or configure custom certificates. The following steps install cert-manager.

  1. Install the cert-manager YAML manifest:

    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
    
  2. Verify the cert-manager installation.

    If you try to install the Helm chart immediately after you install cert-manager, you might receive the following error:

    $ helm install vdb-op vertica-charts/verticadb-operator
    Error: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": dial tcp 10.96.232.154:443: connect: connection refused
    

    You receive this error because cert-manager needs time to create its pods and register the webhook with the cluster. Wait a few minutes, and then verify the cert-manager installation with the following command:

    $ kubectl get pods --namespace cert-manager
    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-7dd5854bb4-skks7              1/1     Running   5          12d
    cert-manager-cainjector-64c949654c-9nm2z   1/1     Running   5          12d
    cert-manager-webhook-6bdffc7c9d-b7r2p      1/1     Running   5          12d
    

    For additional details about cert-manager install verification, see the cert-manager documentation.

  3. After you verify the cert-manager installation, you must uninstall the Helm chart and then reinstall:

    $ helm uninstall vdb-op
    $ helm install vdb-op vertica-charts/verticadb-operator
    

For additional information, see Installing the Vertica DB operator.

Custom certificate helm install error

If you use custom certificates when you install the operator with the Helm chart, the helm install or kubectl apply command might return an error similar to the following:

$ kubectl apply -f ../operatorcrd.yaml
Error from server (InternalError): error when creating "../operatorcrd.yaml": Internal error occurred: failed calling webhook "mverticadb.kb.io": Post "https://verticadb-operator-webhook-service.namespace.svc:443/mutate-vertica-com-v1beta1-verticadb?timeout=10s": x509: certificate is valid for ip-10-0-21-169.ec2.internal, test-bastion, not verticadb-operator-webhook-service.default.svc

You receive this error when the DNS names or Subject Alternative Name (SAN) entries in the TLS certificate are incorrect. To correct this error, define the DNS names and SAN in a configuration file in the following format:

commonName = verticadb-operator-webhook-service.namespace.svc
...
[alt_names]
DNS.1 = verticadb-operator-webhook-service.namespace.svc
DNS.2 = verticadb-operator-webhook-service.namespace.svc.cluster.local
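
After you regenerate the certificate with the corrected configuration, you can verify that its SAN entries match the webhook service name before you retry the installation. This check is a sketch that assumes the certificate file is named tls.crt:

$ openssl x509 -in tls.crt -noout -text | grep -A1 'Subject Alternative Name'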

For additional details, see Installing the Vertica DB operator.

Verify updates to a custom resource

Because the operator takes time to perform tasks, updates to the custom resource are not effective immediately. Use the kubectl command line tool to verify that changes are applied.

You can use the kubectl wait command to wait for a specified condition. For example, the operator uses the ImageChangeInProgress condition to provide an upgrade status. After you begin the image version upgrade, wait until the operator acknowledges the upgrade and sets this condition to True:

$ kubectl wait --for=condition=ImageChangeInProgress=True vdb/cluster-name --timeout=180s

After the upgrade begins, you can wait until the operator leaves upgrade mode and sets this condition to False:

$ kubectl wait --for=condition=ImageChangeInProgress=False vdb/cluster-name --timeout=800s

For more information about kubectl wait, see the kubectl reference documentation.

Pods are running but the database is not ready

When you check the pods in your cluster, the pods are running but the database is not ready:

$ kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
vertica-crd-sc1-0                                       0/1     Running   0          12m
vertica-crd-sc1-1                                       0/1     Running   1          12m
vertica-crd-sc1-2                                       0/1     Running   0          12m
verticadb-operator-controller-manager-5d9cdc9b8-kw9nv   2/2     Running   0          24m

To find the root cause of the issue, use kubectl logs to check the operator manager. The following example shows that the communal storage bucket does not exist:

$ kubectl logs -l app.kubernetes.io/name=verticadb-operator -c manager -f
2021-08-04T20:03:00.289Z        INFO    controllers.VerticaDB   ExecInPod entry {"verticadb": "default/vertica-crd", "pod": {"namespace": "default", "name": "vertica-crd-sc1-0"}, "command": "bash -c ls -l /opt/vertica/config/admintools.conf && grep '^node\\|^v_\\|^host' /opt/vertica/config/admintools.conf "}
2021-08-04T20:03:00.369Z        INFO    controllers.VerticaDB   ExecInPod stream        {"verticadb": "default/vertica-crd", "pod": {"namespace": "default", "name": "vertica-crd-sc1-0"}, "err": null, "stdout": "-rw-rw-r-- 1 dbadmin verticadba 1243 Aug  4 20:00 /opt/vertica/config/admintools.conf\nhosts = 10.244.1.5,10.244.2.4,10.244.4.6\nnode0001 = 10.244.1.5,/data,/data\nnode0002 = 10.244.2.4,/data,/data\nnode0003 = 10.244.4.6,/data,/data\n", "stderr": ""}
2021-08-04T20:03:00.369Z        INFO    controllers.VerticaDB   ExecInPod entry {"verticadb": "default/vertica-crd", "pod": {"namespace": "default", "name": "vertica-crd-sc1-0"}, "command": "/opt/vertica/bin/admintools -t create_db --skip-fs-checks --hosts=10.244.1.5,10.244.2.4,10.244.4.6 --communal-storage-location=s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c --communal-storage-params=/home/dbadmin/auth_parms.conf --sql=/home/dbadmin/post-db-create.sql --shard-count=12 --depot-path=/depot --database verticadb --force-cleanup-on-failure --noprompt --password ******* "}
2021-08-04T20:03:00.369Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"VerticaDB","namespace":"default","name":"vertica-crd","uid":"26100df1-93e5-4e64-b665-533e14abb67c","apiVersion":"vertica.com/v1beta1","resourceVersion":"11591"}, "reason": "CreateDBStart", "message": "Calling 'admintools -t create_db'"}
2021-08-04T20:03:17.051Z        INFO    controllers.VerticaDB   ExecInPod stream        {"verticadb": "default/vertica-crd", "pod": {"namespace": "default", "name": "vertica-crd-sc1-0"}, "err": "command terminated with exit code 1", "stdout": "Default depot size in use\nDistributing changes to cluster.\n\tCreating database verticadb\nBootstrap on host 10.244.1.5 return code 1 stdout '' stderr 'Logged exception in writeBufferToFile: RecvFiles failed in closing file [s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c/verticadb_rw_access_test.txt]: The specified bucket does not exist. Writing test data to file s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c/verticadb_rw_access_test.txt failed.\\nTesting rw access to communal location s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c/ failed\\n'\n\nError: Bootstrap on host 10.244.1.5 return code 1 stdout '' stderr 'Logged exception in writeBufferToFile: RecvFiles failed in closing file [s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c/verticadb_rw_access_test.txt]: The specified bucket does not exist. Writing test data to file s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c/verticadb_rw_access_test.txt failed.\\nTesting rw access to communal location s3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c/ failed\\n'\n\n", "stderr": ""}
2021-08-04T20:03:17.051Z        INFO    controllers.VerticaDB   aborting reconcile of VerticaDB {"verticadb": "default/vertica-crd", "result": {"Requeue":true,"RequeueAfter":0}, "err": null}
2021-08-04T20:03:17.051Z        DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"VerticaDB","namespace":"default","name":"vertica-crd","uid":"26100df1-93e5-4e64-b665-533e14abb67c","apiVersion":"vertica.com/v1beta1","resourceVersion":"11591"}, "reason": "S3BucketDoesNotExist", "message": "The bucket in the S3 path 's3://newbucket/db/26100df1-93e5-4e64-b665-533e14abb67c' does not exist"}

Create an S3 bucket for the cluster:

$ S3_BUCKET=newbucket
$ S3_CLUSTER_IP=$(kubectl get svc | grep minio | head -1 | awk '{print $3}')
$ export AWS_ACCESS_KEY_ID=minio
$ export AWS_SECRET_ACCESS_KEY=minio123
$ aws s3 mb s3://$S3_BUCKET --endpoint-url http://$S3_CLUSTER_IP
make_bucket: newbucket

Use kubectl get pods to verify that the cluster uses the new S3 bucket and the database is ready:

$ kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
minio-ss-0-0                                            1/1     Running   0          18m
minio-ss-0-1                                            1/1     Running   0          18m
minio-ss-0-2                                            1/1     Running   0          18m
minio-ss-0-3                                            1/1     Running   0          18m
vertica-crd-sc1-0                                       1/1     Running   0          20m
vertica-crd-sc1-1                                       1/1     Running   0          20m
vertica-crd-sc1-2                                       1/1     Running   0          20m
verticadb-operator-controller-manager-5d9cdc9b8-kw9nv   2/2     Running   0          63m

Database is not available

After you create a custom resource instance, the database is not available. The kubectl get custom-resource command does not display information:

$ kubectl get vdb
NAME          AGE   SUBCLUSTERS   INSTALLED   DBADDED   UP
vertica-crd   4s

Use kubectl describe custom-resource to check the events for the pods to identify any issues:

$ kubectl describe vdb
Name:         vertica-crd
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  vertica.com/v1beta1
Kind:         VerticaDB
Metadata:
  ...
  Superuser Password Secret:  su-passwd
Events:
  Type     Reason                           Age                From                Message
  ----     ------                           ----               ----                -------
  Warning  SuperuserPasswordSecretNotFound  5s (x12 over 15s)  verticadb-operator  Secret for superuser password 'su-passwd' was not found

In this circumstance, the custom resource references a Secret named su-passwd for the superuser password, but no such Secret exists. Create a Secret named su-passwd that stores the password:

$ kubectl create secret generic su-passwd --from-literal=password=sup3rs3cr3t
secret/su-passwd created

Use kubectl get custom-resource to verify the issue is resolved:

$ kubectl get vdb
NAME          AGE   SUBCLUSTERS   INSTALLED   DBADDED   UP
vertica-crd   89s   1             0           0         0

Image pull failure

You receive an ImagePullBackOff error when you deploy a Vertica cluster with Helm charts, but you do not pre-pull the Vertica image from the local registry server:

$ kubectl describe pod pod-name-0
...
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  ...
  Warning  Failed            2m32s                  kubelet            Failed to pull image "k8s-rhel7-01:5000/vertica-k8s:default-1": rpc error: code = Unknown desc = context canceled
  Warning  Failed            2m32s                  kubelet            Error: ErrImagePull
  Normal   BackOff           2m32s                  kubelet            Back-off pulling image "k8s-rhel7-01:5000/vertica-k8s:default-1"
  Warning  Failed            2m32s                  kubelet            Error: ImagePullBackOff
  Normal   Pulling           2m18s (x2 over 4m22s)  kubelet            Pulling image "k8s-rhel7-01:5000/vertica-k8s:default-1"

This occurs because the Vertica image is too large to pull from the registry while the cluster is deploying. Execute the following command on a Kubernetes host to check the image size:

$ docker image list | grep vertica-k8s
k8s-rhel7-01:5000/vertica-k8s default-1 2d6f5d3d90d6 9 days ago 1.55GB

To solve this issue, complete one of the following:

  • Pull the Vertica images on each node before creating the Vertica StatefulSet:

    $ NODES=`kubectl get nodes | grep -v NAME | awk '{print $1}'`
    $ for node in $NODES; do ssh $node docker pull $DOCKER_REGISTRY:5000/vertica-k8s:$K8S_TAG; done
    
  • Use the reduced-size vertica/vertica-k8s:latest image for the Vertica server.
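
    For the second option, point the custom resource at the smaller image. A minimal sketch of the relevant portion of the VerticaDB spec:

    spec:
      image: vertica/vertica-k8s:latest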

Pending pods due to insufficient CPU

If your host nodes do not have enough resources to fulfill the resource request from a pod, the pod stays in pending status.

In the following example, the pod requests 40 CPUs on the host node, and the pod stays in Pending:

$ kubectl describe pod cluster-vertica-defaultsubcluster-0
...
Status:         Pending
...
Containers:
  server:
    Image:       docker.io/library/vertica-k8s:default-1
    Ports:       5433/TCP, 5434/TCP, 22/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/vertica/bin/docker-entrypoint.sh
      restart-vertica-node
    Limits:
      memory:  200Gi
    Requests:
      cpu: 40
      memory:  200Gi
...
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3h20m  default-scheduler  0/5 nodes are available: 5 Insufficient cpu.

Confirm the resources available on the host node. The following command shows that the host node has only 40 allocatable CPUs:

$ kubectl describe node host-node-1
...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 20 Mar 2021 22:39:10 -0400   Sat, 20 Mar 2021 13:07:02 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 20 Mar 2021 22:39:10 -0400   Sat, 20 Mar 2021 13:07:02 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 20 Mar 2021 22:39:10 -0400   Sat, 20 Mar 2021 13:07:02 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 20 Mar 2021 22:39:10 -0400   Sat, 20 Mar 2021 13:07:12 -0400   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.19.0.5
  Hostname:    eng-g9-191
Capacity:
  cpu:                40
  ephemeral-storage:  285509064Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             263839236Ki
  pods:               110
Allocatable:
  cpu:                40
  ephemeral-storage:  285509064Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             263839236Ki
  pods:               110
...
Non-terminated Pods:          (3 in total)
  Namespace                   Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                   ------------  ----------  ---------------  -------------  ---
  default                     cluster-vertica-defaultsubcluster-0    38 (95%)      0 (0%)      200Gi (79%)      200Gi (79%)    51m
  kube-system                 kube-flannel-ds-8brv9                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      9h
  kube-system                 kube-proxy-lgjhp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9h
...

To correct this issue, reduce the resource.requests in the subcluster to values lower than the maximum allocatable CPUs. The following example uses a YAML-formatted file named patch.yaml to lower the resource requests for the pod:

$ cat patch.yaml
spec:
  subclusters:
    - name: defaultsubcluster
      resources:
        requests:
          memory: 238Gi
          cpu: "38"
        limits:
          memory: 238Gi
$ kubectl patch vdb cluster-vertica --type=merge --patch "$(cat patch.yaml)"
verticadb.vertica.com/cluster-vertica patched

Adding and testing the vlogger sidecar

Vertica provides the vlogger image that sends logs from vertica.log to standard output on the host node for log aggregation.

To add the sidecar to the CR, add an element to the spec.sidecars definition:

spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:1.0.0

To test the sidecar, run the following command and verify that it returns logs:

$ kubectl logs pod-name -c vlogger

2021-12-08 14:39:08.538 DistCall Dispatch:0x7f3599ffd700-c000000000997e [Txn
2021-12-08 14:40:48.923 INFO New log
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> Log /data/verticadb/v_verticadb_node0002_catalog/vertica.log opened; #1
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> Processing command line: /opt/vertica/bin/vertica -D /data/verticadb/v_verticadb_node0002_catalog -C verticadb -n v_verticadb_node0002 -h 10.20.30.40 -p 5433 -P 4803 -Y ipv4
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> Starting up Vertica Analytic Database v11.0.2-20211201
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO>
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> vertica(v11.0.2) built by @re-docker5 from master@a44ffabdf3f05e8d104426506b088192f741c485 on 'Wed Dec  1 06:10:34 2021' $BuildId$
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> CPU architecture: x86_64
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> 64-bit Optimized Build
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> Compiler Version: 7.3.1 20180303 (Red Hat 7.3.1-5)
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> LD_LIBRARY_PATH=/opt/vertica/lib
2021-12-08 14:40:48.923 Main Thread:0x7fbbe2cf6280 [Init] <INFO> LD_PRELOAD=
2021-12-08 14:40:48.925 Main Thread:0x7fbbe2cf6280 <LOG> @v_verticadb_node0002: 00000/5081: Total swap memory used: 0
2021-12-08 14:40:48.925 Main Thread:0x7fbbe2cf6280 <LOG> @v_verticadb_node0002: 00000/4435: Process size resident set: 28651520
2021-12-08 14:40:48.925 Main Thread:0x7fbbe2cf6280 <LOG> @v_verticadb_node0002: 00000/5075: Total Memory free + cache: 59455180800
2021-12-08 14:40:48.925 Main Thread:0x7fbbe2cf6280 [Txn] <INFO> Looking for catalog at: /data/verticadb/v_verticadb_node0002_catalog/Catalog
...

Cannot find CPU metrics with VerticaAutoscaler

You might notice that your VerticaAutoscaler does not scale according to CPU utilization:

$ kubectl get hpa
NAME                REFERENCE                           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
autoscaler-name     VerticaAutoscaler/autoscaler-name   <unknown>/50%   3         12        0          19h

$ kubectl describe hpa
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name: autoscaler-name
Namespace: namespace
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 12 May 2022 10:25:02 -0400
Reference: VerticaAutoscaler/autoscaler-name
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 3
Max replicas: 12
VerticaAutoscaler pods: 3 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 7s horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas 7s horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

You receive this error because the metrics server is not installed:

$ kubectl top nodes
error: Metrics API not available

To install the metrics server:

  1. Download the components.yaml file:

    $ curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    
  2. Optionally, disable TLS:

    $ if ! grep kubelet-insecure-tls components.yaml; then
      sed -i 's/- args:/- args:\n - --kubelet-insecure-tls/' components.yaml;
      fi
    
  3. Apply the YAML file:

    $ kubectl apply -f components.yaml
    
  4. Verify that the metrics server is running:

    $ kubectl get svc metrics-server -n namespace
    NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    metrics-server   ClusterIP   10.105.239.175   <none>        443/TCP   19h
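
    Once the service is running, kubectl top should return node metrics instead of the earlier error, which confirms that the Metrics API is available to the autoscaler:

    $ kubectl top nodes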
    

CPU request error with VerticaAutoscaler

You might receive an error that states:

failed to get cpu utilization: missing request for cpu

You get this error because you must set resource limits on all containers, including sidecar containers. To correct this error:

  1. Verify the error:

    $ kubectl get hpa
    NAME                REFERENCE                           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
    autoscaler-name     VerticaAutoscaler/autoscaler-name   <unknown>/50%   3         12        0          19h
    
    $ kubectl describe hpa
    Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
    Name: autoscaler-name
    Namespace: namespace
    Labels: <none>
    Annotations: <none>
    CreationTimestamp: Thu, 12 May 2022 15:58:31 -0400
    Reference: VerticaAutoscaler/autoscaler-name
    Metrics: ( current / target )
    resource cpu on pods (as a percentage of request): <unknown> / 50%
    Min replicas: 3
    Max replicas: 12
    VerticaAutoscaler pods: 3 current / 0 desired
    Conditions:
    Type Status Reason Message
    ---- ------ ------ -------
    AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
    ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: missing request for cpu
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedGetResourceMetric 4s (x5 over 64s) horizontal-pod-autoscaler failed to get cpu utilization: missing request for cpu
    Warning FailedComputeMetricsReplicas 4s (x5 over 64s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
    
  2. Add resource limits to the CR:

    $ cat /tmp/vdb.yaml
    apiVersion: vertica.com/v1beta1
    kind: VerticaDB
    metadata:
      name: vertica-vdb
    spec:
      sidecars:
        - name: vlogger
          image: vertica/vertica-logger:latest
          resources:
            requests:
              memory: "100Mi"
              cpu: "100m"
            limits:
              memory: "100Mi"
              cpu: "100m"
      communal:
        credentialSecret: communal-creds
        endpoint: https://endpoint
        path: s3://bucket-location
      dbName: verticadb
      image: vertica/vertica-k8s:latest
      subclusters:
      - isPrimary: true
        name: sc1
        resources:
          requests:
            memory: "4Gi"
            cpu: 2
          limits:
            memory: "4Gi"
            cpu: 2
        serviceType: ClusterIP
        serviceName: sc1
        size: 3
      upgradePolicy: Auto
    
  3. Apply the update:

    $ kubectl apply -f /tmp/vdb.yaml
    verticadb.vertica.com/vertica-vdb created
    

When you set a new CPU resource limit, Kubernetes reschedules each pod in the StatefulSet in a rolling update until all pods have the updated CPU resource limit.
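
To watch the rolling update complete before you rely on the new limits, you can follow the StatefulSet rollout. The StatefulSet name below is an assumption based on the databasename-subclustername pattern shown earlier; substitute the name reported by kubectl get statefulsets:

$ kubectl get statefulsets
$ kubectl rollout status statefulset vertica-vdb-sc1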