VerticaAutoscaler CRD

The VerticaAutoscaler custom resource (CR) works with a Kubernetes HorizontalPodAutoscaler to automatically scale resources for existing subclusters using one of the following strategies:

  • Subcluster scaling for short-running dashboard queries.

  • Pod scaling for long-running analytic queries.

The VerticaAutoscaler CR scales using resource or custom metrics. Vertica manages subclusters by workload, which helps you pinpoint the best metrics to trigger a scaling event. To maintain data integrity, the operator does not scale down unless all connections to the pods are drained and sessions are closed.

For details about the algorithm that determines when the VerticaAutoscaler scales, see the Kubernetes documentation.

Additionally, the VerticaAutoscaler provides a webhook to validate state changes. By default, this webhook is enabled. You can configure this webhook with the webhook.enable Helm chart parameter.
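
For example, the following is a minimal sketch of disabling the webhook at install time. The release name vdb-op and the vertica-charts/verticadb-operator chart reference are assumptions based on a standard Helm deployment of the operator:

$ helm install vdb-op vertica-charts/verticadb-operator --set webhook.enable=false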

Examples

The examples in this section use the following VerticaDB custom resource. Each example uses CPU to trigger scaling:

apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: dbname
spec:
  communal:
    path: "path/to/communal-storage"
    endpoint: "path/to/communal-endpoint"
    credentialSecret: credentials-secret
  subclusters:
    - name: primary1
      size: 3
      isPrimary: true
      serviceName: primary1
      resources:
        limits:
          cpu: "8"
        requests:
          cpu: "4"

Prerequisites

  • Complete Installing the VerticaDB operator.

  • Install the kubectl command line tool.

  • Complete VerticaDB CRD.

  • Confirm that your cluster has the resources available to scale. One way to verify this is sketched after this list.

  • Set a value for the metric that triggers scaling. For example, if you want to scale by CPU utilization, you must set CPU limits and requests.
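
For example, the following commands are one way to confirm that resource metrics are available and that your nodes have CPU headroom. This sketch assumes the Kubernetes Metrics Server is installed, which the HorizontalPodAutoscaler requires to read resource metrics such as CPU utilization:

$ kubectl top nodes
$ kubectl describe nodes | grep -A 5 "Allocated resources"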

Subcluster scaling

Automatically adjust the number of subclusters in your custom resource to fine-tune resources for short-running dashboard queries. For example, increase the number of subclusters to increase throughput. For more information, see Improving query throughput using subclusters.

All subclusters share the same service object, so there are no required changes to external service objects. Pods in the new subcluster are load balanced by the existing service object.
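
For example, you can confirm this by inspecting the service object's endpoints before and after a scaling event. The service name dbname-primary1 below is an assumption based on the operator's <verticadb-name>-<serviceName> naming convention:

$ kubectl get svc
$ kubectl get endpoints dbname-primary1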

The following example creates a VerticaAutoscaler custom resource that scales by subcluster when the VerticaDB uses 50% of the node's available CPU:

  1. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest:

    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    metadata:
      name: autoscaler-name
    spec:
      verticaDBName: dbname
      scalingGranularity: Subcluster
      serviceName: primary1
    
  2. Create the VerticaAutoscaler with the kubectl autoscale command (a declarative alternative is sketched after this procedure):

    $ kubectl autoscale verticaautoscaler autoscaler-name --cpu-percent=50 --min=3 --max=12
    

    The previous command creates a HorizontalPodAutoscaler object that:

    • Sets the target CPU utilization to 50%.

    • Scales to a minimum of three pods in one subcluster and a maximum of 12 pods across four subclusters, where each new subcluster contains three pods.
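
If you prefer a declarative workflow, the following manifest is a rough equivalent of the kubectl autoscale command above. This is a sketch rather than operator-generated output; the autoscaling/v2 API version and the autoscaler-name-hpa object name are assumptions:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: autoscaler-name-hpa
spec:
  scaleTargetRef:
    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    name: autoscaler-name
  minReplicas: 3
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50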

Pod scaling

For long-running, analytic queries, increase the pod count for a subcluster. For additional information about Vertica and analytic queries, see Using elastic crunch scaling to improve query performance.

When you scale pods in an Eon Mode database, you must consider the impact on database shards. For details, see Namespaces and shards.
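
For example, one way to check the shard count before you scale pods is to query the SHARDS system table from a running pod. This sketch assumes the pod name dbname-primary1-0 from the VerticaDB example above and a dbadmin user that does not require a password:

$ kubectl exec -it dbname-primary1-0 -- vsql -c "SELECT COUNT(*) FROM shards;"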

The following example creates a VerticaAutoscaler custom resource that scales by pod when the VerticaDB uses 50% of the node's available CPU:

  1. Define the VerticaAutoscaler custom resource in a YAML-formatted manifest:

    apiVersion: vertica.com/v1beta1
    kind: VerticaAutoscaler
    metadata:
      name: autoscaler-name
    spec:
      verticaDBName: dbname
      scalingGranularity: Pod
      serviceName: primary1
    
  2. Create the autoscaler instance with the kubectl autoscale command:

    $ kubectl autoscale verticaautoscaler autoscaler-name --cpu-percent=50 --min=3 --max=12
    

    The previous command creates a HorizontalPodAutoscaler object that:

    • Sets the target CPU utilization to 50%.

    • Scales the primary1 subcluster to a minimum of three pods and a maximum of 12 pods.
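
To confirm that the autoscaler reacts to load, you can watch the HorizontalPodAutoscaler and the pods as they scale. For example:

$ kubectl get hpa autoscaler-name --watch
$ kubectl get pods --watch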

Event monitoring

To monitor scaling events, view the HorizontalPodAutoscaler object with the kubectl describe hpa command:

$ kubectl describe hpa autoscaler-name
Name:                                                  autoscaler-name
Namespace:                                             vertica
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Tue, 12 Apr 2022 15:11:28 -0300
Reference:                                             VerticaAutoscaler/autoscaler-name
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (9m) / 50%
Min replicas:                                          3
Max replicas:                                          12
VerticaAutoscaler pods:                                3 current / 3 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range

When a scaling event occurs, you can view the admintools commands that the operator runs to scale the cluster. Use kubectl to view the StatefulSets:

$ kubectl get statefulsets
NAME                                                   READY   AGE
dbname-autoscaler-name-0                               0/3     71s
dbname-primary1                                        3/3     39m

Use kubectl describe to view the executing commands:

$ kubectl describe vdb dbname | tail
  Upgrade Status:
Events:
  Type    Reason                   Age   From                Message
  ----    ------                   ----  ----                -------
  Normal  ReviveDBStart            41m   verticadb-operator  Calling 'admintools -t revive_db'
  Normal  ReviveDBSucceeded        40m   verticadb-operator  Successfully revived database. It took 25.255683916s
  Normal  ClusterRestartStarted    40m   verticadb-operator  Calling 'admintools -t start_db' to restart the cluster
  Normal  ClusterRestartSucceeded  39m   verticadb-operator  Successfully called 'admintools -t start_db' and it took 44.713787718s
  Normal  SubclusterAdded          10s   verticadb-operator  Added new subcluster 'autoscaler-name-0'
  Normal  AddNodeStart             9s    verticadb-operator  Calling 'admintools -t db_add_node' for pod(s) 'dbname-autoscaler-name-0-0, dbname-autoscaler-name-0-1, dbname-autoscaler-name-0-2'