Scaling subclusters
The operator enables you to scale the number of subclusters and the number of pods per subcluster, so you can allocate or conserve resources to meet the immediate needs of your workload.
The following sections explain how to scale resources for new workloads. For details about scaling resources for existing workloads, see VerticaAutoscaler custom resource definition.
Prerequisites
- Complete Installing the VerticaDB operator.
- Install the kubectl command line tool.
- Complete VerticaDB custom resource definition.
- Confirm that you have the resources to scale (a quick way to check the current cluster layout is sketched after this list).

  Note: By default, the custom resource uses the free Community Edition (CE) license. This license allows you to deploy up to three nodes with a maximum of 1TB of data. To add resources beyond these limits, you must add your Vertica license to the custom resource as described in VerticaDB custom resource definition.
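Before you scale, it can help to check the current layout of the cluster. This is a minimal sketch, assuming the custom resource is named vertica-db (substitute the name of your own custom resource); the first command lists the current pods, and the second lists each subcluster and its declared size:

  $ kubectl get pods --selector app.kubernetes.io/name=verticadb
  $ kubectl get vdb vertica-db -o jsonpath='{range .spec.subclusters[*]}{.name}{"\t"}{.size}{"\n"}{end}'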
Scaling the number of subclusters
Adjust the number of subclusters in your custom resource to fine-tune resources for short-running dashboard queries. For example, increase the number of subclusters to increase throughput. For more information, see Improving query throughput using subclusters.
- Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named vdb for editing:

  $ kubectl edit vdb
- In the spec section of the custom resource, locate the subclusters subsection. Begin with the type field to define a new subcluster. The type field indicates the subcluster type. Because there is already a primary subcluster, enter secondary:

  spec:
    ...
    subclusters:
    ...
    - type: secondary
- Follow the steps in VerticaDB custom resource definition to complete the subcluster definition. The following completed example adds a secondary subcluster for dashboard queries:

  spec:
    ...
    subclusters:
    - type: primary
      name: primary-subcluster
    ...
    - type: secondary
      name: dashboard
      clientNodePort: 32001
      resources:
        limits:
          cpu: 32
          memory: 96Gi
        requests:
          cpu: 32
          memory: 96Gi
      serviceType: NodePort
      size: 3
- Save and close the custom resource file. When the update completes, you receive a message similar to the following:

  verticadb.vertica.com/vertica-db edited
- Use the kubectl wait command to monitor when the new pods are ready:

  $ kubectl wait --for=condition=Ready pod --selector app.kubernetes.io/name=verticadb --timeout 180s
  pod/vdb-dashboard-0 condition met
  pod/vdb-dashboard-1 condition met
  pod/vdb-dashboard-2 condition met
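If you prefer to apply the change without opening an editor, you can append the same subcluster with a JSON patch. This is a sketch only, assuming the custom resource is named vertica-db and using a minimal secondary subcluster definition; extend the value object with the fields your workload needs:

  $ kubectl patch vdb vertica-db --type=json \
      -p '[{"op": "add", "path": "/spec/subclusters/-", "value": {"type": "secondary", "name": "dashboard", "size": 3}}]'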
Scaling the pods in a subcluster
For long-running, analytic queries, increase the pod count for a subcluster. See Using elastic crunch scaling to improve query performance.
- Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named verticadb for editing:

  $ kubectl edit verticadb
- Update the subclusters.size value to 6:

  spec:
    ...
    subclusters:
    ...
    - type: secondary
      ...
      size: 6

  Shards are rebalanced automatically.
- Save and close the custom resource file. You receive a message similar to the following when you successfully update the file:

  verticadb.vertica.com/verticadb edited
- Use the kubectl wait command to monitor when the new pods are ready:

  $ kubectl wait --for=condition=Ready pod --selector app.kubernetes.io/name=verticadb --timeout 180s
  pod/vdb-subcluster1-3 condition met
  pod/vdb-subcluster1-4 condition met
  pod/vdb-subcluster1-5 condition met
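You can also make the same size change without an editor by patching the field directly. A minimal sketch, assuming the custom resource is named verticadb and the subcluster you are resizing is the second element (index 1) of the subclusters array; adjust the name and index to match your spec:

  $ kubectl patch vdb verticadb --type=json \
      -p '[{"op": "replace", "path": "/spec/subclusters/1/size", "value": 6}]'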
Stopping and shutting down a subcluster
To optimize costs, you can gracefully shut down a subcluster and then the nodes that it runs on. This approach is particularly effective when the database nodes run on dedicated instances, which ensures that shutting down one subcluster does not impact other subclusters that need to remain online.
A subcluster can remain in a shutdown state for as long as required.
In the following example, subcluster sc2 will be stopped and remain in the shutdown state as long as shutdown is set to true.
spec:
...
  subclusters:
  ...
  - name: sc2
    shutdown: true
    size: 3
    type: secondary
All pods in the subcluster will be deleted and will not be recreated until shutdown is set to false.
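A patch can toggle the field as well. The following is a sketch, assuming the custom resource is named vertica-db and that sc2 is the second element (index 1) of the subclusters array; adjust both to match your deployment:

  $ kubectl patch vdb vertica-db --type=json \
      -p '[{"op": "replace", "path": "/spec/subclusters/1/shutdown", "value": true}]'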
Checking the status
You can check the status of the subcluster as follows:
$ kubectl describe vdb
Name:         vertica-db
...
Events:
  Type    Reason           Age   From                Message
  ----    ------           ----  ----                -------
  Normal   StopSubclusterStart      20s                    verticadb-operator  Starting stop subcluster "sc2".
  Normal   StopSubclusterSucceeded  9s                     verticadb-operator  Successfully stopped subcluster "sc2".
Note that both spec.subclusters[].shutdown and status.subclusters[].shutdown are set to true for the subcluster that has been shut down.
$ kubectl describe vdb
Name:         vertica-db
...
spec:
...
  subclusters:
  ...
    name: sc2
    shutdown: true
...
status:
  ...
  subclusters:
    ...
    name:           sc2
    oid:            54043195528448686
    shutdown:       true
    upNodeCount:  0
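For a quick scripted check, you can read the same status fields with a JSONPath query. This sketch assumes the custom resource is named vertica-db:

  $ kubectl get vdb vertica-db -o jsonpath='{.status.subclusters[?(@.name=="sc2")].upNodeCount}'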
You can start the subcluster again by setting spec.subclusters[].shutdown to false.
spec:
...
  subclusters:
  ...
  - name: sc2
    shutdown: false
    size: 3
    type: secondary
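When shutdown is set back to false, the operator recreates the pods for the subcluster. You can monitor their readiness with the same kubectl wait command used when scaling; this assumes the pods carry the app.kubernetes.io/name=verticadb label shown earlier:

  $ kubectl wait --for=condition=Ready pod --selector app.kubernetes.io/name=verticadb --timeout 180s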
Note
- Ensure shutdown is not set to true when adding a new subcluster to a vdb.
- You cannot sandbox or unsandbox a subcluster with shutdown set to true.
- A subcluster cannot be removed if it has shutdown set to true.
Removing a subcluster
Remove a subcluster when it is no longer needed, or to preserve resources.
Important
Because each custom resource instance requires a primary subcluster, you cannot remove all subclusters.
- Use kubectl edit to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named verticadb for editing:

  $ kubectl edit verticadb
- In the subclusters subsection nested under spec, locate the subcluster that you want to delete. Delete the element in the subclusters array that represents that subcluster. Each element is identified by a hyphen (-).
- After you delete the subcluster and save, you receive a message similar to the following:

  verticadb.vertica.com/verticadb edited
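Alternatively, you can remove the element with a JSON patch instead of an editor. A minimal sketch, assuming the custom resource is named verticadb and the subcluster you are removing is the second element (index 1) of the subclusters array; verify the index before you run it, because the patch removes whatever element is at that position:

  $ kubectl patch vdb verticadb --type=json \
      -p '[{"op": "remove", "path": "/spec/subclusters/1"}]'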