Sandboxing on K8s
Sandboxing on Kubernetes allows you to create isolated testing environments without setting up a new database or reloading data, making it easier to test features in new Vertica versions. Sandboxing also enables seamless online upgrades on Kubernetes: while users stay connected to the main cluster, the upgrade is performed on the sandbox, and once the upgrade is complete, the sandbox is promoted to the main cluster. The operator automates the sandboxing process for Vertica subclusters within a custom resource (CR). For more information, see Subcluster sandboxing.
Prerequisites
- Install the VerticaDB operator
- Create a VerticaDB custom resource definition manifest
Sandboxing a subcluster
Note
- You can only sandbox secondary subclusters. If your existing cluster does not have a secondary subcluster, you can scale your subclusters to add one. See Scaling subclusters.
- Only existing subclusters can be added to a sandbox. Any subclusters created after the sandbox is established cannot be added to it.
The following parameters in the VerticaDB custom resource (CR) contain the sandbox information:
| Parameter | Description |
|---|---|
| spec.sandboxes[i].name | Name of the sandbox. |
| spec.sandboxes[i].subclusters[i].name | Name of a secondary subcluster to add to the sandbox. The sandbox must include at least one secondary subcluster. |
| spec.sandboxes[i].image | Name of the image to use for the sandbox. If omitted, the image from the main cluster is used. Changing this value forces an upgrade of the sandbox in which it is defined. |
| spec.sandboxes[i].shutdown | Indicates whether the sandbox must remain in a shutdown state. When set to true, the sandbox is stopped (if it is running): the operator stops all the subclusters in the sandbox using a draining shutdown and does not attempt to restart them. |
To sandbox a subcluster
- Use `kubectl edit` to open your default text editor and update the YAML file for the specified custom resource. The following command opens a custom resource named vdb for editing:
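  A plausible form of the command, assuming your CR is named vdb:

  ```shell
  kubectl edit verticadb vdb
  ```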
- In the spec section of the custom resource, locate the subclusters subsection and identify the secondary subcluster that you want to sandbox. In the following example, we will sandbox the secondary subcluster sc2:
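  A sketch of what the subclusters section might look like (the sc1 name and the sizes are illustrative):

  ```yaml
  spec:
    subclusters:
    - name: sc1
      size: 3
      type: primary
    - name: sc2
      size: 3
      type: secondary
  ```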
- Add an entry for the sandbox. Provide a sandbox name and the name of the subcluster(s) that you want to sandbox. In the following example, we sandbox subcluster sc2 in a sandbox named sandbox1:
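  A sketch of the added entry, shown with its parent spec key (the rest of the spec is unchanged):

  ```yaml
  spec:
    sandboxes:
    - name: sandbox1
      subclusters:
      - name: sc2
  ```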
Note
The first subcluster added to the sandbox becomes the sandbox's primary subcluster; any subsequent subclusters are designated as secondary. If multiple subclusters are added to the sandbox simultaneously, the operator selects the first subcluster in the list as the primary.
- Save and close the custom resource file. When the update completes, you will receive the following message:
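  With a CR named vdb, kubectl edit typically confirms the update with a message like:

  ```
  verticadb.vertica.com/vdb edited
  ```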
If you want to include another subcluster in the sandbox, edit the VerticaDB CR again and modify the sandbox information. The following shows the sandbox entry after adding sc3:
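A sketch of the updated entry (sc3 must already exist as a secondary subcluster under spec.subclusters):

```yaml
spec:
  sandboxes:
  - name: sandbox1
    subclusters:
    - name: sc2
    - name: sc3
```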
Checking sandboxing status
You can check the status of sandboxing as follows:
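One way to do this, assuming a CR named vdb, is to read the CR back and review the subcluster types:

```shell
kubectl get verticadb vdb -o yaml
```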
You can verify that sandboxing was successful by checking the VerticaDB CR to see whether the subcluster type changed from `secondary` to `sandboxprimary`.
Alternatively, you can connect to any node in the subcluster using the vsql client and query the `subclusters` system table to check that sandboxing was successful.
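A sketch of such a query; the sandbox column is assumed to be available in your Vertica version:

```sql
=> SELECT subcluster_name, is_primary, sandbox FROM subclusters;
```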
Upgrading the subcluster
To upgrade the sandbox, update the `spec.sandboxes[i].image` field.
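For example (the repository and tag are illustrative):

```yaml
spec:
  sandboxes:
  - name: sandbox1
    image: "opentext/vertica-k8s:24.3.0-1"
    subclusters:
    - name: sc2
```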
Note
Sandboxes can run different Vertica versions. For example, sandbox1 can use version 24.3.0-0 while sandbox2 runs version 24.3.0-1.
Stopping and shutting down a sandbox
You can gracefully shut down a sandbox and the nodes where it was running. We recommend running sandboxes on separate nodes or node groups; this allows you to shut down those nodes or node groups when the sandboxes are stopped, helping to reduce costs.
A sandbox can remain in a shutdown state for as long as required.
In the following example, sandbox1 will be stopped and remain in the shutdown state for as long as `shutdown` is set to `true`.
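A sketch of the corresponding entry:

```yaml
spec:
  sandboxes:
  - name: sandbox1
    shutdown: true
    subclusters:
    - name: sc2
```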
All pods in the subclusters within the sandbox will be deleted and will not be recreated until `shutdown` is set to `false`.
Checking sandbox status
You can check if the sandbox was stopped as follows:
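For example, assuming a CR named vdb, you can extract the shutdown flags with a jsonpath query (see the fields noted below):

```shell
kubectl get verticadb vdb -o jsonpath='{.status.subclusters[*].shutdown}'
```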
Note that `spec.subclusters[].shutdown` and `status.subclusters[].shutdown` are set to `true` for all subclusters of a sandbox that has been shut down.
You can start the sandbox again by setting `shutdown` to `false`.
Note
- Ensure `shutdown` is not set to `true` when adding a new subcluster or sandbox to a VerticaDB CR.
- A subcluster cannot be unsandboxed if its `shutdown` field is `true` or if it is part of a sandbox with `shutdown` set to `true`.
- `image` cannot be changed if the sandbox or any of its subclusters have `shutdown` set to `true`.
Removing sandboxes
Removing a subcluster from a sandbox (unsandboxing) returns it to the main cluster.
Note
- You must remove secondary subclusters from a sandbox before removing the `sandboxprimary` subcluster.
- To remove a sandboxed subcluster from the database, unsandbox it first; removing a sandboxed subcluster without unsandboxing it will cause a failure. However, you can remove the complete sandbox and all its subclusters (from the spec) at the same time.
- A sandbox cannot be removed if `shutdown` is set to `true`.
To remove a subcluster from the sandbox, remove the subcluster name from `spec.sandboxes[i].subclusters` in the VerticaDB CR. In the following example, subcluster sc3 is removed from the sandbox:
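A sketch of the entry after sc3 is removed:

```yaml
spec:
  sandboxes:
  - name: sandbox1
    subclusters:
    - name: sc2
```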
To remove the complete sandbox, remove its entry from the VerticaDB CR:
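A sketch of the spec after the whole sandboxes section is deleted (sizes illustrative):

```yaml
spec:
  subclusters:
  - name: sc1
    size: 3
    type: primary
  - name: sc2
    size: 3
    type: secondary
  - name: sc3
    size: 3
    type: secondary
  # the sandboxes section has been removed entirely
```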
Checking unsandboxing status
You can check if the sandbox was removed as follows:
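For example, assuming a CR named vdb, you can list each subcluster's type with a jsonpath query:

```shell
kubectl get verticadb vdb -o jsonpath='{range .spec.subclusters[*]}{.name}{": "}{.type}{"\n"}{end}'
```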
You can verify that the sandbox was removed successfully by opening the VerticaDB CR and checking that the subcluster type changed from `sandboxprimary` back to `secondary`.
Alternatively, you can connect to any node in the subcluster using the vsql client and query the `subclusters` system table to verify that unsandboxing was successful.
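A sketch of such a check (the sandbox column is assumed to be available; it should no longer report a sandbox name for the returned subclusters):

```sql
=> SELECT DISTINCT subcluster_name, sandbox FROM subclusters;
```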