Common administrative tasks
Stop, restart, and revive
This procedure is useful when you want to switch between AWS instances to save money. For example, you can run us-east instances during the day and switch to us-west instances at night. A scripted sketch of this switch follows the steps below.
This example uses the database created by the following command, which specifies the us-east-1 region:
$ vcluster create_db --db-name test_db --hosts 192.2.0.1,192.2.0.2,192.2.0.3 --catalog-path /scratch_b/qa --data-path /scratch_b/qa --shard-count 4 --communal-storage-location s3://testbucket/test_db --depot-path /path/to/depot --depot-size 20G --config /opt/vertica/config/vertica_cluster.yaml --config-param awsauth=key:secret,awsenablehttps=0,awsregion=us-east-1,awsendpoint=myhost:9000 --password "" --skip-package-install
✔ Check NMA service health
...
✔ Synchronize catalog with communal storage
[INFO] Successfully created a database with name [test_db]
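To confirm that the new database accepts connections, you can run a quick query with vsql. This is a minimal sketch; the dbadmin user is an assumption, and the empty password matches the create_db call above:
$ # dbadmin is an assumed user name -- substitute your database superuser
$ vsql -h 192.2.0.1 -d test_db -U dbadmin -w '' -c "SELECT node_name, node_state FROM nodes;"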
Note
When a Vertica host is stopped and restarted, you must restart the NMA to ensure proper functionality.
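For example, after a host comes back up, restart the agent before running any vcluster commands. This is a minimal sketch that assumes the NMA is installed as the systemd unit vertica_nma; the unit name can vary by installation, so verify it (or run /opt/vertica/bin/node_management_agent directly) on your systems:
$ # On each restarted host; vertica_nma is an assumed unit name
$ sudo systemctl restart vertica_nma
$ sudo systemctl status vertica_nma --no-pager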
- Stop the database:
$ /opt/vertica/bin/vcluster stop_db --db-name test_db --config /opt/vertica/config/vertica_cluster.yaml --password ""
✔ Collect node information
✔ Collect cluster information
✔ Update node state from running database
✔ Collect information for all up nodes
✔ Synchronize catalog with communal storage
✔ Stop database
✔ Verify database is not running
[INFO] Successfully stopped a database with name test_db
- Revive the database. For this example, the database is revived into a different region (us-west-1):
$ vcluster revive_db --db-name test_db --hosts 192.2.0.1,192.2.0.2,192.2.0.3 --communal-storage-location s3://testbucket/test_db --config /opt/vertica/config/vertica_cluster.yaml --config-param awsauth=key:secret,awsenablehttps=0,awsregion=us-west-1,awsendpoint=myhost:9000
✔ Check NMA service health
✔ Verify database is running
✔ Download cluster_config.json
✔ Create necessary directories on Vertica hosts
✔ Get network profile of cluster
✔ Load remote catalog
[INFO] Successfully revived database test_db
- Start the database:
$ /opt/vertica/bin/vcluster start_db --db-name test_db --hosts 192.2.0.1,192.2.0.2,192.2.0.3 --catalog-path /path/to/catalog --config /opt/vertica/config/vertica_cluster.yaml --config-param awsauth=key:secret,awsenablehttps=0,awsregion=us-west-1,awsendpoint=myhost:9000 --password ""
✔ Check NMA service health
✔ Collect nodes information
✔ Download cluster_config.json
✔ Check NMA service health
✔ Verify database is running
✔ Read catalog
✔ Check Vertica version
✔ Get contents of vertica.conf
✔ Get contents of spread.conf
✔ Start 3 node(s)
✔ Wait for 3 node(s) to come up: all nodes are up
✔ Synchronize catalog with communal storage
✔ Collect node information
✔ Collect cluster information
✔ Update node state from running database
[INFO] Started database test_db
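The three steps above can be wrapped in a script for the day/night region switch described at the start of this section. The following is a minimal sketch that reuses the hosts, bucket, credentials, and paths from the examples above; in practice the target hosts would be instances in the new region, and error handling is omitted:
#!/usr/bin/env bash
# switch_region.sh -- stop test_db, then revive and start it in TARGET_REGION.
# A sketch only; the values below are the placeholders used throughout this page.
set -euo pipefail

TARGET_REGION="$1"    # e.g. us-west-1 at night, us-east-1 during the day
HOSTS="192.2.0.1,192.2.0.2,192.2.0.3"
CONFIG="/opt/vertica/config/vertica_cluster.yaml"
PARAMS="awsauth=key:secret,awsenablehttps=0,awsregion=${TARGET_REGION},awsendpoint=myhost:9000"

# 1. Stop the database in the current region.
/opt/vertica/bin/vcluster stop_db --db-name test_db --config "$CONFIG" --password ""

# 2. Revive the catalog from communal storage into the target region.
/opt/vertica/bin/vcluster revive_db --db-name test_db --hosts "$HOSTS" \
    --communal-storage-location s3://testbucket/test_db \
    --config "$CONFIG" --config-param "$PARAMS"

# 3. Start the revived database.
/opt/vertica/bin/vcluster start_db --db-name test_db --hosts "$HOSTS" \
    --catalog-path /path/to/catalog --config "$CONFIG" \
    --config-param "$PARAMS" --password ""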
Test on sandboxed subclusters
You can create sandboxed subclusters and perform tests on them without affecting your production database. For details, see Subcluster sandboxing:
- Add the subcluster sc1, which contains nodes 192.2.0.4 and 192.2.0.5:
$ vcluster add_subcluster --subcluster sc1 --db-name test_db --password "" --hosts 192.2.0.1,192.2.0.2,192.2.0.3 --control-set-size 1 --new-hosts 192.2.0.4,192.2.0.5
✔ Collect cluster information
✔ Check NMA service health
...
✔ Initiate rebalance of subcluster shards
[INFO] Successfully added subcluster sc1 with nodes [192.2.0.4,192.2.0.5] to database test_db
- Sandbox the subcluster sc1 with the new sandbox sand:
$ vcluster sandbox_subcluster --subcluster sc1 --sandbox sand -p "" --config /opt/vertica/config/vertica_cluster.yaml
✔ Collect information for all up nodes
✔ Find all subclusters and record their sandboxing information
✔ Convert subcluster into sandbox in catalog system
✔ Wait for subcluster nodes to come up
[INFO] Successfully sandboxed subcluster sc1 as sand
- Verify that your nodes were sandboxed with list_all_nodes. The following command is run from outside the sandbox sand, so the state of each node in sand is listed as UNKNOWN (a scripted variant of this check appears after these steps):
$ vcluster list_all_nodes --config /opt/vertica/config/vertica_cluster.yaml -p ""
✔ Collect node information
✔ Collect cluster information
✔ Update node state from running database
✔ Check NMA service health
✔ Read Vertica version
✔ Check node state from running database
[
  {
    "address": "192.2.0.1",
    "name": "v_test_db_node0001",
    "state": "UP",
    "catalog_path": "/scratch_b/qa/test_db/v_test_db_node0001_catalog/Catalog",
    "subcluster": "default_subcluster",
    "sandbox": "",
    "is_primary": true,
    "version": "v24.3.0-20240613"
  },
  {
    "address": "192.2.0.2",
    "name": "v_test_db_node0002",
    "state": "UP",
    "catalog_path": "/scratch_b/qa/test_db/v_test_db_node0002_catalog/Catalog",
    "subcluster": "default_subcluster",
    "sandbox": "",
    "is_primary": true,
    "version": "v24.3.0-20240613"
  },
  {
    "address": "192.2.0.3",
    "name": "v_test_db_node0003",
    "state": "UP",
    "catalog_path": "/scratch_b/qa/test_db/v_test_db_node0003_catalog/Catalog",
    "subcluster": "default_subcluster",
    "sandbox": "",
    "is_primary": true,
    "version": "v24.3.0-20240613"
  },
  {
    "address": "192.2.0.4",
    "name": "v_test_db_node0004",
    "state": "UNKNOWN",
    "catalog_path": "/scratch_b/qa/test_db/v_test_db_node0004_catalog/Catalog",
    "subcluster": "sc1",
    "sandbox": "sand",
    "is_primary": false,
    "version": "v24.3.0-20240613"
  },
  {
    "address": "192.2.0.5",
    "name": "v_test_db_node0005",
    "state": "UNKNOWN",
    "catalog_path": "/scratch_b/qa/test_db/v_test_db_node0005_catalog/Catalog",
    "subcluster": "sc1",
    "sandbox": "sand",
    "is_primary": false,
    "version": "v24.3.0-20240613"
  }
]
[INFO] Successfully listed all nodes
- After you finish testing your sandboxed subcluster, unsandbox it:
$ vcluster unsandbox_subcluster --subcluster sc1 -p "" --config /opt/vertica/config/vertica_cluster.yaml
✔ Collect node information
✔ Collect cluster information
✔ Update node state from running database
✔ Check NMA service health
✔ Collect information for all up nodes
✔ Stop node
✔ Wait for subcluster nodes to come down
✔ Convert sandboxed subcluster into regular subcluster in catalog
✔ Delete database directories
✔ Check Vertica version
✔ Get Vertica startup command for unsandboxed nodes
✔ Start 0 node(s)
✔ Wait for subcluster nodes to come up
[INFO] Successfully unsandboxed subcluster sc1
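If you script these checks, you can filter the list_all_nodes output for the sandboxed nodes. This is a sketch that assumes the JSON array is pretty-printed with its opening and closing brackets on their own lines (the sed range strips the progress and [INFO] lines) and that jq is installed:
$ vcluster list_all_nodes --config /opt/vertica/config/vertica_cluster.yaml -p "" \
    | sed -n '/^\[$/,/^\]$/p' \
    | jq -r '.[] | select(.sandbox == "sand") | "\(.name)  \(.state)"'
v_test_db_node0004  UNKNOWN
v_test_db_node0005  UNKNOWN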