NUMA multi-node clusters

Use VCluster CLI to manage databases in a NUMA multi-node environment, where multiple nodes run on the same host on different ports.

VCluster CLI supports managing NUMA (Non-Uniform Memory Access) multi-node clusters, where multiple database nodes run on the same physical host, each on a different client port. This works both on genuine NUMA hardware and in standard environments where multiple nodes simply share a single host.

Prerequisites

  • TLS must be configured so that commands can run without supplying a password.
  • The configuration file must be at the default location (/opt/vertica/config/vertica_cluster.yaml), or you must supply its path with --config, as in the example below.
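
For example, to point a command at a configuration file in a non-default location (the path here is illustrative):

$ vcluster list_all_nodes --config /custom/path/vertica_cluster.yaml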

Create a database

You cannot create a database using the NUMA workflow. Instead, create your database using the standard create_db workflow (step 1 of the example at the end of this section shows an invocation). After creation, use the commands in the following sections to add and remove nodes.

Control node

Each host running NUMA nodes has one control node. The control node is always the first node added to that host and is responsible for distributing information to the other nodes on the host. Because they share a control node, all nodes on the host must belong to the same subcluster.
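
For example, assuming the first node on host 10.20.30.40 was added to subcluster sc1 (the subcluster name is illustrative), a second node on that host must join sc1 as well:

$ vcluster add_node --new-hosts-ports 10.20.30.40=6533 --subcluster sc1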

Add nodes

To add a node at a custom client port using the configuration file:

$ vcluster add_node --new-hosts-ports 10.20.30.40=6433

To add a node when no configuration file is available, specify the existing cluster hosts and their ports alongside the new host:

$ vcluster add_node --existing-hosts-ports 10.20.30.40=5433,10.20.30.41=5433 \
  --new-hosts-ports 10.20.30.42=5678

To add a node into an existing subcluster:

$ vcluster add_node --new-hosts-ports 10.20.30.40=6433 --subcluster sc1

To use a NUMA node configuration file, pass it with --numa-node-file. You can target a specific subcluster or add to the default subcluster:

# Add into subcluster sc1
$ vcluster add_node --numa-node-file ./numanodeconfig.json --subcluster sc1

# Add into the default subcluster
$ vcluster add_node --numa-node-file ./numanodeconfig.json

The --numa-node-file option accepts a JSON file with the following format:

{
  "nodes": [
    {
      "host": "192.168.1.101",
      "clientPort": 5678,
      "numaMode": "auto"
    },
    {
      "host": "192.168.1.102",
      "clientPort": 5678,
      "numaMode": 1
    }
  ]
}

Each node entry supports the following properties:

  • host (required): The IP address or hostname of the node.
  • clientPort (optional; default 5433): The client port for the node.
  • numaMode (optional; default null): Controls NUMA node assignment. Accepted values are a non-negative integer (a specific NUMA node index), "auto" (selects the least-busy NUMA node), or omitted (no NUMA pinning; the node is added through the standard add_node workflow).
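
For example, a minimal sketch of a node entry that relies on both defaults (the host address is illustrative): the node is added at client port 5433 with no NUMA pinning.

{
  "nodes": [
    {
      "host": "192.168.1.103"
    }
  ]
}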

Remove nodes

Each node has a unique name (for example, v_dbname_node0001). When nodes run on custom ports, remove them by node name rather than by IP address:

$ vcluster remove_node --remove-node-names v_test_db_node0004

To remove a node by name when no configuration file is available:

$ vcluster remove_node \
  --existing-hosts-ports 10.20.30.40=5433,10.20.30.41=5433,10.20.30.42=5678 \
  --remove-node-names v_test_db_node0003

List nodes

To list all nodes using the configuration file:

$ vcluster list_all_nodes

To list nodes when no configuration file is available:

$ vcluster list_all_nodes \
  --existing-hosts-ports 10.20.30.40=5433,10.20.30.41=5433,10.20.30.42=5678

Example

The following example walks through a complete NUMA workflow: creating a database, adding nodes on custom ports (including a second node on an existing host), confirming the cluster state, and removing a node.

1. Create the database.

$ vcluster create_db --db-name test_db \
    --hosts 10.20.30.40,10.20.30.41 \
    --catalog-path /data --data-path /data \
    --password-file /password.txt

2. Add two nodes at custom ports on separate hosts.

$ vcluster add_node --new-hosts-ports 10.20.30.40=6433,10.20.30.41=7433

3. Add another node on the same host as the first node, on a different port.

$ vcluster add_node --new-hosts-ports 10.20.30.40=6533

4. Confirm that all nodes are present.

$ vcluster list_all_nodes

5. Remove one of the nodes added in step 2. Node names are assigned sequentially, so the nodes added in step 2 are v_test_db_node0003 and v_test_db_node0004.

$ vcluster remove_node --remove-node-names v_test_db_node0003
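
6. Confirm the removal by listing the nodes again, as in step 4.

$ vcluster list_all_nodes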