Subclusters help you organize the nodes in your clusters to isolate workloads and make elastic scaling easier. See Subclusters for an overview of how subclusters can help you.
Managing subclusters
- 1: Creating subclusters
- 2: Duplicating a subcluster
- 3: Adding and removing nodes from subclusters
- 4: Managing workloads with subclusters
- 5: Starting and stopping subclusters
- 6: Altering subcluster settings
- 7: Removing subclusters
1 - Creating subclusters
By default, new Eon Mode databases contain a single primary subcluster named default_subcluster. This subcluster contains all nodes that are part of the database when you create it. You will often want to create subclusters to separate and manage workloads. You have three options to add subclusters to the database:
- Use the admintools command line to add a new subcluster from nodes in the database cluster
- Use admintools to create a duplicate of an existing subcluster
- Use the Management Console to provision and create a subcluster
Create a subcluster using admintools
To create a subcluster, use the admintools db_add_subcluster tool:
$ admintools -t db_add_subcluster --help
Usage: db_add_subcluster [options]
Options:
-h, --help show this help message and exit
-d DB, --database=DB Name of database to be modified
-s HOSTS, --hosts=HOSTS
Comma separated list of hosts to add to the subcluster
-p DBPASSWORD, --password=DBPASSWORD
Database password in single quotes
-c SCNAME, --subcluster=SCNAME
Name of the new subcluster for the new node
--is-primary Create primary subcluster
--is-secondary Create secondary subcluster
--control-set-size=CONTROLSETSIZE
Set the number of nodes that will run spread within
the subcluster
--like=CLONESUBCLUSTER
Name of an existing subcluster from which to clone
properties for the new subcluster
--timeout=NONINTERACTIVE_TIMEOUT
set a timeout (in seconds) to wait for actions to
complete ('never') will wait forever (implicitly sets
-i)
-i, --noprompts do not stop and wait for user input(default false).
Setting this implies a timeout of 20 min.
The simplest command adds an empty subcluster. It requires the database name, password, and name for the new subcluster. This example adds a subcluster analytics_cluster to the verticadb database:
$ adminTools -t db_add_subcluster -d verticadb -p 'password' -c analytics_cluster
Creating new subcluster 'analytics_cluster'
Subcluster added to verticadb successfully.
By default, admintools creates the subcluster as a secondary subcluster. You can have it create a primary subcluster instead by supplying the --is-primary argument.
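For example, a command along these lines creates a primary subcluster (the subcluster name here is illustrative):
$ admintools -t db_add_subcluster -d verticadb -p 'password' -c load_primary --is-primary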
Adding nodes while creating a subcluster
You can also specify one or more hosts for admintools to add to the subcluster as new nodes. These hosts must be part of the cluster but not already part of the database. For example, you can use hosts that you added to the cluster using the MC or admintools, or hosts that remain part of the cluster after you dropped nodes from the database. This example creates a subcluster analytics_cluster and uses the -s option to specify the available hosts in the cluster:
$ adminTools -t db_add_subcluster -c analytics_cluster -d verticadb -p 'password' -s 10.0.33.77,10.0.33.181,10.0.33.85
View the subscription status of all nodes in the database with the following query that joins the V_CATALOG.NODES and V_CATALOG.NODE_SUBSCRIPTIONS system tables:
=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
= ns.node_name) ORDER BY 1,2,3;
subcluster_name | node_name | shard_name | subscription_state
-------------------+----------------------+-------------+--------------------
analytics_cluster | v_verticadb_node0004 | replica | ACTIVE
analytics_cluster | v_verticadb_node0004 | segment0001 | ACTIVE
analytics_cluster | v_verticadb_node0004 | segment0003 | ACTIVE
analytics_cluster | v_verticadb_node0005 | replica | ACTIVE
analytics_cluster | v_verticadb_node0005 | segment0001 | ACTIVE
analytics_cluster | v_verticadb_node0005 | segment0002 | ACTIVE
analytics_cluster | v_verticadb_node0006 | replica | ACTIVE
analytics_cluster | v_verticadb_node0006 | segment0002 | ACTIVE
analytics_cluster | v_verticadb_node0006 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0001 | replica | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)
If you do not include hosts when you create the subcluster, you must manually rebalance the shards in the subcluster when you add nodes at a later time. For more information, see Updating Shard Subscriptions After Adding Nodes.
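That rebalance is a single call to the REBALANCE_SHARDS function scoped to the subcluster, for example:
=> SELECT REBALANCE_SHARDS('analytics_cluster');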
Subclusters and large cluster
Vertica has a feature named large cluster that helps manage broadcast messages as the database cluster grows. It has several impacts on adding new subclusters:
- If you create a subcluster with 16 or more nodes, Vertica automatically enables the large cluster feature. It sets the number of control nodes to the square root of the number of nodes in the subcluster. See Planning a large cluster.
- You can set the number of control nodes in a subcluster by using the --control-set-size option in the admintools command line (see the example below).
- If the database cluster has 120 control nodes, Vertica returns an error if you try to add a new subcluster. Every subcluster must have at least one control node. The database cannot have more than 120 control nodes. When the database reaches this limit, you must reduce the number of control nodes in other subclusters before you can add a new subcluster. See Changing the number of control nodes and realigning for more information.
- If you attempt to create a subcluster with a number of control nodes that would exceed the 120 control node limit, Vertica warns you and creates the subcluster with fewer control nodes. It adds as many control nodes as it can to the subcluster, which is 120 minus the current count of control nodes in the cluster. For example, suppose you create a 16-node subcluster in a database cluster that already has 118 control nodes. In this case, Vertica warns you and creates the subcluster with just 2 control nodes rather than the default 4.
See Large cluster for more information about the large cluster feature.
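For example, a command along the following lines creates a three-node subcluster with two control nodes (the host addresses and subcluster name are placeholders):
$ admintools -t db_add_subcluster -d verticadb -p 'password' -c reporting_cluster -s 10.0.33.77,10.0.33.181,10.0.33.85 --control-set-size=2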
2 - Duplicating a subcluster
Subclusters have many settings you can tune to get them to work just the way you want. After you have tuned a subcluster, you may want additional subclusters that are configured the same way. For example, suppose you have a subcluster that you have tuned to perform analytics workloads. To improve query throughput, you can create several more subclusters configured exactly like it. Instead of creating the new subclusters and then manually configuring them from scratch, you can duplicate the existing subcluster (called the source subcluster) to a new subcluster (the target subcluster).
When you create a new subcluster based on another subcluster, Vertica copies most of the source subcluster's settings. See below for a list of the settings that Vertica copies. These settings are both on the node level and the subcluster level.
Note
After you duplicate a subcluster, the target is not connected to the source in any way. Any changes you make to the source subcluster's settings after duplication are not copied to the target. The subclusters are completely independent after duplication.
Requirements for the target subcluster
You must have a set of hosts in your database cluster that you will use as the target of the subcluster duplication. Vertica forms these hosts into a target subcluster that receives most of the settings of the source subcluster. The hosts for the target subcluster must meet the following requirements:
- They must be part of your database cluster but not part of your database. For example, you can use hosts you have dropped from a subcluster or whose subcluster you have removed. Vertica returns an error if you attempt to duplicate a subcluster onto one or more nodes that are currently participating in the database.
Tip
If you want to duplicate the settings of a subcluster to another subcluster, remove the target subcluster (see Removing subclusters). Then duplicate the source subcluster onto the hosts of the now-removed target subcluster.
- The number of nodes you supply for the target subcluster must equal the number of nodes in the source subcluster. When duplicating the subcluster, Vertica performs a 1:1 copy of some node-level settings from each node in the source subcluster to a corresponding node in the target.
- The RAM and disk allocation for the hosts in the target subcluster should be at least the same as the source nodes. Technically, your target nodes can have less RAM or disk space than the source nodes. However, you will usually see performance issues in the new subcluster because the settings of the original subcluster will not be tuned for the resources of the target subcluster.
You can duplicate a subcluster even if some of the nodes in the source subcluster or hosts in the target are down. If nodes in the target are down, they use the catalog Vertica copied from the source node when they recover.
Duplication of subcluster-level settings
The following table lists the subcluster-level settings that Vertica copies from the source subcluster to the target.
Setting Type | Setting Details |
---|---|
Basic subcluster settings | Whether the subcluster is a primary or secondary subcluster. |
Large cluster settings | The number of control nodes in the subcluster. |
Resource pool settings |
Vertica creates a new resource pool for every subcluster-specific resource pool in the source subcluster. Note: duplicating a subcluster can fail due to subcluster-specific resource pools. If creating the subcluster-specific resource pools leaves less than 25% of the total memory free for the general pool, Vertica stops the duplication and reports an error. Subcluster-specific resource pool cascade settings are copied from the source subcluster and are applied to the newly-created resource pool for the target subcluster. Subcluster-level overrides on global resource pool settings, such as MEMORYSIZE, are also copied; see Managing workload resources in an Eon Mode database for more information. Grants on resource pools are copied from the source subcluster. |
Connection load balancing settings |
If the source subcluster is part of a subcluster-based load balancing group (you created the load balancing group using CREATE LOAD BALANCE GROUP...WITH SUBCLUSTER), the new subcluster is added to the group. See Creating Connection Load Balance Groups. Important: Vertica adds the new subcluster to the subcluster-based load balancing group. However, it does not create network addresses for the nodes in the target subcluster. Load balancing policies cannot direct connections to the new subcluster until you create network addresses for the nodes in the target subcluster. See Creating network addresses for the steps you must take. |
Storage policy settings | Table and table partition pinning policies are copied from the source to the target subcluster. See Pinning Depot Objects for more information. Any existing storage policies on the target subcluster are dropped before the policies are copied from the source. |
Vertica does not copy the following subcluster settings:
Setting Type | Setting Details |
---|---|
Basic subcluster settings |
|
Connection load balancing settings |
Address-based load balancing groups are not duplicated for the target subcluster. For example, suppose you created a load balancing group for the source subcluster by adding the network addresses of all of the subcluster's nodes. In this case, Vertica does not create a load balancing group for the target subcluster because it does not duplicate the network addresses of the source nodes (see the next section). Because it does not copy the addresses, it cannot create an address-based group. |
Duplication of node-level settings
When Vertica duplicates a subcluster, it maps each node in the source subcluster to a node in the destination subcluster. Then it copies relevant node-level settings from each individual source node to the corresponding target node.
For example, suppose you have a three-node subcluster consisting of nodes named node01, node02, and node03. The target subcluster has nodes named node04, node05, and node06. In this case, Vertica copies the settings from node01 to node04, from node02 to node05, and from node03 to node06.
The node-level settings that Vertica copies from the source nodes to the target nodes are:
Setting Type | Setting Details |
---|---|
Configuration parameters |
Vertica copies the value of configuration parameters that you have set at the node level on the source node to the target node. For example, suppose you set CompressCatalogOnDisk at the node level on source node node01. If you then duplicate the subcluster containing node01, the setting is copied to the corresponding target node. |
Eon Mode settings |
|
Storage location settings |
The DATA, TEMP, DEPOT, and USER storage location paths on the source node are duplicated on the target node. When duplicating node-specific paths (such as DATA or DEPOT), the path names are adjusted for the new node name. For example, a depot path that contains the source node's name is renamed to use the target node's name. Important: The directories for these storage locations on the target node must be empty. They must also have the correct file permissions to allow Vertica to read and write to them. Vertica does not duplicate a storage location if it cannot access its directory on the target node or if the directory is not empty. In this case, the target node will not have the location defined after the duplication process finishes. Admintools does not warn you if any locations were not duplicated. If you find that storage locations have not been duplicated on one or more target nodes, you must fix the issues with the directories on the target nodes. Then re-run the duplication command. |
Large cluster settings |
Control node assignments are copied from the source node to the target node:
|
Vertica does not copy the following node-level settings:
Setting Type | Setting Details |
---|---|
Connection load balancing settings | Network Addresses are not copied. The destination node's network addresses do not depend on the settings of the source node. Therefore, Vertica cannot determine what the target node's addresses should be. |
Depot settings | Depot-related configuration parameters that can be set on a node level (such as FileDeletionServiceInterval) are not copied from the source node to the target node. |
Using admintools to duplicate a subcluster
To duplicate a subcluster, you use the same admintools db_add_subcluster tool that you use to create a new subcluster (see Creating subclusters). In addition to the required options to create a subcluster (the list of hosts, name for the new subcluster, database name, and so on), you also pass the --like option with the name of the source subcluster you want to duplicate.
Important
When you use the --like option, you cannot use the --is-secondary or --control-set-size options. Vertica determines whether the new subcluster is secondary and the number of control nodes it contains based on the source subcluster. If you supply these options along with the --like option, admintools returns an error.
The following examples demonstrate duplicating a three-node subcluster named analytics_1. The first example examines some of the settings in the analytics_1 subcluster:
- An override of the global TM resource pool's memory size.
- Its own resource pool named analytics_pool.
- Its membership in a subcluster-based load balancing group named analytics.
=> SELECT name, subcluster_name, memorysize FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;
name | subcluster_name | memorysize
------+-----------------+------------
tm | analytics_1 | 0%
(1 row)
=> SELECT name, subcluster_name, memorysize, plannedconcurrency
FROM resource_pools WHERE subcluster_name IS NOT NULL;
name | subcluster_name | memorysize | plannedconcurrency
----------------+-----------------+------------+--------------------
analytics_pool | analytics_1 | 70% | 8
(1 row)
=> SELECT * FROM LOAD_BALANCE_GROUPS;
name | policy | filter | type | object_name
-----------+------------+-----------+------------+-------------
analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_1
(1 row)
The following example calls the admintools db_add_subcluster tool to duplicate the analytics_1 subcluster onto a set of three hosts to create a subcluster named analytics_2.
$ admintools -t db_add_subcluster -d verticadb \
-s 10.11.12.13,10.11.12.14,10.11.12.15 \
-p mypassword --like=analytics_1 -c analytics_2
Creating new subcluster 'analytics_2'
Adding new hosts to 'analytics_2'
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
Warning when creating depot location for node: v_verticadb_node0007
WARNING: Target node v_verticadb_node0007 is down, so depot size has been
estimated from depot location on initiator. As soon as the node comes
up, its depot size might be altered depending on its disk size
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
Warning when creating depot location for node: v_verticadb_node0008
WARNING: Target node v_verticadb_node0008 is down, so depot size has been
estimated from depot location on initiator. As soon as the node comes
up, its depot size might be altered depending on its disk size
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
Warning when creating depot location for node: v_verticadb_node0009
WARNING: Target node v_verticadb_node0009 is down, so depot size has been
estimated from depot location on initiator. As soon as the node comes
up, its depot size might be altered depending on its disk size
Cloning subcluster properties
NOTICE: Nodes in subcluster analytics_1 have network addresses, you
might need to configure network addresses for nodes in subcluster
analytics_2 in order to get load balance groups to work correctly.
Replicating configuration to all nodes
Generating new configuration information and reloading spread
Starting nodes:
v_verticadb_node0007 (10.11.12.81)
v_verticadb_node0008 (10.11.12.209)
v_verticadb_node0009 (10.11.12.186)
Starting Vertica on all nodes. Please wait, databases with a large catalog
may take a while to initialize.
Checking database state for newly added nodes
Node Status: v_verticadb_node0007: (DOWN) v_verticadb_node0008:
(DOWN) v_verticadb_node0009: (DOWN)
Node Status: v_verticadb_node0007: (INITIALIZING) v_verticadb_node0008:
(INITIALIZING) v_verticadb_node0009: (INITIALIZING)
Node Status: v_verticadb_node0007: (UP) v_verticadb_node0008:
(UP) v_verticadb_node0009: (UP)
Syncing catalog on verticadb with 2000 attempts.
Multi-node DB add completed
Nodes added to subcluster analytics_2 successfully.
Subcluster added to verticadb successfully.
Re-running the queries in the first part of the example shows that the settings from analytics_1 have been duplicated in analytics_2:
=> SELECT name, subcluster_name, memorysize FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;
name | subcluster_name | memorysize
------+-----------------+------------
tm | analytics_1 | 0%
tm | analytics_2 | 0%
(2 rows)
=> SELECT name, subcluster_name, memorysize, plannedconcurrency
FROM resource_pools WHERE subcluster_name IS NOT NULL;
name | subcluster_name | memorysize | plannedconcurrency
----------------+-----------------+------------+--------------------
analytics_pool | analytics_1 | 70% | 8
analytics_pool | analytics_2 | 70% | 8
(2 rows)
=> SELECT * FROM LOAD_BALANCE_GROUPS;
name | policy | filter | type | object_name
-----------+------------+-----------+------------+-------------
analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_2
analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_1
(2 rows)
As noted earlier, even though the analytics_2 subcluster is part of the analytics load balancing group, its nodes do not have network addresses defined for them. Until you define network addresses for the nodes, Vertica cannot redirect client connections to them.
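For example, you could define addresses for the new nodes with CREATE NETWORK ADDRESS, using the IP addresses reported when the subcluster was created (the node names and addresses below come from the sample output above; substitute your own):
=> CREATE NETWORK ADDRESS node07 ON v_verticadb_node0007 WITH '10.11.12.81';
=> CREATE NETWORK ADDRESS node08 ON v_verticadb_node0008 WITH '10.11.12.209';
=> CREATE NETWORK ADDRESS node09 ON v_verticadb_node0009 WITH '10.11.12.186';
See Creating network addresses for details.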
3 - Adding and removing nodes from subclusters
You will often want to add new nodes to and remove existing nodes from a subcluster. This ability lets you scale your database to respond to changing analytic needs. For more information on how adding nodes to a subcluster affects your database's performance, see Scaling your Eon Mode database.
Adding new nodes to a subcluster
You can add nodes to a subcluster to meet additional workloads. The nodes that you add to the subcluster must already be part of your cluster. These can be:
- Nodes that you removed from other subclusters.
- Nodes you added following the steps in Add nodes to a cluster in AWS using Management Console.
- Nodes you created using your cloud provider's interface, such as the AWS EC2 "Launch more like this" feature.
To add new nodes to a subcluster, use the db_add_node command of admintools:
$ adminTools -t db_add_node -h
Usage: db_add_node [options]
Options:
-h, --help show this help message and exit
-d DB, --database=DB Name of the database
-s HOSTS, --hosts=HOSTS
Comma separated list of hosts to add to database
-p DBPASSWORD, --password=DBPASSWORD
Database password in single quotes
-a AHOSTS, --add=AHOSTS
Comma separated list of hosts to add to database
-c SCNAME, --subcluster=SCNAME
Name of subcluster for the new node
--timeout=NONINTERACTIVE_TIMEOUT
set a timeout (in seconds) to wait for actions to
complete ('never') will wait forever (implicitly sets
-i)
-i, --noprompts do not stop and wait for user input(default false).
Setting this implies a timeout of 20 min.
--compat21 (deprecated) Use Vertica 2.1 method using node names
instead of hostnames
If you do not use the -c option, Vertica adds new nodes to the default subcluster (set to default_subcluster in new databases). This example adds a new node without specifying the subcluster:
$ adminTools -t db_add_node -p 'password' -d verticadb -s 10.11.12.117
Subcluster not specified, validating default subcluster
Nodes will be added to subcluster 'default_subcluster'
Verifying database connectivity...10.11.12.10
Eon database detected, creating new depot locations for newly added nodes
Creating depots for each node
Generating new configuration information and reloading spread
Replicating configuration to all nodes
Starting nodes
Starting nodes:
v_verticadb_node0004 (10.11.12.117)
Starting Vertica on all nodes. Please wait, databases with a
large catalog may take a while to initialize.
Checking database state
Node Status: v_verticadb_node0004: (DOWN)
Node Status: v_verticadb_node0004: (DOWN)
Node Status: v_verticadb_node0004: (DOWN)
Node Status: v_verticadb_node0004: (DOWN)
Node Status: v_verticadb_node0004: (UP)
Communal storage detected: syncing catalog
Multi-node DB add completed
Nodes added to verticadb successfully.
You will need to redesign your schema to take advantage of the new nodes.
To add nodes to a specific existing subcluster, use the db_add_node tool's -c option:
$ adminTools -t db_add_node -s 10.11.12.178 -d verticadb -p 'password' \
-c analytics_subcluster
Subcluster 'analytics_subcluster' specified, validating
Nodes will be added to subcluster 'analytics_subcluster'
Verifying database connectivity...10.11.12.10
Eon database detected, creating new depot locations for newly added nodes
Creating depots for each node
Generating new configuration information and reloading spread
Replicating configuration to all nodes
Starting nodes
Starting nodes:
v_verticadb_node0007 (10.11.12.178)
Starting Vertica on all nodes. Please wait, databases with a
large catalog may take a while to initialize.
Checking database state
Node Status: v_verticadb_node0007: (DOWN)
Node Status: v_verticadb_node0007: (DOWN)
Node Status: v_verticadb_node0007: (DOWN)
Node Status: v_verticadb_node0007: (DOWN)
Node Status: v_verticadb_node0007: (UP)
Communal storage detected: syncing catalog
Multi-node DB add completed
Nodes added to verticadb successfully.
You will need to redesign your schema to take advantage of the new nodes.
Updating shard subscriptions after adding nodes
After you add nodes to a subcluster they do not yet subscribe to shards. You can view the subscription status of all nodes in your database using the following query that joins the V_CATALOG.NODES and V_CATALOG.NODE_SUBSCRIPTIONS system tables:
=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
= ns.node_name) ORDER BY 1,2,3;
subcluster_name | node_name | shard_name | subscription_state
----------------------+----------------------+-------------+--------------------
analytics_subcluster | v_verticadb_node0004 | |
analytics_subcluster | v_verticadb_node0005 | |
analytics_subcluster | v_verticadb_node0006 | |
default_subcluster | v_verticadb_node0001 | replica | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(12 rows)
You can see that none of the nodes in the newly-added analytics_subcluster have subscriptions.
To update the subscriptions for new nodes, call the REBALANCE_SHARDS function. You can limit the rebalance to the subcluster containing the new nodes by passing its name to the REBALANCE_SHARDS function call. The following example runs rebalance shards to update the analytics_subcluster's subscriptions:
=> SELECT REBALANCE_SHARDS('analytics_subcluster');
REBALANCE_SHARDS
-------------------
REBALANCED SHARDS
(1 row)
=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
= ns.node_name) ORDER BY 1,2,3;
subcluster_name | node_name | shard_name | subscription_state
----------------------+----------------------+-------------+--------------------
analytics_subcluster | v_verticadb_node0004 | replica | ACTIVE
analytics_subcluster | v_verticadb_node0004 | segment0001 | ACTIVE
analytics_subcluster | v_verticadb_node0004 | segment0003 | ACTIVE
analytics_subcluster | v_verticadb_node0005 | replica | ACTIVE
analytics_subcluster | v_verticadb_node0005 | segment0001 | ACTIVE
analytics_subcluster | v_verticadb_node0005 | segment0002 | ACTIVE
analytics_subcluster | v_verticadb_node0006 | replica | ACTIVE
analytics_subcluster | v_verticadb_node0006 | segment0002 | ACTIVE
analytics_subcluster | v_verticadb_node0006 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0001 | replica | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)
Removing nodes
Your database must meet these requirements before you can remove a node from a subcluster:
- To remove a node from a primary subcluster, all of the primary nodes in the subcluster must be up, and the database must be able to maintain quorum after the primary node is removed (see Data integrity and high availability in an Eon Mode database). These requirements are necessary because Vertica calls REBALANCE_SHARDS to redistribute shard subscriptions among the remaining nodes in the subcluster. If you attempt to remove a primary node when the database does not meet the requirements, the rebalance shards process waits until either the down nodes recover or a timeout elapses. While it waits, you periodically see the message "Rebalance shards polling iteration number [nn]" indicating that the rebalance process is waiting to complete.
You can remove nodes from a secondary subcluster even when nodes in the subcluster are down.
- If your database has the large cluster feature enabled, you cannot remove a node if it is the subcluster's last control node and there are nodes that depend on it. See Large cluster for more information.
If there are other control nodes in the subcluster, you can drop a control node. Vertica reassigns the nodes that depend on the node being dropped to other control nodes.
To remove one or more nodes, use the admintools db_remove_node tool:
$ adminTools -t db_remove_node -p 'password' -d verticadb -s 10.11.12.117
connecting to 10.11.12.10
Waiting for rebalance shards. We will wait for at most 36000 seconds.
Rebalance shards polling iteration number [0], started at [14:56:41], time out at [00:56:41]
Attempting to drop node v_verticadb_node0004 ( 10.11.12.117 )
Shutting down node v_verticadb_node0004
Sending node shutdown command to '['v_verticadb_node0004', '10.11.12.117', '/vertica/data', '/vertica/data']'
Deleting catalog and data directories
Update admintools metadata for v_verticadb_node0004
Eon mode detected. The node v_verticadb_node0004 has been removed from host 10.11.12.117. To remove the
node metadata completely, please clean up the files corresponding to this node, at the communal
location: s3://eonbucket/metadata/verticadb/nodes/v_verticadb_node0004
Reload spread configuration
Replicating configuration to all nodes
Checking database state
Node Status: v_verticadb_node0001: (UP) v_verticadb_node0002: (UP) v_verticadb_node0003: (UP)
Communal storage detected: syncing catalog
When you remove one or more nodes from a subcluster, Vertica automatically rebalances shards in the subcluster. You do not need to manually rebalance shards after removing nodes.
Moving nodes between subclusters
To move a node from one subcluster to another:
1. Remove the node or nodes from the subcluster it is currently a part of.
2. Add the node to the subcluster you want to move it to (see the example below).
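For example, to move the node on host 10.11.12.117 into analytics_subcluster, you could combine the two admintools commands shown earlier in this section (the host address and subcluster name are placeholders for your own):
$ admintools -t db_remove_node -d verticadb -p 'password' -s 10.11.12.117
$ admintools -t db_add_node -d verticadb -p 'password' -s 10.11.12.117 -c analytics_subcluster
After adding the node, call REBALANCE_SHARDS on the target subcluster so that the moved node subscribes to shards.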
4 - Managing workloads with subclusters
By default, queries are limited to executing on the nodes in the subcluster that contains the initiator node (the node the client is connected to). This example demonstrates executing an explain plan for a query when connected to node 4 of a cluster. Node 4 is part of a subcluster containing nodes 4 through 6. You can see that only the nodes in the subcluster will participate in a query:
=> EXPLAIN SELECT customer_name, customer_state FROM customer_dimension LIMIT 10;
QUERY PLAN
--------------------------------------------------------------------------------
QUERY PLAN DESCRIPTION:
------------------------------
EXPLAIN SELECT customer_name, customer_state FROM customer_dimension LIMIT 10;
Access Path:
+-SELECT LIMIT 10 [Cost: 442, Rows: 10 (NO STATISTICS)] (PATH ID: 0)
| Output Only: 10 tuples
| Execute on: Query Initiator
| +---> STORAGE ACCESS for customer_dimension [Cost: 442, Rows: 10K (NO
STATISTICS)] (PATH ID: 1)
| | Projection: public.customer_dimension_b0
| | Materialize: customer_dimension.customer_name,
customer_dimension.customer_state
| | Output Only: 10 tuples
| | Execute on: v_verticadb_node0004, v_verticadb_node0005,
v_verticadb_node0006
. . .
In Eon Mode, you can override the MEMORYSIZE, MAXMEMORYSIZE, and MAXQUERYMEMORYSIZE settings for built-in global resource pools to fine-tune workloads within a subcluster. See Managing workload resources in an Eon Mode database for more information.
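For instance, the TM pool override shown in the duplication example (MEMORYSIZE set to 0% for one subcluster) could be created with a statement along these lines; this is a sketch that assumes your Vertica version supports the ALTER RESOURCE POOL ... FOR SUBCLUSTER syntax:
=> ALTER RESOURCE POOL tm FOR SUBCLUSTER analytics_cluster MEMORYSIZE '0%';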
What happens when a subcluster cannot run a query
In order to process queries, each subcluster's nodes must have full coverage of all shards in the database. If the nodes do not have full coverage (which can happen if nodes are down), the subcluster can no longer process queries. This state does not cause the subcluster to shut down. Instead, if you attempt to run a query on a subcluster in this state, you receive error messages telling you that not enough nodes are available to complete the query.
=> SELECT node_name, node_state FROM nodes
WHERE subcluster_name = 'analytics_cluster';
node_name | node_state
----------------------+------------
v_verticadb_node0004 | DOWN
v_verticadb_node0005 | UP
v_verticadb_node0006 | DOWN
(3 rows)
=> SELECT * FROM online_sales.online_sales_fact;
ERROR 9099: Cannot find participating nodes to run the query
Once the down nodes have recovered and the subcluster has full shard coverage, it will be able to process queries.
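For example, you might bring the down nodes back with the admintools restart_node tool (the host addresses here are placeholders):
$ admintools -t restart_node -d verticadb -p 'password' -s 10.0.33.77,10.0.33.85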
Controlling where a query runs
You can control where specific types of queries run by controlling which subcluster the clients connect to. The best way to enforce these restrictions is to create a set of connection load balancing policies that steer clients from specific IP address ranges to nodes in the correct subcluster.
For example, suppose you have the following database with two subclusters: one for performing data loading, and one for performing analytics.
The data load tasks come from a set of ETL systems in the IP 10.20.0.0/16 address range. Analytics tasks can come from any other IP address. In this case, you can create a set of connection load balance policies that ensure that the ETL systems connect to the data load subcluster and all other connections go to the analytics subcluster.
=> SELECT node_name,node_address,node_address_family,subcluster_name
FROM v_catalog.nodes;
node_name | node_address | node_address_family | subcluster_name
----------------------+--------------+---------------------+--------------------
v_verticadb_node0001 | 10.11.12.10 | ipv4 | load_subcluster
v_verticadb_node0002 | 10.11.12.20 | ipv4 | load_subcluster
v_verticadb_node0003 | 10.11.12.30 | ipv4 | load_subcluster
v_verticadb_node0004 | 10.11.12.40 | ipv4 | analytics_subcluster
v_verticadb_node0005 | 10.11.12.50 | ipv4 | analytics_subcluster
v_verticadb_node0006 | 10.11.12.60 | ipv4 | analytics_subcluster
(6 rows)
=> CREATE NETWORK ADDRESS node01 ON v_verticadb_node0001 WITH '10.11.12.10';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node02 ON v_verticadb_node0002 WITH '10.11.12.20';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node03 ON v_verticadb_node0003 WITH '10.11.12.30';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node04 ON v_verticadb_node0004 WITH '10.11.12.40';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node05 ON v_verticadb_node0005 WITH '10.11.12.50';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node06 ON v_verticadb_node0006 WITH '10.11.12.60';
CREATE NETWORK ADDRESS
=> CREATE LOAD BALANCE GROUP load_subcluster WITH SUBCLUSTER load_subcluster
FILTER '0.0.0.0/0';
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP analytics_subcluster WITH SUBCLUSTER
analytics_subcluster FILTER '0.0.0.0/0';
CREATE LOAD BALANCE GROUP
=> CREATE ROUTING RULE etl_systems ROUTE '10.20.0.0/16' TO load_subcluster;
CREATE ROUTING RULE
=> CREATE ROUTING RULE analytic_clients ROUTE '0.0.0.0/0' TO analytics_subcluster;
CREATE ROUTING RULE
Once you have created the load balance policies, you can test them using the DESCRIBE_LOAD_BALANCE_DECISION function.
=> SELECT describe_load_balance_decision('192.168.1.1');
describe_load_balance_decision
--------------------------------
Describing load balance decision for address [192.168.1.1]
Load balance cache internal version id (node-local): [1]
Considered rule [etl_systems] source ip filter [10.20.0.0/16]...
input address does not match source ip filter for this rule.
Considered rule [analytic_clients] source ip filter [0.0.0.0/0]...
input address matches this rule
Matched to load balance group [analytics_cluster] the group has
policy [ROUNDROBIN] number of addresses [3]
(0) LB Address: [10.11.12.181]:5433
(1) LB Address: [10.11.12.205]:5433
(2) LB Address: [10.11.12.192]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.11.12.205]
port [5433]
(1 row)
=> SELECT describe_load_balance_decision('10.20.1.1');
describe_load_balance_decision
--------------------------------
Describing load balance decision for address [10.20.1.1]
Load balance cache internal version id (node-local): [1]
Considered rule [etl_systems] source ip filter [10.20.0.0/16]...
input address matches this rule
Matched to load balance group [default_cluster] the group has policy
[ROUNDROBIN] number of addresses [3]
(0) LB Address: [10.11.12.10]:5433
(1) LB Address: [10.11.12.20]:5433
(2) LB Address: [10.11.12.30]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.11.12.20]
port [5433]
(1 row)
Normally, with these policies, all queries run by the ETL system will run on the load subcluster. All other queries will run on the analytics subcluster. There are some cases (especially if a subcluster is down or draining) where a client may connect to a node in another subcluster. For this reason, clients should always verify they are connected to the correct subcluster. See Connection load balancing policies for more information about load balancing policies.
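One way for a client to confirm which subcluster it landed on is to query the catalog for the local node's subcluster, for example (this sketch assumes the LOCAL_NODE_NAME function is available in your version):
=> SELECT subcluster_name FROM v_catalog.nodes WHERE node_name = LOCAL_NODE_NAME();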
5 - Starting and stopping subclusters
Subclusters make it convenient to start and stop a group of nodes as needed. You start and stop them with admintools commands or Vertica functions. You can also start and stop subclusters with Management Console.
Starting a subcluster
To start a subcluster, use the admintools command restart_subcluster:
$ adminTools -t restart_subcluster -h
Usage: restart_subcluster [options]
Options:
-h, --help show this help message and exit
-d DB, --database=DB Name of database whose subcluster is to be restarted
-c SCNAME, --subcluster=SCNAME
Name of subcluster to be restarted
-p DBPASSWORD, --password=DBPASSWORD
Database password in single quotes
--timeout=NONINTERACTIVE_TIMEOUT
set a timeout (in seconds) to wait for actions to
complete ('never') will wait forever (implicitly sets
-i)
-i, --noprompts do not stop and wait for user input(default false).
Setting this implies a timeout of 20 min.
-F, --force Force the nodes in the subcluster to start and auto
recover if necessary
This example starts the subcluster analytics_cluster:
$ adminTools -t restart_subcluster -c analytics_cluster \
-d verticadb -p password
*** Restarting subcluster for database verticadb ***
Restarting host [10.11.12.192] with catalog [v_verticadb_node0006_catalog]
Restarting host [10.11.12.181] with catalog [v_verticadb_node0004_catalog]
Restarting host [10.11.12.205] with catalog [v_verticadb_node0005_catalog]
Issuing multi-node restart
Starting nodes:
v_verticadb_node0004 (10.11.12.181)
v_verticadb_node0005 (10.11.12.205)
v_verticadb_node0006 (10.11.12.192)
Starting Vertica on all nodes. Please wait, databases with a large
catalog may take a while to initialize.
Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (UP)
v_verticadb_node0005: (UP) v_verticadb_node0006: (UP)
Communal storage detected: syncing catalog
Restart Subcluster result: 1
Stopping a subcluster
You can stop a subcluster gracefully with the function SHUTDOWN_WITH_DRAIN, or immediately with SHUTDOWN_SUBCLUSTER. You can also shut down subclusters with the admintools command stop_subcluster.
Graceful shutdown
The SHUTDOWN_WITH_DRAIN function drains a subcluster's client connections before shutting it down. The function first marks all nodes in the specified subcluster as draining. Work from existing user sessions continues on draining nodes, but the nodes refuse new client connections and are excluded from load-balancing operations. A dbadmin user can still connect to draining nodes. For more information about client connection draining, see Drain client connections.
To run the SHUTDOWN_WITH_DRAIN function, you must specify a timeout value. The function's behavior depends on the sign of the timeout value:
- Positive: The nodes drain until either all the existing connections close or the function reaches the runtime limit set by the timeout value. As soon as one of these conditions is met, the function sends a shutdown message to the subcluster and returns.
- Zero: The function immediately closes any active user sessions on the subcluster and then shuts down the subcluster and returns.
- Negative: The function marks the subcluster's nodes as draining and waits to shut down the subcluster until all active user sessions disconnect.
After all nodes in a draining subcluster are down, its nodes are automatically reset to a non-draining status.
The following example demonstrates how you can use a positive timeout value to give active user sessions time to finish their work before shutting down the subcluster:
=> SELECT node_name, subcluster_name, is_draining, count_client_user_sessions, oldest_session_user FROM draining_status ORDER BY 1;
node_name | subcluster_name | is_draining | count_client_user_sessions | oldest_session_user
----------------------+--------------------+-------------+----------------------------+---------------------
v_verticadb_node0001 | default_subcluster | f | 0 |
v_verticadb_node0002 | default_subcluster | f | 0 |
v_verticadb_node0003 | default_subcluster | f | 0 |
v_verticadb_node0004 | analytics | f | 1 | analyst
v_verticadb_node0005 | analytics | f | 0 |
v_verticadb_node0006 | analytics | f | 0 |
(6 rows)
=> SELECT SHUTDOWN_WITH_DRAIN('analytics', 300);
NOTICE 0: Draining has started on subcluster (analytics)
NOTICE 0: Begin shutdown of subcluster (analytics)
SHUTDOWN_WITH_DRAIN
--------------------------------------------------------------------------------------------------------------------
Set subcluster (analytics) to draining state
Waited for 3 nodes to drain
Shutdown message sent to subcluster (analytics)
(1 row)
You can query the NODES system table to confirm that the subcluster shut down:
=> SELECT subcluster_name, node_name, node_state FROM nodes;
subcluster_name | node_name | node_state
--------------------+----------------------+------------
default_subcluster | v_verticadb_node0001 | UP
default_subcluster | v_verticadb_node0002 | UP
default_subcluster | v_verticadb_node0003 | UP
analytics | v_verticadb_node0004 | DOWN
analytics | v_verticadb_node0005 | DOWN
analytics | v_verticadb_node0006 | DOWN
(6 rows)
If you want to see more information about the draining and shutdown events, such as whether all user sessions finished their work before the timeout, you can query the dc_draining_events table. In this case, the subcluster still had one active user session when the function reached timeout:
=> SELECT event_type, event_type_name, event_description, event_result, event_result_name FROM dc_draining_events;
event_type | event_type_name | event_description | event_result | event_result_name
------------+------------------------------+---------------------------------------------------------------------+--------------+-------------------
0 | START_DRAIN_SUBCLUSTER | START_DRAIN for SHUTDOWN of subcluster (analytics) | 0 | SUCCESS
2 | START_WAIT_FOR_NODE_DRAIN | Wait timeout is 300 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 0 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 60 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 120 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 125 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 180 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 240 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 250 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 300 seconds | 4 | INFORMATIONAL
3 | END_WAIT_FOR_NODE_DRAIN | Wait for drain ended with 1 sessions remaining | 2 | TIMEOUT
5 | BEGIN_SHUTDOWN_AFTER_DRAIN | Staring shutdown of subcluster (analytics) following drain | 4 | INFORMATIONAL
(12 rows)
After you restart the subcluster, you can query the DRAINING_STATUS system table to confirm that the nodes have reset their draining statuses to not draining:
=> SELECT node_name, subcluster_name, is_draining, count_client_user_sessions, oldest_session_user FROM draining_status ORDER BY 1;
node_name | subcluster_name | is_draining | count_client_user_sessions | oldest_session_user
----------------------+--------------------+-------------+----------------------------+---------------------
v_verticadb_node0001 | default_subcluster | f | 0 |
v_verticadb_node0002 | default_subcluster | f | 0 |
v_verticadb_node0003 | default_subcluster | f | 0 |
v_verticadb_node0004 | analytics | f | 0 |
v_verticadb_node0005 | analytics | f | 0 |
v_verticadb_node0006 | analytics | f | 0 |
(6 rows)
Immediate shutdown
To shut down a subcluster immediately, call SHUTDOWN_SUBCLUSTER. The following example shuts down the analytics subcluster immediately, without checking for active client connections:
=> SELECT SHUTDOWN_SUBCLUSTER('analytics');
SHUTDOWN_SUBCLUSTER
---------------------
Subcluster shutdown
(1 row)
admintools
You can use the stop_subcluster tool to stop a subcluster:
$ adminTools -t stop_subcluster -h
Usage: stop_subcluster [options]
Options:
-h, --help show this help message and exit
-d DB, --database=DB Name of database whose subcluster is to be stopped
-c SCNAME, --subcluster=SCNAME
Name of subcluster to be stopped
-p DBPASSWORD, --password=DBPASSWORD
Database password in single quotes
-n DRAIN_SECONDS, --drain-seconds=DRAIN_SECONDS
Seconds to wait for user connections to close.
Default value is 60 seconds.
When the time expires, connections will be forcibly closed
and the db will shut down.
-F, --force Force the subcluster to shutdown immediately,
even if users are connected.
--timeout=NONINTERACTIVE_TIMEOUT
set a timeout (in seconds) to wait for actions to
complete ('never') will wait forever (implicitly sets
-i)
-i, --noprompts do not stop and wait for user input(default false).
Setting this implies a timeout of 20 min.
By default, stop_subcluster calls SHUTDOWN_WITH_DRAIN to gracefully shut down the target subcluster. The shutdown process drains client connections from the subcluster before shutting it down.
The -n (--drain-seconds) option, which has a default value of 60 seconds, allows you to specify the number of seconds to wait before forcefully closing client connections and shutting down the subcluster. If you set a negative -n value, the subcluster is marked as draining but is not shut down until all active user sessions disconnect.
In the following example, the subcluster named analytics initially has an active client session, but the session closes before the timeout limit is reached and the subcluster shuts down:
$ admintools -t stop_subcluster -d verticadb -c analytics --password password --drain-seconds 200
--- Subcluster shutdown ---
Verifying subcluster 'analytics'
Node 'v_verticadb_node0004' will shutdown
Node 'v_verticadb_node0005' will shutdown
Node 'v_verticadb_node0006' will shutdown
Connecting to database to begin shutdown of subcluster 'analytics'
Shutdown will use connection draining.
Shutdown will wait for all client sessions to complete, up to 200 seconds
Then it will force a shutdown.
Poller has been running for 0:00:00.000022 seconds since 2022-07-28 12:18:04.891781
------------------------------------------------------------
client_sessions |node_count |node_names
--------------------------------------------------------------
0 |5 |v_verticadb_node0002,v_verticadb_node0004,v_verticadb_node0003,v_verticadb_node0...
1 |1 |v_verticadb_node0005
STATUS: vertica.engine.api.db_client.module is still running on 1 host: nodeIP as of 2022-07-28 12:18:14. See /opt/vertica/log/adminTools.log for full details.
Poller has been running for 0:00:10.383018 seconds since 2022-07-28 12:18:04.891781
...
------------------------------------------------------------
client_sessions |node_count |node_names
--------------------------------------------------------------
0 |3 |v_verticadb_node0002,v_verticadb_node0001,v_verticadb_node0003
down |3 |v_verticadb_node0004,v_verticadb_node0005,v_verticadb_node0006
Stopping poller drain_status because it was canceled
SUCCESS running the shutdown metafunction
Not waiting for processes to completely exit
Shutdown operation was successful
You can use the -F (or --force) option to shut down a subcluster immediately, without checking for active user sessions or draining the subcluster:
$ admintools -t stop_subcluster -d verticadb -c analytics --password password -F
--- Subcluster shutdown ---
Verifying subcluster 'analytics'
Node 'v_verticadb_node0004' will shutdown
Node 'v_verticadb_node0005' will shutdown
Node 'v_verticadb_node0006' will shutdown
Connecting to database to begin shutdown of subcluster 'analytics'
Running shutdown metafunction. Not using connection draining
STATUS: vertica.engine.api.db_client.module is still running on 1 host: 192.168.111.31 as of 2022-07-28 13:13:57. See /opt/vertica/log/adminTools.log for full details.
STATUS: vertica.engine.api.db_client.module is still running on 1 host: 192.168.111.31 as of 2022-07-28 13:14:07. See /opt/vertica/log/adminTools.log for full details.
SUCCESS running the shutdown metafunction
Not waiting for processes to completely exit
Shutdown operation was successful
If you want to shut down all subclusters in a database, see Stopping an Eon Mode Database.
6 - Altering subcluster settings
There are several settings you can alter on a subcluster using the ALTER SUBCLUSTER statement. You can also switch a subcluster from a primary to a secondary subcluster, or from a secondary to a primary.
Renaming a subcluster
To rename an existing subcluster, use the ALTER SUBCLUSTER statement's RENAME TO clause:
=> ALTER SUBCLUSTER default_subcluster RENAME TO load_subcluster;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM subclusters;
subcluster_name
-------------------
load_subcluster
analytics_cluster
(2 rows)
Changing the default subcluster
The default subcluster designates which subcluster Vertica adds nodes to if you do not explicitly specify a subcluster when adding nodes to the database. When you create a new database (or when a database is upgraded from a version prior to 9.3.0), the subcluster named default_subcluster is the default. You can find the current default subcluster by querying the is_default column of the SUBCLUSTERS system table.
The following example demonstrates finding the default subcluster, and then changing it to the subcluster named analytics_cluster:
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
subcluster_name
--------------------
default_subcluster
(1 row)
=> ALTER SUBCLUSTER analytics_cluster SET DEFAULT;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
subcluster_name
-------------------
analytics_cluster
(1 row)
Converting a subcluster from primary to secondary, or secondary to primary
You usually choose whether a subcluster is primary or secondary when creating it (see Creating subclusters for more information). However, you can switch a subcluster between the two settings after you have created it. You may want to change whether a subcluster is primary or secondary to affect the K-safety of your database. For example, if you have a single primary subcluster with down nodes that you cannot easily replace, you can promote a secondary subcluster to primary to ensure that losing another primary node will not cause your database to shut down. On the other hand, you may choose to convert a primary subcluster to a secondary before eventually shutting it down. This conversion can prevent the database from losing K-safety if the subcluster you are shutting down contains half or more of the total number of primary nodes in the database.
Note
You cannot promote or demote a subcluster containing the initiator node. You must be connected to a node in a subcluster other than the one you want to promote or demote.
To make a secondary subcluster into a primary subcluster, use the PROMOTE_SUBCLUSTER_TO_PRIMARY function:
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | f
load_subcluster | t
(2 rows)
=> SELECT PROMOTE_SUBCLUSTER_TO_PRIMARY('analytics_cluster');
PROMOTE_SUBCLUSTER_TO_PRIMARY
-------------------------------
PROMOTE SUBCLUSTER TO PRIMARY
(1 row)
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | t
load_subcluster | t
(2 rows)
Making a primary subcluster into a secondary subcluster is similar. However, unlike converting a secondary subcluster to a primary, several conditions can prevent you from demoting a primary subcluster. Vertica prevents you from making a primary subcluster into a secondary if any of the following is true:
- The subcluster contains a critical node.
- The subcluster is the only primary subcluster in the database. You must have at least one primary subcluster.
- The initiator node is a member of the subcluster you are trying to demote. You must call DEMOTE_SUBCLUSTER_TO_SECONDARY from another subcluster.
To convert a primary subcluster to secondary, use the DEMOTE_SUBCLUSTER_TO_SECONDARY function:
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | t
load_subcluster | t
(2 rows)
=> SELECT DEMOTE_SUBCLUSTER_TO_SECONDARY('analytics_cluster');
DEMOTE_SUBCLUSTER_TO_SECONDARY
--------------------------------
DEMOTE SUBCLUSTER TO SECONDARY
(1 row)
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | f
load_subcluster | t
(2 rows)
7 - Removing subclusters
Removing a subcluster from the database deletes the subcluster from the Vertica catalog. During the removal, Vertica removes any nodes in the subcluster from the database. These nodes are still part of the database cluster, but are no longer part of the database. If you view your cluster in the MC, you will see these nodes with the status STANDBY. They can be added back to the database by adding them to another subcluster. See Creating subclusters and Adding New Nodes to a Subcluster.
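For example, a standby host could be returned to service by adding it to an existing subcluster with db_add_node (the host address here is a placeholder):
$ admintools -t db_add_node -d verticadb -p 'password' -s 10.11.12.40 -c default_subcluster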
Vertica places several restrictions on removing a subcluster:
- You cannot remove the default subcluster. If you want to remove the subcluster that is set as the default, you must make another subcluster the default. See Changing the Default Subcluster for details.
- You cannot remove the last primary subcluster in the database. Your database must always have at least one primary subcluster. The query below shows how to check which subcluster is the default and which subclusters are primary.
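Before removing a subcluster, you can confirm its status by querying the SUBCLUSTERS system table, for example:
=> SELECT DISTINCT subcluster_name, is_default, is_primary FROM subclusters;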
Note
Removing a subcluster can fail if the database is repartitioning. If this happens, you will see the error message "Transaction commit aborted because session subscriptions do not match catalog." Wait until the repartitioning is done before removing a subcluster.
To remove a subcluster, use the admintools db_remove_subcluster tool:
$ adminTools -t db_remove_subcluster -h
Usage: db_remove_subcluster [options]
Options:
-h, --help show this help message and exit
-d DB, --database=DB Name of database to be modified
-c SCNAME, --subcluster=SCNAME
Name of subcluster to be removed
-p DBPASSWORD, --password=DBPASSWORD
Database password in single quotes
--timeout=NONINTERACTIVE_TIMEOUT
set a timeout (in seconds) to wait for actions to
complete ('never') will wait forever (implicitly sets
-i)
-i, --noprompts do not stop and wait for user input(default false).
Setting this implies a timeout of 20 min.
--skip-directory-cleanup
Caution: this option will force you to do a manual
cleanup. This option skips directory deletion during
remove subcluster. This is best used in a cloud
environment where the hosts being removed will be
subsequently discarded.
This example removes the subcluster named analytics_cluster:
$ adminTools -t db_remove_subcluster -d verticadb -c analytics_cluster -p 'password'
Found node v_verticadb_node0004 in subcluster analytics_cluster
Found node v_verticadb_node0005 in subcluster analytics_cluster
Found node v_verticadb_node0006 in subcluster analytics_cluster
Found node v_verticadb_node0007 in subcluster analytics_cluster
Waiting for rebalance shards. We will wait for at most 36000 seconds.
Rebalance shards polling iteration number [0], started at [17:09:35], time
out at [03:09:35]
Attempting to drop node v_verticadb_node0004 ( 10.11.12.40 )
Shutting down node v_verticadb_node0004
Sending node shutdown command to '['v_verticadb_node0004', '10.11.12.40',
'/vertica/data', '/vertica/data']'
Deleting catalog and data directories
Update admintools metadata for v_verticadb_node0004
Eon mode detected. The node v_verticadb_node0004 has been removed from
host 10.11.12.40. To remove the node metadata completely, please clean
up the files corresponding to this node, at the communal location:
s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0004
Attempting to drop node v_verticadb_node0005 ( 10.11.12.50 )
Shutting down node v_verticadb_node0005
Sending node shutdown command to '['v_verticadb_node0005', '10.11.12.50',
'/vertica/data', '/vertica/data']'
Deleting catalog and data directories
Update admintools metadata for v_verticadb_node0005
Eon mode detected. The node v_verticadb_node0005 has been removed from
host 10.11.12.50. To remove the node metadata completely, please clean
up the files corresponding to this node, at the communal location:
s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0005
Attempting to drop node v_verticadb_node0006 ( 10.11.12.60 )
Shutting down node v_verticadb_node0006
Sending node shutdown command to '['v_verticadb_node0006', '10.11.12.60',
'/vertica/data', '/vertica/data']'
Deleting catalog and data directories
Update admintools metadata for v_verticadb_node0006
Eon mode detected. The node v_verticadb_node0006 has been removed from
host 10.11.12.60. To remove the node metadata completely, please clean
up the files corresponding to this node, at the communal location:
s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0006
Attempting to drop node v_verticadb_node0007 ( 10.11.12.70 )
Shutting down node v_verticadb_node0007
Sending node shutdown command to '['v_verticadb_node0007', '10.11.12.70',
'/vertica/data', '/vertica/data']'
Deleting catalog and data directories
Update admintools metadata for v_verticadb_node0007
Eon mode detected. The node v_verticadb_node0007 has been removed from
host 10.11.12.70. To remove the node metadata completely, please clean
up the files corresponding to this node, at the communal location:
s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0007
Reload spread configuration
Replicating configuration to all nodes
Checking database state
Node Status: v_verticadb_node0001: (UP) v_verticadb_node0002: (UP)
v_verticadb_node0003: (UP)
Communal storage detected: syncing catalog