<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Managing subclusters</title>
    <link>/en/eon/managing-subclusters/</link>
    <description>Recent content in Managing subclusters on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/eon/managing-subclusters/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Eon: Creating subclusters</title>
      <link>/en/eon/managing-subclusters/creating-subclusters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/creating-subclusters/</guid>
      <description>
        
        
        &lt;p&gt;By default, new Eon Mode databases contain a single primary subcluster named &lt;code&gt;default_subcluster&lt;/code&gt;. This subcluster contains all nodes that are part of the database when you create it. You will often want to create subclusters to separate and manage workloads. You have three options to add subclusters to the database:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#Create&#34;&gt;Use the admintools command line to add a new subcluster from nodes in the database cluster&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/eon/managing-subclusters/duplicating-subcluster/&#34;&gt;Use admintools to create a duplicate of an existing subcluster&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/mc/db-management/subclusters-mc/adding-subclusters-mc/&#34;&gt;Use the Management Console to provision and create a subcluster&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;Create&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;create-a-subcluster-using-admintools&#34;&gt;Create a subcluster using admintools&lt;/h2&gt;
&lt;p&gt;To create a subcluster, use the admintools &lt;code&gt;db_add_subcluster&lt;/code&gt; tool:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t db_add_subcluster --help
Usage: db_add_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database to be modified
  -s HOSTS, --hosts=HOSTS
                        Comma separated list of hosts to add to the subcluster
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -c SCNAME, --subcluster=SCNAME
                        Name of the new subcluster for the new node
  --is-primary          Create primary subcluster
  --is-secondary        Create secondary subcluster
  --control-set-size=CONTROLSETSIZE
                        Set the number of nodes that will run spread within
                        the subcluster
  --like=CLONESUBCLUSTER
                        Name of an existing subcluster from which to clone
                        properties for the new subcluster
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete (&amp;#39;never&amp;#39;) will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The simplest command adds an empty subcluster. It requires the database name, the database password, and a name for the new subcluster. This example adds a subcluster named &lt;code&gt;analytics_cluster&lt;/code&gt; to the &lt;code&gt;verticadb&lt;/code&gt; database:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_add_subcluster -d verticadb -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -c analytics_cluster
Creating new subcluster &amp;#39;analytics_cluster&amp;#39;
Subcluster added to verticadb successfully.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;By default, admintools creates the subcluster as a &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/secondary-subcluster/&#34; title=&#34;A secondary subcluster is a type of subcluster that is easy to start and shutdown on demand.&#34;&gt;secondary subcluster&lt;/a&gt;. You can have it create a &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subcluster&lt;/a&gt; instead by supplying the &lt;code&gt;--is-primary&lt;/code&gt; argument.&lt;/p&gt;
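As a minimal sketch, the same command with the `--is-primary` argument creates a primary subcluster instead (the subcluster name here is illustrative):

```shell
# Create a primary subcluster rather than the default secondary type.
# The name 'reporting_cluster' is illustrative.
$ admintools -t db_add_subcluster -d verticadb -p 'password' \
             -c reporting_cluster --is-primary
```

Because primary nodes count toward cluster quorum, add primary subclusters deliberately rather than by default.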
&lt;h2 id=&#34;adding-nodes-while-creating-a-subcluster&#34;&gt;Adding nodes while creating a subcluster&lt;/h2&gt;
&lt;p&gt;You can also specify one or more hosts for admintools to add to the subcluster as new nodes. These hosts must be part of the cluster but not already part of the database. For example, you can use hosts that you added to the cluster using the MC or admintools, or hosts that remain part of the cluster after you dropped nodes from the database. This example creates a subcluster &lt;code&gt;analytics_cluster&lt;/code&gt; and uses the &lt;code&gt;-s&lt;/code&gt; option to specify the available hosts in the cluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_add_subcluster -c analytics_cluster -d verticadb -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -s 10.0.33.77,10.0.33.181,10.0.33.85
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;View the subscription status of all nodes in the database with the following query that joins the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/nodes/&#34;&gt;V_CATALOG.NODES&lt;/a&gt; and &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/node-subscriptions/&#34;&gt;V_CATALOG.NODE_SUBSCRIPTIONS&lt;/a&gt; system tables:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

subcluster_name    |      node_name       | shard_name  | subscription_state
-------------------+----------------------+-------------+--------------------
analytics_cluster  | v_verticadb_node0004 | replica     | ACTIVE
analytics_cluster  | v_verticadb_node0004 | segment0001 | ACTIVE
analytics_cluster  | v_verticadb_node0004 | segment0003 | ACTIVE
analytics_cluster  | v_verticadb_node0005 | replica     | ACTIVE
analytics_cluster  | v_verticadb_node0005 | segment0001 | ACTIVE
analytics_cluster  | v_verticadb_node0005 | segment0002 | ACTIVE
analytics_cluster  | v_verticadb_node0006 | replica     | ACTIVE
analytics_cluster  | v_verticadb_node0006 | segment0002 | ACTIVE
analytics_cluster  | v_verticadb_node0006 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0001 | replica     | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica     | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica     | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you do not include hosts when you create the subcluster, you must manually rebalance the shards in the subcluster when you add nodes at a later time. For more information, see &lt;a href=&#34;../../../en/eon/managing-subclusters/adding-and-removing-nodes-from-subclusters/#UpdatingShardSubscriptions&#34;&gt;Updating Shard Subscriptions After Adding Nodes&lt;/a&gt;.&lt;/p&gt;
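As a sketch of that later rebalance, assuming the subcluster name from the example above, you can call the REBALANCE_SHARDS meta-function scoped to the subcluster:

```sql
-- Rebalance shard subscriptions for just this subcluster after
-- adding nodes to it. The subcluster name is from the example above.
=> SELECT REBALANCE_SHARDS('analytics_cluster');
```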
&lt;h2 id=&#34;subclusters-and-large-cluster&#34;&gt;Subclusters and large cluster&lt;/h2&gt;
&lt;p&gt;Vertica has a feature named large cluster that helps manage broadcast messages as the database cluster grows. This feature affects adding new subclusters in several ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If you create a subcluster with 16 or more nodes, Vertica automatically enables the large cluster feature. It sets the number of &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/control-node/&#34; title=&#34;A node that connects to the Spread service to send and receive cluster-wide broadcast messages.&#34;&gt;control nodes&lt;/a&gt; to the square root of the number of nodes in the subcluster. See &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/large-cluster/planning-large-cluster/#&#34;&gt;Planning a large cluster&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You can set the number of control nodes in a subcluster by using the &lt;code&gt;--control-set-size&lt;/code&gt; option in the admintools command line.&lt;/li&gt;
&lt;li&gt;The database cannot have more than 120 control nodes, and every subcluster must have at least one control node. Therefore, if the database cluster already has 120 control nodes, Vertica returns an error when you try to add a new subcluster. In that case, you must reduce the number of control nodes in other subclusters before you can add a new subcluster. See &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/large-cluster/changing-number-of-control-nodes-and-realigning/#&#34;&gt;Changing the number of control nodes and realigning&lt;/a&gt; for more information.&lt;/li&gt;
&lt;li&gt;If you attempt to create a subcluster with a number of control nodes that would exceed the 120 control node limit, Vertica warns you and creates the subcluster with fewer control nodes. It adds as many control nodes as it can to the subcluster, which is 120 minus the current count of control nodes in the cluster. For example, suppose you create a 16-node subcluster in a database cluster that already has 118 control nodes. In this case, Vertica warns you and creates the subcluster with just 2 control nodes rather than the default 4.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/large-cluster/#&#34;&gt;Large cluster&lt;/a&gt; for more information about the large cluster feature.&lt;/p&gt;
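For instance, a hedged sketch that sets the control node count explicitly at creation time, using the `--control-set-size` option shown in the help output above (hosts and names are illustrative):

```shell
# Create a three-node subcluster with exactly two control nodes
# instead of letting Vertica choose the count.
# Host addresses and names are illustrative.
$ admintools -t db_add_subcluster -d verticadb -p 'password' \
             -c analytics_large -s 10.0.33.91,10.0.33.92,10.0.33.93 \
             --control-set-size=2
```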

      </description>
    </item>
    
    <item>
      <title>Eon: Duplicating a subcluster</title>
      <link>/en/eon/managing-subclusters/duplicating-subcluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/duplicating-subcluster/</guid>
      <description>
        
        
        &lt;p&gt;Subclusters have many settings you can tune to get them to work just the way you want. After you have tuned a subcluster, you may want additional subclusters that are configured the same way. For example, suppose you have a subcluster that you have tuned to perform analytics workloads. To improve query throughput, you can create several more subclusters configured exactly like it. Instead of creating the new subclusters and then manually configuring them from scratch, you can duplicate the existing subcluster (called the source subcluster) to a new subcluster (the target subcluster).&lt;/p&gt;
&lt;p&gt;When you create a new subcluster based on another subcluster, Vertica copies most of the source subcluster&#39;s settings. These settings exist at both the node level and the subcluster level; the tables below list which settings Vertica copies.&lt;/p&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

After you duplicate a subcluster, the target is not connected to the source in any way. Any changes you make to the source subcluster&#39;s settings after duplication are not copied to the target. The subclusters are completely independent.

&lt;/div&gt;
&lt;h2 id=&#34;requirements-for-the-target-subcluster&#34;&gt;Requirements for the target subcluster&lt;/h2&gt;
&lt;p&gt;You must have a set of hosts in your database cluster that you will use as the target of the subcluster duplication. Vertica forms these hosts into a target subcluster that receives most of the settings of the source subcluster. The hosts for the target subcluster must meet the following requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;They must be part of your database cluster but not part of your database. For example, you can use hosts you have dropped from a subcluster or whose subcluster you have removed. Vertica returns an error if you attempt to duplicate a subcluster onto one or more nodes that are currently participating in the database.&lt;/p&gt;

&lt;div class=&#34;alert admonition tip&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Tip&lt;/h4&gt;

If you want to duplicate the settings of a subcluster to another subcluster, remove the target subcluster (see &lt;a href=&#34;../../../en/eon/managing-subclusters/removing-subclusters/#&#34;&gt;Removing subclusters&lt;/a&gt;). Then duplicate the source subcluster onto the hosts of the now-removed target subcluster.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The number of nodes you supply for the target subcluster must equal the number of nodes in the source subcluster. When duplicating the subcluster, Vertica performs a 1:1 copy of some node-level settings from each node in the source subcluster to a corresponding node in the target.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The hosts in the target subcluster should have at least as much RAM and disk space as the source nodes. Technically, your target nodes can have less RAM or disk space than the source nodes. However, you will usually see performance issues in the new subcluster because the settings of the original subcluster are not tuned for the resources of the target subcluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can duplicate a subcluster even if some of the nodes in the source subcluster or hosts in the target are down. If nodes in the target are down, they use the catalog Vertica copied from the source node when they recover.&lt;/p&gt;
&lt;h2 id=&#34;duplication-of-subcluster-level-settings&#34;&gt;Duplication of subcluster-level settings&lt;/h2&gt;
&lt;p&gt;The following table lists the subcluster-level settings that Vertica copies from the source subcluster to the target.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Setting Type&lt;/th&gt; 

&lt;th &gt;
Setting Details&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Basic subcluster settings&lt;/td&gt; 

&lt;td &gt;
Whether the subcluster is a &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary&lt;/a&gt; or &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/secondary-subcluster/&#34; title=&#34;A secondary subcluster is a type of subcluster that is easy to start and shutdown on demand.&#34;&gt;secondary subcluster&lt;/a&gt;.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/large-cluster/&#34;&gt;Large cluster&lt;/a&gt; settings&lt;/td&gt; 

&lt;td &gt;
The number of &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/control-node/&#34; title=&#34;A node that connects to the Spread service to send and receive cluster-wide broadcast messages.&#34;&gt;control nodes&lt;/a&gt; in the subcluster.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;a href=&#34;../../../en/admin/managing-db/managing-workloads/resource-pool-architecture/&#34;&gt;Resource pool&lt;/a&gt; settings&lt;/td&gt; 

&lt;td &gt;















&lt;p&gt;Vertica creates a new resource pool for every subcluster-specific resource pool in the source subcluster.&lt;/p&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;
&lt;p&gt;Duplicating a subcluster can fail due to subcluster-specific resource pools. If creating the subcluster-specific resource pools would leave less than 25% of the total memory free for the general pool, Vertica stops the duplication and reports an error.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Subcluster-specific resource pool cascade settings are copied from the source subcluster and applied to the newly created resource pool for the target subcluster.&lt;/p&gt;
&lt;p&gt;Subcluster-level overrides of global resource pool settings, such as MEMORYSIZE, are also copied. See &lt;a href=&#34;../../../en/admin/managing-db/managing-workloads/workload-best-practices/managing-workload-resources-an-eon-db/#&#34;&gt;Managing workload resources in an Eon Mode database&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;Grants on resource pools are copied from the source subcluster.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;a href=&#34;../../../en/admin/managing-client-connections/connection-load-balancing/connection-load-balancing-policies/&#34;&gt;Connection load balancing&lt;/a&gt; settings&lt;/td&gt; 

&lt;td &gt;







&lt;p&gt;If the source subcluster is part of a subcluster-based load balancing group (that is, you created the load balancing group using CREATE LOAD BALANCE GROUP...WITH SUBCLUSTER), the new subcluster is added to the group. See &lt;a href=&#34;../../../en/admin/managing-client-connections/connection-load-balancing/connection-load-balancing-policies/creating-connection-load-balance-groups/#From_Subclusters&#34;&gt;Creating Connection Load Balance Groups&lt;/a&gt;.&lt;/p&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Vertica adds the new subcluster to the subcluster-based load balancing group. However, it does not create network addresses for the nodes in the target subcluster. Load balancing policies cannot direct connections to the new subcluster until you create network addresses for the nodes in the target subcluster. See &lt;a href=&#34;../../../en/admin/managing-client-connections/connection-load-balancing/connection-load-balancing-policies/creating-network-addresses/#&#34;&gt;Creating network addresses&lt;/a&gt; for the steps you must take.
&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/storage-policy/&#34; title=&#34;A database object you create to associate a labeled location as the default storage location for the object.&#34;&gt;Storage policy&lt;/a&gt; settings&lt;/td&gt; 

&lt;td &gt;


Table and table partition pinning policies are copied from the source to the target subcluster. See &lt;a href=&#34;../../../en/eon/depot-management/managing-depot-caching/#Pinning&#34;&gt;Pinning Depot Objects&lt;/a&gt; for more information. Any existing storage policies on the target subcluster are dropped before the policies are copied from the source.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;Vertica &lt;strong&gt;does not&lt;/strong&gt; copy the following subcluster settings:

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Setting Type&lt;/th&gt; 

&lt;th &gt;
Setting Details&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Basic subcluster settings&lt;/td&gt; 

&lt;td &gt;




&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Subcluster name (you must provide a new name for the target subcluster).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the source is the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/default-subcluster/&#34; title=&#34;The default subcluster is the subcluster OpenText&amp;amp;trade; Analytics Database adds new nodes to if you do not specify a subcluster to contain the new nodes.&#34;&gt;default subcluster&lt;/a&gt;, the setting is not copied to the target. Your Vertica database has a single default subcluster. If Vertica copied this value, the source subcluster could no longer be the default.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Connection load balancing settings&lt;/td&gt; 

&lt;td &gt;




&lt;p&gt;Address-based load balancing groups are not duplicated for the target subcluster.&lt;/p&gt;
&lt;p&gt;For example, suppose you created a load balancing group for the source subcluster by adding the network addresses of all of the subcluster&#39;s nodes. In this case, Vertica does not create a load balancing group for the target subcluster because it does not duplicate the network addresses of the source nodes (see the next section). Because it does not copy the addresses, it cannot create an address-based group.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;h2 id=&#34;duplication-of-node-level-settings&#34;&gt;Duplication of node-level settings&lt;/h2&gt;
&lt;p&gt;When Vertica duplicates a subcluster, it maps each node in the source subcluster to a node in the target subcluster. Then it copies relevant node-level settings from each source node to the corresponding target node.&lt;/p&gt;
&lt;p&gt;For example, suppose you have a three-node subcluster consisting of nodes named node01, node02, and node03. The target subcluster has nodes named node04, node05, and node06. In this case, Vertica copies the settings from node01 to node04, from node02 to node05, and from node03 to node06.&lt;/p&gt;
&lt;p&gt;The node-level settings that Vertica copies from the source nodes to the target nodes are:

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Setting Type&lt;/th&gt; 

&lt;th &gt;
Setting Details&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;a href=&#34;../../../en/admin/configuring-db/config-parameter-management/&#34;&gt;Configuration parameters&lt;/a&gt;&lt;/td&gt; 

&lt;td &gt;






&lt;p&gt;Vertica copies the value of configuration parameters that you have set at the node level in the source node to the target node. For example, suppose you set CompressCatalogOnDisk on the source node using the statement:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ALTER NODE node01 SET CompressCatalogOnDisk = 0;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If you then duplicated the subcluster containing node01, the setting is copied to the target node.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Eon Mode settings&lt;/td&gt; 

&lt;td &gt;




&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Shard subscriptions are copied from the source node to the target.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Whether the node is the participating primary node for the shard.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;a href=&#34;../../../en/admin/managing-storage-locations/&#34;&gt;Storage location&lt;/a&gt; settings&lt;/td&gt; 

&lt;td &gt;










&lt;p&gt;The DATA, TEMP, DEPOT, and USER storage location paths on the source node are duplicated on the target node. When duplicating node-specific paths (such as DATA or DEPOT) the path names are adjusted for the new node name. For example, suppose node 1 has a depot path of &lt;code&gt;/vertica/depot/vmart/v_vmart_node0001_depot&lt;/code&gt;. If Vertica duplicates node 1 to node 4, it adjusts the path to &lt;code&gt;/vertica/depot/vmart/v_vmart_node0004_depot&lt;/code&gt;.&lt;/p&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
&lt;p&gt;The directories for these storage locations on the target node must be empty. They must also have the correct file permissions to allow Vertica to read and write to them.&lt;/p&gt;
&lt;p&gt;Vertica does not duplicate a storage location if it cannot access its directory on the target node or if the directory is not empty. In this case, the target node will not have the location defined after the duplication process finishes. Admintools does not warn you if any locations were not duplicated.&lt;/p&gt;
&lt;p&gt;If you find that storage locations have not been duplicated on one or more target nodes, you must fix the issues with the directories on the target nodes. Then re-run the duplication command.&lt;/p&gt;
&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Large cluster settings&lt;/td&gt; 

&lt;td &gt;






&lt;p&gt;&lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/control-node/&#34; title=&#34;A node that connects to the Spread service to send and receive cluster-wide broadcast messages.&#34;&gt;Control node&lt;/a&gt; assignments are copied from the source node to the target node:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If the source node is a control node, then the target node is made into a control node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the source node depends on a control node, then the target node becomes a dependent of the corresponding control node in the new subcluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;Vertica &lt;strong&gt;does not&lt;/strong&gt; copy the following node-level settings:

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Setting Type&lt;/th&gt; 

&lt;th &gt;
Setting Details&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Connection load balancing settings&lt;/td&gt; 

&lt;td &gt;


Network addresses are not copied. The destination node&#39;s network addresses do not depend on the settings of the source node. Therefore, Vertica cannot determine what the target node&#39;s addresses should be.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Depot settings&lt;/td&gt; 

&lt;td &gt;
Depot-related configuration parameters that can be set on a node level (such as FileDeletionServiceInterval) are not copied from the source node to the target node.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;h2 id=&#34;using-admintools-to-duplicate-a-subcluster&#34;&gt;Using admintools to duplicate a subcluster&lt;/h2&gt;
&lt;p&gt;To duplicate a subcluster, use the same admintools &lt;code&gt;db_add_subcluster&lt;/code&gt; tool that you use to create a new subcluster (see &lt;a href=&#34;../../../en/eon/managing-subclusters/creating-subclusters/#&#34;&gt;Creating subclusters&lt;/a&gt;). In addition to the options required to create a subcluster (the list of hosts, the name for the new subcluster, the database name, and so on), pass the &lt;code&gt;--like&lt;/code&gt; option with the name of the source subcluster you want to duplicate.&lt;/p&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
When you use the &lt;code&gt;--like&lt;/code&gt; option, you cannot use the &lt;code&gt;--is-secondary&lt;/code&gt; or &lt;code&gt;--control-set-size&lt;/code&gt; options. Vertica determines whether the new subcluster is secondary and the number of control nodes it contains based on the source subcluster. If you supply these options along with the &lt;code&gt;--like&lt;/code&gt; option, admintools returns an error.
&lt;/div&gt;
&lt;p&gt;The following examples demonstrate duplicating a three-node subcluster named analytics_1. The first example examines some of the settings in the analytics_1 subcluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An override of the global TM resource pool&#39;s memory size.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Its own resource pool named analytics_pool.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Its membership in a subcluster-based load balancing group named analytics.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT name, subcluster_name, memorysize FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;
 name | subcluster_name | memorysize
------+-----------------+------------
 tm   | analytics_1     | 0%
(1 row)

=&amp;gt; SELECT name, subcluster_name, memorysize, plannedconcurrency
      FROM resource_pools WHERE subcluster_name IS NOT NULL;
      name      | subcluster_name | memorysize | plannedconcurrency
----------------+-----------------+------------+--------------------
 analytics_pool | analytics_1     | 70%        | 8
(1 row)

=&amp;gt; SELECT * FROM LOAD_BALANCE_GROUPS;
   name    |   policy   |  filter   |    type    | object_name
-----------+------------+-----------+------------+-------------
 analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_1
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following example calls the admintools &lt;code&gt;db_add_subcluster&lt;/code&gt; tool to duplicate the analytics_1 subcluster onto a set of three hosts, creating a subcluster named analytics_2.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t db_add_subcluster -d verticadb \
             -s 10.11.12.13,10.11.12.14,10.11.12.15 \
          -p mypassword --like=analytics_1 -c analytics_2

Creating new subcluster &amp;#39;analytics_2&amp;#39;
Adding new hosts to &amp;#39;analytics_2&amp;#39;
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
 Warning when creating depot location for node: v_verticadb_node0007
 WARNING: Target node v_verticadb_node0007 is down, so depot size has been
          estimated from depot location on initiator. As soon as the node comes
          up, its depot size might be altered depending on its disk size
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
 Warning when creating depot location for node: v_verticadb_node0008
 WARNING: Target node v_verticadb_node0008 is down, so depot size has been
          estimated from depot location on initiator. As soon as the node comes
          up, its depot size might be altered depending on its disk size
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
 Warning when creating depot location for node: v_verticadb_node0009
 WARNING: Target node v_verticadb_node0009 is down, so depot size has been
          estimated from depot location on initiator. As soon as the node comes
          up, its depot size might be altered depending on its disk size
Cloning subcluster properties
NOTICE: Nodes in subcluster analytics_1 have network addresses, you
might need to configure network addresses for nodes in subcluster
analytics_2 in order to get load balance groups to work correctly.

    Replicating configuration to all nodes
    Generating new configuration information and reloading spread
    Starting nodes:
        v_verticadb_node0007 (10.11.12.81)
        v_verticadb_node0008 (10.11.12.209)
        v_verticadb_node0009 (10.11.12.186)
    Starting Vertica on all nodes. Please wait, databases with a large catalog
         may take a while to initialize.
    Checking database state for newly added nodes
    Node Status: v_verticadb_node0007: (DOWN) v_verticadb_node0008:
                 (DOWN) v_verticadb_node0009: (DOWN)
    Node Status: v_verticadb_node0007: (INITIALIZING) v_verticadb_node0008:
                 (INITIALIZING) v_verticadb_node0009: (INITIALIZING)
    Node Status: v_verticadb_node0007: (UP) v_verticadb_node0008:
                 (UP) v_verticadb_node0009: (UP)
Syncing catalog on verticadb with 2000 attempts.
    Multi-node DB add completed
Nodes added to subcluster analytics_2 successfully.
Subcluster added to verticadb successfully.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Re-running the queries in the first part of the example shows that the settings from analytics_1 have been duplicated in analytics_2:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT name, subcluster_name, memorysize FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;
 name | subcluster_name | memorysize
------+-----------------+------------
 tm   | analytics_1     | 0%
 tm   | analytics_2     | 0%
(2 rows)

=&amp;gt; SELECT name, subcluster_name, memorysize, plannedconcurrency
       FROM resource_pools WHERE subcluster_name IS NOT NULL;
      name      | subcluster_name | memorysize |  plannedconcurrency
----------------+-----------------+------------+--------------------
 analytics_pool | analytics_1     | 70%        | 8
 analytics_pool | analytics_2     | 70%        | 8
(2 rows)

=&amp;gt; SELECT * FROM LOAD_BALANCE_GROUPS;
   name    |   policy   |  filter   |    type    | object_name
-----------+------------+-----------+------------+-------------
 analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_2
 analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_1
(2 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;As noted earlier, even though the analytics_2 subcluster is part of the analytics load balancing group, its nodes do not have network addresses defined. Until you define network addresses for these nodes, Vertica cannot redirect client connections to them.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Eon: Adding and removing nodes from subclusters</title>
      <link>/en/eon/managing-subclusters/adding-and-removing-nodes-from-subclusters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/adding-and-removing-nodes-from-subclusters/</guid>
      <description>
        
        
        &lt;p&gt;You will often want to add new nodes to and remove existing nodes from a subcluster. This ability lets you scale your database to respond to changing analytic needs. For more information on how adding nodes to a subcluster affects your database&#39;s performance, see &lt;a href=&#34;../../../en/eon/scaling-your-eon-db/#&#34;&gt;Scaling your Eon Mode database&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;AddingNodes&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;adding-new-nodes-to-a-subcluster&#34;&gt;Adding new nodes to a subcluster&lt;/h2&gt;
&lt;p&gt;You can add nodes to a subcluster to handle additional workloads. The nodes that you add to the subcluster must already be part of your cluster. These can be:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Nodes that you removed from other subclusters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nodes you added following the steps in &lt;a href=&#34;../../../en/mc/cloud-platforms/aws-mc/add-nodes-to-cluster-aws-using-mc/#&#34;&gt;Add nodes to a cluster in AWS using Management Console&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nodes you created using your cloud provider&#39;s interface, such as the AWS EC2 &amp;quot;Launch more like this&amp;quot; feature.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To add new nodes to a subcluster, use the &lt;code&gt;db_add_node&lt;/code&gt; command of admintools:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_add_node -h
Usage: db_add_node [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of the database
  -s HOSTS, --hosts=HOSTS
                        Comma separated list of hosts to add to database
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -a AHOSTS, --add=AHOSTS
                        Comma separated list of hosts to add to database
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster for the new node
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete (&amp;#39;never&amp;#39;) will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
  --compat21            (deprecated) Use Vertica 2.1 method using node names
                        instead of hostnames
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you do not use the &lt;code&gt;-c&lt;/code&gt; option, Vertica adds new nodes to the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/default-subcluster/&#34; title=&#34;The default subcluster is the subcluster OpenText&amp;amp;trade; Analytics Database adds new nodes to if you do not specify a subcluster to contain the new nodes.&#34;&gt;default subcluster&lt;/a&gt; (set to default_subcluster in new databases). This example adds a new node without specifying the subcluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_add_node -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -d verticadb -s 10.11.12.117
Subcluster not specified, validating default subcluster
Nodes will be added to subcluster &amp;#39;default_subcluster&amp;#39;
                Verifying database connectivity...10.11.12.10
Eon database detected, creating new depot locations for newly added nodes
Creating depots for each node
        Generating new configuration information and reloading spread
        Replicating configuration to all nodes
        Starting nodes
        Starting nodes:
                v_verticadb_node0004 (10.11.12.117)
        Starting Vertica on all nodes. Please wait, databases with a
            large catalog may take a while to initialize.
        Checking database state
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (UP)
Communal storage detected: syncing catalog

        Multi-node DB add completed
Nodes added to verticadb successfully.
You will need to redesign your schema to take advantage of the new nodes.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To add nodes to a specific existing subcluster, use the &lt;code&gt;db_add_node&lt;/code&gt; tool&#39;s &lt;code&gt;-c&lt;/code&gt; option:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_add_node -s 10.11.12.178 -d verticadb -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; \
             -c analytics_subcluster
Subcluster &amp;#39;analytics_subcluster&amp;#39; specified, validating
Nodes will be added to subcluster &amp;#39;analytics_subcluster&amp;#39;
                Verifying database connectivity...10.11.12.10
Eon database detected, creating new depot locations for newly added nodes
Creating depots for each node
        Generating new configuration information and reloading spread
        Replicating configuration to all nodes
        Starting nodes
        Starting nodes:
                v_verticadb_node0007 (10.11.12.178)
        Starting Vertica on all nodes. Please wait, databases with a
              large catalog may take a while to initialize.
        Checking database state
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (UP)
Communal storage detected: syncing catalog

        Multi-node DB add completed
Nodes added to verticadb successfully.
You will need to redesign your schema to take advantage of the new nodes.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;UpdatingShardSubscriptions&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;updating-shard-subscriptions-after-adding-nodes&#34;&gt;Updating shard subscriptions after adding nodes&lt;/h2&gt;
&lt;p&gt;Nodes that you add to a subcluster do not automatically subscribe to shards. You can view the subscription status of all nodes in your database using the following query that joins the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/nodes/&#34;&gt;V_CATALOG.NODES&lt;/a&gt; and &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/node-subscriptions/&#34;&gt;V_CATALOG.NODE_SUBSCRIPTIONS&lt;/a&gt; system tables:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

   subcluster_name    |      node_name       | shard_name  | subscription_state
----------------------+----------------------+-------------+--------------------
 analytics_subcluster | v_verticadb_node0004 |             |
 analytics_subcluster | v_verticadb_node0005 |             |
 analytics_subcluster | v_verticadb_node0006 |             |
 default_subcluster   | v_verticadb_node0001 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0003 | ACTIVE
(12 rows)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that none of the nodes in the newly added analytics_subcluster have subscriptions.&lt;/p&gt;
&lt;p&gt;To update the subscriptions for new nodes, call the &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/rebalance-shards/#&#34;&gt;REBALANCE_SHARDS&lt;/a&gt; function. You can limit the rebalance to the subcluster containing the new nodes by passing its name to the REBALANCE_SHARDS function call. The following example runs rebalance shards to update the analytics_subcluster&#39;s subscriptions:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT REBALANCE_SHARDS(&amp;#39;analytics_subcluster&amp;#39;);
 REBALANCE_SHARDS
-------------------
 REBALANCED SHARDS
(1 row)

=&amp;gt; SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

   subcluster_name    |      node_name       | shard_name  | subscription_state
----------------------+----------------------+-------------+--------------------
 analytics_subcluster | v_verticadb_node0004 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0004 | segment0001 | ACTIVE
 analytics_subcluster | v_verticadb_node0004 | segment0003 | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | segment0001 | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | segment0002 | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | segment0002 | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a name=&#34;Removing&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;removing-nodes&#34;&gt;Removing nodes&lt;/h2&gt;
&lt;p&gt;Your database must meet these requirements before you can remove a node from a subcluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To remove a node from a &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subcluster&lt;/a&gt;, all of the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-node/&#34; title=&#34;In Eon Mode, a primary node is a node that is a member of a primary subcluster.&#34;&gt;primary nodes&lt;/a&gt; in the subcluster must be up, and the database must be able to maintain quorum after the primary node is removed (see &lt;a href=&#34;../../../en/architecture/eon-concepts/data-integrity-and-high-availability-an-eon-db/#&#34;&gt;Data integrity and high availability in an Eon Mode database&lt;/a&gt;). These requirements are necessary because Vertica calls &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/rebalance-shards/#&#34;&gt;REBALANCE_SHARDS&lt;/a&gt; to redistribute shard subscriptions among the remaining nodes in the subcluster. If you attempt to remove a primary node when the database does not meet these requirements, the rebalance shards process waits until either the down nodes recover or a timeout elapses. While it waits, you periodically see the message &amp;quot;Rebalance shards polling iteration number [&lt;em&gt;nn&lt;/em&gt;]&amp;quot;, indicating that the rebalance process is waiting to complete.&lt;/p&gt;
&lt;p&gt;You can remove nodes from a &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/secondary-subcluster/&#34; title=&#34;A secondary subcluster is a type of subcluster that is easy to start and shutdown on demand.&#34;&gt;secondary subcluster&lt;/a&gt; even when nodes in the subcluster are down.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If your database has the large cluster feature enabled, you cannot remove a node if it is the subcluster&#39;s last &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/control-node/&#34; title=&#34;A node that connects to the Spread service to send and receive cluster-wide broadcast messages.&#34;&gt;control node&lt;/a&gt; and there are nodes that depend on it. See &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/large-cluster/#&#34;&gt;Large cluster&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;If there are other control nodes in the subcluster, you can drop a control node. Vertica reassigns the nodes that depend on the node being dropped to other control nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To remove one or more nodes, use the admintools &lt;code&gt;db_remove_node&lt;/code&gt; tool:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_remove_node -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -d verticadb -s 10.11.12.117
connecting to 10.11.12.10
Waiting for rebalance shards. We will wait for at most 36000 seconds.
Rebalance shards polling iteration number [0], started at [14:56:41], time out at [00:56:41]
Attempting to drop node v_verticadb_node0004 ( 10.11.12.117 )
        Shutting down node v_verticadb_node0004
        Sending node shutdown command to &amp;#39;[&amp;#39;v_verticadb_node0004&amp;#39;, &amp;#39;10.11.12.117&amp;#39;, &amp;#39;/vertica/data&amp;#39;, &amp;#39;/vertica/data&amp;#39;]&amp;#39;
        Deleting catalog and data directories
        Update admintools metadata for v_verticadb_node0004
        Eon mode detected. The node v_verticadb_node0004 has been removed from host 10.11.12.117. To remove the
        node metadata completely, please clean up the files corresponding to this node, at the communal
        location: s3://eonbucket/metadata/verticadb/nodes/v_verticadb_node0004
        Reload spread configuration
        Replicating configuration to all nodes
        Checking database state
        Node Status: v_verticadb_node0001: (UP) v_verticadb_node0002: (UP) v_verticadb_node0003: (UP)
Communal storage detected: syncing catalog
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When you remove one or more nodes from a subcluster, Vertica automatically rebalances shards in the subcluster. You do not need to manually rebalance shards after removing nodes.&lt;/p&gt;
&lt;h2 id=&#34;moving-nodes-between-subclusters&#34;&gt;Moving nodes between subclusters&lt;/h2&gt;
&lt;p&gt;To move a node from one subcluster to another:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Remove the node or nodes from their current subcluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the node to the subcluster you want to move it to.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
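&lt;p&gt;Putting these two steps together, moving one node from default_subcluster to analytics_subcluster looks like the following sketch, which reuses the admintools commands shown earlier in this topic (the host address and subcluster name are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_remove_node -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -d verticadb -s 10.11.12.117
$ adminTools -t db_add_node -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -d verticadb -s 10.11.12.117 \
             -c analytics_subcluster
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After the move, call REBALANCE_SHARDS on the destination subcluster so that the moved node subscribes to shards, as described earlier in this topic.&lt;/p&gt;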

      </description>
    </item>
    
    <item>
      <title>Eon: Managing workloads with subclusters</title>
      <link>/en/eon/managing-subclusters/managing-workloads-with-subclusters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/managing-workloads-with-subclusters/</guid>
      <description>
        
        
&lt;p&gt;By default, queries are limited to executing on the nodes in the subcluster that contains the initiator node (the node the client is connected to). This example shows the explain plan for a query run while connected to node 4 of a cluster. Node 4 is part of a subcluster containing nodes 4 through 6. You can see that only the nodes in that subcluster participate in the query:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPLAIN SELECT customer_name, customer_state FROM customer_dimension LIMIT 10;

                                   QUERY PLAN
--------------------------------------------------------------------------------

 QUERY PLAN DESCRIPTION:
 ------------------------------

 EXPLAIN SELECT customer_name, customer_state FROM customer_dimension LIMIT 10;

 Access Path:
 +-SELECT  LIMIT 10 [Cost: 442, Rows: 10 (NO STATISTICS)] (PATH ID: 0)
 |  Output Only: 10 tuples
 |  Execute on: Query Initiator
 | +---&amp;gt; STORAGE ACCESS for customer_dimension [Cost: 442, Rows: 10K (NO
           STATISTICS)] (PATH ID: 1)
 | |      Projection: public.customer_dimension_b0
 | |      Materialize: customer_dimension.customer_name,
            customer_dimension.customer_state
 | |      Output Only: 10 tuples
 | |      Execute on: v_verticadb_node0004, v_verticadb_node0005,
                      v_verticadb_node0006
     .   .   .
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In Eon Mode, you can override the &lt;code&gt;MEMORYSIZE&lt;/code&gt;, &lt;code&gt;MAXMEMORYSIZE&lt;/code&gt;, and &lt;code&gt;MAXQUERYMEMORYSIZE&lt;/code&gt; settings for built-in global resource pools to fine-tune workloads within a subcluster. See &lt;a href=&#34;../../../en/admin/managing-db/managing-workloads/workload-best-practices/managing-workload-resources-an-eon-db/#&#34;&gt;Managing workload resources in an Eon Mode database&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h2 id=&#34;what-happens-when-a-subcluster-cannot-run-a-query&#34;&gt;What happens when a subcluster cannot run a query&lt;/h2&gt;
&lt;p&gt;To process queries, the nodes in each subcluster must have full coverage of all shards in the database. If the nodes do not have full coverage (which can happen when nodes are down), the subcluster can no longer process queries. This state does not cause the subcluster to shut down. Instead, if you attempt to run a query on a subcluster in this state, you receive an error message telling you that not enough nodes are available to complete the query:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, node_state FROM nodes
   WHERE subcluster_name = &amp;#39;analytics_cluster&amp;#39;;
      node_name       | node_state
----------------------+------------
 v_verticadb_node0004 | DOWN
 v_verticadb_node0005 | UP
 v_verticadb_node0006 | DOWN
(3 rows)

=&amp;gt; SELECT * FROM online_sales.online_sales_fact;
ERROR 9099:  Cannot find participating nodes to run the query
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once the down nodes have recovered and the subcluster has full shard coverage, it will be able to process queries.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;ControllingWhereAQueryRuns&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;controlling-where-a-query-runs&#34;&gt;Controlling where a query runs&lt;/h2&gt;
&lt;p&gt;You can control where specific types of queries run by controlling which subcluster the clients connect to. The best way to enforce restrictions is to create a set of connection load balancing policies to steer clients from specific IP address ranges to nodes in the correct subcluster.&lt;/p&gt;
&lt;p&gt;For example, suppose you have the following database with two subclusters: one for performing data loading, and one for performing analytics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../images/eon/subclusters-load-balancing-example.svg&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;The data load tasks come from a set of ETL systems in the 10.20.0.0/16 IP address range. Analytics tasks can come from any other IP address. In this case, you can create a set of connection load balance policies that ensure that the ETL systems connect to the data load subcluster and all other connections go to the analytics subcluster.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name,node_address,node_address_family,subcluster_name
   FROM v_catalog.nodes;
      node_name       | node_address | node_address_family |  subcluster_name
----------------------+--------------+---------------------+--------------------
 v_verticadb_node0001 | 10.11.12.10  | ipv4                | load_subcluster
 v_verticadb_node0002 | 10.11.12.20  | ipv4                | load_subcluster
 v_verticadb_node0003 | 10.11.12.30  | ipv4                | load_subcluster
 v_verticadb_node0004 | 10.11.12.40  | ipv4                | analytics_subcluster
 v_verticadb_node0005 | 10.11.12.50  | ipv4                | analytics_subcluster
 v_verticadb_node0006 | 10.11.12.60  | ipv4                | analytics_subcluster
(6 rows)

=&amp;gt; CREATE NETWORK ADDRESS node01 ON v_verticadb_node0001 WITH &amp;#39;10.11.12.10&amp;#39;;
CREATE NETWORK ADDRESS
=&amp;gt; CREATE NETWORK ADDRESS node02 ON v_verticadb_node0002 WITH &amp;#39;10.11.12.20&amp;#39;;
CREATE NETWORK ADDRESS
=&amp;gt; CREATE NETWORK ADDRESS node03 ON v_verticadb_node0003 WITH &amp;#39;10.11.12.30&amp;#39;;
CREATE NETWORK ADDRESS
=&amp;gt; CREATE NETWORK ADDRESS node04 ON v_verticadb_node0004 WITH &amp;#39;10.11.12.40&amp;#39;;
CREATE NETWORK ADDRESS
=&amp;gt; CREATE NETWORK ADDRESS node05 ON v_verticadb_node0005 WITH &amp;#39;10.11.12.50&amp;#39;;
CREATE NETWORK ADDRESS
=&amp;gt; CREATE NETWORK ADDRESS node06 ON v_verticadb_node0006 WITH &amp;#39;10.11.12.60&amp;#39;;
CREATE NETWORK ADDRESS

=&amp;gt; CREATE LOAD BALANCE GROUP load_subcluster WITH SUBCLUSTER load_subcluster
   FILTER &amp;#39;0.0.0.0/0&amp;#39;;
CREATE LOAD BALANCE GROUP
=&amp;gt; CREATE LOAD BALANCE GROUP analytics_subcluster WITH SUBCLUSTER
   analytics_subcluster FILTER &amp;#39;0.0.0.0/0&amp;#39;;
CREATE LOAD BALANCE GROUP
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, create routing rules that send clients in the ETL address range to the load balance group for the load subcluster, and all other clients to the group for the analytics subcluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE ROUTING RULE etl_systems ROUTE &amp;#39;10.20.0.0/16&amp;#39; TO load_subcluster;
CREATE ROUTING RULE
=&amp;gt; CREATE ROUTING RULE analytic_clients ROUTE &amp;#39;0.0.0.0/0&amp;#39; TO analytics_subcluster;
CREATE ROUTING RULE
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once you have created the load balance policies, you can test them using the &lt;a href=&#34;../../../en/sql-reference/functions/client-connection-functions/describe-load-balance-decision/#&#34;&gt;DESCRIBE_LOAD_BALANCE_DECISION&lt;/a&gt; function.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT describe_load_balance_decision(&amp;#39;192.168.1.1&amp;#39;);

               describe_load_balance_decision
           --------------------------------
 Describing load balance decision for address [192.168.1.1]
Load balance cache internal version id (node-local): [1]
Considered rule [etl_systems] source ip filter [10.20.0.0/16]...
   input address does not match source ip filter for this rule.
Considered rule [analytic_clients] source ip filter [0.0.0.0/0]...
   input address matches this rule
Matched to load balance group [analytics_cluster] the group has
   policy [ROUNDROBIN] number of addresses [3]
(0) LB Address: [10.11.12.181]:5433
(1) LB Address: [10.11.12.205]:5433
(2) LB Address: [10.11.12.192]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.11.12.205]
    port [5433]

(1 row)

=&amp;gt; SELECT describe_load_balance_decision(&amp;#39;10.20.1.1&amp;#39;);

        describe_load_balance_decision
    --------------------------------
 Describing load balance decision for address [10.20.1.1]
Load balance cache internal version id (node-local): [1]
Considered rule [etl_systems] source ip filter [10.20.0.0/16]...
  input address matches this rule
Matched to load balance group [default_cluster] the group has policy
  [ROUNDROBIN] number of addresses [3]
(0) LB Address: [10.11.12.10]:5433
(1) LB Address: [10.11.12.20]:5433
(2) LB Address: [10.11.12.30]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.11.12.20]
  port [5433]

(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Normally, with these policies, all queries run by the ETL systems will run on the load subcluster. All other queries will run on the analytics subcluster. There are some cases (especially if a subcluster is down or draining) where a client may connect to a node in another subcluster. For this reason, clients should always verify that they are connected to the correct subcluster. See &lt;a href=&#34;../../../en/admin/managing-client-connections/connection-load-balancing/connection-load-balancing-policies/#&#34;&gt;Connection load balancing policies&lt;/a&gt; for more information.&lt;/p&gt;
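&lt;p&gt;One way for a client to verify which subcluster it landed on is to look up its initiator node in the NODES system table. This is a minimal sketch that assumes the V_MONITOR.CURRENT_SESSION system table reports the session&#39;s node name; the output shown is illustrative:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT subcluster_name FROM v_catalog.nodes
   WHERE node_name = (SELECT node_name FROM v_monitor.current_session);
   subcluster_name
----------------------
 analytics_subcluster
(1 row)
&lt;/code&gt;&lt;/pre&gt;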

      </description>
    </item>
    
    <item>
      <title>Eon: Starting and stopping subclusters</title>
      <link>/en/eon/managing-subclusters/starting-and-stopping-subclusters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/starting-and-stopping-subclusters/</guid>
      <description>
        
        
        &lt;p&gt;Subclusters make it convenient to start and stop a group of nodes as needed. You start and stop them with admintools commands or Vertica functions. You can also &lt;a href=&#34;../../../en/mc/db-management/subclusters-mc/starting-and-stopping-subclusters-mc/&#34;&gt;start and stop subclusters with Management Console&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Starting_a_Subcluster&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;starting-a-subcluster&#34;&gt;Starting a subcluster&lt;/h2&gt;
&lt;p&gt;To start a subcluster, use the admintools command &lt;code&gt;restart_subcluster&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t restart_subcluster -h
Usage: restart_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database whose subcluster is to be restarted
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster to be restarted
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete (&amp;#39;never&amp;#39;) will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
  -F, --force           Force the nodes in the subcluster to start and auto
                        recover if necessary
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example starts the subcluster &lt;code&gt;analytics_cluster&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t restart_subcluster -c analytics_cluster \
          -d verticadb -p &lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;
*** Restarting subcluster for database verticadb ***
        Restarting host [10.11.12.192] with catalog [v_verticadb_node0006_catalog]
        Restarting host [10.11.12.181] with catalog [v_verticadb_node0004_catalog]
        Restarting host [10.11.12.205] with catalog [v_verticadb_node0005_catalog]
        Issuing multi-node restart
        Starting nodes:
                v_verticadb_node0004 (10.11.12.181)
                v_verticadb_node0005 (10.11.12.205)
                v_verticadb_node0006 (10.11.12.192)
        Starting Vertica on all nodes. Please wait, databases with a large
            catalog may take a while to initialize.
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (UP)
                     v_verticadb_node0005: (UP) v_verticadb_node0006: (UP)
Communal storage detected: syncing catalog

Restart Subcluster result:  1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;Stopping&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;stopping-a-subcluster&#34;&gt;Stopping a subcluster&lt;/h2&gt;
&lt;p&gt;You can stop a subcluster &lt;a href=&#34;#Graceful_Shutdown&#34;&gt;gracefully&lt;/a&gt; with the function &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/shutdown-with-drain/#&#34;&gt;SHUTDOWN_WITH_DRAIN&lt;/a&gt;, or &lt;a href=&#34;#Immediat&#34;&gt;immediately&lt;/a&gt; with &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/shutdown-subcluster/#&#34;&gt;SHUTDOWN_SUBCLUSTER&lt;/a&gt;. You can also shut down subclusters with the admintools command
&lt;code&gt;&lt;a href=&#34;#Admintoo&#34;&gt;stop_subcluster&lt;/a&gt;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Graceful_Shutdown&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;graceful-shutdown&#34;&gt;Graceful shutdown&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/shutdown-with-drain/#&#34;&gt;SHUTDOWN_WITH_DRAIN&lt;/a&gt; function drains a subcluster&#39;s client connections before shutting it down. The function first marks all nodes in the specified subcluster as draining. Work from existing user sessions continues on draining nodes, but the nodes refuse new client connections and are excluded from load-balancing operations. A dbadmin user can still connect to draining nodes. For more information about client connection draining, see &lt;a href=&#34;../../../en/admin/managing-client-connections/drain-client-connections/#&#34;&gt;Drain client connections&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To run the SHUTDOWN_WITH_DRAIN function, you must specify a timeout value. The function&#39;s behavior depends on the sign of the timeout value:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Positive: The nodes drain until either all the existing connections close or the function reaches the runtime limit set by the timeout value. As soon as one of these conditions is met, the function sends a shutdown message to the subcluster and returns.&lt;/li&gt;
&lt;li&gt;Zero: The function immediately closes any active user sessions on the subcluster and then shuts down the subcluster and returns.&lt;/li&gt;
&lt;li&gt;Negative: The function marks the subcluster&#39;s nodes as draining and waits to shut down the subcluster until all active user sessions disconnect.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After all nodes in a draining subcluster are down, they are automatically reset to a non-draining status.&lt;/p&gt;
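&lt;p&gt;For example, to mark a subcluster&#39;s nodes as draining and wait indefinitely for all active user sessions to disconnect before shutting it down, you could pass a negative timeout (an illustrative call; the subcluster name is hypothetical):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT SHUTDOWN_WITH_DRAIN(&amp;#39;analytics&amp;#39;, -1);
&lt;/code&gt;&lt;/pre&gt;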
&lt;p&gt;The following example demonstrates how you can use a positive timeout value to give active user sessions time to finish their work before shutting down the subcluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, subcluster_name, is_draining, count_client_user_sessions, oldest_session_user FROM draining_status ORDER BY 1;
      node_name       |  subcluster_name   | is_draining | count_client_user_sessions | oldest_session_user
----------------------+--------------------+-------------+----------------------------+---------------------
 v_verticadb_node0001 | default_subcluster | f           |                          0 |
 v_verticadb_node0002 | default_subcluster | f           |                          0 |
 v_verticadb_node0003 | default_subcluster | f           |                          0 |
 v_verticadb_node0004 | analytics          | f           |                          1 | analyst
 v_verticadb_node0005 | analytics          | f           |                          0 |
 v_verticadb_node0006 | analytics          | f           |                          0 |
(6 rows)

=&amp;gt; SELECT SHUTDOWN_WITH_DRAIN(&amp;#39;analytics&amp;#39;, 300);
NOTICE 0:  Draining has started on subcluster (analytics)
NOTICE 0:  Begin shutdown of subcluster (analytics)
                              SHUTDOWN_WITH_DRAIN
--------------------------------------------------------------------------------------------------------------------
Set subcluster (analytics) to draining state
Waited for 3 nodes to drain
Shutdown message sent to subcluster (analytics)

(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can query the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/nodes/#&#34;&gt;NODES&lt;/a&gt; system table to confirm that the subcluster shut down:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT subcluster_name, node_name, node_state FROM nodes;
  subcluster_name   |      node_name       | node_state
--------------------+----------------------+------------
 default_subcluster | v_verticadb_node0001 | UP
 default_subcluster | v_verticadb_node0002 | UP
 default_subcluster | v_verticadb_node0003 | UP
 analytics          | v_verticadb_node0004 | DOWN
 analytics          | v_verticadb_node0005 | DOWN
 analytics          | v_verticadb_node0006 | DOWN
(6 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you want to see more information about the draining and shutdown events, such as whether all user sessions finished their work before the timeout, you can query the dc_draining_events table. In this case, the subcluster still had one active user session when the function reached timeout:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT event_type, event_type_name, event_description, event_result, event_result_name FROM dc_draining_events;
 event_type |       event_type_name        |                          event_description                          | event_result | event_result_name
------------+------------------------------+---------------------------------------------------------------------+--------------+-------------------
          0 | START_DRAIN_SUBCLUSTER       | START_DRAIN for SHUTDOWN of subcluster (analytics)                  |            0 | SUCCESS
          2 | START_WAIT_FOR_NODE_DRAIN    | Wait timeout is 300 seconds                                         |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 0 seconds                                   |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 60 seconds                                  |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 120 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 125 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 180 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 240 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 250 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 300 seconds                                 |            4 | INFORMATIONAL
          3 | END_WAIT_FOR_NODE_DRAIN      | Wait for drain ended with 1 sessions remaining                      |            2 | TIMEOUT
          5 | BEGIN_SHUTDOWN_AFTER_DRAIN   | Staring shutdown of subcluster (analytics) following drain          |            4 | INFORMATIONAL
(12 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After you restart the subcluster, you can query the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/draining-status/#&#34;&gt;DRAINING_STATUS&lt;/a&gt; system table to confirm that the nodes have reset their draining statuses to not draining:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, subcluster_name, is_draining, count_client_user_sessions, oldest_session_user FROM draining_status ORDER BY 1;
      node_name       |  subcluster_name   | is_draining | count_client_user_sessions | oldest_session_user
----------------------+--------------------+-------------+----------------------------+---------------------
 v_verticadb_node0001 | default_subcluster | f           |                          0 |
 v_verticadb_node0002 | default_subcluster | f           |                          0 |
 v_verticadb_node0003 | default_subcluster | f           |                          0 |
 v_verticadb_node0004 | analytics          | f           |                          0 |
 v_verticadb_node0005 | analytics          | f           |                          0 |
 v_verticadb_node0006 | analytics          | f           |                          0 |
(6 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;Immediat&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;immediate-shutdown&#34;&gt;Immediate shutdown&lt;/h3&gt;
&lt;p&gt;To shut down a subcluster immediately, call &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/shutdown-subcluster/#&#34;&gt;SHUTDOWN_SUBCLUSTER&lt;/a&gt;. The following example shuts down the &lt;code&gt;analytics&lt;/code&gt; subcluster immediately, without checking for active client connections:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT SHUTDOWN_SUBCLUSTER(&amp;#39;analytics&amp;#39;);
 SHUTDOWN_SUBCLUSTER
---------------------
Subcluster shutdown
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;Admintoo&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;admintools&#34;&gt;admintools&lt;/h3&gt;
&lt;p&gt;You can use the &lt;code&gt;stop_subcluster&lt;/code&gt; tool to stop a subcluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t stop_subcluster -h
Usage: stop_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database whose subcluster is to be stopped
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster to be stopped
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -n DRAIN_SECONDS, --drain-seconds=DRAIN_SECONDS
                        Seconds to wait for user connections to close.
                        Default value is 60 seconds.
                        When the time expires, connections will be forcibly closed
                        and the db will shut down.
  -F, --force           Force the subcluster to shutdown immediately,
                        even if users are connected.
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete (&amp;#39;never&amp;#39;) will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;By default, &lt;code&gt;stop_subcluster&lt;/code&gt; calls &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/shutdown-with-drain/#&#34;&gt;SHUTDOWN_WITH_DRAIN&lt;/a&gt; to &lt;a href=&#34;../../../en/eon/managing-subclusters/starting-and-stopping-subclusters/#Graceful_Shutdown&#34;&gt;gracefully shut down&lt;/a&gt; the target subcluster. The shutdown process drains client connections from the subcluster before shutting it down.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-n&lt;/code&gt; (&lt;code&gt;--drain-seconds&lt;/code&gt;) option, which has a default value of 60 seconds, allows you to specify the number of seconds to wait before forcefully closing client connections and shutting down the subcluster. If you set a negative &lt;code&gt;-n&lt;/code&gt; value, the subcluster is marked as draining but is not shut down until all active user sessions disconnect.&lt;/p&gt;
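&lt;p&gt;For example, the following command (a sketch, assuming a database named &lt;code&gt;verticadb&lt;/code&gt; and a subcluster named &lt;code&gt;analytics&lt;/code&gt;) marks the subcluster as draining and waits for all active user sessions to disconnect before shutting it down:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t stop_subcluster -d verticadb -c analytics --password &lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt; --drain-seconds=-1
&lt;/code&gt;&lt;/pre&gt;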
&lt;p&gt;In the following example, the subcluster named analytics initially has an active client session, but the session closes before the timeout limit is reached and the subcluster shuts down:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t stop_subcluster -d verticadb -c analytics --password &lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt; --drain-seconds 200
--- Subcluster shutdown ---
Verifying subcluster &amp;#39;analytics&amp;#39;
Node &amp;#39;v_verticadb_node0004&amp;#39; will shutdown
Node &amp;#39;v_verticadb_node0005&amp;#39; will shutdown
Node &amp;#39;v_verticadb_node0006&amp;#39; will shutdown
Connecting to database to begin shutdown of subcluster &amp;#39;analytics&amp;#39;
Shutdown will use connection draining.
Shutdown will wait for all client sessions to complete, up to 200 seconds
Then it will force a shutdown.
Poller has been running for 0:00:00.000022 seconds since 2022-07-28 12:18:04.891781

------------------------------------------------------------
client_sessions     |node_count          |node_names
--------------------------------------------------------------
0                   |5                   |v_verticadb_node0002,v_verticadb_node0004,v_verticadb_node0003,v_verticadb_node0...
1                   |1                   |v_verticadb_node0005
STATUS: vertica.engine.api.db_client.module is still running on 1 host: &lt;span class=&#34;code-variable&#34;&gt;nodeIP&lt;/span&gt; as of 2022-07-28 12:18:14. See /opt/vertica/log/adminTools.log for full details.
Poller has been running for 0:00:10.383018 seconds since 2022-07-28 12:18:04.891781

...

------------------------------------------------------------
client_sessions     |node_count          |node_names
--------------------------------------------------------------
0                   |3                   |v_verticadb_node0002,v_verticadb_node0001,v_verticadb_node0003
down                |3                   |v_verticadb_node0004,v_verticadb_node0005,v_verticadb_node0006
Stopping poller drain_status because it was canceled
SUCCESS running the shutdown metafunction
Not waiting for processes to completely exit
Shutdown operation was successful
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can use the &lt;code&gt;-F&lt;/code&gt; (or &lt;code&gt;--force&lt;/code&gt;) option to shut down a subcluster immediately, without checking for active user sessions or draining the subcluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t stop_subcluster -d verticadb -c analytics --password &lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt; -F
--- Subcluster shutdown ---
Verifying subcluster &amp;#39;analytics&amp;#39;
Node &amp;#39;v_verticadb_node0004&amp;#39; will shutdown
Node &amp;#39;v_verticadb_node0005&amp;#39; will shutdown
Node &amp;#39;v_verticadb_node0006&amp;#39; will shutdown
Connecting to database to begin shutdown of subcluster &amp;#39;analytics&amp;#39;
Running shutdown metafunction. Not using connection draining
STATUS: vertica.engine.api.db_client.module is still running on 1 host: 192.168.111.31 as of 2022-07-28 13:13:57. See /opt/vertica/log/adminTools.log for full details.
STATUS: vertica.engine.api.db_client.module is still running on 1 host: 192.168.111.31 as of 2022-07-28 13:14:07. See /opt/vertica/log/adminTools.log for full details.
SUCCESS running the shutdown metafunction
Not waiting for processes to completely exit
Shutdown operation was successful
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you want to shut down all subclusters in a database, see &lt;a href=&#34;../../../en/admin/operating-db/stopping-db/#Stopping&#34;&gt;Stopping an Eon Mode Database&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Eon: Altering subcluster settings</title>
      <link>/en/eon/managing-subclusters/altering-subcluster-settings/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/altering-subcluster-settings/</guid>
      <description>
        
        
        &lt;p&gt;There are several settings you can alter on a subcluster using the &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-subcluster/#&#34;&gt;ALTER SUBCLUSTER&lt;/a&gt; statement. You can also switch a subcluster from a primary to a secondary subcluster, or from a secondary to a primary.&lt;/p&gt;
&lt;h2 id=&#34;renaming-a-subcluster&#34;&gt;Renaming a subcluster&lt;/h2&gt;
&lt;p&gt;To rename an existing subcluster, use the ALTER SUBCLUSTER statement&#39;s RENAME TO clause:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SUBCLUSTER default_subcluster RENAME TO load_subcluster;
ALTER SUBCLUSTER

=&amp;gt; SELECT DISTINCT subcluster_name FROM subclusters;
  subcluster_name
-------------------
 load_subcluster
 analytics_cluster
(2 rows)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a name=&#34;ChangeDefault&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;changing-the-default-subcluster&#34;&gt;Changing the default subcluster&lt;/h2&gt;
&lt;p&gt;The default subcluster designates which subcluster Vertica adds nodes to if you do not explicitly specify a subcluster when adding nodes to the database. When you create a new database (or upgrade a database from a version prior to 9.3.0), &lt;code&gt;default_subcluster&lt;/code&gt; is the default. You can find the current default subcluster by querying the is_default column of the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/subclusters/#&#34;&gt;SUBCLUSTERS&lt;/a&gt; system table.&lt;/p&gt;
&lt;p&gt;The following example demonstrates finding the default subcluster, and then changing it to the subcluster named analytics_cluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
--------------------
 default_subcluster
(1 row)

=&amp;gt; ALTER SUBCLUSTER analytics_cluster SET DEFAULT;
ALTER SUBCLUSTER
=&amp;gt; SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
-------------------
 analytics_cluster
(1 row)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;converting-a-subcluster-from-primary-to-secondary-or-secondary-to-primary&#34;&gt;Converting a subcluster from primary to secondary, or secondary to primary&lt;/h2&gt;
&lt;p&gt;You usually choose whether a subcluster is &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary&lt;/a&gt; or &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/secondary-subcluster/&#34; title=&#34;A secondary subcluster is a type of subcluster that is easy to start and shutdown on demand.&#34;&gt;secondary&lt;/a&gt; when creating it (see &lt;a href=&#34;../../../en/eon/managing-subclusters/creating-subclusters/#&#34;&gt;Creating subclusters&lt;/a&gt; for more information). However, you can switch a subcluster between the two settings after you have created it. You may want to change whether a subcluster is primary or secondary to affect the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/k-safety/&#34; title=&#34;For more information, see Designing for K-Safety.&#34;&gt;K-safety&lt;/a&gt; of your database. For example, if your only primary subcluster has down nodes that you cannot easily replace, you can promote a secondary subcluster to primary to ensure that losing another primary node does not cause your database to shut down. On the other hand, you may choose to convert a primary subcluster to a secondary before eventually shutting it down. This conversion can prevent the database from losing K-safety if the subcluster you are shutting down contains half or more of the total number of primary nodes in the database.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You cannot promote or demote a subcluster containing the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/initiator-node/&#34; title=&#34;In the context of a client connection, the initiator node is the node associated with the specific host to which the connection was made.&#34;&gt;initiator node&lt;/a&gt;. You must be connected to a node in a subcluster other than the one you want to promote or demote.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;To make a secondary subcluster into a primary subcluster, use the &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/promote-subcluster-to-primary/#&#34;&gt;PROMOTE_SUBCLUSTER_TO_PRIMARY&lt;/a&gt; function:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | f
 load_subcluster   | t
(2 rows)


=&amp;gt; SELECT PROMOTE_SUBCLUSTER_TO_PRIMARY(&amp;#39;analytics_cluster&amp;#39;);
 PROMOTE_SUBCLUSTER_TO_PRIMARY
-------------------------------
 PROMOTE SUBCLUSTER TO PRIMARY
(1 row)


=&amp;gt; SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | t
 load_subcluster   | t
(2 rows)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Making a primary subcluster into a secondary subcluster is similar. However, unlike promoting a secondary subcluster, several conditions can prevent you from demoting a primary subcluster. Vertica prevents you from making a primary subcluster into a secondary if any of the following is true:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The subcluster contains a &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/critical-node/&#34; title=&#34;A critical node is a node whose failure would cause the database to become unsafe and force a shutdown.&#34;&gt;critical node&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The subcluster is the only primary subcluster in the database. You must have at least one primary subcluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/initiator-node/&#34; title=&#34;In the context of a client connection, the initiator node is the node associated with the specific host to which the connection was made.&#34;&gt;initiator node&lt;/a&gt; is a member of the subcluster you are trying to demote. You must call DEMOTE_SUBCLUSTER_TO_SECONDARY from another subcluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To convert a primary subcluster to secondary, use the &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/eon-functions/demote-subcluster-to-secondary/#&#34;&gt;DEMOTE_SUBCLUSTER_TO_SECONDARY&lt;/a&gt; function:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | t
 load_subcluster   | t
(2 rows)

=&amp;gt; SELECT DEMOTE_SUBCLUSTER_TO_SECONDARY(&amp;#39;analytics_cluster&amp;#39;);
 DEMOTE_SUBCLUSTER_TO_SECONDARY
--------------------------------
 DEMOTE SUBCLUSTER TO SECONDARY
(1 row)

=&amp;gt; SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | f
 load_subcluster   | t
(2 rows)
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Eon: Removing subclusters</title>
      <link>/en/eon/managing-subclusters/removing-subclusters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/managing-subclusters/removing-subclusters/</guid>
      <description>
        
        
        &lt;p&gt;Removing a subcluster from the database deletes the subcluster from the Vertica catalog. During the removal, Vertica removes any nodes in the subcluster from the database. These nodes are still part of the database cluster, but are no longer part of the database. If you view your cluster in the MC, you will see these nodes with the status STANDBY. They can be added back to the database by adding them to another subcluster. See &lt;a href=&#34;../../../en/eon/managing-subclusters/creating-subclusters/#&#34;&gt;Creating subclusters&lt;/a&gt; and &lt;a href=&#34;../../../en/eon/managing-subclusters/adding-and-removing-nodes-from-subclusters/#AddingNodes&#34;&gt;Adding New Nodes to a Subcluster&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Vertica places several restrictions on removing a subcluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You cannot remove the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/default-subcluster/&#34; title=&#34;The default subcluster is the subcluster OpenText&amp;amp;trade; Analytics Database adds new nodes to if you do not specify a subcluster to contain the new nodes.&#34;&gt;default subcluster&lt;/a&gt;. If you want to remove the subcluster that is set as the default, you must make another subcluster the default. See &lt;a href=&#34;../../../en/eon/managing-subclusters/altering-subcluster-settings/#ChangeDefault&#34;&gt;Changing the Default Subcluster&lt;/a&gt; for details.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You cannot remove the last &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subcluster&lt;/a&gt; in the database. Your database must always have at least one primary subcluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
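&lt;p&gt;If the subcluster you want to remove is currently the default, first make another subcluster the default. For example, assuming subclusters named &lt;code&gt;load_subcluster&lt;/code&gt; and &lt;code&gt;analytics_cluster&lt;/code&gt;, where &lt;code&gt;analytics_cluster&lt;/code&gt; is the default and is the one you plan to remove:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SUBCLUSTER load_subcluster SET DEFAULT;
ALTER SUBCLUSTER
&lt;/code&gt;&lt;/pre&gt;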

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Removing a subcluster can fail if the database is repartitioning. If this happens, you will see the error message &amp;quot;Transaction commit aborted because session subscriptions do not match catalog.&amp;quot; Wait until the repartitioning is done before removing a subcluster.

&lt;/div&gt;
&lt;p&gt;To remove a subcluster, use the admintools command line &lt;code&gt;db_remove_subcluster&lt;/code&gt; tool:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_remove_subcluster -h
Usage: db_remove_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database to be modified
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster to be removed
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete (&amp;#39;never&amp;#39;) will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
  --skip-directory-cleanup
                        Caution: this option will force you to do a manual
                        cleanup. This option skips directory deletion during
                        remove subcluster. This is best used in a cloud
                        environment where the hosts being removed will be
                        subsequently discarded.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example removes the subcluster named analytics_cluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ adminTools -t db_remove_subcluster -d verticadb -c analytics_cluster -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39;
Found node v_verticadb_node0004 in subcluster analytics_cluster
Found node v_verticadb_node0005 in subcluster analytics_cluster
Found node v_verticadb_node0006 in subcluster analytics_cluster
Found node v_verticadb_node0007 in subcluster analytics_cluster
Waiting for rebalance shards. We will wait for at most 36000 seconds.
Rebalance shards polling iteration number [0], started at [17:09:35], time
    out at [03:09:35]
Attempting to drop node v_verticadb_node0004 ( 10.11.12.40 )
    Shutting down node v_verticadb_node0004
    Sending node shutdown command to &amp;#39;[&amp;#39;v_verticadb_node0004&amp;#39;, &amp;#39;10.11.12.40&amp;#39;,
        &amp;#39;/vertica/data&amp;#39;, &amp;#39;/vertica/data&amp;#39;]&amp;#39;
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0004
    Eon mode detected. The node v_verticadb_node0004 has been removed from
        host 10.11.12.40. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0004
Attempting to drop node v_verticadb_node0005 ( 10.11.12.50 )
    Shutting down node v_verticadb_node0005
    Sending node shutdown command to &amp;#39;[&amp;#39;v_verticadb_node0005&amp;#39;, &amp;#39;10.11.12.50&amp;#39;,
        &amp;#39;/vertica/data&amp;#39;, &amp;#39;/vertica/data&amp;#39;]&amp;#39;
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0005
    Eon mode detected. The node v_verticadb_node0005 has been removed from
        host 10.11.12.50. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0005
Attempting to drop node v_verticadb_node0006 ( 10.11.12.60 )
    Shutting down node v_verticadb_node0006
    Sending node shutdown command to &amp;#39;[&amp;#39;v_verticadb_node0006&amp;#39;, &amp;#39;10.11.12.60&amp;#39;,
        &amp;#39;/vertica/data&amp;#39;, &amp;#39;/vertica/data&amp;#39;]&amp;#39;
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0006
    Eon mode detected. The node v_verticadb_node0006 has been removed from
        host 10.11.12.60. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0006
Attempting to drop node v_verticadb_node0007 ( 10.11.12.70 )
    Shutting down node v_verticadb_node0007
    Sending node shutdown command to &amp;#39;[&amp;#39;v_verticadb_node0007&amp;#39;, &amp;#39;10.11.12.70&amp;#39;,
        &amp;#39;/vertica/data&amp;#39;, &amp;#39;/vertica/data&amp;#39;]&amp;#39;
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0007
    Eon mode detected. The node v_verticadb_node0007 has been removed from
        host 10.11.12.70. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0007
    Reload spread configuration
    Replicating configuration to all nodes
    Checking database state
    Node Status: v_verticadb_node0001: (UP) v_verticadb_node0002: (UP)
        v_verticadb_node0003: (UP)
Communal storage detected: syncing catalog
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
  </channel>
</rss>
