<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Managing nodes</title>
    <link>/en/admin/managing-db/managing-nodes/</link>
    <description>Recent content in Managing nodes on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/admin/managing-db/managing-nodes/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Admin: Stop the database on a node</title>
      <link>/en/admin/managing-db/managing-nodes/stop-on-node/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/stop-on-node/</guid>
      <description>
        
        
        &lt;p&gt;In some cases, you need to take down a node to perform maintenance tasks or to upgrade hardware. You can do this with one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#Administ&#34;&gt;Administration Tools&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#cmdLine&#34;&gt;Command line&lt;/a&gt; (&lt;code&gt;admintools -t stop_node&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Before removing a node from a cluster, check that the cluster has the minimum number of nodes required to comply with K-safety. If necessary, &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/removing-nodes/lowering-ksafety-to-enable-node-removal/&#34;&gt;temporarily lower the database K-safety level&lt;/a&gt;.
&lt;/div&gt;
&lt;p&gt;&lt;a name=&#34;Administ&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;administration-tools&#34;&gt;Administration tools&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run Administration Tools, select &lt;strong&gt;Advanced Menu&lt;/strong&gt;, and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Stop Vertica on Host&lt;/strong&gt; and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose the host that you want to stop and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Return to the Main Menu, select &lt;strong&gt;View Database Cluster State&lt;/strong&gt;, and click &lt;strong&gt;OK&lt;/strong&gt;. The host you previously stopped should appear DOWN.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can now perform maintenance.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/restart-on-node/&#34;&gt;Restart the database on a node&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;cmdLine&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;command-line&#34;&gt;Command line&lt;/h2&gt;
&lt;p&gt;You can use the command-line tool &lt;code&gt;stop_node&lt;/code&gt; to stop the database on one or more nodes. &lt;code&gt;stop_node&lt;/code&gt; takes one or more node IP addresses as arguments. For example, the following command stops the database on two nodes:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t stop_node -s 192.0.2.1,192.0.2.2
&lt;/code&gt;&lt;/pre&gt;
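&lt;p&gt;To verify the result, you can query the &lt;code&gt;NODES&lt;/code&gt; system table from a node that is still up; the stopped nodes should report a &lt;code&gt;DOWN&lt;/code&gt; state. A minimal check from vsql:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, node_state FROM nodes WHERE node_state = &#39;DOWN&#39;;
&lt;/code&gt;&lt;/pre&gt;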
      </description>
    </item>
    
    <item>
      <title>Admin: Restart the database on a node</title>
      <link>/en/admin/managing-db/managing-nodes/restart-on-node/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/restart-on-node/</guid>
      <description>
        
        
        &lt;p&gt;After &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/stop-on-node/&#34;&gt;stopping a node&lt;/a&gt; to perform maintenance tasks such as upgrading hardware, you need to restart the node so it can reconnect with the database cluster.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run Administration Tools. From the Main Menu select &lt;strong&gt;Restart Vertica on Host&lt;/strong&gt; and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the database and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the host that you want to restart and click &lt;strong&gt;OK&lt;/strong&gt;.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

This process may take a few moments.

&lt;/div&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Return to the Main Menu, select &lt;strong&gt;View Database Cluster State&lt;/strong&gt;, and click &lt;strong&gt;OK&lt;/strong&gt;. The host you restarted now appears as UP, as shown.&lt;br /&gt;&lt;br /&gt;&lt;img src=&#34;../../../../images/howtotbringanodebackup-example.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
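&lt;p&gt;Alternatively, you can restart the database on a node from the command line with the admintools &lt;code&gt;restart_node&lt;/code&gt; tool. A minimal sketch, where the node address and database name are illustrative:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t restart_node -s 192.0.2.1 -d mydb
&lt;/code&gt;&lt;/pre&gt;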

      </description>
    </item>
    
    <item>
      <title>Admin: Setting node type</title>
      <link>/en/admin/managing-db/managing-nodes/setting-node-type/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/setting-node-type/</guid>
      <description>
        
        
        &lt;p&gt;When you create a node, OpenText™ Analytics Database automatically sets its type to &lt;code&gt;PERMANENT&lt;/code&gt;. This enables the database to use this node to store data. You can change a node&#39;s type with 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/statements/alter-statements/alter-node/#&#34;&gt;ALTER NODE&lt;/a&gt;&lt;/code&gt; to one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;PERMANENT (default): A node that stores data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;EPHEMERAL: A node that is in transition from one type to another—typically, from PERMANENT to either STANDBY or EXECUTE.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;STANDBY: A node that is reserved to replace any node when it goes down. A standby node stores no segments or data until it is called to replace a down node. When used as a replacement node, the database changes its type to PERMANENT. For more information, see &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/active-standby-nodes/#&#34;&gt;Active standby nodes&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;EXECUTE: A node that is reserved for computation purposes only. An execute node contains no segments or data.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

STANDBY and EXECUTE node types are supported only in Enterprise Mode.

&lt;/div&gt;
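&lt;p&gt;For example, to change a node to a standby node (the node name is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER NODE v_vmart_node0004 IS STANDBY;
&lt;/code&gt;&lt;/pre&gt;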


      </description>
    </item>
    
    <item>
      <title>Admin: Active standby nodes</title>
      <link>/en/admin/managing-db/managing-nodes/active-standby-nodes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/active-standby-nodes/</guid>
      <description>
        
        
        &lt;p&gt;An &lt;em&gt;active standby node&lt;/em&gt; is a node in an Enterprise Mode database that is available to replace any failed node. Unlike permanent OpenText™ Analytics Database nodes, a standby node does not perform computations or contain data. If a permanent node fails and remains down past the failover time limit, an active standby node can replace it. After replacing the failed node, the active standby node contains the projections and performs all calculations of the node it replaced.&lt;/p&gt;
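&lt;p&gt;For example, you can manually replace a down node with a standby node using &lt;code&gt;ALTER NODE&lt;/code&gt; (the node names are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER NODE v_vmart_node0002 REPLACE WITH v_vmart_node0005;
&lt;/code&gt;&lt;/pre&gt;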
&lt;h2 id=&#34;in-this-section&#34;&gt;In this section&lt;/h2&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Large cluster</title>
      <link>/en/admin/managing-db/managing-nodes/large-cluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/large-cluster/</guid>
      <description>
        
        
        &lt;p&gt;OpenText™ Analytics Database uses the Spread service to broadcast control messages between database nodes. This service can limit the growth of a database cluster. As you increase the number of cluster nodes, the load on the Spread service also increases as more participants exchange messages. This increased load can slow overall cluster performance. Also, network addressing limits the maximum number of participants in the Spread service to 120 (and often far fewer). The &lt;em&gt;large cluster&lt;/em&gt; feature helps you overcome these Spread limitations.&lt;/p&gt;
&lt;p&gt;When large cluster is enabled, a subset of cluster nodes, called &lt;em&gt;control nodes&lt;/em&gt;, exchange messages using the Spread service. Other nodes in the cluster are assigned to one of these control nodes, and depend on them for cluster-wide communication. Each control node passes messages from the Spread service to its dependent nodes. When a dependent node needs to broadcast a message to other nodes in the cluster, it passes the message to its control node, which in turn sends the message out to its other dependent nodes and the Spread service.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/large-cluster/basic-large-cluster.svg&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;By setting up dependencies between control nodes and other nodes, you can grow the total number of database nodes, and remain in compliance with the Spread limit of 120 nodes.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Technically, when large cluster is disabled, all of the nodes in the cluster are control nodes. In this case, all nodes connect to Spread. When large cluster is enabled, some nodes become dependent on control nodes.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;A downside of the large cluster feature is that if a control node fails, its dependent nodes are cut off from the rest of the database cluster. These nodes cannot participate in database activities, and the database considers them to be down as well. When the control node recovers, it re-establishes communication between its dependent nodes and the database, so all of the nodes rejoin the cluster.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The Spread service daemon runs as an independent process on the control node host. It is not part of the database process. If the database process goes down on the node—for example, you use admintools to stop the database process on the host—Spread continues to run. As long as the Spread daemon runs on the control node, the node&#39;s dependents can communicate with the database cluster and participate in database activity. Normally, the control node only goes down if the node&#39;s host has an issue—for example, you shut it down, it becomes disconnected from the network, or a hardware failure occurs.

&lt;/div&gt;&lt;/p&gt;
&lt;h2 id=&#34;large-cluster-and-database-growth&#34;&gt;Large cluster and database growth&lt;/h2&gt;
&lt;p&gt;When your database has large cluster enabled, the database decides whether to make a newly added node into a control or a dependent node as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In Enterprise Mode, if the number of control nodes configured for the database cluster is greater than the current number of nodes it contains, the database makes the new node a control node. In Eon Mode, the number of control nodes is set at the subcluster level. If the number of control nodes in the subcluster containing the new node is less than this setting, the database makes the new node a control node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the Enterprise Mode cluster or Eon Mode subcluster has reached its limit on control nodes, a new node becomes a dependent of an existing control node.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When a newly added node is a dependent node, the database automatically assigns it to a control node. The choice of control node depends on the database mode:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Enterprise Mode database: The database assigns the new node to the control node with the fewest dependents. If you created fault groups in your database, it chooses a control node in the same fault group as the new node. This feature lets you use fault groups to organize control nodes and their dependents to reflect the physical layout of the underlying host hardware. For example, you might want dependent nodes to be in the same rack as their control nodes. Otherwise, a failure that affects an entire rack (such as a power supply failure) takes down not only the nodes in that rack, but also nodes in other racks whose control node is in the affected rack. See &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/fault-groups/#&#34;&gt;Fault groups&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Eon Mode database: The database always adds new nodes to a subcluster. The database assigns the new node to the control node with the fewest dependent nodes in that subcluster. Every subcluster in an Eon Mode database with large cluster enabled has at least one control node. Keeping dependent nodes in the same subcluster as their control node maintains subcluster isolation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Spread&#39;s upper limit of 120 participants can cause errors when adding a subcluster to an Eon Mode database. If your database cluster has 120 control nodes, attempting to create a subcluster fails with an error. Every subcluster must have at least one control node. When your cluster has 120 control nodes, the database cannot create a control node for the new subcluster. If this error occurs, you must reduce the number of control nodes in your database cluster before adding a subcluster.&lt;/p&gt;
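&lt;p&gt;To review the current control node layout, you can query the &lt;code&gt;LARGE_CLUSTER_CONFIGURATION_STATUS&lt;/code&gt; system table, which maps each node to its Spread host (the exact columns may vary by version):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT * FROM large_cluster_configuration_status;
&lt;/code&gt;&lt;/pre&gt;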
&lt;p&gt;&lt;a name=&#34;When&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;when-to-enable-large-cluster&#34;&gt;When to enable large cluster&lt;/h2&gt;
&lt;p&gt;The database automatically enables large cluster in two cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The database cluster contains 120 or more nodes. This is true for both Enterprise Mode and Eon Mode.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You create an Eon Mode &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/subcluster/&#34; title=&#34;A subset of a cluster in an Eon Mode database.&#34;&gt;subcluster&lt;/a&gt; (either a &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subcluster&lt;/a&gt; or a &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/secondary-subcluster/&#34; title=&#34;A secondary subcluster is a type of subcluster that is easy to start and shutdown on demand.&#34;&gt;secondary subcluster&lt;/a&gt;) with an initial node count of 16 or more.&lt;/p&gt;
&lt;p&gt;The database does not automatically enable large cluster if you expand an existing subcluster to 16 or more nodes by adding nodes to it.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You can prevent the database from automatically enabling large cluster when you create a subcluster with 16 or more nodes by setting the control-set-size parameter to -1. See &lt;a href=&#34;../../../../en/eon/managing-subclusters/creating-subclusters/#&#34;&gt;Creating subclusters&lt;/a&gt; for details.

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can manually enable large cluster before the database enables it automatically. As a best practice, enable large cluster when your database cluster size reaches a threshold:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;For cloud-based databases, enable large cluster when the cluster contains 16 or more nodes. In a cloud environment, your database uses point-to-point network communications. Spread scales poorly in point-to-point communications mode. Enabling large cluster when the database cluster reaches 16 nodes helps limit the impact caused by Spread being in point-to-point mode.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For on-premises databases, enable large cluster when the cluster reaches 50 to 80 nodes. Spread scales better in an on-premises environment. However, by the time the cluster size reaches 50 to 80 nodes, Spread may begin exhibiting performance issues.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In either cloud or on-premises environments, enable large cluster if you begin to notice Spread-related performance issues. Symptoms of Spread performance issues include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The load on the Spread service begins to cause performance issues. The database uses Spread for cluster-wide control messages, so Spread performance issues can adversely affect database performance. This is particularly true for cloud-based databases, where Spread performance problems become a bottleneck sooner, due to the nature of network broadcasting in the cloud infrastructure. In on-premises databases, broadcast messages are usually less of a concern because they typically remain within the local subnet. Even so, Spread usually becomes a bottleneck before the cluster reaches 120 nodes, the point at which the database enables large cluster automatically.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The compressed list of addresses in your cluster is too large to fit in a maximum transmission unit (MTU) packet (1478 bytes). The MTU packet has to contain all of the addresses for the nodes participating in the Spread service. Under ideal circumstances (when your nodes have the IP addresses 1.1.1.1, 1.1.1.2 and so on) 120 addresses can fit in this packet. This is why the database automatically enables large cluster if your database cluster reaches 120 nodes. In practice, the compressed list of IP addresses will reach the MTU packet size limit at 50 to 80 nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
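&lt;p&gt;As a sketch of manually enabling large cluster on an Eon Mode database, you can set the number of control nodes for a subcluster, realign control node assignments, and reload Spread; the subcluster name and count below are illustrative, and the database must then be restarted for the new layout to take effect:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT SET_CONTROL_SET_SIZE(&#39;default_subcluster&#39;, 4);
=&amp;gt; SELECT REALIGN_CONTROL_NODES(&#39;default_subcluster&#39;);
=&amp;gt; SELECT RELOAD_SPREAD(true);
&lt;/code&gt;&lt;/pre&gt;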

      </description>
    </item>
    
    <item>
      <title>Admin: Fault groups</title>
      <link>/en/admin/managing-db/managing-nodes/fault-groups/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/fault-groups/</guid>
      <description>
        
        
        
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You cannot create fault groups for an Eon Mode database. Rather, OpenText™ Analytics Database automatically creates fault groups on a large cluster Eon database; these fault groups are configured around each subcluster&#39;s control nodes and their dependents. These fault groups are managed internally by the database and are not accessible to users.

&lt;/div&gt;
&lt;p&gt;Fault groups let you configure an Enterprise Mode database for your physical cluster layout. Sharing your cluster topology lets you use &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/terrace-routing/&#34;&gt;terrace routing&lt;/a&gt; to reduce the buffer requirements of large queries. It also helps to minimize the risk of correlated failures inherent in your environment, usually caused by shared resources.&lt;/p&gt;
&lt;p&gt;The database automatically creates fault groups around 
control nodes (servers that run &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/spread/&#34; title=&#34;An open source toolkit used in OpenText&amp;amp;trade; Analytics Database to provide a high performance messaging service that is resilient to network faults.&#34;&gt;spread&lt;/a&gt;) in large cluster arrangements, placing nodes that share a control node in the same fault group. Automatic and user-defined fault groups do not include ephemeral nodes because such nodes hold no data.&lt;/p&gt;
&lt;p&gt;Consider defining your own fault groups specific to your cluster&#39;s physical layout if you want to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use terrace routing to reduce the buffer requirements of large queries.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reduce the risk of correlated failures. For example, by defining your rack layout, the database can better tolerate a rack failure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Influence the placement of control nodes in the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;OpenText™ Analytics Database supports complex, hierarchical fault groups of different shapes and sizes. The database platform provides a fault group script (DDL generator), SQL statements, system tables, and other monitoring tools.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/architecture/enterprise-concepts/high-availability-with-fault-groups/#&#34;&gt;High availability with fault groups&lt;/a&gt; for an overview of fault groups with a cluster topology example.&lt;/p&gt;
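&lt;p&gt;As a minimal sketch, you can create a rack-based fault group and add a node to it with SQL (the group and node names are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE FAULT GROUP rack1;
=&amp;gt; ALTER FAULT GROUP rack1 ADD NODE v_vmart_node0001;
&lt;/code&gt;&lt;/pre&gt;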

      </description>
    </item>
    
    <item>
      <title>Admin: Terrace routing</title>
      <link>/en/admin/managing-db/managing-nodes/terrace-routing/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/terrace-routing/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Before you apply terrace routing to your database, be sure you are familiar with &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/large-cluster/&#34;&gt;large cluster&lt;/a&gt; and &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/fault-groups/&#34;&gt;fault groups&lt;/a&gt;.
&lt;/div&gt;
&lt;p&gt;Terrace routing can significantly reduce message buffering on a large cluster database. The following sections describe how OpenText™ Analytics Database implements terrace routing on &lt;a href=&#34;#TerraceRoutingEnterprise&#34;&gt;Enterprise Mode&lt;/a&gt; and &lt;a href=&#34;#TerraceRoutingEon&#34;&gt;Eon Mode&lt;/a&gt; databases.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;TerraceRoutingEnterprise&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;terrace-routing-on-enterprise-mode&#34;&gt;Terrace routing on Enterprise Mode&lt;/h2&gt;
&lt;p&gt;Terrace routing on an Enterprise Mode database is implemented through fault groups that define a rack-based topology. In a large cluster with terrace routing disabled, nodes in a database cluster form a fully connected network, where each non-dependent (&lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/large-cluster/planning-large-cluster/&#34;&gt;control&lt;/a&gt;) node sends messages across the database cluster through connections with all other non-dependent nodes, both within and outside its own rack/fault group:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/large-cluster/no-terrace-routing2-racks.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;In this case, large database clusters can require many connections on each node, where each connection incurs its own network buffering requirements. The total number of buffers required for each node is calculated as follows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;(&lt;span class=&#34;code-variable&#34;&gt;numRacks&lt;/span&gt; * &lt;span class=&#34;code-variable&#34;&gt;numRackNodes&lt;/span&gt;) - 1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In a two-rack cluster with 4 nodes per rack as shown above, this resolves to 7 buffers for each node.&lt;/p&gt;
&lt;p&gt;With terrace routing enabled, you can considerably reduce large cluster network buffering. Each &lt;em&gt;n&lt;/em&gt;th node in a rack/fault group is paired with the corresponding &lt;em&gt;n&lt;/em&gt;th node of all other fault groups. For example, with terrace routing enabled, messaging in the same two-rack cluster is now implemented as follows:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/large-cluster/terrace-routing2-racks.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;Thus, a message that originates from node 2 on rack A (A2) is sent to all other nodes on rack A; each rack A node then conveys the message to its corresponding node on rack B—A1 to B1, A2 to B2, and so on.&lt;/p&gt;
&lt;p&gt;With terrace routing enabled, each node of a given rack avoids the overhead of maintaining message buffers to all other nodes. Instead, each node is only responsible for maintaining connections to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;All other nodes of the same rack (&lt;em&gt;&lt;code&gt;numRackNodes&lt;/code&gt;&lt;/em&gt;&lt;code&gt; - 1&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;One node on each of the other racks (&lt;em&gt;&lt;code&gt;numRacks&lt;/code&gt;&lt;/em&gt;&lt;code&gt; - 1&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thus, the total number of message buffers required for each node is calculated as follows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;(&lt;span class=&#34;code-variable&#34;&gt;numRackNodes&lt;/span&gt;-1) + (&lt;span class=&#34;code-variable&#34;&gt;numRacks&lt;/span&gt;-1)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In a two-rack cluster with 4 nodes per rack as shown earlier, this resolves to 4 buffers for each node.&lt;/p&gt;
&lt;p&gt;Terrace routing trades time (intra-rack hops) for space (network message buffers). As a cluster expands with additional racks and nodes, the argument favoring this trade-off becomes increasingly persuasive:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/large-cluster/terrace-routing3-racks.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;In this three-rack cluster with 4 nodes per rack, without terrace routing the number of buffers required by each node would be 11. With terrace routing, the number of buffers per node is 5. As a cluster expands with the addition of racks and nodes per rack, the disparity between buffer requirements widens. For example, given a six-rack cluster with 16 nodes per rack, without terrace routing the number of buffers required per node is 95; with terrace routing, 20.&lt;/p&gt;
&lt;h3 id=&#34;enabling-terrace-routing&#34;&gt;Enabling terrace routing&lt;/h3&gt;
&lt;p&gt;Terrace routing depends on &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/fault-groups/creating-fault-group-input-file/&#34;&gt;fault group definitions&lt;/a&gt; that describe a cluster network topology organized around racks and their member nodes. As noted earlier, when terrace routing is enabled, the database first distributes data within the rack/fault group; it then uses &lt;em&gt;n&lt;/em&gt;th node-to-&lt;em&gt;n&lt;/em&gt;th node mappings to forward this data to all other racks in the database cluster.&lt;/p&gt;
&lt;p&gt;You enable (or disable) terrace routing for any Enterprise Mode large cluster that implements rack-based fault groups through configuration parameter &lt;a href=&#34;../../../../en/sql-reference/config-parameters/general-parameters/#TerraceRoutingFactor&#34;&gt;TerraceRoutingFactor&lt;/a&gt;. To enable terrace routing, set this parameter as follows:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/large-cluster/terrace-routing-factor.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;numRackNodes&lt;/code&gt;&lt;/em&gt;: Number of nodes in a rack&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;numRacks&lt;/code&gt;&lt;/em&gt;: Number of racks in the cluster&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
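&lt;p&gt;The enablement thresholds in the table below are consistent with comparing TerraceRoutingFactor against the ratio of per-node connection counts without and with terrace routing; that is, terrace routing is enabled when:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;TerraceRoutingFactor &amp;lt; ((&lt;span class=&#34;code-variable&#34;&gt;numRacks&lt;/span&gt; * &lt;span class=&#34;code-variable&#34;&gt;numRackNodes&lt;/span&gt;) - 1) / ((&lt;span class=&#34;code-variable&#34;&gt;numRackNodes&lt;/span&gt; - 1) + (&lt;span class=&#34;code-variable&#34;&gt;numRacks&lt;/span&gt; - 1))
&lt;/code&gt;&lt;/pre&gt;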
&lt;p&gt;For example:

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th  rowspan=&#34;2&#34; &gt;
#Racks&lt;/th&gt; 

&lt;th  rowspan=&#34;2&#34; &gt;
Nodes/rack&lt;/th&gt; 


&lt;th  colspan=&#34;2&#34;  class=&#34;hcenter&#34; &gt;
#Connections&lt;/th&gt; 

&lt;th  rowspan=&#34;2&#34; &gt;
Terrace routing enabled if &lt;br /&gt;TerraceRoutingFactor less than:&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;th &gt;
Without terrace routing&lt;/th&gt; 

&lt;th &gt;
With terrace routing&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 


&lt;td  class=&#34;hright&#34; &gt;
2&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
16&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
31&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
16&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
1.94&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 


&lt;td  class=&#34;hright&#34; &gt;
4&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
16&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
63&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
18&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
3.5&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 


&lt;td  class=&#34;hright&#34; &gt;
6&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
16&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
95&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
20&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
4.75&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 


&lt;td  class=&#34;hright&#34; &gt;
8&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
16&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
127&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
22&lt;/td&gt; 


&lt;td  class=&#34;hright&#34; &gt;
5.77&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;By default, TerraceRoutingFactor is set to 2, which generally ensures that terrace routing is enabled for any Enterprise Mode large cluster that implements rack-based fault groups. Enabling terrace routing is recommended for any cluster that contains 64 or more nodes, or whose queries often require excessive buffer space.&lt;/p&gt;
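&lt;p&gt;For example, to explicitly set the parameter back to its default value:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT SET TerraceRoutingFactor = 2;
&lt;/code&gt;&lt;/pre&gt;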
&lt;p&gt;To disable terrace routing, set TerraceRoutingFactor to a large integer such as 1000:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT SET TerraceRoutingFactor = 1000;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;TerraceRoutingEon&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;terrace-routing-on-eon-mode&#34;&gt;Terrace routing on Eon Mode&lt;/h2&gt;
&lt;p&gt;As in Enterprise Mode, terrace routing is enabled by default on an Eon Mode database, and is implemented through fault groups. However, you do not create fault groups for an Eon Mode database. Rather, OpenText™ Analytics Database automatically creates fault groups on a large cluster database; these fault groups are configured around each subcluster&#39;s control nodes and their dependents. These fault groups are managed internally and are not accessible to users.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Elastic cluster</title>
      <link>/en/admin/managing-db/managing-nodes/elastic-cluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/elastic-cluster/</guid>
      <description>
        
        
        
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Elastic Cluster is an Enterprise Mode-only feature. For scaling your database under Eon Mode, see &lt;a href=&#34;../../../../en/eon/scaling-your-eon-db/#&#34;&gt;Scaling your Eon Mode database&lt;/a&gt;.

&lt;/div&gt;
&lt;p&gt;You can scale your cluster up or down to meet the needs of your database. The most common case is to add nodes to your database cluster to accommodate more data and provide better query performance. However, you can scale down your cluster if you find that it is over-provisioned, or if you need to divert hardware for other uses.&lt;/p&gt;
&lt;p&gt;You scale your cluster by adding or removing nodes. Nodes can be added or removed without shutting down or restarting the database. After adding a node or before removing a node, OpenText™ Analytics Database begins a rebalancing process that moves data around the cluster to populate the new nodes or move data off nodes about to be removed from the database. During this process, nodes that are not being added or removed can exchange data to maintain robust intelligent &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/k-safety/&#34; title=&#34;For more information, see Designing for K-Safety.&#34;&gt;K-safety&lt;/a&gt;. If the database determines that the data cannot be rebalanced in a single iteration due to lack of disk space, then the rebalance operation spans multiple iterations.&lt;/p&gt;
&lt;p&gt;To help make data rebalancing due to cluster scaling more efficient, the database locally segments data storage on each node so it can be easily moved to other nodes in the cluster. When a new node is added to the cluster, existing nodes in the cluster give up some of their data segments to populate the new node. They also exchange segments to minimize the number of nodes that any one node depends upon. This strategy minimizes the number of nodes that might become &lt;a href=&#34;../../../../en/architecture/enterprise-concepts/k-safety-an-enterprise-db/#criticalNodes&#34;&gt;critical when a node fails&lt;/a&gt;. When a node is removed from the cluster, its storage containers are moved to other nodes in the cluster (which also relocates data segments to minimize how many nodes might become critical when a node fails). This method of breaking data into portable segments is referred to as elastic cluster, as it facilitates enlarging or shrinking the cluster.&lt;/p&gt;
&lt;p&gt;The alternative to elastic cluster is to re-segment all projection data and redistribute it evenly among all database nodes any time a node is added or removed. This method requires more processing and more disk space, because all data in all projections must be dumped and reloaded.&lt;/p&gt;
&lt;h2 id=&#34;elastic-cluster-scaling-factor&#34;&gt;Elastic cluster scaling factor&lt;/h2&gt;
&lt;p&gt;In a new installation, each node has a &lt;em&gt;scaling factor&lt;/em&gt; that specifies the number of local segments (see &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/elastic-cluster/scaling-factor/#&#34;&gt;Scaling factor&lt;/a&gt;). Rebalance efficiently redistributes data by relocating local segments provided that, after nodes are added or removed, there are sufficient local segments in the cluster to redistribute the data evenly (determined by &lt;a href=&#34;../../../../en/sql-reference/system-tables/v-catalog-schema/elastic-cluster/&#34;&gt;MAXIMUM_SKEW_PERCENT&lt;/a&gt;). For example, if the scaling factor = 8, and there are initially 5 nodes, then there are a total of 40 local segments cluster-wide.&lt;/p&gt;
&lt;p&gt;If you add two more nodes (seven nodes total), the database places five local segments on each of two nodes and six on each of the remaining five nodes, resulting in roughly a 16.7 percent skew. Rebalance relocates local segments only if the resulting skew is less than the allowed threshold, as determined by MAXIMUM_SKEW_PERCENT. Otherwise, segmentation space (and hence data, if uniformly distributed over this space) is evenly distributed among the seven nodes, and new local segment boundaries are drawn for each node, such that each node again has eight local segments.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

By default, the scaling factor only has an effect while the database rebalances itself. While rebalancing, each node breaks the projection segments it contains into storage containers, which it then moves to other nodes if necessary. After rebalancing, the data is recombined into &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/ros-read-optimized-store/&#34; title=&#34;Read Optimized Store (ROS) is a highly optimized, read-oriented, disk storage structure, organized by projection.&#34;&gt;ROS&lt;/a&gt; containers. It is possible to have the database always group data into storage containers. See &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/elastic-cluster/local-data-segmentation/#&#34;&gt;Local data segmentation&lt;/a&gt; for more information.

&lt;/div&gt;&lt;/p&gt;
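&lt;p&gt;A minimal sketch of adjusting the scaling factor, assuming the &lt;code&gt;SET_SCALING_FACTOR&lt;/code&gt; meta-function described on the linked Scaling factor page:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT SET_SCALING_FACTOR(8);
&lt;/code&gt;&lt;/pre&gt;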
&lt;h2 id=&#34;enabling-elastic-cluster&#34;&gt;Enabling elastic cluster&lt;/h2&gt;
&lt;p&gt;You enable elastic cluster with &lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/cluster-functions/enable-elastic-cluster/#&#34;&gt;ENABLE_ELASTIC_CLUSTER&lt;/a&gt;. Query the &lt;a href=&#34;../../../../en/sql-reference/system-tables/v-catalog-schema/elastic-cluster/#&#34;&gt;ELASTIC_CLUSTER&lt;/a&gt; system table to verify that elastic cluster is enabled:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT is_enabled FROM ELASTIC_CLUSTER;
 is_enabled
------------
 t
(1 row)
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Admin: Adding nodes</title>
      <link>/en/admin/managing-db/managing-nodes/adding-nodes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/adding-nodes/</guid>
      <description>
        
        
        &lt;p&gt;There are many reasons to add one or more nodes to an OpenText™ Analytics Database cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Increase system performance or capacity.&lt;/strong&gt; Add nodes to handle a high query load or load latency, or to increase disk space in Enterprise Mode without adding storage locations to existing nodes.&lt;/p&gt;
&lt;p&gt;The database response time depends on factors such as type and size of the application query, database design, data size and data types stored, available computational power, and network bandwidth. Adding nodes to a database cluster does not necessarily improve the system response time for every query, especially if the response time is already short or not hardware-bound.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Make the database K-safe&lt;/strong&gt; (&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/k-safety/&#34; title=&#34;For more information, see Designing for K-Safety.&#34;&gt;K-safety&lt;/a&gt;=1) or increase K-safety to 2. See &lt;a href=&#34;../../../../en/admin/failure-recovery/#&#34;&gt;Failure recovery&lt;/a&gt; for details.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Swap or replace hardware.&lt;/strong&gt; Swap out a node to perform maintenance or hardware upgrades.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
If you install the database on a single node without specifying the IP address or host name (or you used &lt;code&gt;localhost&lt;/code&gt;), you cannot expand the cluster. You must reinstall the database and specify an IP address or host name that is not &lt;code&gt;localhost/127.0.0.1&lt;/code&gt;.
&lt;/div&gt;
&lt;p&gt;Adding nodes consists of the following general tasks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/admin/backup-and-restore/creating-backups/creating-full-backups/&#34;&gt;Back up the database&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It is strongly recommended that you back up the database before you perform this significant operation because it entails creating new projections, refreshing them, and then deleting the old projections. See &lt;a href=&#34;../../../../en/admin/backup-and-restore/#&#34;&gt;Backing up and restoring the database&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;The process of migrating the projection design to include the additional nodes could take a while; however, during this time, all user activity on the database can proceed normally, using the old projections.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the hosts you want to add to the cluster.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/setup/set-up-on-premises/before-you-install/&#34;&gt;Before you install the database&lt;/a&gt;. You will also need to edit the hosts configuration file on all of the existing nodes in the cluster to ensure they can resolve the new host.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/adding-nodes/adding-hosts-to-cluster/&#34;&gt;Add one or more hosts to the cluster&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/adding-nodes/adding-nodes-to-db/&#34;&gt;Add the hosts&lt;/a&gt; you added to the cluster (in step 3) to the database.&lt;/p&gt;
&lt;p&gt;When you add a host to the database, it becomes a node. You can add nodes to your database using either the &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/admin-tools/&#34; title=&#34;OpenText&amp;amp;trade; Analytics Database Administration Tools provides a graphical user interface for managing the database.&#34;&gt;Administration tools&lt;/a&gt; or the &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/mc/&#34; title=&#34;A database management tool that provides a unified view of your OpenText&amp;amp;trade; Analytics Database and lets you monitor multiple clusters from a single point of access.&#34;&gt;Management Console&lt;/a&gt; (see &lt;a href=&#34;../../../../en/mc/monitoring-using-mc/#&#34;&gt;Monitoring with MC&lt;/a&gt;). Adding nodes using &lt;code&gt;admintools&lt;/code&gt; preserves the specific order of the nodes you add; a command-line sketch follows this list.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
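&lt;p&gt;As an illustration of step 4, you can add hosts to the database from the command line with the admintools &lt;code&gt;db_add_node&lt;/code&gt; tool; the database name and host addresses here are examples:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t db_add_node -d mydb -s 192.0.2.3,192.0.2.4
&lt;/code&gt;&lt;/pre&gt;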
&lt;p&gt;After you add nodes to the database, it automatically distributes updated configuration files to the rest of the nodes in the cluster and starts the process of rebalancing data in the cluster. See &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/rebalancing-data-across-nodes/#&#34;&gt;Rebalancing data across nodes&lt;/a&gt; for details.&lt;/p&gt;
&lt;p&gt;If you have previously created storage locations using &lt;a href=&#34;../../../../en/sql-reference/statements/create-statements/create-location/&#34;&gt;CREATE LOCATION...ALL NODES&lt;/a&gt;, you must create those locations on the new nodes.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Removing nodes</title>
      <link>/en/admin/managing-db/managing-nodes/removing-nodes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/removing-nodes/</guid>
      <description>
        
        
        &lt;p&gt;Although less common than adding a node, permanently removing a node is useful if the host system is obsolete or over-provisioned.

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Before removing a node from a cluster, check that the cluster has the minimum number of nodes required to comply with K-safety. If necessary, &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/removing-nodes/lowering-ksafety-to-enable-node-removal/&#34;&gt;temporarily lower the database K-safety level&lt;/a&gt;.
&lt;/div&gt;
&lt;/p&gt;
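&lt;p&gt;For example, you can remove a node from the database with the admintools &lt;code&gt;db_remove_node&lt;/code&gt; tool (the database name and host address are illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t db_remove_node -d mydb -s 192.0.2.4
&lt;/code&gt;&lt;/pre&gt;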

      </description>
    </item>
    
    <item>
      <title>Admin: Replacing nodes</title>
      <link>/en/admin/managing-db/managing-nodes/replacing-nodes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/replacing-nodes/</guid>
      <description>
        
        
        &lt;p&gt;If you have a &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/k-safety/&#34; title=&#34;For more information, see Designing for K-Safety.&#34;&gt;K-Safe&lt;/a&gt; database, you can replace nodes, as necessary, without bringing the system down. For example, you might want to replace an existing node if you:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Need to repair an existing host system that no longer functions and restore it to the cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Want to exchange an existing host system for another more powerful system&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

OpenText™ Analytics Database does not support replacing a node on a K-safe=0 database. Use the procedures to &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/adding-nodes/&#34;&gt;add&lt;/a&gt; and &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/removing-nodes/&#34;&gt;remove&lt;/a&gt; nodes instead.

&lt;/div&gt;
&lt;p&gt;The process you use to replace a node depends on whether you are replacing the node with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A host that uses the same name and IP address&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A host that uses a different name and IP address&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An active standby node&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configure the replacement hosts for the database. See &lt;a href=&#34;../../../../en/setup/set-up-on-premises/before-you-install/&#34;&gt;Before you install the database&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read the Important &lt;strong&gt;Tips&lt;/strong&gt; sections under &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/adding-nodes/adding-hosts-to-cluster/#&#34;&gt;Adding hosts to a cluster&lt;/a&gt; and &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/removing-nodes/removing-hosts-from-cluster/#&#34;&gt;Removing hosts from a cluster&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure that the database administrator user exists on the new host and is configured identically to the existing hosts. OpenText™ Analytics Database sets up passwordless SSH as needed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure that directories for the catalog path, data path, and any storage locations either were added to the database when you created it or are mounted correctly on the new host, and that they have read and write permissions for the database administrator user. Also ensure that there is sufficient disk space.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Follow the best practice procedure below for introducing the failed hardware back into the cluster to avoid spurious full-node rebuilds.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;best-practice-for-restoring-failed-hardware&#34;&gt;Best practice for restoring failed hardware&lt;/h2&gt;
&lt;p&gt;Following this procedure prevents OpenText™ Analytics Database from misdiagnosing missing disks or bad mounts as data corruption, which would result in a time-consuming, full-node recovery.&lt;/p&gt;
&lt;p&gt;If a server fails due to hardware issues, for example a bad disk or a failed controller, upon repairing the hardware:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Reboot the machine into runlevel 1, which is a root and console-only mode.&lt;/p&gt;
&lt;p&gt;Runlevel 1 prevents network connectivity and keeps the database from attempting to reconnect to the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In runlevel 1, validate that the hardware has been repaired, the controllers are online, and any RAID recovery is able to proceed.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You do not need to initiate RAID recovery in runlevel 1; simply validate that it can proceed.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Only after the hardware is confirmed consistent should you reboot to runlevel 3 or higher.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
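&lt;p&gt;On hosts that use SysV-style runlevels, steps 1 and 3 might look like the following sketch; on systemd-based hosts, the rough equivalents are the &lt;code&gt;rescue.target&lt;/code&gt; and &lt;code&gt;multi-user.target&lt;/code&gt; targets:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# telinit 1   # drop to single-user, console-only mode for hardware validation
# telinit 3   # after the hardware checks out, return to multi-user mode with networking
&lt;/code&gt;&lt;/pre&gt;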
&lt;p&gt;At this point, the network activates, and the database rejoins the cluster and automatically recovers any missing data. Note that, on a single-node database, if any files that were associated with a projection have been deleted or corrupted, the database deletes all files associated with that projection, which could result in data loss.&lt;/p&gt;


      </description>
    </item>
    
    <item>
      <title>Admin: Rebalancing data across nodes</title>
      <link>/en/admin/managing-db/managing-nodes/rebalancing-data-across-nodes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/rebalancing-data-across-nodes/</guid>
      <description>
        
        
        &lt;p&gt;OpenText™ Analytics Database can rebalance your database when you add or remove nodes. As a superuser, you can manually trigger a rebalance with &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/rebalancing-data-across-nodes/rebalancing-data-using-admin-tools-ui/&#34;&gt;Administration Tools&lt;/a&gt;, &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/rebalancing-data-across-nodes/rebalancing-data-using-sql-functions/&#34;&gt;SQL functions&lt;/a&gt;, or the &lt;a href=&#34;../../../../en/mc/db-management/subclusters-mc/rebalancing-data-using-mc/&#34;&gt;Management Console&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A rebalance operation can take some time, depending on the cluster size, the number of projections, and the amount of data they contain. You should allow the process to complete uninterrupted. If you must cancel the operation, call 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/cluster-functions/cancel-rebalance-cluster/#&#34;&gt;CANCEL_REBALANCE_CLUSTER&lt;/a&gt;&lt;/code&gt;.&lt;/p&gt;
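&lt;p&gt;For example, a minimal sketch of starting, and if necessary canceling, a manual rebalance from vsql (CANCEL_REBALANCE_CLUSTER would typically run from a separate session, because REBALANCE_CLUSTER waits for the operation to finish):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT REBALANCE_CLUSTER();
=&amp;gt; SELECT CANCEL_REBALANCE_CLUSTER();
&lt;/code&gt;&lt;/pre&gt;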
&lt;h2 id=&#34;why-rebalance&#34;&gt;Why rebalance?&lt;/h2&gt;
&lt;p&gt;Rebalancing is useful or even necessary after you perform one of the following operations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Change the size of the cluster by adding or removing nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Mark one or more nodes as ephemeral in preparation for removing them from the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change the &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/elastic-cluster/setting-scaling-factor/&#34;&gt;scaling factor&lt;/a&gt; of an elastic cluster, which determines the number of storage containers used to store a projection across the database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the control node size or realign control nodes on a &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/large-cluster/&#34;&gt;large cluster&lt;/a&gt; layout.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Specify more than 120 nodes in your initial database cluster configuration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Modify a &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/fault-groups/&#34;&gt;fault group&lt;/a&gt; by adding or removing nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;general-rebalancing-tasks&#34;&gt;General rebalancing tasks&lt;/h2&gt;
&lt;p&gt;When you rebalance a database cluster, OpenText™ Analytics Database performs the following tasks for all projections, segmented and unsegmented alike:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Distributes data based on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;User-defined &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/fault-groups/&#34;&gt;fault groups&lt;/a&gt;, if specified&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/large-cluster/&#34;&gt;Large cluster&lt;/a&gt; automatic fault groups&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ignores node-specific distribution specifications in projection definitions. Node rebalancing always distributes data across all nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When rebalancing is complete, sets the &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/ancient-history-mark-ahm/&#34; title=&#34;Also known as AHM, the ancient history mark is the oldest epoch whose data is accessible to historical queries.&#34;&gt;Ancient History Mark&lt;/a&gt; to the greatest allowable epoch (now).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The database rebalances segmented and unsegmented projections differently, as described below.&lt;/p&gt;
&lt;h2 id=&#34;rebalancing-segmented-projections&#34;&gt;Rebalancing segmented projections&lt;/h2&gt;
&lt;p&gt;For each segmented projection, the database performs the following tasks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Copies and renames projection buddies and distributes them evenly across all nodes. The renamed projections share the same base name.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Refreshes the new projections.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drops the original projections.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;rebalancing-unsegmented-projections&#34;&gt;Rebalancing unsegmented projections&lt;/h2&gt;
&lt;p&gt;For each unsegmented projection, the database performs the following tasks:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If adding nodes:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Creates projection buddies on them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Maps the new projections to their shared name in the database catalog.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;If dropping nodes&lt;/strong&gt;: drops the projection buddies from them.&lt;/p&gt;
&lt;h2 id=&#34;k-safety-and-rebalancing&#34;&gt;K-safety and rebalancing&lt;/h2&gt;
&lt;p&gt;Until rebalancing completes, the database operates with the existing &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/k-safety/&#34; title=&#34;For more information, see Designing for K-Safety.&#34;&gt;K-safe&lt;/a&gt; value. After rebalancing completes, the database operates with the K-safe value specified during the rebalance operation. The new K-safe value must be equal to or higher than current K-safety. OpenText™ Analytics Database does not support downgrading K-safety and returns a warning if you try to reduce it from its current value. For more information, see &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/removing-nodes/lowering-ksafety-to-enable-node-removal/#&#34;&gt;Lowering K-Safety to enable node removal&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;rebalancing-failure-and-projections&#34;&gt;Rebalancing failure and projections&lt;/h2&gt;
&lt;p&gt;If a failure occurs while rebalancing the database, you can rebalance again. If the cause of the failure has been resolved, the rebalance operation continues from where it failed. However, a failed data rebalance can result in projections becoming out of date.&lt;/p&gt;
&lt;p&gt;To locate out-of-date projections, query the system table 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/system-tables/v-catalog-schema/projections/#&#34;&gt;PROJECTIONS&lt;/a&gt;&lt;/code&gt; as follows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT projection_name, anchor_table_name, is_up_to_date FROM projections
   WHERE is_up_to_date = false;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To remove out-of-date projections, use 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/statements/drop-statements/drop-projection/#&#34;&gt;DROP PROJECTION&lt;/a&gt;&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;temporary-tables&#34;&gt;Temporary tables&lt;/h2&gt;
&lt;p&gt;Node rebalancing has no effect on projections of temporary tables.&lt;/p&gt;
&lt;p&gt;For detailed information about rebalancing, see these &lt;a href=&#34;https://www.vertica.com/knowledgebase/&#34;&gt;Knowledge Base&lt;/a&gt; articles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://vertica.com/kb/Understanding-Rebalancing-Part-1-What-Happens-During-Rebalancing/Content/BestPractices/Understanding-Rebalancing-Part-1-What-Happens-During-Rebalancing.htm&#34;&gt;What Happens During Rebalancing&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://vertica.com/kb/Understanding-Rebalancing-Part-2-Optimizing-for-Rebalancing/Content/BestPractices/Understanding-Rebalancing-Part-2-Optimizing-for-Rebalancing.htm&#34;&gt;Optimizing for Rebalancing&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Redistributing configuration files to nodes</title>
      <link>/en/admin/managing-db/managing-nodes/redistributing-config-files-to-nodes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/redistributing-config-files-to-nodes/</guid>
      <description>
        
        
        &lt;p&gt;The add and remove node processes automatically redistribute the OpenText™ Analytics Database configuration files. You rarely need to redistribute the configuration files manually; doing so can help resolve configuration issues.&lt;/p&gt;
&lt;p&gt;To distribute configuration files to a host:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log on to a host that contains these files and &lt;a href=&#34;../../../../en/admin/using-admin-tools/running-admin-tools/&#34;&gt;start Administration Tools&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the Administration Tools &lt;strong&gt;Main Menu&lt;/strong&gt;, select &lt;strong&gt;Configuration Menu&lt;/strong&gt; and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the &lt;strong&gt;Configuration Menu&lt;/strong&gt;, select &lt;strong&gt;Distribute Config Files&lt;/strong&gt; and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Database Configuration&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the database where you want to distribute the files and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Database configuration files are distributed to all other database hosts. If the files already exist on a host, they are overwritten.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the &lt;strong&gt;Configuration Menu&lt;/strong&gt;, select &lt;strong&gt;Distribute Config Files&lt;/strong&gt; and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;SSL Keys&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Certificates and keys are distributed to all other database hosts. If they already exist on a host, they are overwritten.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the &lt;strong&gt;Configuration Menu&lt;/strong&gt;, select &lt;strong&gt;Distribute Config Files&lt;/strong&gt; and click &lt;strong&gt;OK&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Select &lt;strong&gt;AdminTools Meta-Data&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Administration Tools metadata is distributed to every host in the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/admin/failure-recovery/restarting-db/&#34;&gt;Restart the database&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

&lt;p&gt;To distribute the configuration file &lt;code&gt;admintools.conf&lt;/code&gt; via the command line or scripts, use the admintools option &lt;code&gt;distribute_config_files&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t distribute_config_files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Stopping and starting nodes on MC</title>
      <link>/en/admin/managing-db/managing-nodes/stopping-and-starting-nodes-on-mc/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/stopping-and-starting-nodes-on-mc/</guid>
      <description>
        
        
&lt;p&gt;You can start and stop one or more database nodes through the &lt;strong&gt;Manage&lt;/strong&gt; page by clicking a specific node to select it and then clicking the Start or Stop button in the Node List.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The Stop and Start buttons in the toolbar start and stop the database, not individual nodes.

&lt;/div&gt;
&lt;p&gt;On the &lt;strong&gt;Databases and Clusters&lt;/strong&gt; page, first click a database to select it. To stop or start a node on that database, click the &lt;strong&gt;View&lt;/strong&gt; button to open the Overview page. Then click &lt;strong&gt;Manage&lt;/strong&gt; in the applet panel at the bottom of the page to open the database node view.&lt;/p&gt;
&lt;p&gt;The Start and Stop database buttons are always active, but the node Start and Stop buttons are active only when all selected nodes have the same status; for example, all selected nodes are UP or all are DOWN.&lt;/p&gt;
&lt;p&gt;After you click a Start or Stop button, Management Console updates the status and message icons for the nodes or databases you are starting or stopping.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Upgrading your operating system on nodes in your database cluster</title>
      <link>/en/admin/managing-db/managing-nodes/upgrading-your-os-on-nodes-your-cluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/upgrading-your-os-on-nodes-your-cluster/</guid>
      <description>
        
        
&lt;p&gt;If you need to upgrade the operating system on the nodes in your OpenText™ Analytics Database cluster, consult the documentation for your Linux distribution to make sure it supports the upgrade you are planning.&lt;/p&gt;
&lt;p&gt;For example, the following articles provide information about upgrading Red Hat:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://access.redhat.com/solutions/637583&#34;&gt;How do I upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7?&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/index&#34;&gt;Upgrading from RHEL 7 to RHEL 8&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://access.redhat.com/solutions/21964&#34;&gt;Does Red Hat support upgrades between major versions of Red Hat Enterprise Linux?&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After you confirm that you can perform the upgrade, follow the steps at &lt;a href=&#34;https://vertica.com/kb/UpgradingtheOperatingSystem/Content/BestPractices/UpdateOSinVerticaCluster.htm&#34;&gt;Best Practices for Upgrading the Operating System on Nodes in a Database Cluster&lt;/a&gt;.&lt;/p&gt;
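&lt;p&gt;Before and after the upgrade, you can confirm the release that is installed on each node with standard Linux commands; for example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat /etc/os-release
&lt;/code&gt;&lt;/pre&gt;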

      </description>
    </item>
    
    <item>
      <title>Admin: Reconfiguring node messaging</title>
      <link>/en/admin/managing-db/managing-nodes/reconfiguring-node-messaging/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/reconfiguring-node-messaging/</guid>
      <description>
        
        
&lt;p&gt;Sometimes, nodes of an existing, operational OpenText™ Analytics Database cluster &lt;a href=&#34;#Changing_Node_IP_Addresses&#34;&gt;require new IP addresses&lt;/a&gt;. Cluster nodes might also need to &lt;a href=&#34;#ChangingNodeMessagingProtocol&#34;&gt;change their messaging protocols&lt;/a&gt;—for example, from broadcast to point-to-point. The admintools &lt;code&gt;re_ip&lt;/code&gt; utility performs both tasks.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You cannot change from one address family—IPv4 or IPv6—to another. For example, if hosts in the database cluster are identified by IPv4 network addresses, you can only change host addresses to another set of IPv4 addresses.

&lt;/div&gt;
&lt;p&gt;&lt;a name=&#34;Changing_Node_IP_Addresses&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;changing-ip-addresses&#34;&gt;Changing IP addresses&lt;/h2&gt;
&lt;p&gt;You can use &lt;code&gt;re_ip&lt;/code&gt; to perform two tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#UpdateIP_Addresses&#34;&gt;Update node IP addresses&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#NewNodeControlAddresses&#34;&gt;Change node control and broadcast addresses&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In both cases, &lt;code&gt;re_ip&lt;/code&gt; requires a mapping file that identifies the current node IP addresses, which are stored in &lt;code&gt;admintools.conf&lt;/code&gt;. You can get these addresses in two ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use the admintools utility &lt;code&gt;list_allnodes&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t list_allnodes
Node             | Host          | State | Version        | DB
-----------------+---------------+-------+----------------+-----------
v_vmart_node0001 | 192.0.2.1     | UP    | vertica-12.0.1 | VMart
v_vmart_node0002 | 192.0.2.2     | UP    | vertica-12.0.1 | VMart
v_vmart_node0003 | 192.0.2.3     | UP    | vertica-12.0.1 | VMart
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition tip&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Tip&lt;/h4&gt;

&lt;code&gt;list_allnodes&lt;/code&gt; can help you identify issues you might have accessing the database. For example, if hosts are not communicating with each other, the &lt;code&gt;Version&lt;/code&gt; column displays Unavailable.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Print the content of &lt;code&gt;admintools.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat /opt/vertica/config/admintools.conf
...
[Cluster]
hosts = 192.0.2.1, 192.0.2.2, 192.0.2.3

[Nodes]
node0001 = 192.0.2.1,/home/dbadmin,/home/dbadmin
node0002 = 192.0.2.2,/home/dbadmin,/home/dbadmin
node0003 = 192.0.2.3,/home/dbadmin,/home/dbadmin
...
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;UpdateIP_Addresses&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;update-node-ip-addresses&#34;&gt;Update node IP addresses&lt;/h3&gt;
&lt;p&gt;You can update IP addresses with &lt;code&gt;re_ip&lt;/code&gt; as described below. &lt;code&gt;re_ip&lt;/code&gt; automatically backs up &lt;code&gt;admintools.conf&lt;/code&gt; so you can recover the original settings if necessary.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a mapping file with lines in the following format:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;oldIPaddress&lt;/span&gt; &lt;span class=&#34;code-variable&#34;&gt;newIPaddress&lt;/span&gt;[, &lt;span class=&#34;code-variable&#34;&gt;controlAddress&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;controlBroadcast&lt;/span&gt;]
...
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;192.0.2.1 198.51.100.1, 198.51.100.1, 198.51.100.255
192.0.2.2 198.51.100.2, 198.51.100.2, 198.51.100.255
192.0.2.3 198.51.100.3, 198.51.100.3, 198.51.100.255
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;em&gt;&lt;code&gt;controlAddress&lt;/code&gt;&lt;/em&gt; and &lt;em&gt;&lt;code&gt;controlBroadcast&lt;/code&gt;&lt;/em&gt; are optional. If omitted:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;controlAddress&lt;/code&gt;&lt;/em&gt; defaults to &lt;em&gt;&lt;code&gt;newIPaddress&lt;/code&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;controlBroadcast&lt;/code&gt;&lt;/em&gt; defaults to the broadcast IP address of the &lt;em&gt;&lt;code&gt;newIPaddress&lt;/code&gt;&lt;/em&gt; host.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
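&lt;p&gt;For example, assuming a /24 netmask on the new network, the following line omits both optional fields, so the control address defaults to 198.51.100.1 and the control broadcast defaults to 198.51.100.255:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;192.0.2.1 198.51.100.1
&lt;/code&gt;&lt;/pre&gt;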
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop the database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run &lt;code&gt;re_ip&lt;/code&gt; to map old IP addresses to new IP addresses:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t re_ip -f &lt;span class=&#34;code-variable&#34;&gt;mapfile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;re_ip&lt;/code&gt; issues warnings for the following mapping file errors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Incorrectly formatted IP addresses.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Duplicate IP addresses, whether old or new.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If &lt;code&gt;re_ip&lt;/code&gt; finds no syntax errors, it performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Remaps the IP addresses as listed in the mapping file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the &lt;code&gt;-i&lt;/code&gt; option is omitted, prompts you to confirm updates to the database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Updates required local configuration files with the new IP addresses.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Distributes the updated configuration files to the hosts using new IP addresses.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Parsing mapfile...
New settings for Host 192.0.2.1 are:
address: 198.51.100.1
New settings for Host 192.0.2.2 are:
address: 198.51.100.2
New settings for Host 192.0.2.3 are:
address: 198.51.100.3

The following databases would be affected by this tool: VMart

Checking DB status ...
Enter &amp;#34;yes&amp;#34; to write new settings or &amp;#34;no&amp;#34; to exit &amp;gt; yes
Backing up local admintools.conf ...
Writing new settings to local admintools.conf ...

Writing new settings to the catalogs of database VMart ...
The change was applied to all nodes.
Success. Change committed on a quorum of nodes.

Initiating admintools.conf distribution ...
Success. Local admintools.conf sent to all hosts in the cluster.
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the database.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
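&lt;p&gt;After the database restarts, you can verify that the catalog reflects the new addresses by querying the system table NODES. The output below is illustrative:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, node_address FROM nodes;
    node_name     | node_address
------------------+--------------
 v_vmart_node0001 | 198.51.100.1
 v_vmart_node0002 | 198.51.100.2
 v_vmart_node0003 | 198.51.100.3
(3 rows)
&lt;/code&gt;&lt;/pre&gt;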
&lt;h3 id=&#34;re_ip-and-export-ip-address&#34;&gt;re_ip and export IP address&lt;/h3&gt;
&lt;p&gt;By default, a node&#39;s IP address and its export IP address are identical. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, node_address, export_address FROM nodes;
     node_name      |  node_address   | export_address
--------------------+-----------------+-----------------
 v_VMartDB_node0001 | 192.168.100.101 | 192.168.100.101
 v_VMartDB_node0002 | 192.168.100.102 | 192.168.100.102
 v_VMartDB_node0003 | 192.168.100.103 | 192.168.100.103
 v_VMartDB_node0004 | 192.168.100.104 | 192.168.100.104
(4 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The export address is the IP address of the node on the network. This address provides access to other DBMS systems, and enables you to import and export data across the network.&lt;/p&gt;
&lt;p&gt;If node IP and export IP addresses are the same, then running &lt;code&gt;re_ip&lt;/code&gt; changes both to the new address. Conversely, if you &lt;a href=&#34;https://vertica.com/kb/Configuring-Network-to-Import-and-Export-Data/Content/BestPractices/Configuring-Network-to-Import-and-Export-Data.htm&#34;&gt;manually change the export address&lt;/a&gt;, subsequent &lt;code&gt;re_ip&lt;/code&gt; operations leave your export address changes untouched.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;NewNodeControlAddresses&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;change-node-control-and-broadcast-addresses&#34;&gt;Change node control and broadcast addresses&lt;/h3&gt;
&lt;p&gt;You can map IP addresses for the database only by using the &lt;code&gt;re_ip&lt;/code&gt; option &lt;code&gt;-O&lt;/code&gt; (or &lt;code&gt;--db-only&lt;/code&gt;). Database-only operations are useful for error recovery. The node names and IP addresses specified in the mapping file must match the node information in &lt;code&gt;admintools.conf&lt;/code&gt;. In this case, &lt;code&gt;admintools.conf&lt;/code&gt; is not updated; the operation updates only &lt;code&gt;spread.conf&lt;/code&gt; and the database catalog.&lt;/p&gt;
&lt;p&gt;You can also use &lt;code&gt;re_ip&lt;/code&gt; to change the node control and broadcast addresses. In this case the mapping file must contain the control messaging IP address and associated broadcast address. This task allows nodes on the same host to have different data and control addresses.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a mapping file with lines in the following format:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;nodeName&lt;/span&gt; &lt;span class=&#34;code-variable&#34;&gt;nodeIPaddress&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;controlAddress&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;controlBroadcast&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition tip&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Tip&lt;/h4&gt;

Query the system table &lt;a href=&#34;../../../../en/sql-reference/system-tables/v-catalog-schema/nodes/#&#34;&gt;NODES&lt;/a&gt; for node names.

&lt;/div&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vertica_node001 192.0.2.1, 203.0.113.1, 203.0.113.255
vertica_node002 192.0.2.2, 203.0.113.2, 203.0.113.255
vertica_node003 192.0.2.3, 203.0.113.3, 203.0.113.255
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop the database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command to map the new IP addresses:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t re_ip -f &lt;span class=&#34;code-variable&#34;&gt;mapfile&lt;/span&gt; -O -d &lt;span class=&#34;code-variable&#34;&gt;dbname&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the database.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a name=&#34;ChangingNodeMessagingProtocol&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;changing-node-messaging-protocols&#34;&gt;Changing node messaging protocols&lt;/h2&gt;
&lt;p&gt;You can use &lt;code&gt;re_ip&lt;/code&gt; to reconfigure Spread messaging between nodes. &lt;code&gt;re_ip&lt;/code&gt; sets node messaging to broadcast or point-to-point (unicast) messaging with these options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-U&lt;/code&gt;, &lt;code&gt;--broadcast&lt;/code&gt; (default)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-T&lt;/code&gt;, &lt;code&gt;--point-to-point&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both options support up to 80 Spread daemons. You can exceed the 80-node limit by using &lt;a href=&#34;../../../../en/admin/managing-db/managing-nodes/large-cluster/&#34;&gt;large cluster&lt;/a&gt; mode, which does not install a Spread daemon on each node.&lt;/p&gt;
&lt;p&gt;For example, to set the database cluster messaging protocol to point-to-point:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t re_ip -d &lt;span class=&#34;code-variable&#34;&gt;dbname&lt;/span&gt; -T
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To set the messaging protocol to broadcast:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t re_ip -d &lt;span class=&#34;code-variable&#34;&gt;dbname&lt;/span&gt; -U
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;re_ip_Timeout&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;setting-re_ip-timeout&#34;&gt;Setting re_ip timeout&lt;/h2&gt;
&lt;p&gt;You can configure how long &lt;code&gt;re_ip&lt;/code&gt; runs a given task before it times out by editing the &lt;code&gt;prepare_timeout_sec&lt;/code&gt; setting in &lt;code&gt;admintools.conf&lt;/code&gt;. By default, this parameter is set to 7200 seconds (two hours).&lt;/p&gt;
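&lt;p&gt;For example, to check the current value on a node (illustrative; the key&#39;s location within &lt;code&gt;admintools.conf&lt;/code&gt; can vary):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ grep prepare_timeout_sec /opt/vertica/config/admintools.conf
prepare_timeout_sec = 7200
&lt;/code&gt;&lt;/pre&gt;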

      </description>
    </item>
    
    <item>
      <title>Admin: Adjusting Spread Daemon timeouts for virtual environments</title>
      <link>/en/admin/managing-db/managing-nodes/adjusting-spread-daemon-timeouts-virtual-environments/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-nodes/adjusting-spread-daemon-timeouts-virtual-environments/</guid>
      <description>
        
        
        &lt;p&gt;OpenText™ Analytics Database relies on &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/spread/&#34; title=&#34;An open source toolkit used in OpenText&amp;amp;trade; Analytics Database to provide a high performance messaging service that is resilient to network faults.&#34;&gt;Spread&lt;/a&gt; daemons to pass messages between database nodes. Occasionally, nodes fail to respond to messages within the specified Spread timeout. These failures might be caused by spikes in network latency or brief pauses in the node&#39;s VM—for example, scheduled &lt;a href=&#34;#Azure-s&#34;&gt;Azure maintenance timeouts&lt;/a&gt;. In either case, the database assumes that the non-responsive nodes are down and starts to remove them, even though they might still be running. You can address this issue by &lt;a href=&#34;#Setting&#34;&gt;adjusting the Spread timeout&lt;/a&gt; as needed.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Setting&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;adjusting-spread-timeout&#34;&gt;Adjusting spread timeout&lt;/h2&gt;
&lt;p&gt;By default, the Spread timeout depends on the number of configured Spread segments:&lt;/p&gt;

&lt;table class=&#34;table table-bordered&#34;&gt;
&lt;tr&gt;
&lt;th&gt;Configured Spread segments&lt;/th&gt;
&lt;th&gt;Default timeout&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;8 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;gt; 1&lt;/td&gt;
&lt;td&gt;25 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
If you deploy your database cluster from the Azure Marketplace, the default Spread timeout is 35 seconds. If you manually create your cluster in Azure, the default Spread timeout is 8 or 25 seconds, depending on the number of Spread segments.
&lt;/div&gt;
&lt;p&gt;If the Spread timeout is likely to elapse before the network or database nodes can respond, increase the timeout to the maximum length of non-responsive time plus five seconds. For example, if Azure memory-preserving maintenance pauses node VMs for up to 30 seconds, set the Spread timeout to 35 seconds.&lt;/p&gt;
&lt;p&gt;If you are unsure how long network or node disruptions are likely to last, gradually increase the Spread timeout until nodes that are still UP no longer leave the database.&lt;/p&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
The database cannot react to a node going down or being shut down improperly until the timeout period elapses. Setting the Spread timeout too high can therefore delay query restarts when a node goes down.
&lt;/div&gt;
&lt;p&gt;To see the current setting of the Spread timeout, query system table 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/system-tables/v-monitor-schema/spread-state/#&#34;&gt;SPREAD_STATE&lt;/a&gt;&lt;/code&gt;. For example, the following query shows that the current timeout setting (&lt;code&gt;token_timeout&lt;/code&gt;) is set to 8000ms:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT * FROM V_MONITOR.SPREAD_STATE;
    node_name     | token_timeout
------------------+---------------
 v_vmart_node0003 |          8000
 v_vmart_node0001 |          8000
 v_vmart_node0002 |          8000
(3 rows)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To change the Spread timeout, call the meta-function &lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/db-functions/set-spread-option/#&#34;&gt;SET_SPREAD_OPTION&lt;/a&gt; and set the token timeout to a new value. The following example sets the timeout to 35000ms (35 seconds):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT SET_SPREAD_OPTION( &amp;#39;TokenTimeout&amp;#39;, &amp;#39;35000&amp;#39;);
NOTICE 9003:  Spread has been notified about the change
                   SET_SPREAD_OPTION
--------------------------------------------------------
 Spread option &amp;#39;TokenTimeout&amp;#39; has been set to &amp;#39;35000&amp;#39;.

(1 row)

=&amp;gt; SELECT * FROM V_MONITOR.SPREAD_STATE;
    node_name     | token_timeout
------------------+---------------
 v_vmart_node0001 |         35000
 v_vmart_node0002 |         35000
 v_vmart_node0003 |         35000
(3 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Changing Spread settings with SET_SPREAD_OPTION has a minor impact on your cluster: the cluster pauses briefly while the new settings propagate. Because of this delay, changes to the Spread timeout are not immediately visible in system table &lt;code&gt;SPREAD_STATE&lt;/code&gt;.

&lt;/div&gt;

&lt;p&gt;&lt;a name=&#34;Azure-s&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;azure-maintenance-and-spread-timeouts&#34;&gt;Azure maintenance and spread timeouts&lt;/h2&gt;
&lt;p&gt;Azure &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/virtual-machines/linux/maintenance-and-updates&#34;&gt;scheduled maintenance on virtual machines&lt;/a&gt; might pause nodes longer than the Spread timeout period. If so, OpenText™ Analytics Database is liable to view nodes that do not respond to Spread messages as down and remove them from the database.&lt;/p&gt;
&lt;p&gt;The length of Azure maintenance tasks is usually well-defined. For example, memory-preserving updates can pause a VM for up to 30 seconds while performing maintenance on the system hosting the VM. This pause does not disrupt the node, which resumes normal operation after maintenance is complete. To prevent the database from removing nodes while they undergo Azure maintenance, &lt;a href=&#34;#Setting&#34;&gt;adjust the Spread timeout&lt;/a&gt; as needed.&lt;/p&gt;
&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;



&lt;ul&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../../en/setup/set-up-on-cloud/on-azure/manually-deploy-on-azure/configure-and-launch-new-instance/&#34;&gt;Configure and launch a new instance&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../../en/setup/set-up-on-cloud/on-azure/manually-deploy-on-azure/configure-storage/&#34;&gt;Configure storage&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../../en/setup/set-up-on-cloud/on-azure/manually-deploy-on-azure/connect-to-virtual-machine/&#34;&gt;Connect to a virtual machine&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../../en/setup/set-up-on-cloud/on-azure/deploy-from-azure-marketplace/&#34;&gt;Deploy Vertica from the Azure Marketplace&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../../en/setup/set-up-on-cloud/on-azure/eon-on-azure-prerequisites/&#34;&gt;Eon Mode on Azure prerequisites&lt;/a&gt;&lt;/li&gt;
	
&lt;/ul&gt;



      </description>
    </item>
    
  </channel>
</rss>
