<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Revive an Eon DB</title>
    <link>/en/eon/revive-eon-db/</link>
    <description>Recent content in Revive an Eon DB on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/eon/revive-eon-db/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Eon: In-DB restore points</title>
      <link>/en/eon/revive-eon-db/in-db-restore-points/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/revive-eon-db/in-db-restore-points/</guid>
      <description>
        
        
        &lt;p&gt;In-database restore points are lightweight, copy-free backups for Eon Mode databases that enable you to roll back a database to the state at which the restore point was saved. Unlike &lt;code&gt;vbr&lt;/code&gt;-based backups, restore points are stored in-database and do not require additional data copies to be stored externally, which results in fast restore point creation and low storage overhead.&lt;/p&gt;
&lt;p&gt;Restore points are useful for cases such as the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If a user error compromises data integrity, such as unintentionally dropping a table or inserting the wrong data into a table, you can restore the database to a state before the problem transactions.&lt;/li&gt;
&lt;li&gt;In the case of a failed upgrade, you can restore the database to its pre-upgrade state.&lt;/li&gt;
&lt;li&gt;If you want to create a number of backups, restore points avoid the overhead of storing copies of the database externally.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before saving restore points, you must first create an archive to store them. You can then save restore points to the archive and select a restore point when reviving the database.&lt;/p&gt;
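&lt;p&gt;As a minimal sketch of that workflow, assuming the &lt;code&gt;CREATE ARCHIVE&lt;/code&gt; and &lt;code&gt;SAVE RESTORE POINT TO ARCHIVE&lt;/code&gt; statements available in recent versions (the archive name here is illustrative; confirm the exact syntax in the SQL reference for your version):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;-- Create an archive to hold restore points
=&amp;gt; CREATE ARCHIVE nightly;

-- Save a lightweight, copy-free restore point to the archive
=&amp;gt; SAVE RESTORE POINT TO ARCHIVE nightly;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When reviving, you can then point the revival at one of the restore points saved in this archive.&lt;/p&gt;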

      </description>
    </item>
    
    <item>
      <title>Eon: Revive with communal storage</title>
      <link>/en/eon/revive-eon-db/reviving-an-eon-db-cluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/eon/revive-eon-db/reviving-an-eon-db-cluster/</guid>
      <description>
        
        
        &lt;p&gt;If you have terminated your Eon Mode database&#39;s cluster, but have not deleted the database&#39;s communal storage, you can use it to revive your database. Reviving the database using communal storage restores it to its pre-shutdown state. The revival process requires creating a new database cluster and configuring it to use the database&#39;s communal storage location. See &lt;a href=&#34;../../../en/architecture/eon-concepts/stopping-starting-terminating-and-reviving-eon-db-clusters/#&#34;&gt;Stopping, starting, terminating, and reviving Eon Mode database clusters&lt;/a&gt; for more information.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You cannot revive sandboxed subclusters. If you call the &lt;code&gt;revive_db&lt;/code&gt; admintools command on a cluster with both sandboxed and unsandboxed subclusters, the nodes in the unsandboxed subclusters start as expected, but the nodes in the sandboxed subclusters remain down. Attempting to revive only a sandboxed subcluster returns an error.

&lt;/div&gt;

&lt;p&gt;You can also use the revive process to restart a database when its nodes do not have persistent local storage. To reduce cost, you may choose to configure the instances in your cloud-based Eon Mode cluster with non-persistent local storage. Cloud providers such as AWS and GCP charge less for instances that are not required to retain data when you shut them down.&lt;/p&gt;
&lt;p&gt;You revive a database using either the Management Console or admintools. The MC and admintools offer different revival methods:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The MC always revives onto a newly provisioned cluster that it creates itself. It cannot revive onto an existing cluster. Use the MC to revive a database when you do not have a cluster already provisioned for your database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;admintools only revives onto an existing database cluster. You can manually create a cluster to revive your database. See &lt;a href=&#34;../../../en/setup/set-up-on-premises/#&#34;&gt;Set up Vertica on-premises&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can also revive a database whose hosts use instance storage where data is not persistently stored between shutdowns. In this case, admintools treats the existing database cluster as a new cluster, because the hosts do not contain the database&#39;s catalog data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Currently, only admintools lets you revive just the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subclusters&lt;/a&gt; in a database cluster. This option is useful if you want to revive the minimum number of nodes necessary to start your database. See &lt;a href=&#34;#Reviving&#34;&gt;Reviving only primary subclusters&lt;/a&gt; below.&lt;/p&gt;
&lt;p&gt;The MC always revives the entire database cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

You cannot revive a database from a communal storage location that is currently running on another cluster. The revive process fails if it detects that there is a cluster already running the database. Having two instances of a database running on separate clusters using the same communal storage location leads to data corruption.

&lt;/div&gt;
&lt;h2 id=&#34;reviving-using-the-management-console&#34;&gt;Reviving using the Management Console&lt;/h2&gt;
&lt;p&gt;You can use a wizard in the Management Console to provision a new cluster and revive a database onto it from a browser. For details, see:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href=&#34;../../../en/mc/cloud-platforms/aws-mc/reviving-an-eon-db-on-aws-mc/#&#34;&gt;Reviving an Eon Mode database on AWS in MC&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../en/mc/cloud-platforms/gcp-mc/reviving-an-eon-db-on-gcp-mc/#&#34;&gt;Reviving an Eon Mode database on GCP in MC&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;ReviveAdminTools&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;revive-using-admintools&#34;&gt;Revive using admintools&lt;/h2&gt;
&lt;p&gt;You can use admintools to revive your Eon Mode database on an existing cluster.&lt;/p&gt;
&lt;h3 id=&#34;cluster-requirements&#34;&gt;Cluster requirements&lt;/h3&gt;
&lt;p&gt;This existing cluster must:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Have the same or a later version of Vertica installed. You can repurpose an existing Vertica cluster whose database you have shut down. Another option is to create a cluster from scratch by manually installing Vertica (see &lt;a href=&#34;../../../en/setup/set-up-on-premises/#&#34;&gt;Set up Vertica on-premises&lt;/a&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Contain a number of hosts in the cluster that is equal to or greater than either:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The total number of nodes that the database cluster had when it shut down.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The total number of &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/primary-node/&#34; title=&#34;In Eon Mode, a primary node is a node that is a member of a primary subcluster.&#34;&gt;primary nodes&lt;/a&gt; the database cluster had when it shut down. When you supply a cluster that matches the number of primary nodes in the database, admintools revives just the primary nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When reviving, you supply admintools with a list of the hosts in the cluster to revive the database onto. The number of hosts in this list must match either the total number of nodes or the number of primary nodes in the database when it shut down. If the number of nodes you supply does not match either of these values, admintools returns an error.&lt;/p&gt;
&lt;p&gt;You do not need to use all of the hosts in the cluster to revive the database. You can revive a database onto a subset of the hosts in the cluster. However, you must have enough hosts to revive all of the primary nodes.&lt;/p&gt;
&lt;p&gt;For example, suppose you want to revive a database that had 16 nodes when it was shut down, with four of those nodes being primary nodes. In that case, you can revive:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Just the primary nodes onto a cluster that contains at least four nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;All of the 16 nodes onto a cluster that contains at least 16 nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You may choose to revive your database onto a cluster with more nodes than necessary if you want to quickly add new nodes later. You may also want to revive just the primary nodes in a database onto a larger cluster. In this case, you can use the extra nodes in the cluster to start one or more secondary subclusters.&lt;/p&gt;
&lt;h3 id=&#34;required-database-information&#34;&gt;Required database information&lt;/h3&gt;
&lt;p&gt;To revive the database, you must know:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The name of the database to revive (note that the database name is case sensitive)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The version of Vertica that created the database, so you can use the same or later version&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The total number of all nodes or the number of primary nodes in the database when it shut down&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The URL and credentials for the database&#39;s communal storage location&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The user name and password of the database administrator&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The IP addresses of all hosts in the cluster you want to revive onto&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you do not know what version of Vertica created the database or are unsure how many nodes it had, see &lt;a href=&#34;#Findings&#34;&gt;Getting database details from a communal storage location&lt;/a&gt; below.&lt;/p&gt;
&lt;h3 id=&#34;required-database-settings&#34;&gt;Required database settings&lt;/h3&gt;
&lt;p&gt;Before starting the revive process, verify the following conditions are true for your Eon Mode environment:&lt;/p&gt;

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Eon environment&lt;/th&gt; 

&lt;th &gt;
Revived database requirements&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
All&lt;/td&gt; 

&lt;td &gt;




&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The uppermost directories of the &lt;code&gt;catalog&lt;/code&gt;, &lt;code&gt;data&lt;/code&gt;, and &lt;code&gt;depot&lt;/code&gt; directories on all nodes exist and are owned by the dbadmin user&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The cluster has no other database running on it&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Azure&lt;/td&gt; 

&lt;td &gt;








&lt;p&gt;If your database does not use Azure managed identities to authenticate with the communal storage blob container, the following values must be set:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AzureStorageCredentials&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AzureStorageEndpointConfig&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/sql-reference/file-systems-and-object-stores/azure-blob-storage-object-store/#&#34;&gt;Azure Blob Storage object store&lt;/a&gt; for details.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;


S3: AWS, on-premises&lt;/td&gt; 

&lt;td &gt;
















&lt;p&gt;The following configuration parameters are set:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/s3-parameters/#AWSEndpoint&#34;&gt;AWSEndpoint&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/s3-parameters/#AWSRegion&#34;&gt;AWSRegion&lt;/a&gt; (AWS only)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/s3-parameters/#AWSAuth&#34;&gt;AWSAuth&lt;/a&gt; / &lt;a href=&#34;../../../en/setup/set-up-on-cloud/on-aws/aws-authentication/&#34;&gt;IAM role&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/s3-parameters/#AWSEnableHttps&#34;&gt;AWSEnableHttps&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
If migrating to an on-premises database, set the AWSEnableHttps configuration parameter to be compatible with the database TLS setup: AWSEnableHttps=1 if using TLS, otherwise 0. If the settings are incompatible, the migration fails with an error.
&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
GCP&lt;/td&gt; 

&lt;td &gt;








&lt;p&gt;The following configuration parameters are set:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/google-cloud-storage-parameters/#GCSAuth&#34;&gt;GCSAuth&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/google-cloud-storage-parameters/#GCSEnableHttps&#34;&gt;GCSEnableHttps&lt;/a&gt; (if not using the default value)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/config-parameters/google-cloud-storage-parameters/#GCSEndpoint&#34;&gt;GCSEndpoint&lt;/a&gt; (if not using the default value)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;p&gt;&lt;a name=&#34;Findings&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;getting-database-details-from-a-communal-storage-location&#34;&gt;Getting database details from a communal storage location&lt;/h3&gt;
&lt;p&gt;To revive a database, you must know:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The version of Vertica that created it (so you can use the same or a later version)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The total number of nodes (when reviving both primary and secondary nodes) or primary nodes (when just reviving the primary nodes) in the database&#39;s cluster when it shut down.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you do not know these details, you can determine them based on the contents of the communal storage location.&lt;/p&gt;
&lt;p&gt;If you are not sure which version of Vertica created the database stored in a communal storage location, examine the &lt;code&gt;cluster_config.json&lt;/code&gt; file. This file is stored in the communal storage location in the folder named &lt;code&gt;metadata/&lt;/code&gt;&lt;em&gt;&lt;code&gt;databasename&lt;/code&gt;&lt;/em&gt;. For example, suppose you have a database named mydb stored in the communal storage location &lt;code&gt;s3://mybucket/mydb&lt;/code&gt;. Then you can download and examine the file &lt;code&gt;s3://mybucket/mydb/metadata/mydb/cluster_config.json&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In the &lt;code&gt;cluster_config.json&lt;/code&gt;, the Vertica version that created the database is stored with the JSON key named DatabaseVersion near the top of the file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{
   &amp;#34;CatalogTruncationVersion&amp;#34; : 804,
   &amp;#34;ClusterLeaseExpiration&amp;#34; : &amp;#34;2020-12-21 21:52:31.005936&amp;#34;,
   &amp;#34;Database&amp;#34; : {
      &amp;#34;branch&amp;#34; : &amp;#34;&amp;#34;,
      &amp;#34;name&amp;#34; : &amp;#34;verticadb&amp;#34;
   },
   &lt;span class=&#34;code-input&#34;&gt;&#34;DatabaseVersion&#34; : &#34;v10.1.0&#34;,&lt;/span&gt;
   &amp;#34;GlobalSettings&amp;#34; : {
      &amp;#34;TupleMoverServices&amp;#34; : -33,
      &amp;#34;appliedUpgrades&amp;#34; : [
 . . .
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this example, you can revive the storage location using Vertica version 10.1.0 or later.&lt;/p&gt;
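&lt;p&gt;If your communal storage is on S3, one way to check the version without opening the file in an editor is to stream it through &lt;code&gt;grep&lt;/code&gt; with the AWS CLI (the bucket path here is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ aws s3 cp s3://mybucket/mydb/metadata/mydb/cluster_config.json - | grep DatabaseVersion
   &amp;#34;DatabaseVersion&amp;#34; : &amp;#34;v10.1.0&amp;#34;,
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Passing &lt;code&gt;-&lt;/code&gt; as the destination tells &lt;code&gt;aws s3 cp&lt;/code&gt; to write the file to standard output.&lt;/p&gt;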
&lt;p&gt;If you do not know how many nodes or primary nodes the cluster had when it shut down, use the &lt;code&gt;--display-only&lt;/code&gt; option of the admintools revive_db tool. Adding this option prevents admintools from reviving the database. Instead, it validates the files in the communal storage and reports details about the nodes that made up the database cluster. Parts of this report show the total number of nodes in the cluster and the number of primary nodes:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db --display-only --communal-storage-location \
             s3://mybucket/verticadb -d verticadb
Attempting to retrieve file: [s3://mybucket/verticadb/metadata/verticadb/cluster_config.json]

&lt;span class=&#34;code-input&#34;&gt;Validated 6-node database verticadb&lt;/span&gt; defined at communal storage s3://mybucket/verticadb.

Expected layout of database after reviving from communal storage: s3://mybucket/verticadb

== Communal location details: ==
{
 &amp;#34;communal_storage_url&amp;#34;: &amp;#34;s3://mybucket/verticadb&amp;#34;,
 &amp;#34;num_shards&amp;#34;: &amp;#34;3&amp;#34;,
 &amp;#34;depot_path&amp;#34;: &amp;#34;/vertica/data&amp;#34;,
   . . .
]

&lt;span class=&#34;code-input&#34;&gt;Number of primary nodes: 3
&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can use &lt;code&gt;grep&lt;/code&gt; to find just the relevant lines in the report:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db --display-only --communal-storage-location \
             s3://mybucket/verticadb -d verticadb | grep  &amp;#39;Validated\|primary nodes&amp;#39;
Validated 6-node database verticadb defined at communal storage s3://mybucket/verticadb.
Number of primary nodes: 3
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;creating-a-parameter-file&#34;&gt;Creating a parameter file&lt;/h3&gt;
&lt;p&gt;For Eon Mode deployments that are not on AWS, you must create a configuration file to pass the parameters listed in the table in the previous section to admintools. By convention, this file is named &lt;code&gt;auth_params.conf&lt;/code&gt;, although you can choose any file name.&lt;/p&gt;
&lt;p&gt;For on-premises Eon Mode databases, this parameter file is the same one you used when initially installing the database. See the following links for instructions on creating a parameter file for the communal storage solution you are using for your database:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/eon/create-db-eon/create-an-eon-db-on-premises-with-flashblade/#Step3&#34;&gt;Pure Storage FlashBlade&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/eon/create-db-eon/create-an-eon-db-on-premises-with-hdfs/#Step3&#34;&gt;HDFS&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/eon/create-db-eon/create-an-eon-db-on-premises-with-minio/#Step3&#34;&gt;MinIO&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
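&lt;p&gt;For example, for an on-premises S3-compatible object store such as MinIO, the parameter file typically contains the endpoint, credentials, and HTTPS setting. All values here are placeholders; see the links above for the exact parameters your storage platform requires:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;awsauth = &lt;span class=&#34;code-variable&#34;&gt;access_key&lt;/span&gt;:&lt;span class=&#34;code-variable&#34;&gt;secret_key&lt;/span&gt;
awsendpoint = mystore.example.com:9000
awsenablehttps = 0
&lt;/code&gt;&lt;/pre&gt;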
&lt;p&gt;For databases running on Microsoft Azure, the parameter file is only necessary if your database does not use managed identities. This file is the same format that you use to manually install an Eon Mode database. See &lt;a href=&#34;../../../en/eon/create-db-eon/manually-create-an-eon-db-on-azure/#&#34;&gt;Manually create an Eon Mode database on Azure&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;To revive an Eon Mode database on GCP manually, create a configuration file to hold the GCSAuth parameter and, optionally, the GCSEnableHttps parameter.&lt;/p&gt;
&lt;p&gt;You must supply the GCSAuth parameter to enable Vertica to read from the communal storage location stored in GCS. The value for this parameter is the HMAC access key and secret:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;GCSAuth = &lt;span class=&#34;code-variable&#34;&gt;HMAC_access_key&lt;/span&gt;:&lt;span class=&#34;code-variable&#34;&gt;HMAC_secret_key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See &lt;a href=&#34;../../../en/setup/set-up-on-cloud/on-gcp/deploy-from-google-cloud-marketplace/eon-on-gcp-prerequisites/#Creating&#34;&gt;Creating an HMAC Key&lt;/a&gt; for more information about HMAC keys.&lt;/p&gt;
&lt;p&gt;If your Eon Mode database does not use encryption when accessing communal storage on GCS, then disable HTTPS access by adding the following line to &lt;code&gt;auth_params.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;GCSEnableHttps = 0
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;running-the-revive_db-tool&#34;&gt;Running the revive_db tool&lt;/h3&gt;
&lt;p&gt;Use the admintools revive_db tool to revive the database:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use SSH to access a cluster host as an administrator.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Depending on your environment, run one of the following admintools commands:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AWS:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db \
 --communal-storage-location=s3://&lt;span class=&#34;code-variable&#34;&gt;communal_store_path&lt;/span&gt; \
 -s &lt;span class=&#34;code-variable&#34;&gt;host1,...&lt;/span&gt; -d &lt;span class=&#34;code-variable&#34;&gt;database_name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
&lt;p&gt;If you revive an on-premises Eon Mode database to AWS, check the &lt;code&gt;controlmode&lt;/code&gt; setting in &lt;code&gt;/opt/vertica/config/admintools.conf&lt;/code&gt;. This setting must be compatible with the network messaging requirements of your Eon implementation. AWS relies on unicast messaging, which is compatible with a &lt;code&gt;controlmode&lt;/code&gt; setting of &lt;code&gt;point-to-point&lt;/code&gt; (pt2pt). If the source database &lt;code&gt;controlmode&lt;/code&gt; setting was &lt;code&gt;broadcast&lt;/code&gt; and you migrate to S3/AWS communal storage, you must change &lt;code&gt;controlmode&lt;/code&gt; with admintools:&lt;/p&gt;
&lt;pre class=&#34;table-pre&#34;&gt;$ admintools -t re_ip -d &lt;span class=&#34;code-variable&#34;&gt;dbname&lt;/span&gt; -T&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On-premises and other environments:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db -x auth_params.conf \
  --communal-storage-location=&lt;span class=&#34;code-variable&#34;&gt;storage-schema&lt;/span&gt;://&lt;span class=&#34;code-variable&#34;&gt;communal_store_path&lt;/span&gt; \
  -s &lt;span class=&#34;code-variable&#34;&gt;host1_ip,...&lt;/span&gt; -d &lt;span class=&#34;code-variable&#34;&gt;database_name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This example revives a six-node on-premises database:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db -x auth_params.conf \
   --communal-storage-location=s3://mybucket/mydir \
   -s 172.16.116.27,172.16.116.28,172.16.116.29,172.16.116.30,\
   172.16.116.31,172.16.116.32 -d VMart
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following example demonstrates reviving a three-node database hosted on GCP:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db -x auth_params.conf \
--communal-storage-location gs://mybucket/verticadb \
-s 10.142.0.35,10.142.0.38,10.142.0.39 -d VerticaDB

Attempting to retrieve file:
   [gs://mybucket/verticadb/metadata/VerticaDB/cluster_config.json]
Validated 3-node database VerticaDB defined at communal storage
  gs://mybucket/verticadb .
Cluster lease has expired.
Preparation succeeded all hosts
Calculated necessary addresses for all nodes.
Starting to bootstrap nodes. Please wait, databases with a large
  catalog may take a while to initialize.
&amp;gt;&amp;gt;Calling bootstrap on node v_verticadb_node0002 (10.142.0.38)
&amp;gt;&amp;gt;Calling bootstrap on node v_verticadb_node0003 (10.142.0.39)
Load Remote Catalog succeeded on all hosts
Database revived successfully.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a name=&#34;Reviving&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;reviving-only-primary-subclusters&#34;&gt;Reviving only primary subclusters&lt;/h3&gt;
&lt;p&gt;You can revive just the primary subclusters in an Eon Mode database. Make the list of hosts you pass to the admintools revive_db tool&#39;s &lt;code&gt;--hosts&lt;/code&gt; (or &lt;code&gt;-s&lt;/code&gt;) argument match the number of primary nodes that were in the database when it shut down. For example, if you have a six-node Eon Mode database that had three primary nodes, you can revive just the primary nodes by supplying three hosts in the &lt;code&gt;--hosts&lt;/code&gt; argument:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t revive_db --communal-storage-location=s3://verticadb -d verticadb \
             -x auth_params.conf --hosts node01,node02,node03
Attempting to retrieve file: [s3://verticadb/metadata/verticadb/cluster_config.json]
Consider reviving to only primary nodes: communal storage indicates 6 nodes, while
  3 nodes were specified

Validated 3-node database verticadb defined at communal storage s3://verticadb.
Cluster lease has expired.
Preparation succeeded all hosts

Calculated necessary addresses for all nodes.
Starting to bootstrap nodes. Please wait, databases with a large catalog may take a
  while to initialize.
&amp;gt;&amp;gt;Calling bootstrap on node v_verticadb_node0002 (192.168.56.103)
&amp;gt;&amp;gt;Calling bootstrap on node v_verticadb_node0003 (192.168.56.104)
Load Remote Catalog succeeded on all hosts

Database revived successfully.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In a database where you have revived only the primary nodes, the secondary nodes are down. Their IP addresses are set to 0.0.0.0, so they do not participate in the database. For example, querying the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/nodes/&#34;&gt;NODES&lt;/a&gt; system table in the database revived in the previous example shows that the secondary nodes are all down:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name,node_state,node_address,subcluster_name FROM NODES;
      node_name       | node_state |  node_address  |  subcluster_name
----------------------+------------+----------------+--------------------
 v_verticadb_node0001 | UP         | 192.168.56.102 | default_subcluster
 v_verticadb_node0002 | UP         | 192.168.56.103 | default_subcluster
 v_verticadb_node0003 | UP         | 192.168.56.104 | default_subcluster
 v_verticadb_node0004 | DOWN       | 0.0.0.0        | analytics
 v_verticadb_node0005 | DOWN       | 0.0.0.0        | analytics
 v_verticadb_node0006 | DOWN       | 0.0.0.0        | analytics
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

&lt;p&gt;Secondary nodes that have not been revived may cause error messages if your database has the large cluster feature enabled. (See &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/large-cluster/#&#34;&gt;Large cluster&lt;/a&gt; for more information about the large cluster feature.)&lt;/p&gt;
&lt;p&gt;For example, adding a node to a secondary subcluster can fail if the new node would be assigned a control node that has not been revived. In this case, Vertica reports that adding the node failed because the control node has an invalid IP address.&lt;/p&gt;
&lt;p&gt;If you encounter errors involving control nodes with invalid IP addresses, consider reviving the unrevived secondary subcluster, as explained below.&lt;/p&gt;


&lt;/div&gt;
&lt;p&gt;Because Vertica considers these unrevived nodes to be down, it may not allow you to remove them or remove their subcluster while they are in their unrevived state. The best way to remove the nodes or the secondary subcluster is to revive them first.&lt;/p&gt;
&lt;h3 id=&#34;reviving-unrevived-secondary-subclusters&#34;&gt;Reviving unrevived secondary subclusters&lt;/h3&gt;
&lt;p&gt;If you revived just the primary subclusters in your database, you can later choose to revive some or all of the secondary subclusters. Your cluster must have hosts that are not nodes in the database, which Vertica can use to revive the unrevived nodes. If your cluster does not have enough of these non-node hosts, you can add more hosts. See &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/adding-nodes/adding-hosts-to-cluster/#&#34;&gt;Adding hosts to a cluster&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You revive a secondary subcluster with the admintools restart_subcluster tool. Supply its &lt;code&gt;--hosts&lt;/code&gt; argument with the list of hosts on which to revive the nodes. The number of hosts in this list must match the number of nodes in the subcluster, because you must revive all nodes in the subcluster at the same time. If you pass restart_subcluster a list with fewer or more hosts than the number of nodes defined in the subcluster, it returns an error.&lt;/p&gt;
&lt;p&gt;The following example demonstrates reviving the secondary subcluster named analytics shown in the previous examples:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t restart_subcluster -d verticadb --hosts node04,node05,node06 \
             -p &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt;&amp;#39; -c analytics
Updating hostnames of nodes in subcluster analytics.
    Replicating configuration to all nodes
    Generating new configuration information and reloading spread
Hostnames of nodes in subcluster analytics updated successfully.
*** Restarting subcluster for database verticadb ***
    Restarting host [192.168.56.105] with catalog [v_verticadb_node0004_catalog]
    Restarting host [192.168.56.106] with catalog [v_verticadb_node0005_catalog]
    Restarting host [192.168.56.107] with catalog [v_verticadb_node0006_catalog]
    Issuing multi-node restart
    Starting nodes:
        v_verticadb_node0004 (192.168.56.105)
        v_verticadb_node0005 (192.168.56.106)
        v_verticadb_node0006 (192.168.56.107)
    Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
    Node Status: v_verticadb_node0004: (DOWN) v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
    Node Status: v_verticadb_node0004: (DOWN) v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
    Node Status: v_verticadb_node0004: (DOWN) v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
    Node Status: v_verticadb_node0004: (INITIALIZING) v_verticadb_node0005: (INITIALIZING) v_verticadb_node0006: (INITIALIZING)
    Node Status: v_verticadb_node0004: (UP) v_verticadb_node0005: (UP) v_verticadb_node0006: (UP)
Syncing catalog on verticadb with 2000 attempts.
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;



&lt;ul&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../en/architecture/eon-concepts/stopping-starting-terminating-and-reviving-eon-db-clusters/&#34;&gt;Stopping, starting, terminating, and reviving Eon Mode database clusters&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../en/eon/terminating-an-eon-db-cluster/&#34;&gt;Terminating an Eon Mode database cluster&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../en/architecture/eon-concepts/eon-architecture/&#34;&gt;Eon Mode architecture&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../en/eon/managing-subclusters/adding-and-removing-nodes-from-subclusters/&#34;&gt;Adding and removing nodes from subclusters&lt;/a&gt;&lt;/li&gt;
	
	&lt;li&gt;&lt;a href=&#34;../../../en/eon/managing-subclusters/altering-subcluster-settings/&#34;&gt;Altering subcluster settings&lt;/a&gt;&lt;/li&gt;
	
&lt;/ul&gt;



      </description>
    </item>
    
  </channel>
</rss>
