<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Backing up and restoring the database</title>
    <link>/en/admin/backup-and-restore/</link>
    <description>Recent content in Backing up and restoring the database on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/admin/backup-and-restore/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Admin: Common use cases</title>
      <link>/en/admin/backup-and-restore/common-use-cases/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/common-use-cases/</guid>
      <description>
        
        
        &lt;p&gt;You can use &lt;code&gt;vbr&lt;/code&gt; to perform many tasks related to backup and restore. The &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/#&#34;&gt;vbr reference&lt;/a&gt; describes all of the tasks in detail. This section summarizes common use cases. For each of these cases, there are additional requirements not covered here. Be sure to read the linked topics for details.&lt;/p&gt;
&lt;p&gt;This is not a complete list of backup and restore capabilities.&lt;/p&gt;
&lt;h2 id=&#34;routine-backups-in-enterprise-mode&#34;&gt;Routine backups in Enterprise Mode&lt;/h2&gt;
&lt;p&gt;A full backup stores a copy of your data in another location—ideally a location that is separated from your database location, such as on different hardware or in the cloud. You give the backup a name (the snapshot name), which allows you to have different backups and backup types without interference. In your configuration file, you can map database nodes to backup locations and set some other parameters.&lt;/p&gt;
&lt;p&gt;Before your first backup, run the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr init task&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr backup task&lt;/a&gt; to perform a full backup. The &lt;a href=&#34;../../../en/admin/backup-and-restore/sample-vbr-config-files/external-full-backuprestore/#&#34;&gt;External full backup/restore&lt;/a&gt; example provides a starting point for your configuration. For complete documentation of full backups, see &lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/creating-full-backups/#&#34;&gt;Creating full backups&lt;/a&gt;.&lt;/p&gt;
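&lt;p&gt;As a minimal sketch, a configuration file for a routine full backup maps each node to a backup host and directory; the snapshot name, database name, host, and paths below are placeholders to adapt:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Misc]
snapshotName = nightly_backup
restorePointLimit = 5

[Database]
dbName = vmart
dbUser = dbadmin

[Mapping]
v_vmart_node0001 = backup_host01:/home/dbadmin/backups
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With such a file in place, the backup itself is a single command: &lt;code&gt;vbr --task backup --config-file nightly_backup.ini&lt;/code&gt;.&lt;/p&gt;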
&lt;h2 id=&#34;routine-backups-in-eon-mode&#34;&gt;Routine backups in Eon Mode&lt;/h2&gt;
&lt;p&gt;For the most part, backups in Eon Mode work the same way as backups in Enterprise Mode. Eon Mode has some additional requirements described in &lt;a href=&#34;../../../en/admin/backup-and-restore/eon-db-requirements/#&#34;&gt;Eon Mode database requirements&lt;/a&gt;, and some configuration parameters are different for backups to cloud storage. You can back up or restore Eon Mode databases that run in the cloud or on-premises using a &lt;a href=&#34;../../../en/admin/backup-and-restore/&#34;&gt;supported cloud storage&lt;/a&gt; location.&lt;/p&gt;
&lt;p&gt;Use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr backup task&lt;/a&gt; to perform a full backup. The &lt;a href=&#34;../../../en/admin/backup-and-restore/sample-vbr-config-files/backuprestore-to-cloud-storage/#&#34;&gt;Backup/restore to cloud storage&lt;/a&gt; example provides a starting point for your configuration. For complete documentation of full backups, see &lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/creating-full-backups/#&#34;&gt;Creating full backups&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;checkpoint-backups-backing-up-before-a-major-operation&#34;&gt;Checkpoint backups: backing up before a major operation&lt;/h2&gt;
&lt;p&gt;It is a good idea to back up your database before performing destructive operations such as dropping tables, or before major operations such as upgrading OpenText™ Analytics Database to a new version.&lt;/p&gt;
&lt;p&gt;You can perform a regular full backup for this purpose, but a faster way is to create a hard-link local backup. This kind of backup copies your catalog and links your data files to another location on the local file system on each node. (You can also do a hard-link backup of specific objects rather than the whole database.) A hard-link local backup does not provide the same protection as a backup stored externally. For example, it does not protect you from local system failures. However, for a backup that you expect to need only temporarily, a hard-link local backup is an expedient option. Do not use hard-link local backups as substitutes for regular backups to other nodes.&lt;/p&gt;
&lt;p&gt;Hard-link backups use the same &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr backup task&lt;/a&gt; as other backups, but with a different configuration. The &lt;a href=&#34;../../../en/admin/backup-and-restore/sample-vbr-config-files/full-hard-link-backuprestore/#&#34;&gt;Full hard-link backup/restore&lt;/a&gt; example provides a starting point for your configuration. See &lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/creating-hard-link-local-backups/#&#34;&gt;Creating hard-link local backups&lt;/a&gt; for more information.&lt;/p&gt;
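&lt;p&gt;A hard-link local configuration looks much like a regular one, except that it enables &lt;code&gt;hardLinkLocal&lt;/code&gt; and maps each node to a directory on that node&#39;s own file system. The names and paths below are placeholders:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Misc]
snapshotName = pre_upgrade_backup

[Database]
dbName = vmart
dbUser = dbadmin

[Transmission]
hardLinkLocal = True

[Mapping]
v_vmart_node0001 = []:/home/dbadmin/local_backups
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The backup directory must be on the same Linux file system as the database data for hard links to work; see the linked topic for the exact requirements.&lt;/p&gt;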
&lt;h2 id=&#34;restoring-selected-objects&#34;&gt;Restoring selected objects&lt;/h2&gt;
&lt;p&gt;Sometimes you need to restore specific objects, such as a table you dropped, rather than the entire database. You can restore individual tables or schemas from any backup that contains them, whether a full backup or an object backup.&lt;/p&gt;
&lt;p&gt;Use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr restore task&lt;/a&gt; and the &lt;code&gt;--restore-objects&lt;/code&gt; parameter to specify what to restore. Usually you use the same configuration file that you used to create the backup. See &lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/restoring-individual-objects/#&#34;&gt;Restoring individual objects&lt;/a&gt; for more information.&lt;/p&gt;
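&lt;p&gt;For example, assuming the backup was created with a configuration file named &lt;code&gt;nightly_backup.ini&lt;/code&gt; (a placeholder), a command of this form restores a single dropped table:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task restore --config-file nightly_backup.ini --restore-objects store.orders
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;--restore-objects&lt;/code&gt; takes a comma-separated list, so you can restore several tables or schemas in one operation.&lt;/p&gt;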
&lt;h2 id=&#34;restoring-an-entire-database&#34;&gt;Restoring an entire database&lt;/h2&gt;
&lt;p&gt;You can restore both Enterprise Mode and Eon Mode databases from complete backups. You cannot use restore to change the mode of your database. In Eon Mode, you can restore to the primary subcluster without regard to secondary subclusters.&lt;/p&gt;
&lt;p&gt;Use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr restore task&lt;/a&gt; to restore a database. As when restoring selected objects, you usually use the same configuration file that you used to create the backup. See &lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/restoring-db-from-full-backup/#&#34;&gt;Restoring a database from a full backup&lt;/a&gt; and &lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/restoring-hard-link-local-backups/#&#34;&gt;Restoring hard-link local backups&lt;/a&gt; for more information.&lt;/p&gt;
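&lt;p&gt;For example, with the target database stopped and the same configuration file that was used to create the backup (the file name is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task restore --config-file nightly_backup.ini
&lt;/code&gt;&lt;/pre&gt;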
&lt;h2 id=&#34;copying-a-cluster&#34;&gt;Copying a cluster&lt;/h2&gt;
&lt;p&gt;You might need to copy a database to another cluster of computers, such as when you are promoting a database from a staging environment to production. Copying a database to another cluster is essentially a simultaneous backup and restore operation. The data is backed up from the source database cluster and restored to the destination cluster in a single operation.&lt;/p&gt;
&lt;p&gt;Use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr copycluster task&lt;/a&gt; to copy a cluster. The &lt;a href=&#34;../../../en/admin/backup-and-restore/sample-vbr-config-files/db-copy-to-an-alternate-cluster/#&#34;&gt;Database copy to an alternate cluster&lt;/a&gt; example provides a starting point for your configuration. See &lt;a href=&#34;../../../en/admin/backup-and-restore/copying-db-to-another-cluster/#&#34;&gt;Copying the database to another cluster&lt;/a&gt; for more information.&lt;/p&gt;
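&lt;p&gt;For example, after preparing a configuration file that maps each source node to a destination node (the file name here is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task copycluster --config-file copy_cluster.ini
&lt;/code&gt;&lt;/pre&gt;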
&lt;h2 id=&#34;replicating-selected-objects-to-another-database&#34;&gt;Replicating selected objects to another database&lt;/h2&gt;
&lt;p&gt;You might want to replicate specific tables or schemas from one database to another. For example, you might copy data from a production database to a test database to investigate a problem in isolation. Similarly, after you complete a large data load in one database, replicating the data to another database might be more efficient than repeating the load operation there.&lt;/p&gt;
&lt;p&gt;Use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr replicate task&lt;/a&gt; to replicate objects. You specify the objects to replicate in the configuration file. The &lt;a href=&#34;../../../en/admin/backup-and-restore/sample-vbr-config-files/object-replication-to-an-alternate-db/#&#34;&gt;Object replication to an alternate database&lt;/a&gt; example provides a starting point for your configuration. See &lt;a href=&#34;../../../en/admin/backup-and-restore/replicating-objects-to-another-db-cluster/#&#34;&gt;Replicating objects to another database cluster&lt;/a&gt; for more information.&lt;/p&gt;
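&lt;p&gt;For example, once the objects to replicate are listed in the configuration file (the file name is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task replicate --config-file replicate.ini
&lt;/code&gt;&lt;/pre&gt;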

      </description>
    </item>
    
    <item>
      <title>Admin: Sample vbr configuration files</title>
      <link>/en/admin/backup-and-restore/sample-vbr-config-files/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/sample-vbr-config-files/</guid>
      <description>
        
        
&lt;p&gt;The &lt;code&gt;vbr&lt;/code&gt; utility uses configuration files to provide the information it needs to create and restore full or object-level backups, or to copy a cluster. No default configuration file exists; you must always specify a configuration file with the &lt;code&gt;vbr&lt;/code&gt; command.&lt;/p&gt;
&lt;p&gt;OpenText™ Analytics Database includes sample configuration files that you can copy, edit, and deploy for various vbr tasks. These files are automatically installed at:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/opt/vertica/share/vbr/example_configs&lt;/code&gt;&lt;/p&gt;
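&lt;p&gt;A typical workflow is to copy a sample into your own file and then edit it; the sample file name below is illustrative and may differ in your installation:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cp /opt/vertica/share/vbr/example_configs/backup_restore_full_external.ini ~/nightly_backup.ini
&lt;/code&gt;&lt;/pre&gt;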

      </description>
    </item>
    
    <item>
      <title>Admin: Eon Mode database requirements</title>
      <link>/en/admin/backup-and-restore/eon-db-requirements/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/eon-db-requirements/</guid>
      <description>
        
        
        &lt;p&gt;Eon Mode databases perform the same backup and restore operations as Enterprise Mode databases. Additional requirements pertain to Eon Mode because it uses a different architecture.&lt;/p&gt;
&lt;p&gt;Eon Mode databases also support saving &lt;a href=&#34;../../../en/eon/revive-eon-db/in-db-restore-points/&#34;&gt;in-db restore points&lt;/a&gt;, which are copy-free backups that enable you to roll back a database to a previous state. Unlike &lt;code&gt;vbr&lt;/code&gt;-based backups, restore points are stored in-database and do not require additional data copies to be stored externally. However, because restore points are in-database, they are lost if the database&#39;s communal storage is compromised. For more information about restore points, see &lt;a href=&#34;../../../en/eon/revive-eon-db/#&#34;&gt;Revive an Eon DB&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

These requirements apply to the cloud storage locations listed in &lt;a href=&#34;../../../en/admin/backup-and-restore/#&#34;&gt;Backing up and restoring the database&lt;/a&gt;, and to on-premises databases with communal storage on HDFS.

&lt;/div&gt;
&lt;h2 id=&#34;cloud-storage-requirements&#34;&gt;Cloud storage requirements&lt;/h2&gt;
&lt;p&gt;Eon Mode databases must be backed up to supported cloud storage locations. The following &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/cloudstorage/#&#34;&gt;[CloudStorage]&lt;/a&gt; configuration parameters must be set:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;cloud_storage_backup_path&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;cloud_storage_backup_file_system_path&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
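&lt;p&gt;A sketch of the corresponding configuration file section; the bucket name, backup path, and local lock directory below are placeholders:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[CloudStorage]
cloud_storage_backup_path = s3://backup-bucket/database-backups/
cloud_storage_backup_file_system_path = []:/home/dbadmin/backup_locks_dir/
&lt;/code&gt;&lt;/pre&gt;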
&lt;p&gt;A backup path is valid for one database only. You cannot use the same path to store backups for multiple databases.&lt;/p&gt;
&lt;p&gt;Eon Mode databases that use S3-compatible on-premises cloud storage can back up to Amazon Web Services (AWS) S3.&lt;/p&gt;
&lt;h3 id=&#34;cloud-storage-access&#34;&gt;Cloud storage access&lt;/h3&gt;
&lt;p&gt;In addition to having access to the cloud storage bucket used for the database&#39;s communal storage, you must have access to the cloud storage backup location. Verify that the credential you use to access communal storage also has access to the backup location. For more information about configuring cloud storage access for OpenText™ Analytics Database, see &lt;a href=&#34;../../../en/admin/backup-and-restore/setting-up-backup-locations/configuring-cloud-storage-backups/#&#34;&gt;Configuring cloud storage backups&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

While an AWS backup location can be in a different region, backup and restore operations across different S3 regions are incompatible with virtual private cloud (VPC) endpoints.

&lt;/div&gt;
&lt;h2 id=&#34;eon-on-premises-and-private-cloud-storage&#34;&gt;Eon on-premises and private cloud storage&lt;/h2&gt;
&lt;p&gt;If an Eon database runs on-premises, then communal storage is not on AWS but on another storage platform that uses the S3 or GS protocol. This means there can be two endpoints and two sets of credentials, depending on where you back up. This additional information is stored in &lt;a href=&#34;../../../en/admin/backup-and-restore/setting-up-backup-locations/configuring-cloud-storage-backups/#Environm&#34;&gt;environment variables&lt;/a&gt;, and not in &lt;code&gt;vbr&lt;/code&gt; configuration parameters.&lt;/p&gt;
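&lt;p&gt;For example, before running &lt;code&gt;vbr&lt;/code&gt; you might export a separate endpoint and set of credentials for the backup location. The variable names and values below are illustrative; confirm the exact names in the linked topic:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ export VBR_BACKUP_STORAGE_ENDPOINT_URL=https://backup-storage.example.com
$ export VBR_BACKUP_STORAGE_ACCESS_KEY_ID=backup_access_key
$ export VBR_BACKUP_STORAGE_SECRET_ACCESS_KEY=backup_secret_key
&lt;/code&gt;&lt;/pre&gt;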
&lt;p&gt;Backups of Eon Mode on-premises databases do not support AWS IAM profiles.&lt;/p&gt;
&lt;h2 id=&#34;hdfs-on-premises-storage&#34;&gt;HDFS on-premises storage&lt;/h2&gt;
&lt;p&gt;To back up an Eon Mode database that uses HDFS on-premises storage, the communal storage and backup location must use the same HDFS credentials and domain. All &lt;code&gt;vbr&lt;/code&gt; operations are supported, except &lt;code&gt;copycluster&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;OpenText™ Analytics Database supports Kerberos authentication, High Availability Name Node, and wire encryption for &lt;code&gt;vbr&lt;/code&gt; operations. OpenText™ Analytics Database does not support at-rest encryption for Hadoop storage.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;../../../en/admin/backup-and-restore/setting-up-backup-locations/configuring-backups-to-and-from-hdfs/#&#34;&gt;Configuring backups to and from HDFS&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;database-restore-requirements&#34;&gt;Database restore requirements&lt;/h2&gt;
&lt;p&gt;When restoring a backup of an Eon Mode database, the target database must satisfy the following requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Share the same name as the source database.&lt;/li&gt;
&lt;li&gt;Have at least as many nodes as the primary subcluster(s) in the source database.&lt;/li&gt;
&lt;li&gt;Have the same node names as the nodes of the source database.&lt;/li&gt;
&lt;li&gt;Use the same catalog directory location as the source database.&lt;/li&gt;
&lt;li&gt;Use the same port numbers as the source database.&lt;/li&gt;
&lt;li&gt;For object-level restore, if you restore to an existing &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/#restore&#34;&gt;target namespace&lt;/a&gt;, the target namespace and the objects&#39; source namespace must have the same shard count, shard boundaries, and node subscriptions. For details, see &lt;a href=&#34;../../../en/admin/backup-and-restore/eon-db-requirements/#object-level-tasks-with-multiple-namespaces&#34;&gt;object-level tasks with multiple namespaces&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can restore a full or object backup that was taken from a database with primary and secondary subclusters to the primary subclusters in the target database. The target database can have only primary subclusters, or it can also have any number of secondary subclusters; the secondary subclusters do not need to match those of the backed-up database. The same is true for replicating a database: only the primary subclusters are required. The requirements are similar to those for &lt;a href=&#34;../../../en/eon/revive-eon-db/reviving-an-eon-db-cluster/#&#34;&gt;Revive with communal storage&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Use the &lt;code&gt;[Mapping]&lt;/code&gt; section in the configuration file to specify the mappings for the primary subcluster.&lt;/p&gt;
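&lt;p&gt;For example, a mapping for a three-node primary subcluster might look like the following; the node names, backup hosts, and paths are placeholders:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Mapping]
v_verticadb_node0001 = backup_host01:/home/dbadmin/backups
v_verticadb_node0002 = backup_host02:/home/dbadmin/backups
v_verticadb_node0003 = backup_host03:/home/dbadmin/backups
&lt;/code&gt;&lt;/pre&gt;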
&lt;h2 id=&#34;object-level-tasks-with-multiple-namespaces&#34;&gt;Object-level tasks with multiple namespaces&lt;/h2&gt;
&lt;p&gt;Eon Mode databases group schemas and tables into one or more &lt;a href=&#34;../../../en/architecture/eon-concepts/shards-and-subscriptions/&#34;&gt;namespaces&lt;/a&gt;. By default, Eon databases contain only one namespace, &lt;code&gt;default_namespace&lt;/code&gt;, which is created during database creation. Unless you have created additional namespaces, the &lt;code&gt;default_namespace&lt;/code&gt; contains all schemas and tables. If you do not specify the namespace of an object, &lt;code&gt;vbr&lt;/code&gt; assumes the object belongs to the &lt;code&gt;default_namespace&lt;/code&gt;. Full database &lt;code&gt;vbr&lt;/code&gt; tasks are unaffected by the number of namespaces.&lt;/p&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
For &lt;code&gt;vbr&lt;/code&gt; tasks, namespaces are prefixed with a period. For example, &lt;code&gt;.n.s.t&lt;/code&gt; refers to table &lt;code&gt;t&lt;/code&gt; in schema &lt;code&gt;s&lt;/code&gt; in namespace &lt;code&gt;n&lt;/code&gt;.
&lt;/div&gt;
&lt;p&gt;For object-level backups, you can specify the included objects in the &lt;code&gt;objects&lt;/code&gt; parameter of your &lt;code&gt;vbr&lt;/code&gt; configuration file. For example, to create an object-level backup of all objects in the &lt;code&gt;orders&lt;/code&gt; and &lt;code&gt;customers&lt;/code&gt; schemas in the &lt;code&gt;store_1&lt;/code&gt; namespace, add the following lines to your configuration file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;objects = .store_1.orders.*, .store_1.customers.*
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Alternatively, you can specify the included and excluded objects using the &lt;code&gt;includeObjects&lt;/code&gt; and &lt;code&gt;excludeObjects&lt;/code&gt; parameters. If you set these parameters, the &lt;code&gt;objects&lt;/code&gt; parameter must be empty.&lt;/p&gt;
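&lt;p&gt;For example, the following sketch backs up every object in the &lt;code&gt;store_1&lt;/code&gt; namespace except tables whose names end in &lt;code&gt;_staging&lt;/code&gt; (the namespace name and suffix are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;includeObjects = .store_1.*
excludeObjects = .store_1.*.*_staging
&lt;/code&gt;&lt;/pre&gt;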
&lt;p&gt;For object-level restore and replicate &lt;code&gt;vbr&lt;/code&gt; tasks, you can use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/#restore&#34;&gt;&lt;code&gt;--target-namespace&lt;/code&gt;&lt;/a&gt; argument to specify the namespace to which the objects are restored or replicated. &lt;code&gt;vbr&lt;/code&gt; behaves differently depending on whether the target namespace exists:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Exists: &lt;code&gt;vbr&lt;/code&gt; attempts to restore or replicate the objects to the existing namespace, which must have the same shard count, shard boundaries, and node subscriptions as the source namespace. If these conditions are not met, the &lt;code&gt;vbr&lt;/code&gt; task fails.&lt;/li&gt;
&lt;li&gt;Nonexistent: &lt;code&gt;vbr&lt;/code&gt; creates a namespace in the target database with the name specified in &lt;code&gt;--target-namespace&lt;/code&gt; and the shard count of the source namespace, and then replicates or restores the objects to that namespace.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If no target namespace is specified, &lt;code&gt;vbr&lt;/code&gt; attempts to restore or replicate objects to a namespace with the same name as the source namespace.&lt;/p&gt;
&lt;p&gt;You can specify how restore operations handle duplicate objects with the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/misc/#objectRestoreMode&#34;&gt;objectRestoreMode&lt;/a&gt; parameter in the &lt;code&gt;vbr&lt;/code&gt; configuration file.&lt;/p&gt;
&lt;p&gt;The following command restores the &lt;code&gt;store_1.orders&lt;/code&gt; schema of the source database to the &lt;code&gt;store_2&lt;/code&gt; namespace in the target database:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;$ vbr --task restore --config-file&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;db.ini --restore-objects&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;.store_1.orders.* --target-namespace&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;store_2
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;When restoring objects to a namespace with the same name as the source namespace, you can omit the &lt;code&gt;--target-namespace&lt;/code&gt; argument. For example, the following command restores the &lt;code&gt;store_1.orders&lt;/code&gt; schema to the &lt;code&gt;store_1&lt;/code&gt; namespace:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;$ vbr --task restore --config-file&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;db.ini --restore-objects&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;.store_1.orders.* 
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;restoring-a-database-with-multiple-communal-storage-locations&#34;&gt;Restoring a database with multiple communal storage locations&lt;/h2&gt;
&lt;p&gt;You can back up and restore Eon Mode databases that have multiple communal storage locations. Both object-level and full database restore operations are supported:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Full database restore: the result of the restore operation depends on whether you are restoring to the same communal storage locations from which you performed the backup:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Same communal storage locations: &lt;code&gt;vbr&lt;/code&gt; attempts to copy all data to the communal storage locations from which they were backed up. If a storage location has been dropped since the backup was taken, the restore operation attempts to reinstate the dropped location before restoring the data. If the dropped storage location cannot be reinstated, its associated data is copied to the main communal storage location.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Different communal storage location: all data is copied to the communal storage location specified in the &lt;code&gt;vbr&lt;/code&gt; configuration file. Regardless of how many communal storage locations existed before the restore, there will be only one communal storage location after the full restore.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Object restore: the location to which an object is restored depends on whether it has an existing &lt;a href=&#34;../../../en/admin/managing-storage-locations/creating-storage-policies/&#34;&gt;storage policy&lt;/a&gt; in the target database:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Storage policy: &lt;code&gt;vbr&lt;/code&gt; restores the object to the communal storage location specified by the object&#39;s highest priority storage policy, which is determined by the following hierarchy, listed from highest priority to lowest:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Table-level policy&lt;/li&gt;
&lt;li&gt;Schema-level policy&lt;/li&gt;
&lt;li&gt;Database-level policy&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When the communal storage location specified by the highest priority policy does not exist, &lt;code&gt;vbr&lt;/code&gt; attempts to execute the policy with the next highest priority. If none of the policies are valid, the object is restored to the main communal storage location.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No storage policy: the object is copied to the main communal storage location.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For details on creating and configuring storage policies for multiple communal storage locations, see &lt;a href=&#34;../../../en/eon/configuring-your-cluster-eon/#adding-communal-storage-locations&#34;&gt;Configuring your Vertica cluster for Eon Mode&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Requirements for backing up and restoring HDFS storage locations</title>
      <link>/en/admin/backup-and-restore/requirements-backing-up-and-restoring-hdfs-storage-locations/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/requirements-backing-up-and-restoring-hdfs-storage-locations/</guid>
      <description>
        
        
        &lt;p&gt;There are several considerations for backing up and restoring HDFS storage locations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The HDFS directory for the storage location must have snapshotting enabled. You can either directly configure this yourself or enable the database administrator’s Hadoop account to do it for you automatically. See &lt;a href=&#34;../../../en/hadoop-integration/using-hdfs-storage-locations/hadoop-config-backup-and-restore/#&#34;&gt;Hadoop configuration for backup and restore&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the Hadoop cluster uses Kerberos, OpenText™ Analytics Database nodes must have access to certain Hadoop configuration files. See &lt;a href=&#34;#Configur2&#34;&gt;Configuring Kerberos&lt;/a&gt; below.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To restore an HDFS storage location, your database cluster must be able to run the Hadoop &lt;code&gt;distcp&lt;/code&gt; command. See &lt;a href=&#34;#Configur&#34;&gt;Configuring distcp on a database cluster&lt;/a&gt; below.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HDFS storage locations do not support object-level backups. You must perform a full database backup to back up the data in your HDFS storage locations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data in an HDFS storage location is backed up to HDFS. This backup guards against accidental deletion or corruption of data. It does not prevent data loss in the case of a catastrophic failure of the entire Hadoop cluster. To prevent data loss, you must have a backup and disaster recovery plan for your Hadoop cluster.&lt;/p&gt;
&lt;p&gt;Data stored on the Linux native file system is still backed up to the location you specify in the backup configuration file. It and the data in HDFS storage locations are handled separately by the &lt;code&gt;vbr&lt;/code&gt; backup script.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;Configur2&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;configuring-kerberos&#34;&gt;Configuring Kerberos&lt;/h2&gt;
&lt;p&gt;If HDFS uses Kerberos, then to back up your HDFS storage locations you must take the following additional steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Grant Hadoop superuser privileges to the Kerberos principals for each database node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy Hadoop configuration files to your database nodes as explained in &lt;a href=&#34;../../../en/hadoop-integration/configuring-hdfs-access/#Accessin&#34;&gt;Accessing Hadoop Configuration Files&lt;/a&gt;. The database needs access to &lt;code&gt;core-site.xml&lt;/code&gt;, &lt;code&gt;hdfs-site.xml&lt;/code&gt;, and &lt;code&gt;yarn-site.xml&lt;/code&gt; for backup and restore. If your database nodes are co-located on HDFS nodes, these files are already present.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the HadoopConfDir parameter to the location of the directory containing these files. The value can be a colon-separated path if the files are in multiple directories. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE exampledb SET HadoopConfDir = &amp;#39;/etc/hadoop/conf:/etc/hadoop/test&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;All three configuration files must be present on this path on every database node.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If your database nodes are co-located on HDFS nodes and you are using Kerberos, you must also change some Hadoop configuration parameters. These changes are needed for restores from backups to work. In &lt;code&gt;yarn-site.xml&lt;/code&gt; on every database node, set the following parameters:&lt;/p&gt;
&lt;table class=&#34;table table-bordered&#34;&gt;
&lt;tr&gt;&lt;th&gt;Parameter&lt;/th&gt;&lt;th&gt;Value&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.resourcemanager.proxy-user-privileges.enabled&lt;/code&gt;&lt;/td&gt;&lt;td&gt;true&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.resourcemanager.proxyusers.*.groups&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.resourcemanager.proxyusers.*.hosts&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.resourcemanager.proxyusers.*.users&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.timeline-service.http-authentication.proxyusers.*.groups&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.timeline-service.http-authentication.proxyusers.*.hosts&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;yarn.timeline-service.http-authentication.proxyusers.*.users&lt;/code&gt;&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;No changes are needed on HDFS nodes that are not also database nodes.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Configur&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;configuring-distcp-on-a-database-cluster&#34;&gt;Configuring distcp on a database cluster&lt;/h2&gt;
&lt;p&gt;Your database cluster must be able to run the Hadoop &lt;code&gt;distcp&lt;/code&gt; command to restore a backup of an HDFS storage location. The easiest way to enable your cluster to run this command is to install several Hadoop packages on each node. These packages must be from the same distribution and version of Hadoop that is running on your Hadoop cluster.&lt;/p&gt;
&lt;p&gt;The steps you need to take depend on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The distribution and version of Hadoop running on the Hadoop cluster containing your HDFS storage location.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The distribution of Linux running on your database cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Installing the Hadoop packages necessary to run &lt;code&gt;distcp&lt;/code&gt; does not turn your database into a Hadoop cluster. This process installs just enough of the Hadoop support files on your cluster to run the &lt;code&gt;distcp&lt;/code&gt; command. There is no additional overhead placed on the database cluster, aside from a small amount of additional disk space consumed by the Hadoop support files.

&lt;/div&gt;
&lt;h3 id=&#34;configuration-overview&#34;&gt;Configuration overview&lt;/h3&gt;
&lt;p&gt;The steps for configuring your database cluster to restore backups of HDFS storage locations are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;If necessary, install and configure a Java runtime on the hosts in the database cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Find the location of your Hadoop distribution&#39;s package repository.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the Hadoop distribution&#39;s package repository to the Linux package manager on all hosts in your cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the necessary Hadoop packages on your database hosts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set two configuration parameters in your database related to Java and Hadoop.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Confirm that the Hadoop &lt;code&gt;distcp&lt;/code&gt; command runs on your database hosts.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The following sections describe these steps in greater detail.&lt;/p&gt;
&lt;h3 id=&#34;installing-a-java-runtime&#34;&gt;Installing a Java runtime&lt;/h3&gt;
&lt;p&gt;Your database cluster must have a Java Virtual Machine (JVM) installed to run the Hadoop &lt;code&gt;distcp&lt;/code&gt; command. It already has a JVM installed if you have configured it to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Execute user-defined extensions developed in Java. See &lt;a href=&#34;../../../en/extending/developing-udxs/#&#34;&gt;Developing user-defined extensions (UDxs)&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access Hadoop data using the HCatalog Connector. See &lt;a href=&#34;../../../en/hadoop-integration/using-hcatalog-connector/#&#34;&gt;Using the HCatalog Connector&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If your database has a JVM installed, verify that your Hadoop distribution supports it. See your Hadoop distribution&#39;s documentation to determine which JVMs it supports.&lt;/p&gt;
&lt;p&gt;If the JVM installed on your database cluster is not supported by your Hadoop distribution, you must uninstall it. Then install a JVM that is supported by both OpenText™ Analytics Database and your Hadoop distribution. See &lt;a href=&#34;../../../en/supported-platforms/sdks/&#34;&gt;SDKs&lt;/a&gt; for a list of the JVMs compatible with OpenText™ Analytics Database.&lt;/p&gt;
&lt;p&gt;If your database cluster does not have a JVM (or its existing JVM is incompatible with your Hadoop distribution), follow the instructions in &lt;a href=&#34;../../../en/hadoop-integration/using-hcatalog-connector/installing-java-runtime-on-your-cluster/#&#34;&gt;Installing the Java runtime on your database cluster&lt;/a&gt;.&lt;/p&gt;
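&lt;p&gt;To confirm that a usable JVM is present, you can check the Java executable and its version from the Bash shell on each host. The path shown is typical but may differ on your systems:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ which java
$ java -version
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Compare the reported version against the JVMs that your Hadoop distribution supports.&lt;/p&gt;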
&lt;h3 id=&#34;finding-your-hadoop-distributions-package-repository&#34;&gt;Finding your Hadoop distribution&#39;s package repository&lt;/h3&gt;
&lt;p&gt;Many Hadoop distributions have their own installation system, such as Cloudera Manager or Ambari. However, they also support manual installation using native Linux packages such as RPM and &lt;code&gt;.deb&lt;/code&gt; files. These package files are maintained in a repository. You can configure your database hosts to access this repository to download and install Hadoop packages.&lt;/p&gt;
&lt;p&gt;Consult your Hadoop distribution&#39;s documentation to find the location of its Linux package repository. This information is often located in the portion of the documentation covering manual installation techniques.&lt;/p&gt;
&lt;p&gt;Each Hadoop distribution maintains separate repositories for each of the major Linux package management systems. Find the specific repository for the Linux distribution running your database cluster. Be sure that the package repository that you select matches the version used by your Hadoop cluster.&lt;/p&gt;
&lt;h3 id=&#34;configuring-database-nodes-to-access-the-hadoop-distributions-package-repository&#34;&gt;Configuring database nodes to access the Hadoop Distribution’s package repository&lt;/h3&gt;
&lt;p&gt;Configure the nodes in your database cluster so they can access your Hadoop distribution&#39;s package repository. Your Hadoop distribution&#39;s documentation should explain how to add the repositories to your Linux platform. If the documentation does not explain how to add the repository to your packaging system, refer to your Linux distribution&#39;s documentation.&lt;/p&gt;
&lt;p&gt;The steps you need to take depend on the package management system your Linux platform uses. Usually, the process involves:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Downloading a configuration file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Adding the configuration file to the package management system&#39;s configuration directory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For Debian-based Linux distributions, adding the Hadoop repository encryption key to the root account keyring.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Updating the package management system&#39;s index to have it discover new packages.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must add the Hadoop repository to all hosts in your database cluster.&lt;/p&gt;
&lt;h3 id=&#34;installing-the-required-hadoop-packages&#34;&gt;Installing the required Hadoop packages&lt;/h3&gt;
&lt;p&gt;After configuring the repository, you are ready to install the Hadoop packages. The packages you need to install are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;hadoop&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;hadoop-hdfs&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;hadoop-client&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The names of the packages are usually the same across all Hadoop and Linux distributions. These packages often have additional dependencies. Always accept any additional packages that the Linux package manager asks to install.&lt;/p&gt;
&lt;p&gt;To install these packages, use the package manager command for your Linux distribution:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;On Red Hat and CentOS, the package manager command is &lt;code&gt;yum&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On Debian and Ubuntu, the package manager command is &lt;code&gt;apt-get&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On SUSE, the package manager command is &lt;code&gt;zypper&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Consult your Linux distribution&#39;s documentation for instructions on installing packages.&lt;/p&gt;
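&lt;p&gt;For example, assuming the repository is configured and the package names match those listed above, the installation command on each host typically looks like one of the following (run with root privileges):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Red Hat and CentOS
$ sudo yum install hadoop hadoop-hdfs hadoop-client

# Debian and Ubuntu
$ sudo apt-get install hadoop hadoop-hdfs hadoop-client

# SUSE
$ sudo zypper install hadoop hadoop-hdfs hadoop-client
&lt;/code&gt;&lt;/pre&gt;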
&lt;h3 id=&#34;setting-configuration-parameters&#34;&gt;Setting configuration parameters&lt;/h3&gt;
&lt;p&gt;You must set two &lt;a href=&#34;../../../en/sql-reference/config-parameters/hadoop-parameters/&#34;&gt;Hadoop configuration parameters&lt;/a&gt; to enable the database to restore HDFS data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;JavaBinaryForUDx is the path to the Java executable. You may have already set this value to use Java UDxs or the HCatalog Connector. You can find the path for the default Java executable from the Bash command shell using the command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ which java
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HadoopHome is the directory that contains &lt;code&gt;bin/hadoop&lt;/code&gt; (the &lt;code&gt;bin&lt;/code&gt; directory containing the Hadoop executable file). The default value, &lt;code&gt;/usr&lt;/code&gt;, is correct if your Hadoop executable is located at &lt;code&gt;/usr/bin/hadoop&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following example shows how to set and then review the values of these parameters:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT SET PARAMETER JavaBinaryForUDx = &amp;#39;/usr/bin/java&amp;#39;;
=&amp;gt; SELECT current_value FROM configuration_parameters WHERE parameter_name = &amp;#39;JavaBinaryForUDx&amp;#39;;
 current_value
---------------
 /usr/bin/java
(1 row)
=&amp;gt; ALTER DATABASE DEFAULT SET PARAMETER HadoopHome = &amp;#39;/usr&amp;#39;;
=&amp;gt; SELECT current_value FROM configuration_parameters WHERE parameter_name = &amp;#39;HadoopHome&amp;#39;;
 current_value
---------------
 /usr
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can also set the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;HadoopFSReadRetryTimeout and HadoopFSWriteRetryTimeout specify how long the database waits before treating an HDFS read or write operation as failed. The default value for each is 180 seconds. If you are confident that your file system fails more quickly, you can improve performance by lowering these values.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HadoopFSReplication specifies the number of replicas HDFS makes. By default, the Hadoop client chooses this; the database uses the same value for all nodes.&lt;/p&gt;

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

Do not change this setting unless directed otherwise by the customer support team.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HadoopFSBlockSizeBytes is the block size to write to HDFS; larger files are divided into blocks of this size. The default is 64MB.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
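&lt;p&gt;For example, to lower both retry timeouts to 90 seconds (an illustrative value, not a recommendation), use the same &lt;code&gt;ALTER DATABASE&lt;/code&gt; syntax shown earlier:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT SET PARAMETER HadoopFSReadRetryTimeout = 90;
=&amp;gt; ALTER DATABASE DEFAULT SET PARAMETER HadoopFSWriteRetryTimeout = 90;
&lt;/code&gt;&lt;/pre&gt;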
&lt;h3 id=&#34;confirming-that-distcp-runs&#34;&gt;Confirming that distcp runs&lt;/h3&gt;
&lt;p&gt;After the packages are installed on all hosts in your cluster, your database should be able to run the Hadoop &lt;code&gt;distcp&lt;/code&gt; command. To test it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log into any host in your cluster as the &lt;a class=&#34;glosslink&#34; href=&#34;../../../en/glossary/db-superuser/&#34; title=&#34;&#34;&gt;database superuser&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;At the Bash shell, enter the command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ hadoop distcp
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The command should print a message similar to the following:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;usage: distcp OPTIONS [source_path...] &amp;lt;target_path&amp;gt;
              OPTIONS
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth &amp;lt;arg&amp;gt;       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -f &amp;lt;arg&amp;gt;               List of files that need to be copied
 -filelimit &amp;lt;arg&amp;gt;       (Deprecated!) Limit number of files copied to &amp;lt;= n
 -i                     Ignore failures during copy
 -log &amp;lt;arg&amp;gt;             Folder on DFS where distcp execution logs are
                        saved
 -m &amp;lt;arg&amp;gt;               Max number of concurrent maps to use for copy
 -mapredSslConf &amp;lt;arg&amp;gt;   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist.
 -p &amp;lt;arg&amp;gt;               preserve status (rbugpc)(replication, block-size,
                        user, group, permission, checksum-type)
 -sizelimit &amp;lt;arg&amp;gt;       (Deprecated!) Limit number of files copied to &amp;lt;= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths.
 -strategy &amp;lt;arg&amp;gt;        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp &amp;lt;arg&amp;gt;             Intermediate work path to be used for atomic
                        commit
 -update                Update target, copying only missingfiles or
                        directories
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Repeat these steps on the other hosts in your database to verify that all of the hosts can run &lt;code&gt;distcp&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;troubleshooting&#34;&gt;Troubleshooting&lt;/h3&gt;
&lt;p&gt;If you cannot run the &lt;code&gt;distcp&lt;/code&gt; command, try the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If Bash cannot find the &lt;code&gt;hadoop&lt;/code&gt; command, you may need to manually add Hadoop&#39;s &lt;code&gt;bin&lt;/code&gt; directory to the system search path. Alternatively, create a symbolic link to the &lt;code&gt;hadoop&lt;/code&gt; binary in a directory that is already in the search path (such as &lt;code&gt;/usr/bin&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure the version of Java installed on your database cluster is compatible with your Hadoop distribution.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Review the Linux package installation tool&#39;s logs for errors. In some cases, packages may not be fully installed, or may not have been downloaded due to network issues.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure that the database administrator account has permission to execute the &lt;code&gt;hadoop&lt;/code&gt; command. You might need to add the account to a specific group in order to allow it to run the necessary commands.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
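&lt;p&gt;For example, assuming the Hadoop packages installed their files under &lt;code&gt;/usr/lib/hadoop&lt;/code&gt; (the actual directory depends on your distribution and packages), either of the following makes the &lt;code&gt;hadoop&lt;/code&gt; command visible to Bash:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Add the Hadoop bin directory to the search path
$ export PATH=$PATH:/usr/lib/hadoop/bin

# Or create a symbolic link in a directory already in the search path
$ sudo ln -s /usr/lib/hadoop/bin/hadoop /usr/bin/hadoop
&lt;/code&gt;&lt;/pre&gt;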

      </description>
    </item>
    
    <item>
      <title>Admin: Setting up backup locations</title>
      <link>/en/admin/backup-and-restore/setting-up-backup-locations/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/setting-up-backup-locations/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Inadequate security on backups can compromise overall database security. Be sure to secure backup locations and strictly limit access to backups only to users who already have permissions to access all database data.
&lt;/div&gt;

&lt;p&gt;Full and object-level backups reside on &lt;em&gt;backup hosts&lt;/em&gt;, the computer systems on which backups and archives are stored. On the backup hosts, OpenText™ Analytics Database saves backups in a specific &lt;em&gt;backup location&lt;/em&gt; (directory).&lt;/p&gt;
&lt;p&gt;You must set up your backup hosts before you can create backups.&lt;/p&gt;
&lt;p&gt;The file system at your backup locations must support &lt;code&gt;fcntl lockf&lt;/code&gt; (POSIX) file locking.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Creating backups</title>
      <link>/en/admin/backup-and-restore/creating-backups/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/creating-backups/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Inadequate security on backups can compromise overall database security. Be sure to secure backup locations and strictly limit access to backups only to users who already have permissions to access all database data.
&lt;/div&gt;

&lt;p&gt;You should perform full backups of your database regularly. You should also perform a full backup under the following circumstances:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Before…&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Upgrading OpenText™ Analytics Database to another release.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dropping a partition.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Adding, removing, or replacing nodes in the database cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;After…&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Loading a large volume of data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Adding, removing, or replacing nodes in the database cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Recovering a cluster from a crash.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;If…&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The epoch of the latest backup predates the current ancient history mark.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ideally, schedule regular, ongoing backups of your data. You can run &lt;code&gt;vbr&lt;/code&gt; from a &lt;code&gt;cron&lt;/code&gt; job or other task scheduler.&lt;/p&gt;
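&lt;p&gt;For example, a &lt;code&gt;crontab&lt;/code&gt; entry for the database administrator account could run a nightly backup at 2:00 AM. The &lt;code&gt;vbr&lt;/code&gt; path and configuration file name below are placeholders; substitute your own:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;0 2 * * * /opt/vertica/bin/vbr --task backup --config-file /home/dbadmin/full_backup.ini
&lt;/code&gt;&lt;/pre&gt;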
&lt;p&gt;You can also back up selected objects. Use object backups to supplement full backups, not to replace them. Backup types are described in &lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/types-of-backups/#&#34;&gt;Types of backups&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Running &lt;code&gt;vbr&lt;/code&gt; does not affect active database applications. &lt;code&gt;vbr&lt;/code&gt; supports creating backups while concurrently running applications that execute DML statements, including COPY, INSERT, UPDATE, DELETE, and SELECT.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;BackupLocationContents&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;backup-locations-and-contents&#34;&gt;Backup locations and contents&lt;/h2&gt;
&lt;p&gt;Full and object-level backups reside on &lt;em&gt;backup hosts&lt;/em&gt;, the computer systems on which backups and archives are stored.&lt;/p&gt;
&lt;p&gt;The database saves backups in a specific &lt;em&gt;backup location&lt;/em&gt;, a directory on a backup host. This location can contain multiple backups, both full and object-level, including associated archives. Backups that share a location are compatible, allowing you to restore any objects from a full database backup. Backup locations for Eon Mode databases must be on S3.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

OpenText does not recommend concurrent backups. If you must run multiple backups concurrently, use separate backup and temp directories for each. Having separate backup directories detracts from the advantage of sharing data among historical backups.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;Before beginning a backup, you must prepare your backup locations using the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr init task&lt;/a&gt;, as in the following example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t init -c full_backup.ini
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For more information about backup locations, see &lt;a href=&#34;../../../en/admin/backup-and-restore/setting-up-backup-locations/#&#34;&gt;Setting up backup locations&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Backups contain all committed data for the backed-up objects as of the start time of the backup. Backups do not contain uncommitted data or data committed during the backup. Backups do not delay mergeout or load activity.&lt;/p&gt;
&lt;h2 id=&#34;backing-up-hdfs-storage-locations&#34;&gt;Backing up HDFS storage locations&lt;/h2&gt;
&lt;p&gt;If your database cluster uses HDFS storage locations, you must do some additional configuration before you can perform backups. See &lt;a href=&#34;../../../en/admin/backup-and-restore/requirements-backing-up-and-restoring-hdfs-storage-locations/#&#34;&gt;Requirements for backing up and restoring HDFS storage locations&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;HDFS storage locations support only full backup and restore. You cannot perform object backup or restore on a cluster that uses HDFS storage locations.&lt;/p&gt;
&lt;h2 id=&#34;impact-of-backups-on-database-nodes&#34;&gt;Impact of backups on database nodes&lt;/h2&gt;
&lt;p&gt;While a backup is taking place, the backup process can consume additional storage. The amount of space consumed depends on the size of your catalog and any objects that you drop during the backup. The backup process releases this storage when the backup is complete.&lt;/p&gt;
&lt;h2 id=&#34;best-practices-for-creating-backups&#34;&gt;Best practices for creating backups&lt;/h2&gt;
&lt;p&gt;When creating backup configuration files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create separate configuration files for full and object-level backups.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use a unique snapshot name in each configuration file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the same backup host directory location for both kinds of backups:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Because the backups share disk space, they are compatible when performing a restore.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Each cluster node must also use the same directory location on its designated backup host.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For best network performance, use one backup host per cluster node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use one directory on each backup node to store successive backups.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For future reference, append the major OpenText™ Analytics Database version number to the configuration file name (&lt;code&gt;mybackup&lt;/code&gt;&lt;em&gt;9x&lt;/em&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The selected objects of a backup can include one or more schemas or tables, or a combination of both. For example, you can include schema &lt;code&gt;S1&lt;/code&gt; and tables &lt;code&gt;T1&lt;/code&gt; and &lt;code&gt;T2&lt;/code&gt; in an object-level backup. Multiple backups can be combined into a single backup. A schema-level backup can be integrated with a database backup (and a table backup integrated with a schema-level backup, and so on).&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Restoring backups</title>
      <link>/en/admin/backup-and-restore/restoring-backups/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/restoring-backups/</guid>
      <description>
        
        
        &lt;p&gt;You can use the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/&#34;&gt;vbr &lt;code&gt;restore&lt;/code&gt; task&lt;/a&gt; to restore your full database or selected objects from backups created by &lt;code&gt;vbr&lt;/code&gt;. Typically you use the same configuration file for both operations. The minimal restore command is:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task restore --config-file &lt;span class=&#34;code-variable&#34;&gt;config-file&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You must log in using the database administrator&#39;s account (not root).&lt;/p&gt;
&lt;p&gt;For full restores, the database must be DOWN. For object restores, the database must be UP.&lt;/p&gt;
&lt;p&gt;Usually you restore to the cluster that you backed up, but you can also restore to an alternate cluster if the original one is no longer available.&lt;/p&gt;
&lt;p&gt;Restoring must be done on the same architecture as the backup from which you are restoring. You cannot back up an Enterprise Mode database and restore it in Eon Mode or vice versa.&lt;/p&gt;
&lt;p&gt;You can perform restore tasks only on Permanent node types. You cannot restore data on Ephemeral, Execute, or Standby nodes. To restore or replicate to these nodes, you must first change the &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-node/&#34;&gt;destination node type&lt;/a&gt; to PERMANENT. For more information, refer to &lt;a href=&#34;../../../en/admin/managing-db/managing-nodes/setting-node-type/#&#34;&gt;Setting node type&lt;/a&gt;.&lt;/p&gt;
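&lt;p&gt;For example, to change a hypothetical standby node to PERMANENT before restoring to it (the node name is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER NODE v_vmart_node0004 IS PERMANENT;
&lt;/code&gt;&lt;/pre&gt;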
&lt;h2 id=&#34;restoring-objects-to-a-higher-version&#34;&gt;Restoring objects to a higher version&lt;/h2&gt;
&lt;p&gt;OpenText™ Analytics Database supports restoration to a database that is no more than one minor version higher than the current database version. For example, you can restore objects from a 12.0.x database to a 12.1.x database.&lt;/p&gt;
&lt;p&gt;If restored objects require a UDx library that is not present in the later-version database, the database displays the following error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;ERROR 2858:  Could not find function definition
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can resolve this issue by &lt;a href=&#34;../../../en/extending/udxs/updating-udx-libraries/udx-library-compatibility-with-new-server-versions/&#34;&gt;installing compatible libraries&lt;/a&gt; in the target database.&lt;/p&gt;

&lt;h2 id=&#34;restoring-hdfs-storage-locations&#34;&gt;Restoring HDFS storage locations&lt;/h2&gt;
&lt;p&gt;If your database cluster uses HDFS storage locations, you must do some additional configuration before you can restore. See &lt;a href=&#34;../../../en/admin/backup-and-restore/requirements-backing-up-and-restoring-hdfs-storage-locations/#&#34;&gt;Requirements for backing up and restoring HDFS storage locations&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;HDFS storage locations support only full backup and restore. You cannot perform object backup or restore on a cluster that uses HDFS storage locations.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Copying the database to another cluster</title>
      <link>/en/admin/backup-and-restore/copying-db-to-another-cluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/copying-db-to-another-cluster/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Inadequate security on backups can compromise overall database security. Be sure to secure backup locations and strictly limit access to backups only to users who already have permissions to access all database data.
&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;vbr&lt;/code&gt; task &lt;code&gt;copycluster&lt;/code&gt; combines two other &lt;code&gt;vbr&lt;/code&gt; tasks—
&lt;code&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/creating-full-backups/#&#34;&gt;backup&lt;/a&gt;&lt;/code&gt; and 
&lt;code&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/#&#34;&gt;restore&lt;/a&gt;&lt;/code&gt;—as a single operation, enabling you to back up an entire database from one Enterprise Mode cluster and then restore it on another. This can facilitate routine operations, such as copying a database between development and production environments.

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

&lt;code&gt;copycluster&lt;/code&gt; overwrites all existing data in the destination database. To preserve that data, back up the destination database before launching the &lt;code&gt;copycluster&lt;/code&gt; task.

&lt;/div&gt;&lt;/p&gt;
&lt;h2 id=&#34;restrictions&#34;&gt;Restrictions&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;copycluster&lt;/code&gt; is not supported for Eon Mode databases. It is also incompatible with HDFS storage locations; OpenText™ Analytics Database does not transfer data to a remote HDFS cluster as it does for a Linux cluster.&lt;/p&gt;
&lt;h2 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;copycluster&lt;/code&gt; requires that the target and source database clusters be identical in the following respects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Database hotfix version—for example, 12.0.1-1&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Number of nodes and node names, as shown in the system table NODES:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name FROM nodes;
  node_name
------------------
 v_vmart_node0001
 v_vmart_node0002
 v_vmart_node0003
(3 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Database name&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Database catalog, data, and temp directory paths as shown in the system table DISK_STORAGE:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name,storage_path,storage_usage FROM disk_storage;
    node_name     |                     storage_path                     | storage_usage
------------------+------------------------------------------------------+---------------
 v_vmart_node0001 | /home/dbadmin/VMart/v_vmart_node0001_catalog/Catalog | CATALOG
 v_vmart_node0001 | /home/dbadmin/VMart/v_vmart_node0001_data            | DATA,TEMP
 v_vmart_node0001 | /home/dbadmin/verticadb                              | DEPOT
 v_vmart_node0002 | /home/dbadmin/VMart/v_vmart_node0002_catalog/Catalog | CATALOG
...
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Directory paths for the catalog, data, and temp storage are the same on all nodes.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Database administrator accounts&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following requirements also apply:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The target cluster has adequate disk space for &lt;code&gt;copycluster&lt;/code&gt; to complete.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The source cluster&#39;s database administrator must be able to log in to all target cluster nodes through SSH without a password.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Passwordless access &lt;em&gt;within&lt;/em&gt; the cluster is not the same as passwordless access &lt;em&gt;between&lt;/em&gt; clusters. The SSH ID of the administrator account on the source cluster and the target cluster are likely not the same. You must configure each host in the target cluster to accept the SSH authentication of the source cluster.

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;copycluster-procedure&#34;&gt;Copycluster procedure&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a configuration file for the &lt;code&gt;copycluster&lt;/code&gt; operation. The OpenText™ Analytics Database installation includes a sample configuration file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/opt/vertica/share/vbr/example_configs/copycluster.ini
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For each node in the source database, create a &lt;code&gt;[Mapping]&lt;/code&gt; entry that specifies the host name of each destination database node. Unlike other &lt;code&gt;vbr&lt;/code&gt; tasks such as &lt;code&gt;restore&lt;/code&gt; and &lt;code&gt;backup&lt;/code&gt;, mappings for &lt;code&gt;copycluster&lt;/code&gt; only require the destination host name. &lt;code&gt;copycluster&lt;/code&gt; always stores backup data in the catalog and data directories of the destination database.&lt;/p&gt;
&lt;p&gt;The following example configures &lt;code&gt;vbr&lt;/code&gt; to copy the &lt;code&gt;vmart&lt;/code&gt; database from its three-node &lt;code&gt;v_vmart&lt;/code&gt; cluster to the &lt;code&gt;test-host&lt;/code&gt; cluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Misc]
snapshotName = CopyVmart
tempDir = /tmp/vbr

[Database]
dbName = vmart
dbUser = dbadmin
dbPassword = password
dbPromptForPassword = False

[Transmission]
encrypt = False
port_rsync = 50000

[Mapping]
; backupDir is not used for cluster copy
v_vmart_node0001= test-host01
v_vmart_node0002= test-host02
v_vmart_node0003= test-host03
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop the target cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As database administrator, invoke the &lt;code&gt;vbr&lt;/code&gt; task &lt;code&gt;copycluster&lt;/code&gt; from a source database node:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t copycluster -c copycluster.ini
Starting copy of database VMART.
Participating nodes: vmart_node0001, vmart_node0002, vmart_node0003, vmart_node0004.
Enter vertica password:
Snapshotting database.
Snapshot complete.
Determining what data to copy.
[==================================================] 100%
Approximate bytes to copy: 987394852 of 987394852 total.
Syncing data to destination cluster.
[==================================================] 100%
Reinitializing destination catalog.
Copycluster complete!
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
If the &lt;code&gt;copycluster&lt;/code&gt; task is interrupted, the destination cluster retains the data files that were already transferred. If you retry the operation, the database does not resend these files.
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Replicating objects to another database cluster</title>
      <link>/en/admin/backup-and-restore/replicating-objects-to-another-db-cluster/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/replicating-objects-to-another-db-cluster/</guid>
      <description>
        
        
        &lt;p&gt;The &lt;code&gt;vbr&lt;/code&gt; task &lt;code&gt;replicate&lt;/code&gt; supports replication of tables and schemas from one database cluster to another. You might consider replication for the following reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Copy tables and schemas between test, staging, and production clusters.&lt;/li&gt;
&lt;li&gt;Replicate certain objects immediately after an important change, such as a large table data load, instead of waiting until the next scheduled backup.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In both cases, replicating objects is generally more efficient than exporting and importing them. The first replication of an object replicates the entire object. Subsequent replications copy only data that has changed since the last replication. OpenText™ Analytics Database replicates data as of the current epoch on the target database. Used with a cron job, replication lets you maintain copies of key objects in a backup database.&lt;/p&gt;
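&lt;p&gt;For example, a crontab entry like the following (the schedule, paths, and configuration file name are placeholders) could replicate key objects nightly:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Run nightly at 02:00 as the database administrator.
# replicate_objects.ini is a replication configuration file you created.
0 2 * * * /opt/vertica/bin/vbr -t replicate -c /home/dbadmin/replicate_objects.ini
&lt;/code&gt;&lt;/pre&gt;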
&lt;h2 id=&#34;replicate-versus-copycluster&#34;&gt;Replicate versus copycluster&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;replicate&lt;/code&gt; only supports tables, schemas, and—in Eon Mode databases—namespaces. In situations where the target database is down, or you plan to replicate the entire database, OpenText recommends that you use the &lt;a href=&#34;../../../en/admin/backup-and-restore/copying-db-to-another-cluster/#&#34;&gt;copycluster&lt;/a&gt; task to copy the database to another cluster. Thereafter, you can use &lt;code&gt;replicate&lt;/code&gt; to update individual objects.&lt;/p&gt;
&lt;h2 id=&#34;replication-procedure&#34;&gt;Replication procedure&lt;/h2&gt;
&lt;p&gt;To replicate objects to another database, perform these actions from the source database:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#Verify_Replication_Requirements&#34;&gt;Verify replication requirements&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#EditVbrConfigurationFile&#34;&gt;Identify the objects to replicate and target database&lt;/a&gt; in the &lt;code&gt;vbr&lt;/code&gt; configuration file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#ReplicateObjects&#34;&gt;Replicate objects&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a name=&#34;Verify_Replication_Requirements&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;verify-replication-requirements&#34;&gt;Verify replication requirements&lt;/h3&gt;
&lt;p&gt;The following requirements apply to the source and target databases and their respective clusters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;All nodes in both databases are UP. If any nodes are DOWN, they are handled as described &lt;a href=&#34;#HandlingDownNodes&#34;&gt;below&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Versions of the two databases must be compatible. OpenText™ Analytics Database supports object replication to a target database up to one minor version higher than the current database version. For example, you can replicate objects from a 12.0.x database to a 12.1.x database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The same Linux user is associated with the dbadmin account of both databases.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The source cluster database administrator can log on to all target nodes through SSH without a password.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The SSH IDs of the administrator accounts on the source cluster and the target cluster are likely not the same. You must configure each host in the target cluster to accept the SSH authentication of the source cluster.

&lt;/div&gt;
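&lt;p&gt;For example, you might enable passwordless access from a source node with &lt;code&gt;ssh-copy-id&lt;/code&gt; (host names here are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ ssh-copy-id dbadmin@targethost01
$ ssh dbadmin@targethost01 hostname   # verify: should not prompt for a password
&lt;/code&gt;&lt;/pre&gt;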
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enterprise Mode: The following requirements apply:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Both databases have the same number of nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Clusters of both databases have the same number of fault groups, where corresponding fault groups in each cluster have the same number of nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Eon Mode: The following requirements apply:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The primary subclusters of both databases have the same node subscriptions.&lt;/li&gt;
&lt;li&gt;Primary subclusters of the target database have as many or more nodes as primary subclusters of the source database.&lt;/li&gt;
&lt;li&gt;For databases with multiple namespaces, the target and source namespaces must satisfy the requirements described in &lt;a href=&#34;../../../en/admin/backup-and-restore/eon-db-requirements/#backup-and-restore-with-multiple-namespaces&#34;&gt;Eon Mode database requirements&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;EditVbrConfigurationFile&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;edit-vbr-configuration-file&#34;&gt;Edit vbr configuration file&lt;/h3&gt;

&lt;div class=&#34;alert admonition tip&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Tip&lt;/h4&gt;

As a best practice, create a separate configuration file for each replication task.

&lt;/div&gt;
&lt;p&gt;Edit the &lt;code&gt;vbr&lt;/code&gt; configuration file to use for the &lt;code&gt;replicate&lt;/code&gt; task as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/misc/#&#34;&gt;[misc]&lt;/a&gt; section, set the &lt;code&gt;objects&lt;/code&gt; parameter to the objects to be replicated:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
; Identify the objects that you want to replicate
objects = &lt;span class=&#34;code-variable&#34;&gt;schema&lt;/span&gt;.&lt;span class=&#34;code-variable&#34;&gt;objectName&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
If your Eon Mode database has multiple &lt;a href=&#34;../../../en/architecture/eon-concepts/shards-and-subscriptions/&#34;&gt;namespaces&lt;/a&gt;, you must specify the namespace to which the objects belong. For &lt;code&gt;vbr&lt;/code&gt; tasks, namespace names are prefixed with a period. For example, &lt;code&gt;.n.s.t&lt;/code&gt; refers to table &lt;code&gt;t&lt;/code&gt; in schema &lt;code&gt;s&lt;/code&gt; in namespace &lt;code&gt;n&lt;/code&gt;. See &lt;a href=&#34;../../../en/admin/backup-and-restore/eon-db-requirements/#backup-and-restore-with-multiple-namespaces&#34;&gt;Eon Mode database requirements&lt;/a&gt; for more information.
&lt;/div&gt;
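&lt;p&gt;For example, the following entry replicates several objects, including a table in a non-default namespace (all schema, table, and namespace names here are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;objects = store.orders,store.customers,.analytics.store.sales
&lt;/code&gt;&lt;/pre&gt;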

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/misc/#&#34;&gt;[misc]&lt;/a&gt; section, set the &lt;code&gt;snapshotName&lt;/code&gt; parameter to a unique snapshot identifier. Multiple &lt;code&gt;replicate&lt;/code&gt; tasks can run concurrently with each other and with &lt;code&gt;backup&lt;/code&gt; tasks, but only if their snapshot names are different.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
snapshotName = &lt;span class=&#34;code-variable&#34;&gt;name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/db/#&#34;&gt;[database]&lt;/a&gt; section, set the following parameters:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;; parameters used to replicate objects between databases
dest_dbName =
dest_dbUser =
dest_dbPromptForPassword =
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you use a stored password, be sure to configure the &lt;code&gt;dest_dbPassword&lt;/code&gt; parameter in your &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/password-config-file/&#34;&gt;password configuration file&lt;/a&gt;.&lt;/p&gt;
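&lt;p&gt;For example, a password configuration file might contain entries like the following (the values are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Passwords]
dbPassword = source_password
dest_dbPassword = target_password
&lt;/code&gt;&lt;/pre&gt;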
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/mapping/#&#34;&gt;[mapping]&lt;/a&gt; section, map source nodes to target hosts:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Mapping]
v_source_node0001 = targethost01
v_source_node0002 = targethost02
v_source_node0003 = targethost03
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a name=&#34;ReplicateObjects&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;replicate-objects&#34;&gt;Replicate objects&lt;/h3&gt;
&lt;p&gt;Run &lt;code&gt;vbr&lt;/code&gt; with the &lt;code&gt;replicate&lt;/code&gt; task:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vbr -t replicate -c &lt;span class=&#34;code-variable&#34;&gt;configfile&lt;/span&gt;.ini
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The &lt;code&gt;replicate&lt;/code&gt; task can run concurrently with &lt;code&gt;backup&lt;/code&gt; and other &lt;code&gt;replicate&lt;/code&gt; tasks in either direction, provided all tasks have unique snapshot names. &lt;code&gt;replicate&lt;/code&gt; cannot run concurrently with other &lt;code&gt;vbr&lt;/code&gt; tasks.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;HandlingDownNodes&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;handling-down-nodes&#34;&gt;Handling DOWN nodes&lt;/h2&gt;
&lt;p&gt;You can replicate objects if some nodes are down in either the source or target database, provided the nodes are visible on the network.&lt;/p&gt;
&lt;p&gt;The effect of DOWN nodes on a replication task depends on whether they are present in the source or target database.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Location&lt;/th&gt; 

&lt;th &gt;
Effect on replication&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
DOWN source nodes&lt;/td&gt; 

&lt;td &gt;


The database can replicate objects from a source database containing DOWN nodes. If nodes in the source database are DOWN, set the corresponding nodes in the target database to DOWN as well.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
DOWN target nodes&lt;/td&gt; 

&lt;td &gt;
The database can replicate objects when the target database has DOWN nodes. If nodes in the target database are DOWN, exclude the corresponding source database nodes using the &lt;code&gt;--nodes&lt;/code&gt; parameter on the &lt;code&gt;vbr&lt;/code&gt; command line.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
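&lt;p&gt;For example, if the target host mapped to &lt;code&gt;v_source_node0003&lt;/code&gt; is DOWN, you might exclude that source node by listing only the remaining nodes (node names here are placeholders):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t replicate -c configfile.ini --nodes v_source_node0001,v_source_node0002
&lt;/code&gt;&lt;/pre&gt;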
&lt;h2 id=&#34;monitoring-object-replication&#34;&gt;Monitoring object replication&lt;/h2&gt;
&lt;p&gt;You can monitor object replication in the following ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;View &lt;code&gt;vbr&lt;/code&gt; logs on the source database&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check database logs on the source and target databases&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Query &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/remote-replication-status/#&#34;&gt;REMOTE_REPLICATION_STATUS&lt;/a&gt; on the source database&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
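&lt;p&gt;For example, you can check replication status from the source database with a query like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT * FROM v_monitor.remote_replication_status;
&lt;/code&gt;&lt;/pre&gt;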

      </description>
    </item>
    
    <item>
      <title>Admin: Including and excluding objects</title>
      <link>/en/admin/backup-and-restore/including-and-excluding-objects/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/including-and-excluding-objects/</guid>
      <description>
        
        
&lt;p&gt;You specify objects to include in backup, restore, and replicate operations with the &lt;code&gt;vbr&lt;/code&gt; configuration and command-line parameters &lt;code&gt;includeObjects&lt;/code&gt; and &lt;code&gt;--include-objects&lt;/code&gt;, respectively. You can optionally modify the set of included objects with the &lt;code&gt;vbr&lt;/code&gt; configuration and command-line parameters &lt;code&gt;excludeObjects&lt;/code&gt; and &lt;code&gt;--exclude-objects&lt;/code&gt;, respectively. Both parameters support wildcard expressions to include and exclude groups of objects.&lt;/p&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
If your Eon Mode database has multiple &lt;a href=&#34;../../../en/architecture/eon-concepts/shards-and-subscriptions/&#34;&gt;namespaces&lt;/a&gt;, you must specify the namespace to which the objects belong. For &lt;code&gt;vbr&lt;/code&gt; tasks, namespace names are prefixed with a period. For example, &lt;code&gt;.n.s.t&lt;/code&gt; refers to table &lt;code&gt;t&lt;/code&gt; in schema &lt;code&gt;s&lt;/code&gt; in namespace &lt;code&gt;n&lt;/code&gt;. See &lt;a href=&#34;../../../en/admin/backup-and-restore/eon-db-requirements/#backup-and-restore-with-multiple-namespaces&#34;&gt;Eon Mode database requirements&lt;/a&gt; for more information.
&lt;/div&gt;

&lt;p&gt;For example, you might back up all tables in the schema &lt;code&gt;store&lt;/code&gt;, and then exclude from the backup the table &lt;code&gt;store.orders&lt;/code&gt; and all tables in the same schema whose name includes the string &lt;code&gt;account&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vbr --task=backup --config-file=db.ini --include-objects &amp;#39;store.*&amp;#39; --exclude-objects &amp;#39;store.orders,store.*account*&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;wildcard-characters&#34;&gt;Wildcard characters&lt;/h2&gt;

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Character&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
?&lt;/td&gt; 

&lt;td &gt;
Matches any single character. Case-insensitive.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
*
&lt;/td&gt; 

&lt;td &gt;
Matches 0 or more characters. Case-insensitive.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
\&lt;/td&gt; 

&lt;td &gt;
Escapes the next character. To include a literal ? or * in your table or schema name, use the \ character immediately before the escaped character. To escape the \ character itself, use a double \.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&amp;quot;&lt;/td&gt; 

&lt;td &gt;
Escapes the . character. To include a literal . in your table or schema name, wrap the character in double quotation marks.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
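&lt;p&gt;For example, the following hypothetical entry shows both escape mechanisms in one pattern list:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;; match a table literally named report? and a table literally named daily.totals
includeObjects = store.report\?,store.&amp;quot;daily.totals&amp;quot;
&lt;/code&gt;&lt;/pre&gt;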

&lt;h2 id=&#34;matching-schemas&#34;&gt;Matching schemas&lt;/h2&gt;
&lt;p&gt;Any string pattern without a period (&lt;code&gt;.&lt;/code&gt;) character represents a schema. For example, the following &lt;code&gt;includeObjects&lt;/code&gt; list can match any schema name that starts with the string &lt;code&gt;customer&lt;/code&gt;, and any two-character schema name that starts with the letter &lt;code&gt;s&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;includeObjects = customer*,s?
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When a &lt;code&gt;vbr&lt;/code&gt; operation specifies a schema that is unqualified by table references, the operation includes all tables of that schema. In this case, you cannot exclude individual tables from the same schema. For example, the following &lt;code&gt;vbr.ini&lt;/code&gt; entries are invalid:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;; invalid:
includeObjects = VMart
excludeObjects = VMart.?table?
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can exclude tables from an included schema by identifying the schema with the pattern &lt;em&gt;&lt;code&gt;schemaname&lt;/code&gt;&lt;/em&gt;.*. In this case, the pattern explicitly specifies to include all tables in that schema with the wildcard *. In the following example, the &lt;code&gt;include-objects&lt;/code&gt; parameter includes all tables in the VMart schema, and then excludes specific tables—specifically, the table &lt;code&gt;VMart.sales&lt;/code&gt; and all VMart tables that include the string &lt;code&gt;account&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
--include-objects &amp;#39;VMart.*&amp;#39;
--exclude-objects &amp;#39;VMart.sales,VMart.*account*&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;matching-tables&#34;&gt;Matching tables&lt;/h2&gt;
&lt;p&gt;Any pattern that includes a period (&lt;code&gt;.&lt;/code&gt;) represents a table. For example, in a configuration file, the following &lt;code&gt;includeObjects&lt;/code&gt; list matches the table name &lt;code&gt;sales.newclients&lt;/code&gt;, and any two-character table name in the same schema:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;includeObjects = sales.newclients,sales.??
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can also match all schemas and tables in a database or backup by using the pattern *.*. For example, you can restore all tables and schemas in a backup using this command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;--include-objects &amp;#39;*.*&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Because &lt;code&gt;vbr&lt;/code&gt; parameters are evaluated on the command line, you must enclose the wildcards in single quotes to prevent the shell from expanding them.&lt;/p&gt;
&lt;h2 id=&#34;testing-wildcard-patterns&#34;&gt;Testing wildcard patterns&lt;/h2&gt;
&lt;p&gt;You can test the results of any pattern by using the &lt;code&gt;--dry-run&lt;/code&gt; parameter with a backup or restore command. Commands that include &lt;code&gt;--dry-run&lt;/code&gt; do not affect your database. Instead, &lt;code&gt;vbr&lt;/code&gt; displays the result of the command without executing it. For more information on &lt;code&gt;--dry-run&lt;/code&gt;, refer to the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-reference/#&#34;&gt;vbr reference&lt;/a&gt;.&lt;/p&gt;
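&lt;p&gt;For example, the following command shows which objects a restore would include, without restoring anything (the configuration file name is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t restore -c db.ini --include-objects &amp;#39;store.*&amp;#39; --dry-run
&lt;/code&gt;&lt;/pre&gt;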
&lt;h2 id=&#34;using-wildcards-with-backups&#34;&gt;Using wildcards with backups&lt;/h2&gt;
&lt;p&gt;You can identify objects to include in your object backup tasks using the &lt;code&gt;includeObjects&lt;/code&gt; and &lt;code&gt;excludeObjects&lt;/code&gt; parameters in your configuration file. A typical configuration file might include the following content:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[Misc]
snapshotName = dbobjects
restorePointLimit = 1
enableFreeSpaceCheck = True
includeObjects = VMart.*,online_sales.*
excludeObjects = *.*temp*
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this example, the backup includes all tables from the &lt;code&gt;VMart&lt;/code&gt; and &lt;code&gt;online_sales&lt;/code&gt; schemas, while excluding any table in any schema whose name contains the string &lt;code&gt;temp&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;After it evaluates included objects, &lt;code&gt;vbr&lt;/code&gt; evaluates excluded objects and removes excluded objects from the included set. For example, if you included &lt;code&gt;schema1.table1&lt;/code&gt; and then excluded &lt;code&gt;schema1.table1&lt;/code&gt;, that object would be excluded. If no other objects were included in the task, the task would fail. The same is true for wildcards: if an exclusion pattern removes all included objects, the task fails.&lt;/p&gt;
&lt;h2 id=&#34;using-wildcards-with-restore&#34;&gt;Using wildcards with restore&lt;/h2&gt;
&lt;p&gt;You can identify objects to include in your restore tasks using the &lt;code&gt;--include-objects&lt;/code&gt; and &lt;code&gt;--exclude-objects&lt;/code&gt; parameters.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Take extra care when using wildcard patterns to restore database objects. Depending on your object restore mode settings, restored objects can overwrite existing objects. Test the impact of a wildcard restore with the &lt;code&gt;--dry-run&lt;/code&gt; &lt;code&gt;vbr&lt;/code&gt; parameter before performing the actual task.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;As with backups, &lt;code&gt;vbr&lt;/code&gt; evaluates excluded objects after it evaluates included objects and removes excluded objects from the included set. If no objects remain, the task fails.&lt;/p&gt;
&lt;p&gt;A typical restore command might include this content. (Line wrapped in the documentation for readability, but this is one command.)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t restore -c verticaconfig --include-objects &amp;#39;customers.*,sales??&amp;#39;
    --exclude-objects &amp;#39;customers.199?,customers.200?&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example includes the schema &lt;code&gt;customers&lt;/code&gt;, minus any tables whose names match &lt;code&gt;199&lt;/code&gt; or &lt;code&gt;200&lt;/code&gt; plus one character, as well as any schema matching &lt;code&gt;sales&lt;/code&gt; plus two characters.&lt;/p&gt;
&lt;p&gt;Another typical restore command might include this content.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t restore -c replicateconfig --include-objects &amp;#39;*.transactions,flights.*&amp;#39;
    --exclude-objects &amp;#39;flights.DTW*,flights.LAS*,flights.LAX*&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example includes any table named &lt;code&gt;transactions&lt;/code&gt;, regardless of schema, and all tables in the schema &lt;code&gt;flights&lt;/code&gt; except those whose names begin with DTW, LAS, or LAX. Although these three-letter airport codes are capitalized in the example, &lt;code&gt;vbr&lt;/code&gt; is case-insensitive.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Managing backups</title>
      <link>/en/admin/backup-and-restore/managing-backups/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/managing-backups/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Inadequate security on backups can compromise overall database security. Be sure to secure backup locations and strictly limit access to backups only to users who already have permissions to access all database data.
&lt;/div&gt;

&lt;p&gt;&lt;code&gt;vbr&lt;/code&gt; provides several tasks related to managing backups: listing them, checking their integrity, selectively deleting them, and more. In addition, &lt;code&gt;vbr&lt;/code&gt; has parameters to allow you to restrict its use of system resources.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Troubleshooting backup and restore</title>
      <link>/en/admin/backup-and-restore/troubleshooting-backup-and-restore/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/troubleshooting-backup-and-restore/</guid>
      <description>
        
        
        &lt;p&gt;These tips can help you avoid issues related to backup and restore with OpenText™ Analytics Database and to troubleshoot any problems that occur.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;vbrLogFile&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;check-vbr-log&#34;&gt;Check vbr log&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;vbr&lt;/code&gt; log is separate from &lt;code&gt;vertica.log&lt;/code&gt;. Its location is set by the &lt;code&gt;vbr&lt;/code&gt; configuration parameter &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/misc/#tempDir&#34;&gt;tempDir&lt;/a&gt;, by default &lt;code&gt;/tmp/vbr&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If the log has no explanation for an error or unexpected results, try increasing the logging level with the &lt;code&gt;vbr&lt;/code&gt; option &lt;code&gt;--debug&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vbr -t backup -c &lt;span class=&#34;code-variable&#34;&gt;config-file&lt;/span&gt; --debug &lt;span class=&#34;code-variable&#34;&gt;debug-level&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;where &lt;em&gt;&lt;code&gt;debug-level&lt;/code&gt;&lt;/em&gt; is an integer between 0 (default) and 3 (verbose), inclusive. As you increase the logging level, the file size of the log increases. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr -t backup -c full_backup.ini --debug 3
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Scrutinize reports do not include &lt;code&gt;vbr&lt;/code&gt; logs.

&lt;/div&gt;
&lt;h2 id=&#34;check-status-of-backup-nodes&#34;&gt;Check status of backup nodes&lt;/h2&gt;
&lt;p&gt;Backups fail if you run out of disk space on the backup hosts or if &lt;code&gt;vbr&lt;/code&gt; cannot reach them all. Check that you have sufficient space on each backup host and that you can reach each host via SSH.&lt;/p&gt;
&lt;p&gt;Sometimes &lt;code&gt;vbr&lt;/code&gt; leaves rsync processes running on the database or backup nodes. These processes can interfere with new ones. If you get an rsync error in the console, look for runaway processes and kill them.&lt;/p&gt;
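&lt;p&gt;For example, you might check free space and look for leftover rsync processes as follows (the backup path is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ df -h /home/dbadmin/backups    # check free space at the backup location
$ ps -ef | grep [r]sync          # list any leftover rsync processes
$ kill &amp;lt;pid&amp;gt;                 # terminate a runaway process by PID
&lt;/code&gt;&lt;/pre&gt;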
&lt;h2 id=&#34;common-errors&#34;&gt;Common errors&lt;/h2&gt;
&lt;h3 id=&#34;object-replication-fails&#34;&gt;Object replication fails&lt;/h3&gt;
&lt;p&gt;If a node in the target database is DOWN and you do not exclude the corresponding source node, replication fails with the following error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Error connecting to a destination database node on the host &amp;lt;hostname&amp;gt; : &amp;lt;error&amp;gt;  ...
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Confirm that you excluded all DOWN nodes from the object replication operation.&lt;/p&gt;
&lt;h3 id=&#34;error-restoring-an-archive&#34;&gt;Error restoring an archive&lt;/h3&gt;
&lt;p&gt;You might see an error like the following when restoring an archive:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task restore --archive prd_db_20190131_183111 --config-file /home/dbadmin/backup.ini
IOError: [Errno 2] No such file or directory: &amp;#39;/tmp/vbr/vbr_20190131_183111_s0rpYR/prd_db.info&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The problem is that the archive name is not in the correct format. Specify only the date/timestamp suffix of the directory name that identifies the archive to restore, as described in &lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/restoring-db-from-full-backup/#Archive&#34;&gt;Restoring an Archive&lt;/a&gt;. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vbr --task restore --archive 20190131_183111 --config-file /home/dbadmin/backup.ini
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;backup-or-restore-fails-when-using-an-hdfs-storage-location&#34;&gt;Backup or restore fails when using an HDFS storage location&lt;/h3&gt;
&lt;p&gt;When performing a backup of a cluster that includes HDFS storage locations, you might see an error like the following:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;ERROR 5127:  Unable to create snapshot No such file /usr/bin/hadoop:
check the HadoopHome configuration parameter
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This error is caused by the backup script not being able to back up the HDFS storage locations. You must configure OpenText™ Analytics Database and Hadoop to enable the backup script to back up these locations. See &lt;a href=&#34;../../../en/admin/backup-and-restore/requirements-backing-up-and-restoring-hdfs-storage-locations/#&#34;&gt;Requirements for backing up and restoring HDFS storage locations&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Object-level backup and restore are not supported with HDFS storage locations. You must use full backup and restore.&lt;/p&gt;
&lt;h3 id=&#34;could-not-connect-to-endpoint-url&#34;&gt;Could not connect to endpoint URL&lt;/h3&gt;
&lt;p&gt;(Eon Mode) When performing a cross-endpoint operation, you can see a connection error if you failed to specify the endpoint URL for your communal storage (&lt;code&gt;VBR_COMMUNAL_STORAGE_ENDPOINT_URL&lt;/code&gt;). When the endpoint is missing but you specify credentials for communal storage, &lt;code&gt;vbr&lt;/code&gt; tries to use those credentials to access AWS. This access fails, because those credentials are for your on-premises storage, not AWS. When performing cross-endpoint operations, check that all environment variables described in &lt;a href=&#34;../../../en/admin/backup-and-restore/setting-up-backup-locations/configuring-cloud-storage-backups/#CrossEndpoint&#34;&gt;Cross-Endpoint Backups in Eon Mode&lt;/a&gt; are set correctly.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: vbr reference</title>
      <link>/en/admin/backup-and-restore/vbr-reference/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/vbr-reference/</guid>
      <description>
        
        
&lt;p&gt;&lt;code&gt;vbr&lt;/code&gt; can back up and restore the full database, or specific schemas and tables. It also supports a number of other backup-related tasks—for example, listing the history of all backups.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vbr&lt;/code&gt; is located in the OpenText™ Analytics Database binary directory—typically, 
&lt;code&gt;/opt/vertica/bin/vbr&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vbr { --help | -h }
  | { --task | -t } &lt;span class=&#34;code-variable&#34;&gt;task&lt;/span&gt;  { --config-file | -c } &lt;span class=&#34;code-variable&#34;&gt;configfile&lt;/span&gt; [ &lt;span class=&#34;code-variable&#34;&gt;option&lt;/span&gt;[...] ]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;global-options&#34;&gt;Global options&lt;/h2&gt;
&lt;p&gt;The following options apply to all &lt;code&gt;vbr&lt;/code&gt; tasks. For additional options, see &lt;a href=&#34;#Task-Spe&#34;&gt;Task-Specific Options&lt;/a&gt;.&lt;/p&gt;

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--help | -h&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Display a brief &lt;code&gt;vbr&lt;/code&gt; usage guide.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;{--task | -t} &lt;/code&gt;&lt;em&gt;&lt;code&gt;task&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
&lt;p&gt;The &lt;code&gt;vbr&lt;/code&gt; task to execute, one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#backup&#34;&gt;backup&lt;/a&gt;: create a full or object-level backup&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#collect-&#34;&gt;collect-garbage&lt;/a&gt;: rebuild the backup manifest and delete any unreferenced objects in the backup location&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/copying-db-to-another-cluster/#&#34;&gt;copycluster&lt;/a&gt;: copy the database to another cluster (Enterprise Mode only, invalid for HDFS)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#full-che&#34;&gt;full-check&lt;/a&gt;: verify all objects in the backup manifest and report missing or unreferenced objects&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#init&#34;&gt;init&lt;/a&gt;: prepare a new backup location&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#listback&#34;&gt;listbackup&lt;/a&gt;: show available backups&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/checking-backup-integrity/#QuickCheck&#34;&gt;quick-check&lt;/a&gt;: confirm that all backed-up objects are in the backup manifest and report discrepancies between objects in the backup location and objects listed in the backup manifest&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/repairing-backups/#&#34;&gt;quick-repair&lt;/a&gt;: build a replacement backup manifest based on storage locations and objects&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#remove&#34;&gt;remove&lt;/a&gt;: remove specified restore points&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#replicat&#34;&gt;replicate&lt;/a&gt;: copy objects from one cluster to another&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#restore&#34;&gt;restore&lt;/a&gt;: restore a full or object-level backup&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;
&lt;p&gt;In general, tasks cannot run concurrently, with one exception: multiple &lt;code&gt;replicate&lt;/code&gt; tasks can run concurrently with each other, and with &lt;code&gt;backup&lt;/code&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;{--config-file | -c} &lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
File path of the configuration file to use for the given task.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--debug &lt;/code&gt;&lt;em&gt;&lt;code&gt;level&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
Level of debug messaging to the &lt;code&gt;vbr&lt;/code&gt; log, an integer from 0 to 3 inclusive, where 0 (default) turns off debug messaging, and 3 is the most verbose level of messaging.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--nodes &lt;/code&gt;&lt;em&gt;&lt;code&gt;nodeslist&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
&lt;p&gt;(Enterprise Mode only) Comma-delimited list of nodes on which to perform a &lt;code&gt;vbr&lt;/code&gt; task. Listed nodes must match names in the &lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/mapping/&#34;&gt;Mapping&lt;/a&gt; section of the configuration file. Use this option to exclude DOWN nodes from a task, so &lt;code&gt;vbr&lt;/code&gt; does not return with an error.&lt;/p&gt;
&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;
&lt;p&gt;If you use &lt;code&gt;--nodes&lt;/code&gt; with a &lt;code&gt;backup&lt;/code&gt; task, be sure that the nodes list includes all UP nodes; omitting any UP node can cause data loss in that backup.&lt;/p&gt;
&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--showconfig&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
&lt;p&gt;Displays, in raw JSON format, the configuration values to be used for a given task, before &lt;code&gt;vbr&lt;/code&gt; starts task execution:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vbr -t &lt;/code&gt;&lt;em&gt;&lt;code&gt;task&lt;/code&gt;&lt;/em&gt;&lt;code&gt; -c &lt;/code&gt;&lt;em&gt;&lt;code&gt;configfile&lt;/code&gt;&lt;/em&gt;&lt;code&gt; --showconfig&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--showconfig&lt;/code&gt; can also show settings for a given configuration file:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vbr -c &lt;/code&gt;&lt;em&gt;&lt;code&gt;configfile&lt;/code&gt;&lt;/em&gt;&lt;code&gt; --showconfig&lt;/code&gt;&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
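&lt;p&gt;The general options above can be combined with any task. The following sketch is hypothetical—the configuration file path and node names are placeholders to adapt to your environment:&lt;/p&gt;

```shell
# Hypothetical example: run a backup with verbose debug logging,
# restricted to the listed UP nodes (Enterprise Mode).
# The config path and node names below are placeholders.
vbr --task backup --config-file /home/dbadmin/fullbackup.ini \
    --debug 2 \
    --nodes v_vmart_node0001,v_vmart_node0002,v_vmart_node0003
```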

&lt;p&gt;&lt;a name=&#34;Task-Spe&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;task-specific-options&#34;&gt;Task-specific options&lt;/h2&gt;
&lt;p&gt;Some &lt;code&gt;vbr&lt;/code&gt; tasks support additional options, described in the sections that follow.&lt;/p&gt;
&lt;p&gt;The following &lt;code&gt;vbr&lt;/code&gt; tasks have no task-specific options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;copycluster&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;quick-check&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;quick-repair&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;backup&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;backup&#34;&gt;Backup&lt;/h3&gt;
&lt;p&gt;Create a &lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/creating-full-backups/&#34;&gt;full database&lt;/a&gt; or &lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/creating-object-level-backups/&#34;&gt;object-level&lt;/a&gt; backup, depending on configuration file settings.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--dry-run&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Perform a test run to evaluate the impact of the backup operation—for example, its size and potential overhead.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;collect-&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;collect-garbage&#34;&gt;Collect-garbage&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/repairing-backups/#GarbageCollection&#34;&gt;Rebuild the backup manifest&lt;/a&gt; and delete any unreferenced objects in the backup location.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--report-file&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Output results to a delimited JSON file.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;full-che&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;full-check&#34;&gt;Full-check&lt;/h3&gt;
&lt;p&gt;Produce a &lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/checking-backup-integrity/#FullCheck&#34;&gt;full backup integrity check&lt;/a&gt; that verifies all objects in the backup manifest against file system metadata, and then outputs missing and unreferenced objects.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--report-file&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Output results to a delimited JSON file.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;init&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;init&#34;&gt;Init&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/creating-backups/#BackupLocationContents&#34;&gt;Create a backup directory&lt;/a&gt; or prepare an existing one for use, and create backup manifests. This task must precede the first &lt;code&gt;vbr&lt;/code&gt; backup operation.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;

&lt;code&gt;--&lt;a href=&#34;../../../en/admin/backup-and-restore/setting-up-backup-locations/additional-considerations-cloud-storage/#ReinitializingCloudBackupStorage&#34;&gt;cloud-force-init&lt;/a&gt;&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Qualifies &lt;code&gt;--task init&lt;/code&gt; to force the &lt;code&gt;init&lt;/code&gt; task to succeed on S3 or GS storage targets when an identity/lock file mismatch occurs.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--report-file&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Output results to a delimited JSON file.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;listback&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;listbackup&#34;&gt;Listbackup&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/viewing-backups/&#34;&gt;Display backups&lt;/a&gt; associated with the specified configuration file. Use this task to get archive (restore point) identifiers for &lt;code&gt;restore&lt;/code&gt; and &lt;code&gt;remove&lt;/code&gt; tasks.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;

&lt;code&gt;--&lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/viewing-backups/#ListAllBackups&#34;&gt;list-all&lt;/a&gt;&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
List all backups stored on the hosts and paths in the configuration file.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--list-output-file &lt;/code&gt;&lt;em&gt;&lt;code&gt;filename&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
Redirect output to the specified file.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--json&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Use JSON delimited format.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;remove&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;remove&#34;&gt;Remove&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/removing-backups/&#34;&gt;Remove the backup restore points&lt;/a&gt; specified by the &lt;code&gt;--archive&lt;/code&gt; option.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--archive&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
&lt;p&gt;Restore points to remove, one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;: A single restore point to remove.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;&lt;code&gt;:&lt;/code&gt;&lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;: A range of contiguous restore points to remove.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;all&lt;/code&gt;: Remove all restore points.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You obtain timestamp identifiers for the target restore points with the &lt;code&gt;listbackup&lt;/code&gt; task. For details, see &lt;a href=&#34;../../../en/admin/backup-and-restore/managing-backups/viewing-backups/#ListBackups&#34;&gt;vbr listbackup&lt;/a&gt;.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
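&lt;p&gt;A typical workflow pairs &lt;code&gt;listbackup&lt;/code&gt; with &lt;code&gt;remove&lt;/code&gt;. The following sketch is hypothetical—the configuration file path and timestamps are placeholders, not real restore points:&lt;/p&gt;

```shell
# First, get archive (restore point) timestamp identifiers.
vbr --task listbackup --config-file /home/dbadmin/fullbackup.ini

# Then remove a contiguous range of restore points by timestamp
# (placeholder timestamps; copy real ones from the listbackup output).
vbr --task remove --config-file /home/dbadmin/fullbackup.ini \
    --archive 20260105_120000:20260107_120000
```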
&lt;p&gt;&lt;a name=&#34;replicat&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;replicate&#34;&gt;Replicate&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/replicating-objects-to-another-db-cluster/&#34;&gt;Copy objects&lt;/a&gt; from one cluster to an alternate cluster. This task can run concurrently with &lt;code&gt;backup&lt;/code&gt; and other &lt;code&gt;replicate&lt;/code&gt; tasks.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--archive&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Timestamp of the backup restore point to replicate, obtained from the &lt;code&gt;listbackup&lt;/code&gt; task.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--dry-run&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Perform a test run to evaluate the impact of the replicate operation—for example, its size and potential overhead.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--target-namespace&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
&lt;p&gt;(Eon Mode only) The namespace in the target database to which objects are replicated.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vbr&lt;/code&gt; behaves differently depending on whether the target namespace exists:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Exists: &lt;code&gt;vbr&lt;/code&gt; attempts to restore or replicate the objects to the existing namespace, which must have the same shard count, shard boundaries, and node subscriptions as the source namespace. If these conditions are not met, the &lt;code&gt;vbr&lt;/code&gt; task fails.&lt;/li&gt;
&lt;li&gt;Nonexistent: &lt;code&gt;vbr&lt;/code&gt; creates a namespace in the target database with the name specified in &lt;code&gt;--target-namespace&lt;/code&gt; and the shard count of the source namespace, and then replicates or restores the objects to that namespace.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If no target namespace is specified, &lt;code&gt;vbr&lt;/code&gt; attempts to restore or replicate objects to a namespace with the same name as the source namespace.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
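&lt;p&gt;For example, a replication of a single restore point into an Eon Mode namespace might look like the following hypothetical sketch (the configuration file path, timestamp, and namespace name are placeholders):&lt;/p&gt;

```shell
# Replicate one restore point to the alternate cluster named in the
# config file, into a specific Eon Mode namespace on the target.
# All values below are placeholders.
vbr --task replicate --config-file /home/dbadmin/replicate.ini \
    --archive 20260105_120000 \
    --target-namespace analytics_ns
```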
&lt;p&gt;&lt;a name=&#34;restore&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;restore&#34;&gt;Restore&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/&#34;&gt;Restore&lt;/a&gt; a full or object-level database backup.

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Description&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--archive&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Timestamp of the backup to restore, obtained from the &lt;code&gt;listbackup&lt;/code&gt; task. If omitted, &lt;code&gt;vbr&lt;/code&gt; restores the latest backup of the specified configuration.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;

&lt;code&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/restoring-backups/restoring-individual-objects/#&#34;&gt;--restore-objects&lt;/a&gt;&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Comma-delimited list of objects—tables and schemas—to restore from a given backup.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;

&lt;code&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/including-and-excluding-objects/#&#34;&gt;--include-objects&lt;/a&gt;&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Comma-delimited list of database objects or &lt;a href=&#34;../../../en/admin/backup-and-restore/including-and-excluding-objects/&#34;&gt;patterns of objects&lt;/a&gt; to include from a full or object-level backup.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;

&lt;code&gt;&lt;a href=&#34;../../../en/admin/backup-and-restore/including-and-excluding-objects/#&#34;&gt;--exclude-objects&lt;/a&gt;&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Comma-delimited list of database objects or &lt;a href=&#34;../../../en/admin/backup-and-restore/including-and-excluding-objects/&#34;&gt;patterns of objects&lt;/a&gt; to exclude from the set specified by &lt;code&gt;--include-objects&lt;/code&gt;. This option can only be used together with &lt;code&gt;--include-objects&lt;/code&gt;.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--dry-run&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Perform a test run to evaluate the impact of the restore operation—for example, its size and potential overhead.&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--target-namespace&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
&lt;p&gt;(Eon Mode only) The namespace in the target database to which objects are restored.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vbr&lt;/code&gt; behaves differently depending on whether the target namespace exists:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Exists: &lt;code&gt;vbr&lt;/code&gt; attempts to restore or replicate the objects to the existing namespace, which must have the same shard count, shard boundaries, and node subscriptions as the source namespace. If these conditions are not met, the &lt;code&gt;vbr&lt;/code&gt; task fails.&lt;/li&gt;
&lt;li&gt;Nonexistent: &lt;code&gt;vbr&lt;/code&gt; creates a namespace in the target database with the name specified in &lt;code&gt;--target-namespace&lt;/code&gt; and the shard count of the source namespace, and then replicates or restores the objects to that namespace.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If no target namespace is specified, &lt;code&gt;vbr&lt;/code&gt; attempts to restore or replicate objects to a namespace with the same name as the source namespace.&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;


&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The &lt;code&gt;--restore-objects&lt;/code&gt; option and the &lt;code&gt;--include-objects&lt;/code&gt;/&lt;code&gt;--exclude-objects&lt;/code&gt; options are mutually exclusive. You can use &lt;code&gt;--include-objects&lt;/code&gt; to specify a set of objects and combine it with &lt;code&gt;--exclude-objects&lt;/code&gt; to remove objects from the set.

&lt;/div&gt;&lt;/p&gt;
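&lt;p&gt;The two restore styles above can be sketched as follows. This is a hypothetical example—the configuration file path, schema, and table names are placeholders:&lt;/p&gt;

```shell
# Restore two named tables from the latest backup
# (placeholder schema and table names).
vbr --task restore --config-file /home/dbadmin/fullbackup.ini \
    --restore-objects store.orders,store.customers

# Alternatively, include a whole schema by pattern, then exclude
# one of its tables (--exclude-objects requires --include-objects).
vbr --task restore --config-file /home/dbadmin/fullbackup.ini \
    --include-objects 'store.*' \
    --exclude-objects store.staging_events
```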
&lt;h2 id=&#34;interrupting-vbr&#34;&gt;Interrupting vbr&lt;/h2&gt;
&lt;p&gt;To cancel a backup, use Ctrl+C or send a SIGINT to the &lt;code&gt;vbr&lt;/code&gt; Python process. &lt;code&gt;vbr&lt;/code&gt; stops the backup process after it completes copying the data. Canceling a &lt;code&gt;vbr&lt;/code&gt; backup with Ctrl+C closes the session immediately.&lt;/p&gt;
&lt;p&gt;The files generated by an interrupted backup process remain in the target backup location directory. The next backup process picks up where the interrupted process left off.&lt;/p&gt;
&lt;p&gt;Backup operations are atomic, so interrupting a backup operation does not affect the previous backup. The latest backup replaces the previous backup only after all other backup steps are complete.

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

&lt;code&gt;restore&lt;/code&gt; or &lt;code&gt;copycluster&lt;/code&gt; operations overwrite the database catalog directory. Interrupting either of these processes leaves the database unusable until you restart the process and allow it to finish.

&lt;/div&gt;&lt;/p&gt;
&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href=&#34;../../../en/admin/backup-and-restore/vbr-config-file-reference/#&#34;&gt;vbr configuration file reference&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../en/admin/backup-and-restore/sample-vbr-config-files/#&#34;&gt;Sample vbr configuration files&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: vbr configuration file reference</title>
      <link>/en/admin/backup-and-restore/vbr-config-file-reference/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/backup-and-restore/vbr-config-file-reference/</guid>
      <description>
        
        
        &lt;p&gt;&lt;code&gt;vbr&lt;/code&gt; configuration files divide backup settings into sections, under section-specific headings such as &lt;code&gt;[Database]&lt;/code&gt; and &lt;code&gt;[CloudStorage]&lt;/code&gt;, which contain database access and cloud storage location settings, respectively. Sections can appear in any order and can be repeated—for example, multiple &lt;code&gt;[Database]&lt;/code&gt; sections.

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Section headings are case-sensitive.
&lt;/div&gt;&lt;/p&gt;
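&lt;p&gt;A minimal sketch of such a file follows. This is an illustrative fragment only—the database name, user, snapshot name, host, and path are placeholders:&lt;/p&gt;

```ini
; Hypothetical minimal vbr configuration; all values are placeholders.
[Misc]
snapshotName = full_backup
restorePointLimit = 3

[Database]
dbName = vmart
dbUser = dbadmin

[Transmission]

[Mapping]
; databaseNode = backupHost:backupDir
v_vmart_node0001 = backup_host01:/home/dbadmin/backups
```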

      </description>
    </item>
    
  </channel>
</rss>
