<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Database export and import</title>
    <link>/en/data-export/db-export-and-import/</link>
    <description>Recent content in Database export and import on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/data-export/db-export-and-import/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Data-Export: Configuring connection security between clusters</title>
      <link>/en/data-export/db-export-and-import/configuring-connection-security-between-clusters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/configuring-connection-security-between-clusters/</guid>
      <description>
        
        
        &lt;p&gt;When copying data between clusters, OpenText™ Analytics Database can encrypt both data and plan metadata.&lt;/p&gt;
&lt;p&gt;Data is encrypted if you configure internode encryption (see &lt;a href=&#34;../../../en/security-and-authentication/internode-tls/#&#34;&gt;Internode TLS&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;For metadata, by default OpenText™ Analytics Database tries TLS first and falls back to plaintext. You can configure the database to require TLS and to fail if the connection cannot be made. You can also have the database verify the certificate and hostname before connecting.&lt;/p&gt;
&lt;h2 id=&#34;enabling-tls-between-clusters&#34;&gt;Enabling TLS between clusters&lt;/h2&gt;
&lt;p&gt;To use TLS between clusters, you must first configure TLS between nodes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Set the &lt;a href=&#34;../../../en/security-and-authentication/internode-tls/control-channel-spread-tls/&#34;&gt;EncryptSpreadComms&lt;/a&gt; parameter.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the &lt;a href=&#34;../../../en/security-and-authentication/internode-tls/data-channel-tls/&#34;&gt;data_channel&lt;/a&gt; TLS Configuration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the ImportExportTLSMode parameter.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
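&lt;p&gt;For example, the first two steps might look like the following (a sketch that assumes the required keys and CA certificates are already in place, and that your version exposes the data_channel TLS Configuration; EncryptSpreadComms takes effect only after a database restart):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT SET PARAMETER EncryptSpreadComms = &amp;#39;vertica&amp;#39;;
=&amp;gt; SELECT name, certificate, mode FROM tls_configurations WHERE name = &amp;#39;data_channel&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;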
&lt;p&gt;To specify the level of strictness when connecting to another cluster, set the ImportExportTLSMode configuration parameter. This parameter applies to both importing and exporting data. The possible values are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;PREFER&lt;/code&gt;: Try TLS but fall back to plaintext if TLS fails.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;REQUIRE&lt;/code&gt;: Use TLS and fail if the server does not support TLS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;VERIFY_CA&lt;/code&gt;: Require TLS (as with REQUIRE), and also validate the other server&#39;s certificate against the CA certificates of the &amp;quot;server&amp;quot; TLS Configuration (in this case, &amp;quot;ca_cert&amp;quot; and &amp;quot;ica_cert&amp;quot;):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT name, certificate, ca_certificate, mode FROM tls_configurations WHERE name = &amp;#39;server&amp;#39;;
  name  |   certificate    |   ca_certificate   |   mode
--------+------------------+--------------------+-----------
 server | server_cert      | ca_cert,ica_cert   | VERIFY_CA
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;VERIFY_FULL&lt;/code&gt;: Require TLS and validate the certificate (as with VERIFY_CA), and also validate the server certificate&#39;s hostname.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;REQUIRE_FORCE&lt;/code&gt;, &lt;code&gt;VERIFY_CA_FORCE&lt;/code&gt;, and &lt;code&gt;VERIFY_FULL_FORCE&lt;/code&gt;: Same behavior as &lt;code&gt;REQUIRE&lt;/code&gt;, &lt;code&gt;VERIFY_CA&lt;/code&gt;, and &lt;code&gt;VERIFY_FULL&lt;/code&gt;, respectively, and cannot be overridden by &lt;a href=&#34;../../../en/sql-reference/statements/connect-to/#&#34;&gt;CONNECT TO VERTICA&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
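&lt;p&gt;For example, to require TLS with CA validation for all cross-cluster connections, you can set the parameter at the database level (a sketch; choose the mode that matches your security policy):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT SET PARAMETER ImportExportTLSMode = &amp;#39;VERIFY_CA&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;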

&lt;p&gt;ImportExportTLSMode is a global parameter that applies to all import and export connections you make using &lt;a href=&#34;../../../en/sql-reference/statements/connect-to/#&#34;&gt;CONNECT TO VERTICA&lt;/a&gt;. You can override it for an individual connection.&lt;/p&gt;
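&lt;p&gt;For example, assuming your version supports the TLSMODE clause of CONNECT TO VERTICA, a single connection might relax the global setting as follows (hypothetical database and host names):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CONNECT TO VERTICA testdb USER dbadmin PASSWORD &amp;#39;&amp;#39; ON &amp;#39;VertTest01&amp;#39;, 5433 TLSMODE PREFER;
&lt;/code&gt;&lt;/pre&gt;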
&lt;p&gt;For more information about these and other configuration parameters, see &lt;a href=&#34;../../../en/sql-reference/config-parameters/security-parameters/#&#34;&gt;Security parameters&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Data-Export: Exporting data to another database</title>
      <link>/en/data-export/db-export-and-import/exporting-data-to-another-db/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/exporting-data-to-another-db/</guid>
      <description>
        
        
        &lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/statements/export-to/#&#34;&gt;EXPORT TO VERTICA&lt;/a&gt; exports table data from one OpenText™ Analytics database to another. The following requirements apply:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You already opened a connection to the target database with &lt;a href=&#34;../../../en/sql-reference/statements/connect-to/#&#34;&gt;CONNECT TO VERTICA&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The source database is no more than one major release behind the target database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The table in the target database must exist.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Source and target table columns must have the same or &lt;a href=&#34;../../../en/sql-reference/data-types/data-type-coercion-chart/&#34;&gt;compatible&lt;/a&gt; data types.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each EXPORT TO VERTICA statement exports data from only one table at a time. You can use the same database connection for multiple export operations.&lt;/p&gt;
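&lt;p&gt;For example, once a connection to &lt;code&gt;testdb&lt;/code&gt; is open, you can issue several exports in sequence over it (a sketch with hypothetical tables):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPORT TO VERTICA testdb.sales FROM sales;
=&amp;gt; EXPORT TO VERTICA testdb.returns FROM returns;
&lt;/code&gt;&lt;/pre&gt;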
&lt;h2 id=&#34;export-process&#34;&gt;Export process&lt;/h2&gt;
&lt;p&gt;Exporting is a three-step process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Connect to the target database with &lt;a href=&#34;../../../en/sql-reference/statements/connect-to/#&#34;&gt;CONNECT TO VERTICA&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CONNECT TO VERTICA testdb USER dbadmin PASSWORD &amp;#39;&amp;#39; ON &amp;#39;VertTest01&amp;#39;, 5433;
CONNECT
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Export the desired data with EXPORT TO VERTICA. For example, the following statement exports all table data in &lt;code&gt;customer_dimension&lt;/code&gt; to a table of the same name in target database &lt;code&gt;testdb&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPORT TO VERTICA testdb.customer_dimension FROM customer_dimension;
Rows Exported
---------------
         23416
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/statements/disconnect/#&#34;&gt;DISCONNECT&lt;/a&gt; disconnects from the target database when all export and &lt;a href=&#34;../../../en/data-export/db-export-and-import/copying-data-from-another-db/&#34;&gt;import&lt;/a&gt; operations are complete:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; DISCONNECT testdb;
DISCONNECT
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Closing your session also closes the database connection. However, it is good practice to close the connection to the other database explicitly, both to free up resources and to avoid interfering with other SQL scripts that might run in the same session. Because each session can have only one connection to a given database at a time, closing the connection also prevents errors if a later script in the same session tries to connect to that database.

&lt;/div&gt;

&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;mapping-between-source-and-target-columns&#34;&gt;Mapping between source and target columns&lt;/h2&gt;
&lt;p&gt;If you export all table data from one database to another, as in the previous example, you can omit column lists from the EXPORT TO VERTICA statement. This is possible only if the column definitions in both tables meet the following conditions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Same number of columns&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Identical column names&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Same sequence of columns&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Matching or &lt;a href=&#34;../../../en/sql-reference/data-types/data-type-coercion-chart/&#34;&gt;compatible&lt;/a&gt; column data types&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If any of these conditions is not true, the EXPORT TO VERTICA statement must include column lists that explicitly map source and target columns to each other. The two lists must:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Contain the same number of columns.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;List source and target columns in the same order.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Pair columns with the same (or &lt;a href=&#34;../../../en/sql-reference/data-types/data-type-coercion-chart/&#34;&gt;compatible&lt;/a&gt;) data types.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPORT TO VERTICA testdb.people (name, gender, age)
   FROM customer_dimension (customer_name, customer_gender, customer_age);
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;exporting-subsets-of-table-data&#34;&gt;Exporting subsets of table data&lt;/h2&gt;
&lt;p&gt;In general, you can export a subset of table data in two ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Export data of specific source table columns.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Export the result set of a query (including &lt;a href=&#34;../../../en/data-analysis/queries/historical-queries/&#34;&gt;historical queries&lt;/a&gt;) on the source table.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In both cases, the EXPORT TO VERTICA statement typically must specify column lists for the source and target tables.&lt;/p&gt;
&lt;p&gt;The following example exports data from three columns in the source table to three columns in the target table. Accordingly, the EXPORT TO VERTICA statement specifies a column list for each table. The order of columns in each list determines how OpenText™ Analytics Database maps target columns to source columns. In this case, target columns &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;gender&lt;/code&gt;, and &lt;code&gt;age&lt;/code&gt; map to source columns &lt;code&gt;customer_name&lt;/code&gt;, &lt;code&gt;customer_gender&lt;/code&gt;, and &lt;code&gt;customer_age&lt;/code&gt;, respectively:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPORT TO VERTICA testdb.people (name, gender, age) FROM customer_dimension
(customer_name, customer_gender, customer_age);
Rows Exported
---------------
         23416
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The next example queries source table &lt;code&gt;customer_dimension&lt;/code&gt;, and exports the result set to table &lt;code&gt;ma_customers&lt;/code&gt; in target database &lt;code&gt;testdb&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPORT TO VERTICA testdb.ma_customers(customer_key, customer_name, annual_income)
   AS SELECT customer_key, customer_name, annual_income FROM customer_dimension WHERE customer_state = &amp;#39;MA&amp;#39;;
Rows Exported
---------------
          3429
(1 row)
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

In this example, the source and target column names are identical, so specifying a column list for target table &lt;code&gt;ma_customers&lt;/code&gt; is optional. If one or more of the queried source columns did not have a match in the target table, the statement would need to include a column list for the target table.

&lt;/div&gt;
&lt;h2 id=&#34;exporting-identity-columns&#34;&gt;Exporting IDENTITY columns&lt;/h2&gt;
&lt;p&gt;You can export tables (or columns) that contain &lt;a href=&#34;../../../en/admin/working-with-native-tables/sequences/identity-sequences/&#34;&gt;IDENTITY&lt;/a&gt; values, but the sequence values are not incremented automatically at the target table. You must use &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-sequence/#&#34;&gt;ALTER SEQUENCE&lt;/a&gt; to make updates.&lt;/p&gt;
&lt;p&gt;Export IDENTITY columns as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If both source and destination tables have an IDENTITY column and configuration parameter CopyFromVerticaWithIdentity is set to true (1), you do not need to list them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the source table has an IDENTITY column but the target table does not, you must explicitly list the source and target columns.&lt;/p&gt;

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

Failure to list which IDENTITY columns to export can cause an error, because the IDENTITY column will be interpreted as missing in the destination table.

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, EXPORT TO VERTICA exports all IDENTITY columns. To disable this behavior globally, set the &lt;code&gt;CopyFromVerticaWithIdentity&lt;/code&gt; configuration parameter to 0.&lt;/p&gt;
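&lt;p&gt;For example, after the export you might restart the sequence behind the target table&#39;s IDENTITY column so that new rows continue past the exported values (a sketch; the sequence name here is hypothetical):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SEQUENCE customer_dimension_id_seq RESTART WITH 23417;
&lt;/code&gt;&lt;/pre&gt;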

      </description>
    </item>
    
    <item>
      <title>Data-Export: Copying data from another OpenText Analytics Database</title>
      <link>/en/data-export/db-export-and-import/copying-data-from-another-db/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/copying-data-from-another-db/</guid>
      <description>
        
        
        &lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/statements/copy-from/#&#34;&gt;COPY FROM VERTICA&lt;/a&gt; imports table data from one OpenText™ Analytics database to another. The following requirements apply:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You already opened a connection to the target database with &lt;a href=&#34;../../../en/sql-reference/statements/connect-to/#&#34;&gt;CONNECT TO VERTICA&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The source database is no more than one major release behind the target database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The table in the target database must exist.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Source and target table columns must have the same or &lt;a href=&#34;../../../en/sql-reference/data-types/data-type-coercion-chart/&#34;&gt;compatible&lt;/a&gt; data types.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;import-process&#34;&gt;Import process&lt;/h2&gt;
&lt;p&gt;Importing is a three-step process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Connect to the source database with &lt;a href=&#34;../../../en/sql-reference/statements/connect-to/#&#34;&gt;CONNECT TO VERTICA&lt;/a&gt;. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CONNECT TO VERTICA vmart USER dbadmin PASSWORD &amp;#39;&amp;#39; ON &amp;#39;VertTest01&amp;#39;,5433;
CONNECT
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Import the desired data with COPY FROM VERTICA. For example, the following statement imports all table data in &lt;code&gt;customer_dimension&lt;/code&gt; to a table of the same name:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY customer_dimension FROM VERTICA vmart.customer_dimension;
 Rows Loaded
-------------
      500000
(1 row)
=&amp;gt; DISCONNECT vmart;
DISCONNECT
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Successive COPY FROM VERTICA statements in the same session can import data from multiple tables over the same connection.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/statements/disconnect/#&#34;&gt;DISCONNECT&lt;/a&gt; disconnects from the source database when all import and &lt;a href=&#34;../../../en/data-export/db-export-and-import/exporting-data-to-another-db/&#34;&gt;export&lt;/a&gt; operations are complete:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; DISCONNECT vmart;
DISCONNECT
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Closing your session also closes the database connection. However, it is good practice to close the connection to the other database explicitly, both to free up resources and to avoid interfering with other SQL scripts that might run in the same session. Because each session can have only one connection to a given database at a time, closing the connection also prevents errors if a later script in the same session tries to connect to that database.

&lt;/div&gt;

&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;importing-identity-columns&#34;&gt;Importing IDENTITY columns&lt;/h2&gt;
&lt;p&gt;You can import &lt;a href=&#34;../../../en/admin/working-with-native-tables/sequences/identity-sequences/&#34;&gt;IDENTITY&lt;/a&gt; columns as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If both source and destination tables have an IDENTITY column and configuration parameter CopyFromVerticaWithIdentity is set to true (1), you do not need to list them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the source table has an IDENTITY column but the target table does not, you must explicitly list the source and target columns.&lt;/p&gt;

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

Failure to list which IDENTITY columns to import can cause an error, because the IDENTITY column will be interpreted as missing in the destination table.

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
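&lt;p&gt;For example, the following statement explicitly maps source columns to target columns, skipping the source table&#39;s IDENTITY column (a sketch with hypothetical tables and columns):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY people (name, age) FROM VERTICA vmart.customer_dimension (customer_name, customer_age);
&lt;/code&gt;&lt;/pre&gt;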

&lt;p&gt;After importing the columns, the IDENTITY column values do not increment automatically. Use &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-sequence/#&#34;&gt;ALTER SEQUENCE&lt;/a&gt; to make updates.&lt;/p&gt;
&lt;p&gt;By default, COPY FROM VERTICA imports IDENTITY columns when you specify them directly in the source table. To disable this behavior globally, set the &lt;a href=&#34;../../../en/sql-reference/config-parameters/general-parameters/#CopyFromVerticaWithIdentity&#34;&gt;CopyFromVerticaWithIdentity&lt;/a&gt; configuration parameter to 0.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Data-Export: Copy and export data on AWS</title>
      <link>/en/data-export/db-export-and-import/copy-and-export-data-on-aws/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/copy-and-export-data-on-aws/</guid>
      <description>
        
        
&lt;p&gt;Several common issues can occur when exporting or copying data on AWS clusters, as described below. Apart from these AWS-specific issues, copying and exporting data works as documented in &lt;a href=&#34;../../../en/data-export/db-export-and-import/#&#34;&gt;Database export and import&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To copy or export data on AWS:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Verify that all nodes in source and destination clusters have their own elastic IPs (or public IPs) assigned.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Each node in one cluster must be able to communicate with each node in the other cluster, so each source and destination node needs an elastic IP (or public IP) assigned. If your destination cluster is located within the same VPC as your source cluster, proceed to step 3.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;(For non-CloudFormation Template installs) Create an S3 gateway endpoint.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you aren&#39;t using a CloudFormation Template (CFT) to install Vertica, you must create an S3 gateway endpoint in your VPC. For more information, see &lt;a href=&#34;https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html&#34;&gt;the AWS documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For example, the Vertica CFT has the following VPC endpoint:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;#34;S3Enpoint&amp;#34; : {
    &amp;#34;Type&amp;#34; : &amp;#34;AWS::EC2::VPCEndpoint&amp;#34;,
    &amp;#34;Properties&amp;#34; : {
        &amp;#34;PolicyDocument&amp;#34; : {
            &amp;#34;Version&amp;#34;:&amp;#34;2012-10-17&amp;#34;,
            &amp;#34;Statement&amp;#34;:[{
                &amp;#34;Effect&amp;#34;:&amp;#34;Allow&amp;#34;,
                &amp;#34;Principal&amp;#34;: &amp;#34;*&amp;#34;,
                &amp;#34;Action&amp;#34;:[&amp;#34;*&amp;#34;],
                &amp;#34;Resource&amp;#34;:[&amp;#34;*&amp;#34;]
            }]
        },
        &amp;#34;RouteTableIds&amp;#34; : [ {&amp;#34;Ref&amp;#34; : &amp;#34;RouteTable&amp;#34;} ],
        &amp;#34;ServiceName&amp;#34; : { &amp;#34;Fn::Join&amp;#34;: [ &amp;#34;&amp;#34;, [ &amp;#34;com.amazonaws.&amp;#34;, { &amp;#34;Ref&amp;#34;: &amp;#34;AWS::Region&amp;#34; }, &amp;#34;.s3&amp;#34; ] ] },
        &amp;#34;VpcId&amp;#34; : {&amp;#34;Ref&amp;#34; : &amp;#34;VPC&amp;#34;}
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Verify that your security group allows the AWS clusters to communicate.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Check your security groups for both your source and destination AWS clusters. Verify that ports 5433 and 5434 are open. If one of your AWS clusters is on a separate VPC, verify that your network access control list (ACL) allows communication on port 5434.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

This communication method exports and copies (imports) data across the Internet. Alternatively, you can use non-public IPs and gateways, or a VPN, to connect the source and destination clusters.

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If there are one or more elastic load balancers (ELBs) between the clusters, verify that port 5433 is open between the ELBs and clusters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you use the OpenText™ Analytics Database client to connect to one or more ELBs, the ELBs only distribute incoming connections; the data itself is transmitted directly between the clusters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

      </description>
    </item>
    
    <item>
      <title>Data-Export: Changing node export addresses</title>
      <link>/en/data-export/db-export-and-import/changing-node-export-addresses/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/changing-node-export-addresses/</guid>
      <description>
        
        
        &lt;p&gt;You can change the export address for your OpenText™ Analytics Database cluster. You might need to do so to export data between clusters in different network subnets.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a subnet for importing and exporting data between database clusters. The CREATE SUBNET statement identifies the public network IP addresses residing on the same subnet.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE SUBNET kv_subnet with &amp;#39;10.10.10.0&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alter the database to specify the subnet name of a public network for import/export.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE DEFAULT EXPORT ON kv_subnet;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create network interfaces for importing and exporting data from individual nodes to other database clusters. The CREATE NETWORK INTERFACE statement identifies the public network IP addresses residing on multiple subnets.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE NETWORK INTERFACE kv_node1 on v_VMartDB_node0001 with &amp;#39;10.10.10.1&amp;#39;;
=&amp;gt; CREATE NETWORK INTERFACE kv_node2 on v_VMartDB_node0002 with &amp;#39;10.10.10.2&amp;#39;;
=&amp;gt; CREATE NETWORK INTERFACE kv_node3 on v_VMartDB_node0003 with &amp;#39;10.10.10.3&amp;#39;;
=&amp;gt; CREATE NETWORK INTERFACE kv_node4 on v_VMartDB_node0004 with &amp;#39;10.10.10.4&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For users on Amazon Web Services (AWS) or using Network Address Translation (NAT), refer to &lt;a href=&#34;../../../en/setup/set-up-on-cloud/on-aws/#&#34;&gt;OpenText Analytics Database on Amazon Web Services&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alter the node settings to change the export address. When used with the EXPORT ON clause, the ALTER NODE specifies the network interface of the public network on individual nodes for importing and exporting data.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER NODE v_VMartDB_node0001 export on kv_node1;
=&amp;gt; ALTER NODE v_VMartDB_node0002 export on kv_node2;
=&amp;gt; ALTER NODE v_VMartDB_node0003 export on kv_node3;
=&amp;gt; ALTER NODE v_VMartDB_node0004 export on kv_node4;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify that the node address and the export address are different and reside on different network subnets of the database cluster.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, node_address, export_address FROM nodes;
     node_name     | node_address    | export_address
-------------------+-----------------+----------------
v_VMartDB_node0001 | 192.168.100.101 | 10.10.10.1
v_VMartDB_node0002 | 192.168.100.102 | 10.10.10.2
v_VMartDB_node0003 | 192.168.100.103 | 10.10.10.3
v_VMartDB_node0004 | 192.168.100.104 | 10.10.10.4
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Node-level export settings (a network interface created with CREATE NETWORK INTERFACE and assigned with ALTER NODE) take precedence over the database-level settings (a subnet created with CREATE SUBNET and assigned with ALTER DATABASE for import/export).&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Data-Export: Using public and private IP networks</title>
      <link>/en/data-export/db-export-and-import/using-public-and-private-ip-networks/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/using-public-and-private-ip-networks/</guid>
      <description>
        
        
        &lt;p&gt;In many configurations, OpenText™ Analytics Database cluster hosts use two network IP addresses as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A private address for communication between the cluster hosts.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A public IP address for communication with client connections.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By default, importing from and exporting to another database uses the private network.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Ensure that port 5433, or whatever port the database uses, is not blocked.

&lt;/div&gt;
&lt;p&gt;To use the public network for copy and export activities, including moving large amounts of data, configure the database to use the public network when exporting to or importing from another database cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/data-export/db-export-and-import/using-public-and-private-ip-networks/identify-public-network-to/&#34;&gt;Identify the Public Network to the Database&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/data-export/db-export-and-import/using-public-and-private-ip-networks/identify-db-or-nodes-used-importexport/#&#34;&gt;Identify the database or nodes used for import/export&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The database encrypts data during transmission (if you have configured a certificate). The database also attempts to encrypt plan metadata but, by default, falls back to plaintext if needed. You can configure the database to require encryption for metadata too; see &lt;a href=&#34;../../../en/data-export/db-export-and-import/configuring-connection-security-between-clusters/#&#34;&gt;Configuring connection security between clusters&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In some configurations, the combined public and private network traffic exceeds the capacity of a single Local Area Network (LAN). In that case, configure your database cluster to use two LANs: one for public network traffic and one for private network traffic.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Data-Export: Handling node failure during copy/export</title>
      <link>/en/data-export/db-export-and-import/handling-node-failure-during-copyexport/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/handling-node-failure-during-copyexport/</guid>
      <description>
        
        
        &lt;p&gt;When an export (&lt;code&gt;EXPORT TO VERTICA&lt;/code&gt;) or import (&lt;code&gt;COPY FROM VERTICA&lt;/code&gt;) task is in progress, and a non-initiator node fails, OpenText™ Analytics Database does not complete the task automatically. A non-initiator node is any node that is not the source or target node in your export or import statement. To complete the task, you must run the statement again.&lt;/p&gt;
&lt;p&gt;You address the problem of a non-initiator node failing during an import or export as follows:&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Both OpenText™ Analytics databases must be running in a safe state.

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You export or import from one cluster to another using the &lt;code&gt;EXPORT TO VERTICA&lt;/code&gt; or &lt;code&gt;COPY FROM VERTICA&lt;/code&gt; statement.&lt;/p&gt;
&lt;p&gt;During the export or import, a non-initiator node on the target or source cluster fails. The database displays an error message that indicates possible node failure, one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ERROR 4534: Receive on v_tpchdb1_node0002: Message receipt from v_tpchdb2_node0005 failed&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;WARNING 4539: Received no response from v_tpchdb1_node0004 in abandon plan&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ERROR 3322: [tpchdb2] Execution canceled by operator&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Complete your import or export by running the statement again. The failed node does not need to be up for the database to successfully complete the export or import.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

      </description>
    </item>
    
    <item>
      <title>Data-Export: Using EXPORT functions</title>
      <link>/en/data-export/db-export-and-import/using-export-functions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/data-export/db-export-and-import/using-export-functions/</guid>
      <description>
        
        
        &lt;p&gt;OpenText™ Analytics Database provides several EXPORT_ functions that let you recreate a database, or specific schemas and tables, in a target database. For example, you can use the EXPORT_ functions to transfer some or all of the designs and objects you create in a development or test environment to a production database.&lt;/p&gt;
&lt;p&gt;The EXPORT_ functions create SQL scripts that you can run to generate the exported database designs or objects. These functions serve a different purpose from the export statements &lt;a href=&#34;../../../en/sql-reference/statements/copy-from/#&#34;&gt;COPY FROM VERTICA&lt;/a&gt; (pull data) and &lt;a href=&#34;../../../en/sql-reference/statements/export-to/#&#34;&gt;EXPORT TO VERTICA&lt;/a&gt; (push data). Those statements transfer data directly from the source to the target database over a network connection between the two; they are dynamic actions and do not generate SQL scripts.&lt;/p&gt;
&lt;p&gt;The EXPORT_ functions appear in the following table. Depending on what you need to export, you can use one or more of the functions. EXPORT_CATALOG creates the most comprehensive SQL script, while EXPORT_TABLES and EXPORT_OBJECTS are subsets of that function that narrow the export scope.&lt;/p&gt;
&lt;table class=&#34;table table-bordered&#34;&gt;
&lt;tr&gt;
&lt;th&gt;Use this function...&lt;/th&gt;
&lt;th&gt;To recreate...&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&#34;../../../en/sql-reference/functions/management-functions/catalog-functions/export-catalog/#&#34;&gt;EXPORT_CATALOG&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;These catalog items:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An existing schema design, tables, projections, constraints, views, and stored procedures.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Database Designer-created schema design, tables, projections, constraints, and views.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A design on a different cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&#34;../../../en/sql-reference/functions/management-functions/catalog-functions/export-tables/#&#34;&gt;EXPORT_TABLES&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Non-virtual objects up to, and including, the schema of one or more tables.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&#34;../../../en/sql-reference/functions/management-functions/catalog-functions/export-objects/#&#34;&gt;EXPORT_OBJECTS&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Catalog objects in dependency order for replication.&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;The designs and object definitions that the script creates depend on the EXPORT_ function scope you specify. The following sections give examples of the commands and output for each function and the scopes it supports.&lt;/p&gt;
&lt;h2 id=&#34;saving-scripts-for-export-functions&#34;&gt;Saving scripts for export functions&lt;/h2&gt;
&lt;p&gt;All of the examples in this section were generated using the standard VMART database, with some additional test objects and tables. One output directory was created for all SQL scripts that the functions created:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/home/dbadmin/xtest
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you specify the destination argument as an empty string (&lt;code&gt;&#39;&#39;&lt;/code&gt;), the function writes the export results to STDOUT.&lt;/p&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

A superuser can export all available database objects to a file with the EXPORT_ functions. For a non-superuser, the EXPORT_ functions generate a script containing only the objects to which the user has access.

&lt;/div&gt;
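&lt;p&gt;For example, assuming the standard VMART table &lt;code&gt;customer_dimension&lt;/code&gt;, a call like the following writes the DDL for that one table to a script in the output directory (a sketch; the third argument controls whether the script is marked K-safe):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT EXPORT_OBJECTS(&amp;#39;/home/dbadmin/xtest/customer_dimension.sql&amp;#39;, &amp;#39;customer_dimension&amp;#39;, false);
&lt;/code&gt;&lt;/pre&gt;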

      </description>
    </item>
    
  </channel>
</rss>
