<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Hadoop functions</title>
    <link>/en/sql-reference/functions/hadoop-functions/</link>
    <description>Recent content in Hadoop functions on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/sql-reference/functions/hadoop-functions/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Sql-Reference: CLEAR_HDFS_CACHES</title>
      <link>/en/sql-reference/functions/hadoop-functions/clear-hdfs-caches/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/clear-hdfs-caches/</guid>
      <description>
        
        
        &lt;p&gt;Clears the configuration information copied from HDFS and any cached connections.&lt;/p&gt;
&lt;p&gt;This function affects reads using the &lt;code&gt;hdfs&lt;/code&gt; scheme in the following ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;This function flushes information loaded from configuration files copied from Hadoop (such as core-site.xml). These files are found on the path set by the HadoopConfDir configuration parameter.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This function flushes information about which NameNode is active in a High Availability (HA) Hadoop cluster. Therefore, the first request to Hadoop after calling this function is slower than expected.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;OpenText™ Analytics Database maintains a cache of open connections to NameNodes to reduce latency. This function flushes that cache.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;CLEAR_HDFS_CACHES ( )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;Superuser&lt;/p&gt;

&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example clears the Hadoop configuration information:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT CLEAR_HDFS_CACHES();
 CLEAR_HDFS_CACHES
-------------------
 Cleared
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;
&lt;a href=&#34;../../../../en/sql-reference/config-parameters/hadoop-parameters/#&#34;&gt;Hadoop parameters&lt;/a&gt;

      </description>
    </item>
    
    <item>
      <title>Sql-Reference: EXTERNAL_CONFIG_CHECK</title>
      <link>/en/sql-reference/functions/hadoop-functions/external-config-check/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/external-config-check/</guid>
      <description>
        
        
        &lt;p&gt;Tests the Hadoop configuration of an OpenText™ Analytics Database cluster. This function tests HDFS configuration files, HCatalog Connector configuration, and Kerberos configuration.&lt;/p&gt;
&lt;p&gt;This function calls the following functions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/db-functions/kerberos-config-check/#&#34;&gt;KERBEROS_CONFIG_CHECK&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/#&#34;&gt;HADOOP_IMPERSONATION_CONFIG_CHECK&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hdfs-cluster-config-check/#&#34;&gt;HDFS_CLUSTER_CONFIG_CHECK&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hcatalogconnector-config-check/#&#34;&gt;HCATALOGCONNECTOR_CONFIG_CHECK&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you call this function with an argument, it passes the argument to each called function that also takes an argument.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;EXTERNAL_CONFIG_CHECK( [&amp;#39;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&amp;#39; ] )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A string specifying the authorities, nameservices, and/or HCatalog schemas to test. The format is a comma-separated list of &amp;quot;key=value&amp;quot; pairs, where keys are &amp;quot;authority&amp;quot;, &amp;quot;nameservice&amp;quot;, and &amp;quot;schema&amp;quot;. The value is passed to all of the sub-functions; see those reference pages for details on how values are interpreted.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example tests the configuration of only the nameservice named &amp;quot;ns1&amp;quot;. Output has been omitted due to length.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT EXTERNAL_CONFIG_CHECK(&amp;#39;nameservice=ns1&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;
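&lt;p&gt;Because &lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&lt;/code&gt; is a comma-separated list of key=value pairs, you can combine keys to narrow the check. The following call, which uses placeholder names rather than values from a real cluster, restricts the test to one nameservice and one HCatalog schema:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT EXTERNAL_CONFIG_CHECK(&amp;#39;nameservice=ns1, schema=hcat1&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;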
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: GET_METADATA</title>
      <link>/en/sql-reference/functions/hadoop-functions/get-metadata/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/get-metadata/</guid>
      <description>
        
        
        &lt;p&gt;Returns the metadata of a Parquet file. Metadata includes the number and sizes of row groups, column names, and information about chunks and compression. Metadata is returned as JSON.&lt;/p&gt;
&lt;p&gt;This function inspects a single file. Parquet data usually spans many files in a single directory; choose one file to inspect. The function does not accept a directory name as an argument.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;GET_METADATA( &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;filename&lt;/span&gt;&amp;#39; )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt; &lt;span class=&#34;code-variable&#34;&gt;filename&lt;/span&gt; &lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;The name of a Parquet file. Any path that is valid for COPY is valid for this function. This function does not operate on files in other formats.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;Superuser, or non-superuser with READ privileges on the USER-accessible storage location (see &lt;a href=&#34;../../../../en/sql-reference/statements/grant-statements/grant-storage-location/#&#34;&gt;GRANT (storage location)&lt;/a&gt;).&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;You must call this function with a single file, not a directory or glob:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT GET_METADATA(&amp;#39;/data/emp-row.parquet&amp;#39;);
                GET_METADATA
----------------------------------------------------------------------------------------------------
 schema:
required group field_id=-1 spark_schema {
  optional int32 field_id=-1 employeeID;
  optional group field_id=-1 personal {
    optional binary field_id=-1 name (String);
    optional group field_id=-1 address {
      optional binary field_id=-1 street (String);
      optional binary field_id=-1 city (String);
      optional int32 field_id=-1 zipcode;
    }
    optional int32 field_id=-1 taxID;
  }
  optional binary field_id=-1 department (String);
}

 data page version:
  data page v1

 metadata:
{
  &amp;#34;FileName&amp;#34;: &amp;#34;/data/emp-row.parquet&amp;#34;,
  &amp;#34;FileFormat&amp;#34;: &amp;#34;Parquet&amp;#34;,
  &amp;#34;Version&amp;#34;: &amp;#34;1.0&amp;#34;,
  &amp;#34;CreatedBy&amp;#34;: &amp;#34;parquet-mr version 1.10.1 (build a89df8f9932b6ef6633d06069e50c9b7970bebd1)&amp;#34;,
  &amp;#34;TotalRows&amp;#34;: &amp;#34;4&amp;#34;,
  &amp;#34;NumberOfRowGroups&amp;#34;: &amp;#34;1&amp;#34;,
  &amp;#34;NumberOfRealColumns&amp;#34;: &amp;#34;3&amp;#34;,
  &amp;#34;NumberOfColumns&amp;#34;: &amp;#34;7&amp;#34;,
  &amp;#34;Columns&amp;#34;: [
     { &amp;#34;Id&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;employeeID&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;INT32&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;NONE&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;None&amp;#34;} },
     { &amp;#34;Id&amp;#34;: &amp;#34;1&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;personal.name&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;BYTE_ARRAY&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;UTF8&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;String&amp;#34;} },
     { &amp;#34;Id&amp;#34;: &amp;#34;2&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;personal.address.street&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;BYTE_ARRAY&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;UTF8&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;String&amp;#34;} },
     { &amp;#34;Id&amp;#34;: &amp;#34;3&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;personal.address.city&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;BYTE_ARRAY&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;UTF8&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;String&amp;#34;} },
     { &amp;#34;Id&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;personal.address.zipcode&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;INT32&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;NONE&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;None&amp;#34;} },
     { &amp;#34;Id&amp;#34;: &amp;#34;5&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;personal.taxID&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;INT32&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;NONE&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;None&amp;#34;} },
     { &amp;#34;Id&amp;#34;: &amp;#34;6&amp;#34;, &amp;#34;Name&amp;#34;: &amp;#34;department&amp;#34;, &amp;#34;PhysicalType&amp;#34;: &amp;#34;BYTE_ARRAY&amp;#34;, &amp;#34;ConvertedType&amp;#34;: &amp;#34;UTF8&amp;#34;, &amp;#34;LogicalType&amp;#34;: {&amp;#34;Type&amp;#34;: &amp;#34;String&amp;#34;} }
  ],
  &amp;#34;RowGroups&amp;#34;: [
     {
       &amp;#34;Id&amp;#34;: &amp;#34;0&amp;#34;,  &amp;#34;TotalBytes&amp;#34;: &amp;#34;642&amp;#34;,  &amp;#34;TotalCompressedBytes&amp;#34;: &amp;#34;0&amp;#34;,  &amp;#34;Rows&amp;#34;: &amp;#34;4&amp;#34;,
       &amp;#34;ColumnChunks&amp;#34;: [
          {&amp;#34;Id&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;51513&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;17103&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;PLAIN RLE BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;67&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;69&amp;#34; },
          {&amp;#34;Id&amp;#34;: &amp;#34;1&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;Sheldon Cooper&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;Howard Wolowitz&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;PLAIN RLE BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;142&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;145&amp;#34; },
          {&amp;#34;Id&amp;#34;: &amp;#34;2&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;52 Broad St&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;100 Main St Apt 4A&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;PLAIN RLE BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;139&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;123&amp;#34; },
          {&amp;#34;Id&amp;#34;: &amp;#34;3&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;Pasadena&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;Pasadena&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;RLE PLAIN_DICTIONARY BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;95&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;99&amp;#34; },
          {&amp;#34;Id&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;91021&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;91001&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;PLAIN RLE BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;68&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;70&amp;#34; },
          {&amp;#34;Id&amp;#34;: &amp;#34;5&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;0&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;PLAIN RLE BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;28&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;30&amp;#34; },
          {&amp;#34;Id&amp;#34;: &amp;#34;6&amp;#34;, &amp;#34;Values&amp;#34;: &amp;#34;4&amp;#34;, &amp;#34;StatsSet&amp;#34;: &amp;#34;True&amp;#34;, &amp;#34;Stats&amp;#34;: {&amp;#34;NumNulls&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;DistinctValues&amp;#34;: &amp;#34;0&amp;#34;, &amp;#34;Max&amp;#34;: &amp;#34;Physics&amp;#34;, &amp;#34;Min&amp;#34;: &amp;#34;Astronomy&amp;#34; },
           &amp;#34;Compression&amp;#34;: &amp;#34;SNAPPY&amp;#34;, &amp;#34;Encodings&amp;#34;: &amp;#34;RLE PLAIN_DICTIONARY BIT_PACKED &amp;#34;, &amp;#34;UncompressedSize&amp;#34;: &amp;#34;103&amp;#34;, &amp;#34;CompressedSize&amp;#34;: &amp;#34;107&amp;#34; }
        ]
     }
  ]
}

(1 row)
&lt;/code&gt;&lt;/pre&gt;
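&lt;p&gt;Because any path that is valid for COPY is also valid here, the file can reside on HDFS as well as on the local file system. The following call, against a hypothetical &lt;code&gt;hdfs&lt;/code&gt; location, returns metadata in the same form as above:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT GET_METADATA(&amp;#39;hdfs:///data/emp-row.parquet&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;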
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: HADOOP_IMPERSONATION_CONFIG_CHECK</title>
      <link>/en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/</guid>
      <description>
        
        
        &lt;p&gt;Reports the delegation tokens OpenText™ Analytics Database will use when accessing Kerberized data in HDFS. The HadoopImpersonationConfig configuration parameter specifies one or more authorities, nameservices, and HCatalog schemas and their associated tokens. For each tested value, the function reports what doAs user or delegation token the database will use for access. Use this function to confirm that you have defined your delegation tokens as you intended.&lt;/p&gt;
&lt;p&gt;You can call this function with an argument to specify the authority, nameservice, or HCatalog schema to test, or without arguments to test all configured values.&lt;/p&gt;
&lt;p&gt;This function does not check that you can use these delegation tokens to access HDFS.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/#&#34;&gt;Proxy users and delegation tokens&lt;/a&gt; for more about impersonation.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;HADOOP_IMPERSONATION_CONFIG_CHECK( [&amp;#39;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&amp;#39; ] )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A string specifying the authorities, nameservices, and/or HCatalog schemas to test. For example, a value of &#39;nameservice=ns1&#39; means the function tests only access to the nameservice &amp;quot;ns1&amp;quot; and ignores any other authorities and schemas. A value of &#39;nameservice=ns1, schema=hcat1&#39; means the function tests one nameservice and one HCatalog schema.
&lt;p&gt;If you do not specify this argument, the function tests all authorities, nameservices, and schemas defined in HadoopImpersonationConfig.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;Consider the following definition of HadoopImpersonationConfig:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[{
        &amp;#34;nameservice&amp;#34;: &amp;#34;ns1&amp;#34;,
        &amp;#34;token&amp;#34;: &amp;#34;RANDOM-TOKEN-STRING&amp;#34;
    },
    {
        &amp;#34;nameservice&amp;#34;: &amp;#34;*&amp;#34;,
        &amp;#34;doAs&amp;#34;: &amp;#34;Paul&amp;#34;
    },
    {
        &amp;#34;schema&amp;#34;: &amp;#34;hcat1&amp;#34;,
        &amp;#34;doAs&amp;#34;: &amp;#34;Fred&amp;#34;
    }
]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following query tests only the &amp;quot;ns1&amp;quot; nameservice:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HADOOP_IMPERSONATION_CONFIG_CHECK(&amp;#39;nameservice=ns1&amp;#39;);

-- hadoop_impersonation_config_check --
Connections to nameservice [ns1] will use a delegation token with hash [b3dd9e71cd695d91]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This function returns a hash of the token for security reasons. You can call &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hash-external-token/#&#34;&gt;HASH_EXTERNAL_TOKEN&lt;/a&gt; with the expected value and compare that hash to the one in this function&#39;s output.&lt;/p&gt;
&lt;p&gt;A query with no argument tests all values:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HADOOP_IMPERSONATION_CONFIG_CHECK();

-- hadoop_impersonation_config_check --
Connections to nameservice [ns1] will use a delegation token with hash [b3dd9e71cd695d91]
JDBC connections for HCatalog schema [hcat1] will doAs [Fred]
[!] hadoop_impersonation_config_check : [PASS]
&lt;/code&gt;&lt;/pre&gt;
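&lt;p&gt;As described under Arguments, you can combine keys to test a nameservice and an HCatalog schema in one call. With the sample HadoopImpersonationConfig above, the following call would report both the delegation token used for &amp;quot;ns1&amp;quot; and the doAs user used for &amp;quot;hcat1&amp;quot;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HADOOP_IMPERSONATION_CONFIG_CHECK(&amp;#39;nameservice=ns1, schema=hcat1&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;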
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: HASH_EXTERNAL_TOKEN</title>
      <link>/en/sql-reference/functions/hadoop-functions/hash-external-token/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/hash-external-token/</guid>
      <description>
        
        
        &lt;p&gt;Returns a hash of a string token, for use with &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/#&#34;&gt;HADOOP_IMPERSONATION_CONFIG_CHECK&lt;/a&gt;. Call &lt;code&gt;HASH_EXTERNAL_TOKEN&lt;/code&gt; with the delegation token you expect OpenText™ Analytics Database to use and compare it to the hash in the output of &lt;code&gt;HADOOP_IMPERSONATION_CONFIG_CHECK&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;HASH_EXTERNAL_TOKEN( &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;token&lt;/span&gt;&amp;#39; )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;token&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A string specifying the token to hash. The token is configured in the HadoopImpersonationConfig parameter.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following query tests the expected value shown in the example on the &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/#&#34;&gt;HADOOP_IMPERSONATION_CONFIG_CHECK&lt;/a&gt; reference page.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HASH_EXTERNAL_TOKEN(&amp;#39;RANDOM-TOKEN-STRING&amp;#39;);
hash_external_token
---------------------
b3dd9e71cd695d91
(1 row)
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: HCATALOGCONNECTOR_CONFIG_CHECK</title>
      <link>/en/sql-reference/functions/hadoop-functions/hcatalogconnector-config-check/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/hcatalogconnector-config-check/</guid>
      <description>
        
        
        &lt;p&gt;Tests the configuration of an OpenText™ Analytics Database cluster that uses the HCatalog Connector to access Hive data. The function first verifies that the HCatalog Connector is properly installed and reports on the values of several related configuration parameters. It then tests the connection using HiveServer2. This function does not support the WebHCat server.&lt;/p&gt;
&lt;p&gt;If you specify an HCatalog schema, and if you have defined a delegation token for that schema, this function uses the delegation token. Otherwise, the function uses the default endpoint without a delegation token.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/#&#34;&gt;Proxy users and delegation tokens&lt;/a&gt; for more about delegation tokens.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;HCATALOGCONNECTOR_CONFIG_CHECK( [&amp;#39;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&amp;#39; ] )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A string specifying the HCatalog schemas to test. For example, a value of &#39;schema=hcat1&#39; means the function tests only the &amp;quot;hcat1&amp;quot; schema and ignores any others that are found.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following query tests with the default endpoint and no delegation token.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HCATALOGCONNECTOR_CONFIG_CHECK();

-- hcatalogconnector_config_check --

    HCatalogConnectorUseHiveServer2 : [1]
    EnableHCatImpersonation : [1]
    HCatalogConnectorUseORCReader : [1]
    HCatalogConnectorUseParquetReader : [1]
    HCatalogConnectorUseTxtReader : [0]
  [INFO] Vertica is not configured to use its internal parsers for delimited files.
  [INFO] This is off by default, but will be changed in a future release.
    HCatalogConnectorUseLibHDFSPP : [1]

  [OK] HCatalog connector library is properly installed.
  [INFO] Creating JDBC connection as session user.
  [OK] Successful JDBC connection to HiveServer2 as user [USER].

  [!] hcatalogconnector_config_check : [PASS]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To test with the configured delegation token, pass the schema as an argument:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HCATALOGCONNECTOR_CONFIG_CHECK(&amp;#39;schema=hcat1&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: HDFS_CLUSTER_CONFIG_CHECK</title>
      <link>/en/sql-reference/functions/hadoop-functions/hdfs-cluster-config-check/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/hdfs-cluster-config-check/</guid>
      <description>
        
        
        &lt;p&gt;Tests the configuration of an OpenText™ Analytics Database cluster that uses HDFS. The function scans the Hadoop configuration files found in HadoopConfDir and performs configuration checks on each cluster it finds. If you have more than one cluster configured, you can specify which one to test instead of testing all of them.&lt;/p&gt;
&lt;p&gt;For each Hadoop cluster, it reports properties including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Nameservice name and associated NameNodes&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;High-availability status&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;RPC encryption status&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kerberos authentication status&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HTTP(S) status&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It then tests connections using &lt;code&gt;http(s)&lt;/code&gt;, &lt;code&gt;hdfs&lt;/code&gt;, and &lt;code&gt;webhdfs&lt;/code&gt; URL schemes. It tests the latter two using both the database and session user.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/hadoop-integration/configuring-hdfs-access/#&#34;&gt;Configuring HDFS access&lt;/a&gt; for information about configuration files and HadoopConfDir.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;HDFS_CLUSTER_CONFIG_CHECK( [&amp;#39;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&amp;#39; ] )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;what_to_test&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A string specifying the authorities or nameservices to test. For example, a value of &#39;nameservice=ns1&#39; means the function tests only the &amp;quot;ns1&amp;quot; cluster. If you specify both an authority and a nameservice, the authority must be a NameNode in the specified nameservice for the check to pass.
&lt;p&gt;If you do not specify this argument, the function tests all cluster configurations found in HadoopConfDir.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example tests all clusters.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HDFS_CLUSTER_CONFIG_CHECK();

-- hdfs_cluster_config_check --

    Hadoop Conf Path : [/conf/hadoop_conf]
  [OK] HadoopConfDir verified on all nodes
    Connection Timeout (seconds) : [60]
    Token Refresh Frequency (seconds) : [0]
    HadoopFSBlockSizeBytes (MiB) : [64]

  [OK] Found [1] hadoop cluster configurations

------------- Cluster 1 -------------
    Is DefaultFS : [true]
    Nameservice : [vmns]
    Namenodes : [node1.example.com:8020, node2.example.com:8020]
    High Availability : [true]
    RPC Encryption : [false]
    Kerberos Authentication : [true]
    HTTPS Only : [false]
  [INFO] Checking connections to [hdfs:///]
    vertica : [OK]
    dbuser : [OK]

  [INFO] Checking connections to [http://node1.example.com:50070]
  [INFO] Node is in standby
  [INFO] Checking connections to [http://node2.example.com:50070]
  [OK] Can make authenticated external curl connection
  [INFO] Checking webhdfs
    vertica : [OK]
    USER : [OK]

  [!] hdfs_cluster_config_check : [PASS]
&lt;/code&gt;&lt;/pre&gt;
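&lt;p&gt;To limit the check to a single cluster, name its nameservice in the argument. For the configuration shown above, the following call would test only the &amp;quot;vmns&amp;quot; nameservice; the output takes the same form as the full check:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HDFS_CLUSTER_CONFIG_CHECK(&amp;#39;nameservice=vmns&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;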
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: KERBEROS_HDFS_CONFIG_CHECK</title>
      <link>/en/sql-reference/functions/hadoop-functions/kerberos-hdfs-config-check/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/kerberos-hdfs-config-check/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition deprecated&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Deprecated&lt;/h4&gt;

This function is deprecated and will be removed in a future release. Instead, use &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/external-config-check/#&#34;&gt;EXTERNAL_CONFIG_CHECK&lt;/a&gt;.

&lt;/div&gt;
&lt;p&gt;Tests the Kerberos configuration of an OpenText™ Analytics Database cluster that uses HDFS. The function succeeds if it can use both the database keytab file and the session user to access HDFS and reports errors otherwise. This function is a more specific version of &lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/db-functions/kerberos-config-check/#&#34;&gt;KERBEROS_CONFIG_CHECK&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If the current session is not Kerberized, this function will not be able to use secured HDFS connections and will fail.&lt;/p&gt;
&lt;p&gt;You can call this function with arguments to specify an HDFS configuration to test, or without arguments. If you call it with no arguments, this function reads the HDFS configuration files and fails if it does not find them. See &lt;a href=&#34;../../../../en/hadoop-integration/configuring-hdfs-access/#&#34;&gt;Configuring HDFS access&lt;/a&gt;. If it finds configuration files, it tests all configured nameservices.&lt;/p&gt;
&lt;p&gt;The function performs the following tests, in order:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Are Kerberos services available?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Does a keytab file exist and are the Kerberos and HDFS configuration parameters set in the database?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Can OpenText™ Analytics Database read and invoke kinit with the keys to authenticate to HDFS and obtain the database Kerberos ticket?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Can OpenText™ Analytics Database perform &lt;code&gt;hdfs&lt;/code&gt; and &lt;code&gt;webhdfs&lt;/code&gt; operations using both the database Kerberos ticket and user-forwardable tickets for the current session?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Can OpenText™ Analytics Database connect to HiveServer2? (This function does not support WebHCat.)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If any test fails, the function returns a descriptive error message.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;KERBEROS_HDFS_CONFIG_CHECK( [&amp;#39;&lt;span class=&#34;code-variable&#34;&gt;hdfsHost&lt;/span&gt;:&lt;span class=&#34;code-variable&#34;&gt;hdfsPort&lt;/span&gt;&amp;#39;,
  &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;webhdfsHost&lt;/span&gt;:&lt;span class=&#34;code-variable&#34;&gt;webhdfsPort&lt;/span&gt;&amp;#39;, &amp;#39;&lt;span class=&#34;code-variable&#34;&gt;webhcatHost&lt;/span&gt;&amp;#39; ] )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;arguments&#34;&gt;Arguments&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;hdfsHost&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;hdfsPort&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;The hostname or IP address and port of the HDFS NameNode. The database uses this server to access data that is specified with &lt;code&gt;hdfs&lt;/code&gt; URLs. If the value is &#39; &#39;, the function skips this part of the check.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;webhdfsHost&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;webhdfsPort&lt;/span&gt;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;The hostname or IP address and port of the WebHDFS server. The database uses this server to access data that is specified with &lt;code&gt;webhdfs&lt;/code&gt; URLs. If the value is &#39; &#39;, the function skips this part of the check.&lt;/dd&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;webhcatHost&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Pass any value in this position. WebHCat is deprecated and this value is ignored but must be present.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
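&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following sketches only illustrate the argument forms described above; the host names and ports are placeholders, not values from a real cluster. The first call reads the HDFS configuration files and tests all configured nameservices. The second names an HDFS NameNode and a WebHDFS server explicitly and passes a throwaway value in the ignored WebHCat position:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT KERBEROS_HDFS_CONFIG_CHECK();

=&amp;gt; SELECT KERBEROS_HDFS_CONFIG_CHECK(&amp;#39;namenode.example.com:8020&amp;#39;,
   &amp;#39;namenode.example.com:50070&amp;#39;, &amp;#39;webhcat&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;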

      </description>
    </item>
    
    <item>
      <title>Sql-Reference: SYNC_WITH_HCATALOG_SCHEMA</title>
      <link>/en/sql-reference/functions/hadoop-functions/sync-with-hcatalog-schema/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/sync-with-hcatalog-schema/</guid>
      <description>
        
        
        &lt;p&gt;Copies the structure of a Hive database schema available through the HCatalog Connector to an OpenText™ Analytics Database schema. If the HCatalog schema and the target database schema have matching table names, SYNC_WITH_HCATALOG_SCHEMA overwrites the database tables.&lt;/p&gt;
&lt;p&gt;This function can synchronize the HCatalog schema directly. In this case, call it with the same schema name for the &lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt; and &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt; parameters. The function can also synchronize a different schema to the HCatalog schema.&lt;/p&gt;
&lt;p&gt;If you change the settings of &lt;a href=&#34;../../../../en/sql-reference/config-parameters/hadoop-parameters/#HCatalog&#34;&gt;HCatalog Connector configuration parameters&lt;/a&gt;, you must call this function again.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;SYNC_WITH_HCATALOG_SCHEMA( &lt;span class=&#34;code-variable&#34;&gt;vertica_schema&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;hcatalog_schema&lt;/span&gt;, [&lt;span class=&#34;code-variable&#34;&gt;drop_non_existent&lt;/span&gt;] )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;parameters&#34;&gt;Parameters&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The target database schema to store the copied HCatalog schema&#39;s metadata. This can be the same schema as &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;, or it can be a separate one created with &lt;a href=&#34;../../../../en/sql-reference/statements/create-statements/create-schema/#&#34;&gt;CREATE SCHEMA&lt;/a&gt;.

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

Do not use the database schema to store other data.

&lt;/div&gt;&lt;/dd&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The HCatalog schema to copy, created with &lt;a href=&#34;../../../../en/sql-reference/statements/create-statements/create-hcatalog-schema/#&#34;&gt;CREATE HCATALOG SCHEMA&lt;/a&gt;&lt;/dd&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;drop_non_existent&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;If &lt;code&gt;true&lt;/code&gt;, drop any tables in &lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt; that do not correspond to a table in &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;Non-superuser: CREATE privileges on &lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Users also require access to Hive data, one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;USAGE permissions on &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;, if Hive does not use an authorization service to manage access.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Permission through an authorization service (Sentry or Ranger), and access to the underlying files in HDFS. (Sentry can provide that access through ACL synchronization.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;dbadmin user privileges, with or without an authorization service.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;data-type-matching&#34;&gt;Data type matching&lt;/h2&gt;
&lt;p&gt;Hive STRING and BINARY data types are matched in OpenText™ Analytics Database to the VARCHAR(65000) and VARBINARY(65000) types. Adjust the data types with &lt;a href=&#34;../../../../en/sql-reference/statements/alter-statements/alter-table/#&#34;&gt;ALTER TABLE&lt;/a&gt; as needed after creating the schema. The maximum size of a VARCHAR or VARBINARY in the database is 65000, but you can use LONG VARCHAR and LONG VARBINARY to specify larger values.&lt;/p&gt;
&lt;p&gt;Hive and the database define string length in different ways. In Hive, the length is the number of characters; in the database, it is the number of bytes. Thus, a character encoding that uses more than one byte, such as Unicode, can cause mismatches between the two. To avoid data truncation, set values in the database based on bytes, not characters.&lt;/p&gt;
&lt;p&gt;If data size exceeds the column size, the database logs an event at read time in the QUERY_EVENTS system table.&lt;/p&gt;

&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example uses SYNC_WITH_HCATALOG_SCHEMA to synchronize an HCatalog schema named hcat:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE HCATALOG SCHEMA hcat WITH hostname=&amp;#39;hcathost&amp;#39; HCATALOG_SCHEMA=&amp;#39;default&amp;#39;
   HCATALOG_USER=&amp;#39;hcatuser&amp;#39;;
CREATE SCHEMA
=&amp;gt; SELECT sync_with_hcatalog_schema(&amp;#39;hcat&amp;#39;, &amp;#39;hcat&amp;#39;);
sync_with_hcatalog_schema
----------------------------------------
Schema hcat synchronized with hcat
tables in hcat = 56
tables altered in hcat = 0
tables created in hcat = 56
stale tables in hcat = 0
table changes erred in hcat = 0
(1 row)

=&amp;gt; -- Use vsql&amp;#39;s \d command to describe a table in the synced schema

=&amp;gt; \d hcat.messages
List of Fields by Tables
  Schema   |   Table  | Column  |      Type      | Size  | Default | Not Null | Primary Key | Foreign Key
-----------+----------+---------+----------------+-------+---------+----------+-------------+-------------
hcat       | messages | id      | int            |     8 |         | f        | f           |
hcat       | messages | userid  | varchar(65000) | 65000 |         | f        | f           |
hcat       | messages | &amp;#34;time&amp;#34;  | varchar(65000) | 65000 |         | f        | f           |
hcat       | messages | message | varchar(65000) | 65000 |         | f        | f           |
(4 rows)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following example uses SYNC_WITH_HCATALOG_SCHEMA followed by ALTER TABLE to adjust a column value:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE HCATALOG SCHEMA hcat WITH hostname=&amp;#39;hcathost&amp;#39; HCATALOG_SCHEMA=&amp;#39;default&amp;#39;
-&amp;gt; HCATALOG_USER=&amp;#39;hcatuser&amp;#39;;
CREATE SCHEMA
=&amp;gt; SELECT sync_with_hcatalog_schema(&amp;#39;hcat&amp;#39;, &amp;#39;hcat&amp;#39;);
...
=&amp;gt; ALTER TABLE hcat.t ALTER COLUMN a1 SET DATA TYPE long varchar(1000000);
=&amp;gt; ALTER TABLE hcat.t ALTER COLUMN a2 SET DATA TYPE long varbinary(1000000);
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following example uses SYNC_WITH_HCATALOG_SCHEMA with a local (non-HCatalog) schema:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE HCATALOG SCHEMA hcat WITH hostname=&amp;#39;hcathost&amp;#39; HCATALOG_SCHEMA=&amp;#39;default&amp;#39;
-&amp;gt; HCATALOG_USER=&amp;#39;hcatuser&amp;#39;;
CREATE SCHEMA
=&amp;gt; CREATE SCHEMA hcat_local;
CREATE SCHEMA
=&amp;gt; SELECT sync_with_hcatalog_schema(&amp;#39;hcat_local&amp;#39;, &amp;#39;hcat&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: SYNC_WITH_HCATALOG_SCHEMA_TABLE</title>
      <link>/en/sql-reference/functions/hadoop-functions/sync-with-hcatalog-schema-table/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/sync-with-hcatalog-schema-table/</guid>
      <description>
        
        
        &lt;p&gt;Copies the structure of a single table in a Hive database schema available through the HCatalog Connector to an OpenText™ Analytics Database table.&lt;/p&gt;
&lt;p&gt;This function can synchronize the HCatalog schema directly. In this case, call it with the same schema name for the &lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt; and &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt; parameters. The function can also synchronize a different schema to the HCatalog schema.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;SYNC_WITH_HCATALOG_SCHEMA_TABLE( &lt;span class=&#34;code-variable&#34;&gt;vertica_schema&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;hcatalog_schema&lt;/span&gt;, &lt;span class=&#34;code-variable&#34;&gt;table_name&lt;/span&gt; )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;parameters&#34;&gt;Parameters&lt;/h2&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The existing OpenText™ Analytics Database schema to store the copied HCatalog schema&#39;s metadata. This can be the same schema as &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;, or it can be a separate one created with &lt;a href=&#34;../../../../en/sql-reference/statements/create-statements/create-schema/#&#34;&gt;CREATE SCHEMA&lt;/a&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The HCatalog schema to copy, created with &lt;a href=&#34;../../../../en/sql-reference/statements/create-statements/create-hcatalog-schema/#&#34;&gt;CREATE HCATALOG SCHEMA&lt;/a&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;em&gt;&lt;code&gt;table_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The table in &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt; to copy. If &lt;em&gt;&lt;code&gt;table_name&lt;/code&gt;&lt;/em&gt; already exists in &lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt;, the function overwrites it.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;Non-superuser: CREATE privileges on &lt;em&gt;&lt;code&gt;vertica_schema&lt;/code&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Users also require access to Hive data, one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;USAGE permissions on &lt;em&gt;&lt;code&gt;hcatalog_schema&lt;/code&gt;&lt;/em&gt;, if Hive does not use an authorization service to manage access.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Permission through an authorization service (Sentry or Ranger), and access to the underlying files in HDFS. (Sentry can provide that access through ACL synchronization.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;dbadmin user privileges, with or without an authorization service.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;data-type-matching&#34;&gt;Data type matching&lt;/h2&gt;
&lt;p&gt;Hive STRING and BINARY data types are matched in OpenText™ Analytics Database to the VARCHAR(65000) and VARBINARY(65000) types. Adjust the data types with &lt;a href=&#34;../../../../en/sql-reference/statements/alter-statements/alter-table/#&#34;&gt;ALTER TABLE&lt;/a&gt; as needed after creating the schema. The maximum size of a VARCHAR or VARBINARY in the database is 65000, but you can use LONG VARCHAR and LONG VARBINARY to specify larger values.&lt;/p&gt;
&lt;p&gt;Hive and the database define string length in different ways. In Hive, the length is the number of characters; in the database, it is the number of bytes. Thus, a character encoding that uses more than one byte, such as Unicode, can cause mismatches between the two. To avoid data truncation, set values in the database based on bytes, not characters.&lt;/p&gt;
&lt;p&gt;If data size exceeds the column size, the database logs an event at read time in the QUERY_EVENTS system table.&lt;/p&gt;
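&lt;p&gt;As with &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/sync-with-hcatalog-schema/#&#34;&gt;SYNC_WITH_HCATALOG_SCHEMA&lt;/a&gt;, you can widen a copied column after synchronizing; the schema, table, and column names below are placeholders:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER TABLE hcat_local.t ALTER COLUMN a1 SET DATA TYPE long varchar(1000000);
&lt;/code&gt;&lt;/pre&gt;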

&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example uses SYNC_WITH_HCATALOG_SCHEMA_TABLE to synchronize the &amp;quot;nation&amp;quot; table:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE SCHEMA hcat_local;
CREATE SCHEMA

=&amp;gt; CREATE HCATALOG SCHEMA hcat WITH hostname=&amp;#39;hcathost&amp;#39; HCATALOG_SCHEMA=&amp;#39;hcat&amp;#39;
   HCATALOG_USER=&amp;#39;hcatuser&amp;#39;;
CREATE SCHEMA

=&amp;gt; SELECT sync_with_hcatalog_schema_table(&amp;#39;hcat_local&amp;#39;, &amp;#39;hcat&amp;#39;, &amp;#39;nation&amp;#39;);
sync_with_hcatalog_schema_table
-----------------------------------------------------------------------------
    Schema hcat_local synchronized with hcat for table nation
    table nation is created in schema hcat_local
    (1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following example shows the behavior if the &amp;quot;nation&amp;quot; table already exists in the local schema:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT sync_with_hcatalog_schema_table(&amp;#39;hcat_local&amp;#39;,&amp;#39;hcat&amp;#39;,&amp;#39;nation&amp;#39;);
sync_with_hcatalog_schema_table
-----------------------------------------------------------------------------
    Schema hcat_local synchronized with hcat for table nation
    table nation is altered in schema hcat_local
    (1 row)
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Sql-Reference: VERIFY_HADOOP_CONF_DIR</title>
      <link>/en/sql-reference/functions/hadoop-functions/verify-hadoop-conf-dir/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/functions/hadoop-functions/verify-hadoop-conf-dir/</guid>
      <description>
        
        
        &lt;p&gt;Verifies that the Hadoop configuration that is used to access HDFS is valid on all OpenText™ Analytics Database nodes. The configuration is valid if:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;all required configuration files are found on the path defined by the HadoopConfDir configuration parameter&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;all properties needed by the database are set in those files&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This function does not attempt to validate the settings of those properties; it only verifies that they have values.&lt;/p&gt;
&lt;p&gt;It is possible for Hadoop configuration to be valid on some nodes and invalid on others. The function reports a validation failure if the value is invalid on any node; the rest of the output reports the details.&lt;/p&gt;
&lt;p&gt;This is a meta-function. You must call meta-functions in a top-level &lt;a href=&#34;../../../../en/sql-reference/statements/select/#&#34;&gt;SELECT&lt;/a&gt; statement.&lt;/p&gt;

&lt;h2 id=&#34;behavior-type&#34;&gt;Behavior type&lt;/h2&gt;
&lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/volatile-functions/&#34; title=&#34;&#34;&gt;Volatile&lt;/a&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;VERIFY_HADOOP_CONF_DIR( )
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;parameters&#34;&gt;Parameters&lt;/h2&gt;
&lt;p&gt;This function has no parameters.&lt;/p&gt;
&lt;h2 id=&#34;privileges&#34;&gt;Privileges&lt;/h2&gt;
&lt;p&gt;This function does not require privileges.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example shows the results when the Hadoop configuration is valid.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT VERIFY_HADOOP_CONF_DIR();
    verify_hadoop_conf_dir
-------------------------------------------------------------------
Validation Success
v_vmart_node0001: HadoopConfDir [PG_TESTOUT/config] is valid
v_vmart_node0002: HadoopConfDir [PG_TESTOUT/config] is valid
v_vmart_node0003: HadoopConfDir [PG_TESTOUT/config] is valid
v_vmart_node0004: HadoopConfDir [PG_TESTOUT/config] is valid
    (1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the following example, the Hadoop configuration is valid on one node, but on other nodes a needed value is missing.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT VERIFY_HADOOP_CONF_DIR();
    verify_hadoop_conf_dir
-------------------------------------------------------------------
Validation Failure
v_vmart_node0001: HadoopConfDir [PG_TESTOUT/test_configs/config] is valid
v_vmart_node0002: No fs.defaultFS parameter found in config files in [PG_TESTOUT/config]
v_vmart_node0003: No fs.defaultFS parameter found in config files in [PG_TESTOUT/config]
v_vmart_node0004: No fs.defaultFS parameter found in config files in [PG_TESTOUT/config]
    (1 row)
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
  </channel>
</rss>
