<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Proxy users and delegation tokens</title>
    <link>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/</link>
    <description>Recent content in Proxy users and delegation tokens on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Hadoop-Integration: User impersonation (doAs)</title>
      <link>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/user-impersonation-doas/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/user-impersonation-doas/</guid>
      <description>
        
        
        &lt;p&gt;You can use user impersonation to access data in an HDFS cluster from OpenText™ Analytics Database. This approach is called &amp;quot;doAs&amp;quot; (for &amp;quot;do as&amp;quot;) because the database uses a single proxy user to act on behalf of another (Hadoop) user. The impersonated Hadoop user does not need to also be a database user.&lt;/p&gt;
&lt;p&gt;In the following illustration, Alice is a Hadoop user but not a database user. She connects to the database as the proxy user, vertica-etl. In her session, the database obtains a delegation token (DT) on behalf of the doAs user (Alice), and uses that delegation token to access HDFS.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/hadoop/do-as.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;p&gt;You can use doAs with or without Kerberos, so long as HDFS and the database use the same security model: if HDFS uses Kerberos, then the database must use Kerberos too.&lt;/p&gt;
&lt;h2 id=&#34;user-configuration&#34;&gt;User configuration&lt;/h2&gt;
&lt;p&gt;The Hadoop administrator must create a &lt;a href=&#34;https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Superusers.html&#34;&gt;proxy user&lt;/a&gt; and allow it to access HDFS on behalf of other users. Set values in core-site.xml as in the following example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;property&amp;gt;
  &amp;lt;name&amp;gt;hadoop.proxyuser.vertica-etl.users&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;*&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
  &amp;lt;name&amp;gt;hadoop.proxyuser.vertica-etl.hosts&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;*&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the database, create a corresponding user.&lt;/p&gt;
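&lt;p&gt;For example, the following statement creates a database user matching the proxy user. This is a minimal sketch; you must also grant the user whatever privileges your ETL workflow needs:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE USER &amp;#34;vertica-etl&amp;#34;;
&lt;/code&gt;&lt;/pre&gt;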
&lt;h2 id=&#34;session-configuration&#34;&gt;Session configuration&lt;/h2&gt;
&lt;p&gt;To make requests on behalf of a Hadoop user, first set the &lt;a href=&#34;../../../../en/sql-reference/config-parameters/hadoop-parameters/#HadoopImpersonationConfig&#34;&gt;HadoopImpersonationConfig&lt;/a&gt; session parameter to specify the user and HDFS cluster. The database will access HDFS as that user until the session ends or you change the parameter.&lt;/p&gt;
&lt;p&gt;The value of this session parameter is a collection of JSON objects. Each object specifies an HDFS cluster and a Hadoop user. For the cluster, you can specify either a name service or an individual name node. If you are using an HA name node, you must either use a name service or specify all name nodes. &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/hadoopimpersonationconfig-format/#&#34;&gt;HadoopImpersonationConfig format&lt;/a&gt; describes the full JSON syntax.&lt;/p&gt;
&lt;p&gt;The following example shows access on behalf of two different users. The users &amp;quot;stephanie&amp;quot; and &amp;quot;bob&amp;quot; are Hadoop users, not database users. &amp;quot;vertica-etl&amp;quot; is a database user.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U vertica-etl

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig = &amp;#39;[{&amp;#34;nameservice&amp;#34;:&amp;#34;hadoopNS&amp;#34;, &amp;#34;doAs&amp;#34;:&amp;#34;stephanie&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs:///user/stephanie/nation.dat&amp;#39;;

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig = &amp;#39;[{&amp;#34;nameservice&amp;#34;:&amp;#34;hadoopNS&amp;#34;, &amp;#34;doAs&amp;#34;:&amp;#34;bob&amp;#34;}, {&amp;#34;authority&amp;#34;:&amp;#34;hadoop2:50070&amp;#34;, &amp;#34;doAs&amp;#34;:&amp;#34;bob&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs:///user/bob/nation.dat&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The database uses Hadoop delegation tokens, obtained from the name node, to impersonate Hadoop users. In a long-running session, a token could expire. The database attempts to renew tokens automatically; see &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/token-expiration/#&#34;&gt;Token expiration&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;testing-the-configuration&#34;&gt;Testing the configuration&lt;/h2&gt;
&lt;p&gt;You can use the &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/#&#34;&gt;HADOOP_IMPERSONATION_CONFIG_CHECK&lt;/a&gt; function to test your HDFS delegation tokens and &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hcatalogconnector-config-check/#&#34;&gt;HCATALOGCONNECTOR_CONFIG_CHECK&lt;/a&gt; to test your HCatalog Connector delegation token.&lt;/p&gt;
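&lt;p&gt;For example, after setting HadoopImpersonationConfig in the session, a minimal check looks like this (a sketch; the function reports whether the database can use each configured doAs user or token):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HADOOP_IMPERSONATION_CONFIG_CHECK();
&lt;/code&gt;&lt;/pre&gt;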

      </description>
    </item>
    
    <item>
      <title>Hadoop-Integration: Bring your own delegation token</title>
      <link>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/bring-your-own-delegation-token/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/bring-your-own-delegation-token/</guid>
      <description>
        
        
        &lt;p&gt;Instead of creating a proxy user and giving it access to HDFS for use with doAs, you can give OpenText™ Analytics Database a Hadoop delegation token to use. You must obtain this delegation token from the Hadoop name node. In this model, security is handled entirely on the Hadoop side, with the database just passing along a token. The database may or may not be Kerberized.&lt;/p&gt;
&lt;p&gt;A typical workflow is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In an ETL front end, a user submits a query.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The ETL system uses authentication and authorization services to verify that the user has sufficient permission to run the query.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The ETL system requests a delegation token for the user from the name node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The ETL system makes a client connection to the database, sets the delegation token for the session, and runs the query.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When using a delegation token, clients can connect as any database user. No proxy user is required.&lt;/p&gt;
&lt;p&gt;In the following illustration, Bob has a Hadoop-issued delegation token. He connects to the database and the database uses that delegation token to access files in HDFS.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;../../../../images/hadoop/delegation-toknes.png&#34; alt=&#34;&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;session-configuration&#34;&gt;Session configuration&lt;/h2&gt;
&lt;p&gt;Set the &lt;a href=&#34;../../../../en/sql-reference/config-parameters/hadoop-parameters/#HadoopImpersonationConfig&#34;&gt;HadoopImpersonationConfig&lt;/a&gt; session parameter to specify the delegation token and HDFS cluster. The database will access HDFS using that delegation token until the session ends, the token expires, or you change the parameter.&lt;/p&gt;
&lt;p&gt;The value of this session parameter is a collection of JSON objects. Each object specifies a delegation token (&amp;quot;token&amp;quot;) in WebHDFS format and an HDFS name service or name node. &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/hadoopimpersonationconfig-format/#&#34;&gt;HadoopImpersonationConfig format&lt;/a&gt; describes the full JSON syntax.&lt;/p&gt;
&lt;p&gt;The following example shows access on behalf of two different users. The users &amp;quot;stephanie&amp;quot; and &amp;quot;bob&amp;quot; are Hadoop users, not database users. &amp;quot;dbuser1&amp;quot; is a database user with no special privileges.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U dbuser1

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig =&amp;#39;[{&amp;#34;authority&amp;#34;:&amp;#34;hadoop1:50070&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;JAAGZGJldGwxBmRiZXRsMQCKAWDXJgB9igFg-zKEfY4gao4BmhSJYtXiWqrhBHbbUn4VScNg58HWQxJXRUJIREZTIGRlbGVnYXRpb24RMTAuMjAuMTAwLjU0OjgwMjA&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs:///user/stephanie/nation.dat&amp;#39;;

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig =&amp;#39;[{&amp;#34;authority&amp;#34;:&amp;#34;hadoop1:50070&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;HgADdG9tA3RvbQCKAWDXJgAoigFg-zKEKI4gaI4BmhRoOUpq_jPxrVhZ1NSMnodAQnhUthJXRUJIREZTIGRlbGVnYXRpb24RMTAuMjAuMTAwLjU0OjgwMjA&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs:///user/bob/nation.dat&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can use the &lt;a href=&#34;https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#Delegation+Token+Operations&#34;&gt;WebHDFS REST API&lt;/a&gt; to get delegation tokens:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ curl -s --noproxy &amp;#34;*&amp;#34; --negotiate -u: -X GET &amp;#34;http://hadoop1:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The database does not, and cannot, renew delegation tokens when they expire. You must either keep sessions shorter than the token lifetime or implement a renewal scheme.&lt;/p&gt;
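&lt;p&gt;One possible renewal scheme is to renew the token periodically through the same WebHDFS REST API before it expires. The following is a sketch, assuming the token string returned by the GETDELEGATIONTOKEN call; renewal extends the expiration of the existing token, so you do not need to reset the session parameter:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ curl -s --noproxy &amp;#34;*&amp;#34; --negotiate -u: -X PUT &amp;#34;http://hadoop1:50070/webhdfs/v1/?op=RENEWDELEGATIONTOKEN&amp;amp;token=&amp;lt;token&amp;gt;&amp;#34;
&lt;/code&gt;&lt;/pre&gt;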
&lt;h2 id=&#34;delegation-tokens-and-the-hcatalog-connector&#34;&gt;Delegation tokens and the HCatalog Connector&lt;/h2&gt;
&lt;p&gt;HiveServer2 uses a different format for delegation tokens. To use the HCatalog Connector, therefore, you must set two delegation tokens: one for data access as usual (a &amp;quot;nameservice&amp;quot; or &amp;quot;authority&amp;quot; object) and one for HiveServer2 (a &amp;quot;schema&amp;quot; object). The HCatalog Connector uses the schema token to access metadata and the data token to access data. The schema name is the same Hive schema you specified in CREATE HCATALOG SCHEMA. The following example shows how to use these two delegation tokens.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U dbuser1

-- set delegation token for user and HiveServer2
=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig=&amp;#39;[
     {&amp;#34;nameservice&amp;#34;:&amp;#34;hadoopNS&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;JQAHcmVsZWFzZQdyZWxlYXNlAIoBYVJKrYSKAWF2VzGEjgmzj_IUCIrI9b8Dqu6awFTHk5nC-fHB8xsSV0VCSERGUyBkZWxlZ2F0aW9uETEwLjIwLjQyLjEwOTo4MDIw&amp;#34;},
     {&amp;#34;schema&amp;#34;:&amp;#34;access&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;UwAHcmVsZWFzZQdyZWxlYXNlL2hpdmUvZW5nLWc5LTEwMC52ZXJ0aWNhY29ycC5jb21AVkVSVElDQUNPUlAuQ09NigFhUkmyTooBYXZWNk4BjgETFKN2xPURn19Yq9tf-0nekoD51TZvFUhJVkVfREVMRUdBVElPTl9UT0tFThZoaXZlc2VydmVyMkNsaWVudFRva2Vu&amp;#34;}]&amp;#39;;

-- uses HiveServer2 token to get metadata
=&amp;gt; CREATE HCATALOG SCHEMA access WITH hcatalog_schema &amp;#39;access&amp;#39;;

-- uses both tokens
=&amp;gt; SELECT * FROM access.t1;

-- uses only HiveServer2 token
=&amp;gt; SELECT * FROM hcatalog_tables;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;HiveServer2 does not provide a REST API for delegation tokens like WebHDFS does. See &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/getting-hiveserver2-delegation-token/#&#34;&gt;Getting a HiveServer2 delegation token&lt;/a&gt; for some tips.&lt;/p&gt;
&lt;h2 id=&#34;testing-the-configuration&#34;&gt;Testing the configuration&lt;/h2&gt;
&lt;p&gt;You can use the &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hadoop-impersonation-config-check/#&#34;&gt;HADOOP_IMPERSONATION_CONFIG_CHECK&lt;/a&gt; function to test your HDFS delegation tokens and &lt;a href=&#34;../../../../en/sql-reference/functions/hadoop-functions/hcatalogconnector-config-check/#&#34;&gt;HCATALOGCONNECTOR_CONFIG_CHECK&lt;/a&gt; to test your HCatalog Connector delegation token.&lt;/p&gt;
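&lt;p&gt;For example, after setting both tokens you can verify each one separately (a sketch; the output depends on your configuration):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT HADOOP_IMPERSONATION_CONFIG_CHECK();
=&amp;gt; SELECT HCATALOGCONNECTOR_CONFIG_CHECK();
&lt;/code&gt;&lt;/pre&gt;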

      </description>
    </item>
    
    <item>
      <title>Hadoop-Integration: Getting a HiveServer2 delegation token</title>
      <link>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/getting-hiveserver2-delegation-token/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/getting-hiveserver2-delegation-token/</guid>
      <description>
        
        
        &lt;p&gt;To access Hive metadata using HiveServer2, you need a special delegation token. (See &lt;a href=&#34;../../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/bring-your-own-delegation-token/#&#34;&gt;Bring your own delegation token&lt;/a&gt;.) HiveServer2 does not provide an easy way to get this token, unlike the REST API that grants HDFS (data) delegation tokens.&lt;/p&gt;
&lt;p&gt;The following utility code shows a way to get this token. You will need to modify this code for your own cluster; in particular, change the value of the &lt;code&gt;connectURL&lt;/code&gt; static.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;import java.io.FileWriter;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.io.Writer;
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.shims.Utils;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hive.jdbc.HiveConnection;
import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

public class JDBCTest {
  public static final String driverName = &amp;#34;org.apache.hive.jdbc.HiveDriver&amp;#34;;
  public static String connectURL = &amp;#34;jdbc:hive2://node2.cluster0.example.com:2181,node1.cluster0.example.com:2181,node3.cluster0.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2&amp;#34;;
  public static String schemaName = &amp;#34;hcat&amp;#34;;
  public static String verticaUser = &amp;#34;condor&amp;#34;;
  public static String proxyUser = &amp;#34;condor-2&amp;#34;;
  public static String krb5conf = &amp;#34;/home/server/kerberos/krb5.conf&amp;#34;;
  public static String realm = &amp;#34;EXAMPLE.COM&amp;#34;;
  public static String keytab = &amp;#34;/home/server/kerberos/kt.keytab&amp;#34;;

  public static void main(String[] args) {
    if (args.length &amp;lt; 7) {
      System.out.println(
          &amp;#34;Usage: JDBCTest &amp;lt;jdbc_url&amp;gt; &amp;lt;hive_schema&amp;gt; &amp;lt;kerberized_user&amp;gt; &amp;lt;proxy_user&amp;gt; &amp;lt;krb5_conf&amp;gt; &amp;lt;krb_realm&amp;gt; &amp;lt;krb_keytab&amp;gt;&amp;#34;);
      System.exit(1);
    }
    connectURL = args[0];
    schemaName = args[1];
    verticaUser = args[2];
    proxyUser = args[3];
    krb5conf = args[4];
    realm = args[5];
    keytab = args[6];

    System.out.println(&amp;#34;connectURL: &amp;#34; + connectURL);
    System.out.println(&amp;#34;schemaName: &amp;#34; + schemaName);
    System.out.println(&amp;#34;verticaUser: &amp;#34; + verticaUser);
    System.out.println(&amp;#34;proxyUser: &amp;#34; + proxyUser);
    System.out.println(&amp;#34;krb5conf: &amp;#34; + krb5conf);
    System.out.println(&amp;#34;realm: &amp;#34; + realm);
    System.out.println(&amp;#34;keytab: &amp;#34; + keytab);
    try {
      Class.forName(&amp;#34;org.apache.hive.jdbc.HiveDriver&amp;#34;);
      System.out.println(&amp;#34;Found HiveServer2 JDBC driver&amp;#34;);
    } catch (ClassNotFoundException e) {
      System.out.println(&amp;#34;Couldn&amp;#39;t find HiveServer2 JDBC driver&amp;#34;);
    }
    try {
      Configuration conf = new Configuration();
      System.setProperty(&amp;#34;java.security.krb5.conf&amp;#34;, krb5conf);
      conf.set(&amp;#34;hadoop.security.authentication&amp;#34;, &amp;#34;kerberos&amp;#34;);
      UserGroupInformation.setConfiguration(conf);
      dtTest();
    } catch (Throwable e) {
      Writer stackString = new StringWriter();
      e.printStackTrace(new PrintWriter(stackString));
      System.out.println(e);
      System.out.printf(&amp;#34;Error occurred when connecting to HiveServer2 with [%s]: %s\n%s\n&amp;#34;,
          new Object[] { connectURL, e.getMessage(), stackString.toString() });
    }
  }

  private static void dtTest() throws Exception {
    UserGroupInformation user = UserGroupInformation.loginUserFromKeytabAndReturnUGI(verticaUser + &amp;#34;@&amp;#34; + realm, keytab);
    user.doAs(new PrivilegedExceptionAction&amp;lt;Void&amp;gt;() {
      public Void run() throws Exception {
        System.out.println(&amp;#34;In doas: &amp;#34; + UserGroupInformation.getLoginUser());
        Connection con = DriverManager.getConnection(JDBCTest.connectURL);
        System.out.println(&amp;#34;Connected to HiveServer2&amp;#34;);
        JDBCTest.showUser(con);
        System.out.println(&amp;#34;Getting delegation token for user&amp;#34;);
        String token = ((HiveConnection) con).getDelegationToken(JDBCTest.proxyUser, &amp;#34;hive/_HOST@&amp;#34; + JDBCTest.realm);
        System.out.println(&amp;#34;Got token: &amp;#34; + token);
        System.out.println(&amp;#34;Closing original connection&amp;#34;);
        con.close();

        System.out.println(&amp;#34;Setting delegation token in UGI&amp;#34;);
        Utils.setTokenStr(Utils.getUGI(), token, &amp;#34;hiveserver2ClientToken&amp;#34;);
        con = DriverManager.getConnection(JDBCTest.connectURL + &amp;#34;;auth=delegationToken&amp;#34;);
        System.out.println(&amp;#34;Connected to HiveServer2 with delegation token&amp;#34;);
        JDBCTest.showUser(con);
        con.close();

        JDBCTest.writeDTJSON(token);

        return null;
      }
    });
  }

  private static void showUser(Connection con) throws Exception {
    String sql = &amp;#34;select current_user()&amp;#34;;
    Statement stmt = con.createStatement();
    ResultSet res = stmt.executeQuery(sql);
    StringBuilder result = new StringBuilder();
    while (res.next()) {
      result.append(res.getString(1));
    }
    System.out.println(&amp;#34;\tcurrent_user: &amp;#34; + result.toString());
  }

  private static void writeDTJSON(String token) {
    JSONArray arr = new JSONArray();
    JSONObject obj = new JSONObject();
    obj.put(&amp;#34;schema&amp;#34;, schemaName);
    obj.put(&amp;#34;token&amp;#34;, token);
    arr.add(obj);
    try (FileWriter fileWriter = new FileWriter(&amp;#34;hcat_delegation.json&amp;#34;)) {
      fileWriter.write(arr.toJSONString());
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Following is an example call and its output:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ java -cp hs2token.jar JDBCTest &amp;#39;jdbc:hive2://test.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM&amp;#39; &amp;#34;default&amp;#34; &amp;#34;testuser&amp;#34; &amp;#34;test&amp;#34; &amp;#34;/etc/krb5.conf&amp;#34; &amp;#34;EXAMPLE.COM&amp;#34; &amp;#34;/test/testuser.keytab&amp;#34;
connectURL: jdbc:hive2://test.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM
schemaName: default
verticaUser: testuser
proxyUser: test
krb5conf: /etc/krb5.conf
realm: EXAMPLE.COM
keytab: /test/testuser.keytab
Found HiveServer2 JDBC driver
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
In doas: testuser@EXAMPLE.COM (auth:KERBEROS)
Connected to HiveServer2
        current_user: testuser
Getting delegation token for user
Got token: JQAEdGVzdARoaXZlB3JlbGVhc2WKAWgvBOwzigFoUxFwMwKOAgMUHfqJ5ma7_27LiePN8C7MxJ682bsVSElWRV9ERUxFR0FUSU9OX1RPS0VOFmhpdmVzZXJ2ZXIyQ2xpZW50VG9rZW4
Closing original connection
Setting delegation token in UGI
Connected to HiveServer2 with delegation token
        current_user: testuser
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Hadoop-Integration: HadoopImpersonationConfig format</title>
      <link>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/hadoopimpersonationconfig-format/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/hadoopimpersonationconfig-format/</guid>
      <description>
        
        
        &lt;p&gt;The value of the &lt;a href=&#34;../../../../en/sql-reference/config-parameters/hadoop-parameters/#HadoopImpersonationConfig&#34;&gt;HadoopImpersonationConfig&lt;/a&gt; session parameter is a set of one or more JSON objects. Each object describes one doAs user or delegation token for one Hadoop destination.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[ { (&amp;#34;doAs&amp;#34; | &amp;#34;token&amp;#34;): &lt;span class=&#34;code-variable&#34;&gt;value&lt;/span&gt;,
    (&amp;#34;nameservice&amp;#34; | &amp;#34;authority&amp;#34; | &amp;#34;schema&amp;#34;): &lt;span class=&#34;code-variable&#34;&gt;value&lt;/span&gt;} [,...]
]
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;properties&#34;&gt;Properties&lt;/h2&gt;

&lt;table class=&#34;table table-bordered&#34;&gt;
&lt;tr&gt;&lt;th&gt;Property&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;doAs&lt;/code&gt;&lt;/td&gt;&lt;td&gt;The name of a Hadoop user to impersonate.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;token&lt;/code&gt;&lt;/td&gt;&lt;td&gt;A delegation token to use for HDFS access.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;nameservice&lt;/code&gt;&lt;/td&gt;&lt;td&gt;A Hadoop name service. All access to this name service uses the doAs user or delegation token.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;authority&lt;/code&gt;&lt;/td&gt;&lt;td&gt;A name node authority. All access to this authority uses the doAs user or delegation token. If the name node fails over to another name node, the doAs user or delegation token does &lt;em&gt;not&lt;/em&gt; automatically apply to the failover name node. If you are using an HA name node, use &lt;code&gt;nameservice&lt;/code&gt; instead of &lt;code&gt;authority&lt;/code&gt;, or include objects for every name node.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;schema&lt;/code&gt;&lt;/td&gt;&lt;td&gt;A Hive schema, for use with the HCatalog Connector. OpenText™ Analytics Database uses this object&#39;s doAs user or token to access Hive metadata only. For data access you must also specify a name service or authority object, as for all other data access.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;

&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;In the following example of doAs, Bob is a Hadoop user and vertica-etl is a Kerberized proxy user.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ kinit vertica-etl -kt /home/dbadmin/vertica-etl.keytab
$ vsql -U vertica-etl

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig = &amp;#39;[{&amp;#34;nameservice&amp;#34;:&amp;#34;hadoopNS&amp;#34;, &amp;#34;doAs&amp;#34;:&amp;#34;Bob&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs:///user/bob/nation.dat&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the following example, the current database user (it doesn&#39;t matter who that is) uses a Hadoop delegation token. This token belongs to Alice, but you never specify her user name here; the user name was specified when the delegation token was obtained from Hadoop.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U dbuser1

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig =&amp;#39;[{&amp;#34;nameservice&amp;#34;:&amp;#34;hadoopNS&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;JAAGZGJldGwxBmRiZXRsMQCKAWDXJgB9igFg-zKEfY4gao4BmhSJYtXiWqrhBHbbUn4VScNg58HWQxJXRUJIREZTIGRlbGVnYXRpb24RMTAuMjAuMTAwLjU0OjgwMjA&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs:///user/alice/nation.dat&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the following example, &amp;quot;authority&amp;quot; specifies the (single) name node on a Hadoop cluster that does not use high availability.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U dbuser1

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig =&amp;#39;[{&amp;#34;authority&amp;#34;:&amp;#34;hadoop1:50070&amp;#34;, &amp;#34;doAs&amp;#34;:&amp;#34;Stephanie&amp;#34;}]&amp;#39;;
=&amp;gt; COPY nation FROM &amp;#39;webhdfs://hadoop1:50070/user/stephanie/nation.dat&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To access data in Hive you need to specify two delegation tokens. The first, for a name service or authority, is for data access as usual and uses the WebHDFS token format. The second is for the HiveServer2 metadata for the schema; HiveServer2 uses its own delegation token format. The schema name is the Hive schema you specify with CREATE HCATALOG SCHEMA.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U dbuser1

-- set delegation token for user and HiveServer2
=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig=&amp;#39;[
     {&amp;#34;nameservice&amp;#34;:&amp;#34;hadoopNS&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;JQAHcmVsZWFzZQdyZWxlYXNlAIoBYVJKrYSKAWF2VzGEjgmzj_IUCIrI9b8Dqu6awFTHk5nC-fHB8xsSV0VCSERGUyBkZWxlZ2F0aW9uETEwLjIwLjQyLjEwOTo4MDIw&amp;#34;},
     {&amp;#34;schema&amp;#34;:&amp;#34;access&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;UwAHcmVsZWFzZQdyZWxlYXNlL2hpdmUvZW5nLWc5LTEwMC52ZXJ0aWNhY29ycC5jb21AVkVSVElDQUNPUlAuQ09NigFhUkmyTooBYXZWNk4BjgETFKN2xPURn19Yq9tf-0nekoD51TZvFUhJVkVfREVMRUdBVElPTl9UT0tFThZoaXZlc2VydmVyMkNsaWVudFRva2Vu&amp;#34;}]&amp;#39;;

-- uses HiveServer2 token to get metadata
=&amp;gt; CREATE HCATALOG SCHEMA access WITH hcatalog_schema &amp;#39;access&amp;#39;;

-- uses both tokens
=&amp;gt; SELECT * FROM access.t1;

-- uses only HiveServer2 token
=&amp;gt; SELECT * FROM hcatalog_tables;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each object in the HadoopImpersonationConfig collection specifies one connection to one Hadoop cluster. You can add as many connections as you like, including to more than one Hadoop cluster. The following example shows delegation tokens for two different Hadoop clusters. The database uses the correct token for each cluster when connecting.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vsql -U dbuser1

=&amp;gt; ALTER SESSION SET
   HadoopImpersonationConfig =&amp;#39;[
    {&amp;#34;nameservice&amp;#34;:&amp;#34;productionNS&amp;#34;,&amp;#34;token&amp;#34;:&amp;#34;JAAGZGJldGwxBmRiZXRsMQCKAWDXJgB9igFg-zKEfY4gao4BmhSJYtXiWqrhBHbbUn4VScNg58HWQxJXRUJIREZTIGRlbGVnYXRpb24RMTAuMjAuMTAwLjU0OjgwMjA&amp;#34;},
    {&amp;#34;nameservice&amp;#34;:&amp;#34;testNS&amp;#34;, &amp;#34;token&amp;#34;:&amp;#34;HQAHcmVsZWFzZQdyZWxlYXNlAIoBYVJKrYSKAWF2VzGEjgmzj_IUCIrI9b8Dqu6awFTHk5nC-fHB8xsSV0VCSERGUyBkZWxlZ2F0aW9uETEwLjIwLjQyLjEwOTo4MDIw&amp;#34;}]&amp;#39;;

=&amp;gt; COPY clicks FROM &amp;#39;webhdfs://productionNS/data/clickstream.dat&amp;#39;;
=&amp;gt; COPY testclicks FROM &amp;#39;webhdfs://testNS/data/clickstream.dat&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
  </channel>
</rss>
