<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – File systems and object stores</title>
    <link>/en/sql-reference/file-systems-and-object-stores/</link>
    <description>Recent content in File systems and object stores on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/sql-reference/file-systems-and-object-stores/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>SQL Reference: Azure Blob Storage object store</title>
      <link>/en/sql-reference/file-systems-and-object-stores/azure-blob-storage-object-store/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/file-systems-and-object-stores/azure-blob-storage-object-store/</guid>
      <description>
        
        
        &lt;p&gt;Azure has several interfaces for accessing data. OpenText™ Analytics Database reads, and always writes, Block Blobs in Azure Storage. The database can read external data created using ADLS Gen2, and data that the database exports can be read using ADLS Gen2.&lt;/p&gt;
&lt;h2 id=&#34;uri-format&#34;&gt;URI format&lt;/h2&gt;
&lt;p&gt;One of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;

&lt;code&gt;&lt;code&gt;azb://&lt;/code&gt;&lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt;&lt;code&gt;/&lt;/code&gt;&lt;em&gt;&lt;code&gt;container&lt;/code&gt;&lt;/em&gt;&lt;code&gt;/&lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;

&lt;code&gt;&lt;code&gt;azb://[&lt;/code&gt;&lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt;&lt;code&gt;@]&lt;/code&gt;&lt;em&gt;&lt;code&gt;host&lt;/code&gt;&lt;/em&gt;&lt;code&gt;[:&lt;/code&gt;&lt;em&gt;&lt;code&gt;port&lt;/code&gt;&lt;/em&gt;&lt;code&gt;]/&lt;/code&gt;&lt;em&gt;&lt;code&gt;container&lt;/code&gt;&lt;/em&gt;&lt;code&gt;/&lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the first form, the first token after the &#39;//&#39; is treated as the account name; for example, &#39;azb://myaccount/mycontainer/path&#39; uses the account &#39;myaccount&#39;. In the second form, you must specify the host explicitly and can optionally specify the account.&lt;/p&gt;
&lt;p&gt;The following rules apply to the second form:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If &lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt; is not specified, the first label of the host is used. For example, if the URI is &#39;azb://myaccount.blob.core.windows.net/mycontainer/my/object&#39;, then &#39;myaccount&#39; is used for &lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;If &lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt; is not specified and &lt;em&gt;&lt;code&gt;host&lt;/code&gt;&lt;/em&gt; has a single label and no port, the endpoint is &lt;em&gt;&lt;code&gt;host&lt;/code&gt;&lt;/em&gt;&lt;code&gt;.blob.core.windows.net&lt;/code&gt;. Otherwise, the endpoint is the host and port specified in the URI.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The protocol (HTTP or HTTPS) is specified in the AzureStorageEndpointConfig configuration parameter.&lt;/p&gt;
&lt;h2 id=&#34;authentication&#34;&gt;Authentication&lt;/h2&gt;
&lt;p&gt;If you are using Azure managed identities, no further configuration is needed in the database. If your Azure storage uses multiple managed identities, you must tag the one to be used. The database looks for an Azure tag with a key of VerticaManagedIdentityClientId, the value of which must be the client_id attribute of the managed identity to be used. If you update the Azure tag, call &lt;a href=&#34;../../../en/sql-reference/functions/management-functions/cloud-functions/azure-token-cache-clear/#&#34;&gt;AZURE_TOKEN_CACHE_CLEAR&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you are not using managed identities, use the AzureStorageCredentials configuration parameter to provide credentials to Azure. If loading data, you can set the parameter at the session level. If using Eon Mode communal storage on Azure, you must set this configuration parameter at the database level.&lt;/p&gt;
&lt;p&gt;In Azure, you must also grant the identities that the database uses access to the containers.&lt;/p&gt;
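&lt;p&gt;For example, after updating the Azure tag, you can clear the cached token so that the database picks up the new identity (a minimal sketch):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT AZURE_TOKEN_CACHE_CLEAR();
&lt;/code&gt;&lt;/pre&gt;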
&lt;h2 id=&#34;configuration-parameters&#34;&gt;Configuration parameters&lt;/h2&gt;
&lt;p&gt;The following database configuration parameters apply to the Azure blob file system. You can set parameters at different levels with the appropriate ALTER statement, such as &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/&#34;&gt;ALTER SESSION...SET PARAMETER&lt;/a&gt;. Query the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/config-parameters/#&#34;&gt;CONFIGURATION_PARAMETERS&lt;/a&gt; system table to determine what levels (node, session, user, database) are valid for a given parameter.&lt;/p&gt;
&lt;p&gt;For external tables using highly partitioned data in an object store, see the &lt;a href=&#34;../../../en/sql-reference/config-parameters/general-parameters/#ObjectStoreGlobStrategy&#34;&gt;ObjectStoreGlobStrategy&lt;/a&gt; configuration parameter and &lt;a href=&#34;../../../en/data-load/partitioned-file-paths/#Partitio&#34;&gt;Partitions on Object Stores&lt;/a&gt;.&lt;/p&gt;

&lt;dl&gt;
&lt;dt&gt;&lt;a name=&#34;AzureStorageCredentials&#34;&gt;&lt;/a&gt;AzureStorageCredentials&lt;/dt&gt;
&lt;dd&gt;Collection of JSON objects, each of which specifies connection credentials for one endpoint. This parameter takes precedence over Azure managed identities.
&lt;p&gt;The collection must contain at least one object and may contain more. Each object must specify at least one of &lt;code&gt;accountName&lt;/code&gt; or &lt;code&gt;blobEndpoint&lt;/code&gt;, and at least one of &lt;code&gt;accountKey&lt;/code&gt; or &lt;code&gt;sharedAccessSignature&lt;/code&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;accountName&lt;/code&gt;: If not specified, uses the label of &lt;code&gt;blobEndpoint&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;blobEndpoint&lt;/code&gt;: Host name with optional port (&lt;code&gt;host:port&lt;/code&gt;). If not specified, uses &lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt;&lt;code&gt;.blob.core.windows.net&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;accountKey&lt;/code&gt;: Access key for the account or endpoint.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sharedAccessSignature&lt;/code&gt;: Access token for finer-grained access control, if the Azure endpoint uses one.&lt;/li&gt;
&lt;/ul&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;a name=&#34;AzureStorageEndpointConfig&#34;&gt;&lt;/a&gt;AzureStorageEndpointConfig&lt;/dt&gt;
&lt;dd&gt;Collection of JSON objects, each of which specifies configuration elements for one endpoint. Each object must specify at least one of &lt;code&gt;accountName&lt;/code&gt; or &lt;code&gt;blobEndpoint&lt;/code&gt;.
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;accountName&lt;/code&gt;: If not specified, uses the label of &lt;code&gt;blobEndpoint&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;blobEndpoint&lt;/code&gt;: Host name with optional port (&lt;code&gt;host:port&lt;/code&gt;). If not specified, uses &lt;em&gt;&lt;code&gt;account&lt;/code&gt;&lt;/em&gt;&lt;code&gt;.blob.core.windows.net&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;protocol&lt;/code&gt;: HTTPS (default) or HTTP.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;isMultiAccountEndpoint&lt;/code&gt;: true if the endpoint supports multiple accounts, false otherwise (default is false). To use multiple-account access, you must include the account name in the URI. If a URI path contains an account, this value is assumed to be true unless explicitly set to false.&lt;/li&gt;
&lt;/ul&gt;
&lt;/dd&gt;
&lt;/dl&gt;

&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following examples use these values for the configuration parameters. AzureStorageCredentials contains sensitive information and is set at the session level in this example.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SESSION SET AzureStorageCredentials =
    &amp;#39;[{&amp;#34;accountName&amp;#34;: &amp;#34;myaccount&amp;#34;, &amp;#34;accountKey&amp;#34;: &amp;#34;REAL_KEY&amp;#34;},
      {&amp;#34;accountName&amp;#34;: &amp;#34;myaccount&amp;#34;, &amp;#34;blobEndpoint&amp;#34;: &amp;#34;localhost:8080&amp;#34;, &amp;#34;accountKey&amp;#34;: &amp;#34;TEST_KEY&amp;#34;}]&amp;#39;;

=&amp;gt; ALTER DATABASE default SET AzureStorageEndpointConfig =
    &amp;#39;[{&amp;#34;accountName&amp;#34;: &amp;#34;myaccount&amp;#34;, &amp;#34;blobEndpoint&amp;#34;: &amp;#34;localhost:8080&amp;#34;, &amp;#34;protocol&amp;#34;: &amp;#34;http&amp;#34;}]&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following example creates an external table using data from Azure. The URI specifies an account name of &amp;quot;myaccount&amp;quot;.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE EXTERNAL TABLE users (id INT, name VARCHAR(20))
    AS COPY FROM &amp;#39;azb://myaccount/mycontainer/my/object/*&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The database uses AzureStorageEndpointConfig and the account name to produce the following location for the files:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;https://myaccount.blob.core.windows.net/mycontainer/my/object/*
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Data is accessed using the REAL_KEY credential.&lt;/p&gt;
&lt;p&gt;If the URI in the COPY statement is instead &lt;code&gt;azb://myaccount.blob.core.windows.net/mycontainer/my/object&lt;/code&gt;, then the resulting location is &lt;code&gt;https://myaccount.blob.core.windows.net/mycontainer/my/object&lt;/code&gt;, again using the REAL_KEY credential.&lt;/p&gt;
&lt;p&gt;However, if the URI in the COPY statement is &lt;code&gt;azb://myaccount@localhost:8080/mycontainer/my/object&lt;/code&gt;, then the host and port specify a different endpoint: &lt;code&gt;http://localhost:8080/myaccount/mycontainer/my/object&lt;/code&gt;. This endpoint is configured to use a different credential, TEST_KEY.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>SQL Reference: Google Cloud Storage (GCS) object store</title>
      <link>/en/sql-reference/file-systems-and-object-stores/google-cloud-storage-gcs-object-store/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/file-systems-and-object-stores/google-cloud-storage-gcs-object-store/</guid>
      <description>
        
        
        &lt;p&gt;File system using the Google Cloud Storage platform.&lt;/p&gt;
&lt;h2 id=&#34;uri-format&#34;&gt;URI format&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&lt;code&gt;gs://&lt;/code&gt;&lt;em&gt;&lt;code&gt;bucket&lt;/code&gt;&lt;/em&gt;&lt;code&gt;/&lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;authentication&#34;&gt;Authentication&lt;/h2&gt;
&lt;p&gt;To access data in Google Cloud Storage (GCS), you must first complete the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a default project, obtain a developer key, and enable S3 interoperability mode as described in &lt;a href=&#34;https://cloud.google.com/storage/docs/migrating#migration-simple&#34;&gt;the GCS documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the GCSAuth configuration parameter as in the following example.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SESSION SET GCSAuth=&amp;#39;&lt;span class=&#34;code-variable&#34;&gt;id:secret&lt;/span&gt;&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;configuration-parameters&#34;&gt;Configuration parameters&lt;/h2&gt;
&lt;p&gt;The following database configuration parameters apply to the GCS file system. You can set parameters at different levels with the appropriate ALTER statement, such as &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/&#34;&gt;ALTER SESSION...SET PARAMETER&lt;/a&gt;. Query the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/config-parameters/#&#34;&gt;CONFIGURATION_PARAMETERS&lt;/a&gt; system table to determine what levels (node, session, user, database) are valid for a given parameter. For information about all parameters related to GCS, see &lt;a href=&#34;../../../en/sql-reference/config-parameters/google-cloud-storage-parameters/#&#34;&gt;Google Cloud Storage parameters&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For external tables using highly partitioned data in an object store, see the &lt;a href=&#34;../../../en/sql-reference/config-parameters/general-parameters/#ObjectStoreGlobStrategy&#34;&gt;ObjectStoreGlobStrategy&lt;/a&gt; configuration parameter and &lt;a href=&#34;../../../en/data-load/partitioned-file-paths/#Partitio&#34;&gt;Partitions on Object Stores&lt;/a&gt;.&lt;/p&gt;

&lt;dl&gt;
&lt;dt&gt;&lt;a name=&#34;GCSAuth&#34;&gt;&lt;/a&gt;GCSAuth&lt;/dt&gt;
&lt;dd&gt;An ID and secret key to authenticate to GCS. For extra security, do not store credentials in the database; instead, use &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/&#34;&gt;ALTER SESSION...SET PARAMETER&lt;/a&gt; to set this value for the current session only.&lt;/dd&gt;
&lt;dt&gt;&lt;a name=&#34;GCSEnableHttps&#34;&gt;&lt;/a&gt;GCSEnableHttps&lt;/dt&gt;
&lt;dd&gt;Boolean, whether to use the HTTPS protocol when connecting to GCS. Can be set only at the database level with &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-db/&#34;&gt;ALTER DATABASE...SET PARAMETER&lt;/a&gt;.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 1 (enabled)&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;a name=&#34;GCSEndpoint&#34;&gt;&lt;/a&gt;GCSEndpoint&lt;/dt&gt;
&lt;dd&gt;The connection endpoint address.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &lt;code&gt;storage.googleapis.com&lt;/code&gt;&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example loads data from GCS:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SESSION SET GCSAuth=&amp;#39;my_id:my_secret_key&amp;#39;;

=&amp;gt; COPY t FROM &amp;#39;gs://DataLake/clicks.parquet&amp;#39; PARQUET;
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>SQL Reference: HDFS file system</title>
      <link>/en/sql-reference/file-systems-and-object-stores/hdfs-file-system/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/file-systems-and-object-stores/hdfs-file-system/</guid>
      <description>
        
        
        &lt;p&gt;HDFS is the Hadoop Distributed File System. You can use the &lt;code&gt;webhdfs&lt;/code&gt; and &lt;code&gt;swebhdfs&lt;/code&gt; schemes to access data through the WebHDFS service. OpenText™ Analytics Database also supports the &lt;code&gt;hdfs&lt;/code&gt; scheme, which uses WebHDFS.&lt;/p&gt;
&lt;p&gt;If you specify a &lt;code&gt;webhdfs&lt;/code&gt; URI but the Hadoop HTTP policy (&lt;code&gt;dfs.http.policy&lt;/code&gt;) is set to HTTPS_ONLY, the database automatically uses &lt;code&gt;swebhdfs&lt;/code&gt; instead.&lt;/p&gt;
&lt;h2 id=&#34;uri-format&#34;&gt;URI format&lt;/h2&gt;
&lt;p&gt;URIs in the &lt;code&gt;webhdfs&lt;/code&gt;, &lt;code&gt;swebhdfs&lt;/code&gt;, and &lt;code&gt;hdfs&lt;/code&gt; schemes all have two formats, depending on whether you specify a name service or the host and port of a name node:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;[[s]web]hdfs://[&lt;/code&gt;&lt;em&gt;&lt;code&gt;nameservice&lt;/code&gt;&lt;/em&gt;&lt;code&gt;]/&lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;[[s]web]hdfs://&lt;/code&gt;&lt;em&gt;&lt;code&gt;namenode-host:port&lt;/code&gt;&lt;/em&gt;&lt;code&gt;/&lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Characters may be URL-encoded (%NN where NN is a two-digit hexadecimal number) but are not required to be, except that the &#39;%&#39; character must be encoded.&lt;/p&gt;
&lt;p&gt;To use the default name service specified in the HDFS configuration files, omit &lt;em&gt;&lt;code&gt;nameservice&lt;/code&gt;&lt;/em&gt;. Use this shorthand only for reading external data, not for creating a storage location.&lt;/p&gt;
&lt;p&gt;Always specify a name service or host explicitly when using the database with more than one HDFS cluster. The name service or host name must be globally unique. Using &lt;code&gt;[web]hdfs:///&lt;/code&gt; could produce unexpected results because the database uses the first value of &lt;code&gt;fs.defaultFS&lt;/code&gt; that it finds.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Authenti&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;authentication&#34;&gt;Authentication&lt;/h2&gt;
&lt;p&gt;The database can use Kerberos authentication with Cloudera or Hortonworks HDFS clusters. See &lt;a href=&#34;../../../en/hadoop-integration/accessing-kerberized-hdfs-data/#&#34;&gt;Accessing kerberized HDFS data&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For loading and exporting data, the database can access HDFS clusters protected by mTLS through the &lt;code&gt;swebhdfs&lt;/code&gt; scheme. You must create a certificate and key and set the &lt;a href=&#34;../../../en/sql-reference/config-parameters/hadoop-parameters/#WebhdfsClientCertConf&#34;&gt;WebhdfsClientCertConf&lt;/a&gt; configuration parameter.&lt;/p&gt;
&lt;p&gt;You can use &lt;a href=&#34;../../../en/sql-reference/statements/create-statements/create-key/#&#34;&gt;CREATE KEY&lt;/a&gt; and &lt;a href=&#34;../../../en/sql-reference/statements/create-statements/create-certificate/#&#34;&gt;CREATE CERTIFICATE&lt;/a&gt; to create temporary, session-scoped values if you specify the &lt;span class=&#34;sql&#34;&gt;TEMPORARY&lt;/span&gt; keyword. Temporary keys and certificates are stored in memory, not on disk.&lt;/p&gt;
&lt;p&gt;The WebhdfsClientCertConf configuration parameter holds client credentials for one or more HDFS clusters. The value is a JSON string listing name services or authorities and their corresponding keys. You can set the configuration parameter at the session or database level. Setting the parameter at the database level has the following additional requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a href=&#34;../../../en/sql-reference/config-parameters/general-parameters/#UseServerIdentityOverUserIdentity&#34;&gt;UseServerIdentityOverUserIdentity&lt;/a&gt; configuration parameter must be set to 1 (true).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The user must be dbadmin or must have access to the user storage location on HDFS.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following example shows how to use mTLS. The key and certificate values themselves are not shown, just the beginning and end markers:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE TEMPORARY KEY client_key TYPE &amp;#39;RSA&amp;#39;
   AS &amp;#39;-----BEGIN PRIVATE KEY-----...-----END PRIVATE KEY-----&amp;#39;;

=&amp;gt; CREATE TEMPORARY CERTIFICATE client_cert
   AS &amp;#39;-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----&amp;#39; key client_key;

=&amp;gt; ALTER SESSION SET WebhdfsClientCertConf =
   &amp;#39;[{&amp;#34;authority&amp;#34;: &amp;#34;my.hdfs.namenode1:50088&amp;#34;, &amp;#34;certName&amp;#34;: &amp;#34;client_cert&amp;#34;}]&amp;#39;;

=&amp;gt; COPY people FROM &amp;#39;swebhdfs://my.hdfs.namenode1:50088/path/to/file/1.txt&amp;#39;;
Rows Loaded
-------------
1
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To configure access to more than one HDFS cluster, define the keys and certificates and then include one object per cluster in the value of WebhdfsClientCertConf:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SESSION SET WebhdfsClientCertConf =
    &amp;#39;[{&amp;#34;authority&amp;#34; : &amp;#34;my.authority.com:50070&amp;#34;, &amp;#34;certName&amp;#34; : &amp;#34;myCert&amp;#34;},
      {&amp;#34;nameservice&amp;#34; : &amp;#34;prod&amp;#34;, &amp;#34;certName&amp;#34; : &amp;#34;prodCert&amp;#34;}]&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;configuration-parameters&#34;&gt;Configuration parameters&lt;/h2&gt;
&lt;p&gt;The following database configuration parameters apply to the HDFS file system. You can set parameters at different levels with the appropriate ALTER statement, such as &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/&#34;&gt;ALTER SESSION...SET PARAMETER&lt;/a&gt;. Query the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/config-parameters/#&#34;&gt;CONFIGURATION_PARAMETERS&lt;/a&gt; system table to determine what levels (node, session, user, database) are valid for a given parameter. For information about all parameters related to Hadoop, see &lt;a href=&#34;../../../en/sql-reference/config-parameters/hadoop-parameters/#&#34;&gt;Hadoop parameters&lt;/a&gt;.&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;EnableHDFSBlockInfoCache&lt;/dt&gt;
&lt;dd&gt;Boolean, whether to distribute block location metadata collected during planning on the initiator to all database nodes for execution, reducing name node contention. Disabled by default.&lt;/dd&gt;
&lt;dt&gt;HadoopConfDir&lt;/dt&gt;
&lt;dd&gt;Directory path containing the XML configuration files copied from Hadoop. The same path must be valid on every database node. The files are accessed by the Linux user under which the database server process runs.&lt;/dd&gt;
&lt;dt&gt;HadoopImpersonationConfig&lt;/dt&gt;
&lt;dd&gt;Session parameter specifying the delegation token or Hadoop user for HDFS access. See &lt;a href=&#34;../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/hadoopimpersonationconfig-format/#&#34;&gt;HadoopImpersonationConfig format&lt;/a&gt; for information about the value of this parameter and &lt;a href=&#34;../../../en/hadoop-integration/accessing-kerberized-hdfs-data/proxy-users-and-delegation-tokens/#&#34;&gt;Proxy users and delegation tokens&lt;/a&gt; for more general context.&lt;/dd&gt;
&lt;dt&gt;WebhdfsUseCanonicalHost&lt;/dt&gt;
&lt;dd&gt;This Boolean parameter controls the use of the canonical hostname during WebHDFS communication, including Kerberos authentication. It is disabled by default.&lt;/dd&gt;
&lt;dt&gt;WebhdfsClientCertConf&lt;/dt&gt;
&lt;dd&gt;mTLS configurations for accessing one or more WebHDFS servers, a JSON string. Each object must specify either a &lt;code&gt;nameservice&lt;/code&gt; or &lt;code&gt;authority&lt;/code&gt; field and a &lt;code&gt;certName&lt;/code&gt; field. See &lt;a href=&#34;#Authenti&#34;&gt;Authentication&lt;/a&gt;.&lt;/dd&gt;
&lt;/dl&gt;
&lt;h2 id=&#34;configuration-files&#34;&gt;Configuration files&lt;/h2&gt;
&lt;p&gt;The path specified in HadoopConfDir must include a directory containing the files listed in the following table. The database reads these files at database start time. If you do not set a value, the database looks for the files in /etc/hadoop/conf.&lt;/p&gt;
&lt;p&gt;If a property is not defined, the database uses the defaults shown in the table. If no default is specified for a property, the configuration files must specify a value.&lt;/p&gt;
&lt;table class=&#34;table table-bordered&#34;&gt;
&lt;tr&gt;&lt;th&gt;File&lt;/th&gt;&lt;th&gt;Properties&lt;/th&gt;&lt;th&gt;Default&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;core-site.xml&lt;/td&gt;&lt;td&gt;fs.defaultFS&lt;/td&gt;&lt;td&gt;none&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;(for doAs users:) hadoop.proxyuser.&lt;em&gt;&lt;code&gt;username&lt;/code&gt;&lt;/em&gt;.users&lt;/td&gt;&lt;td&gt;none&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;(for doAs users:) hadoop.proxyuser.&lt;em&gt;&lt;code&gt;username&lt;/code&gt;&lt;/em&gt;.hosts&lt;/td&gt;&lt;td&gt;none&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;hdfs-site.xml&lt;/td&gt;&lt;td&gt;dfs.client.failover.max.attempts&lt;/td&gt;&lt;td&gt;15&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;dfs.client.failover.sleep.base.millis&lt;/td&gt;&lt;td&gt;500&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;dfs.client.failover.sleep.max.millis&lt;/td&gt;&lt;td&gt;15000&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;(for HA NN:) dfs.nameservices&lt;/td&gt;&lt;td&gt;none&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;(WebHDFS:) dfs.namenode.http-address or dfs.namenode.https-address&lt;/td&gt;&lt;td&gt;none&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;(WebHDFS:) dfs.datanode.http.address or dfs.datanode.https.address&lt;/td&gt;&lt;td&gt;none&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;/td&gt;&lt;td&gt;(WebHDFS:) dfs.http.policy&lt;/td&gt;&lt;td&gt;HTTP_ONLY&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;If using High Availability (HA) Name Nodes, the individual name nodes must also be defined in hdfs-site.xml.&lt;/p&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

If you are using Eon Mode with communal storage on HDFS and you set dfs.encrypt.data.transfer, you must use the &lt;code&gt;swebhdfs&lt;/code&gt; scheme for communal storage.

&lt;/div&gt;
&lt;p&gt;To verify that the database can find configuration files in HadoopConfDir, use the &lt;a href=&#34;../../../en/sql-reference/functions/hadoop-functions/verify-hadoop-conf-dir/#&#34;&gt;VERIFY_HADOOP_CONF_DIR&lt;/a&gt; function.&lt;/p&gt;
&lt;p&gt;To test access through the &lt;code&gt;hdfs&lt;/code&gt; scheme, use the &lt;a href=&#34;../../../en/sql-reference/functions/hadoop-functions/hdfs-cluster-config-check/#&#34;&gt;HDFS_CLUSTER_CONFIG_CHECK&lt;/a&gt; function.&lt;/p&gt;
&lt;p&gt;For more information about testing your configuration, see &lt;a href=&#34;../../../en/hadoop-integration/configuring-hdfs-access/verifying-hdfs-config/#&#34;&gt;Verifying HDFS configuration&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To reread the configuration files, use the &lt;a href=&#34;../../../en/sql-reference/functions/hadoop-functions/clear-hdfs-caches/#&#34;&gt;CLEAR_HDFS_CACHES&lt;/a&gt; function.&lt;/p&gt;
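&lt;p&gt;For example, the following sequence verifies the configuration directory, checks &lt;code&gt;hdfs&lt;/code&gt;-scheme access, and clears cached values after a configuration change (a sketch; the output depends on your cluster):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT VERIFY_HADOOP_CONF_DIR();

=&amp;gt; SELECT HDFS_CLUSTER_CONFIG_CHECK();

=&amp;gt; SELECT CLEAR_HDFS_CACHES();
&lt;/code&gt;&lt;/pre&gt;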
&lt;h2 id=&#34;name-nodes-and-name-services&#34;&gt;Name nodes and name services&lt;/h2&gt;
&lt;p&gt;You can access HDFS data using the default name node by not specifying a name node or name service:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY users FROM &amp;#39;webhdfs:///data/users.csv&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The database uses the &lt;code&gt;fs.defaultFS&lt;/code&gt; Hadoop configuration parameter to find the name node. (It then uses that name node to locate the data.) You can instead specify a host and port explicitly using the following format:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;webhdfs://&lt;span class=&#34;code-variable&#34;&gt;nn-host&lt;/span&gt;:&lt;span class=&#34;code-variable&#34;&gt;nn-port&lt;/span&gt;/
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The specified host is the name node, not an individual data node. If you are using High Availability (HA) Name Nodes, do not use an explicit host; high availability is provided through name services instead.&lt;/p&gt;
&lt;p&gt;If the HDFS cluster uses High Availability Name Nodes or defines name services, use the name service instead of the host and port, in the format &lt;code&gt;webhdfs://nameservice/&lt;/code&gt;. The name service you specify must be defined in &lt;code&gt;hdfs-site.xml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The following example shows how you can use a name service, hadoopNS:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE EXTERNAL TABLE users (id INT, name VARCHAR(20))
    AS COPY FROM &amp;#39;webhdfs://hadoopNS/data/users.csv&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are using the database to access data from more than one HDFS cluster, always use explicit name services or hosts in the URL. Using the &lt;code&gt;///&lt;/code&gt; shorthand could produce unexpected results because the database uses the first value of &lt;code&gt;fs.defaultFS&lt;/code&gt; that it finds. To access multiple HDFS clusters, you must use host and service names that are globally unique. See &lt;a href=&#34;../../../en/hadoop-integration/configuring-hdfs-access/#&#34;&gt;Configuring HDFS access&lt;/a&gt; for more information.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>SQL Reference: S3 object store</title>
      <link>/en/sql-reference/file-systems-and-object-stores/s3-object-store/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/sql-reference/file-systems-and-object-stores/s3-object-store/</guid>
      <description>
        
        
        &lt;p&gt;File systems using the S3 protocol, including AWS, Pure Storage, and MinIO.&lt;/p&gt;
&lt;h2 id=&#34;uri-format&#34;&gt;URI format&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&lt;code&gt;s3://&lt;/code&gt;&lt;em&gt;&lt;code&gt;bucket&lt;/code&gt;&lt;/em&gt;&lt;code&gt;/&lt;/code&gt;&lt;em&gt;&lt;code&gt;path&lt;/code&gt;&lt;/em&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For AWS, specify the region using the AWSRegion configuration parameter, not the URI. If the region is incorrect, you might experience a delay before the load fails because OpenText™ Analytics Database retries several times before giving up. The default region is &lt;code&gt;us-east-1&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&#34;authentication&#34;&gt;Authentication&lt;/h2&gt;
&lt;p&gt;For AWS:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To access S3 you must create an &lt;a href=&#34;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html&#34;&gt;IAM role&lt;/a&gt; and grant that role permission to access your S3 resources.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;By default, bucket access is restricted to the communal storage bucket. Use an &lt;a href=&#34;../../../en/sql-reference/config-parameters/s3-parameters/&#34;&gt;AWS access key&lt;/a&gt; to load data from non-communal storage buckets.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Either set the AWSAuth configuration parameter to provide credentials or create a USER storage location for the S3 path (see &lt;a href=&#34;../../../en/sql-reference/statements/create-statements/create-location/#&#34;&gt;CREATE LOCATION&lt;/a&gt;) and grant users access.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can use AWS STS temporary session tokens to load data. Because they are session tokens, do not use them for access to storage locations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can configure S3 buckets individually with the per-bucket parameters S3BucketConfig and S3BucketCredentials. For details, see &lt;a href=&#34;../../../en/sql-reference/file-systems-and-object-stores/s3-object-store/per-bucket-s3-configs/#&#34;&gt;Per-bucket S3 configurations&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
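&lt;p&gt;For example, the following sketch sets AWSAuth for the current session only and then loads data; the key values and bucket name are placeholders:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER SESSION SET AWSAuth = &amp;#39;MY_ACCESS_KEY_ID:MY_SECRET_ACCESS_KEY&amp;#39;;

=&amp;gt; COPY t FROM &amp;#39;s3://mybucket/data/clicks.parquet&amp;#39; PARQUET;
&lt;/code&gt;&lt;/pre&gt;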
&lt;h2 id=&#34;configuration-parameters&#34;&gt;Configuration parameters&lt;/h2&gt;
&lt;p&gt;The following database configuration parameters apply to the S3 file system. You can set parameters at different levels with the appropriate ALTER statement, such as &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/&#34;&gt;ALTER SESSION...SET PARAMETER&lt;/a&gt;. Query the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/config-parameters/#&#34;&gt;CONFIGURATION_PARAMETERS&lt;/a&gt; system table to determine what levels (node, session, user, database) are valid for a given parameter.&lt;/p&gt;
&lt;p&gt;You can configure individual buckets using the S3BucketConfig and S3BucketCredentials parameters instead of the global parameters.&lt;/p&gt;
&lt;p&gt;For external tables using highly partitioned data in an object store, see the &lt;a href=&#34;../../../en/sql-reference/config-parameters/general-parameters/#ObjectStoreGlobStrategy&#34;&gt;ObjectStoreGlobStrategy&lt;/a&gt; configuration parameter and &lt;a href=&#34;../../../en/data-load/partitioned-file-paths/#Partitio&#34;&gt;Partitions on Object Stores&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The following descriptions are summaries. For details about all parameters specific to S3, see &lt;a href=&#34;../../../en/sql-reference/config-parameters/s3-parameters/#&#34;&gt;S3 parameters&lt;/a&gt;.&lt;/p&gt;
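&lt;p&gt;For example, to check at which levels you can set a parameter such as AWSRegion, query the system table (column names as documented for CONFIGURATION_PARAMETERS):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; SELECT parameter_name, allowed_levels FROM CONFIGURATION_PARAMETERS
     WHERE parameter_name = &amp;#39;AWSRegion&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;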
&lt;dl&gt;
&lt;dt&gt;AWSAuth&lt;/dt&gt;
&lt;dd&gt;An ID and secret key for authentication. AWS calls these AccessKeyID and SecretAccessKey. For extra security, do not store credentials in the database; use &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/&#34;&gt;ALTER SESSION...SET PARAMETER&lt;/a&gt; to set this value for the current session only.&lt;/dd&gt;
&lt;dt&gt;AWSCAFile&lt;/dt&gt;
&lt;dd&gt;The file name of the TLS server certificate bundle to use. You must set a value when installing a CA certificate on a SUSE Linux Enterprise Server.&lt;/dd&gt;
&lt;dt&gt;AWSCAPath&lt;/dt&gt;
&lt;dd&gt;The path the database uses to look up TLS server certificates. You must set a value when installing a CA certificate on a SUSE Linux Enterprise Server.&lt;/dd&gt;
&lt;dt&gt;AWSEnableHttps&lt;/dt&gt;
&lt;dd&gt;Boolean, whether to use the HTTPS protocol when connecting to S3. Can be set only at the database level. You can set the protocol for individual buckets using S3BucketConfig.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 1 (enabled)&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;AWSEndpoint&lt;/dt&gt;
&lt;dd&gt;String, the endpoint host for all S3 URLs, set as follows:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AWS: &lt;em&gt;&lt;code&gt;hostname_or_IP&lt;/code&gt;&lt;/em&gt;:&lt;em&gt;&lt;code&gt;port&lt;/code&gt;&lt;/em&gt;. Do not include the scheme (http(s)).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AWS with a FIPS-compliant S3 Endpoint: Hostname of a &lt;a href=&#34;https://aws.amazon.com/compliance/fips/&#34;&gt;FIPS-compliant S3 endpoint&lt;/a&gt;. You must also enable S3EnableVirtualAddressing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On-premises/Pure: IP address of the Pure Storage server.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If not set, the database uses virtual-hosted request URLs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &#39;s3.amazonaws.com&#39;&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;AWSLogLevel&lt;/dt&gt;
&lt;dd&gt;The log level, one of: OFF, FATAL, ERROR, WARN, INFO, DEBUG, or TRACE.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; ERROR&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;AWSRegion&lt;/dt&gt;
&lt;dd&gt;The AWS region containing the S3 bucket from which to read files. You can configure this parameter with only one region at a time. If you do not set the correct region, queries can experience a delay before failing.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &#39;us-east-1&#39;&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;AWSSessionToken&lt;/dt&gt;
&lt;dd&gt;A temporary security token generated by running the &lt;code&gt;get-session-token&lt;/code&gt; command, used to configure multi-factor authentication.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

If you use session tokens at the session level, you must set all parameters at the session level, even if some of them are set at the database level.

&lt;/div&gt;&lt;/dd&gt;
&lt;dt&gt;AWSStreamingConnectionPercentage&lt;/dt&gt;
&lt;dd&gt;In Eon Mode, the number of connections to the communal storage to use for streaming reads. In a cloud environment, this setting helps prevent streaming data from using up all available file handles. This setting is unnecessary when using on-premises object stores because of their lower latency.&lt;/dd&gt;
&lt;dt&gt;S3BucketConfig&lt;/dt&gt;
&lt;dd&gt;A JSON array of objects specifying per-bucket configuration overrides. Each property other than the bucket name has a corresponding configuration parameter (shown in parentheses). If both the database-level parameter and its equivalent in S3BucketConfig are set, the value in S3BucketConfig takes precedence.
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;bucket&lt;/code&gt;: Bucket name&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;region&lt;/code&gt; (AWSRegion)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;protocol&lt;/code&gt; (AWSEnableHttps): Connection protocol, one of &lt;code&gt;http&lt;/code&gt; or &lt;code&gt;https&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;endpoint&lt;/code&gt; (AWSEndpoint)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;enableVirtualAddressing&lt;/code&gt; (S3EnableVirtualAddressing): Boolean, whether to rewrite the S3 URL to use a virtual hosted path&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;requesterPays&lt;/code&gt; (S3RequesterPays)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;serverSideEncryption&lt;/code&gt; (S3ServerSideEncryption)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sseCustomerAlgorithm&lt;/code&gt; (S3SseCustomerAlgorithm)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sseCustomerKey&lt;/code&gt; (S3SseCustomerKey)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sseKmsKeyId&lt;/code&gt; (S3SseKmsKeyId)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;proxy&lt;/code&gt; (S3Proxy)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/dd&gt;
&lt;dt&gt;S3BucketCredentials&lt;/dt&gt;
&lt;dd&gt;A JSON object specifying per-bucket credentials. Each property other than the bucket name has a corresponding configuration parameter. If both the database-level parameter and its equivalent in S3BucketCredentials are set, the value in S3BucketCredentials takes precedence.
&lt;p&gt;Properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;bucket&lt;/code&gt;: Bucket name&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;accessKey&lt;/code&gt;: Access key for the bucket (the &lt;em&gt;&lt;code&gt;ID&lt;/code&gt;&lt;/em&gt; in AWSAuth)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;secretAccessKey&lt;/code&gt;: Secret access key for the bucket (the &lt;em&gt;&lt;code&gt;secret&lt;/code&gt;&lt;/em&gt; in AWSAuth)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sessionToken&lt;/code&gt;: Session token, only used when S3BucketCredentials is set at the session level (AWSSessionToken)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This parameter is only visible to superusers. Users can set this parameter at the session level with &lt;a href=&#34;../../../en/sql-reference/statements/alter-statements/alter-session/#&#34;&gt;ALTER SESSION&lt;/a&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;S3EnableVirtualAddressing&lt;/dt&gt;
&lt;dd&gt;Boolean, whether to rewrite S3 URLs to use virtual-hosted paths (disabled by default). This configuration setting takes effect only when you have specified a value for AWSEndpoint.
&lt;p&gt;If you set AWSEndpoint to a &lt;a href=&#34;https://aws.amazon.com/compliance/fips/&#34;&gt;FIPS-compliant S3 endpoint&lt;/a&gt;, you must enable S3EnableVirtualAddressing.&lt;/p&gt;
&lt;p&gt;The value of this parameter does not affect how you specify S3 paths.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;S3Proxy&lt;/dt&gt;
&lt;dd&gt;If needed, HTTP(S) proxy settings, a string in the following format: &lt;code&gt;http[s]://[user:password@]host[:port]&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;S3RequesterPays&lt;/dt&gt;
&lt;dd&gt;Boolean, whether the requester (instead of the bucket owner) pays the cost of accessing data in the bucket.&lt;/dd&gt;
&lt;dt&gt;S3ServerSideEncryption&lt;/dt&gt;
&lt;dd&gt;String, encryption algorithm to use when reading or writing to S3. Supported values are &lt;code&gt;AES256&lt;/code&gt; (for SSE-S3), &lt;code&gt;aws:kms&lt;/code&gt; (for SSE-KMS), and an empty string (for no encryption). See &lt;a href=&#34;#encryption&#34;&gt;Server-Side Encryption&lt;/a&gt;.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &lt;code&gt;&amp;quot;&amp;quot;&lt;/code&gt; (no encryption)&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;S3SseCustomerAlgorithm&lt;/dt&gt;
&lt;dd&gt;String, the encryption algorithm to use when reading or writing to S3 using SSE-C encryption. The only supported values are &lt;code&gt;AES256&lt;/code&gt; and &lt;code&gt;&amp;quot;&amp;quot;&lt;/code&gt;. For SSE-S3 and SSE-KMS, instead use S3ServerSideEncryption.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &lt;code&gt;&amp;quot;&amp;quot;&lt;/code&gt; (no encryption)&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;S3SseCustomerKey&lt;/dt&gt;
&lt;dd&gt;If using SSE-C encryption, the client key for S3 access.&lt;/dd&gt;
&lt;dt&gt;S3SseKmsKeyId&lt;/dt&gt;
&lt;dd&gt;If using SSE-KMS encryption, the key identifier (not the key) to pass to the Key Management Service. The database must have permission to use the key; this permission is managed through KMS.&lt;/dd&gt;
&lt;/dl&gt;
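&lt;p&gt;For example, the following statement configures two buckets with different settings. The bucket names and endpoint address are hypothetical:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; ALTER DATABASE DEFAULT SET S3BucketConfig =
   &amp;#39;[{&amp;#34;bucket&amp;#34;: &amp;#34;exampleAWS&amp;#34;, &amp;#34;region&amp;#34;: &amp;#34;us-east-2&amp;#34;},
     {&amp;#34;bucket&amp;#34;: &amp;#34;examplePure&amp;#34;, &amp;#34;endpoint&amp;#34;: &amp;#34;10.10.20.30&amp;#34;, &amp;#34;protocol&amp;#34;: &amp;#34;http&amp;#34;, &amp;#34;enableVirtualAddressing&amp;#34;: false}]&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;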
&lt;p&gt;&lt;a name=&#34;encryption&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;write-performance&#34;&gt;Write performance&lt;/h2&gt;
&lt;p&gt;By default, the database performs writes using a single thread, but a single write usually includes multiple files or parts of files. For writes to S3, you can use a larger thread pool to perform writes in parallel. This thread pool is used for all file writes to S3, including file exports and writes to communal storage.&lt;/p&gt;
&lt;p&gt;The size of the thread pool is controlled by the &lt;a href=&#34;../../../en/sql-reference/config-parameters/general-parameters/#ObjStoreUploadParallelism&#34;&gt;ObjStoreUploadParallelism&lt;/a&gt; configuration parameter. Each node has a single thread pool used for all file writes. In general, one or two threads per concurrent writer produces good results.&lt;/p&gt;
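&lt;p&gt;For example, to size the pool at two threads per writer for two concurrent writers (an illustrative value; tune for your workload):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; ALTER DATABASE DEFAULT SET ObjStoreUploadParallelism = 4;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;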
&lt;h2 id=&#34;server-side-encryption&#34;&gt;Server-side encryption&lt;/h2&gt;
&lt;p&gt;By default, the database reads and writes S3 data that is not encrypted. If the S3 bucket uses server-side encryption (SSE), you can configure the database to access it. S3 supports three types of server-side encryption: SSE-S3, SSE-KMS, and SSE-C.&lt;/p&gt;
&lt;p&gt;The database must also have read or write permissions (depending on the operation) on the bucket.&lt;/p&gt;
&lt;h3 id=&#34;sse-s3&#34;&gt;SSE-S3&lt;/h3&gt;
&lt;p&gt;With SSE-S3, the S3 service manages encryption keys. Reads do not require additional configuration. To write to S3, the client (OpenText™ Analytics Database, in this case) must specify only the encryption algorithm.&lt;/p&gt;
&lt;p&gt;If the S3 bucket is configured with the default encryption settings, the database can read and write data to it with no further changes. If the bucket does not use the default encryption settings, set the S3ServerSideEncryption configuration parameter or the &lt;code&gt;serverSideEncryption&lt;/code&gt; field in S3BucketConfig to &lt;code&gt;AES256&lt;/code&gt;.&lt;/p&gt;
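&lt;p&gt;For example, to use SSE-S3 with a bucket that does not use the default encryption settings:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; ALTER DATABASE DEFAULT SET S3ServerSideEncryption = &amp;#39;AES256&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;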
&lt;h3 id=&#34;sse-kms&#34;&gt;SSE-KMS&lt;/h3&gt;
&lt;p&gt;With SSE-KMS, encryption keys are managed by the Key Management Service (KMS). The client must supply a KMS key identifier (not the actual key) when writing data. For all operations, the client must have permission to use the KMS key. These permissions are managed in KMS and not in the database.&lt;/p&gt;
&lt;p&gt;To use SSE-KMS:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Set the S3ServerSideEncryption configuration parameter or the &lt;code&gt;serverSideEncryption&lt;/code&gt; field in S3BucketConfig to &lt;code&gt;aws:kms&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the S3SseKmsKeyId configuration parameter or the &lt;code&gt;sseKmsKeyId&lt;/code&gt; field in S3BucketConfig to the key ID.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
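&lt;p&gt;For example, with a hypothetical KMS key identifier:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; ALTER DATABASE DEFAULT SET S3ServerSideEncryption = &amp;#39;aws:kms&amp;#39;;
=&amp;gt; ALTER DATABASE DEFAULT SET S3SseKmsKeyId = &amp;#39;1234abcd-12ab-34cd-56ef-1234567890ab&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;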
&lt;h3 id=&#34;sse-c&#34;&gt;SSE-C&lt;/h3&gt;
&lt;p&gt;With SSE-C, the client manages encryption keys and provides them to S3 for each operation.&lt;/p&gt;
&lt;p&gt;To use SSE-C:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Set the S3SseCustomerAlgorithm configuration parameter or the &lt;code&gt;sseCustomerAlgorithm&lt;/code&gt; field in S3BucketConfig to &lt;code&gt;AES256&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the S3SseCustomerKey configuration parameter or the &lt;code&gt;sseCustomerKey&lt;/code&gt; field in S3BucketConfig to the client key. The value can be either a 32-character plaintext key or a 44-character base64-encoded key.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
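&lt;p&gt;For example, with a placeholder 32-character plaintext key, set at the session level so that the key is not stored in the database:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; ALTER SESSION SET S3SseCustomerAlgorithm = &amp;#39;AES256&amp;#39;;
=&amp;gt; ALTER SESSION SET S3SseCustomerKey = &amp;#39;abcdefghijklmnopqrstuvwxyz012345&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;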
&lt;h2 id=&#34;https-logging-aws&#34;&gt;HTTP(S) logging (AWS)&lt;/h2&gt;
&lt;p&gt;When a request to AWS fails or is retried, or the database detects a performance degradation, the database logs the event in the &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/udfs-events/#&#34;&gt;UDFS_EVENTS&lt;/a&gt; system table. The event type for these records is &lt;code&gt;HttpRequestAttempt&lt;/code&gt;, and the description is a JSON object with more details. For example:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SELECT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;filesystem&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;event&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;description&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;FROM&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;UDFS_EVENTS&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;filesystem&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;event&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;                                                                 &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;description&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;c1&#34;&gt;------------+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;S3&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;         &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;HttpRequestAttempt&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;err&#34;&gt;{&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;AttemptStartTimeGMT&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;2024-01-18 16:45:52.663&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;RetryCount&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;HttpMethod&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;GET&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;URIString&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;s3://mybucket/s3_2.dat&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;AmzRequestId&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;79104EXAMPLEB723&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;AmzId2&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span 
class=&#34;s2&#34;&gt;&amp;#34;IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;HttpStatusCode&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;404&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;HttpRequestLatency&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;SdkExceptionMessage&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;No response body.&amp;#34;&lt;/span&gt;&lt;span class=&#34;err&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;S3&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;         &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;HttpRequestAttempt&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;err&#34;&gt;{&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;AttemptStartTimeGMT&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;2024-01-18 16:46:02.663&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;RetryCount&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;HttpMethod&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;GET&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;URIString&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;s3://mybucket/s3_2.dat&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;AmzRequestId&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;79104EXAMPLEB791&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;AmzId2&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;JPXQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa4Ln&amp;#34;&lt;/span&gt;&lt;span 
class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;HttpStatusCode&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;206&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;HttpRequestLatency&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;err&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;rows&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;To extract fields from this description, use the &lt;a href=&#34;../../../en/sql-reference/functions/flex-functions/flex-extractor-functions/mapjsonextractor/#&#34;&gt;MAPJSONEXTRACTOR&lt;/a&gt; function:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SELECT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;filesystem&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;event&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;     &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;MapJSONExtractor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;description&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;))[&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;RetryCount&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;as&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;RetryCount&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;     &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;MapJSONExtractor&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;description&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;))[&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;HttpStatusCode&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;as&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;HttpStatusCode&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;   &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;FROM&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;UDFS_EVENTS&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;filesystem&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;event&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;RetryCount&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;HttpStatusCode&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;c1&#34;&gt;------------+---------------------+------------+-----------------
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;S3&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;         &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;HttpRequestAttempt&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;404&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;S3&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;         &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;HttpRequestAttempt&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;206&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;rows&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;For details on AWS-specific fields such as &lt;code&gt;AmzRequestId&lt;/code&gt;, see the &lt;a href=&#34;https://docs.aws.amazon.com/AmazonS3/latest/userguide/get-request-ids.html&#34;&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;../../../en/sql-reference/system-tables/v-monitor-schema/udfs-statistics/#&#34;&gt;UDFS_STATISTICS&lt;/a&gt; system table records the accumulated duration for all HTTP requests in the &lt;code&gt;TOTAL_REQUEST_DURATION_MS&lt;/code&gt; column. This table also provides computed values for average duration and throughput.&lt;/p&gt;
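&lt;p&gt;For example, to review accumulated request durations per file system (column names as documented for UDFS_STATISTICS; the output depends on your workload):&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;=&amp;gt; SELECT filesystem, operation, total_request_duration_ms FROM UDFS_STATISTICS;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;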
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following example sets a database-wide AWS region and credentials:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DATABASE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DEFAULT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;AWSRegion&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;us-west-1&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DATABASE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DEFAULT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;AWSAuth&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;myaccesskeyid123456:mysecretaccesskey123456789012345678901234&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The following example loads data from S3. You can use a glob if all files in the glob can be loaded together. In the following example, AWS_DataLake contains only ORC files.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;COPY&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;FROM&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;s3://datalake/*&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;ORC&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You can specify a list of comma-separated S3 buckets as in the following example. All buckets must be in the same region. To load from more than one region, use separate COPY statements and change the value of AWSRegion between calls.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;COPY&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;FROM&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;s3://AWS_Data_1/sales.parquet&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;s3://AWS_Data_2/sales.parquet&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;PARQUET&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The following example creates a user storage location and a role, so that users without their own S3 credentials can read data from S3 using the server credential.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;   &lt;/span&gt;&lt;span class=&#34;c1&#34;&gt;-- Set the database-level credential (once):
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DATABASE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DEFAULT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;AWSAuth&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;myaccesskeyid123456:mysecretaccesskey123456789012345678901234&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;CREATE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;LOCATION&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;s3://datalake&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;SHARED&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;USAGE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;USER&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;LABEL&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;s3user&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;CREATE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ROLE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;ExtUsers&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;   &lt;/span&gt;&lt;span class=&#34;c1&#34;&gt;-- Assign users to this role using GRANT (Role).
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;GRANT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;READ&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ON&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;LOCATION&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;s3://datalake&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;TO&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;ExtUsers&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
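&lt;p&gt;The comment in the previous example refers to GRANT (Role). As a sketch, granting the role to a user (here a hypothetical user &lt;code&gt;Bob&lt;/code&gt;) lets that user read data through the shared location:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;=&amp;gt; GRANT ExtUsers TO Bob;
&lt;/code&gt;&lt;/pre&gt;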
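&lt;p&gt;As noted earlier, all buckets named in a single COPY statement must be in the same region. One way to load from two regions, as a sketch with placeholder bucket names, is to change AWSRegion between separate COPY statements:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;=&amp;gt; ALTER SESSION SET AWSRegion = &#39;us-east-1&#39;;
=&amp;gt; COPY t FROM &#39;s3://us_east_bucket/sales.parquet&#39; PARQUET;

=&amp;gt; ALTER SESSION SET AWSRegion = &#39;eu-west-1&#39;;
=&amp;gt; COPY t FROM &#39;s3://eu_west_bucket/sales.parquet&#39; PARQUET;
&lt;/code&gt;&lt;/pre&gt;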
&lt;p&gt;The configuration properties for a given bucket can differ depending on its type. The following S3BucketConfig setting configures two buckets: an AWS bucket (&lt;code&gt;AWSBucket&lt;/code&gt;) and a Pure Storage bucket (&lt;code&gt;PureStorageBucket&lt;/code&gt;). Because &lt;code&gt;AWSBucket&lt;/code&gt; doesn&#39;t specify an endpoint, the database uses the AWSEndpoint configuration parameter, which defaults to &lt;code&gt;s3.amazonaws.com&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DATABASE&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;DEFAULT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;S3BucketConfig&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;[
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    {
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;bucket&amp;#34;: &amp;#34;AWSBucket&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;region&amp;#34;: &amp;#34;us-east-2&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;protocol&amp;#34;: &amp;#34;https&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;requesterPays&amp;#34;: true,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;serverSideEncryption&amp;#34;: &amp;#34;aes256&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    },
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    {
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;bucket&amp;#34;: &amp;#34;PureStorageBucket&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;endpoint&amp;#34;: &amp;#34;pure.mycorp.net:1234&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;protocol&amp;#34;: &amp;#34;http&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;enableVirtualAddressing&amp;#34;: false
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    }
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;]&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The following example sets S3BucketCredentials for these two buckets:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SESSION&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;S3BucketCredentials&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;[
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    {
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;bucket&amp;#34;: &amp;#34;AWSBucket&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;accessKey&amp;#34;: &amp;#34;&amp;lt;AK0&amp;gt;&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;secretAccessKey&amp;#34;: &amp;#34;&amp;lt;SAK0&amp;gt;&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;sessionToken&amp;#34;: &amp;#34;1234567890&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    },
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    {
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;bucket&amp;#34;: &amp;#34;PureStorageBucket&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;accessKey&amp;#34;: &amp;#34;&amp;lt;AK1&amp;gt;&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;        &amp;#34;secretAccessKey&amp;#34;: &amp;#34;&amp;lt;SAK1&amp;gt;&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;    }
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s1&#34;&gt;]&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The following example obtains temporary credentials from AWS STS and then sets the session token in the database session. The database uses the session token to access S3 with the specified credentials and bypasses checking for a USER storage location.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;$ aws sts get-session-token
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;s2&#34;&gt;&amp;#34;Credentials&amp;#34;&lt;/span&gt;: &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;s2&#34;&gt;&amp;#34;AccessKeyId&amp;#34;&lt;/span&gt;: &lt;span class=&#34;s2&#34;&gt;&amp;#34;ASIAJZQNDVS727EHDHOQ&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;s2&#34;&gt;&amp;#34;SecretAccessKey&amp;#34;&lt;/span&gt;: &lt;span class=&#34;s2&#34;&gt;&amp;#34;F+xnpkHbst6UPorlLGj/ilJhO5J2n3Yo7Mp4vYvd&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;s2&#34;&gt;&amp;#34;SessionToken&amp;#34;&lt;/span&gt;: &lt;span class=&#34;s2&#34;&gt;&amp;#34;FQoDYXdzEKv//////////wEaDMWKxakEkCyuDH0UjyKsAe6/3REgW5VbWtpuYyVvSnEK1jzGPHi/jPOPNT7Kd+ftSnD3qdaQ7j28SUW9YYbD50lcXikz/HPlusPuX9sAJJb7w5oiwdg+ZasIS/+ejFgCzLeNE3kDAzLxKKsunvwuo7EhTTyqmlLkLtIWu9zFykzrR+3Tl76X7EUMOaoL31HOYsVEL5d9I9KInF0gE12ZB1yN16MsQVxpSCavOFHQsj/05zbxOQ4o0erY1gU=&amp;#34;&lt;/span&gt;,
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;s2&#34;&gt;&amp;#34;Expiration&amp;#34;&lt;/span&gt;: &lt;span class=&#34;s2&#34;&gt;&amp;#34;2018-07-18T05:56:33Z&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;$ vsql
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-sql&#34; data-lang=&#34;sql&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SESSION&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;AWSAuth&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;ASIAJZQNDVS727EHDHOQ:F+xnpkHbst6UPorlLGj/ilJhO5J2n3Yo7Mp4vYvd&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;ALTER&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SESSION&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;SET&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;AWSSessionToken&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;FQoDYXdzEKv//////////wEaDMWKxakEkCyuDH0UjyKsAe6/3REgW5VbWtpuYyVvSnEK1jzGPHi/jPOPNT7Kd+ftSnD3qdaQ7j28SUW9YYbD50lcXikz/HPlusPuX9sAJJb7w5oiwdg+ZasIS/+ejFgCzLeNE3kDAzLxKKsunvwuo7EhTTyqmlLkLtIWu9zFykzrR+3Tl76X7EUMOaoL31HOYsVEL5d9I9KInF0gE12ZB1yN16MsQVxpSCavOFHQsj/05zbxOQ4o0erY1gU=&amp;#39;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;../../../en/sql-reference/file-systems-and-object-stores/s3-object-store/per-bucket-s3-configs/&#34;&gt;Per-Bucket S3 Configurations&lt;/a&gt;&lt;/p&gt;

      </description>
    </item>
    
  </channel>
</rss>
