Configuring HDFS access
Vertica uses information from the Hadoop cluster configuration to support reading data (COPY or external tables). In Eon Mode, it also uses this information to access communal storage on HDFS. Vertica nodes therefore must have access to certain Hadoop configuration files.
For both co-located and separate clusters that use Kerberos authentication, configure Vertica for Kerberos as explained in Configure Vertica for Kerberos Authentication.
Vertica requires access to the WebHDFS service and ports on all name nodes and data nodes. For more information about WebHDFS ports, see HDFS Ports in the Cloudera documentation.
Accessing Hadoop configuration files
Your Vertica nodes need access to certain Hadoop configuration files:
- If Vertica is co-located on HDFS nodes, then those configuration files are already present.
- If Vertica is running on a separate cluster, you must copy the required files to all database nodes. A simple way to do so is to configure your Vertica nodes as Hadoop edge nodes. Client applications run on edge nodes; from Hadoop's perspective, Vertica is a client application. You can use Ambari or Cloudera Manager to configure edge nodes. For more information, see the documentation from your Hadoop vendor.
Verify that the value of the HadoopConfDir configuration parameter (see Hadoop parameters) includes a directory containing the core-site.xml and hdfs-site.xml files. If you do not set a value, Vertica looks for the files in /etc/hadoop/conf. No matter which database user performs the operation, Vertica accesses this directory as the Linux user under which the Vertica server process runs, so that user must have read access to it.
Vertica uses several properties defined in these configuration files. These properties are listed in HDFS file system.
Using a cluster with high availability NameNodes
If your Hadoop cluster uses High Availability (HA) NameNodes, verify that the dfs.nameservices parameter and the individual name nodes are defined in hdfs-site.xml.
Using more than one Hadoop cluster
In some cases, a Vertica cluster requires access to more than one HDFS cluster. For example, your business might use separate HDFS clusters for separate regions, or you might need data from both test and deployment clusters.
To support multiple clusters, perform the following steps:
1. Copy the configuration files from all HDFS clusters to your database nodes. You can place the copied files in any location readable by Vertica. However, as a best practice, you should place them all in the same directory tree, with one subdirectory per HDFS cluster. The locations must be the same on all database nodes.
2. Set the HadoopConfDir configuration parameter. The value is a colon-separated path containing the directories for all of your HDFS clusters.
3. Use an explicit name node or name service in the URL when creating an external table or copying data, as shown in the example after these steps. Do not use hdfs:/// because it could be ambiguous. For more information about URLs, see HDFS file system.
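For example, a sketch assuming two clusters whose nameservices are named prodns and testns (hypothetical names), with configuration files copied to /etc/hadoop/conf-prod and /etc/hadoop/conf-test on every database node:
=> ALTER DATABASE DEFAULT SET PARAMETER HadoopConfDir = '/etc/hadoop/conf-prod:/etc/hadoop/conf-test';
=> CREATE EXTERNAL TABLE sales (id INT, total FLOAT)
       AS COPY FROM 'hdfs://prodns/data/sales/*.orc' ORC;
Because the URL names prodns explicitly, there is no ambiguity about which cluster the data comes from.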
Vertica connects directly to a name node or name service; it does not otherwise distinguish among HDFS clusters. Therefore, names of HDFS name nodes and name services must be globally unique.
Verifying the configuration
Use the VERIFY_HADOOP_CONF_DIR function to verify that Vertica can find configuration files in HadoopConfDir.
Use the HDFS_CLUSTER_CONFIG_CHECK function to test access through the hdfs scheme.
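For example, a quick check of both (each function takes no arguments; output depends on your configuration):
=> SELECT VERIFY_HADOOP_CONF_DIR();
=> SELECT HDFS_CLUSTER_CONFIG_CHECK();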
For more information about testing your configuration, see Verifying HDFS configuration.
Updating configuration files
If you update the configuration files after starting Vertica, use the following statement to refresh them:
=> SELECT CLEAR_HDFS_CACHES();
The CLEAR_HDFS_CACHES function also flushes information about which name node is active in a High Availability (HA) Hadoop cluster. Therefore, the first request after calling this function is slow, because the initial connection to the name node can take more than 15 seconds.
Verifying HDFS configuration
Use the EXTERNAL_CONFIG_CHECK function to test access to HDFS. This function calls several others. If you prefer to test individual components, or if some tests do not apply to your configuration, you can instead call the functions individually. For example, if you are not using the HCatalog Connector then you do not need to call that function. The functions are:
- KERBEROS_CONFIG_CHECK: tests the Kerberos configuration and credentials.
- HADOOP_IMPERSONATION_CONFIG_CHECK: tests the configuration for HDFS impersonation (delegation tokens).
- HDFS_CLUSTER_CONFIG_CHECK: tests access to the HDFS clusters described by HadoopConfDir.
- HCATALOGCONNECTOR_CONFIG_CHECK: tests the HCatalog Connector configuration.
To run all tests, call EXTERNAL_CONFIG_CHECK with no arguments:
=> SELECT EXTERNAL_CONFIG_CHECK();
To test only some authorities, nameservices, or Hive schemas, pass a single string argument. The format is a comma-separated list of "key=value" pairs, where keys are "authority", "nameservice", and "schema". The value is passed to all of the sub-functions; see those reference pages for details on how values are interpreted.
The following example tests the configuration of only the nameservice named "ns1":
=> SELECT EXTERNAL_CONFIG_CHECK('nameservice=ns1');
Troubleshooting reads from HDFS
You might encounter the following issues when accessing data in HDFS.
Queries using [web]hdfs:/// show unexpected results
If you are using the /// shorthand to query external tables and see unexpected results, such as production data in your test cluster, verify that HadoopConfDir is set to the value you expect. The HadoopConfDir configuration parameter defines a path to search for the Hadoop configuration files that Vertica needs to resolve file locations. The HadoopConfDir parameter can be set at the session level, overriding the permanent value set in the database.
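For example, a sketch of checking the value in effect for your current session (SHOW CURRENT reports the value and the level at which it was set; verify the syntax for your Vertica version):
=> SHOW CURRENT HadoopConfDir;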
To debug problems with /// URLs, try replacing the URLs with ones that use an explicit nameservice or name node. If the explicit URL works, then the problem is with the resolution of the shorthand. If the explicit URL also does not work as expected, then the problem is elsewhere (such as your nameservice).
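For example, a sketch assuming a nameservice named ns1 and an illustrative path:
-- Shorthand: resolved through the configuration files in HadoopConfDir
=> CREATE EXTERNAL TABLE t_short (x INT) AS COPY FROM 'hdfs:///data/file.orc' ORC;
-- Explicit: names the nameservice directly, bypassing shorthand resolution
=> CREATE EXTERNAL TABLE t_explicit (x INT) AS COPY FROM 'hdfs://ns1/data/file.orc' ORC;
If queries against t_explicit return the expected data but queries against t_short do not, the problem is in how the shorthand resolves.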
Queries take a long time to run when using HA
The High Availability Name Node feature in HDFS allows a name node to fail over to a standby name node. The dfs.client.failover.max.attempts configuration parameter (in hdfs-site.xml) specifies how many attempts to make when failing over. Vertica uses a default value of 4 if this parameter is not set. After reaching the maximum number of failover attempts, Vertica concludes that the HDFS cluster is unavailable and aborts the operation. Vertica uses the dfs.client.failover.sleep.base.millis and dfs.client.failover.sleep.max.millis parameters to decide how long to wait between retries. Typical ranges are 500 milliseconds to 15 seconds, with longer waits for successive retries.
A second parameter, ipc.client.connect.retry.interval, specifies the time to wait between attempts, with typical values being 10 to 20 seconds.
Cloudera and Hortonworks both provide tools to automatically generate configuration files. These tools can set the maximum number of failover attempts to a much higher number (50 or 100). If the HDFS cluster is unavailable (all name nodes are unreachable), Vertica can appear to hang for an extended period (minutes to hours) while trying to connect.
Failover attempts are logged in the QUERY_EVENTS system table. The following example shows how to query this table to find these events:
=> SELECT event_category, event_type, event_description, operator_name,
event_details, count(event_type) AS count
FROM query_events
WHERE event_type ilike 'WEBHDFS FAILOVER RETRY'
GROUP BY event_category, event_type, event_description, operator_name, event_details;
-[ RECORD 1 ]-----+---------------------------------------
event_category | EXECUTION
event_type | WEBHDFS FAILOVER RETRY
event_description | WebHDFS Namenode failover and retry.
operator_name | WebHDFS FileSystem
event_details | WebHDFS request failed on ns
count | 4
You can either wait for Vertica to complete or abort the connection, or set the dfs.client.failover.max.attempts parameter to a lower value.
WebHDFS error when using LibHDFS++
When creating an external table or loading data and using the hdfs scheme, you might see errors from WebHDFS failures. Such errors indicate that Vertica was not able to use the hdfs scheme and fell back to webhdfs, but that the WebHDFS configuration is incorrect.
First verify the value of the HadoopConfDir configuration parameter, which can be set at the session level. Then verify that the HDFS configuration files found there have the correct WebHDFS configuration for your Hadoop cluster. See Configuring HDFS access for information about use of these files. See your Hadoop documentation for information about WebHDFS configuration.
Vertica places too much load on the name node (LibHDFS++)
Large HDFS clusters can sometimes experience heavy load on the name node when clients, including Vertica, need to locate data. If your name node is sensitive to this load and you are using LibHDFS++, you can instruct Vertica to distribute metadata about block locations to its nodes so that they do not have to contact the name node as often. Because this metadata must be serialized and distributed, doing so can somewhat degrade database performance in deployments where the name node is not under contention.
If protecting your name node from load is more important than query performance, set the EnableHDFSBlockInfoCache configuration parameter to 1 (true). Usually this applies to large HDFS clusters where name node contention is already an issue.
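A minimal sketch of enabling the parameter at the database level (assuming ALTER DATABASE DEFAULT targets the current database in your Vertica version):
=> ALTER DATABASE DEFAULT SET PARAMETER EnableHDFSBlockInfoCache = 1;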
This setting applies to access through LibHDFS++ (the hdfs scheme). Sometimes LibHDFS++ falls back to WebHDFS, which does not use this setting. If you have enabled this setting and you are still seeing high traffic on your name node from Vertica, check the QUERY_EVENTS system table for LibHDFS++ UNSUPPORTED OPERATION events.
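For example, modeled on the failover query shown earlier (the exact event_type string is an assumption; adjust the pattern to what appears in your table):
=> SELECT event_category, event_type, event_description, operator_name
   FROM query_events
   WHERE event_type ILIKE '%UNSUPPORTED OPERATION%';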
Kerberos authentication errors
Kerberos authentication can fail even though a ticket is valid if Hadoop expires tickets frequently. It can also fail due to clock skew between Hadoop and Vertica nodes. For details, see Troubleshooting Kerberos authentication.