Known issues
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
9.3.1-0
Updated: 01/05/2020
Issue Key | Component | Description |
---|---|---|
VER-41895 | Admin Tools | On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster. Workaround: If the admintools operation needs to run on just one node, users can avoid this issue by using SSH to connect to the target node and running the admintools command on that node directly. |
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load. |
VER-48041 | Admin Tools | On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. |
VER-60409 | AP-Advanced, Optimizer | APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (for example, more than 1000) and also outputs many columns may fail with the error "Request size too big" due to the additional memory required during parsing. Workaround: Increase the configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the num_components parameter or by setting the cutoff parameter; in practice, cutoff=0.9 is usually sufficient. Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing, so running multiple queries at the same time could cause out-of-memory (OOM) errors if your total memory is limited. Refer to the Vertica documentation for more information about MaxParsedQuerySizeMB, and see the sketch following this table for example statements. |
VER-61069 | Execution Engine | In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely. Workaround: Halt the remaining processes using admintools. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover (TM). |
VER-62061 | Catalog Sync and Revive | If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog. |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2 (see the formatted example following this table): CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which itself relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. Workaround: One of the following: |
VER-67228 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing. |
VER-70139 | Metadata Tables Security | The LDAP Link dry run metafunctions do not yet have full support for the LDAPLinkStartTLS and LDAPLinkTLSReqCert parameters. Workaround: When using the LDAP Link dry run metafunctions, pass "1" and "allow" for the LDAPLinkStartTLS and LDAPLinkTLSReqCert arguments, respectively. |
VER-70238 | Backup/DR, Security | A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory. Workaround: Move your SSL authentication-related files (server.crt, server.csr, and server.key) out of the catalog directory before performing the restore. |
VER-70468 | Nimbus | Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy only has an effect if you also set the global load balancing policy to ROUNDROBIN. This is the case even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN. Workaround: Set the global load balancing policy with the SET_LOAD_BALANCE_POLICY function: SELECT set_load_balance_policy('roundrobin'); |
VER-70549 | Spread | When you quickly stop one database and then start a different database, the new database may fail to start after attempting to connect to Spread processes associated with the stopped database. Workaround: After stopping a database, ensure that all old Spread processes have stopped on the affected nodes before starting a different database. This typically takes no more than one minute, but may vary under certain circumstances. |
VER-76267 | Optimizer | Executing EXPLAIN COPY on a new table fails. |
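The statements below sketch the two workarounds described for VER-60409. This is a minimal sketch, not an exact reproduction: the model name pca_model, the table input_table, and the specific parameter values are hypothetical placeholders, while MaxParsedQuerySizeMB, num_components, and cutoff are the parameters named in the issue description.

```sql
-- Workaround A: raise the parser memory cap (value in MB is an example).
-- Each query may then use up to 4 GB of memory during parsing, so avoid
-- running many such queries concurrently on memory-constrained systems.
SELECT SET_CONFIG_PARAMETER('MaxParsedQuerySizeMB', 4096);

-- Workaround B: limit the number of output columns of APPLY_PCA
-- ('pca_model' and input_table are hypothetical names).
SELECT APPLY_PCA(* USING PARAMETERS model_name='pca_model', num_components=50)
       OVER() FROM input_table;

-- Or keep only the components that explain ~90% of the variance.
SELECT APPLY_PCA(* USING PARAMETERS model_name='pca_model', cutoff=0.9)
       OVER() FROM input_table;
```

For readability, here is the VER-64352 statement sequence from the table above, as it would be run in a vsql session. The variables :lib1_path, :numpy_path, :lib2_path, and :sklearn_path stand for the library and dependency paths, as in the issue description.

```sql
-- lib1 imports numpy; lib2 imports sklearn, which itself relies on numpy.
CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
-- After lib1 is dropped, this statement fails.
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';
```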
9.3.0-0
Updated: 12/05/2019
Issue Key | Component | Description |
---|---|---|
VER-45474 | Optimizer | When a node is down, DELETE and UPDATE query performance can degrade due to non-optimized query plans. |
VER-68463 | Cloud - Amazon, Data Export, Hadoop | Export to Parquet with partitions fails if any of the exported columns in the outermost SELECT statement is specified using "schema.table.column" notation. Workaround: Specify the column using just the "column" name, without the "schema.table" prefix (see the sketch following this table). |
VER-67228 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing. |
VER-64997 | Backup/DR, Security | A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory. |
VER-64916 | Kafka Integration | When Vertica exports data collector information to Kafka via a notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database. |
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2: CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which itself relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. |
VER-63720 | Recovery | When a vbr configuration file specifies fewer nodes than the cluster contains, the restored library is not installed in the catalog on the nodes that are not listed in the configuration file. |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-62061 | Catalog Sync and Revive | If you revive a database on a one-node cluster, Vertica does not warn users against reviving the same database elsewhere. This can result in two instances of the same database running concurrently on different hosts, which can corrupt the database catalog. |
VER-61584 | Nimbus, Subscriptions | This issue occurs only while one or more nodes are shutting down or in an unsafe state. There is no workaround. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover (TM). |
VER-61069 | Execution Engine | In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely. |
VER-60409 | AP-Advanced, Optimizer | APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (for example, more than 1000) and also outputs many columns may fail with the error "Request size too big" due to the additional memory required during parsing. |
VER-58168 | Recovery | A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for or cancel such a transaction in order to recover tables modified by the transaction. In some rare instances, such transactions may hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore the recovering node cannot transition to 'UP'. Usually the hung transaction can be stopped by restarting the node on which the transaction was initiated, assuming this is not a critical node. In extreme cases, or when the initiator node is a critical node, restart the cluster. |
VER-57126 | Data Removal - Delete, Purge, Partitioning | Partition operations that use a range, for example copy_partitions_to_table, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression, such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases, the partition operation may fail with "ERROR 3587: Insufficient resources to execute plan on pool poolname". |
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load. |
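A minimal sketch of the VER-68463 workaround, assuming a hypothetical table public.sales with columns region and amount and an example S3 destination. The point is only that the partitioned export succeeds when the columns in the outermost SELECT are referenced by name alone, without the schema.table prefix.

```sql
-- Fails: columns qualified as schema.table.column in the outermost SELECT.
EXPORT TO PARQUET (directory = 's3://examplebucket/sales')
   OVER (PARTITION BY region)
   AS SELECT public.sales.region, public.sales.amount FROM public.sales;

-- Works: reference the exported columns by name only.
EXPORT TO PARQUET (directory = 's3://examplebucket/sales')
   OVER (PARTITION BY region)
   AS SELECT region, amount FROM public.sales;
```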