Known issues
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
9.2.1-0
Updated: 12/05/2019
Issue Key | Component | Description |
---|---|---|
VER-45474 | Optimizer | When a node is down, DELETE and UPDATE query performance can degrade due to non-optimized query plans. |
VER-55470 | AP-Advanced | After calling the IMPUTE function with the fourth argument set to 'mode', a temporary table named impute_201701_80425147_p1 is created. This table is required to query the view generated by IMPUTE. Workaround: Drop the temporary table when you no longer need the view (see the example following this table). |
VER-58168 | Recovery | A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for such a transaction to finish, or cancel it, in order to recover tables that the transaction modified. In rare instances, such a transaction hangs and cannot be canceled. Tables locked by the transaction cannot be recovered, so the recovering node cannot transition to 'UP'. Usually you can stop the hung transaction by restarting the node on which it was initiated, assuming that node is not a critical node. In extreme cases, or when the initiator node is a critical node, restart the cluster. |
VER-60797 | License | AutoPass-format licenses do not work properly when installed in Vertica 8.1 or older. To replace them with a legacy Vertica license, set the configuration parameter AllowVerticaLicenseOverWriteHP to 1 (see the example following this table). |
VER-62983 | Hadoop | When HCatalog Connector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on HCatalog Connector schemas. |
VER-63720 | Recovery | If a vbr configuration file specifies fewer nodes than the cluster contains, the catalog of the restored library is not installed on the nodes that were omitted from the configuration file. |
VER-64916 | Kafka Integration | When Vertica exports data collector information to Kafka via notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database. |
VER-64997 | Backup/DR, Security | A vbr restore operation can fail if your database is configured for SSL and has a server.key and a server.crt file present in the catalog directory. Workaround: Move your SSL authentication-related files (server.crt, server.csr, and server.key) out of the catalog directory before performing the restore. |
VER-65742 | Security | When users are given permissions on a view both through schema-inherited privileges and directly on the view, only the owner of the view has access. Workaround: Use either inherited privileges or privileges granted directly on the object, not both. |
VER-66025 | Catalog Engine, Spread | The database can fail if you create nested fault groups after dropping and re-adding nodes. |
VER-66455 | Optimizer | When a table has an enabled constraint, a DML statement on that table often involves an internal query that checks constraint integrity. If this constraint-checking query spills and fails, Vertica retries the DML, but the DML eventually fails with the message "DDL statement interfered with query replan". Workaround: Re-issue the DML and ensure that the constraint-checking query does not fail by increasing memory, loading/updating data in smaller increments, or enabling join spill. |
VER-66756 | Installation Program | If a standby node previously replaced a failed node that had deprecated projections, the identify_unsupported_projections.sh script does not detect those projections on the standby node. When upgrading to 9.1 or later, the standby node may not start because the 'U_DeprecateNonIdenticallySortedBuddies' and/or 'U_DeprecatePrejoinRangeSegProjs' task fails. |
VER-67210 | AP-Advanced | If a node runs out of memory while training a machine learning model, a query may sometimes fail with the error "Error during cleanup for User Defined Function load_rows_into_blocks: VIAssert(size >= 0) failed". Workaround: Retry the query when more memory resources are available. |
VER-67228 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing. |
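The following is a minimal sketch of the cleanup workaround for VER-55470. The view name my_imputed_view is hypothetical, and the temporary table name is the example cited in the issue description; the name generated in your session will differ.

```sql
-- Hedged sketch for VER-55470: once the view produced by IMPUTE is no longer
-- needed, drop the view and the temporary table that backs it.
DROP VIEW IF EXISTS my_imputed_view;              -- hypothetical view name
DROP TABLE IF EXISTS impute_201701_80425147_p1;   -- example temporary table name from the issue
```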
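For VER-60797, the following is a minimal sketch of the workaround, assuming the SET_CONFIG_PARAMETER and INSTALL_LICENSE meta-functions and a hypothetical license file path; verify the exact procedure for your Vertica 8.1 or older release.

```sql
-- Hedged sketch for VER-60797: allow a legacy Vertica license to overwrite an
-- installed AutoPass-format license, then install the legacy license.
SELECT SET_CONFIG_PARAMETER('AllowVerticaLicenseOverWriteHP', 1);
SELECT INSTALL_LICENSE('/tmp/vertica_license.dat');  -- hypothetical license file path
```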
9.2.0-0
Updated: 11/27/2018
Issue Key | Component | Description |
---|---|---|
VER-61584 | Subscriptions | The VAssert(madeNewPrimary) failure only occurs while a node or nodes are shutting down or are in unsafe mode. |
VER-61780 | Scrutinize | Scrutinize can generate UnicodeEncodeError if the system locale is set to a language that has non-ASCII characters. |
VER-62983 | Hadoop | When HCatalog Connector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on HCatalog Connector schemas. |
VER-41895 | Admin Tools | On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster. If the admintools operation needs to run on just one node, you can work around the issue by using SSH to connect to the target node and running the admintools command on that node directly. |
VER-48041 | Admin Tools | On some systems, admintools occasionally cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient and there is no known workaround. |
VER-62414 | Hadoop | Loading ORC and Parquet files with a very small stripe or rowgroup size can degrade performance or cause an out-of-memory condition. Workaround: Set the configuration parameter HiveSourceSizeMB to a value greater than 512 (see the example following this table). |
VER-55257 | Client Drivers - ODBC | Issuing a query that returns a large result set and closing the statement before retrieving all of its rows can result in the following error when attempting subsequent operations with the statement: "An error occurred during query preparation: Multiple commands cannot be active on the same connection. Consider increasing ResultBufferSize or fetching all results before initiating another command." Workaround: Set the ResultBufferSize property to 0 or retrieve all rows associated with a result set before closing the statement. |
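For VER-62414, the following is a minimal sketch of the configuration change, assuming the SET_CONFIG_PARAMETER meta-function; the value 1024 is only an illustrative choice above the 512 threshold noted in the workaround, not a recommendation from the issue.

```sql
-- Hedged sketch for VER-62414: set HiveSourceSizeMB above 512 when loads of
-- ORC/Parquet files with very small stripes or rowgroups degrade performance
-- or run out of memory.
SELECT SET_CONFIG_PARAMETER('HiveSourceSizeMB', 1024);  -- 1024 is an illustrative value
```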