Known issues
Release notes for known issues in Vertica 10.0.x.
Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.
10.0.1-0
Updated: 08/19/2020
Issue Key | Component | Description |
---|---|---|
VER-41895 | Admin Tools | On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster. |
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load. |
VER-48041 | Admin Tools | On some systems, admintools occasionally cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. |
VER-61069 | Execution Engine | In very rare circumstances, if a vertica process crashes during shutdown, the remaining processes might hang indefinitely. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that contain both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split can produce one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover (TM). |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2: CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. |
VER-64916 | Kafka Integration | When Vertica exports data collector information to Kafka via a notifier, the serialization logic does not properly handle Vertica internal data type translation. When importing or copying that data back into the Management Console, Vertica uses the raw data in Kafka and inserts garbled data into the extended monitoring database. |
VER-69803 | Hadoop | The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP. |
VER-70468 | Documentation | Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy takes effect only if you also set the global load balancing policy to ROUNDROBIN (see the sketch after this table). This is true even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN. |
VER-71761 | ComplexTypes | The result of indexing into a multidimensional array cannot be compared to an untyped string literal without a cast operation. |
VER-72380 | ComplexTypes | Insert-selecting an array[varchar] type can result in an error when the source column varchar element length is smaller than the target column. |
VER-73715 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing. |
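A minimal sketch for VER-70468 above, assuming a hypothetical subcluster analytics_sc and load balance group analytics_group: the group-level ROUNDROBIN policy takes effect only after the global load balancing policy is also set to ROUNDROBIN, for example with the SET_LOAD_BALANCE_POLICY function.

```sql
-- Hypothetical setup: a load balance group with a ROUNDROBIN policy for a subcluster.
CREATE LOAD BALANCE GROUP analytics_group
    WITH SUBCLUSTER analytics_sc FILTER '0.0.0.0/0'
    POLICY 'ROUNDROBIN';

-- LOAD_BALANCE_GROUPS now reports the group's policy as ROUNDROBIN, but per
-- VER-70468 the group policy has no effect until the global policy matches it:
SELECT SET_LOAD_BALANCE_POLICY('ROUNDROBIN');
```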
10.0.0-0
Updated: 05/07/2020
Issue Key | Component | Description |
---|---|---|
VER-76268 | Optimizer | Executing EXPLAIN COPY on a new table fails. |
VER-72422 | Nimbus | In Eon Mode, library files that are queued for deletion are not removed from S3 or GCS communal storage. Running FLUSH_REAPER_QUEUE and CLEAN_COMMUNAL_STORAGE does not remove the files. |
VER-72380 | ComplexTypes | Insert-selecting an array[varchar] type can result in an error when the source column varchar element length is smaller than the target column. |
VER-71761 | ComplexTypes | The result of indexing into a multidimensional array cannot be compared to an untyped string literal without a cast operation. |
VER-70468 | Documentation | Creating a load balance group for a subcluster with a ROUNDROBIN load balance policy takes effect only if you also set the global load balancing policy to ROUNDROBIN. This is true even though the LOAD_BALANCE_GROUPS system table shows the group's policy as ROUNDROBIN. |
VER-69803 | Hadoop | The infer_external_table_ddl meta-function will hang if you do not set authentication information in the current session when analyzing a file on GCP. |
VER-69797 | ComplexTypes | When referencing elements from an array, Vertica cannot cast to other data types without an explicit reference to the original data type. |
VER-69442 | Client Drivers - VSQL, Supported Platforms | On RHEL 8.0, VSQL has an additional dependency on the libnsl library. Attempting to use VSQL in Vertica 9.3SP1 or 10.0 without first installing libnsl fails and produces the following errors: "Could not connect to database (EOF received)" and "/opt/vertica/bin/vsql: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory". |
VER-67228 | AMI, License | An Eon Mode database created in one hourly listing can only be revived into another hourly listing with the same operating system. For example, an Eon Mode database created in the Red Hat hourly listing cannot be shut down and then revived into a cluster created using the Amazon Linux listing. |
VER-64352 | SDK | Under special circumstances, a sequence of statements to create and drop Python libraries fails. For example, the following session statements attempt to create Python libraries lib1 and lib2: CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python'; DROP LIBRARY lib1; CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python'; Here, lib1 is a Python library that imports module numpy, while lib2 imports module sklearn, which relies on numpy. After lib1 is dropped, the CREATE LIBRARY statement for lib2 fails. (The sequence is shown formatted after this table.) |
VER-63720 | Backup/DR | Vertica refuses to perform a full restore if the number of nodes participating in the restore does not match the number of nodes mapped in the configuration file. |
VER-62983 | Hadoop | When hcatalogconnector schemas are created with custom_partitions enabled, poor performance has been observed when there are many (500+) partitions. By default, custom_partitions are disabled on hcatalogconnector schemas. |
VER-61420 | Data Removal - Delete, Purge, Partitioning | Partition operations, such as move_partitions_to_table, must split storage containers that contain both partitions that match the operation and partitions that do not. Version 9.1 introduced an inefficiency whereby such a split can produce one more storage container than necessary. Any extraneous containers created this way are eventually merged by the Tuple Mover (TM). |
VER-61069 | Execution Engine | In very rare circumstances, if a vertica process crashes during shutdown, the remaining processes might hang indefinitely. |
VER-60409 | AP-Advanced, Optimizer | APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (for example, more than 1000) and also outputs many columns may fail with the error "Request size too big" due to additional memory requirements during parsing. |
VER-48041 | Admin Tools | On some systems, admintools occasionally cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. |
VER-48020 | Hadoop | Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load. |
VER-41895 | Admin Tools | On some systems, admintools fails to parse output while running SSH commands on hosts in the cluster. |
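For readability, the statement sequence from VER-64352 above, split across lines; the library paths are vsql variables, exactly as in the description:

```sql
-- lib1 imports module numpy; lib2 imports module sklearn, which relies on numpy.
CREATE LIBRARY lib1 AS :lib1_path DEPENDS :numpy_path LANGUAGE 'Python';
DROP LIBRARY lib1;
-- After lib1 is dropped, this CREATE LIBRARY statement fails:
CREATE LIBRARY lib2 AS :lib2_path DEPENDS :sklearn_path LANGUAGE 'Python';
```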