Resolved issues
11.0.2-25
Updated 09/05/2023
Issue Key | Component | Description |
---|---|---|
VER-88819 | Data load / COPY, Execution Engine | On Vertica 11.0.2, loading malformed or corrupted parquet files could cause Vertica nodes to crash. Now, these circumstances produce an error. |
11.0.2-24
Updated 08/29/2023
Issue Key | Component | Description |
---|---|---|
VER-88087 | Execution Engine | Queries with large tables stopped the database because the indices that Vertica uses to navigate the tables consumed too much RAM. This issue has been resolved, and now the indices use less RAM. |
VER-88268 | Data load / COPY, Execution Engine | On Vertica 11.0.2, loading malformed or corrupted parquet files could cause Vertica nodes to crash. Now an error is produced in such cases instead. |
VER-88277 | ComplexTypes | Creating a local temp table with a complex data type previously caused a node failure. This issue has been resolved. |
VER-88284 | Performance tests | In some cases, the NVL2 function caused Vertica to crash when it returned an array type. This issue has been resolved. |
11.0.2-23
Updated 07/27/2023
Issue Key | Component | Description |
---|---|---|
VER-87802 | Optimizer | During the planning stage, updates on tables with thousands of columns using thousands of SET USING clauses took a long time. Planning performance for these updates was improved. |
11.0.2-22
Updated 06/26/2023
Issue Key | Component | Description |
---|---|---|
VER‑86941 | Data Export, S3 | Export to Parquet sometimes logged errors in a DC table for successful exports. This issue has been resolved. |
VER-87257 | Optimizer | Merge queries with an INTO...USING clause that calls a subquery would sometimes return an error when merging into a table with Set Using/Default query columns. The issue has been resolved. |
VER-87296 | Optimizer | Queries eligible for TOPK projections that were also eligible for elimination of no-op joins would sometimes exit with internal error. The issue has been resolved. |
11.0.2-21
Updated 06/06/2023
Issue Key | Component | Description |
---|---|---|
VER‑85650 | Database Designer Core | DESIGNER_DESIGN_PROJECTION_ENCODINGS returned with an error if a period was embedded in the design name. This issue has been resolved. |
VER-86343 | Data load / COPY | A check to prevent TOCTOU (time of check to time of use) privilege escalations issued false positives in cases where a file is appended to during a COPY. This issue has been resolved: the check has been updated so it no longer issues a false positive in such situations. |
VER-86348 | Admin Tools, Security | Paramiko has been upgraded to 2.10.1 to address CVE-2022-24302. |
VER-87121 | Kafka Integration, Security | Previously, using a Kafka Notifier with SASL_SSL or SASL_PLAINTEXT would incorrectly use SSL instead. This has been fixed. |
VER-87145 | Catalog Engine | Truncating a local temporary table unnecessarily required a global catalog lock, as temporary tables are session scoped. This issue has been resolved. |
11.0.2-20
Updated 04/12/2023
Issue Key | Component | Description |
---|---|---|
VER-85442 | AMI | The Vertica server AMI and SSH configurations were updated to use more secure ciphers. |
VER-85572 | Catalog Engine | Vertica 10.1 added support for bounds to array types. Addition of these bounds made changes to typemod, which represented column details such as VARCHAR length and timestamp precision. The changes to typemod were not correctly accounted for in the metadata tables odbc_columns and jdbc_columns, which resulted in inconsistent TIMESTAMP column definitions. This issue has been resolved. |
11.0.2-19
Updated 02/21/2023
Issue Key | Component | Description |
---|---|---|
VER‑85078 | UI - Management Console | Two Activity pages in the Management Console, Table Utilization and Table Details, erroneously referenced columns in system tables that have been deleted. This issue has been resolved. |
VER‑85238 | Security | The LDAP authentication parameter starttls is no longer deprecated and should generally be used when the LDAPAuth TLS Configuration is not granular enough to handle your environment. For example, if you have several LDAPAuth servers and only some of them can handle TLS, use ALTER AUTHENTICATION to set starttls to soft in your authentication record to make TLS a preference and not a requirement. For details, see the Vertica documentation on LDAP authentication parameters. |
VER‑85494 | Tuple Mover | The Mergeout strata algorithm had a hidden overflow issue due to incorrect type casting for wide projections with more than 4095 columns. This issue has been resolved. |
11.0.2-18
Updated 12/01/2022
Issue Key | Component | Description |
---|---|---|
VER‑84168 | Procedural Languages | Running ANALYZE_STATISTICS through a stored procedure without superuser privileges sometimes caused the database to go down. This issue has been resolved. |
VER‑84172 | Data Export | Export to delimited did not allow the same character to be used for both 'escapeAs' and 'enclosedBy'. This restriction has been removed. |
VER‑84174 | Control Networking | Previously, addition of standby nodes changed the assignment of control nodes and number of fault groups. This issue has been resolved: standby nodes are no longer set as control nodes. |
VER‑84241 | Optimizer | Queries with string expressions in SELECT statements that were part of other expressions and also contained constant expressions sometimes returned an error. The issue has been resolved. |
VER‑84245 | Backup/DR | The vbr listbackup task failed with an error when nodes in the configuration file's [mapping] section did not match the nodes in the database. This issue has been resolved: listbackup now returns with a warning about nodes that are missing in the configuration file. |
VER‑84412 | Backup/DR | When vbr failed to launch vertica-download-file or vertica-download-file ended with error, vbr raised an error because it tried to load the error message as json. This issue has been resolved with more error handling. |
VER‑84525 | Admin Tools | admintools uses a pseudoterminal (ssh + expect) to transfer data between nodes. When sending data larger than 4k, successful transfer depends on disabling canonical mode in the pseudoterminal. In some cases, this setting was ignored. This issue has been resolved. |
11.0.2-17
Updated 11/16/2022
Issue Key | Component | Description |
---|---|---|
VER‑83717 | Client Drivers - JDBC, Client Drivers - VSQL, Execution Engine | In the 12.0.0 release, the default values of Vertica KeepAlive parameters were non-NULL values, which overrode the equivalent kernel TCP keepalive parameters. This issue has been resolved: now, all Vertica KeepAlive parameters are set to 0 (NULL), so effective settings are obtained from TCP keepalive parameters. |
VER‑83733 | UI - Management Console | timestamptz column microsecond values were truncated in MC. This issue has been resolved: database server timestamps with zone offset values are now properly represented. |
VER‑83751 | Backup/DR | VBR uploads to S3 storage sometimes returned with error messages that the upload failed. This issue has been resolved: on receiving these messages, VBR now retries the upload operation. |
VER‑83859 | Backup/DR | Backup and restore on FIPS-enabled systems using object storage systems such as S3 could fail due to unavailability of the MD5 checksum implementation. These operations do not use MD5 for security, and this issue was resolved by flagging the uses of MD5 as not-for-security, thus enabling its use even with FIPS mode enabled. |
VER‑83922 | Optimizer | When UPDATE referenced a table column with an access policy, the access policy rule was applied twice instead of once. This issue has been resolved: now the access policy rule is applied only once. |
VER‑84000 | Admin Tools, Security | The following pypi packages were upgraded to fix CVE-2021-28363: |
11.0.2-16
Updated 09/28/2022
Issue Key | Component | Description |
---|---|---|
VER‑83307 | Kafka Integration | Under certain conditions, Vertica crashed when exporting data to Kafka. This issue has been resolved. |
VER‑83465 | Security | In some cases, the LDAP Link service could fail to create users due to circular assignment after the first synchronization. This issue has been resolved. |
VER‑83482 | Database Designer Core | When Database Designer processed queries that referenced non-table objects such as projections with pre-aggregated data, it sometimes failed or produced core dumps and database crashes. This issue has been resolved: now Database Designer skips queries of these types. |
VER‑83485 | Security | In some cases, the LDAP Link service could fail to create users due to circular assignment after the first synchronization. This issue has been resolved. |
VER‑83548 | Catalog Engine | System table vs_storage_reference_counts was not updating the counter values in column num_accesses. This issue has been resolved. |
11.0.2-15
Updated 09/12/2022
Issue Key | Component | Description |
---|---|---|
VER‑83072 | Optimizer | Queries with a UNION on EON subclusters resegmented grouped UNION leg outputs, even if UNION legs were segmented on group keys. This issue has been resolved, thereby improving query performance. |
VER‑83089 | Backup/DR | vbr calls to the AWS function DeleteObjects() did not gracefully handle SlowDown errors. This issue has been resolved by changing the boto3 retry logic, so SlowDown errors are less likely to occur. |
VER‑83195 | UI - Management Console | If a database with extended monitoring enabled went down, its Extended Monitoring page displayed no data. This issue has been resolved. |
11.0.2-14
Updated 08/19/2022
Issue Key | Component | Description |
---|---|---|
VER‑82378 | Admin Tools | Adding two nodes to a one-node enterprise database required the database to be rebalanced and made K-safe. Attempts to rebalance and achieve K-safety on the admintools command line ended in an error, while attempts to update K-safety in the admintools GUI also failed. These issues have been resolved. |
VER‑82630 | Client Drivers - VSQL | The vsql client now returns an error code when it encounters an error while running a long query--for example, the query exceeds the resource pool's RUNTIMECAP setting. |
VER‑82636 | Optimizer | Previously, if a partition range projection included a column defined by an expression derived from another table column, querying on the projection sometimes caused the server to crash. This issue has been resolved. |
VER‑82760 | Execution Engine, Optimizer | If join output required sorting and was used by another merge join, but had multiple equivalent sort keys, the sort keys were not fully maintained. In this case, the query returned incorrect results. This issue has been resolved by maintaining the necessary sort keys for merge join input. |
VER‑82775 | Execution Engine | The makeutf8 function sometimes caused undefined behavior when given maximum-length inputs for the argument column type. This issue has been resolved. |
VER‑82805 | Backup/DR | If you restored a backup to another database with a different communal storage location, startup on the target database failed if the database's oid was assigned to another object. This issue has been resolved. |
VER‑82815 | Data Export, Data load / COPY, Hadoop | When reading partition values from file paths, values that contained a '/' character were read incorrectly by Vertica. This issue has been resolved. |
VER‑82850 | DDL - Table | Previously, after a table was renamed, its projection definition was not updated. This caused the exported DDL to contain the new table name while the old table name continued to be used as its alias. This issue has been fixed: the projection no longer references the old table name as an alias after the table is renamed. |
VER‑82881 | DDL | Vertica allows any partition expression that resolves to non-NULL values, even in cases where the expression columns originally contained NULL values. Conversely, Vertica no longer allows a partition expression that produces NULL values even if the expression columns contain no NULL values. |
VER‑82906 | Depot | If the depot had no space to download a new file, the data loading plan did not write the file to the depot. Instead, it viewed the file as already in the depot, and incorrectly returned an error that the file size did not match the catalog. This issue has been resolved: the data loading plan no longer regards the absent file as in the depot. |
VER‑82942 | Client Drivers - JDBC | When JDBC connected to the server with BinaryTransfer and the JVM timezone had a historical or future Daylight Saving Time (DST) schedule, querying DST start dates in the JVM timezone sometimes returned incorrect data. This issue has been resolved; however, performance of BinaryTransfer for DATE data is worse than that of TextTransfer. |
VER‑82969 | Data load / COPY | Flex table parsers did not reserve enough buffer space to correctly process certain inputs to NUMERIC-type columns. This issue has been resolved. |
11.0.2-13
Updated 07/11/2022
Issue Key | Component | Description |
---|---|---|
VER‑82382 | AP-Geospatial | In some cases, ST_GeomFromGeoJSON returned a non-NULL result on NULL input. This issue has been resolved. |
11.0.2-12
Updated 06/23/2022
Issue Key | Component | Description |
---|---|---|
VER‑81853 | UI - Management Console | Management Console was unable to start a database if the URL for communal storage ended with a forward slash ('/'). This issue has been resolved. |
VER‑81863 | Hadoop | The configuration parameter UseServerIdentityOverUserIdentity was omitted from the configuration_parameters system table. It is now included. |
VER‑82037 | Execution Engine | Changed a misleading counter name from "number of bytes read from communal storage" to "number of bytes read from persistent storage". |
VER‑82042 | Execution Engine | Addition of configuration parameter EnablePredicateRemoval introduced performance regressions in certain queries. This regression has been resolved. |
VER‑82246 | Execution Engine | Queries on the VARBINARY(16) data type performed suboptimally. This issue has been resolved. |
11.0.2-11
Updated 05/27/2022
Issue Key | Component | Description |
---|---|---|
VER‑81739 | UI - Management Console | Some cookie features were not set properly in HTTP headers. This issue has been resolved. |
VER‑81770 | Tuple Mover | To get information about the size of storage locations, Tuple Mover used the SysInfo function getStorageLocationForUsage, which performed slowly. Tuple Mover now gets storage location size information from the resource manager. |
VER‑81776 | Optimizer | Previously, DBD functions DESIGNER_ADD_DESIGN_QUERIES and DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY only supported local file system locations as arguments. Now, they also support communal storage locations. |
VER‑81808 | Catalog Engine | In a database with a large number of projections, querying system tables on vs_segment or system views that referenced vs_segment sometimes returned with an error that the MaxParsedQuerySizeMB limit had been exceeded. This issue has been resolved. |
VER‑81850 | Tuple Mover | Mergeout plans converted StorageMerge to StorageUnion+Sort, which slowed down mergeout and adversely affected performance. This issue has been resolved: StorageMerge is no longer converted to StorageUnion+Sort. |
VER‑81914 | Execution Engine | Running analyze_statistics sometimes caused significant, unexpected slowdowns on concurrent queries. This slowdown was most noticeable on subsecond queries with StorageMerge operators in the plan. This issue has been resolved. |
11.0.2-10
Updated 05/05/2022
Issue Key | Component | Description |
---|---|---|
VER‑81386 | Backup/DR | When running Eon mode replication tasks, vbr returned with an error when it tried to connect to Vertica through a non-primary subscriber node. This issue has been resolved. |
VER‑81559 | UI - Management Console | When users set Alert Email Recipients for a Node State Change custom threshold, the email was not sent. This issue has been resolved. |
11.0.2-9
Updated 04/21/2022
Issue Key | Component | Description |
---|---|---|
VER‑81357 | Admin Tools | Attempts to replace a down node in the admintools GUI or on the command line by calling db_replace_node or restart_node with the --force option failed. This issue has been resolved. |
11.0.2-8
Updated 04/18/2022
Issue Key | Component | Description |
---|---|---|
VER‑81044 | UI - Management Console | When an AWS cluster was scaled up, the MC did not load all data when a custom file system was detected, and the AWS key pair was not populated. These issues were resolved. |
VER‑81048 | Execution Engine | Unlike a regular outer join, an event series join pads output with some cached values that represent the columns of the mismatched side. Padding with nulls is required if no previous rows are available for interpolation. To process event series joins, cached rows are always initialized with nulls, then updated with values to represent the mismatched side if applicable. When initializing cached rows with a schema that differed from what the join expected at that stage of the query plan, Vertica occasionally crashed. This issue has been resolved: now cached rows are initially set to null tuples, using the correct tuple schema. |
VER‑81144 | Admin Tools | admintools -t re_ip options -T and -U did not update admintools.conf with the correct control messaging protocol. This issue has been resolved. |
VER‑81254 | Data load / COPY | Operations over directories containing large numbers of external data files on object stores consumed more CPU than expected. This issue affected planning for external table queries, COPY statements, and license auditing. An algorithmic change reduced the amount of CPU time used to perform these operations. |
VER‑81255 | Client Drivers - JDBC, Sessions | Each Vertica node now uses TCP keepalive to detect when it is disconnected from a client and automatically frees the resources allocated for that client. |
VER‑81269 | Data Collector | Setting an empty channel in SET_DATA_COLLECTOR_NOTIFY_POLICY is now disallowed. |
VER‑81278 | UI - Management Console | When the Management Console is installed on a Linux server, some files are created in the /tmp folder. Previously, if these files were removed, the MC database was not restored and clients lost data during an upgrade. This issue has been resolved: the MC database is now restored after an upgrade. |
11.0.2-7
Updated 03/29/2022
Issue Key | Component | Description |
---|---|---|
VER‑80840 | Execution Engine | Vertica evaluates IN expressions in several ways, depending on the right-hand side. If the right-hand side is a subquery, then Vertica evaluates the expression as a join; if it is a list of constant expressions, then Vertica evaluates the expression by building a hash table of constant values; and if the expression is anything else, then Vertica either errors out, or evaluates the expression by rewriting it into a logical disjunction. Detection for the third case was flawed, resulting in an expression being evaluated incorrectly, which sometimes resulted in a crash. This issue has been resolved. |
VER‑80847 | AP-Geospatial | STV_Create_Index could create incorrect indexes on large sets of polygons. Using these indexes could cause a query to fail or bring a node down. This issue has been resolved. |
VER‑80895 | Backup/DR, Migration Tool | After migrating a database from Enterprise to Eon mode, loading data into tables with unsegmented projections could fail with the error "Cannot plan query because no super projections are safe". This issue has been resolved. |
VER‑80898 | Admin Tools | The admintools operation db_add_node failed if the first node in the cluster was down. This failure occurred because after adding the node, db_add_node would try to use the first node as the source node for syncing vertica.conf and spread.conf files. This issue has been resolved: now, Vertica uses any UP node as the source node for syncing. |
VER‑80906 | UI - Management Console | Previously, MC saved a DBD-generated design only when database K-safety was set to 0. This issue has been resolved: now MC saves the design irrespective of the K-safety setting. |
VER‑80909 | UI - Management Console | When creating a cluster, Management Console included unsupported characters in the key pair name when it generated the cluster IAM role name, which blocked the cluster creation process. This issue has been resolved: now, MC removes unsupported characters in the key pair name from the generated IAM role name. |
VER‑80911 | Optimizer | Previously, live aggregate projections had difficulty rewriting AVG() if the argument was of type INT. This issue has been resolved. |
VER‑80914 | Monitoring | The ros_count column in system view projection_storage was removed in release 11.0.2. This column has been restored, as per requests from clients who used it for monitoring purposes. |
VER‑80916 | UI - Management Console | When creating a cluster, Management Console included unsupported characters in the key pair name when it generated the cluster IAM role name, which blocked the creation process. This issue has been resolved: now, MC removes unsupported characters in the key pair name from the generated IAM role name. |
VER‑80918 | Data Networking | Depending on the customer's network settings, TCP connections were occasionally considered alive for nodes that recently went down. This could cause significant query stalls or indefinite session hangs without the ability to cancel. The issue has been resolved with improved TCP connection handling. |
11.0.2-6
Updated 03/04/2022
Issue Key | Component | Description |
---|---|---|
VER‑80630 | Client Drivers - JDBC, Execution Engine | Previously, when using the JDBC and ADO.net drivers with binary encoding, queries that contained NUMERIC literal expressions that used parameterized prepared statement queries--for example, ?/10--could have incorrect precision and scale. This issue has been resolved. |
VER‑80738 | Execution Engine | As a performance optimization, Vertica analyzes whether expressions can be true or false over a range of values, to avoid evaluating that expression for each row in the range. The analysis function regexp_like() sometimes returned incorrect results when the regular expression contained characters that allowed a pattern to match zero times, such as "?" and "*". This issue has been resolved. |
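The zero-occurrence pitfall behind VER-80738 is easy to reproduce outside Vertica. The following Python sketch (illustrative only; it is not Vertica's range-analysis code) shows why quantifiers such as "?" and "*" need special handling: they let a pattern match zero characters, so a successful match no longer implies that any input was consumed.

```python
import re

def matches_empty(pattern: str) -> bool:
    """Return True if the regular expression matches the empty string."""
    return re.fullmatch(pattern, "") is not None

# "?" and "*" allow zero occurrences, so these patterns match even when
# the searched range contributes no characters at all.
print(matches_empty("a?"))  # True
print(matches_empty("a*"))  # True

# "+" requires at least one occurrence, so it cannot match zero times.
print(matches_empty("a+"))  # False
```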
11.0.2-5
Updated 02/17/2022
Issue Key | Component | Description |
---|---|---|
VER‑80474 | Backup/DR | On LINUX_FILESYSTEM, when a snapshot was in progress, Vertica called glob() on all storage containers and then called stat() to check container statistics. If an error occurred between these two operations, the backup operation failed. This issue has been resolved: on LINUX_FILESYSTEM, the snapshot now only calls stat() on storage containers. |
VER‑80477 | ComplexTypes | A bug in the code caused parsing of nested case statements to become inefficient. This resulted in much larger processing time for case statements with multiple levels of nesting. This issue has been resolved: processing of nested case statements is again linear in the number of nesting levels. |
VER‑80498 | Optimizer, Performance tests | analyze_statistics('') on a large catalog sometimes ran out of memory and either failed or triggered an out of memory error in the kernel. This issue has been resolved. |
VER‑80524 | ComplexTypes, Execution Engine | Previously, expressions with "subquery.*" as arguments (not in the top level of a SELECT statement or subquery) could result in undefined behavior. This issue has been resolved. Now, whenever "subquery.*" appears as an expression argument, it expands into a complex ROW expression with one field for each of the subquery's columns. |
VER‑80532 | Kafka Integration | KafkaExport now returns as soon as it detects that all messages were sent to Kafka, reducing execution time by up to 10 seconds. |
VER‑80548 | DDL | Previously, IMPORT_STATISTICS was unable to import statistics from output generated by EXPORT_STATISTICS_PARTITION. This issue has been resolved. |
11.0.2-4
Updated 01/28/2022
Issue Key | Component | Description |
---|---|---|
VER‑80423 | Client Drivers - ADO | Previously, VARCHAR and LONGVARCHAR database types had incorrect SqlType mappings, which caused inconsistent datatype IDs to appear in different contexts. This issue has been resolved: now VARCHAR and LONGVARCHAR data types consistently map to 12 and -1, respectively. |
VER‑80440 | ComplexTypes | Vertica crashed when you added a column with a default value to a table that contained complex type columns. This issue has been resolved: the ALTER TABLE statement now returns with an error message. |
VER‑80451 | Execution Engine, UDX | As of 10.1, inline SQL functions could not be passed a volatile parameter if that parameter appeared multiple times in the function definition. As an inline function, RIGHT returned an error when it was called with a volatile user-defined aggregate function as its first parameter. This issue has been resolved: RIGHT is no longer an inline SQL function, and instead is now defined internally. |
11.0.2-3
Updated 01/25/2022
Issue Key | Component | Description |
---|---|---|
VER‑80234 | Admin Tools | In earlier releases, create_db failed in FIPS mode. This issue has been resolved. |
VER‑80236 | Backup/DR | During backup, vbr sends queries to vsql and reads the results from each query. If the vsql output comprised multiple lines that ended with newline characters, vbr mistakenly interpreted the newline character to indicate the end of output from the current query and stopped reading. As a result, when vbr sent the next query to vsql, it read the unread output from the earlier query as belonging to the current query. This issue has been resolved: vbr now correctly detects the end of each query result and reads it accordingly. |
VER‑80238 | Recovery | The Tuple Mover uses the configuration parameter MaxMrgOutROSSizeMB to determine the maximum size of ROS containers that are candidates for mergeout. After a rebalance operation, Tuple Mover now groups ROS containers in batches that are smaller than MaxMrgOutROSSizeMB. A ROS container that is larger than MaxMrgOutROSSizeMB is merged individually. |
VER‑80371 | DDL, Execution Engine | The server crashed when system queries used large strings as predicate constants—for example, "table_name = ..." or "schema_name = ..." This issue has been resolved. |
11.0.2-2
Updated 01/11/2022
Issue Key | Component | Description |
---|---|---|
VER‑80157 | UI - Management Console | Security vulnerability CVE-2021-45105 was found in earlier versions of the Apache log4j library used by the MC. The library has been updated to resolve this issue. |
VER‑80168 | Kafka Integration | Security vulnerabilities CVE-2021-45105 and CVE-2021-44832 were found in earlier versions of the log4j library used by the Vertica/Apache Kafka integration. The library has been updated to resolve this issue. |
VER‑80242 | UI - Management Console | Security vulnerability CVE-2021-44832 was found in earlier versions of the Apache log4j library used by the MC. The library has been updated to resolve this issue. |
VER‑80252 | Execution Engine | Certain queries with ill-formed predicates that contained a large number of fields caused Vertica to run out of memory while trying to return an error message. This issue has been resolved: now, Vertica can build and return error messages of the type operator-not-found, regardless of length. |
11.0.2-0
Updated 12/23/2021
Issue Key | Component | Description |
---|---|---|
VER‑50526 | Data Removal - Delete, Purge, Partitioning, EON | Eon mode now supports optimized delete and merge for unsegmented projections. |
VER‑77427 | Client Drivers - JDBC | Previously, JDBC would check $JAVA_HOME/security for the truststore and the default password was an empty string. Now, if a truststore path (trustStore) is not specified, JDBC first checks the original location ($JAVA_HOME/security/), then checks the default JVM truststore at $JAVA_HOME/lib/security/. If a truststore password (trustStorePassword) is not specified, it uses the password "changeit." |
VER‑78338 | Execution Engine | System table query_consumption calculated peak_memory_kb by summing peak memory consumption from all nodes. This issue has been resolved: query_consumption now calculates peak_memory_kb by finding the maximum of peak memory that was requested among all nodes. |
VER‑78671 | Execution Engine | Query predicates were reordered at runtime in a way that caused queries to fail if they compared numeric values (integer or float) to non-numeric strings such as 'abc'. This issue has been resolved. |
VER‑78833 | Optimizer | The optimizer occasionally chose a regular projection even when it was directed to use available LAP/Top-K projections. This issue has been resolved. |
VER‑79001 | Admin Tools | Calling "command_host -c start" or "restart_node" to start an EncryptSpreadComm-enabled database now gives a more useful error message, instructing users to call start_db instead. In general, to start or stop an EncryptSpreadComm-enabled database, users should call start_db or stop_db. |
VER‑79057 | Backup/DR | The hash function changed in Vertica releases <= 11.0. As a result, backup manifest file digests that are generated when backing up earlier releases (< 11) do not match the new snapshot manifest file for the same object. This issue has been resolved: in cases like this, Vertica now ignores digest mismatches. |
VER‑79085 | Admin Tools, Database Designer Core | Admintools rebalance_data with ksafe=1 returned an error when database K-safety was set to 0. This issue has been resolved. |
VER‑79135 | Client Drivers - JDBC | Previously, changing the session timezone on the server did not affect the results returned when querying a timestamp. You could access the timestamp by calling getAdjustedTimestamp() on the result set object, but this was not functioning properly in binary transfer mode. This issue has been resolved. In both text and binary transfer modes, calling getAdjustedTimestamp() on a result set that contains a timestamp now properly returns the timestamp based on the timezone session parameter. |
VER‑79141 | Tuple Mover | Users can now set the configuration parameter MaxDVROSPerContainer to any value greater than 1. The stratum threshold is MaxDVROSPerStratum = max(2, (MaxDVROSPerContainer+1)/2). This two-level strata scheme helps avoid excessive DVMergeouts: DVs with fewer deleted rows than (TBL.nrows/MaxDVROSPerStratum) are placed at stratum 0; if the number of these DVs exceeds MaxDVROSPerStratum, they are merged together. As before, larger DVs at stratum 1 are not merged. |
VER‑79146 | EON, ResourceManager | Subcluster level resource pool creation now supports specifying cpuaffinityset and cpuaffinitymode. |
VER‑79180 | UI - Management Console | Previously, the feedback feature had an issue uploading feedback information. The default behavior was changed, and now the feature sends information by email. |
VER‑79236 | Tuple Mover | In previous releases, the DVMergeout plan read storage files during the plan compilation stage to determine the offset and length of each column in the storage files. Accessing this metadata incurred S3 API calls. This issue has been resolved: the Tuple Mover now calculates the offset and length of each column without accessing the storage files. |
VER‑79259 | FlexTable | In rare cases, MapToString returned null when the underlying VMap was not null. This was a display issue with VMaps; no data was lost. This issue has been resolved. |
VER‑79349 | Data Export, S3 | When connecting to S3 over HTTPS, the S3EXPORT function failed to authenticate the server certificate unless the aws_ca_bundle parameter was set. This issue has been resolved: the system CA bundle is now used by default. |
VER‑79350 | Optimizer | Joins with an interpolated predicate did not recognize equivalency between date/time data types that specified and omitted precision--for example, TIME(6) and TIME. This issue has been resolved. |
VER‑79383 | Execution Engine | On rare occasions, a Vertica database crashed when query plans with filter operators were canceled. This issue has been resolved. |
VER‑79475 | Hadoop | The hdfs_cluster_config_check function failed on SSL-enabled HDFS clusters. This issue has been resolved. |
VER‑79510 | Admin Tools | Previously, Vertica used the same catalog path base and data path base for nodes and admintools. Now, admintools uses the data path base as set in admintools.conf, as distinct from the catalog path base. |
VER‑79513 | Optimizer | Under certain conditions, flaws were found in the logic that checked for duplicate keys. This issue has been resolved. |
VER‑79548 | Scrutinize | In release 11.0, the Vertica agent used a new method to transfer files through different nodes, but this change prevented scrutinize from sending zip files. This issue has been resolved. |
VER‑79562 | Depot | If you called copy_partitions_to_table on two tables with the same pinning policies, and the target table had no projection data, the Vertica database server crashed. This issue has been resolved. |
VER‑79742 | Security | Changes to LDAPLinkURL and LDAPLinkSearchBase orphaned LDAPLinked users. This issue has been resolved: users are no longer orphaned if the new URL or search base contains the same set of users, and previously orphaned users are un-orphaned. |
VER‑79757 | Kafka Integration | Certain Kafka notifier errors tried to allocate a memory pool twice and triggered an assert condition. This issue has been resolved. |
VER‑79820 | Backup/DR | During backup, vbr sends queries to vsql and reads the results from each query. If the vsql output was very long and comprised multiple lines that ended with newline characters, vbr mistakenly interpreted the newline character to indicate the end of output from the current query and stopped reading. As a result, when vbr sent the next query to vsql, it read the unread output from the earlier query as belonging to the current query. This issue has been resolved: vbr now correctly detects the end of each query result and reads it accordingly. |
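The two-level DV stratum rule described for VER‑79141 above can be sketched in a few lines. This is a hypothetical illustration using the formula from the release note, not Vertica's implementation; the function and parameter names are invented for clarity.

```python
# Hypothetical sketch of the two-level DV stratum rule (VER-79141).
# Thresholds follow the release note; this is an illustration only.

def max_dv_ros_per_stratum(max_dv_ros_per_container: int) -> int:
    # New formula: max(2, (MaxDVROSPerContainer + 1) / 2)
    return max(2, (max_dv_ros_per_container + 1) // 2)

def assign_stratum(dv_deleted_rows: int, table_nrows: int,
                   max_dv_ros_per_container: int) -> int:
    # Small DVs (fewer deleted rows than TBL.nrows / MaxDVROSPerStratum)
    # go to stratum 0 and may be merged together; larger DVs stay at
    # stratum 1 and, as before, are not merged.
    threshold = table_nrows / max_dv_ros_per_stratum(max_dv_ros_per_container)
    return 0 if dv_deleted_rows < threshold else 1
```

For example, with MaxDVROSPerContainer set to 7, MaxDVROSPerStratum is 4, so on a 1000-row table a DV with 10 deleted rows lands at stratum 0 while one with 500 deleted rows lands at stratum 1.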
11.0.1-3
Updated 12/16/2021
Issue Key | Component | Description |
---|---|---|
VER‑79881 | Tuple Mover |
Users can now set the configuration parameter MaxDVROSPerContainer to any value > 1. The new formula is: MaxDVROSPerStratum = max(2, (MaxDVROSPerContainer+1)/2). A two-level stratum scheme helps avoid excessive DVMergeouts: DVs with fewer deleted rows than (TBL.nrows/MaxDVROSPerStratum) are placed at stratum 0; if the number of these DVs exceeds MaxDVROSPerStratum, they are merged together. As before, larger DVs at stratum 1 are not merged. |
VER‑79886 | Scrutinize | In release 11.0, the Vertica agent used a new method to transfer files through different nodes, but this change prevented scrutinize from sending zip files. This issue has been resolved. |
VER‑79910 | Security | Changes to LDAPLinkURL and LDAPLinkSearchBase orphaned LDAPLinked users. This issue has been resolved: users are no longer orphaned if the new URL or search base contains the same set of users, and previously orphaned users are un-orphaned. |
VER‑80080 | Kafka Integration | This release updates the Kafka integration’s Log4j library. The updated library addresses the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions. |
VER‑80081 | UI - Management Console | This release updates the Management Console’s Log4j library. The updated library addresses the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions. |
11.0.1-2
Updated 11/24/2021
Issue Key | Component | Description |
---|---|---|
VER‑79278 | UI - Management Console | The KPI dropdown on the Manage page did not open correctly, and the Activity Page displayed as unauthorized. This issue has been resolved. |
VER‑79587 | Backup/DR | The hash function changed in Vertica releases <= 11.0. As a result, backup manifest file digests generated when backing up earlier releases (< 11) do not match the new snapshot manifest file for the same object. This issue has been resolved: in such cases, Vertica now ignores digest mismatches. |
VER‑79621 | Optimizer | The optimizer occasionally chose a regular projection even when it was directed to use available LAP/Top-K projections. This issue has been resolved. |
VER‑79667 | FlexTable | In rare cases, MapToString returned null when the underlying VMap was not null. This was an issue with displaying VMaps but no data loss happened. This issue has been resolved. |
VER‑79743 | Hadoop | Previously, hdfs_cluster_config_check failed on SSL-enabled HDFS clusters. This issue has been resolved. |
VER‑79780 | Data Removal - Delete, Purge, Partitioning | Eon mode now supports optimized delete and merge for unsegmented projections. |
11.0.1-1
Updated 11/10/2021
Issue Key | Component | Description |
---|---|---|
VER‑79417 | Tuple Mover | In previous releases, the DVMergeout plan read storage files during the plan compilation stage to determine the offset and length of each column in the storage files. Accessing this metadata incurred S3 API calls. This issue has been resolved: the Tuple Mover now calculates the offset and length of each column without accessing the storage files. |
VER‑79516 | Execution Engine | On rare occasions, a Vertica database crashed when query plans with filter operators were canceled. This issue has been resolved. |
VER‑79525 | Data Export, S3 | When connecting to S3 over HTTPS, the S3EXPORT function failed to authenticate the server certificate unless the aws_ca_bundle parameter was set. This issue has been resolved: the system CA bundle is now used by default. |
VER‑79535 | Admin Tools, Database Designer Front-End/UI | Admintools rebalance_data with ksafe=1 returned an error when database K-safety was set to 0. This issue has been resolved. |
VER‑79540 | Admin Tools, Database Designer Core | Admintools rebalance_data with ksafe=1 returned an error when database K-safety was set to 0. This issue has been resolved. |
VER‑79581 | Admin Tools | Previously, Vertica used the same catalog path base and data path base for nodes and admintools. Now, admintools uses the data path base as set in admintools.conf, as distinct from the catalog path base. |
11.0.1-0
Updated 10/25/2021
Issue Key | Component | Description |
---|---|---|
VER‑67295 | Security | LDAPLink now properly handles nested groups. |
VER‑68210 | UI - Management Console | MC failed to import a database that contained a table named "users" in the public schema. This issue has been resolved. |
VER‑75768 | Data Removal - Delete, Purge, Partitioning | Users could remove a storage location that contained temporary table data with drop_location. This issue has been resolved: if a storage location contains temporary data, drop_location now returns an error and hint. |
VER‑75794 | Data Removal - Delete, Purge, Partitioning | Calling meta-function CALENDAR_HIERARCHY_DAY with the active_years and active_month arguments set to 0 can result in considerable I/O. Now, when you do so, the function returns with a warning. |
VER‑77175 | Execution Engine | Some sessions that used User Defined Load code (or external table queries backed by User Defined Loads) accumulated memory usage through the life of the session. The memory was only used on non-initiator nodes, and was released after the session ended. This issue has been resolved. |
VER‑77583 | Installation: Server RPM/Deb | Several Python scripts in the /vertica/scripts directory used the old Python 2 print command, which prevented them from working with Python 3. They have been updated to the new syntax. |
VER‑77688 | Documentation | The script to backup and restore grants on UDx libraries shown in the documentation topic "Backing Up and Restoring Grants" contained several bugs. It has been corrected. |
VER‑77771 | Documentation | Documentation now informs users not to embed spaces before or after comma delimiters of the ‑‑restore-objects list; otherwise, vbr interprets the space as part of the object name. |
VER‑77818 | Client Drivers - ADO | If you canceled a query and immediately called DataReader.close() without reading all rows that the server sent before the cancel took effect, the necessary clean-up work was not completed, and an exception was incorrectly propagated to the application. This issue has been resolved. |
VER‑77999 | Kafka Integration | When loading a certain amount of small messages, filters such as KafkaInsertDelimiters can return a VIAssert error. This issue has been resolved. |
VER‑78001 | Backup/DR | When performing a full restore, AWS configuration parameters such as AWSEndpoint, AWSAuth, and AWSEnableHttps were overwritten on the restored database with their backup settings. This issue has been resolved: the restore leaves the parameter values unchanged. |
VER‑78313 | Optimizer | The configuration parameter MaxParsedQuerySizeMB is set in MB (as documented). The optimizer can require very large amounts of memory for a given query, much of it consumed by internal objects that the parser creates when it converts the query into a query tree for optimization. Several issues contributing to excessive memory consumption were identified and addressed, including freeing memory allocated to query tree objects when they are no longer in use. |
VER‑78391 | Admin Tools | When installing a package, AdminTools compares the md5sum of the package file and the md5sum indicated in the isinstalled.sql script. If the two md5sum values do not match, AdminTools shows an error message and the installation fails. The error message has been improved to show the mismatch as the reason for failure. |
VER‑78470 | DDL - Table | When a table was dropped, the drop operation was not always logged. This issue has been resolved. |
VER‑78555 | Database Designer Core | Database Designer generated files with wrong permissions. This issue has been resolved. |
VER‑78576 | ComplexTypes | An error in constant folding would sometimes incorrectly fold IN expressions with string value lists. This issue has been fixed. |
VER‑78577 | UI - Management Console | Management Console returned errors when configuring email gateway aliases that included hyphen (-) characters. This issue was resolved. |
VER‑78578 | DDL | If a column was set to a DEFAULT or SET USING expression and the column name embedded a period, attempts to change the column's data type with ALTER TABLE threw a syntax error. This issue has been resolved. |
VER‑78612 | Catalog Engine | If COPY specified to write rejected data to a table, subsequent removal of a node from the cluster rendered that table unreadable. This issue has been resolved: the rejected data can now be read from any node that is up. |
VER‑78619 | Execution Engine | Queries on system table EXTERNAL_TABLE_DETAILS with complex predicates on the table_schema, table_name, or source_format columns either returned wrong results or caused the cluster to crash. This issue has been resolved. |
VER‑78632 | Optimizer | Queries with multiple distinct aggregates sometimes produced wrong results when inputs appeared to be segmented on the same columns as distinct aggregate arguments. The issue has been resolved. |
VER‑78682 | Data Removal - Delete, Purge, Partitioning, DDL | The type metadata for epoch columns in version 9.3.1 and earlier was slightly different than in later versions. After upgrading from 9.3.1, SWAP_PARTITIONS_BETWEEN_TABLES treated those columns as not equivalent and threw an error. This issue has been resolved. Now, when SWAP_PARTITIONS_BETWEEN_TABLES compares column types, it ignores metadata differences in epoch columns. |
VER‑78726 | Optimizer | Partition statistics now support partition expressions that include the date/time function date_trunc(). |
VER‑78730 | DDL | If you profiled a query that included the ENABLE_WITH_CLAUSE_MATERIALIZATION hint, Vertica did not enable materialization for that query. This issue has been resolved. |
VER‑78750 | Catalog Engine | In earlier releases, if you set CatalogSyncInterval to a new value, Vertica did not use the new sync interval until after the next scheduled sync as set by the previous CatalogSyncInterval setting was complete. This issue has been resolved: now Vertica immediately implements the new sync interval. |
VER‑78767 | Optimizer | Attempts to add a column with a default value that included a TIMESERIES clause returned with a ROLLBACK message. This issue has been resolved. |
VER‑78856 | DDL - Projection, Optimizer | Eligible predicates were not pushed down into subqueries with a LIMIT OVER clause. The issue has been resolved. |
VER‑78969 | Hadoop | Exporting Parquet files with over 2^31 rows caused node failures. The limit has now been raised to 2^64 rows. |
VER‑79260 | UI - Management Console | Previously, the feedback feature had an issue uploading feedback information. The default behavior was changed, and now the feature sends information by email. |
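The Python 2-to-3 change behind VER‑77583 above is easy to illustrate: the Python 2 print *statement* is a syntax error under Python 3, so affected scripts must use the print() *function*. The snippet below is a hypothetical example, not one of the actual /vertica/scripts files.

```python
# Illustration of the Python 2 -> 3 fix described for VER-77583.
# Under Python 2, scripts could write:
#     print "node", node_name, "is up"      # SyntaxError in Python 3
# Python 3 requires the print() function instead:

def report(node_name: str) -> str:
    message = "node {} is up".format(node_name)
    print(message)  # valid in Python 3 (and Python 2.6+ with
                    # 'from __future__ import print_function')
    return message
```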
11.0.0-3
Updated 10/06/2021
Issue Key | Component | Description |
---|---|---|
VER‑78945 | UI - Management Console | Management Console returned errors when configuring email gateway aliases that included hyphen (-) characters. This issue has been resolved. |
VER‑78986 | Client Drivers - ADO | If you canceled a query and immediately called DataReader.close() without reading all rows that the server sent before the cancel took effect, the necessary clean-up work was not completed, and an exception was incorrectly propagated to the application. This issue has been resolved. |
VER‑78988 | Optimizer | Attempts to add a column with a default value that included a TIMESERIES clause returned with a ROLLBACK message. This issue has been resolved. |
VER‑79042 | Optimizer | Partition statistics now support partition expressions that include the date/time function date_trunc(). |
VER‑79045 | Optimizer | Queries with multiple distinct aggregates sometimes produced wrong results when inputs appeared to be segmented on the same columns as distinct aggregate arguments. The issue has been resolved. |
VER‑79065 | Kafka Integration | When loading a certain amount of small messages, filters such as KafkaInsertDelimiters can return a VIAssert error. This issue has been resolved. |
11.0.0-2
Updated 9/23/2021
Issue Key | Component | Description |
---|---|---|
VER‑78927 | Security |
In Vertica 11.0, TLS configurations were greatly simplified for both LDAP Link and LDAP Authentication. As part of that simplification, the LDAP StartTLS parameter is now set automatically based on the TLSMODE and no longer needs to be set separately via a configuration parameter. Previously, StartTLS was incorrectly enabled when using the ldaps:// protocol regardless of the TLSMODE. This issue has been resolved. |
11.0.0-1
Updated 9/14/2021
Issue Key | Component | Description |
---|---|---|
VER‑78586 | Database Designer Core | In recent releases, Database Designer generated SQL files with permissions of 600 instead of 666. This issue has been resolved. |
VER‑78663 | Execution Engine | Queries on system table EXTERNAL_TABLE_DETAILS with complex predicates on the table_schema, table_name, or source_format columns either returned wrong results or caused the cluster to crash. This issue has been resolved. |
VER‑78668 | Backup/DR | When performing a full restore, AWS configuration parameters such as AWSEndpoint,AWSAuth and AWSEnableHttps were overwritten on the restored database with their backup settings. This issue has been resolved: the restore leaves the parameter values unchanged. |
VER‑78712 | Optimizer | The configuration parameter MaxParsedQuerySizeMB is set in MB (as documented). The optimizer can require very large amounts of memory for a given query, much of it consumed by internal objects that the parser creates when it converts the query into a query tree for optimization. Several issues contributing to excessive memory consumption were identified and addressed, including freeing memory allocated to query tree objects when they are no longer in use. |
11.0.0-0
Updated 8/11/2021
Issue Key | Component | Description |
---|---|---|
VER‑68406 | Tuple Mover | When Mergeout Cache is enabled, the dc_mergeout_requests system table now contains valid transaction ids instead of zero. |
VER‑71064 | Catalog Sync and Revive, Depot, EON | Previously, when a node belonging to a secondary subcluster restarted, it lost files in its depot. This issue has been fixed. |
VER‑72596 | Data load / COPY, Security | The COPY option REJECTED DATA to TABLE now properly distributes data between tables with identical names belonging to different schemas. |
VER‑73751 | Tuple Mover | The Tuple Mover logged a large number of PURGE requests on a projection while another MERGEOUT job was running on the same projection. This issue has been resolved. |
VER‑73773 | Tuple Mover | Previously, the Tuple Mover attempted to merge all eligible ROS containers without considering resource pool capacity. As a result, mergeout failed if the resource pool could not handle the mergeout plan size. This issue has been resolved: the Tuple Mover now takes into account resource pool capacity when creating a mergeout plan, and adjusts the number of ROS containers accordingly. |
VER‑74554 | Tuple Mover | Occasionally, the Tuple Mover dequeued DVMERGEOUT and MERGEOUT requests simultaneously and executed only the DVMERGEOUT requests, leaving the MERGEOUT requests pending indefinitely. This issue has been resolved: now, after completing execution of any DVMERGEOUT job, the Tuple Mover always looks for outstanding MERGEOUT requests and queues them for execution. |
VER‑74615 | Hadoop | Fixed a bug in predicate pushdown on Parquet files stored on HDFS. For a Parquet file spanning multiple HDFS blocks, rowgroups located on blocks other than the starting HDFS block were not pruned; in some corner cases, the wrong rowgroup was pruned, leading to incorrect results. |
VER‑74619 | Hadoop | Due to compatibility issues between different open source libraries, Vertica failed to read ZSTD-compressed Parquet files generated by some external tools (such as Impala) when a column contained all NULLs. This issue has been resolved: Vertica now reads such files without error. |
VER‑74814 | Hadoop | The open source library that Vertica uses to generate Parquet files buffered null values inefficiently in memory. This caused high memory usage, especially when the exported data contained many nulls. The library has been patched to buffer null values in encoded format, resulting in optimized memory usage. |
VER‑74974 | Database Designer Core | Under certain circumstances, Database Designer designed projections that could not be refreshed by refresh_columns(). This issue has been resolved. |
VER‑75139 | DDL | Adding columns to large tables with many columns on an Eon-mode database was slow and incurred considerable resource overhead, which adversely affected other workloads. This issue has been resolved. |
VER‑75496 | Depot | System tables continued to report that a file existed in the depot after it was evicted, which caused queries on that file to return "File not found" errors. This issue has been resolved. |
VER‑75715 | Backup/DR | When restoring objects in coexist mode, the STDOUT now contains the correct schema name prefix. |
VER‑75778 | Execution Engine | With Vertica running on machines with very high core counts, complex memory-intensive queries featuring an analytical function that fed into a merge operation sometimes caused a crash if the query ran in a resource pool where EXECUTIONPARALLELISM was set to a high value. This issue has been resolved. |
VER‑75783 | Optimizer | The NO HISTOGRAM event was set incorrectly on the dc_optimizer_events table's hidden epoch column. As a result, the suggested_action column was also set incorrectly to run analyze_statistics. This issue is resolved: the NO HISTOGRAM event is no longer set on the epoch column. |
VER‑75806 | UI - Management Console | COPY-type queries have been added to the list of queries displayed for Completed Queries on the Query Monitoring Activity page. |
VER‑75864 | Data Export | Previously, during export to Parquet, Vertica wrote the time portion of each timestamp value as a negative number for all timestamps before the Postgres epoch date (2000-01-01). As a result, some tools (such as Impala) could not load such timestamps from Parquet files exported by Vertica. This issue has been resolved. |
VER‑75881 | Security | Vertica no longer takes a catalog lock during authentication, after the user's password security algorithm has been changed from MD5 to SHA512. This is due to removing the updating of the user's salt, which is not used for MD5 hash authentication. |
VER‑75898 | Execution Engine | Calls to export_objects sometimes allocated considerable memory while user access privileges on the object were repeatedly checked. The accumulated memory was not freed until export_objects returned, which sometimes caused the database to go down with an out-of-memory error. This issue has been resolved: now memory is freed more promptly so it does not excessively accumulate. |
VER‑75933 | Catalog Engine | The export_objects meta-function could not be canceled. This issue has been resolved. |
VER‑76094 | Data Removal - Delete, Purge, Partitioning | If you created a local storage location for USER data on a cluster that included standby nodes, attempts to drop the storage location returned with an error that Vertica was unable to drop the storage location from standby nodes. This issue has been resolved. |
VER‑76125 | Backup/DR | The access permission check for the S3 bucket root during backup/restore has been removed. Users with access permissions to specific bucket directories can now perform backup/restore in those directories without getting AccessDenied errors. |
VER‑76131 | Kafka Integration | Updated documentation to mention support for SCRAM-SHA-256/512. |
VER‑76200 | Admin Tools | When adding a node to an Eon-mode database with Administration Tools, users were prompted to rebalance the cluster, even though this action is not supported for Eon. This issue was resolved: now Administration Tools skips this step for an Eon database. |
VER‑76244 | Depot | Files that might be candidates for pruning—for example, due to expression analysis or not read at all as with top-K queries—were unnecessarily read into the depot, and adversely affected depot efficiency and performance. This problem has been resolved: now, the depot only fetches from shared storage files that are read by a statement. |
VER‑76349 | Optimizer | The optimizer combines multiple predicates into a single-column Boolean predicate where subqueries are involved, to achieve predicate pushdown. The optimizer failed to properly handle cases where two NotNull predicates were combined into a single Boolean predicate, and returned an error. This issue has been resolved. |
VER‑76384 | Execution Engine | In queries that used variable-length-optimized joins, certain types of joins incurred a small risk of a crashing the database due to a problem when checking for NULL join keys. This issue has been resolved. |
VER‑76424 | Execution Engine | If a query includes 'count(s.*)' where s is a subquery, Vertica expects multiple outputs for s.*. Because Vertica does not support multi-valued expressions in this context, the expression tree represents s.* as a single record-type variable. The mismatch in the number of outputs could result in database failure. In cases like this, Vertica now returns an error message that multi-valued expressions are not supported. |
VER‑76449 | Sessions | Vertica now better detects situations where multiple Vertica processes are started at the same time on the same node. |
VER‑76511 | Sessions, Transactions | Previously, a single-node transaction sent a commit message to all nodes even if it had no content to commit. This issue has been resolved: a single-node transaction with no content to commit now commits locally. |
VER‑76543 | Optimizer, Security | For a view A.v1, its base table B.t1, and an access policy on B.t1: users no longer require a USAGE privilege on schema B to SELECT view A.v1. |
VER‑76584 | Security | Vertica now automatically creates needed default key projections for a user with DML access when that user performs an INSERT into a table with a primary key and no projections. |
VER‑76815 | Optimizer | Using unary operators as GROUP BY or ORDER BY elements in WITH clause statements caused Vertica to crash. The issue is now resolved. |
VER‑76824 | Optimizer | If you called a view and the view's underlying query invoked a UDx function on a table with an argument of '*' (all columns), Vertica crashed if the queried table later changed--for example, columns were added to it. The issue has been resolved: the view now returns the same results. |
VER‑76851 | Data Export | Added support for exporting UUID types via s3export. Before, exporting data with UUID types using s3export would sometimes crash the initiator node. |
VER‑76874 | Optimizer | Updating the result set of a query that called the volatile function LISTAGG resulted in unequal row counts among projections of the updated table. This issue has been resolved. |
VER‑76952 | DDL - Projection | In previous releases, users were unable to alter the metadata of any column in tables that had a live aggregate or Top-K projection, regardless of whether they participated in the projection itself. This issue has been resolved: now users can change the metadata of columns that do not participate in the table's live aggregate or Top-K projections. |
VER‑76961 | Spread | Spread now correctly detects old tokens as duplicates. |
VER‑77006 | Machine Learning | The PREDICT_SVM_CLASSIFIER function could cause the database to go down when provided an invalid value for its optional "type" parameter. The function now returns an error message indicating that the entered value was invalid and notes that valid values are "response" and "probability." |
VER‑77007 | Catalog Engine | Standby nodes did not get changes to the GENERAL resource pool when it replaced a down node. This problem has been resolved. |
VER‑77026 | Execution Engine |
Vertica was unable to optimize queries on v_internal tables in certain cases where equality predicates (with operator =) filtered on the relname or nspname columns; for example, Vertica was unable to optimize the equality predicate nspname = 'xyz'. In all these cases, the queries are now optimized as expected. |
VER‑77134 | Backup/DR | Attempts to execute a CREATE TABLE AS statement on a database while it is the target of a replication operation return an error. The error message has been updated so it clearly indicates the source of the problem. |
VER‑77173 | Monitoring | Startup.log now contains a stage identifying when the node has received the initial catalog. |
VER‑77190 | Optimizer | SELECT clause CASE expressions with constant conditions and string results that were evaluated to shorter strings sometimes produced an internal error when participating in joins with aggregation. This issue has been resolved. |
VER‑77199 | Kafka Integration | The Kafka Scheduler now allows an initial offset of -3, which indicates to begin reading from the consumer group offset. |
VER‑77227 | Admin Tools | Previously, admintools reported it could not start the database because it was unable to read database catalogs, but did not provide further details. This issue has been resolved: the message now provides details on the failure's cause. |
VER‑77265 | Catalog Sync and Revive | More detailed messages are now provided when permission is denied. |
VER‑77278 | Catalog Engine | If you called close_session() while running analyze_statistics() on a local temporary table, Vertica sometimes crashed. This issue has been resolved. |
VER‑77387 | Directed Query, Optimizer | If the CTE of a materialized WITH clause was unused and referenced an unknown column, Vertica threw an error. This behavior was inconsistent with the behavior of an unmaterialized WITH clause, where Vertica ignored unused CTEs and did not check them for errors. This problem has been resolved: in both cases, Vertica now ignores all unused CTEs, so they are never checked for errors such as unknown columns. |
VER‑77394 | Execution Engine |
It was unsafe to reorder query predicates when the following conditions were both true: the query contained a predicate on a projection's leading sort order columns that restricted those columns to constant values, and the leading columns were not run-length encoded; and a SIPS predicate from a merge join was applied to non-leading sort order columns of that projection. This issue has been resolved: query predicates are no longer reordered under these conditions. |
VER‑77584 | Execution Engine | Before evaluating a query predicate on rows, Vertica gets the min/max of the expression to determine what rows it can first prune from the queried dataset. An incorrect check on a block's null count caused Vertica to use the maximum value of an all-null block, and mistakenly prune rows that otherwise would have passed the predicate. This issue has been resolved. |
VER‑77695 | Admin Tools | In earlier releases, starting a database with the start_db --force option could delete the data directory if the user lacked read/execute permissions on the data directory. Now, if the user lacks permissions to access the data directory, admintools cancels the start operation. If the user has correct permissions, admintools gives users 10 seconds to abort the start operation. |
VER‑77814 | Optimizer | Queries that included the TABLESAMPLE option were not supported for views. This issue has been resolved: you can now query views with the TABLESAMPLE option. |
VER‑77904 | Admin Tools | If admintools called create_db and the database creation process was lengthy, admintools sometimes prompted users to confirm whether to continue waiting. If the user did not answer the prompt--for example, when create_db was called by a script--create_db completed execution without creating all database nodes and properly updating the configuration file admintools.conf. In this case, the database was incomplete and unusable. Now, the prompt times out after 120 seconds. If the user doesn't respond within that time period, create_db exits. |
VER‑77905 | Execution Engine | A change in Vertica 10.1 prevented volatile functions from being called multiple times in an SQL macro. This change affected the throw_error function. The throw_error function is now marked immutable, so SQL macros can call it multiple times. |
VER‑77962 | Catalog Engine | Vertica now restarts properly for nodes that have very large checkpoint files. |
VER‑78251 | Data Networking | In rare circumstances, the socket on which Vertica accepts internal connections could erroneously close and send a large number of socket-related error messages to vertica.log. This issue has been fixed. |
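The block-pruning check fixed in VER‑77584 above can be sketched as follows. This is a hypothetical illustration of min/max pruning with a null-count guard; the types and function names are invented and do not reflect Vertica's internal implementation.

```python
# Hypothetical sketch of min/max block pruning with a null-count guard
# (VER-77584). The bug was trusting the stored max of an all-null block;
# the guard below treats such blocks specially instead.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BlockStats:
    row_count: int
    null_count: int
    min_val: Optional[int]
    max_val: Optional[int]

def may_contain_match(block: BlockStats, lower_bound: int) -> bool:
    """Return True if the block might hold rows with value >= lower_bound."""
    if block.null_count == block.row_count:
        # All-null block: its min/max metadata is meaningless, and NULLs
        # cannot satisfy a value comparison, so the block can be pruned.
        return False
    # Otherwise the stored max is valid and can be used for pruning.
    return block.max_val is not None and block.max_val >= lower_bound
```

Using stale min/max metadata without the null-count guard is what mistakenly pruned rows that would have passed the predicate.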