Resolved issues in Vertica release 11.1.x.
Issue Key |
Component |
Description |
VER-87777 |
Admin Tools, Data Collector |
If you revived a database and the EnableDataCollector parameter was set to 1, you could not start the database after it was revived. This issue was resolved. To start the database, disable the cluster lease check. |
VER-87878 |
Catalog Engine |
Previously, when a cluster lost quorum and switched to read-only mode or stopped, some transaction commits in the queue might get processed. However, due to the loss of quorum, these commits might not have been persisted. These "transient transactions" were reported as successful, but they were lost when the cluster restarted.
Now, when Vertica detects a transient transaction, it issues a WARNING so you can diagnose the problem, and it creates an event in ACTIVE_EVENTS that describes what happened.
|
VER-87969 |
SDK |
Previously, you could not compile Vertica UDx builds with GCC compiler version 13 and higher. This issue has been resolved. |
VER-87972 |
Kafka Integration |
In some circumstances, there were long timeouts or the process might hang indefinitely when the KafkaAvroParser accessed the Avro Schema Registry. This issue has been resolved. |
VER-88007 |
Execution Engine |
Queries with large tables stopped the database because the indices that Vertica uses to navigate the tables consumed too much RAM. This issue has been resolved, and now the indices use less RAM. |
VER-88128 |
EON |
The sync_catalog function failed when MinIO communal storage did not meet read-after-write and list-after-write consistency guarantees. A check was added to bypass this restriction. However, if possible, users should ensure their MinIO storage is configured for read-after-write and list-after-write consistency. |
VER-88207 |
Optimizer |
In some query plans with segmentation across multiple nodes, Vertica would get an internal optimizer error when trying to prune out unused data edges from the plan. This issue has been resolved. |
VER-88228 |
EON |
In rare circumstances, the automatic sync of catalog files to the communal storage stopped working on some nodes. Users could still manually sync with sync_catalog(). The issue has been resolved. |
VER-88283 |
Performance tests |
In some cases, the NVL2 function caused Vertica to crash when it returned an array type. This issue has been resolved. |
Issue Key |
Component |
Description |
VER‑85163 |
Recovery |
Recovery from scratch reset the checkpoint epoch (CPE) to 0, but mergeout was disabled for
projections whose CPE was 0. This restriction has been removed to allow concurrent data loading. |
VER-85649 |
Database Designer Core |
DESIGNER_DESIGN_PROJECTION_ENCODINGS returned with an error if a period was embedded in the
design name. This issue has been resolved. |
VER-86097 |
Execution Engine |
Predicate reordering optimization moved a comparison against a constant ahead of SIP filters,
but the SIP filter needed to be evaluated after the constant predicate. This issue has been
resolved: now, predicates are not reordered when a stateful SIP filter needs to be evaluated in
a particular order. |
VER-86130 |
Optimizer |
Queries with deeply nested expressions caused nodes to crash due to stack overflow. This issue
has been resolved: now, if a stack overflow occurs during a non-essential stage of query
processing--for example, pretty-printing expressions--the query runs normally; otherwise, the
query fails with an error message stating that it contains an expression too large to analyze. |
VER-86281 |
Execution Engine |
When pushing down predicates of a query that involved a WITH clause being turned into a shared
temp relation, an IS NULL predicate on the preserved side of a left outer join was pushed below
the join. As a result, rows that should have been filtered out were erroneously included in the
result set. This issue has been resolved by updating the predicate pushdown logic. |
VER-86317 |
Optimizer |
LIMIT k OVER (...) clauses incorrectly estimated output rows as k, where k was calculated for
every partition in the OVER clause. This issue has been resolved: the estimate of output rows is
now derived from the OVER clause. |
VER-86319 |
Recovery |
When you applied a swap partition event to one table, the other table involved in the same swap
partition event was removed from the dirty transactions list. This issue has been resolved. Now,
both tables involved in the same swap partition event remain in the dirty transactions list. |
VER-86341 |
AP-Geospatial |
If you nested multiple geospatial functions when reading from a Parquet file, there was an issue
finding usable memory that made the database crash. This issue has been resolved. |
VER-86342 |
Data load / COPY |
A check to prevent TOCTOU (time of check to time of use) privilege escalations issued false
positives in cases where a file is appended to during a COPY. This issue has been resolved: the
check has been updated so it no longer issues a false positive in such situations. |
VER-86347 |
Admin Tools, Security |
Paramiko has been upgraded to 2.10.1 to address CVE-2022-24302. |
Issue Key |
Component |
Description |
VER‑82766 |
Optimizer |
Queries with complex WITH clauses that used materialized WITH were running out of memory at the
parsing stage. This issue has been resolved. |
VER‑82814 |
Data Export, Data load / COPY, Hadoop |
When reading partition values from file paths, values that contained a '/' character were being
read incorrectly by Vertica. This issue has been resolved. |
VER‑82849 |
DDL - Table |
After renaming a table with ALTER TABLE, the DDL for its projections continued to reference the
previous name as an alias of the new name. This issue has been resolved: when a table is
renamed, the DDL of its projections is also updated. |
VER‑82865 |
DDL |
Some UNION ALL queries failed with the error 'Temp relation descriptor not provided.' This issue
has been resolved. |
VER‑82880 |
DDL |
Vertica now allows any partition expression that resolves to non-NULL values, even in cases
where the expression columns originally contained NULL values. Conversely, Vertica no longer
allows a partition expression that produces NULL values, even if the expression columns contain
no NULL values. |
VER‑82905 |
Depot |
If the depot had no space to download a new file, the data loading plan did not write the file
to the depot. Instead, it viewed the file as already in the depot, and incorrectly returned an
error that the file size did not match the catalog. This issue has been resolved: the data
loading plan no longer regards the absent file as in the depot. |
VER‑82941 |
Client Drivers - JDBC |
When JDBC connected to the server with BinaryTransfer and the JVM timezone had a historical or
future Daylight Saving Time (DST) schedule, querying DST start dates in the JVM timezone
sometimes returned incorrect data. This issue has been resolved; however, performance of
BinaryTransfer for DATE data is worse than that of TextTransfer. |
VER‑82968 |
Data load / COPY |
Flex table parsers did not reserve enough buffer space to correctly process certain inputs to
NUMERIC-type columns. This issue has been resolved. |
VER‑82981 |
Execution Engine |
Queries with multipart plans that produced large temporary relations sometimes failed to clean
up temporary files. This issue has been resolved. |
VER‑83052 |
Cloud - Amazon, UI - Management Console |
When a previously used IP address from the subnet was reused, Management Console failed to add
hosts to the cluster. This issue is now resolved. |
VER‑83053 |
Cloud - Amazon, UI - Management Console |
After using Management Console to revive a database in the cloud, the database sometimes failed
to start because of a lost+found directory in the catalog path. This issue is now resolved. |
VER‑83070 |
Admin Tools |
pexpect closed potential file descriptors inefficiently when the nofile limit was high,
attempting to close every file descriptor in the range whether or not it was open. This issue
has been addressed: when the number of file descriptors is ≤2000, pexpect uses closerange to
close them, as it performs better than calling close in a loop. When the number of file
descriptors is >2000, pexpect iterates over the file descriptors in /proc/self/fd, which
contains open file descriptors only, and closes them. |
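The two-strategy approach described above can be sketched in Python. This is an illustrative model, not pexpect's actual code; the 2000-descriptor cutoff comes from the note, and the function names are invented:

```python
import os

FD_CUTOFF = 2000  # threshold stated in the release note

def fd_close_strategy(fd_limit, cutoff=FD_CUTOFF):
    """Pick the cheaper way to close descriptors for a given nofile limit."""
    # Small range: one closerange() syscall beats calling close() in a loop.
    # Large range: scanning /proc/self/fd touches only open descriptors.
    return "closerange" if fd_limit <= cutoff else "proc_fd_scan"

def close_inherited_fds(fd_limit, lowest=3):
    """Close all descriptors >= lowest using the chosen strategy (Linux only)."""
    if fd_close_strategy(fd_limit) == "closerange":
        os.closerange(lowest, fd_limit + 1)
    else:
        for name in os.listdir("/proc/self/fd"):  # lists open fds only
            fd = int(name)
            if fd >= lowest:
                try:
                    os.close(fd)
                except OSError:
                    pass  # fd closed concurrently (e.g., the listdir fd)
```

The chooser avoids millions of wasted close() calls when the nofile limit is very high.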
VER‑83071 |
Optimizer |
Queries with a UNION on EON subclusters resegmented grouped UNION leg outputs, even if UNION
legs were segmented on group keys. This issue has been resolved, thereby improving query
performance. |
VER‑83088 |
Backup/DR |
vbr calls to the AWS function DeleteObjects() did not gracefully handle SlowDown errors. This
issue has been resolved by changing the boto3 retry logic, so SlowDown errors are less likely to
occur. |
Issue Key |
Component |
Description |
VER‑82377 |
Admin Tools |
Adding two nodes to a one-node Enterprise database required the database to be rebalanced and
made K-safe. Attempts to rebalance and achieve K-safety on the admintools command line ended in
an error, and attempts to update K-safety in the admintools GUI also failed. These issues have
been resolved. |
VER‑82380 |
ComplexTypes, FlexTable |
When parsing arrays of primitive types using the JSON parser, COPY failed with a message about
records being too large. This issue has been resolved. |
VER‑82381 |
AP-Geospatial |
In some cases, ST_GeomFromGeoJSON returned a non-null result on null input. This issue has been
resolved. |
VER‑82629 |
Client Drivers - VSQL |
The vsql client returned no error code if it encountered an error while running a long query--for
example, exceeding a resource pool's RUNTIMECAP setting. This issue has been resolved. |
VER‑82635 |
Optimizer |
Previously, if a partition range projection included a column defined by an expression derived
from another table column, querying on the projection sometimes caused the server to crash. This
issue has been resolved. |
VER‑82757 |
ComplexTypes |
The array_count function read all fields of a complex array from storage. Doing so resulted in
high cost estimates and sub-optimal query plans. This issue has been resolved: array_count now
reads only one field. |
VER‑82758 |
Execution Engine |
The optimizer removes predicates from a query if expression analysis finds them to be true for
entire storage containers. When the same predicate appeared multiple times in a given query, and
one of those predicates passed expression analysis, sometimes the optimizer removed all
instances of that predicate. This issue has been resolved: the optimizer no longer removes
multiple instances of the same predicate when one of them passes expression analysis. |
VER‑82759 |
Execution Engine, Optimizer |
If join output required sorting and was used by another merge join but had multiple equivalent
sort keys, the sort keys were not fully maintained. In this case, the query returned incorrect
results. This issue has been resolved by maintaining the necessary sort keys for merge join
input. |
VER‑82774 |
Execution Engine |
The makeutf8 function sometimes caused undefined behavior when given maximum-length inputs for
the argument column type. This issue has been resolved. |
VER‑82806 |
Backup/DR |
If you restored a backup to another database with a different communal storage location, startup
on the target database failed if the database's OID was assigned to another object. This issue
has been resolved. |
Issue Key |
Component |
Description |
VER‑53679 |
Optimizer |
Previously, live aggregate projections had difficulty rewriting AVG() if the argument was of
type INT. This issue has been resolved. |
VER‑62937 |
Security |
Previously, when configuring Kerberos authentication, not configuring the KerberosRealm
parameter would cause the database to go down. This has been fixed; if KerberosRealm is not
specified, Vertica automatically retrieves the realm from the Kerberos server and creates the
connection. |
VER‑65100 |
Backup/DR |
Running backup/restore on a cluster with non-permanent nodes led to a misleading failure. This
issue has been resolved. |
VER‑72363 |
Backup/DR |
Previously, the restored global catalog objects were installed on all executor nodes after the
GCLX. The long process of catalog installation could cause a GCLX timeout. With this fix, the
restored global catalog objects are installed on all executor nodes before the GCLX. |
VER‑73776 |
Optimizer |
Adding a subcluster with a large number of nodes to an Eon mode cluster required an unexpectedly
long amount of time. This issue has been resolved. |
VER‑76092 |
Data Networking |
In rare circumstances, depending on the customer's network settings, TCP connections could still
be considered alive for nodes that recently went down. That could result in significant query
stalls or indefinite session hangs without ability to cancel. The issue has been resolved with
improved TCP connection handling. |
VER‑76948 |
UI - Management Console |
When an AWS cluster was scaled up, the MC did not load all data when a custom file system was
detected, and the AWS key pair was not populated. These issues were resolved. |
VER‑79025 |
Data Removal - Delete, Purge, Partitioning, DDL - Table |
Previously, altering a table's partitioning with the internal representation of its existing
partition expression was treated as a change to a different partition expression and reorganized
partition storage containers. Now, it is treated as the same partition expression and does not
require partition storage container reorganization. |
VER‑79357 |
Optimizer |
The FROM clause of an UPDATE statement can now reference the target table, as follows:
FROM DEFAULT [join-type] JOIN dataset [ ON join-predicate ]
DEFAULT specifies the table to update. This keyword can be used only once in the FROM clause,
and it cannot be used elsewhere in the UPDATE statement.
|
VER‑80051 |
Execution Engine |
Vertica could crash after queries or DML operations that involved a storage merge. This issue
has been resolved. |
VER‑80087 |
Optimizer |
Previously, DBD functions DESIGNER_ADD_DESIGN_QUERIES and
DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY only supported local file system locations as arguments.
Now, they also support communal storage locations. |
VER‑80359 |
UI - Management Console |
When creating a cluster, Management Console included unsupported characters in the key pair name
when it generated the cluster IAM role name, which blocked the creation process. This issue has
been resolved: now, MC removes unsupported characters in the key pair name from the generated
IAM role name. |
VER‑80361 |
Execution Engine |
Unlike a regular outer join, an event series join pads output with some cached values that
represent the columns of the mismatched side. Padding with nulls is required if no previous
rows are available for interpolation. To process event series joins, cached rows are always
initialized with nulls, then updated with values to represent the mismatched side if
applicable.
When initializing cached rows with a schema that differed from what the join expected at that
stage of the query plan, Vertica occasionally crashed. This issue has been resolved: now
cached rows are initially set to null tuples, using the correct tuple schema. |
VER‑80384 |
UI - Management Console |
When creating a cluster, Management Console included unsupported characters in the key pair name
when it generated the cluster IAM role name, which blocked the cluster creation process. This
issue has been resolved: now, MC removes unsupported characters in the key pair name from the
generated IAM role name. |
VER‑80416 |
Client Drivers - JDBC, Execution Engine |
Previously, when using the JDBC and ADO.NET drivers with binary encoding, queries that contained
NUMERIC literal expressions in parameterized prepared statements could have incorrect precision
and scale. This issue has been resolved. |
VER‑80441 |
Backup/DR, Migration Tool |
After migrating a database from Enterprise to Eon mode, loading data into tables with
unsegmented projections could fail with the error "Cannot plan query because no super
projections are safe". This issue has been resolved. |
VER‑80496 |
UDX |
EDIT_DISTANCE now allocates memory properly instead of using limited stack memory, which caused
crashes on very large strings. |
VER‑80530 |
Execution Engine |
Vertica evaluates IN expressions in several ways, depending on the right-hand side. If the
right-hand side is a subquery, then Vertica evaluates the expression as a join; if it is a list
of constant expressions, then Vertica evaluates the expression by building a hash table of
constant values; and if the expression is anything else, then Vertica either errors out, or
evaluates the expression by rewriting it into a logical disjunction. Detection for the third
case was flawed, resulting in an expression being evaluated incorrectly, which sometimes
resulted in a crash. This issue has been resolved. |
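The three evaluation paths described above can be modeled with a small dispatcher. This is a toy model of the decision, not Vertica's planner; the dictionary shapes are invented for illustration:

```python
def in_strategy(rhs):
    """Choose how to evaluate `x IN <rhs>` per the three cases above."""
    if rhs["kind"] == "subquery":
        return "join"                       # subquery RHS: evaluated as a join
    if rhs["kind"] == "expr_list":
        if all(e["is_constant"] for e in rhs["exprs"]):
            return "hash_table"             # build a hash table of constants
        return "or_rewrite"                 # x IN (a, b) -> x = a OR x = b
    raise ValueError("cannot evaluate IN for this right-hand side")
```

The reported bug was in detecting the third case; misclassifying an RHS here would send it down the wrong evaluation path.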
VER‑80537 |
UI - Management Console |
When you added an existing user to a Custom Threshold notification, the checkbox was
unresponsive. This issue has been resolved. |
VER‑80551 |
Control Networking |
Depending on the customer's network settings, TCP connections were occasionally considered alive
for nodes that recently went down. This could cause significant query stalls or indefinite
session hangs without the ability to cancel. The issue has been resolved with improved TCP
connection handling. |
VER‑80552 |
AP-Geospatial |
STV_Create_Index could create incorrect indexes on large sets of polygons. Using these indexes
might cause a query to fail or bring a node down. This issue has been resolved. |
VER‑80553 |
Execution Engine |
The REPLACE function returned incorrect results if it called the SPLIT_PART function as its
second (target string) argument. This issue has been resolved. |
VER‑80568 |
Data Collector |
Setting an empty channel in SET_DATA_COLLECTOR_NOTIFY_POLICY is now disallowed. |
VER‑80575 |
Monitoring |
The ros_count column in system view projection_storage was removed in release 11.0.2. This
column has been restored, as per requests from clients who used it for monitoring purposes. |
VER‑80637 |
Execution Engine |
As a performance optimization, Vertica analyzes whether expressions can be true or false over a
range of values, to avoid evaluating the expression for each row in the range. This analysis
sometimes returned incorrect results for regexp_like() when the regular expression contained
characters that allow a pattern to match zero times, such as "?" and "*". This issue has been
resolved. |
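The troublesome pattern class is the one that can match a zero-length string. A quick way to detect such patterns in Python's own regex engine (illustrative only; Vertica's range analyzer is not shown in the note):

```python
import re

def can_match_zero_times(pattern):
    """True if the regex can match an empty string, as '?' and '*' allow."""
    # A zero-width match at position 0 of "" means the pattern can
    # succeed without consuming any input.
    return re.search(pattern, "") is not None
```

Patterns like `a*` or `ab?c?`-style optional suffixes need this special handling; patterns that must consume at least one character, like `a+`, do not.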
VER‑80655 |
UI - Management Console |
Previously, MC saved a DBD-generated design only when database K-safety was set to 0. This
issue has been resolved: now MC saves the design regardless of the K-safety setting. |
VER‑80697 |
UI - Management Console |
Before the introduction of custom thresholds, email was sent only for "critical" and "alert"
priorities. After the introduction of custom thresholds, users can send emails for all
priorities.
This change was implemented for all thresholds except "Node state change", which uses different
criteria, so the change did not take effect there. The "Node state change" threshold has now
been changed to send email for all alert priorities. |
VER‑80728 |
Execution Engine |
Partition-ranged projections did not work well with tables whose partition expressions were
NUMERIC-typed--for example, partition expressions that use the date_part function. This issue
has been resolved. |
VER‑80749 |
Admin Tools |
admintools -t re_ip options -T and -U did not update admintools.conf with the correct control
messaging protocol. This issue has been resolved. |
VER‑80765 |
Admin Tools |
The admintools operation db_add_node failed if the first node in the cluster was down. This
failure occurred because after adding the node, db_add_node would try to use the first node as
the source node for syncing vertica.conf and spread.conf files. This issue has been resolved:
now, Vertica uses any UP node as the source node for syncing. |
VER‑80823 |
Client Drivers - Misc, Security |
Fixed an issue where some Kerberos configurations would become invalid after upgrading to
Vertica 11.1.0-0.
A side effect of this fix is that OAuth authentication records created in 11.1.0-0 become
invalid when upgrading to 11.1.0-1 or later. To fix this, drop and recreate these invalid
OAuth authentication records.
|
VER‑80859 |
UDX |
New system prerequisites have been added for compiling C++ user-defined extensions (UDXs). See
"Setting up the C++ SDK" for more details.
https://www.vertica.com/docs/11.1.x/HTML/Content/Authoring/ExtendingVertica/C++/C++SDK.htm |
VER‑80879 |
Data load / COPY |
Operations over directories containing large numbers of external data files on object stores
consumed more CPU than expected. This issue affected planning for external table queries, COPY
statements, and license auditing. An algorithmic change reduced the amount of CPU time used to
perform these operations. |
VER‑80888 |
Admin Tools |
Attempts to run multiple databases on the same cluster failed, even though each database ran
exclusively on a different set of nodes--for example, database A with nodes 1 through 5,
database B on nodes 6 through 10. This issue has been resolved. |
VER‑80929 |
UI - Management Console |
When the Management Console is installed on a Linux server, some files are created in the /tmp
folder. If these files were removed, the MC database was not restored and clients lost data
during an upgrade. This issue has been resolved: the MC database is now restored after an
upgrade. |
VER‑80982 |
Backup/DR |
Replication on an Eon mode database requires access to communal storage locations on the source
and target clusters. Since the Vertica 11.0.x release, replication on a database that used
MinIO communal storage locations failed because local nodes did not map correctly to those
locations. This issue has been resolved. |
VER‑80997 |
Backup/DR |
When running Eon mode replication tasks, vbr returned with an error when it tried to connect to
Vertica through a non-primary subscriber node. This issue has been resolved. |
VER‑81035 |
Catalog Engine |
LDAPLink now holds the GCLX for less time when Vertica synchronizes users and groups from an
LDAP server. |
VER‑81055 |
Optimizer |
Previously, if two SELECT clause subqueries matched exactly, the subquery reuse feature created
two different VARs instead of a single VAR for both, resulting in the GROUP BY expression being
unable to find the relevant expression for the second VAR generated. This issue has now been
fixed by generating a single VAR when the subqueries match exactly. |
VER‑81063 |
Scrutinize |
A previous update to the ATCommand method broke the scrutinize functions. This issue has been
resolved. |
VER‑81238 |
Admin Tools |
Administration Tools no longer kills all cluster nodes after the default timeout if some nodes
are still initializing. Also, if users do not respond to the timeout prompt, Administration
Tools continues to wait. |
VER‑81504 |
Security |
To mitigate CVE-2022-0778, the version of OpenSSL that Vertica ships has been upgraded to
1.1.1n. |
Issue Key |
Component |
Description |
VER‑1596 |
Client Drivers - JDBC, Sessions |
Each Vertica node now uses TCP keepalive to detect when it has been disconnected from a client
and automatically frees the resources allocated for that client. |
VER‑73125 |
Recovery |
The Tuple Mover uses the configuration parameter MaxMrgOutROSSizeMB to determine the maximum
size of ROS containers that are candidates for mergeout. After a rebalance operation, Tuple
Mover now groups ROS containers in batches that are smaller than MaxMrgOutROSSizeMB. A ROS
container that is larger than MaxMrgOutROSSizeMB is merged individually. |
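The batching rule above can be sketched as follows. This is a simplified model of the described behavior, not the Tuple Mover's actual code; sizes are in MB and the function name is illustrative:

```python
def mergeout_batches(sizes_mb, max_mb):
    """Group ROS container sizes into mergeout batches bounded by max_mb.

    Containers at or above max_mb are merged individually; the rest are
    packed in order into batches whose total stays within max_mb.
    """
    batches, current, total = [], [], 0
    for size in sizes_mb:
        if size >= max_mb:
            batches.append([size])        # oversized container: its own batch
            continue
        if current and total + size > max_mb:
            batches.append(current)       # close the batch before overflowing
            current, total = [], 0
        current.append(size)
        total += size
    if current:
        batches.append(current)
    return batches
```

For example, with MaxMrgOutROSSizeMB-style limit 500, containers of 100, 200, 300, and 900 MB split into [100, 200], [900] (merged alone), and [300].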
VER‑77380 |
Optimizer, Performance tests |
Running analyze_statistics('') on a large catalog would sometimes run out of memory and (at
best) fail or (at worst) trigger an "out of memory" in the kernel. This has been resolved. |
VER‑79545 |
DDL |
Previously, you could not resize a table column that was used to segment any of that table's
projections. This restriction has been lifted. Now, ALTER TABLE...ALTER COLUMN can modify the
column's data type so it is larger or smaller than before, as long as the new size does not
affect existing data. |
VER‑79811 |
Backup/DR |
During backup, vbr sends queries to vsql and reads the results from each query. If the vsql
output comprised multiple lines that ended with newline characters, vbr mistakenly interpreted
the newline character to indicate the end of output from the current query and stopped reading.
As a result, when vbr sent the next query to vsql, it read the unread output from the earlier
query as belonging to the current query. This issue has been resolved: vbr now correctly detects
the end of each query result and reads it accordingly. |
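The framing problem can be illustrated with a short sketch: a bare newline cannot delimit results, because one result may itself span several newline-terminated lines, so a unique end-of-result marker is needed. This is illustrative only; vbr's actual wire format is not documented in the note:

```python
def split_results(stream, marker):
    """Split a vsql-style output stream into one string per query result.

    Lines are accumulated until the end-of-result marker appears on its
    own line, so embedded newlines inside a result are preserved.
    """
    results, current = [], []
    for line in stream.splitlines():
        if line == marker:
            results.append("\n".join(current))
            current = []
        else:
            current.append(line)
    return results
```

Splitting on newlines alone would have attributed the tail of a multi-line result to the next query, which is the misread the fix addresses.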
VER‑79902 |
Admin Tools |
create_db failed in FIPS mode. This issue has been resolved. |
VER‑79913 |
DDL |
Previously, IMPORT_STATISTICS was unable to import statistics from output generated by
EXPORT_STATISTICS_PARTITION. This issue has been resolved. |
VER‑80002 |
Security |
ALTER TLS CONFIGURATION data_channel now validates that the specified certificate can be used
for internode encryption. |
VER‑80038 |
DDL, Execution Engine |
The server crashed when system queries used large strings as predicate constants--for example,
"table_name = ..." or "schema_name = ..." This issue has been resolved. |
VER‑80049 |
Kafka Integration, UI - Management Console |
Security vulnerabilities CVE-2021-44228 and CVE-2021-45046 were found in earlier versions of the
log4j library used by the MC. The library has been updated to resolve this issue. |
VER‑80052 |
UI - Management Console |
This release updates the Management Console's Log4j library. The updated library addresses the
CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions. |
VER‑80053 |
Kafka Integration |
This release updates the Kafka integration's Log4j library. The updated library addresses the
CVE-2021-44228 and CVE-2021-45046 security vulnerabilities found in earlier versions. |
VER‑80124 |
Execution Engine |
Certain queries with ill-formed predicates that contained a large number of fields caused
Vertica to run out of memory while trying to return an error message. This issue has been
resolved: now, Vertica can build and return error messages of the type operator-not-found,
regardless of length. |
VER‑80227 |
ComplexTypes |
A bug in the code caused parsing of nested CASE statements to become inefficient, resulting
in much longer processing times for CASE statements with multiple levels of nesting. This issue
has been resolved: processing of nested CASE statements is again linear in the number of nesting
levels. |
VER‑80262 |
Backup/DR |
On LINUX_FILESYSTEM, when a snapshot was in progress, Vertica called glob() on all storage
containers and then called stat() to check container statistics. If an error occurred between
these two operations, the backup operation failed. This issue has been resolved: on
LINUX_FILESYSTEM, the snapshot now only calls stat() on storage containers. |
VER‑80316 |
ComplexTypes, Execution Engine |
Previously, expressions with "subquery.*" as arguments (not in the top level of a SELECT
statement or subquery) could result in undefined behavior. This issue has been resolved. Now,
whenever "subquery.*" appears as an expression argument, it expands into a complex ROW
expression with one field for each of the subquery's columns. |
VER‑80388 |
Execution Engine, UDX |
As of 10.1, inline SQL functions could not be passed a volatile parameter if that parameter
appeared multiple times in the function definition. As an inline function, RIGHT returned an
error when it was called with a volatile user-defined aggregate function as its first parameter.
This issue has been resolved: RIGHT is no longer an inline SQL function, and instead is now
defined internally. |
VER‑80439 |
Kafka Integration |
KafkaExport now returns as soon as it detects that all messages were sent to Kafka, reducing
execution time by up to 10 seconds. |
VER‑80610 |
Client Drivers - JDBC, Execution Engine |
Previously, when using the JDBC and ADO.NET drivers with binary encoding, queries that contained
NUMERIC literal expressions in parameterized prepared statements--for example, ?/10--could have
incorrect precision and scale. This issue has been resolved. |