Issue Key |
Component |
Description |
VER‑68406 |
Tuple Mover |
When Mergeout Cache is enabled, the dc_mergeout_requests system table now
contains valid transaction IDs instead of zero. |
VER‑71064 |
Catalog Sync and Revive, Depot, EON |
Previously, when a node belonging to a secondary subcluster restarted, it lost
files in its depot. This issue has been fixed. |
VER‑72596 |
Data load / COPY, Security |
The COPY option REJECTED DATA to TABLE now properly distributes data between
tables with identical names belonging to different schemas. |
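A minimal sketch of the option with same-named tables in different schemas
(schema, table, and file names are hypothetical):
COPY s1.t FROM '/data/t.csv' REJECTED DATA AS TABLE s1.t_rejects;
COPY s2.t FROM '/data/t.csv' REJECTED DATA AS TABLE s2.t_rejects;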
VER‑73751 |
Tuple Mover |
The Tuple Mover logged a large number of PURGE requests on a projection while
another MERGEOUT job was running on the same projection. This issue has been
resolved. |
VER‑73773 |
Tuple Mover |
Previously, the Tuple Mover attempted to merge all eligible ROS containers
without considering resource pool capacity. As a result, mergeout failed if the
resource pool could not handle the mergeout plan size. This issue has been
resolved: the Tuple Mover now takes into account resource pool capacity when
creating a mergeout plan, and adjusts the number of ROS containers accordingly. |
VER‑74554 |
Tuple Mover |
Occasionally, the Tuple Mover dequeued DVMERGEOUT and MERGEOUT requests
simultaneously and executed only the DVMERGEOUT requests, leaving the MERGEOUT
requests pending indefinitely. This issue has been resolved: now, after
completing execution of any DVMERGEOUT job, the Tuple Mover always looks for
outstanding MERGEOUT requests and queues them for execution. |
VER‑74615 |
Hadoop |
Fixed a bug in predicate pushdown on Parquet files stored on HDFS. For a
Parquet file spanning multiple HDFS blocks, the bug prevented some of its row
groups, specifically those located on blocks other than the starting HDFS
block, from being pruned. In some corner cases, the bug caused the wrong row
group to be pruned, leading to incorrect results. |
VER‑74619 |
Hadoop |
Due to compatibility issues between different open-source libraries, Vertica
failed to read ZSTD-compressed Parquet files generated by some external tools
(such as Impala) when a column contained all NULLs. This issue has been fixed:
Vertica now reads such files without error. |
VER‑74814 |
Hadoop |
The open-source library used by Vertica to generate Parquet files buffered
null values inefficiently in memory. This caused high memory usage, especially
when the exported data contained many nulls. The library has been patched to
buffer null values in encoded format, resulting in optimized memory usage. |
VER‑74974 |
Database Designer Core |
Under certain circumstances, Database Designer designed projections that could
not be refreshed by refresh_columns(). This issue has been resolved. |
VER‑75139 |
DDL |
Adding columns to large tables with many columns on an Eon-mode database was
slow and incurred considerable resource overhead, which adversely affected other
workloads. This issue has been resolved. |
VER‑75496 |
Depot |
System tables continued to report that a file existed in the depot after it was
evicted, which caused queries on that file to return "File not found" errors.
This issue has been resolved. |
VER‑75715 |
Backup/DR |
When restoring objects in coexist mode, the output to STDOUT now contains the
correct schema name prefix. |
VER‑75778 |
Execution Engine |
With Vertica running on machines with very high core counts, complex
memory-intensive queries featuring an analytical function that fed into a merge
operation sometimes caused a crash if the query ran in a resource pool where
EXECUTIONPARALLELISM was set to a high value. This issue has been resolved. |
VER‑75783 |
Optimizer |
The NO HISTOGRAM event was set incorrectly on the dc_optimizer_events table's
hidden epoch column. As a result, the suggested_action column was also set
incorrectly to run analyze_statistics. This issue is resolved: the NO HISTOGRAM
event is no longer set on the epoch column. |
VER‑75806 |
UI - Management Console |
COPY queries are now included in the Completed Queries list on the Query
Monitoring Activity page. |
VER‑75864 |
Data Export |
Previously, during export to Parquet, Vertica wrote the time portion of each
timestamp value as a negative number for all timestamps before the Postgres
epoch date (2000-01-01). As a result, some tools (for example, Impala) could
not load such timestamps from Parquet files exported by Vertica. This issue
has been fixed. |
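For illustration, a minimal sketch of an export whose timestamp precedes the
Postgres epoch (the directory path and value are hypothetical):
EXPORT TO PARQUET(directory = '/tmp/ts_export')
   AS SELECT TIMESTAMP '1999-12-31 23:59:59' AS ts;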
VER‑75881 |
Security |
Vertica no longer takes a catalog lock during authentication after the user's
password security algorithm has been changed from MD5 to SHA512. The lock was
taken to update the user's salt; that update has been removed because the salt
is not used for MD5 hash authentication. |
VER‑75898 |
Execution Engine |
Calls to export_objects sometimes allocated considerable memory while user
access privileges on the object were repeatedly checked. The accumulated
memory was not freed until export_objects returned, which sometimes caused the
database to go down with an out-of-memory error. This issue has been resolved:
memory is now freed more promptly so it does not excessively accumulate. |
VER‑75933 |
Catalog Engine |
The export_objects meta-function could not be canceled. This issue has been
resolved. |
VER‑76094 |
Data Removal - Delete, Purge, Partitioning |
If you created a local storage location for USER data on a cluster that
included standby nodes, attempts to drop the storage location returned an
error stating that Vertica was unable to drop the storage location from the
standby nodes. This issue has been resolved. |
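A minimal sketch of the formerly failing sequence (the path and node name are
hypothetical):
CREATE LOCATION '/home/dbadmin/user_data' NODE 'v_mydb_node0001' USAGE 'USER';
SELECT DROP_LOCATION('/home/dbadmin/user_data', 'v_mydb_node0001');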
VER‑76125 |
Backup/DR |
The access permission check for the S3 bucket root during backup/restore has
been removed. Users with access permissions to specific bucket directories can
now perform backup/restore in those directories without getting AccessDenied
errors. |
VER‑76131 |
Kafka Integration |
Updated the documentation to mention support for SCRAM-SHA-256/512. |
VER‑76200 |
Admin Tools |
When adding a node to an Eon-mode database with Administration Tools, users
were prompted to rebalance the cluster, even though this action is not
supported for Eon. This issue has been resolved: Administration Tools now
skips this step for an Eon database. |
VER‑76244 |
Depot |
Files that might be candidates for pruning (for example, due to expression
analysis, or because they are not read at all, as with Top-K queries) were
unnecessarily read into the depot, which adversely affected depot efficiency
and performance. This problem has been resolved: the depot now fetches from
shared storage only those files that are read by a statement. |
VER‑76349 |
Optimizer |
To achieve predicate pushdown where subqueries are involved, the optimizer
combines multiple predicates into a single-column Boolean predicate. The
optimizer failed to properly handle cases where two NotNull predicates were
combined into a single Boolean predicate, and returned an error. This issue
has been resolved. |
VER‑76384 |
Execution Engine |
In queries that used variable-length-optimized joins, certain types of joins
incurred a small risk of crashing the database due to a problem when checking
for NULL join keys. This issue has been resolved. |
VER‑76424 |
Execution Engine |
If a query includes count(s.*), where s is a subquery, Vertica expects
multiple outputs for s.*. Because Vertica does not support multi-valued
expressions in this context, the expression tree represents s.* as a single
record-type variable. The mismatch in the number of outputs could result in
database failure. In cases like this, Vertica now returns an error message
stating that multi-valued expressions are not supported. |
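A minimal sketch of the affected pattern, which now returns an error instead
of failing (table and column names are hypothetical):
SELECT COUNT(s.*) FROM (SELECT a, b FROM t) s;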
VER‑76449 |
Sessions |
Vertica now better detects situations where multiple Vertica processes are
started at the same time on the same node. |
VER‑76511 |
Sessions, Transactions |
Previously, a single-node transaction sent a commit message to all nodes even
if it had no content to commit. This has been fixed: a single-node transaction
with no content to commit now commits locally. |
VER‑76543 |
Optimizer, Security |
For a view A.v1, its base table B.t1, and an access policy on B.t1: users no
longer require a USAGE privilege on schema B to SELECT from view A.v1. |
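A minimal sketch of the fixed scenario (the objects and role are
hypothetical):
CREATE TABLE B.t1 (c INT);
CREATE ACCESS POLICY ON B.t1 FOR COLUMN c
   CASE WHEN ENABLED_ROLE('manager') THEN c ELSE NULL END ENABLE;
CREATE VIEW A.v1 AS SELECT c FROM B.t1;
-- A user with USAGE on schema A and SELECT on A.v1, but no USAGE on
-- schema B, can now run:
SELECT c FROM A.v1;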
VER‑76584 |
Security |
Vertica now automatically creates the needed default key projections when a
user with DML access performs an INSERT into a table that has a primary key
and no projections. |
VER‑76815 |
Optimizer |
Using unary operators as GROUP BY or ORDER BY elements in WITH clause statements
caused Vertica to crash. The issue is now resolved. |
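A minimal sketch of the formerly crashing pattern, with a unary minus as the
GROUP BY and ORDER BY element (names are hypothetical):
WITH w AS (SELECT x FROM t)
SELECT -x FROM w GROUP BY -x ORDER BY -x;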
VER‑76824 |
Optimizer |
If you called a view whose underlying query invoked a UDx function on a table
with an argument of '*' (all columns), Vertica crashed if the queried table
later changed, for example, by adding columns to it. The issue has been
resolved: the view now continues to return the same results. |
VER‑76851 |
Data Export |
Added support for exporting UUID types via s3export. Previously, exporting
data with UUID types using s3export sometimes crashed the initiator node. |
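A minimal sketch of such an export (the bucket, object, and table names are
hypothetical):
SELECT s3export(* USING PARAMETERS url='s3://mybucket/ids.csv')
   OVER(PARTITION BEST) FROM table_with_uuid_column;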
VER‑76874 |
Optimizer |
Updating the result set of a query that called the volatile function LISTAGG
resulted in unequal row counts among projections of the updated table. This
issue has been resolved. |
VER‑76952 |
DDL - Projection |
In previous releases, users were unable to alter the metadata of any column in
tables that had a live aggregate or Top-K projection, regardless of whether
the column participated in the projection itself. This issue has been
resolved: users can now change the metadata of columns that do not participate
in the table's live aggregate or Top-K projections. |
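A minimal sketch (assuming a hypothetical table t with a Top-K projection over
columns a and b, where column c does not participate in the projection):
ALTER TABLE t ALTER COLUMN c SET DEFAULT 0;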
VER‑76961 |
Spread |
Spread now correctly detects old tokens as duplicates. |
VER‑77006 |
Machine Learning |
The PREDICT_SVM_CLASSIFIER function could cause the database to go down when
provided an invalid value for its optional "type" parameter. The function now
returns an error message indicating that the entered value was invalid and notes
that valid values are "response" and "probability." |
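A minimal sketch of a valid call (the model and table names are hypothetical):
SELECT PREDICT_SVM_CLASSIFIER(x1, x2 USING PARAMETERS
   model_name='my_svm_model', type='probability') FROM test_data;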
VER‑77007 |
Catalog Engine |
A standby node did not get changes to the GENERAL resource pool when it
replaced a down node. This problem has been resolved. |
VER‑77026 |
Execution Engine |
Vertica was unable to optimize queries on v_internal tables, where equality
predicates (with operator =) filtered on columns relname or nspname, in the
following cases:
- The predicate specified expressions with embedded characters such as
underscore (_) or percent (%). For example:
SELECT * FROM v_internal.vs_columns WHERE nspname = 'x_yz';
- The query contained multiple predicates separated by AND operators, and more
than one predicate queried the same column, either nspname or relname. For
example:
SELECT * FROM v_internal.vs_columns WHERE nspname = 'xyz' AND nspname <> 'vs_internal';
In this case, Vertica was unable to optimize the equality predicate nspname = 'xyz'.
In all these cases, the queries are now optimized as expected. |
VER‑77134 |
Backup/DR |
Attempts to execute a CREATE TABLE AS statement on a database while it is the
target of a replication operation return an error. The error message has been
updated so it clearly indicates the source of the problem. |
VER‑77173 |
Monitoring |
Startup.log now contains a stage identifying when the node has received the
initial catalog. |
VER‑77190 |
Optimizer |
SELECT clause CASE expressions with constant conditions and string results that
were evaluated to shorter strings sometimes produced an internal error when
participating in joins with aggregation. This issue has been resolved. |
VER‑77199 |
Kafka Integration |
The Kafka Scheduler now allows an initial offset of -3, which tells the
scheduler to begin reading from the consumer group offset. |
VER‑77227 |
Admin Tools |
Previously, admintools reported it could not start the database because it was
unable to read database catalogs, but did not provide further details. This
issue has been resolved: the message now provides details on the failure's
cause. |
VER‑77265 |
Catalog Sync and Revive |
Vertica now provides more detailed messages when permission is denied. |
VER‑77278 |
Catalog Engine |
If you called close_session() while running analyze_statistics() on a local
temporary table, Vertica sometimes crashed. This issue has been resolved. |
VER‑77387 |
Directed Query, Optimizer |
If the CTE of a materialized WITH clause was unused and referenced an unknown
column, Vertica threw an error. This behavior was inconsistent with the behavior
of an unmaterialized WITH clause, where Vertica ignored unused CTEs and did not
check them for errors. This problem has been resolved: in both cases, Vertica
now ignores all unused CTEs, so they are never checked for errors such as
unknown columns. |
VER‑77394 |
Execution Engine |
It was unsafe to reorder query predicates when the following conditions were
true:
- The query contained a predicate on a projection's leading sort order columns
that restricted the leading columns to constant values, where the leading
columns also were not run-length encoded.
- A SIPS predicate from a merge join was applied to non-leading sort order
columns of that projection.
This issue has been resolved: query predicates are no longer reordered when
these conditions hold. |
VER‑77584 |
Execution Engine |
Before evaluating a query predicate on rows, Vertica gets the min/max of the
expression to determine what rows it can first prune from the queried dataset.
An incorrect check on a block's null count caused Vertica to use the maximum
value of an all-null block, and mistakenly prune rows that otherwise would have
passed the predicate. This issue has been resolved. |
VER‑77695 |
Admin Tools |
In earlier releases, starting a database with the start_db --force option could
delete the data directory if the user lacked read/execute permissions on the
data directory. Now, if the user lacks permissions to access the data directory,
admintools cancels the start operation. If the user has correct permissions,
admintools gives users 10 seconds to abort the start operation. |
VER‑77814 |
Optimizer |
Queries that included the TABLESAMPLE option were not supported for views. This
issue has been resolved: you can now query views with the TABLESAMPLE option. |
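A minimal sketch of the now-supported usage (the view name and sampling
percentage are hypothetical):
SELECT * FROM my_view TABLESAMPLE(10);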
VER‑77904 |
Admin Tools |
If admintools called create_db and the database creation process was lengthy,
admintools sometimes prompted users to confirm whether to continue waiting. If
the user did not answer the prompt (for example, when create_db was called by
a script), create_db completed execution without creating all database nodes
and without properly updating the configuration file admintools.conf. In this
case, the database was incomplete and unusable. Now, the prompt times out
after 120 seconds. If the user does not respond within that time period,
create_db exits. |
VER‑77905 |
Execution Engine |
A change in Vertica 10.1 prevented volatile functions from being called multiple
times in an SQL macro. This change affected the throw_error function. The
throw_error function is now marked immutable, so SQL macros can call it multiple
times. |
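A minimal sketch of an SQL macro that calls throw_error more than once (the
names are hypothetical, and this assumes THROW_ERROR may appear in a CASE
expression):
CREATE OR REPLACE FUNCTION check_range(x INT) RETURN BOOLEAN AS BEGIN
RETURN (CASE WHEN x < 0 THEN THROW_ERROR('value is negative')
             WHEN x > 100 THEN THROW_ERROR('value exceeds 100')
             ELSE true END);
END;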
VER‑77962 |
Catalog Engine |
Vertica now restarts properly for nodes that have very large checkpoint files. |
VER‑78251 |
Data Networking |
In rare circumstances, the socket on which Vertica accepts internal connections
could erroneously close and send a large number of socket-related error messages
to vertica.log. This issue has been fixed. |