Resolved issues release notes for 11.0.x.
Issue Key |
Component |
Description |
VER‑82378 |
Admin Tools |
Adding two nodes to a one-node enterprise database required the database to be
rebalanced and made K-safe. Attempts to rebalance and achieve K-safety on the
admintools command line ended in an error, and attempts to update K-safety in
the admintools GUI also failed. These issues have been resolved. |
VER‑82630 |
Client Drivers - VSQL |
The vsql client now returns an error code when it encounters an error while
running a long query--for example, one that exceeds the resource pool's
RUNTIMECAP setting. |
VER‑82636 |
Optimizer |
Previously, if a partition range projection included a column defined by an
expression derived from another table column, querying on the projection
sometimes caused the server to crash. This issue has been resolved. |
VER‑82760 |
Execution Engine, Optimizer |
If join output required sorting and was used by another merge join, but had
multiple equivalent sort keys, the sort keys were not fully maintained. In this
case, the query returned incorrect results. This issue has been resolved by
maintaining the necessary sort keys for merge join input. |
VER‑82775 |
Execution Engine |
The makeutf8 function sometimes caused undefined behavior when given
maximum-length inputs for the argument column type. This issue has been
resolved. |
VER‑82805 |
Backup/DR |
If you restored a backup to another database with a different communal storage
location, startup on the target database failed if the database's oid was
assigned to another object. This issue has been resolved. |
VER‑82815 |
Data Export, Data load / COPY, Hadoop |
When reading partition values from file paths, values that contained a '/'
character were read incorrectly by Vertica. This issue has been resolved. |
VER‑82850 |
DDL - Table |
Previously, after a table was renamed, its projection definition was not
updated. This caused the exported DDL to contain the new table name while the
old table name continued to be used as its alias. This issue has been fixed: the
projection no longer references the old table name as an alias after the table
is renamed. |
VER‑82881 |
DDL |
Vertica now allows any partition expression that resolves to non-NULL values,
even in cases where the expression columns originally contained NULL values.
Conversely, Vertica no longer allows a partition expression that produces NULL
values, even if the expression columns contain no NULL values. |
VER‑82906 |
Depot |
If the depot had no space to download a new file, the data loading plan did not
write the file to the depot. Instead, it viewed the file as already in the
depot, and incorrectly returned an error that the file size did not match the
catalog. This issue has been resolved: the data loading plan no longer regards
the absent file as in the depot. |
VER‑82942 |
Client Drivers - JDBC |
When JDBC connected to the server with BinaryTransfer and the JVM timezone had a
historical or future Daylight Saving Time (DST) schedule, querying DST start
dates in the JVM timezone sometimes returned incorrect data. This issue has been
resolved; however, performance of BinaryTransfer for DATE data is worse than
that of TextTransfer. |
VER‑82969 |
Data load / COPY |
Flex table parsers did not reserve enough buffer space to correctly process
certain inputs to NUMERIC-type columns. This issue has been resolved. |
Issue Key |
Component |
Description |
VER‑80840 |
Execution Engine |
Vertica evaluates IN expressions in several ways, depending on the right-hand
side. If the right-hand side is a subquery, then Vertica evaluates the
expression as a join; if it is a list of constant expressions, then Vertica
evaluates the expression by building a hash table of constant values; and if the
expression is anything else, then Vertica either errors out, or evaluates the
expression by rewriting it into a logical disjunction. Detection for the third
case was flawed, resulting in an expression being evaluated incorrectly, which
sometimes resulted in a crash. This issue has been resolved. |
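The three evaluation strategies described above can be sketched with hedged examples; the table and column names here are hypothetical:

```sql
-- Right-hand side is a subquery: evaluated as a join.
SELECT * FROM orders WHERE cust_id IN (SELECT id FROM customers);

-- Right-hand side is a list of constant expressions: evaluated by building
-- a hash table of the constant values.
SELECT * FROM orders WHERE status IN ('open', 'shipped');

-- Anything else (here, non-constant expressions): rewritten into a logical
-- disjunction, equivalent to cust_id = alt_id_1 OR cust_id = alt_id_2.
SELECT * FROM orders WHERE cust_id IN (alt_id_1, alt_id_2);
```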
VER‑80847 |
AP-Geospatial |
STV_Create_Index could create incorrect indexes on large sets of polygons. Using
these indexes might cause a query to fail or bring a node down. This issue has
been resolved. |
VER‑80895 |
Backup/DR, Migration Tool |
After migrating a database from Enterprise to Eon mode, loading data into tables
with unsegmented projections could fail with the error "Cannot plan query
because no super projections are safe". This issue has been resolved. |
VER‑80898 |
Admin Tools |
The admintools operation db_add_node failed if the first node in the cluster was
down. This failure occurred because after adding the node, db_add_node would try
to use the first node as the source node for syncing vertica.conf and
spread.conf files. This issue has been resolved: now, Vertica uses any UP node
as the source node for syncing. |
VER‑80906 |
UI - Management Console |
Previously, MC saved a DBD-generated design only when database K-safety was set
to 0. This issue has been resolved: now MC saves the design irrespective of the
K-safety setting. |
VER‑80909 |
UI - Management Console |
When creating a cluster, Management Console included unsupported characters in
the key pair name when it generated the cluster IAM role name, which blocked the
cluster creation process. This issue has been resolved: now, MC removes
unsupported characters in the key pair name from the generated IAM role name. |
VER‑80911 |
Optimizer |
Previously, live aggregate projections had difficulty rewriting AVG() if the
argument was of type INT. This issue has been resolved. |
VER‑80914 |
Monitoring |
The ros_count column in system view projection_storage was removed in release
11.0.2. This column has been restored, as per requests from clients who used it
for monitoring purposes. |
VER‑80916 |
UI - Management Console |
When creating a cluster, Management Console included unsupported characters in
the key pair name when it generated the cluster IAM role name, which blocked the
creation process. This issue has been resolved: now, MC removes unsupported
characters in the key pair name from the generated IAM role name. |
VER‑80918 |
Data Networking |
Depending on the customer's network settings, TCP connections were occasionally
considered alive for nodes that recently went down. This could cause significant
query stalls or indefinite session hangs without the ability to cancel. The
issue has been resolved with improved TCP connection handling. |
Issue Key |
Component |
Description |
VER‑50526 |
Data Removal - Delete, Purge, Partitioning, EON |
Eon mode now supports optimized delete and merge for unsegmented projections. |
VER‑77427 |
Client Drivers - JDBC |
Previously, JDBC would check $JAVA_HOME/security for the truststore and the
default password was an empty string.
Now, if a truststore path (trustStore) is not specified, JDBC first checks the
original location ($JAVA_HOME/security/), then checks the default JVM
truststore at $JAVA_HOME/lib/security/. If a truststore password
(trustStorePassword) is not specified, it uses the password "changeit." |
VER‑78338 |
Execution Engine |
System table query_consumption calculated peak_memory_kb by summing peak memory
consumption from all nodes. This issue has been resolved: query_consumption now
calculates peak_memory_kb by finding the maximum of peak memory that was
requested among all nodes. |
VER‑78671 |
Execution Engine |
Query predicates were reordered at runtime in a way that caused queries to fail
if they compared numeric values (integer or float) to non-numeric strings such
as 'abc'. This issue has been resolved. |
VER‑78833 |
Optimizer |
The optimizer occasionally chose a regular projection even when it was directed
to use available LAP/Top-K projections. This issue has been resolved. |
VER‑79001 |
Admin Tools |
Calling "command_host -c start" or "restart_node" to start an
EncryptSpreadComm-enabled database now gives a more useful error message,
instructing users to call start_db instead.
In general, to start or stop an EncryptSpreadComm-enabled database, users
should call start_db or stop_db.
|
VER‑79057 |
Backup/DR |
The hash function changed in Vertica releases <= 11.0. As a result, backup
manifest file digests that are generated when backing up earlier releases (<
11) do not match the new snapshot manifest file for the same object. This issue
has been resolved: in cases like this, Vertica now ignores digest mismatches. |
VER‑79085 |
Admin Tools, Database Designer Core |
Admintools rebalance_data with ksafe=1 returned an error when database K-safety
was set to 0. This issue has been resolved. |
VER‑79135 |
Client Drivers - JDBC |
Previously, changing the session timezone on the server did not affect the
results returned when querying a timestamp. You could access the timestamp
by calling getAdjustedTimestamp() on the result set object, but this was not
functioning properly in binary transfer mode.
This issue has been resolved. In text and binary transfer modes, calling
getAdjustedTimestamp() on a result set object returned by the server that
contains a timestamp now properly returns the timestamp based on the
timezone session parameter.
|
VER‑79141 |
Tuple Mover |
Users can now set the configuration parameter MaxDVROSPerContainer to any value
>1. The new formula is:
max(2, (MaxDVROSPerContainer+1)/2)
A two-level strata helps avoid excessive DVMergeouts: DVs with fewer deleted
rows than (TBL.nrows/MaxDVROSPerStratum), where MaxDVROSPerStratum is max(2,
(MaxDVROSPerContainer+1)/2), are placed at stratum 0; if the number of these
DVs exceeds MaxDVROSPerStratum, they are merged together. As before, larger
DVs at stratum 1 are not merged.
|
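As a sketch of the stratum threshold above, assuming a hypothetical MaxDVROSPerContainer setting of 16:

```sql
-- MaxDVROSPerStratum = max(2, (MaxDVROSPerContainer + 1) / 2)
SELECT GREATEST(2, FLOOR((16 + 1) / 2)) AS max_dv_ros_per_stratum;  -- returns 8
```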
VER‑79146 |
EON, ResourceManager |
Subcluster level resource pool creation now supports specifying cpuaffinityset
and cpuaffinitymode. |
VER‑79180 |
UI - Management Console |
Previously, the feedback feature had an issue uploading feedback information.
The default behavior was changed, and now the feature sends information by
email. |
VER‑79236 |
Tuple Mover |
In previous releases, the DVMergeout plan read storage files during the plan
compilation stage to determine the offset and length of each column in the
storage files. Accessing this metadata incurred S3 API calls. This issue has
been resolved: the Tuple Mover now calculates the offset and length of each
column without accessing the storage files. |
VER‑79259 |
FlexTable |
In rare cases, MapToString returned NULL when the underlying VMap was not
NULL. This was an issue with displaying VMaps; no data loss occurred. This
issue has been resolved. |
VER‑79349 |
Data Export, S3 |
When connecting to S3 using https, the S3EXPORT function failed to authenticate
the server certificate unless the aws_ca_bundle parameter was set. This issue
has been resolved: the system CA bundle is now used by default. |
VER‑79350 |
Optimizer |
Joins with an interpolated predicate did not recognize equivalency between
date/time data types that specified and omitted precision--for example data
types TIME(6) and TIME. The issue has been resolved. |
VER‑79383 |
Execution Engine |
On rare occasions, a Vertica database crashed when query plans with filter
operators were canceled. This issue has been resolved. |
VER‑79475 |
Hadoop |
The hdfs_cluster_config_check function failed on SSL-enabled HDFS clusters.
This issue has been resolved. |
VER‑79510 |
Admin Tools |
Previously, Vertica used the same catalog path base and data path base for nodes
and admintools. Now, admintools uses the data path base as set in
admintools.conf, as distinct from the catalog path base. |
VER‑79513 |
Optimizer |
Under certain conditions, the logic that checked for duplicate keys was flawed.
This issue has been resolved. |
VER‑79548 |
Scrutinize |
In release 11.0, the Vertica agent used a new method to transfer files through
different nodes, but this change prevented scrutinize from sending zip files.
This issue has been resolved. |
VER‑79562 |
Depot |
If you called copy_partitions_to_table on two tables with the same pinning
policies, and the target table had no projection data, the Vertica database
server crashed. This issue has been resolved. |
VER‑79742 |
Security |
Changes to LDAPLinkURL and LDAPLinkSearchBase orphaned LDAPLinked users. This
issue has been resolved: users are no longer orphaned if the new URL or search
base contains the same set of users, and previously orphaned users are
un-orphaned. |
VER‑79757 |
Kafka Integration |
Certain Kafka notifier errors tried to allocate a memory pool twice and
triggered an assert condition. This issue has been resolved. |
VER‑79820 |
Backup/DR |
During backup, vbr sends queries to vsql and reads the results from each query.
If the vsql output was very long and comprised multiple lines that ended with
newline characters, vbr mistakenly interpreted the newline character to indicate
the end of output from the current query and stopped reading. As a result, when
vbr sent the next query to vsql, it read the unread output from the earlier
query as belonging to the current query. This issue has been resolved: vbr now
correctly detects the end of each query result and reads it accordingly. |
Issue Key |
Component |
Description |
VER‑67295 |
Security |
LDAPLink now properly handles nested groups. |
VER‑68210 |
UI - Management Console |
MC failed to import a database that contained a table named
"users" in the public schema. This issue has been resolved. |
VER‑75768 |
Data Removal - Delete, Purge, Partitioning |
Users could remove a storage location that contained temporary table data with
drop_location. This issue has been resolved: if a storage location contains
temporary data, drop_location now returns an error and hint. |
VER‑75794 |
Data Removal - Delete, Purge, Partitioning |
Calling meta-function CALENDAR_HIERARCHY_DAY with the active_years
and active_month arguments set to 0 can result in considerable I/O. When you do
so now, the function returns with a warning. |
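A sketch of a call that now triggers the warning, using hypothetical table and column names with Vertica's hierarchical-partitioning syntax:

```sql
-- Both active windows set to 0: the function now warns about the
-- considerable I/O this can incur.
ALTER TABLE store_orders
    PARTITION BY order_date::DATE
    GROUP BY CALENDAR_HIERARCHY_DAY(order_date::DATE, 0, 0);
```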
VER‑77175 |
Execution Engine |
Some sessions that used User Defined Load code (or external table
queries backed by User Defined Loads) accumulated memory usage through the life
of the session.
The memory was only used on non-initiator nodes, and was released after the
session ended. This
issue has been resolved. |
VER‑77583 |
Installation: Server RPM/Deb |
Several Python scripts in the /vertica/scripts directory used the old
Python 2 print command, which prevented them from working with Python 3. They
have been updated
to the new syntax. |
VER‑77688 |
Documentation |
The script to back up and restore grants on UDx libraries shown in the
documentation topic "Backing Up and Restoring Grants" contained several bugs.
It has been corrected. |
VER‑77771 |
Documentation |
Documentation now informs users not to embed spaces before or after
comma delimiters of the ‑‑restore-objects list; otherwise, vbr
interprets the space as part of
the object name. |
VER‑77818 |
Client Drivers - ADO |
If you canceled a query and immediately called DataReader.close()
without reading all rows that the server sent before the cancel took effect, the
necessary
clean-up work was not completed, and an exception was incorrectly propagated to
the application.
This issue has been resolved. |
VER‑77999 |
Kafka Integration |
When loading a certain number of small messages, filters such as
KafkaInsertDelimiters could return a VIAssert error. This issue has been resolved. |
VER‑78001 |
Backup/DR |
When performing a full restore, AWS configuration parameters such as
AWSEndpoint, AWSAuth, and AWSEnableHttps were overwritten on the restored
database with their backup settings. This issue has been resolved: the restore
leaves the parameter values unchanged. |
VER‑78313 |
Optimizer |
The configuration parameter MaxParsedQuerySizeMB is set in MB (as
documented). The optimizer can require very large amounts of memory for a given
query, much of it consumed by internal objects that the parser creates when it
converts the query into a query tree for optimization. Other issues
were identified as contributing to excessive memory consumption, and these have
been addressed, including freeing memory allocated to query tree objects when
they are no longer in use. |
VER‑78391 |
Admin Tools |
When installing a package, AdminTools compares the md5sum of the
package file and the md5sum indicated in the isinstalled.sql script. If the two
md5sum values do
not match, AdminTools shows an error message and the installation fails. The
error message has
been improved to show the mismatch as the reason for failure. |
VER‑78470 |
DDL - Table |
When a table was dropped, the drop operation was not always logged.
This issue has been resolved. |
VER‑78555 |
Database Designer Core |
Database Designer generated files with wrong permissions. This issue
has been resolved. |
VER‑78576 |
ComplexTypes |
An error in constant folding would sometimes incorrectly fold IN
expressions with string value lists. This issue has been fixed. |
VER‑78577 |
UI - Management Console |
Management Console returned errors when configuring email gateway
aliases that included hyphen (-) characters. This issue was resolved. |
VER‑78578 |
DDL |
If a column was set to a DEFAULT or SET USING expression and the
column name embedded a period, attempts to change the column's data type with
ALTER TABLE
threw a syntax error. This issue has been resolved. |
VER‑78612 |
Catalog Engine |
If COPY specified to write rejected data to a table, subsequent
removal of a node from the cluster rendered that table unreadable. This issue
has been resolved:
the rejected data can now be read from any node that is up. |
VER‑78619 |
Execution Engine |
Queries on system table EXTERNAL_TABLE_DETAILS with complex
predicates on the table_schema, table_name, or source_format columns either
returned wrong
results or caused the cluster to crash. This issue has been resolved. |
VER‑78632 |
Optimizer |
Queries with multiple distinct aggregates sometimes produced wrong
results when inputs appeared to be segmented on the same columns as distinct
aggregate
arguments. The issue has been resolved. |
VER‑78682 |
Data Removal - Delete, Purge, Partitioning, DDL |
The type metadata for epoch columns in version 9.3.1 and earlier was
slightly different than in later versions. After upgrading from 9.3.1,
SWAP_PARTITIONS_BETWEEN_TABLES treated those columns as not equivalent and
threw an error. This issue has been resolved. Now, when
SWAP_PARTITIONS_BETWEEN_TABLES compares column types, it ignores metadata
differences in epoch columns. |
VER‑78726 |
Optimizer |
Partition statistics now support partition expressions that include
the date/time function date_trunc(). |
VER‑78730 |
DDL |
If you profiled a query that included the
ENABLE_WITH_CLAUSE_MATERIALIZATION hint, Vertica did not enable materialization
for that query.
This issue has been resolved. |
VER‑78750 |
Catalog Engine |
In earlier releases, if you set CatalogSyncInterval to a new value,
Vertica did not use the new sync interval until after the next scheduled sync
(as set by the previous CatalogSyncInterval setting) completed. This issue has
been resolved: now Vertica immediately implements the new sync interval. |
VER‑78767 |
Optimizer |
Attempts to add a column with a default value that included a
TIMESERIES clause returned with a ROLLBACK message. This issue has been
resolved. |
VER‑78856 |
DDL - Projection, Optimizer |
Eligible predicates were not pushed down into subqueries with a LIMIT
OVER clause. The issue has been resolved. |
VER‑78969 |
Hadoop |
Exporting Parquet files with over 2^31 rows caused node failures. The
limit has now been raised to 2^64 rows. |
VER‑79260 |
UI - Management Console |
Previously, the feedback feature had an issue uploading feedback
information. The default behavior was changed, and now the feature sends
information by email. |
Issue Key |
Component |
Description |
VER‑68406 |
Tuple Mover |
When Mergeout Cache is enabled, the dc_mergeout_requests system table now
contains valid transaction ids instead of zero. |
VER‑71064 |
Catalog Sync and Revive, Depot, EON |
Previously, when a node belonging to a secondary subcluster restarted, it lost
files in its depot. This issue has been fixed. |
VER‑72596 |
Data load / COPY, Security |
The COPY option REJECTED DATA to TABLE now properly distributes data between
tables with identical names belonging to different schemas. |
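A sketch of the COPY option in question, with hypothetical schema, table, and path names:

```sql
-- Rejected rows now land in the rejection table of the intended schema,
-- even when another schema contains a table with the same name.
COPY sales.orders FROM '/data/orders.csv' DELIMITER ','
    REJECTED DATA AS TABLE sales.orders_rejects;
```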
VER‑73751 |
Tuple Mover |
The Tuple Mover logged a large number of PURGE requests on a projection while
another MERGEOUT job was running on the same projection. This issue has been
resolved. |
VER‑73773 |
Tuple Mover |
Previously, the Tuple Mover attempted to merge all eligible ROS containers
without considering resource pool capacity. As a result, mergeout failed if the
resource pool could not handle the mergeout plan size. This issue has been
resolved: the Tuple Mover now takes into account resource pool capacity when
creating a mergeout plan, and adjusts the number of ROS containers accordingly. |
VER‑74554 |
Tuple Mover |
Occasionally, the Tuple Mover dequeued DVMERGEOUT and MERGEOUT requests
simultaneously and executed only the DVMERGEOUT requests, leaving the MERGEOUT
requests pending indefinitely. This issue has been resolved: now, after
completing execution of any DVMERGEOUT job, the Tuple Mover always looks for
outstanding MERGEOUT requests and queues them for execution. |
VER‑74615 |
Hadoop |
Fixed a bug in predicate pushdown for Parquet files stored on HDFS. For a
Parquet file spanning multiple HDFS blocks, row groups located on blocks other
than the starting HDFS block were not pruned. In some corner cases, the bug
caused the wrong row group to be pruned, leading to incorrect results. |
VER‑74619 |
Hadoop |
Due to compatibility issues between different open-source libraries,
Vertica failed to read ZSTD-compressed Parquet files generated by some
external tools (such as Impala) when a column contained all NULLs. This issue
has been resolved: Vertica now reads such files correctly. |
VER‑74814 |
Hadoop |
The open-source library that Vertica uses to generate Parquet files buffered
null values inefficiently in memory. This caused high memory usage, especially
when the exported data contained many nulls. The library has been patched to
buffer null values in encoded format, resulting in optimized memory usage. |
VER‑74974 |
Database Designer Core |
Under certain circumstances, Database Designer designed projections that could
not be refreshed by refresh_columns(). This issue has been resolved. |
VER‑75139 |
DDL |
Adding columns to large tables with many columns on an Eon-mode database was
slow and incurred considerable resource overhead, which adversely affected other
workloads. This issue has been resolved. |
VER‑75496 |
Depot |
System tables continued to report that a file existed in the depot after it was
evicted, which caused queries on that file to return "File not found" errors.
This issue has been resolved. |
VER‑75715 |
Backup/DR |
When restoring objects in coexist mode, the STDOUT now contains the correct
schema name prefix. |
VER‑75778 |
Execution Engine |
With Vertica running on machines with very high core counts, complex
memory-intensive queries featuring an analytical function that fed into a merge
operation sometimes caused a crash if the query ran in a resource pool where
EXECUTIONPARALLELISM was set to a high value. This issue has been resolved. |
VER‑75783 |
Optimizer |
The NO HISTOGRAM event was set incorrectly on the dc_optimizer_events table's
hidden epoch column. As a result, the suggested_action column was also set
incorrectly to run analyze_statistics. This issue is resolved: the NO HISTOGRAM
event is no longer set on the epoch column. |
VER‑75806 |
UI - Management Console |
COPY-type queries have been added to the list of queries displayed for
Completed Queries on the Query Monitoring Activity page. |
VER‑75864 |
Data Export |
Previously, during export to Parquet, Vertica wrote the time portion of each
timestamp value as a negative number for all timestamps before the Postgres
epoch date (2000-01-01). As a result, some tools (for example, Impala) could
not load such timestamps from Parquet files exported by Vertica. This issue has
been resolved. |
VER‑75881 |
Security |
Vertica no longer takes a catalog lock during authentication after a user's
password security algorithm has been changed from MD5 to SHA512. This was
achieved by no longer updating the user's salt, which is not used for MD5 hash
authentication. |
VER‑75898 |
Execution Engine |
Calls to export_objects sometimes allocated considerable memory while user
access privileges to the object were repeatedly checked. The accumulated memory
was not freed until export_objects returned, which sometimes caused the database
to go down with an out-of-memory error. This issue has been resolved: now memory
is freed more promptly so it does not excessively accumulate. |
VER‑75933 |
Catalog Engine |
The export_objects meta-function could not be canceled. This issue has been
resolved. |
VER‑76094 |
Data Removal - Delete, Purge, Partitioning |
If you created a local storage location for USER data on a cluster that included
standby nodes, attempts to drop the storage location returned with an error that
Vertica was unable to drop the storage location from standby nodes. This issue
has been resolved. |
VER‑76125 |
Backup/DR |
The access permission check for the S3 bucket root during backup/restore has
been removed. Users with access permissions to specific bucket directories can
now perform backup/restore in those directories without getting AccessDenied
errors. |
VER‑76131 |
Kafka Integration |
Updated documentation to mention support for SCRAM-SHA-256/512. |
VER‑76200 |
Admin Tools |
When adding a node to an Eon-mode database with Administration Tools, users were
prompted to rebalance the cluster, even though this action is not supported for
Eon. This issue was resolved: now Administration Tools skips this step for an
Eon database. |
VER‑76244 |
Depot |
Files that might be candidates for pruning—for example, due to expression
analysis or not read at all as with top-K queries—were unnecessarily read into
the depot, and adversely affected depot efficiency and performance. This problem
has been resolved: now, the depot only fetches from shared storage files that
are read by a statement. |
VER‑76349 |
Optimizer |
The optimizer combines multiple predicates into a single-column Boolean
predicate where subqueries are involved, to achieve predicate pushdown. The
optimizer failed to properly handle cases where two NotNull predicates were
combined into a single Boolean predicate, and returned an error. This issue has
been resolved. |
VER‑76384 |
Execution Engine |
In queries that used variable-length-optimized joins, certain types of joins
incurred a small risk of crashing the database due to a problem when checking
for NULL join keys. This issue has been resolved. |
VER‑76424 |
Execution Engine |
If a query included count(s.*), where s is a subquery, Vertica expected multiple
outputs for s.*. Because Vertica does not support multi-valued expressions in
this context, the expression tree represents s.* as a single record-type
variable. The mismatch in the number of outputs could result in database
failure. In cases like this, Vertica now returns an error message that
multi-valued expressions are not supported. |
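A hypothetical query of the shape described above, which formerly could bring the database down and now returns an error instead:

```sql
-- s is a subquery; COUNT(s.*) is a multi-valued expression in this context.
SELECT COUNT(s.*) FROM (SELECT a, b FROM t) AS s;
```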
VER‑76449 |
Sessions |
Vertica now better detects situations where multiple Vertica processes are
started at the same time on the same node. |
VER‑76511 |
Sessions, Transactions |
Previously, a single-node transaction sent a commit message to all nodes even if
it had no content to commit. This issue has been resolved: a single-node
transaction with no content to commit now commits locally. |
VER‑76543 |
Optimizer, Security |
For a view A.v1, its base table B.t1, and an access policy on B.t1: users no
longer require a USAGE privilege on schema B to SELECT view A.v1. |
VER‑76584 |
Security |
Vertica now automatically creates needed default key projections for a user with
DML access when that user performs an INSERT into a table with a primary key and
no projections. |
VER‑76815 |
Optimizer |
Using unary operators as GROUP BY or ORDER BY elements in WITH clause statements
caused Vertica to crash. The issue is now resolved. |
VER‑76824 |
Optimizer |
If you queried a view whose underlying query invoked a UDx function on a
table with an argument of '*' (all columns), Vertica crashed if the queried
table later changed--for example, if columns were added to it. This issue has
been resolved: the view now returns the same results. |
VER‑76851 |
Data Export |
Added support for exporting UUID types via s3export. Previously, exporting data
with UUID types using s3export sometimes crashed the initiator node. |
VER‑76874 |
Optimizer |
Updating the result set of a query that called the volatile function LISTAGG
resulted in unequal row counts among projections of the updated table. This
issue has been resolved. |
VER‑76952 |
DDL - Projection |
In previous releases, users were unable to alter the metadata of any column in
tables that had a live aggregate or Top-K projection, regardless of whether they
participated in the projection itself. This issue has been resolved: now users
can change the metadata of columns that do not participate in the table's live
aggregate or Top-K projections. |
VER‑76961 |
Spread |
Spread now correctly detects old tokens as duplicates. |
VER‑77006 |
Machine Learning |
The PREDICT_SVM_CLASSIFIER function could cause the database to go down when
provided an invalid value for its optional "type" parameter. The function now
returns an error message indicating that the entered value was invalid and notes
that valid values are "response" and "probability." |
VER‑77007 |
Catalog Engine |
Standby nodes did not get changes to the GENERAL resource pool when they
replaced a down node. This problem has been resolved. |
VER‑77026 |
Execution Engine |
Vertica was unable to optimize queries on v_internal tables, where equality
predicates (with operator =) filtered on columns relname or nspname, in the
following cases:
- The predicate specified expressions with embedded characters such as
underscore (_) or percentage (%). For example:
SELECT * FROM v_internal.vs_columns WHERE nspname = 'x_yz';
- The query contained multiple predicates separated by AND operators, and
more than one predicate queried the same column, either nspname or relname. For
example:
SELECT * FROM v_internal.vs_columns WHERE nspname = 'xyz' AND nspname <> 'vs_internal';
In this case, Vertica was unable to optimize the equality predicate nspname = 'xyz'.
In all these cases, the queries are now optimized as expected.
|
VER‑77134 |
Backup/DR |
Attempts to execute a CREATE TABLE AS statement on a database while it is the
target of a replication operation return an error. The error message has been
updated so it clearly indicates the source of the problem. |
VER‑77173 |
Monitoring |
Startup.log now contains a stage identifying when the node has received the
initial catalog. |
VER‑77190 |
Optimizer |
SELECT clause CASE expressions with constant conditions and string results that
were evaluated to shorter strings sometimes produced an internal error when
participating in joins with aggregation. This issue has been resolved. |
VER‑77199 |
Kafka Integration |
The Kafka Scheduler now allows an initial offset of -3, which indicates to begin
reading from the consumer group offset. |
VER‑77227 |
Admin Tools |
Previously, admintools reported it could not start the database because it was
unable to read database catalogs, but did not provide further details. This
issue has been resolved: the message now provides details on the failure's
cause. |
VER‑77265 |
Catalog Sync and Revive |
Vertica now provides more detailed messages when permission is denied. |
VER‑77278 |
Catalog Engine |
If you called close_session() while running analyze_statistics() on a local
temporary table, Vertica sometimes crashed. This issue has been resolved. |
VER‑77387 |
Directed Query, Optimizer |
If the CTE of a materialized WITH clause was unused and referenced an unknown
column, Vertica threw an error. This behavior was inconsistent with the behavior
of an unmaterialized WITH clause, where Vertica ignored unused CTEs and did not
check them for errors. This problem has been resolved: in both cases, Vertica
now ignores all unused CTEs, so they are never checked for errors such as
unknown columns. |
VER‑77394 |
Execution Engine |
It was unsafe to reorder query predicates when the following conditions were
true:
- The query contained a predicate on a projection's leading sort order
columns that restricted the leading columns to constant values, where the
leading columns also were not run-length encoded.
- A SIPS predicate from a merge join was applied to non-leading sort order
columns of that projection.
This issue has been resolved: query predicates are no longer reordered under
these conditions.
|
VER‑77584 |
Execution Engine |
Before evaluating a query predicate on rows, Vertica gets the min/max of the
expression to determine what rows it can first prune from the queried dataset.
An incorrect check on a block's null count caused Vertica to use the maximum
value of an all-null block, and mistakenly prune rows that otherwise would have
passed the predicate. This issue has been resolved. |
VER‑77695 |
Admin Tools |
In earlier releases, starting a database with the start_db --force option could
delete the data directory if the user lacked read/execute permissions on the
data directory. Now, if the user lacks permissions to access the data directory,
admintools cancels the start operation. If the user has correct permissions,
admintools gives users 10 seconds to abort the start operation. |
VER‑77814 |
Optimizer |
Queries that included the TABLESAMPLE option were not supported for views. This
issue has been resolved: you can now query views with the TABLESAMPLE option. |
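A sketch of the newly supported usage, with a hypothetical view name (TABLESAMPLE takes an approximate percentage of rows):

```sql
-- Sample roughly 10 percent of the rows from a view.
SELECT * FROM sales_summary_view TABLESAMPLE(10);
```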
VER‑77904 |
Admin Tools |
If admintools called create_db and the database creation process was lengthy,
admintools sometimes prompted users to confirm whether to continue waiting. If
the user did not answer the prompt--for example, when create_db was called by a
script--create_db completed execution without creating all database nodes and
properly updating the configuration file admintools.conf. In this case, the
database was incomplete and unusable. Now, the prompt times out
after 120 seconds. If the user doesn't respond within that time period,
create_db exits. |
VER‑77905 |
Execution Engine |
A change in Vertica 10.1 prevented volatile functions from being called multiple
times in an SQL macro. This change affected the throw_error function. The
throw_error function is now marked immutable, so SQL macros can call it multiple
times. |
VER‑77962 |
Catalog Engine |
Vertica now restarts properly for nodes that have very large checkpoint files. |
VER‑78251 |
Data Networking |
In rare circumstances, the socket on which Vertica accepts internal connections
could erroneously close and send a large number of socket-related error messages
to vertica.log. This issue has been fixed. |