Resolved issues release notes for 9.2.x.
This hotfix was internal to Vertica.
Issue Key |
Component |
Description |
VER-67038 |
Build and Release |
A new error message in Export To Parquet did not have an error code.
This issue has been fixed. |
VER-67339 |
Backup/DR |
Dropping the user that owns an object involved in a replicate or restore operation during that
replicate or restore operation no longer causes the nodes involved in the operation to fail. |
VER-67362 |
Data Export, Hadoop |
Dictionary size is now accounted for when estimating row group sizes during data export to Parquet
format. As a result, row groups can no longer exceed their specified size. |
VER-67400 |
UDX |
Flex table views now properly show UTF-8 encoded multi-byte characters. |
VER-67408 |
Client Drivers - Installation Package |
Incorrect permissions on ODBC driver message files potentially prevented the driver from retrieving
error messages in some situations, such as the user not belonging to the same group as the message
files' owner. This problem has been corrected. |
VER-67409 |
DDL - Table |
The database statistics function ANALYZE_STATISTICS no longer acquires a GCL-X lock when run
against local temp tables. |
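As a sketch of the corrected behavior (the table name is hypothetical), statistics can now be collected on a local temp table without the global catalog lock:

```sql
-- Hypothetical local temp table; ANALYZE_STATISTICS on it no longer takes a GCL-X lock.
CREATE LOCAL TEMP TABLE session_scratch (id INT, val FLOAT) ON COMMIT PRESERVE ROWS;
SELECT ANALYZE_STATISTICS('session_scratch');
```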
VER-67410 |
Optimizer |
Vertica now supports subqueries in the ON clause of a query. |
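For illustration only (table and column names are hypothetical), a join whose ON clause contains a subquery might look like:

```sql
-- Join t1 and t2, restricting the join to above-average t2 values via an ON-clause subquery.
SELECT t1.id, t2.val
FROM t1
JOIN t2
  ON t1.id = t2.id
 AND t2.val > (SELECT AVG(val) FROM t2);
```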
VER-67411 |
Execution Engine |
On rare occasions, a query that exceeded its runtime cap automatically restarted instead of
reporting the timeout error. This issue has been fixed. |
VER-67412 |
Installation Program |
If a standby node had previously replaced failed nodes that had deprecated projections, the
identify_unsupported_projections.sh script could not inspect the standby node. When upgrading to
9.1 or later, the standby might not start due to failure of the
'U_DeprecateNonIdenticallySortedBuddies' and/or 'U_DeprecatePrejoinRangeSegProjs' task. This issue
has been resolved. |
VER-67414 |
Optimizer |
Queries that perform an inner join and group the results now return consistent results. |
VER-67425 |
Hadoop |
Vertica can now read TINYINT values in Parquet files generated by Apache Spark. |
VER-67459 |
Execution Engine |
The run_index_tool meta-function can now be cancelled. Combined with projection-name filtering and
multi-threaded execution, this makes this potentially long-running tool more practical for dbadmins to run. |
VER-36308 |
UI - Management Console |
The Vertica MC Linux service command /etc/init.d/vertica-consoled status always displayed status OK
instead of the actual MC state (Running or Stopped). This issue has been resolved. |
VER-36987 |
Optimizer - Statistics and Histogram |
If DROP_STATISTICS specified a table and the argument HISTOGRAMS, the function dropped all
statistics for the table, including row count. This issue has been resolved: the function now drops
only the histograms for that table. |
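Based on the description above (the table name is hypothetical), the corrected call now behaves like this:

```sql
-- Drops only the histograms for my_table; row counts and other statistics are preserved.
SELECT DROP_STATISTICS('my_table', 'HISTOGRAMS');
```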
VER-38419 |
Optimizer - GrpBy and Pre Pushdown |
Aggregate functions can now be pushed down into UNION-ALL subqueries, if it is safe to do so. |
VER-43373 |
DDL - Table |
CREATE TABLE LIKE...INCLUDING PROJECTIONS generated projections for the cloned table that aliased
the new anchor table with the name of the original anchor table. Projections for tables that
are created with CREATE TABLE LIKE no longer use this aliasing mechanism. |
VER-55257 |
Client Drivers - ODBC |
Issuing a query that returned a large result set and closing the statement before retrieving all of
its rows could result in the following error during subsequent operations with the statement:
"An error occurred during query preparation: Multiple commands cannot be active on the same
connection. Consider increasing ResultBufferSize or fetching all results before initiating another
command."
This issue has been fixed. |
VER-56790 |
Client Drivers - JDBC, Client Drivers - ODBC |
Previously, certain server errors were not correctly propagated to the application, which resulted
in the correct native error code and the original server error message not being seen by the
application when it called SQLGetDiagRec(). This problem has been corrected. |
VER-58419 |
UI - Management Console |
The 'Export Design' button was not enabled after a design was completed. This issue has been fixed. |
VER-59114 |
Client Drivers - VSQL |
Sometimes vsql was not able to locate a Kerberos Key Distribution Center (KDC) that was identified
only via specially-named DNS records. This issue has been fixed. |
VER-62414 |
Hadoop |
Previously, Vertica sometimes generated Parquet files with too many rowgroups in a file. Loading
such files using external Parquet tools caused a performance degradation or out of memory errors.
This has been fixed. |
VER-63297 |
Client Drivers - ADO |
Previously, if an integer column value was split across multiple reads by the ADO.NET driver, it was
not correctly captured. This has been corrected. |
VER-63389 |
Data Export, Hadoop |
Previously, Vertica sometimes generated Parquet files with too many rowgroups in a file. Loading
such files using Vertica caused a performance degradation. This has been fixed. |
VER-63769 |
Data Export, Error Handling, Hadoop |
Previously, Parquet and ORC external tables threw an error if the load path contained an empty
folder. A new parser option, 'allow_no_match', when set to 'true', returns an empty result set
instead of an error. |
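A sketch of how the option might be supplied (the path and column definitions are hypothetical, and the exact placement of parser options may differ):

```sql
-- Hypothetical external table; allow_no_match='true' returns an empty result set
-- instead of an error when the load path contains only an empty folder.
CREATE EXTERNAL TABLE sales (id INT, amount FLOAT)
AS COPY FROM 'hdfs:///data/sales/*'
PARQUET(allow_no_match='true');
```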
VER-64592 |
DDL - Table |
Vertica was unable to add a foreign key constraint to a table if the table was referenced by a key
in another table that used the same constraint name. Now, Vertica checks for constraint name
conflicts only within the same table. |
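For illustration (the tables, columns, and the constraint name fk_cust are all hypothetical), the same constraint name can now be reused across tables:

```sql
-- Both tables reuse the constraint name fk_cust; this no longer conflicts,
-- because constraint names are now checked only within each table.
ALTER TABLE orders   ADD CONSTRAINT fk_cust FOREIGN KEY (cust_id) REFERENCES customers (id);
ALTER TABLE invoices ADD CONSTRAINT fk_cust FOREIGN KEY (cust_id) REFERENCES customers (id);
```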
VER-64912 |
Backup/DR |
Altering the owner of a replicated schema with the CASCADE clause caused a node crash on the target
database. This issue has been fixed. |
VER-64941 |
DDL - Table |
Previously, ALTER TABLE...ADD COLUMN required you to explicitly list all existing projections that
you wished to update with the new column. You can now update all projections by qualifying ADD
COLUMN with the new option ALL PROJECTIONS. |
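A minimal sketch, assuming the option follows the column definition (table and column names are hypothetical):

```sql
-- Add new_col to my_table and to all of its projections in one statement.
ALTER TABLE my_table ADD COLUMN new_col INT ALL PROJECTIONS;
```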
VER-65065 |
Kafka Integration |
Vertica now properly supports TLS certificate chains for use with Kafka Scheduler and UDx. |
VER-65082 |
Execution Engine |
Under certain workloads, glibc can accumulate a significant amount of free memory in its allocation
arenas; although free, this memory still consumes physical memory, as indicated by RSS usage. Vertica
now detects this condition and automatically consolidates and returns much of that free memory to
the operating system. |
VER-65137 |
Data load / COPY |
Rejected-data and exceptions files generated during a COPY statement are now created with file permissions 666. |
VER-65154 |
License |
Due to a miscalculation, the license auditor running in sampling mode showed an incorrectly large
standard error. The standard error is the number that appears after the +/- sign on the data size
line in license compliance messages. This issue has been fixed. |
VER-65249 |
UI - Management Console |
In the MC, the license page would periodically not show all the licenses that the database may
contain. This issue has now been resolved. |
VER-65257 |
Hadoop |
Export to Parquet previously crashed when the export included a combination of SELECT statements.
This issue has been fixed. |
VER-65333 |
Nimbus Subscriptions |
After upgrading to 9.2, the Eon database crashed after running rebalance_shards.
This issue has been fixed. |
VER-65338 |
UI - Management Console |
The MC license page would periodically not show the graph representing the license usage. This issue
has now been fixed. |
VER-65419 |
Security |
Setting a row-based access policy on an underlying table of a view denied the user access to the
view. This issue is fixed so the user can see the view as intended. |
VER-65484 |
Backup/DR |
Removed the redundant "Missing info files: skipping..." warnings in the vbr log when performing
backup tasks. |
VER-65501 |
Backup/DR |
Running the remove task using a vbr config file with an incorrect "snapshotName" removed entries
from the backup manifest file. This problem has been resolved. |
VER-65604 |
Security |
Passwords with a semicolon are no longer printed in the log. |
VER-65718 |
Tuple Mover |
If you upgraded a database to version 9.2.0 from any earlier release and configuration parameter
MergeoutCache was set to 1 (enabled), the upgraded database was unable to start. Now, if
MergeoutCache is set to 1 in the earlier release, the upgraded database ignores this setting. To
enable MergeoutCache, you must also set EnableReflexiveMergeout to 0 (disabled). |
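Assuming these are standard configuration parameters settable with SET_CONFIG_PARAMETER, re-enabling the mergeout cache after upgrade might look like:

```sql
SELECT SET_CONFIG_PARAMETER('EnableReflexiveMergeout', 0);  -- disable reflexive mergeout
SELECT SET_CONFIG_PARAMETER('MergeoutCache', 1);            -- enable the mergeout cache
```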
VER-65776 |
UI - Management Console |
While logged into the MC as a non-admin user, the table on the Load page's Continuous tab would
periodically not show all of the currently running Vertica schedulers in use. This issue has
been resolved. |
VER-65801 |
Catalog Engine, Subscriptions |
The database sometimes started with missing shards. This issue is now resolved. |
VER-65818 |
Client Drivers - ADO |
The way the ADO.NET driver previously tracked statements associated with a connection to ensure that
they were always closed when the connection was closed created a memory leak in some use cases. This
problem has been corrected. |
VER-65866 |
Execution Engine, Nimbus, S3 |
On rare occasions, a bug in S3 streaming read caused fatal errors. This issue has been resolved. |
VER-66058 |
DDL |
In Eon mode with DFS, if a node was down, queries were liable to fail and return the error "System is
not k-safe ..." This issue has been resolved. |
VER-66080 |
Execution Engine |
Long 'like' patterns with many non-ASCII characters such as Cyrillic were liable to crash the
server. This issue has been resolved. |
VER-66088 |
Scrutinize |
The scrutinize command previously used the /tmp directory to store files during collection. It now
uses the specified temp directory. |
VER-66095 |
Data load / COPY |
The FJsonParser no longer fails when loading JSON records with more than 4KB of keys and the option
reject_on_duplicate_key=true. |
VER-66136 |
UI - Management Console |
Running the 'Explain' query option for an unsegmented projection on the MC Query Plan page could
trigger the error: "There is no metadata available for this projection". This issue has been
resolved. |
VER-66228 |
Optimizer |
Vertica now disables fast const loading if you are using hierarchical partitioning. |
VER-66240 |
Optimizer - Statistics and Histogram |
The database statistics function ANALYZE_STATISTICS no longer fails if you call it while the
cluster is in a critical state. |
VER-66308 |
Hadoop |
Previously, NULLs in text struct data when loaded through the HCatalog Connector produced a null
pointer exception. This has been fixed. |
VER-66327 |
Client Drivers - JDBC, Security |
JDBC could fail with a stack overflow error when receiving more than 512 MB of data from Vertica.
This issue was due to an incompatibility between Java's and OpenSSL's implementations of TLS
renegotiation. Vertica now contains the tls_renegotiation_limit parameter. You can set this
parameter to 0 to disable SSL/TLS renegotiation and avoid the issue. |
VER-66345 |
Data load / COPY |
Vertica's Parquet file parser would occasionally incorrectly reject rows from files containing a
column with a max definition level of zero and one or more columns with a nonzero max definition.
This issue has been fixed. |
VER-66489 |
Optimizer |
During an optimized merge, Vertica tried to plan the DELETE portion of the MERGE query twice,
sometimes triggering an error. This issue has been resolved. |
VER-66589 |
Execution Engine |
Partition pruning did not work when querying a table's TIMESTAMP column, where the query predicate
specified a TIMESTAMPTZ constant. This issue has been resolved. |
VER-67056 |
Admin Tools |
Sometimes during database revive, admintools treated S3 and HDFS user storage locations as local
filesystem paths, which led to errors during revive. This issue has been resolved. |
VER-63430 |
Optimizer |
At times, query performance was sub-optimal when both of the following occurred:
- The query included a subquery that joined multiple tables in a WHERE clause.
- The parent query included this subquery in an outer join that spanned multiple tables.
This issue has been fixed.
|
VER-62662 |
Data load / COPY |
Occasionally, a COPY or external table query could crash a node.
This issue has been fixed. |
VER-48026 |
DDL - Table |
If you moved a table with foreign keys to a new schema, attempts to drop the original schema
without CASCADE returned a rollback error that referenced a foreign key dependency.
This issue has been fixed. |
VER-62002 |
DDL - Table |
Vertica placed an exclusive lock on the global catalog while it created a query plan for CREATE
TABLE AS. Very large queries could prolong this lock until it eventually timed out. On rare
occasions, the prolonged lock caused an out-of-memory exception that shut down the cluster.
This issue has been fixed. |
VER-59212 |
Sessions |
In cases where a user had privileges on a user resource pool but not the GENERAL pool, certain
DDL statements would fail.
This issue has been fixed. |
VER-63405 |
Catalog Engine, Spread |
If a control node and one of its child nodes went down, attempts to restart the second (child)
node sometimes failed.
This issue has been fixed. |
VER-63839 |
Data load / COPY |
The SKIP keyword of a COPY statement was not properly supported with the FIXEDWIDTH data format.
This issue has been fixed.
|
VER-62810 |
Data load / COPY |
In a COPY statement, excessively long invalid inputs to any date or time columns could cause
stack overflows, resulting in a crash.
This issue has been fixed. |
VER-64716 |
Data load / COPY, FlexTable |
When parsing an array, FJSON PARSER sometimes returned results inconsistent with Vertica versions
prior to 9.1SP1.
This issue has been fixed. |
VER-61431 |
Optimizer - Plan Stability |
The optimizer did not consider active directed queries when it created plans for queries that
included a LABEL hint.
This issue has been fixed. |
VER-60716 |
FlexTable |
The MAPITEMS function returned truncated map values.
The issue has been fixed. |
VER-63650 |
DDL |
In some cases, attempts to add a column with a NOT NULL constraint partially failed: Vertica
added the column but omitted the constraint.
This issue has been fixed. |
VER-63550 |
S3 |
S3Export was not thread safe when the data contained time/date values. Therefore, you could not
use S3Export with PARTITION BEST when exporting time/date values.
This issue has been fixed. |
VER-64421 |
Cloud - Amazon, UI - Management Console |
At times, Management Console failed to add a new host to the database after a long wait.
This issue has been fixed. |
VER-61289 |
Execution Engine, Hadoop |
If a Parquet file metadata was very large, Vertica consumed more memory than was reserved and
crashed when the system ran out of memory.
This issue has been fixed. |
VER-63742 |
AMI, UI - Management Console |
Management Console failed to provision and revive a Vertica database when deploying a Vertica
CloudFormation Template into an existing VPC/subnet with a user-provided VPN access CIDR.
This issue has been fixed. |
VER-64645 |
UI - Management Console |
SMTP alerts were not received within the planned timeframe, for example during a network disconnection.
This issue has been fixed.
|
VER-64705 |
UI - Management Console |
The permissions of /opt/vconsole/mcdb/derby/mcdb/tmp were not being maintained at a secure
setting through restarts of the Management Console.
This issue has been fixed. |
VER-61351 |
Admin Tools |
Adding large numbers of nodes in a single operation could lead to an admintools error about
parsing output.
This issue has been fixed. |
VER-60695 |
Optimizer |
The optimizer could not use a fast plan to perform a refresh operation on tables with multiple
live aggregate projections.
Now, the optimizer applies the refresh operation to each live aggregate projection as a
separate transaction, and applies the fast plan to each one.
This significantly reduces the time required to refresh tables with multiple live aggregate
projections. |
VER-63861 |
AP-Advanced |
If you ran APPROXIMATE_COUNT_DISTINCT_SYNOPSIS on a database table that contained NULL values,
the synopsis object that it returned sometimes was larger than the one it returned after the
NULL values were removed.
This issue has been fixed. |
VER-45444 |
DDL - Projection |
In many cases, Tuple Mover operations were adversely impacted by the high default number of
segmentation and sort columns in a superprojection.
This issue has been fixed. |
VER-64112 |
Optimizer |
Very large expressions could run out of stack and crash the node.
This issue has been fixed. |
VER-51210 |
Execution Engine, Optimizer |
Removed misleading documentation that suggested ANY and ALL operators can be used to evaluate
arrays.
Attempts to do so now throw an error. |
VER-64351 |
Tuple Mover |
When executing heavy workloads over an extended period of time, the Tuple Mover was liable to
accumulate significant memory until its session ended and it released the memory.
This issue has been fixed. |
VER-63844 |
DDL - Projection |
The catalog stored incorrect information about pinned projections.
This issue has been resolved. |
VER-63841 |
Execution Engine |
In some regular expression scalar functions, Vertica would crash for certain long input strings.
This issue has been fixed.
|
VER-62988 |
Data load / COPY, Hadoop |
Queries involving a join of two external tables loading Parquet files sometimes caused Vertica to
crash. The crash happened in a very rare situation due to memory misalignment.
This issue has been fixed. |
VER-58472 |
UI - Management Console |
Management Console could not import multiple clusters when the host had the same private IP
address as the private IP address of a previously imported cluster.
This issue has been fixed. |
VER-61741 |
Optimizer |
When a projection was created with only the PINNED keyword, Vertica incorrectly considered it a
segmented projection. This caused optimizer internal errors and incorrect results when loading
data into tables with these projections.
This issue has been fixed.
IMPORTANT: The fix only applies to newly created pinned projections. Existing pinned projections
in the catalog are still incorrect and must be dropped and recreated manually. |
VER-63728 |
Catalog Engine |
Frequent CREATE/INSERT/DROP table operations caused a memory leak in the catalog.
This issue has been fixed. |
VER-63044 |
Optimizer |
The MERGE USING and INSERT SELECT operations, which selected data to be inserted or
merged via a query with subqueries under Outer Joins, would sometimes result in an Internal
Error.
This issue has been fixed. |
VER-62249 |
Optimizer |
Vertica evaluated permissions on external sources while holding a global catalog lock (GCLX)
during external table creation. If the external sources' file system had issues, this sometimes
caused the GCLX to time out.
This issue is now fixed. |
VER-57071 |
Optimizer |
In a materialized WITH statement, when a WITH clause had the same alias as a regular table in the
FROM clause, Vertica sometimes failed to parse the query correctly.
This issue has been fixed. |
VER-64269 |
UDX |
Queries with CASE-like expressions over inline functions/SQL macros that return strings and that
evaluate CASE expressions internally sometimes returned an error such as "Function can't be used
with an operator."
This issue has been fixed. |
VER-47310 |
Installation Program |
Vertica was not installing properly on OpenStack virtual machines.
This issue has been fixed. |
VER-63969 |
Front end - Parse & Analyze |
When a meta-function was used in a materialized WITH clause, the Vertica process could crash
with a "VAssert(0)" error.
This issue has been fixed. |
VER-64858 |
Optimizer |
A historical query (i.e., at epoch) did not fold stable functions into constants properly, which
caused significant performance degradation compared to a non-historical query.
This issue has been fixed. |
VER-64378 |
Data load / COPY |
A COPY statement with AUTO copy mode did not properly handle TRICKLE copy mode set as the table's
load method, which caused loaded data to fail over to ROS containers when the WOS pool was full.
This issue has been fixed. |
VER-60916 |
DDL, Hadoop |
Creating a view can be expensive if the view query is complex or references external storage,
because Vertica plans the view query during view creation even though the plan is used only for
validation and is discarded afterward.
You can optionally turn off this planning by setting DisableViewQueryPlanning to true. |
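Assuming DisableViewQueryPlanning is a standard configuration parameter (and that it accepts 1 for true, following the usual boolean-as-integer convention), it could be set as:

```sql
-- Skip the validation-only query plan during CREATE VIEW.
SELECT SET_CONFIG_PARAMETER('DisableViewQueryPlanning', 1);
```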
VER-60247 |
Kafka Integration |
You could not configure the scheduler and add microbatches dynamically when the scheduler was
running.
This issue has been fixed.
|