Resolved issues in the Vertica 10.0.x release notes.
This hotfix was internal to Vertica.
Issue Key |
Component |
Description |
VER-40651 |
UI - Management Console |
A security bug in MC has been fixed. |
VER-40662 |
UI - Management Console |
Starting with 10.0SP1, MC disables the HTTP TRACE method on the embedded Jetty server. |
VER-48151 |
Client Drivers - Misc |
Before Vertica version 10.0SP1, result set counts were limited to values that would fit in a
32-bit integer. This problem has been corrected. |
VER-56614 |
Documentation |
Updated DROP_STATISTICS documentation by removing redundant BASE option, which has the same
effect as the ALL option. |
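A minimal sketch of the documented usage, assuming a hypothetical table name; the ALL category (the default) drops all statistics types:
    -- Drop all statistics for a table (ALL is the default category)
    SELECT DROP_STATISTICS('public.sales', 'ALL');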
VER-60017 |
FlexTable |
The FCSVPARSER parser used to ignore the NULL parameter passed to a COPY command. The parser now
applies the NULL parameter to loaded data, replacing matching values with database NULL. |
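A minimal sketch of the fixed behavior, assuming a hypothetical flex table and data file; exact option ordering should be confirmed against the COPY reference:
    -- Hypothetical example: load CSV data into a flex table, treating 'NA' as NULL
    CREATE FLEX TABLE csv_data();
    COPY csv_data FROM '/data/input.csv' PARSER fcsvparser() NULL 'NA';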
VER-60547 |
Admin Tools |
When upgrading the database, if automatic package updates fail, Vertica will provide a HINT with
instructions for updating these packages manually. |
VER-61344 |
Installation Program |
You can optionally use the OS-provided package manager to handle dependencies when you install
the Vertica software package. |
VER-65314 |
Security |
Kerberos updated to 1.18.2. This fixes an issue where Vertica could exceed the limit on open files
when using Kerberos. |
VER-65476 |
Cloud - Amazon |
The vertica.local_verify script raised errors due to low values of nofile, nproc, pid_max, and
max_map_count, and the PAM module requirement, on different Vertica VMs. These values have been
adjusted to eliminate errors for all instances with less than 128GB of memory. |
VER-69442 |
Client Drivers - VSQL, Supported Platforms |
On RHEL 8.0, VSQL has an additional dependency on the libnsl library. Attempting to use VSQL in
Vertica 9.3SP1 or 10.0 without first installing libnsl will fail and produce the following
error:
Could not connect to database (EOF received)
/opt/vertica/bin/vsql: error while loading shared libraries: libnsl.so.1: cannot open shared
object file: No such file or directory |
VER-69860 |
Optimizer |
In some cases, querying certain projections failed to find appropriate partition-level
statistics. This issue has been resolved: queries on projections now use partition-level
statistics as needed. |
VER-70427 |
Client Drivers - JDBC |
The JDBC client driver could hang on a batch update when a table lock timeout occurred. This
update fixes the issue and standardizes JDBC client driver update behavior in batched and
non-batched, prepared and non-prepared situations to follow the JDBC SQL standard. |
VER-70591 |
AP-Geospatial |
In rare cases, describing indexes with the list_polygons parameter set caused the initiator node
to fail. This was caused by incorrectly sizing the geometry or geography output column. This
issue has been resolved: users can now set the shape column size through a new parameter,
shape_column_length, which is specified in bytes. |
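A hedged sketch of how the new parameter might be used, assuming the STV_Describe_Index meta-function and a hypothetical index name; exact parameter syntax should be confirmed in the geospatial function reference:
    -- Hypothetical example: list polygons with a wider shape output column (in bytes)
    SELECT STV_Describe_Index(USING PARAMETERS index='states_index',
                              list_polygons=true,
                              shape_column_length=2048) OVER ();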
VER-70804 |
Execution Engine |
In some cases, cancelling a query caused subsequent queries in the same transaction to return
with an error that they also had been cancelled. This issue has been resolved. |
VER-71728 |
Data load / COPY, SDK |
The protocol used to run a user-defined parser in fenced mode had a bug which would occasionally
cause the parser to process the same data multiple times. This issue has been fixed. |
VER-71983 |
ComplexTypes, Optimizer |
The presence of directed queries sometimes caused "Unknown Expr" warnings with queries involving
arrays and sets. This issue has been resolved. |
VER-71997 |
Backup/DR |
Vertica backups failed to delete old restore points when the number of backup objects
exceeded 10,000. This issue has been fixed. |
VER-72078 |
Optimizer |
The COPY_TABLE meta-function now copies projection statistics from the source table to the new
target table. |
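A minimal sketch, assuming hypothetical source and target table names:
    -- Copy public.sales to a new table; projection statistics are now carried over as well
    SELECT COPY_TABLE('public.sales', 'public.sales_copy');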
VER-72227 |
Cloud - Amazon, Security |
Vertica instances can now only access the S3 bucket that the user specified for communal
storage. |
VER-72242 |
ComplexTypes |
The ALTER TABLE ADD COLUMN command previously did not work correctly when adding a column of
array type. This issue has been fixed. |
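A minimal sketch of the now-working statement, using a hypothetical table and column:
    -- Add an array-typed column to an existing table
    ALTER TABLE customers ADD COLUMN recent_order_ids ARRAY[INT];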
VER-72342 |
Backup/DR |
The error message "Trying to delete untracked object" has been replaced with the more useful
"Trying to delete untracked object: This is likely caused by inconsistent backup metadata. Hint:
Running the quick-repair task can resolve the backup metadata inconsistency. Running the
full-check task can provide more thorough information and guidance on how to fix this issue." |
VER-72443 |
Tuple Mover |
A ROS of inactive partitions was excluded from mergeout if it was very large in relation to the
size of the other ROS containers to be merged. Now, a large ROS container of inactive partitions
is merged with smaller ROS containers when the following conditions are true:
- The total number of ROS containers for the projection is close to the threshold for ROS
pushback.
- The total mergeout size does not exceed the limit for ROS container size.
|
VER-72553 |
DDL |
Vertica returned an error if you renamed an unsegmented non-superprojection and the new name
conflicted with an existing projection name in the same schema. This issue has been resolved:
now Vertica resolves the conflict by modifying the new projection name. |
VER-72577 |
Data load / COPY |
COPY of Parquet files was very slow when the Parquet files were poorly written (for example,
generated by Go) and had wide VARCHAR columns. This issue has been fixed. |
VER-72589 |
Optimizer |
If you created projections in earlier (pre-10.0.x) releases with pre-aggregated data (for
example, LAPs and Top-K projections) and the anchor tables were partitioned with a GROUP BY
clause, their ROS containers are liable to be corrupted by various DML and ILM operations.
In this case, you must rebuild the projections (see the sketch after this entry):
- Identify problematic projections by running the meta-function REFRESH on the database.
- Export the DDL of these projections with EXPORT_OBJECTS or EXPORT_TABLES.
- Drop the projections, then recreate them as originally defined.
- Run REFRESH. Vertica rebuilds the projections with new storage containers.
|
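A sketch of the rebuild procedure described above, using hypothetical object names; the DDL exported in step 2 supplies the actual CREATE PROJECTION statement to re-run:
    -- 1. Identify problematic projections
    SELECT REFRESH();
    -- 2. Export the projection DDL
    SELECT EXPORT_OBJECTS('', 'public.sales_lap_projection');
    -- 3. Drop and recreate the projection from the exported DDL
    DROP PROJECTION public.sales_lap_projection;
    -- (re-run the CREATE PROJECTION statement captured in step 2)
    -- 4. Rebuild with new storage containers
    SELECT REFRESH('public.sales');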
VER-72603 |
DDL |
Setting NULL on a table column's DEFAULT expression is equivalent to setting no default
expression on that column.
So, if a column's DEFAULT/SET USING expression was already NULL, then changing the column's
data type with ALTER TABLE...ALTER COLUMN...SET DATA TYPE removes its DEFAULT/SET USING
expression. |
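A minimal sketch of the described behavior, using a hypothetical table and column:
    -- Setting DEFAULT NULL is equivalent to having no default expression
    ALTER TABLE t ALTER COLUMN c SET DEFAULT NULL;
    -- Changing the data type now removes the (NULL) DEFAULT/SET USING expression
    ALTER TABLE t ALTER COLUMN c SET DATA TYPE VARCHAR(100);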
VER-72613 |
Security |
The LDAPLinkSearchTimeout configuration parameter has been restored. |
VER-72625 |
Optimizer |
The "COMMENT ON" statement no longer causes certain views (all_tables, user_functions, etc.) to
show duplicate entries. |
VER-72683 |
Optimizer |
In some cases, common sub-expressions in the SELECT clause of an INSERT...SELECT statement were
not reused, which caused performance degradation. Also, the EXPLAIN-generated query plan
occasionally rendered common sub-expressions incorrectly. These issues have been resolved. |
VER-72721 |
Catalog Engine, Execution Engine |
Dropping a column no longer corrupts partition-level statistics, which previously caused
subsequent runs of the ANALYZE_STATISTICS_PARTITION meta-function on the same partition range to
fail. If you have existing corrupted partition-level statistics, drop the statistics and run
ANALYZE_STATISTICS_PARTITION to recreate them. |
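A sketch of the suggested cleanup, assuming a hypothetical table partitioned by date and the DROP_STATISTICS_PARTITION meta-function:
    -- Drop corrupted partition-level statistics for a partition range, then recreate them
    SELECT DROP_STATISTICS_PARTITION('public.sales', '2020-01-01', '2020-06-30');
    SELECT ANALYZE_STATISTICS_PARTITION('public.sales', '2020-01-01', '2020-06-30');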
VER-72797 |
Data load / COPY |
Added the support guidance parameter 'UDLMaxDataBufferSize' with a default value of
256 x 1024 x 1024 bytes (256 MB). Its value can be increased to avoid 'insufficient memory'
errors during User-Defined Load. |
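A hedged sketch of raising the parameter at the database level; the value shown is an assumption for illustration:
    -- Increase the UDL data buffer limit to 512 MB (512 x 1024 x 1024 bytes)
    ALTER DATABASE DEFAULT SET UDLMaxDataBufferSize = 536870912;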
VER-72832 |
Execution Engine |
Querying UUID data types with an IN operator ran significantly slower than an equivalent query
using OR. This problem has been resolved. |
VER-72852 |
Kafka Integration, Security |
The scheduler now supports CA bundles at the UDX and vkconfig level. |
VER-72864 |
Optimizer |
An INSERT statement with a SELECT clause that computed many complex expressions and returned a
small result set sometimes performed slower than running the same SELECT statement
independently. This issue has been resolved. |
VER-72886 |
Execution Engine |
Setting configuration parameter CascadeResourcePoolAlwaysReplan = 1 occasionally caused problems
when a timed-out query had already started producing results on the original resource pool. Now,
if configuration parameter CascadeResourcePoolAlwaysReplan = 1, and a query times out on the
original resource pool and must cascade to a secondary pool, the following behavior applies (see
the sketch after this entry):
- If the query has not started to produce results, the query cascades to the secondary
resource pool and creates another query plan.
- If the query has already started to produce results, the query cascades to the secondary
resource pool, where CascadeResourcePoolAlwaysReplan is ignored, and query output continues.
|
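A sketch of a cascading pool setup to which the behavior above applies; the pool names and runtime cap are assumptions for illustration:
    -- Queries that exceed 1 minute on short_pool cascade to long_pool
    CREATE RESOURCE POOL long_pool;
    CREATE RESOURCE POOL short_pool RUNTIMECAP '1 minute' CASCADE TO long_pool;
    -- Force replanning on cascade (subject to the behavior described above)
    ALTER DATABASE DEFAULT SET CascadeResourcePoolAlwaysReplan = 1;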
VER-72930 |
Monitoring |
Two new columns have been added to Data Collector system table dc_resource_acquisitions:
session_id and request_id. |
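A minimal sketch querying the new columns:
    -- Inspect recent resource acquisitions together with the new session/request identifiers
    SELECT time, node_name, session_id, request_id
    FROM dc_resource_acquisitions
    ORDER BY time DESC
    LIMIT 10;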
VER-72943 |
Optimizer |
The hint message that was associated with the DEPOT_FETCH hint referenced a deprecated alias of
that hint. The hint has been fixed to reference the supported hint name. |
VER-72952 |
DDL - Projection |
Users without the required table and schema privileges were able to create projections if the
CREATE PROJECTION statement used the createType hint with an argument of L, D, or P. This
problem has been resolved. |
VER-72991 |
Hadoop |
Previously, when accessing data on HDFS, Vertica would sometimes connect to the wrong HDFS
cluster in cases where several namenodes from different clusters had the same hostname. This is
now fixed. |
VER-73064 |
Control Networking |
When a wide table had many storage containers, adding a column sometimes caused the server to
fail. This issue has been resolved. |
VER-73080 |
DDL |
If you renamed a schema, the change was not propagated to table DEFAULT expressions that
specified sequences of that schema. This issue has been resolved: all sequences of a renamed
schema are now updated with the new schema name. |
VER-73100 |
DDL - Table |
When you add a column to a table with ALTER TABLE...ADD COLUMN, Vertica registers an
AddDerivedColumnEvent, which it uses during node recovery to avoid rebuilding projections of
that table. When you dropped a column from the same table with ALTER TABLE...DROP COLUMN,
Vertica updated this event to indicate that the number of table columns changed. At the same
time, it also checked if the dropped column was referenced as the default expression of another
column in the table. If so, Vertica returned an error and a hint to advance the AHM before
executing the drop. Users who lacked privileges to advance the AHM were blocked from
performing the drop operation.
This issue has been resolved: given the same conditions, dropping a column no longer
depends on advancing the AHM. Instead, Vertica now sets the AddDerivedColumnEvent attribute
number to InvalidAttrNumber. When the next recovery operation detects this setting, it rebuilds
the affected projections.
|
VER-73102 |
DDL - Table |
When you renamed a table column, in some instances the DDL of the table projections retained the
previous column name as an alias. This problem has been resolved. |
VER-73111 |
Backup/DR |
Error reporting by MIGRATE_ENTERPRISE_TO_EON on unbundled storage containers has been updated:
MIGRATE_ENTERPRISE_TO_EON now returns the names of all tables with projections that store data in
unbundled storage containers, and recommends running the meta-function COMPACT_STORAGE() on those
tables. |
VER-73131 |
Admin Tools |
The admintools locking mechanism has been changed to prevent instances of admintools from
leaving behind admintools.conf.lock, which could stop secondary nodes from starting. |
VER-73209 |
Data load / COPY |
Under certain circumstances, the FJsonParser rejecting rows could cause the database to panic.
This issue has been fixed. |
VER-73224 |
Optimizer |
Previously, you could only set configuration parameter PushDownJoinFilterNull at the database
level. If this parameter is set to 0, the NOT NULL predicate is not pushed down to the SCAN
operator during JOIN operations. Now, this parameter can also be set for the current session. |
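A minimal sketch of the new session-level setting:
    -- Disable the NOT NULL push-down for the current session only
    ALTER SESSION SET PushDownJoinFilterNull = 0;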
VER-73304 |
Tuple Mover |
If mergeout on a purge request failed because the target table was locked, the Tuple Mover was
unable to execute mergeout on later purge requests for the same table. This issue has been
resolved. |
VER-73314 |
UI - Management Console |
An issue with selecting various Vertica database versions for Data Source on MC has been fixed. |
VER-73380 |
Optimizer |
Users now receive a hint to increase the value of MaxParsedQuerySizeMB if their queries surpass
the database's query memory limit. |
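A hedged sketch of acting on the hint; the value shown is an assumption for illustration:
    -- Raise the query parse memory limit to 1 GB at the database level
    ALTER DATABASE DEFAULT SET MaxParsedQuerySizeMB = 1024;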
VER-73516 |
Catalog Engine |
Under very rare circumstances, when a node panicked during commit, subsequent recovery resulted
in discrepancies among several buddy projections. This issue has been resolved. |
Issue Key |
Component |
Description |
VER-73015 |
ComplexTypes |
The ALTER TABLE ADD COLUMN command previously did not work correctly when adding a column of
array type. This issue has been fixed. |
VER-73139 |
Optimizer |
In some cases, common sub-expressions in the SELECT clause of an INSERT...SELECT statement were
not reused, which caused performance degradation. Also, the EXPLAIN-generated query plan
occasionally rendered common sub-expressions incorrectly. These issues have been resolved. |
VER-73172 |
Hadoop |
Previously, when accessing data on HDFS, Vertica would sometimes connect to the wrong HDFS
cluster in cases where several namenodes from different clusters had the same hostname. This is
now fixed. |
VER-73261 |
DDL - Projection |
Users without the required table and schema privileges were able to create projections if the
CREATE PROJECTION statement used the createType hint with an argument of L, D, or P. This
problem has been resolved. |
VER-73262 |
Execution Engine |
In some cases, cancelling a query caused subsequent queries in the same transaction to return
with an error that they also had been cancelled. This issue has been resolved. |
VER-73265 |
Control Networking |
When a wide table had many storage containers, adding a column could sometimes cause the server
to fail. The issue is fixed. |
VER-73442 |
Admin Tools |
Admintools did not start because the file admintools.conf.lock was left behind by another
instance of admintools. The locking mechanism has been changed to address this issue. |
VER-73464 |
Optimizer |
In some cases, querying certain projections failed to find appropriate partition-level
statistics. This issue has been resolved: queries on projections now use partition-level
statistics as needed. |
VER-73467 |
Tuple Mover |
A ROS of inactive partitions was excluded from mergeout if it was very large in relation to
the size of the other ROS containers to be merged. Now, a large ROS container of inactive
partitions is merged with smaller ROS containers when the following conditions are true:
- The total number of ROS containers for the projection is close to the threshold for ROS
pushback.
- The total mergeout size does not exceed the limit for ROS container size.
|
VER-73468 |
DDL |
If you renamed a schema, the change was not propagated to table DEFAULT expressions that
specified sequences of that schema. This issue has been resolved: all sequences of a renamed
schema are now updated with the new schema name. |
VER-73470 |
Data load / COPY |
Under certain circumstances, the FJsonParser rejecting rows could cause the database to panic.
This issue has been resolved. |
VER-73480 |
ComplexTypes, Optimizer |
The presence of directed queries sometimes caused "Unknown Expr" warnings with queries involving
arrays and sets.
This issue has been resolved. |
VER-73478 |
Optimizer |
Previously, you could only set configuration parameter PushDownJoinFilterNull at the database
level. If this parameter is set to 0, the NOT NULL predicate is not pushed down to the SCAN
operator during JOIN operations. Now, this parameter can also be set for the current session. |
Issue Key |
Component |
Description |
VER-68548 |
UI - Management Console |
A problem occurred in MC where JavaScript cached the TLS checkbox state while importing a
database. This problem has been fixed. |
VER-71398 |
UI - Management Console |
Previously, an exception occurred if a database that had no password was used as the production
database for Extended Monitoring. This issue has been fixed. |
VER-69103 |
AP-Advanced |
Queries using user-defined aggregates or the ACD library would occasionally return the error
"DataArea overflow". This issue has been fixed. |
VER-52301 |
Kafka Integration |
When an error occurs while parsing Avro messages, Vertica now provides a more helpful error
message in the rejection table. |
VER-68043 |
Kafka Integration |
Previously, the KafkaAvroParser would report a misleading error message when the
schema_registry_url pointed to a page that was not an Avro schema. KafkaAvroParser now reports a
more accurate error message. |
VER-69208 |
Kafka Integration |
The example vkconfig launch script now uses nohup to prevent the scheduler from exiting
prematurely. |
VER-69988 |
Kafka Integration, Supported Platforms |
In newer Linux distributions (RHEL 8 or Debian 10, for example) the rdkafka library had an issue
with the new glibc thread support. This issue could cause the database to go down when executing
a COPY statement via the KafkaSource function. This issue has been resolved. |
VER-70919 |
Kafka Integration |
Applied a patch for librdkafka issue #2108 to fix an infinite loop that caused COPY statements
using KafkaSource() and their respective schedulers to hang until the Vertica server processes
were restarted.
https://github.com/edenhill/librdkafka/issues/2108 |
VER-71114 |
Execution Engine |
Fixed an issue where loading large Kafka messages would cause an error. |
VER-69437 |
Supported Platforms |
The vertica_agent.service can now stop gracefully on Red Hat Enterprise Linux version 7.7. |
VER-70932 |
License |
In some cases, the license auditor could core dump on tables containing columns with SET USING
expressions. This issue has been fixed. |
VER-67024 |
Client Drivers - ODBC |
Previously, most batch insert operations (performed with the ODBC driver) that resulted in the
server rejecting some rows presented the user with an inaccurate error message:
"Row rejected by server; see server log for details"
No such details were actually available in the server log. This error message has been
changed to:
"Row rejected by server; check row data for truncation or null constraint violations" |
VER-69654 |
Third Party Tools Integration |
Previously, the length of the returned object was based on the input's length. However, numeric
formats do not generally preserve size, since 1 may encrypt to 1000000, and so on. This led to
problems such as copying a 20-byte object into a 4-byte VString object. This fix ensures that
the length of the output buffer is at least the input length + 100 bytes. |
VER-54779 |
Spread |
When Vertica was installed with a separate control network (by using the "--control-network"
option during installation), replacing an existing node or adding a new one to the cluster could
require restarting the whole database. This issue has been fixed. |
VER-69112 |
Security, Third Party Tools Integration |
NULL input to VoltageSecureProtect and VoltageSecureAccess now returns a NULL value. |
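A hedged sketch of the new behavior; the format name is a hypothetical placeholder:
    -- NULL input now yields NULL instead of an error
    SELECT VoltageSecureProtect(NULL USING PARAMETERS format='ssn');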
VER-70371 |
Admin Tools |
Log rotation now functions properly in databases upgraded from 9.2.x to 9.3.x. |
VER-70488 |
Admin Tools |
Starting a subcluster in an EON Mode database no longer writes duplicate entries to the
admintools.log file. |
VER-70973 |
Admin Tools |
The Database Designer no longer produces an error when it targets a schema other than the
"public" schema. |
VER-68453 |
Tuple Mover |
In previous releases, the TM resource pool always allocated two mergeout threads to inactive
partitions, no matter how many threads were specified by its MAXCONCURRENCY parameter. The
inability to increase the number of threads available to inactive partitions sometimes caused
ROS pushback. Now, the Tuple Mover can allocate up to half of the MAXCONCURRENCY-specified
mergeout threads to inactive partitions. |
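A minimal sketch of raising MAXCONCURRENCY on the TM pool so that more mergeout threads (up to half) can serve inactive partitions; the value is an assumption for illustration:
    -- Allow up to 6 concurrent Tuple Mover threads; up to 3 may now work on inactive partitions
    ALTER RESOURCE POOL tm MAXCONCURRENCY 6;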
VER-70836 |
Optimizer |
In some cases, the optimizer required an unusual amount of time to generate plans for queries on
tables with complex live aggregate projections. This issue has been resolved. |
VER-71748 |
Optimizer |
Queries with a mix of single-table predicates and expressions over several EXISTS queries in
their WHERE clause sometimes returned incorrect results. The issue has been fixed. |
VER-71953 |
Optimizer, Recovery |
Projection data now remains consistent following a MERGE during node recovery in Enterprise
mode. |
VER-71397 |
Execution Engine |
In some cases, malformed queries on specific columns, such as constraint definitions with
unrecognized values or transformations with improper formats, caused the database to fail. This
problem has been resolved. |
VER-71457 |
Execution Engine |
Certain MERGE operations were unable to match unique hash values between the inner and outer
sides of an optimized merge join. This resulted in setting the hash table key and value to null.
Attempts to decrement the outer join hash count failed to take these null values into account,
which caused node failure. This issue has been resolved. |
VER-71148 |
DDL - Table |
If a column had a constraint and its name contained ASCII and non-ASCII characters, attempts to
insert values into this column sometimes caused the database to fail. This issue has been
resolved. |
VER-71151 |
DDL - Table |
ALTER TABLE...RENAME was unable to rename multiple tables if projection names of the target
tables were in conflict. This issue was resolved by maintaining local snapshots of the renamed
objects. |
VER-62046 |
DDL - Projection |
When partitioning a table, Vertica first calculated the number of partitions, and then verified
that columns in the partition expression were also in all projections of that table. The
algorithm has been reversed: now Vertica checks that all projections contain the required
columns before calculating the number of partitions. |
VER-71145 |
Data Removal - Delete, Purge, Partitioning |
Vertica now provides clearer messages when meta-function drop_partitions fails due to an
insufficient resource pool. |
VER-70607 |
Catalog Engine |
In Eon Mode, projection checkpoint epochs on down nodes now become consistent with the current
checkpoint epoch when the nodes resume activity. |
VER-61279 |
Hadoop |
Previously, loading data into an HDFS storage location would occasionally fail with an "Error
finalizing ROS DataTarget" message. This is now fixed. |
VER-63413 |
Hadoop |
Previously, exporting very large tables to Parquet format would sometimes fail with a "file not
found" exception. This issue has been fixed. |
VER-68830 |
Hadoop |
On some HDFS distributions, if a datanode is killed during a Vertica query to HDFS, the namenode
fails to respond for a long time, causing Vertica to time out and roll back the transaction. In
such cases Vertica used to log a confusing error message, and if the timeout happened during the
SASL handshaking process, Vertica would hang without the possibility of being canceled. Vertica
now logs a better error message (stating that the dfs.client.socket-timeout value should be
increased on the HDFS cluster), and the hang during SASL handshaking is now cancelable. |
VER-71088 |
Hadoop |
Previously, Vertica queries accessing HDFS would fail immediately if an HDFS operation returned
a 403 error code (such as "Server too busy"). Vertica now retries the operation. |
VER-71542 |
Hadoop |
The Vertica function INFER_EXTERNAL_TABLE_DDL is now compatible with Parquet files that use
2-level encoding for LIST types. |
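A hedged sketch, assuming the two-argument form of the function and a hypothetical Parquet path:
    -- Generate a CREATE EXTERNAL TABLE statement from Parquet files that use 2-level LIST encoding
    SELECT INFER_EXTERNAL_TABLE_DDL('/data/orders/*.parquet', 'orders');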
VER-71047 |
Backup/DR |
With log level 3, vbr no longer prints the contents of dbPassword, dest_dbPassword, and
serviceAccessPass. |
VER-71451 |
EON |
The clean_communal_storage meta-function is now up to 200X faster. |