New features

This guide briefly describes the new features introduced in the most recent releases of Vertica and provides references to detailed information in the documentation set.

For known and fixed issues in the most recent release, see the Vertica Release Notes.

1 - Deprecated and removed functionality

Vertica retires functionality in two phases:

  • Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.

  • Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal (see History, below). Documentation that describes this functionality is removed, but remains in previous documentation versions.

Deprecated

The following functionality was deprecated and will be removed in future versions:

12.0.3

  • ADO.NET driver support for .NET 3.5

  • prometheus.createServiceMonitor Helm chart parameter

12.0.2

  • JDBC 4.0 and 4.1 support

  • Support for Visual Studio 2008 and 2010 plug-ins

  • CA bundles. Use TLS configurations instead.

  • The TLSMODE, CA BUNDLE, and CERTIFICATE parameters for CREATE NOTIFIER and ALTER NOTIFIER. Use the TLS CONFIGURATION parameter instead. For details, see Creating a Kafka notifier.

  • The TLSMODE PREFER parameter for CONNECT TO VERTICA. Use the TLS CONFIGURATION parameter instead.

  • cert-manager for Helm chart TLS configuration. Use the webhook.certSource parameter to generate certificates internally or provide custom certificates. See Helm chart parameters.

  • The webhook.caBundle Helm chart parameter. Provide the certificate authority (CA) bundle in a Secret to webhook.tlsSecret.

12.0.0

  • The hive_partition_cols parameter for the PARQUET and ORC parsers. Use COPY...PARTITION COLUMNS instead. See Partitioned file paths.

  • The following ODBC/JDBC OAuth parameters, replaced by OAuthJsonConfig (ODBC) and oauthjsonconfig (JDBC):

    • OAuthAccessToken/oauthaccesstoken

    • OAuthRefreshToken/oauthrefreshtoken

    • OAuthClientId/oauthclientid

    • OAuthClientSecret/oauthclientsecret

    • OAuthTokenUrl/oauthtokenurl

    • OAuthDiscoveryUrl/oauthdiscoveryurl

    • OAuthScope/oauthscope

Removed

The following functionality was removed:

12.0.3

  • Support for Visual Studio 2008 and 2010 plug-ins

  • vsql support for macOS 10.12-10.14. vsql now requires macOS 10.15 or higher.

12.0.1

  • Internet Explorer 11. The Internet Explorer 11 web browser is retired and out of support as of June 15, 2022.

12.0.0

  • macOS 10.12 ODBC and vsql client driver support

  • AWS Library (S3EXPORT). Replaced by File export.

History

The following functionality or support has been deprecated or removed as indicated:

Functionality Component Deprecated in: Removed in:

The following Kafka user-defined session parameters:

  • kafka_SSL_CA

  • kafka_SSL_Certificate

  • kafka_SSL_PrivateKey_Secret

  • kafka_SSL_PrivateKeyPassword_secret

  • kafka_Enable_SSL

Kafka 12.0.3
vsql support for macOS 10.12-10.14 Client drivers 12.0.3
CA bundles Security 12.0.2

The following parameters for CREATE NOTIFIER and ALTER NOTIFIER:

  • TLSMODE

  • CA BUNDLE

  • CERTIFICATE

Security 12.0.2
The TLSMODE PREFER parameter for CONNECT TO VERTICA. Security 12.0.2
JDBC 4.0 and 4.1 support Client drivers 12.0.2
Support for Visual Studio 2008 and 2010 plug-ins Client drivers 12.0.2 12.0.3
Internet Explorer 11 support Management Console 12.0.1
ODBC support for macOS 10.12-10.14 Client drivers 12.0

The following ODBC/JDBC OAuth parameters:

  • OAuthAccessToken/oauthaccesstoken

  • OAuthRefreshToken/oauthrefreshtoken

  • OAuthClientId/oauthclientid

  • OAuthClientSecret/oauthclientsecret

  • OAuthTokenUrl/oauthtokenurl

  • OAuthDiscoveryUrl/oauthdiscoveryurl

  • OAuthScope/oauthscope

Client drivers 12.0
hive_partition_cols parameter for PARQUET and ORC parsers Server 12.0

INFER_EXTERNAL_TABLE_DDL function Server 11.1.1
Admission Controller Webhook image Kubernetes 11.0.1 11.0.2
Admission Controller Helm chart Kubernetes 11.0.1
Shared DATA and DATA,TEMP storage locations Server 11.0.1
DESIGN_ALL option for EXPORT_CATALOG() Server 11.0
HDFSUseWebHDFS configuration parameter and LibHDFS++ Server 11.0
INFER_EXTERNAL_TABLE_DDL (path, table) syntax Server 11.0 11.1.1

AWS library functions:

  • AWS_GET_CONFIG

  • AWS_SET_CONFIG

  • S3EXPORT

  • S3EXPORT_PARTITION

Server 11.0 12.0
Vertica Spark connector V1 Client 11.0
admintools db_add_subcluster --is-secondary argument Server 11.0
Red Hat Enterprise Linux/CentOS 6.x Server 10.1.1 11.0
STRING_TO_ARRAY(array,delimiter) syntax Server 10.1.1
Vertica JDBC API com.vertica.jdbc.kv package Client Drivers 10.1
ARRAY_CONTAINS function Server 10.1

Client-server TLS parameters:

  • SSLCertificate

  • SSLPrivateKey

  • SSLCA

  • EnableSSL

LDAP authentication parameters:

  • tls_key

  • tls_cert

  • tls_cacert

  • tls_reqcert

LDAPLink and LDAPLink dry-run parameters:

  • LDAPLinkTLSCACert

  • LDAPLinkTLSCADir

  • LDAPLinkStartTLS

  • LDAPLinkTLSReqCert

Server 10.1 11.0
MD5 hashing algorithm for user passwords Server 10.1
Reading structs from ORC files as expanded columns Server 10.1 11.0
vbr configuration section [S3] and S3 configuration parameters Server 10.1
flatten_complex_type_nulls parameter to the ORC and Parquet parsers Server 10.1 11.0
System table WOS_CONTAINER_STORAGE Server 10.0.1 11.0.2
skip_strong_schema_match parameter to the Parquet parser Server 10.0.1 10.1
Specifying segmentation on specific nodes Server 10.0.1
DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE Server 10.0.1 11.0.1
Meta-function ANALYZE_CORRELATIONS Server 10.0
Eon Mode meta-function BACKGROUND_DEPOT_WARMING Server 10.0
Reading structs from Parquet files as expanded columns Server 10.0 10.1

Eon Mode meta-functions:

  • SET_DEPOT_PIN_POLICY

  • CLEAR_DEPOT_PIN_POLICY

Server 10.0 10.1
vbr configuration parameter SnapshotEpochLagFailureThreshold Server 10.0

Array-specific functions:

  • array_min

  • array_max

  • array_sum

  • array_avg

Server 10.0 10.1
DMLTargetDirect configuration parameter Server 10.0
HiveMetadataCacheSizeMB configuration parameter Server 10.0 10.1
MoveOutInterval Server 10.0
MoveOutMaxAgeTime Server 10.0
MoveOutSizePct Server 10.0
Windows 7 Client Drivers 9.3.1
DATABASE_PARAMETERS admintools command Server 9.3.1
Write-optimized store (WOS) Server 9.3 10.0
7.2_upgrade vbr task Server 9.3
DropFailedToActivateSubscriptions configuration parameter Server 9.3 10.0
‑‑skip-fs-checks Server 9.2.1
32-bit ODBC Linux and OS X client drivers Client 9.2.1 9.3
Vertica Python client Client 9.2.1 10.0
macOS 10.11 Client 9.2.1
DisableDirectToCommunalStorageWrites configuration parameter Server 9.2.1
CONNECT_TO_VERTICA meta-function Server 9.2.1 9.3
ReuseDataConnections configuration parameter Server 9.2.1 9.3
Network interfaces (superseded by network addresses) Server 9.2
Database branching Server 9.2 10.0
KERBEROS_HDFS_CONFIG_CHECK meta-function Server 9.2
Java 5 support JDBC Client 9.2 9.2.1

Configuration parameters for enabling projections with aggregated data:

  • EnableExprsInProjections

  • EnableGroupByProjections

  • EnableTopKProjections

  • EnableUDTProjections

Server 9.2
DISABLE_ELASTIC_CLUSTER() Server 9.1.1 11.0
eof_timeout parameter of KafkaSource Server 9.1.1 9.2
Windows Server 2012 Server 9.1.1
Debian 7.6, 7.7 Client driver 9.1.1 9.2.1
IdolLib function library Server 9.1 9.1.1
SSL certificates that contain weak CA signatures such as MD5 Server 9.1
HCatalogConnectorUseLibHDFSPP configuration parameter Server 9.1
S3 UDSource Server 9.1 9.1.1
HCatalog Connector support for WebHCat Server 9.1
partition_key column in system tables STRATA and STRATA_STRUCTURES Server 9.1 10.0.1
Vertica Pulse Server 9.0.1 9.1.1
Support for SQL Server 2008 Server 9.0.1 9.0.1
SUMMARIZE_MODEL meta-function Server 9.0 9.1
RestrictSystemTable parameter Server 9.0.1
S3EXPORT multipart parameter Server 9.0
EnableStorageBundling configuration parameter Server 9.0
Machine Learning for Predictive Analytics package parameter key_columns for data preparation functions. Server 9.0 9.0.1
DROP_PARTITION meta-function, superseded by DROP_PARTITIONS Server 9.0
Machine Learning for Predictive Analytics package parameter owner. Server 8.1.1 9.0
Backup and restore --setupconfig command Server 8.1 9.1.1
SET_RECOVER_BY_TABLE meta-function. Do not disable recovery by table. Server 8.0.1
Column rebalance_projections_status.duration_sec Server 8.0
HDFS Connector Server 8.0 9.0
Prejoin projections Server 8.0 9.2
Administration Tools option --compat21 Server 7.2.1
admin_tools -t config_nodes Server 7.2 11.0.1
Projection buddies with inconsistent sort order Server 7.2 9.0
backup.sh Server 7.2 9.0
restore.sh Server 7.2 9.0
copy_vertica_database.sh Server 7.2
JavaClassPathForUDx configuration parameter Server 7.1
ADD_LOCATION meta-function Server 7.1
bwlimit configuration parameter Server 7.1 9.0
vbr configuration parameters retryCount and retryDelay Server 7.1 11.0
EXECUTION_ENGINE_PROFILE counters: file handles, memory allocated Server 7.0 9.3
EXECUTION_ENGINE_PROFILES counter memory reserved Server 7.0
MERGE_PARTITIONS() meta-function Server 7.0

krb5 client authentication method All clients 7.0
range-segmentation-clause Server 6.1.1 9.2
scope parameter of meta-function CLEAR_PROFILING Server 6.1
Projection creation type IMPLEMENT_TEMP_DESIGN Server, clients 6.1

3 - New and changed in Vertica 12.0.0

3.1 - Client connectivity

Keepalive and heartbeat protocol settings

These new parameters let you adjust TCP keepalive and heartbeat protocol settings at the database and session levels.

  • KeepAliveIdleTime: The time, in seconds, before the first TCP keepalive probe is sent to ensure that the client is still connected.

  • KeepAliveProbeInterval: The time interval, in seconds, between keepalive probes.

  • KeepAliveProbeCount: The number of consecutive keepalive probes that must be unacknowledged by the client before the client connection is considered lost and closed.

For details, see Managing client connections.
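These parameters follow the usual configuration-parameter syntax, so (as a sketch, with illustrative values) they can be set at the database level with ALTER DATABASE:

```sql
-- Illustrative values: first probe after 10 minutes of idle time,
-- then every 30 seconds; close the connection after 5 unacknowledged probes.
=> ALTER DATABASE DEFAULT SET KeepAliveIdleTime = 600;
=> ALTER DATABASE DEFAULT SET KeepAliveProbeInterval = 30;
=> ALTER DATABASE DEFAULT SET KeepAliveProbeCount = 5;
```

Because the parameters also apply at the session level, the equivalent per-session settings can be made with ALTER SESSION SET.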

3.2 - Client drivers

Node.js client driver

The vertica-nodejs client driver is now available.

Centralized OAuth configuration

To simplify OAuth configuration, you can now set OAuth parameters as a single JSON string with the new OAuthJsonConfig (ODBC) and oauthjsonconfig (JDBC) parameters.

To preserve existing configurations, the old parameters take precedence over the new JSON parameter, but the new JSON parameter is the recommended configuration method, and the old parameters have been deprecated.

JDBC DataSource user property

You can now get and set the user property of DataSource with getUser() and setUser(), respectively. For details, see JDBC API.

ODBC client driver enhancements

A number of enhancements have been made to the ODBC client driver to better conform to the ODBC specification. These changes may cause regressions in existing ODBC client programs:

  • If an application calls SQLSetDescField to set any field other than the following, the record becomes unbound:

    • SET_DESC_COUNT

    • The deferred fields:

      • SQL_DESC_DATA_PTR

      • SQL_DESC_INDICATOR_PTR

      • SQL_DESC_OCTET_LENGTH_PTR

  • SQLSetDescField must be called in the sequence order specified in the Microsoft documentation.

  • If an application wants to set the precision or scale field for a numeric or date type, it must use SQLSetDescField to explicitly set those fields rather than relying on SQLBindParameter. For details, see the Microsoft documentation.

  • If an application calls SQLGetData, the StrLen_or_IndPtr can return the actual length of the data available, SQL_NO_TOTAL, or SQL_NULL_DATA. If the data is truncated, Vertica returns SQL_NO_TOTAL. When checking the length of the available buffer, you must consider all cases, including SQL_NO_TOTAL and SQL_NULL_DATA. For details, see the Microsoft documentation on SQLGetData and Getting Long Data.

  • If an application calls SQLExecute, the response will always attempt to set the SQLSTATE, including the status and other metadata.

  • Many exception messages have been improved with more detailed error information.

JDBC: CallableStatement

The Vertica JDBC client driver now supports stored procedures through CallableStatement. For details, see Executing queries through JDBC.

3.3 - Containers and Kubernetes

Spread encryption on Vertica database

You can enable Spread encryption on the Vertica database. Use the encryptSpreadComm custom resource parameter to set the EncryptSpreadComm security parameter for the database.

For details, see Custom resource definition parameters.

Prometheus operator integration

Vertica on Kubernetes integrates with the Prometheus operator to facilitate adoption in environments that currently use the Prometheus operator.

For details, see Prometheus integration.

3.4 - Database management

Disk quota for schemas and tables

You can set disk quotas for schemas, individual tables, or both. Disk quotas are checked by most user operations that increase storage size. Quotas do not affect recovery, rebalancing, or Tuple Mover operations.

In Eon Mode, disk usage is an aggregate of all disk space used by all shards for the schema or table. This value is computed for primary subscriptions only. In Enterprise Mode, disk usage is the sum of disk space used by all storage containers on all nodes for the schema or table. This sum excludes buddy projections but includes all other projections.

The DISK_QUOTA_USAGES system table records quotas and current usage for objects that have quotas.

For details, see Disk quotas.
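As a sketch of how quotas might be declared (assuming a DISK_QUOTA clause on the DDL statements; the names and sizes here are illustrative):

```sql
-- Cap an entire schema at 20 GB and one of its tables at 2 GB.
=> CREATE SCHEMA sales DISK_QUOTA '20G';
=> CREATE TABLE sales.orders (id INT, total FLOAT) DISK_QUOTA '2G';

-- Review quotas and current usage.
=> SELECT * FROM DISK_QUOTA_USAGES;
```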

3.5 - Diagnostic tools

Scrutinize

scrutinize now collects the full time zone by running the following command:

timedatectl | grep "Time zone"

Output is directed to:

scrutinize-output-dir/VerticaScrutinize.timestamp/node-name/context/Commands/timezone

For example, after running scrutinize:

$ cat ./VerticaScrutinize.20220513110429/v_vmart_node0001/context/Commands/timezone
       Time zone: America/New_York (EDT, -0400)

3.6 - Documentation updates

Several organizational changes have been made to make information easier to find:

  • Top-level topics have been re-ordered to place similar information together. The SQL Reference has been moved near the bottom with the other reference documentation. The lists of configuration parameters have been moved into the SQL Reference.

  • In the SQL Reference, functions and meta-functions have been combined under Functions, and the alphabetical list includes all of them. (Previously, the alphabetical list included only meta-functions.) Meta-functions are labeled as such on their reference pages.

  • The documentation about client drivers has been refactored.

  • The information that was in the Concepts Guide has been placed with the other documentation on those topics. For example, the section about projections is now with the other documentation about projections.

  • Several duplicate topics were removed, and several collections of short related topics have been consolidated.

3.7 - Eon Mode

Re-shard an Eon Mode database

The new RESHARD_DATABASE function allows you to change the number of shards in an Eon Mode database. Previously, the shard count was set during database creation and could not be altered.

For more information about re-sharding, see Change the number of shards in the database.
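For example, the function takes the new shard count as its argument, so re-sharding a database to 24 shards might look like:

```sql
=> SELECT RESHARD_DATABASE(24);
```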

Graceful shutdown

If you want to drain a subcluster's client connections before shutting it down, you can now gracefully shut down subclusters using SHUTDOWN_WITH_DRAIN. The function allows existing connections on the subcluster to continue their work until either all connections close or a user-specified timeout is reached. When one of these conditions is met, the function proceeds to shut down the subcluster.

For more information about the graceful shutdown process, see Graceful Shutdown.
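As a sketch, assuming the function takes a subcluster name and a timeout in seconds:

```sql
-- Drain client connections on subcluster analytics_1, then shut it down;
-- force shutdown after 300 seconds if connections remain open.
=> SELECT SHUTDOWN_WITH_DRAIN('analytics_1', 300);
```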

3.8 - Loading data

Partitioned file paths

Data files are sometimes partitioned in the file system using the directory structure. Partitioning moves values out of the raw data, where they have to be included for each row, and into the directory structure, saving disk space. Partitioning can also improve query performance by allowing entire directories to be skipped.

Previously, only the Parquet and ORC parsers could take advantage of partitioned file paths. Now COPY supports partitioned file paths for all parsers using the new PARTITION COLUMNS option. The hive_partition_cols parameter for the Parquet and ORC parsers is deprecated.

Partitioned file path handling has the following behavior changes from previous releases:

  • Nested partition directories must appear in consistent order in the file system. The following path pattern is invalid:

    /data/created=2022-01-01/region=north
    /data/region=south/created=2022-01-02
    
  • If the column value cannot be parsed from the directory name, COPY rejects the path instead of treating the value as null.

  • If the path is missing a declared partition column, COPY always returns an error. Previously, if do_soft_schema_match_by_name was true in the Parquet parser, the parser filled the column with nulls.

  • Partition columns are no longer required to be the last columns in the table definition.

See Partitioned file paths.
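As an illustrative sketch (the table, paths, and clause placement here are assumptions, not taken from the reference page), loading a consistently ordered partition layout might look like:

```sql
-- Values for created and region come from the directory names,
-- not from the data files themselves.
=> COPY sales FROM '/data/created=*/region=*/*.parquet'
   PARTITION COLUMNS created, region PARQUET;
```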

3.9 - Machine learning

Isolation forest

Vertica now supports the isolation forest (iForest) algorithm. iForest is an unsupervised outlier detection algorithm that you can use for applications such as fraud detection and system health monitoring, or for removing outliers from your data. You can use the IFOREST function to train an iForest model and the APPLY_IFOREST function to apply the model to an input relation.

For an extended discussion on the iForest algorithm, see Isolation Forest.
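As a sketch of the train-then-apply workflow (column, table, and model names are illustrative; the signatures assume Vertica's usual machine learning pattern of training on a named relation and applying by column list):

```sql
-- Train an iForest model on two columns of a readings table.
=> SELECT IFOREST('sensor_iforest', 'sensor_readings', 'temperature, pressure');

-- Apply the model to flag outliers in the same relation.
=> SELECT APPLY_IFOREST(temperature, pressure
          USING PARAMETERS model_name='sensor_iforest')
   FROM sensor_readings;
```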

Option to build LINEAR_REG and LOGISTIC_REG models without calculating intercept

You now have the option of training linear and logistic regression models without calculating an intercept. See LINEAR_REG and LOGISTIC_REG for details.

3.10 - Management Console

Edit custom alerts

You can edit the following properties of an existing custom alert:

  • Alert Name

  • SQL Query

  • Variable values

For details, see Custom alerts.

Revive an Eon Mode database on Microsoft Azure

Management Console supports reviving Eon Mode database clusters on Microsoft Azure.

Alternate filesystem layouts for reviving and scaling on AWS

On Amazon Web Services (AWS), you can use alternate filesystem layouts for the depot, temp, and catalog Vertica directories for the following actions:

  • Reviving a database

  • Creating a subcluster

  • Scaling a subcluster

For details, see Amazon Web Services in MC and Subclusters in MC.

3.11 - Projections

Partition range support for aggregate projections

Partition ranges can now be specified for live aggregate and top-K projections.
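As an illustrative sketch (assuming the ON PARTITION RANGE syntax used for partition range projections; the table, columns, and dates are hypothetical):

```sql
-- Live aggregate projection limited to a range of recent partitions;
-- NULL as the upper bound leaves the range open-ended.
=> CREATE PROJECTION sales_recent_agg AS
   SELECT sale_date, SUM(amount) FROM sales GROUP BY sale_date
   ON PARTITION RANGE BETWEEN '2022-01-01' AND NULL;
```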

3.12 - SDK updates

Polymorphic aggregate functions

User-defined aggregate functions (UDAFs) can now be polymorphic. A polymorphic function can accept any number and type of arguments. For information on how to write a polymorphic function, see Creating a polymorphic UDx.

Complex types in the Python SDK

You can now read and write complex types in user-defined extensions written in Python. The complex types support includes arrays, rows, and combinations of both. For details, see Arguments and return values. For examples of a Python UDx utilizing complex types, see Python example: matrix multiplication and Python example: complex types JSON parser.

3.13 - Security and authentication

Generic authentication errors

To improve security, Vertica no longer issues the following error when a user attempts to connect and authenticate:

Invalid username or password

In addition, Vertica no longer issues method-specific error messages, and all authentication failures will result in the same error message:

authentication failed for username "name"

This change may affect clients that decide to retry connection attempts based on the type of connection error code. One such example is vsql, which would previously attempt a plaintext connection if TLS failed (and TLSMODE was ENABLE), but not if vsql received an error code reporting incorrect user credentials.

Because the new, generic message and error code do not specify the cause of the failure, vsql cannot distinguish between an authentication error caused by an invalid TLS configuration and one caused by invalid user credentials, and it attempts a plaintext connection in both cases.

Default authentication records

Vertica now creates three default authentication records and grants them to the public role. These authentication records have a priority of -1, so all user-created authentication records take priority over these default records.

In previous versions of Vertica, if no authentication records were enabled, the following implicit authentication records were applied:

  • Users without a password were authenticated with the trust method. This implicit authentication record continues to apply in 12.0.0 for users without a password.

  • Users with a password were authenticated with the password method. This implicit authentication record has been removed in favor of the default records.

Fallthrough authentication

You can now allow authentication records to fall through to the next record (in order of priority) upon failure. For details, see Fallthrough authentication.

Upgrade behavior

In previous versions, this fallthrough behavior was only available for ident (could fall through to any other authentication method) and ldap (could only fall through to other ldap methods) authentication methods and this behavior could not be disabled. As of Vertica 12.0.0, this behavior is disabled by default for all new authentication records, including ident and ldap.

To preserve the behavior of existing databases that depended on ident's fallthrough, Vertica automatically enables fallthrough for ident authentication records on upgrade if all of the following are true:

  • The database contains an ident authentication record.

  • The ident authentication record has the highest priority.

  • The database contains another user-defined authentication record.

In previous versions, ldap records fell through only to other ldap records, skipping records that used other methods. This is no longer the case; ldap records are now fallthrough-compatible with many other methods. Therefore, to replicate the old fallthrough behavior for ldap, your ldap records must be consecutive (in priority order) so that they fall through to each other.

3.14 - SQL functions and statements

INFER_TABLE_DDL supports JSON

You can now use the INFER_TABLE_DDL function to produce candidate table definitions from JSON files. Because a JSON file does not have an explicit schema, the function inspects the data itself. Because JSON data can vary from record to record or file to file, the function can return more than one candidate definition. In the following example, differences are highlighted:

=> SELECT INFER_TABLE_DDL ('/data/*.json'
    USING PARAMETERS table_name='restaurants', format='json',
max_files=3, max_candidates=3);
WARNING 0:  This generated statement contains one or more float types which might lose precision
WARNING 0:  This generated statement contains one or more varchar/varbinary types which default to length 80

                INFER_TABLE_DDL
------------------------------------------------------------------------
 Candidate matched 1/2 of total files(s):
  create table "restaurants"(
    "cuisine" varchar,
    "location_city" Array[varchar],
    "menu" Array[Row(
      "item" varchar,
      "price" float
    )],
    "name" varchar
  );
Candidate matched 1/2 of total files(s):
  create table "restaurants"(
    "cuisine" varchar,
    "location_city" Array[varchar],
    "menu" Array[Row(
      "items" Array[Row(
        "item" varchar,
        "price" numeric
      )],
      "time" varchar
    )],
    "name" varchar
  );

(1 row)

Immutable tables

Immutable tables are insert-only tables in which existing data cannot be modified, regardless of user privileges. Updating row values and deleting rows are prohibited. Certain changes to table metadata—for example, renaming tables—are also prohibited, in order to prevent attempts to circumvent these restrictions.

You set an existing table to be immutable with ALTER TABLE:

ALTER TABLE table SET IMMUTABLE ROWS;

For details, see Immutable tables.

3.15 - Users and privileges

By default, synchronizing LDAP users through the LDAP Link service automatically grants roles (derived from their LDAP groups) to the users. However, these are not default roles and therefore must be enabled manually with SET ROLE before they take effect.

The new LDAPLinkAddRolesAsDefault parameter (disabled by default) makes these roles default roles automatically:

=> ALTER DATABASE DEFAULT SET LDAPLinkAddRolesAsDefault = 1;

For details on this and other LDAP Link parameters, see LDAP link parameters.

3.16 - Vertica on the cloud

AWS i4i, r6i, and c6i instance types now supported

Vertica now supports three i4i, two r6i, and one c6i EC2 instance types for use in database clusters. See Supported AWS instance types.

4 - New and changed in Vertica 12.0.1

4.1 - admintools

stop_db and stop_subcluster graceful shutdown

When you run the admintools commands stop_db and stop_subcluster on an Eon Mode database, the default behavior is now to gracefully shut down the specified subclusters. Both commands provide override options that force immediate shutdown of the target subclusters.

For examples, see Stopping the database and Stopping a Subcluster.

Fast startup

The admintools command start_db has a new --fast option for starting Eon Mode databases. When invoked on an unencrypted database, Vertica uses startup information from cluster_config.json to expedite the startup process.

4.2 - Client drivers

OAuthJsonConfig precedence

The connection parameters OAuthJsonConfig (ODBC) and oauthjsonconfig (JDBC) now take precedence over the deprecated OAuth parameters:

  • OAuthAccessToken/oauthaccesstoken

  • OAuthRefreshToken/oauthrefreshtoken

  • OAuthClientId/oauthclientid

  • OAuthClientSecret/oauthclientsecret

  • OAuthTokenUrl/oauthtokenurl

  • OAuthDiscoveryUrl/oauthdiscoveryurl

  • OAuthScope/oauthscope

4.3 - Complex types

Add new fields to complex type columns

Previously, Vertica did not allow you to change the fields of complex type columns in native tables. Now, you can add new fields to an existing complex type column using ALTER TABLE. See Adding a new field to a complex type column for details.
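As a sketch (assuming the new field is added by restating the column's full ROW type with ALTER TABLE...SET DATA TYPE; the table and field names are hypothetical):

```sql
-- Original column definition: meta ROW(owner VARCHAR, created DATE)
-- Append a new "notes" field to the existing ROW type.
=> ALTER TABLE docs ALTER COLUMN meta
   SET DATA TYPE ROW(owner VARCHAR, created DATE, notes VARCHAR);
```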

Field aliases in ROW literals

In ROW literals, you can now name individual fields using AS. The following two statements are equivalent:

=> SELECT ROW('Amy' AS name, 2 AS id, false AS current) AS student;

=> SELECT ROW('Amy', 2, false) AS student(name, id, current);

4.4 - Containers and Kubernetes

Elevate security context privileges for core files

The securityContext custom resource definition parameter can elevate Vertica server container privileges so that you can perform privileged actions, such as create core files that contain information about the Vertica server process.

For details, see Custom resource definition parameters and Troubleshooting your Kubernetes cluster. For additional details about security context, see the Kubernetes documentation.

Resize persistent volumes automatically on managed Kubernetes services

When you use a storageClass that allows volume expansion and you allocate depot storage with a percentage of the total disk space, the operator automatically adjusts the size of the PV and the depot when you update spec.local.requestSize.

For details, see Custom resource definition parameters.

Override helm chart object naming convention

The nameOverride Helm chart parameter sets the naming prefix for all objects that the Helm chart creates.

For details, see Helm chart parameters.

Java 8 support in the Vertica server image

The Vertica server image includes the Java 8 OpenJDK so that you can develop Java UDx and use the Java runtime in your Vertica environment.

For details about Java UDx development, see Java SDK. For a detailed list of Vertica images, see Containerized Vertica.

4.5 - Machine learning

Isolation forest

The APPLY_IFOREST function now supports a contamination parameter that specifies the approximate ratio of data points in the training data that are labeled as outliers. The function uses this contamination value to calculate an anomaly score threshold, a minimum limit that determines whether an input data point is marked as an outlier. You can set contamination instead of setting a threshold explicitly with the threshold parameter.
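For example (hypothetical table, columns, and model name; this assumes contamination is passed alongside model_name in the USING PARAMETERS clause):

```sql
-- Derive the anomaly-score threshold from an expected 5% outlier ratio
-- instead of setting the threshold parameter explicitly.
=> SELECT APPLY_IFOREST(temperature, pressure
          USING PARAMETERS model_name='sensor_iforest', contamination=0.05)
   FROM sensor_readings;
```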

4.6 - Management Console

Eon Mode root volume encryption on AWS

On Eon Mode on Amazon Web Services (AWS), the root volume is encrypted by default when you perform any of the following operations with Management Console:

  • Provision a database

  • Revive a database

  • Scale up a database

  • Add a new subcluster

For details, see Amazon Web Services in MC and Subclusters in MC.

Support gp3 volume type on AWS

Vertica supports the gp3 volume type as the default volume type for root and EBS volumes on Amazon Web Services (AWS). The gp3 volume type is more cost-efficient and has better performance for input/output operations per second (IOPS) than the gp2 volume type. For details, see the Amazon gp3 product overview documentation.

Vertica uses gp3 volumes as the default volume type for the following operations in both Enterprise Mode and Eon Mode:

  • Create a database

  • Revive a database

  • Add a subcluster

  • Scale up a subcluster

Additionally, the Vertica CloudFormation Template (CFT) uses the gp3 volume as the default volume type.

For details, see the documentation about Vertica on AWS.

AWS cluster creation i4i instance type support

Management Console now supports three AWS i4i instance types for database cluster creation. For configuration details, see Enterprise Mode volume configuration defaults for AWS and Eon Mode volume configuration defaults for AWS.

4.7 - Performance improvements

REFRESH_COLUMNS

Refreshing a SET USING column where SET USING is set to a constant value such as null has been optimized. Compared to previous releases, calling REFRESH_COLUMNS in REBUILD mode now performs significantly faster when it repopulates a SET USING column with constant values.

One exception applies: this optimization does not apply to SET USING subqueries such as SELECT null.
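
A typical call that benefits from this optimization looks like the following, assuming a table whose audit_flag column is defined with SET USING a constant value (names are hypothetical):

=> SELECT REFRESH_COLUMNS('public.orders', 'audit_flag', 'REBUILD');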

Backup/restore

Better manifest processing and garbage collection have significantly improved performance of vbr backup and restore operations.

4.8 - S3 object store

Server-side encryption (SSE)

Vertica now supports all three types of S3 server-side encryption: SSE-S3, SSE-KMS, and SSE-C. See S3 object store.

4.9 - Security and authentication

Custom TLS configurations

You can now create custom TLS Configurations and assign them to the following database parameters to secure different connection types:

  • ServerTLSConfig

  • HttpsTLSConfig

  • LDAPLinkTLSConfig

  • LDAPAuthTLSConfig

  • InternodeTLSConfig

For details, see Security parameters.

InternodeTLSConfig TLSMODE

You can now set the TLSMODE for internode encryption with ALTER TLS CONFIGURATION.

For details on data channel TLS, see Data channel TLS.
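
For example, assuming the internode TLS Configuration is named data_channel, a statement such as the following sets its TLSMODE:

=> ALTER TLS CONFIGURATION data_channel TLSMODE 'TRY_VERIFY';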

4.10 - SQL functions and statements

CREATE and DROP TLS configurations

You can now create and drop TLS Configurations.
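
For example, the following sketch creates a TLS Configuration, associates a certificate with it, and then drops it (the configuration and certificate names are hypothetical):

=> CREATE TLS CONFIGURATION my_tls;
=> ALTER TLS CONFIGURATION my_tls CERTIFICATE my_cert TLSMODE 'ENABLE';
=> DROP TLS CONFIGURATION my_tls;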

REFRESH_COLUMNS updates

The REFRESH_COLUMNS function now supports a Boolean flag that specifies whether to split ROS containers if the range of partition keys to refresh spans multiple containers or part of a single container. By default, this flag is set to true: split ROS containers as needed.
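
For example, the following call refreshes a column over a range of partition keys and passes the split flag explicitly as the trailing Boolean argument (table, column, and key values are hypothetical):

=> SELECT REFRESH_COLUMNS('inventory', 'stock_status', 'REBUILD',
       '2023-01-01', '2023-03-31', true);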

4.11 - Users and privileges

Expanded cryptographic privileges

Keys, certificates, and TLS Configurations now support granting and revoking privileges.

Superusers have limited privileges on cryptographic objects that they do not own. For details, see Database Object Privileges.

Cryptographic keys and certificates

You can now grant/revoke USAGE, DROP, and ALTER privileges on cryptographic keys.

You cannot explicitly grant privileges on certificates; rather, privileges on certificates come from ownership or privileges on the underlying cryptographic key. For details, see GRANT (key).

TLS configurations

You can now grant/revoke DROP and USAGE privileges on TLS Configurations.
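
For example, the following sketch grants a key privilege and a TLS Configuration privilege (object and user names are hypothetical):

=> GRANT USAGE ON KEY client_key TO analyst;
=> GRANT DROP ON TLS CONFIGURATION my_tls TO analyst;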

5 - New and changed in Vertica 12.0.2

5.1 - Client connectivity

Client labels

You can now set the client label for vsql connections at the start of the session with the --label option. Client sessions and their labels are listed in the SESSIONS system table. This option supplements the existing SET_CLIENT_LABEL function, which sets the client label for existing sessions.
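
After connecting with a label (for example, vsql --label etl_loader), you can list labeled sessions with a query such as:

=> SELECT session_id, user_name, client_label FROM sessions;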

JDBC database name

The database name parameter in DriverManager.getConnection() is now optional.

5.2 - Client drivers

OAuth: disable hostname validation

You can now disable hostname validation of the OAuth identity provider with the boolean oauthvalidatehostname in OAuthJsonConfig.

OAuthValidateHostname is set to true by default, requiring the identity provider's certificate to list the IP or hostname of the identity provider. For details, see JDBC connection properties and ODBC DSN connection properties.

ODBC request timeout

You can now specify the timeout for requests sent by ODBC clients with the ConnectionTimeout parameter. For details on this and other parameters, see ODBC DSN connection properties.

5.3 - Complex types

ARRAY_FIND and CONTAINS can now search for elements that satisfy a predicate by using Lambda functions. The new FILTER function applies a predicate to an array and returns an array containing only elements that satisfy the predicate. See Searching and Filtering.
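
For example, the following sketch finds the index of the first array element greater than 10, then filters the same array down to elements greater than 4:

=> SELECT ARRAY_FIND(ARRAY[1,5,8,13], x -> x > 10);
=> SELECT FILTER(ARRAY[1,5,8,13], x -> x > 4);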

Parquet export

EXPORT TO PARQUET now supports the ARRAY and ROW types in any combination.
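
For example, the following sketch exports a ROW constructed in the query output along with an array column (the S3 path, table, and column names are hypothetical):

=> EXPORT TO PARQUET(directory = 's3://mybucket/orders_export')
   AS SELECT order_id, ROW(city, zipcode) AS address, item_list FROM orders;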

5.4 - Containers and Kubernetes

IAM profile authentication to Amazon EKS

You can authenticate to Amazon Elastic Kubernetes Service (EKS) with an Identity and Access Management (IAM) profile. For details, see Configuring communal storage.

TLS certificate management for helm charts

The VerticaDB operator can generate and manage TLS certificates with the webhook.certSource Helm chart parameter. The operator can generate certificates internally, or it can manage existing certificates.

For details about setting the webhook.certSource parameter, see Helm chart parameters.

For details about managing TLS certificates for Helm charts, see Installing the Vertica DB operator.

CA bundle management for helm charts

You can include your certificate authority (CA) bundle in the Secret provided to the webhook.tlsSecret Helm chart parameter. This parameter stores the CA bundle with its TLS certificate and keys.

Previously, you had to use the webhook.caBundle parameter to provide access to the CA bundle. This parameter is now deprecated.

For details, see Helm chart parameters.

Custom catalog path

You can create a custom catalog path in the container filesystem to store the catalog in a different location than the local data. Set the local.catalogPath custom resource definition (CRD) parameter to define the custom catalog location.

For details, see Custom resource definition parameters.

Role and RoleBinding creation

The skipRoleAndRoleBindingCreation Helm chart parameter determines whether the Helm chart creates Roles and RoleBindings when you install the Helm chart. When set to true, you can install the Helm chart as a user that does not have privileges to create Roles or RoleBindings.

For details about the parameter, see Helm chart parameters.

For details about granting operator privileges, see Installing the Vertica DB operator.

5.5 - Documentation updates

New setup section

Documentation about setting up Vertica in on-premises and cloud environments has been combined into a new Setup section. The section contains three parts.

Information about setting up Vertica in Kubernetes remains in Containerized Vertica.

5.6 - Eon Mode

RESHARDING_EVENTS system table

The RESHARDING_EVENTS system table provides information about historic and ongoing resharding operations, such as the current status of a resharding process and the timestamps for resharding events.

5.7 - Installation and upgrade

Install binary packages in parallel

The new --parallel-no-prompts flag tells the install script to install the Vertica binaries on hosts in parallel rather than one at a time. Using this flag reduces the time it takes to install Vertica, especially in large clusters. See --parallel-no-prompts.

5.8 - Licensing and auditing

Improved handling of AutoPass licenses

Vertica now rejects AutoPass licenses on builds that don't include the AutoPass License Server, and returns an error code and notice detailing why the license failed to parse.

5.9 - Loading data

Warning on unmatched data fields

The JSON and Avro parsers now produce warnings if the data being loaded contains fields that are not part of the table definition. New fields are logged in the UDX_EVENTS table and can be added to the target table definition using ALTER TABLE.

For more information, see Schema evolution and the parser reference pages, FJSONPARSER and FAVROPARSER.

5.10 - Machine learning

Autoregression models support the yule-walker algorithm

You can now train autoregression models with the Yule-Walker algorithm. The AUTOREGRESSOR training function provides a method parameter by which you can specify either the Ordinary Least Squares or Yule-Walker training algorithm.
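
For example, the following sketch trains an autoregressive model of order 3 with the Yule-Walker method (model, table, and column names are hypothetical):

=> SELECT AUTOREGRESSOR('ar_temp', 'temp_readings', 'temperature', 'ts'
       USING PARAMETERS p=3, method='yule-walker');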

Jaro-winkler distance

You can now calculate the Jaro Distance and Jaro-Winkler Distance between two strings.
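
For example, assuming the functions are named JARO_DISTANCE and JARO_WINKLER_DISTANCE:

=> SELECT JARO_DISTANCE('martha', 'marhta');
=> SELECT JARO_WINKLER_DISTANCE('martha', 'marhta');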

5.11 - Management Console

Localize user interface text

You can localize user interface (UI) text using language files in /opt/vconsole/temp/webapp/resources/i18n/lang. The language files include locales.json, a list of supported languages, and a resource bundle, a directory of files containing UI text strings that you can translate. By default, Vertica provides resource bundles for the following:

  • American English

  • Simplified Chinese

For details, see Localizing user interface text.

Support for r6i and c6i instance types on AWS

Management Console supports the following c6i and r6i EBS instance types on Amazon Web Services (AWS) for both Eon and Enterprise Mode:

  • c6i.4xlarge

  • c6i.8xlarge

  • c6i.12xlarge

  • c6i.16xlarge

  • r6i.4xlarge

  • r6i.8xlarge

  • r6i.12xlarge

  • r6i.16xlarge

You can select these instance types when you perform any of the following operations with Management Console:

  • Provision a database

  • Revive a database

  • Scale up a database

  • Add a new subcluster

For details about EBS volume defaults, see Eon Mode volume configuration defaults for AWS and Enterprise Mode volume configuration defaults for AWS.

Scale down and terminate subclusters with stopped nodes

You can scale down or terminate subclusters with stopped nodes in the following cloud environments:

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

  • Microsoft Azure

For details, see Scaling subclusters in MC and Terminating subclusters in MC.

5.12 - Query optimization

Interpolate next value

Previously, event series joins allowed you to directly compare values from two series using INTERPOLATE PREVIOUS. This feature has now been expanded, allowing you to use INTERPOLATE NEXT as well.
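
For example, the following sketch joins two event series and fills gaps with the next value from the joined series (table and column names are hypothetical):

=> SELECT t.ts, t.price, b.bid
   FROM trades t LEFT OUTER JOIN bids b
   ON t.ts INTERPOLATE NEXT VALUE b.ts;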

5.13 - Security and authentication

TLS configuration support

Notifiers

You can now specify a TLS Configuration for CREATE NOTIFIER and ALTER NOTIFIER. This parameter supersedes the existing TLSMODE, CA BUNDLE, and CERTIFICATE notifier parameters.

To maintain backward compatibility, Vertica automatically generates TLS Configurations for notifiers that use the deprecated parameters.

CONNECT TO VERTICA

You can now specify a TLS Configuration for CONNECT TO VERTICA. This parameter supersedes the existing TLSMODE PREFER parameter.

5.14 - SQL functions and statements

DROP TLS CONFIGURATION

You can now drop custom TLS Configurations.

Inverse hyperbolic functions

Vertica now supports inverse hyperbolic functions.

Directed query enhancements

The meta-function SAVE_PLANS lets you create multiple optimizer-generated directed queries from the most frequently executed queries. New columns have been added to the system table DIRECTED_QUERIES to store SAVE_PLANS-generated metadata.

Several directed query statements—ACTIVATE DIRECTED QUERY, DEACTIVATE DIRECTED QUERY, and DROP DIRECTED QUERY—now support WHERE clauses that can resolve to multiple directed queries. For example:

DEACTIVATE DIRECTED QUERY WHERE save_plans_version = 21;

5.15 - Vertica on the cloud

Vertica AMIs support IMDSv2

Vertica AMIs can now use the AWS Instance Metadata Service Version 2 (IMDSv2) to authenticate to AWS services, including S3.

6 - New and changed in Vertica 12.0.3

6.1 - Client connectivity

Client label environment variable

You can now set a label for the vsql client with the V_CLIENT_LABEL environment variable. For details, see vsql environment variables.

6.2 - Containers and Kubernetes

TLS certificate authentication to prometheus metrics

In some environments, you might need to access Prometheus metrics from an external client. Vertica provides the prometheus.tlsSecret Helm chart parameter to configure a role-based access control (RBAC) proxy sidecar to authenticate requests with user-provided TLS certificates.

For details, see Helm chart parameters and Prometheus integration.

Set readiness probe on container

The readinessProbeOverride custom resource definition (CRD) parameter overrides settings for the default readiness probe so that you can fine-tune when the Vertica pod is ready to accept traffic.

For details, see Custom resource definition parameters.

IRSA profile authentication to Amazon EKS

You can authenticate to Amazon Elastic Kubernetes Service (EKS) with IAM roles for service accounts (IRSA). For details, see Configuring communal storage.

Operator scheduling rules for helm chart

You can control which node the operator pod is scheduled on with the following Helm chart parameters:

  • affinity

  • nodeSelector

  • priorityClassName

  • tolerations

For details, see Helm chart parameters.

Set liveness and startup probes on container

The livenessProbeOverride and startupProbeOverride custom resource definition (CRD) parameters override settings for the corresponding default probes. These parameters fine-tune how the container and the Vertica process within the container indicate their state to other objects in the StatefulSet.

For details, see Custom resource definition parameters.

Override pod-level security context

The podSecurityContext custom resource definition (CRD) parameter can elevate pod-level privileges so that you can perform privileged actions, such as setting sysctl commands in the pod.

For details, see Custom resource definition parameters. For additional details about pod-level privileges, see the Kubernetes documentation.

6.3 - Loading data

Optimization for external tables on object stores

When an external table uses data that is stored on an object store (S3, GCS, or Azure), and that data has many levels of partitioning, performance can be affected. The ObjectStoreGlobStrategy configuration parameter allows you to read and prune partitions using a different strategy, improving performance when queries are selective and there are many partition directory levels. With this setting, fewer calls are made to the object store API. For details, see Partitions on Object Stores.
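
A database-level setting might look like the following sketch; the parameter value shown is illustrative, so check the documentation for the supported values:

=> ALTER DATABASE DEFAULT SET PARAMETER ObjectStoreGlobStrategy = 'Hierarchical';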

6.4 - Machine learning

Extract predictor importance of XGBoost models

The new XGB_PREDICTOR_IMPORTANCE function measures the importance of the predictors in XGB_CLASSIFIER and XGB_REGRESSOR models.

K-prototypes

You can now use the k-prototypes clustering algorithm to cluster mixed data into different groups based on similarities between data points.
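
As a sketch, assuming the training function is named KPROTOTYPES and follows the same call pattern as KMEANS (model, table, and parameter values are hypothetical):

=> SELECT KPROTOTYPES('kp_model', 'customers', '*', 3
       USING PARAMETERS exclude_columns='customer_id');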

New MLSUPERVISOR role

A new predefined role MLSUPERVISOR can delegate DBADMIN privileges for ML model management to other users.
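
For example, to delegate model management to a hypothetical user:

=> GRANT MLSUPERVISOR TO ml_admin;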

6.5 - Partitioning

New drop partitions (d) lock

DROP_PARTITIONS now sets an exclusive D lock on the target table. This lock is only compatible with I-lock operations, so table load operations such as INSERT and COPY are allowed during the drop operation. All other operations that require a table lock, such as UPDATE or DELETE, are blocked until the drop partition operation is complete and the D-lock is released.
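
For example, the following call holds the D lock on the target table for the duration of the drop (table name and partition keys are hypothetical):

=> SELECT DROP_PARTITIONS('store.orders', '2023-01-01', '2023-01-31');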

6.6 - Query optimization

Disabling projections

ALTER PROJECTION now supports a new DISABLE parameter, which marks a projection as unavailable to the optimizer.
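
For example, the following disables a projection; assuming the symmetric ENABLE keyword, the second statement makes it available to the optimizer again (the projection name is hypothetical):

=> ALTER PROJECTION orders_p1 DISABLE;
=> ALTER PROJECTION orders_p1 ENABLE;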

6.7 - Security and authentication

Authentication filtering

Users can now specify the authentication record they want to authenticate with by providing the credentials required by that record. Previously, users could only authenticate with the highest priority authentication record or the records to which it fell through. For details, see Authentication filtering.

6.8 - SQL functions and statements

LIKE support for ANY/ALL

LIKE operators now support the keywords ANY and ALL, which let you specify multiple patterns to test against a string expression. For example, the following query qualifies the case-insensitive ILIKE with ALL to find all customer names that contain the strings media and ever:

=> SELECT DISTINCT (customer_name) FROM customer_dimension
     WHERE customer_name ILIKE ALL ('%media%','%ever%') ORDER BY customer_name;
 customer_name
---------------
 Evermedia
(1 row)

For details, see LIKE ANY/ALL Usage.