1 - Containers and Kubernetes

Vertica on Kubernetes supports OpenShift, a hybrid cloud platform that adds security features and additional support to Kubernetes.

Red Hat OpenShift support

Vertica on Kubernetes supports OpenShift, a hybrid cloud platform that adds security features and additional support to Kubernetes. The Vertica DB operator is available in the OpenShift OperatorHub as a community-based operator that supports OpenShift versions 4.8 and higher.

For details, see Red Hat OpenShift integration.

Vertica server online upgrade

You can now upgrade the Vertica server version without taking your cluster offline. Set the new upgradePolicy custom resource parameter to upgrade one subcluster at a time without interrupting database activity.

For details, see Upgrading Vertica on Kubernetes.
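An online upgrade might be triggered by a custom resource change like the following sketch. The field placement and the version string are illustrative, assuming the VerticaDB CRD exposes upgradePolicy at the top level of spec:

```yaml
apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb-sample
spec:
  # Online upgrades one subcluster at a time instead of stopping the cluster.
  upgradePolicy: Online
  # Changing the image triggers the upgrade (version tag is illustrative).
  image: "vertica/vertica-k8s:11.1.0-0"
```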

Managed Kubernetes services support

Vertica supports managed Kubernetes services on Google Kubernetes Engine (GKE).

Improved logging with helm chart installs

Vertica DB operators that were installed with Helm have additional logging features. You can specify whether the operator sends logs to standard output, or to a file in the operator pod filesystem. In addition, you can set logging levels with new Helm chart parameters.

For details about the new logging Helm chart parameters, see Helm chart parameters.
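As a sketch, the logging behavior could be set through chart values at install or upgrade time. The parameter names below are illustrative placeholders, not confirmed keys; see Helm chart parameters for the actual names:

```yaml
# overrides.yaml -- hypothetical logging values for the operator chart.
logging:
  # An empty path sends logs to standard output; a path writes to a file
  # in the operator pod filesystem.
  filePath: /logs/operator.log
  level: debug
```

The file would then be passed at install or upgrade time, for example with helm upgrade -f overrides.yaml.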

Add environment variables to the Vertica server container

Use annotations in the custom resource to add environment variables in the Vertica server container. For details, see Custom resource definition parameters.
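For illustration, an annotation on the custom resource might surface as an environment variable inside the server container. The annotation key below is a made-up example, and its placement under metadata is an assumption; see Custom resource definition parameters for the exact usage:

```yaml
apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb-sample
  annotations:
    # Hypothetical key, exposed as an environment variable in the
    # Vertica server container.
    vertica.com/git-ref: main
```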

2 - Data types

The JSON and Avro parsers now support complex types with strong typing.

JSON and Avro complex types

The JSON and Avro parsers now support complex types with strong typing. This support is in addition to flexible complex types that use a VMap (LONG VARBINARY) column to hold complex types. For details, see JSON data and Avro data.

This change does not include KafkaAvroParser.
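For example, strongly typed complex columns might be declared directly in an external table definition. The table, columns, and path are hypothetical:

```sql
-- Strongly typed ROW and ARRAY columns parsed from JSON files.
CREATE EXTERNAL TABLE customers (
    name VARCHAR,
    address ROW(street VARCHAR, city VARCHAR, zip VARCHAR),
    orders ARRAY[INT]
) AS COPY FROM '/data/customers/*.json' PARSER FJSONPARSER();
```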

Casting complex types

Vertica now supports casting structs (the ROW type) and non-native arrays. When casting a ROW, you can change the names of its fields. When casting an array, you can change its bounds. When casting to a bounded native array, inputs that are too long are truncated. When casting to a non-native array (an array containing complex types, including other arrays), the cast fails if the new bounds are too small for the data.
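These casts might look like the following sketch, assuming the ROW and ARRAY constructors and the bounded-array type syntax shown here:

```sql
-- Rename the fields of a ROW by casting to a new ROW type.
SELECT CAST(ROW('76 Main St', 'Cambridge') AS ROW(street VARCHAR, city VARCHAR));

-- Cast to a bounded native array; inputs that are too long are truncated.
SELECT CAST(ARRAY[1, 2, 3, 4] AS ARRAY[INT, 2]);
```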

3 - Loading data

The Parquet parser now supports additional Parquet file formats.

Additional Parquet file formats

The Parquet parser now supports the following:

  • Files written using the DATA_PAGE_V2 page type.

  • Files using the DELTA_BINARY_PACKED encoding.
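Loading such files requires no new syntax; they go through the existing parser. A hypothetical load:

```sql
-- Files using DATA_PAGE_V2 pages or DELTA_BINARY_PACKED encoding
-- load with the same COPY syntax as before (path is hypothetical).
COPY sales FROM '/data/sales/*.parquet' PARQUET;
```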

4 - Machine learning

The TensorFlow integration adds support for additional INT and FLOAT data types.

TensorFlow: additional support for INT and FLOAT types

Previous versions of Vertica supported the following data types:

  • Input: TF_FLOAT (default), TF_DOUBLE

  • Output: TF_FLOAT (default), TF_INT64

Vertica 11.1 adds support for the following data types:

  • Input: TF_INT8, TF_INT16, TF_INT32, TF_INT64

  • Output: TF_DOUBLE, TF_INT8, TF_INT16, TF_INT32

For details, see tf_model_desc.json overview.
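An input description using one of the newly supported types might look like this fragment; the field layout is a sketch, so see tf_model_desc.json overview for the authoritative schema:

```json
{
  "input_desc": [
    {
      "op_name": "input_1",
      "tensor_map": [
        { "name": "feature", "type": "TF_INT32", "dim": [1] }
      ]
    }
  ]
}
```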

5 - SDK updates

A new API allows a UDx written in C++ to report errors, warnings, and other messages to the client.

Reporting errors and warnings (C++)

A new API allows a UDx written in C++ to report errors, warnings, and other messages to the client. A message can contain a details string and a hint, like the messages that Vertica itself reports. See Sending messages.

6 - Security and authentication

The new PasswordLockTimeUnit security parameter lets you specify the time units for which an account is locked after a certain number of FAILED_LOGIN_ATTEMPTS.

PasswordLockTimeUnit

The new PasswordLockTimeUnit security parameter lets you specify the time units for which an account is locked after a certain number of FAILED_LOGIN_ATTEMPTS. The number of time units is controlled by the profile parameter PASSWORD_LOCK_TIME.

Account locking is disabled by default and must be configured manually by creating or altering a profile. For details, see Account locking.
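For example, locking accounts for 30 minutes after three failed attempts might be configured as follows. The profile name, the SET PARAMETER form, and the 'minute' unit value are assumptions; see Account locking for the exact syntax:

```sql
-- Interpret PASSWORD_LOCK_TIME as minutes rather than days (unit value
-- is an assumption).
ALTER DATABASE DEFAULT SET PARAMETER PasswordLockTimeUnit = 'minute';

-- Lock an account for 30 minutes after 3 failed login attempts.
CREATE PROFILE lock_profile LIMIT
    FAILED_LOGIN_ATTEMPTS 3
    PASSWORD_LOCK_TIME 30;
```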

OAuth 2.0 support

You can now create and manage OAuth 2.0 authentication records and allow ODBC and JDBC clients to verify with an identity provider and then authenticate to Vertica with an access token (rather than a username and password).
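Setting this up might look like the following sketch; the method name and parameter keys are assumptions, so check the OAuth 2.0 authentication documentation for the exact names:

```sql
-- Sketch: create an OAuth authentication record and grant it to a user.
CREATE AUTHENTICATION v_oauth METHOD 'oauth' HOST TCP '0.0.0.0/0';
ALTER AUTHENTICATION v_oauth SET
    client_id = 'vertica',
    client_secret = '<secret>',
    discovery_url = 'https://idp.example.com/.well-known/openid-configuration';
GRANT AUTHENTICATION v_oauth TO dbuser;
```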

7 - SQL functions and statements

The INFER_EXTERNAL_TABLE_DDL function now supports input files in the Avro format, in addition to Parquet and ORC.

INFER_EXTERNAL_TABLE_DDL supports Avro format

The INFER_EXTERNAL_TABLE_DDL function now supports input files in the Avro format, in addition to Parquet and ORC.
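For example (the path and table name are hypothetical):

```sql
-- Generate a CREATE EXTERNAL TABLE statement from Avro files.
SELECT INFER_EXTERNAL_TABLE_DDL('/data/events/*.avro'
    USING PARAMETERS format = 'avro', table_name = 'events');
```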

ALTER USER for subcluster-specific resource pool

The ALTER USER statement now supports assigning a subcluster-specific resource pool to a user. For example:

=> ALTER USER user RESOURCE POOL resource-pool FOR SUBCLUSTER subcluster-name;

8 - Stored procedures

Stored procedures that use SECURITY DEFINER now execute with the definer's user-level parameters.

Execute with definer's user-level parameters

Stored procedures that use SECURITY DEFINER now execute with the definer's user-level parameters rather than the caller's.
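A definer's-rights procedure might be declared as in this sketch; the procedure name and body are illustrative:

```sql
-- Sketch: runs with the definer's privileges and user-level parameters,
-- not the caller's.
CREATE OR REPLACE PROCEDURE purge_staging()
LANGUAGE PLvSQL
SECURITY DEFINER
AS $$
BEGIN
    EXECUTE 'TRUNCATE TABLE audit_staging';
END;
$$;
```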

9 - System tables

The NODES system table contains a new column named BUILD_INFO.

New BUILD_INFO column in NODES table

The NODES system table contains a new column named BUILD_INFO. This column contains the version of the Vertica server binary that is running on the node.
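For example, to check which server build each node is running:

```sql
SELECT node_name, node_state, build_info FROM nodes;
```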

New status for NODE_STATE column in NODES and NODE_STATES tables

The NODE_STATE column in the NODES and NODE_STATES tables has a new status named UNKNOWN. A node can be in this state when Vertica identifies it as part of the cluster but cannot communicate with it to determine its current state.
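To find nodes in this state, a query like the following sketch could be used:

```sql
SELECT node_name, node_state FROM nodes WHERE node_state = 'UNKNOWN';
```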