New features

This guide briefly describes the new features introduced in the most recent releases of Vertica and provides references to detailed information in the documentation set.

For known and fixed issues in the most recent release, see the Vertica Release Notes.

1 - New and changed in Vertica 12.0.4

1.1 - admintools

Setting re_ip timeout

You can configure how long re_ip executes a given task before it times out, by editing the setting of prepare_timeout_sec in admintools.conf. By default, this parameter is set to 7200 (seconds).
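
As a sketch, the setting lives in admintools.conf; the section shown here is an assumption, so keep the parameter wherever your file already defines re_ip options:

# admintools.conf -- raise the re_ip task timeout from the 7200-second default
[Configuration]
prepare_timeout_sec = 14400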

1.2 - Client connectivity

Set idle timeouts during connection initialization

You can set the IdleTimeoutInitializingConnectionsMs configuration parameter to change the length of time that the server waits for an idle client during each step in connection initialization, before a session is created.
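
For example, a minimal sketch of setting the parameter at the database level (the timeout value is illustrative):

=> ALTER DATABASE DEFAULT SET IdleTimeoutInitializingConnectionsMs = 10000;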

Client hostnames

The hostnames of clients (as reported by their operating systems) are now recorded in the CLIENT_OS_HOSTNAME column of several session-related system tables, such as SESSIONS.

1.3 - Client drivers

ODBC and vsql: support for M1 and M2 processors

The ODBC and vsql client drivers now support Apple M1 and M2 processors. For details, see Client drivers support.

1.4 - Complex types

UNNEST function

The new UNNEST function expands one or more array columns. UNNEST is similar to EXPLODE, with two key differences:

  • UNNEST returns element values but not positions.

  • UNNEST automatically expands all arrays that are passed to it.
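
As a minimal sketch, using the tests table from the CROSS JOIN EXPLODE example below (output column naming follows the UNNEST reference):

=> SELECT student, UNNEST(scores) FROM tests;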

EXPLODE performance optimization

The EXPLODE function translates a single row of input into multiple rows of output, one per collection element. There is, therefore, usually no benefit to partitioning the input. Previously, a query using EXPLODE required an OVER clause and Vertica partitioned the input before calling EXPLODE. Now, you can skip this partitioning step by using the skip_partitioning function parameter and omitting the OVER clause.

CROSS JOIN EXPLODE

You can use the output of EXPLODE as if it were a relation by using CROSS JOIN EXPLODE in a query, as in the following example:

=> ALTER SESSION SET UDPARAMETER FOR ComplexTypesLib skip_partitioning = true;

=> SELECT student, score FROM tests
   CROSS JOIN EXPLODE(scores) AS t (pos, score)
   ORDER BY score DESC;
 student | score
---------+-------
 Sam     |    98
 Sam     |    97
 Bob     |    92
 Tom     |    91
 Sam     |    85
 Tom     |    82
 Bob     |    79
 Bob     |    78
 Tom     |    75
 Tom     |    68
(10 rows)

For details, see Joining on Results.

1.5 - Containers and Kubernetes

Server-side encryption (SSE)

Vertica on Kubernetes supports all three types of S3 SSE:

  • SSE-S3
  • SSE-KMS
  • SSE-C

For details, see Configuring communal storage and Custom resource definition parameters.

Set configuration parameters

You can set Vertica server configuration parameters with the communal.additionalConfig parameter.
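
A minimal sketch of the corresponding CR stanza; the parameter/value pair is illustrative:

spec:
  communal:
    additionalConfig:
      AWSEnableHttps: "1"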

For details, see Custom resource definition parameters.

Set depot volume type

You can store the depot in an emptyDir volume with the local.depotVolume parameter. When you use an emptyDir volume, the local.storageClass and local.requestSize parameters only apply to data stored in PersistentVolumes.
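
A minimal sketch of the CR stanza, using the EmptyDir volume type:

spec:
  local:
    depotVolume: EmptyDir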

For details about the custom resource parameters, see Custom resource definition parameters. For details about volume types, see the Kubernetes documentation.

EventTrigger custom resource definition (CRD)

The EventTrigger CRD extends the Kubernetes Job workload resource to watch a VerticaDB custom resource (CR) for a state change. When a condition on the VerticaDB CR changes to a specified status, the EventTrigger runs a Job.
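
A rough sketch of the shape of an EventTrigger; the condition type, object names, and Job contents are illustrative and should be checked against the EventTrigger CRD documentation:

apiVersion: vertica.com/v1beta1
kind: EventTrigger
metadata:
  name: upgrade-watcher
spec:
  references:
    - object:
        apiVersion: vertica.com/v1beta1
        kind: VerticaDB
        name: example-vdb
  matches:
    - condition:
        type: UpgradeInProgress   # illustrative condition type
        status: "False"
  template:
    metadata:
      generateName: notify-
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: notify
              image: busybox
              command: ["echo", "VerticaDB condition changed"]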

For details, see EventTrigger CRD.

Use underscore character in subcluster name

You can now use the underscore character ("_") in the subcluster name. For details about subcluster naming conventions, see Subclusters on Kubernetes.

Vertica image with no private keys

Vertica on Kubernetes provides an image that does not contain static SSH keys to authenticate internal communication between the pods. This requires that you provide SSH keys to the custom resource (CR). You can conceal your SSH keys in a Secret, and pass that Secret to the CR with the sshSecret field.

For details about the image and passing SSH keys to the CR, see the Vertica on Kubernetes documentation.

Custom name label

You can customize your VerticaDB CRD application's name with the app.kubernetes.io/name label:

spec:
  labels:
    app.kubernetes.io/name: app-name

If you do not add a custom name label, then the default value is vertica. For additional details about the name label, see the Kubernetes documentation.

Delete PVCs after VerticaDB delete

When you delete a VerticaDB custom resource, the VerticaDB operator deletes any PVCs that are associated with that instance. For details about Vertica on Kubernetes persistent storage, see Containerized Vertica on Kubernetes.

1.6 - Database management

Amazon simple notification service (SNS) notifiers

You can now send messages to SNS topics with notifiers. For details, see Configuring reporting for the simple notification service (SNS).
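
As a hedged sketch, creating an SNS notifier might look like the following; the ACTION string and any topic or credential parameters should be taken from the SNS reporting documentation:

=> CREATE NOTIFIER sns_notifier ACTION 'sns' MAXMEMORYSIZE '10M';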

1.7 - dc_catalog_truncations improvements

DC_CATALOG_TRUNCATIONS Improvements

After a cluster restart (normal or after a crash), there is no indication of how many transactions were rolled back by the system. The new columns epoch_before_truncation and epoch_after_truncation have been added to the dc_catalog_truncations table to track where the workload needs to be restarted to catch up on the missing transactions.

1.8 - Directed queries

INSERT...SELECT hints

You can now annotate the query of an INSERT...SELECT statement with directed query hints.
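
For example, a sketch that annotates the SELECT portion with a hint; the table names and the particular hint are illustrative, and supported hints are listed in the directed queries documentation:

=> INSERT INTO sales_archive
   SELECT /*+syntactic_join*/ s.order_id, s.amount
   FROM sales s JOIN stores st ON s.store_id = st.id;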

1.9 - Documentation updates

Data exploration

When data is not well-specified in a schema, developing a table definition so you can load the data can be challenging. Vertica provides tools you can use to explore data and refine a table definition. These tools and processes are now described in a new top-level section of the documentation, Data exploration.

1.10 - Eon Mode

Subcluster sandboxing

Sandboxing enables you to spin off a secondary subcluster from an existing cluster, resulting in two mutually isolated clusters that share the same data but do not interfere with each other. After sandbox creation, you can perform standard database operations and queries, such as creating new tables or loading libraries, without affecting the other cluster. For example, dropping a table in the sandboxed subcluster does not drop the table in the main cluster, and vice versa.

Sandboxes support many use cases, including the following:

  • Testing a new version of Vertica without needing to spin up a new cluster and reload data.

  • Trying out features without compromising the consistency of the main cluster.

  • Sharing data with another team by giving them access to a sandboxed subcluster.

After removing the sandbox and performing necessary cleanup tasks, the subcluster can rejoin the main cluster.

1.11 - Apache Kafka integration

kafka_conf_secret parameter

The kafka_conf_secret parameter conceals sensitive data that you must pass to rdkafka to configure the Vertica and Kafka integration. Vertica does not log or store values that you pass in kafka_conf_secret.

This option is available on tools and functions that use the kafka_conf parameter.

For details, see Directly setting Kafka library options and Apache Kafka integration.

1.12 - Loading data

Optimization for external tables on object stores

When an external table uses data that is stored on an object store (S3, GCS, or Azure), and that data has many levels of partitioning, performance can be affected. The ObjectStoreGlobStrategy configuration parameter allows you to read and prune partitions using a different strategy, improving performance when queries are selective and there are many partition directory levels. With this setting, fewer calls are made to the object store API. For details, see Partitions on Object Stores.

Iceberg support

You can now define external tables backed by Parquet data stored by Apache Iceberg. An Iceberg table consists of data files and metadata describing the schema. Unlike other external tables, an Iceberg external table need not specify column definitions (DDL). The information is read from Iceberg metadata at query time. For certain data types you can adjust column definitions, for example to specify VARCHAR sizes. See CREATE EXTERNAL TABLE ICEBERG.
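
A minimal sketch; the path is illustrative, and the LOCATION requirements and optional column-type adjustments are covered in CREATE EXTERNAL TABLE ICEBERG:

=> CREATE EXTERNAL TABLE sales ICEBERG LOCATION 's3://mybucket/path/to/iceberg/table';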

1.13 - Machine learning

Model versioning

Model versioning provides an infrastructure to track and manage the status of registered models in a database. The versioning infrastructure supports a collaborative environment where multiple users can submit candidate models for individual applications. dbadmin and users with the MLSUPERVISOR role can manage and change the status of all registered models, including which models are currently in production.

For details about the model versioning environment and an in-depth example, see Model versioning.
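
As a hedged illustration, registering a trained model under an application name might look like this (arguments are illustrative; confirm the signature in the REGISTER_MODEL reference):

=> SELECT REGISTER_MODEL('my_linear_reg', 'sales_forecaster');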

Poisson regression

Vertica now supports the Poisson regression algorithm to model count data. Poisson regression offers an alternative to linear regression or logistic regression and is useful when your desired prediction range is non-negative real numbers or integers.
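
A minimal training sketch, following the pattern of the other regression functions; relation and column names are illustrative:

=> SELECT POISSON_REG('poisson_model', 'claims_data', 'num_claims', 'age, region');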

Autoregressive integrated moving average (ARIMA) support

Vertica now supports autoregressive integrated moving average (ARIMA) models for time series analytics. ARIMA models combine the abilities of AUTOREGRESSOR and MOVING_AVERAGE models by making future predictions based on both preceding time series values and errors of previous predictions. You can use the ARIMA function to create and train an ARIMA model and the PREDICT_ARIMA function to make predictions.
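
A minimal training sketch; names and the p/d/q values are illustrative:

=> SELECT ARIMA('arima_temp', 'temp_data', 'temperature', 'time' USING PARAMETERS p=2, d=1, q=2);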

For an in-depth example that trains and makes predictions with an ARIMA model, see ARIMA model example.

1.14 - Partitioning

Window partitioning for 1:N transform functions using PARTITION ROW

Previously, window partitioning could only be used for 1:1 analytic functions. Now, the PARTITION ROW syntax can be used for window partitioning 1:N transform functions.
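
For example, a sketch that calls the 1:N EXPLODE function with PARTITION ROW, using the tests table from the earlier EXPLODE example:

=> SELECT student, EXPLODE(scores) OVER (PARTITION ROW) FROM tests;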

1.15 - Performance improvements

Refreshing partitioned projections

In previous releases, new and updated projections were not available to queries before they were completely refreshed with all table data. This delayed availability of projections to process query requests.

Now, when you refresh the projections of a partitioned table—for example, a new projection of a table with existing data—the refresh operation first loads data from the partition with the highest range of keys. After this partition is refreshed, Vertica begins to refresh the partition with the next-highest range of keys, and so on, until all projection partitions are refreshed. While the refresh operation is in progress, projection partitions that have completed the refresh process become available to process query requests.

1.16 - Resource management

Default user resource pool

The new configuration parameter DefaultResourcePoolForUsers lets you configure the default user resource pool that is assigned to new users, including LDAP users. For details, see User resource allocation.
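
For example (the pool name is illustrative):

=> ALTER DATABASE DEFAULT SET DefaultResourcePoolForUsers = 'analyst_pool';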

1.17 - SDK updates

One-to-many UDTFs and PARTITION ROW

Single-phase user-defined transform functions (UDTFs) written in C++ and Python now support the isExploder class property, which indicates whether the UDTF performs a one-to-many transform. If this property is set to True within the getTransformFunctionProperties factory method, the function uses an OVER(PARTITION ROW) clause by default and thereby removes the need to specify an OVER clause when calling the UDTF. Users can also include any expression in SELECT queries that call a one-to-many UDTF.

For more information about this class property and one-to-many UDTFs, see Partitioning options for UDTFs and TransformFunctionFactory class. See Python example: explode for an in-depth example detailing a one-to-many UDTF.

1.18 - Security and authentication

OAuth

JSON web token validation

Vertica can now validate OAuth tokens without contacting the identity provider by verifying the signer of the OAuth token. To enable this feature, create an authentication record that uses 'oauth' and set the authentication parameters to use JWT validation. For details, see Configuring OAuth authentication.

Just-in-time user provisioning

Just-in-time (JIT) user provisioning is the act of automatically configuring an authenticated user and their roles based on information provided by the identity provider. When a client authenticates to Vertica with an OAuth token and the authentication record enables JIT user provisioning, Vertica automatically creates the user (if the user does not already exist in Vertica). Additionally, if Keycloak is the identity provider, Vertica grants the users their roles as specified by Keycloak.

For example, suppose a client presents an OAuth token to authenticate as user Alice with role director, and Alice does not exist in Vertica. Vertica automatically creates the user Alice and grants her the authentication record. Because this configuration uses Keycloak as the identity provider, Vertica also grants her the director role as a default role.

For details, see Just-in-time user provisioning.

JDBC: custom truststore

You can use a custom truststore to secure the connection between the JDBC client and the identity provider. This is useful if your identity provider's certificate is not signed by a well-known certificate authority.

The path to the truststore and its password are specified with the following connection properties:

  • oauthtruststorepath

  • oauthtruststorepassword

If oauthtruststorepath is not specified, the JDBC client uses the default truststore.
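
A minimal Java sketch using the properties named above; the host, database, and truststore values are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class OAuthTruststoreExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Point the JDBC client at a custom truststore for identity provider connections
        props.setProperty("oauthtruststorepath", "/etc/ssl/idp-truststore.jks");
        props.setProperty("oauthtruststorepassword", "changeit");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:vertica://vertica-host:5433/mydb", props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}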

For details on this and other connection properties, see JDBC connection properties. For details on configuring OAuth, see Configuring OAuth authentication.

1.19 - Stored procedures

Scheduled execution

You can now schedule the automatic execution of stored procedures. This can be used to automate various tasks, like logging database activity, revoking privileges, or creating roles. For details, see Scheduled execution.
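
As a hedged sketch of the workflow (object names and the CRON string are illustrative; see Scheduled execution for the exact syntax):

=> CREATE SCHEDULE nightly USING CRON '0 2 * * *';
=> CREATE TRIGGER log_activity_trigger ON SCHEDULE nightly
   EXECUTE PROCEDURE log_activity() AS DEFINER;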

Viewing overloaded stored procedures

You can now view the source code of a particular implementation of an overloaded stored procedure with EXPORT_OBJECTS. For details, see Exporting objects.

1.20 - Upgrade

New features for upgrade.

Upgrade to any Vertica version higher than your current version

Previously, Vertica upgrades were incremental, requiring you to upgrade to each major and minor Vertica version in your upgrade path. You can now skip intermediate versions and upgrade directly to any version of Vertica higher than your current version.

1.21 - User-defined extensions

Redactable user-defined extension and session parameters

User-defined extension and session parameters that meet any of the following criteria are automatically redacted from logs and system tables like QUERY_REQUESTS:

  • Are named "secret" or "password"

  • End with "_secret" or "_password"
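
For example, using the ALTER SESSION pattern shown earlier in this guide, a parameter that ends with "_password" is redacted; the library and parameter names are hypothetical:

=> ALTER SESSION SET UDPARAMETER FOR MyLib api_password = 's3cret';
-- The value is redacted in logs and in system tables such as QUERY_REQUESTS.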

3 - New and changed in Vertica 12.0.3

3.1 - Client connectivity

Client label environment variable

You can now set a label for the vsql client with the VSQL_CLIENT_LABEL environment variable. For details, see vsql environment variables.
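
For example (the label value is illustrative):

$ export VSQL_CLIENT_LABEL=nightly_etl
$ vsql -h vertica-host -U dbadmin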

3.2 - Containers and Kubernetes

TLS certificate authentication to prometheus metrics

In some environments, you might need to access Prometheus metrics from an external client. Vertica provides the prometheus.tlsSecret Helm chart parameter to configure a role-based access control (RBAC) proxy sidecar to authenticate requests with user-provided TLS certificates.

For details, see Helm chart parameters and Prometheus integration.

Set readiness probe on container

The readinessProbeOverride custom resource definition (CRD) parameter overrides settings for the default readiness probe so that you can fine-tune when the Vertica pod is ready to accept traffic.
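
A minimal sketch of the CR stanza; the probe values are illustrative and follow the standard Kubernetes probe schema:

spec:
  readinessProbeOverride:
    initialDelaySeconds: 10
    periodSeconds: 5
    failureThreshold: 6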

For details, see Custom resource definition parameters.

IRSA profile authentication to Amazon EKS

You can authenticate to Amazon Elastic Kubernetes Service (EKS) with IAM roles for service accounts (IRSA). For details, see Configuring communal storage.

Operator scheduling rules for helm chart

You can control which node the operator pod is scheduled on with the following Helm chart parameters:

  • affinity

  • nodeSelector

  • priorityClassName

  • tolerations

For details, see Helm chart parameters.

Set liveness and startup probes on container

The livenessProbeOverride and startupProbeOverride custom resource definition (CRD) parameters override settings for the corresponding default probes. These parameters fine-tune how the container and the Vertica process within the container indicate their state to other objects in the StatefulSet.

For details, see Custom resource definition parameters.

Override pod-level security context

The podSecurityContext custom resource definition (CRD) parameter can elevate pod-level privileges so that you can perform privileged actions, such as setting sysctl commands in the pod.
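
For example, a sketch that sets a namespaced sysctl; the sysctl and value are illustrative:

spec:
  podSecurityContext:
    sysctls:
      - name: net.ipv4.tcp_keepalive_time
        value: "45"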

For details, see Custom resource definition parameters. For additional details about pod-level privileges, see the Kubernetes documentation.

3.3 - Machine learning

Extract predictor importance of XGBoost models

The new XGB_PREDICTOR_IMPORTANCE function measures the importance of the predictors in XGB_CLASSIFIER and XGB_REGRESSOR models.
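
For example (the model name is illustrative):

=> SELECT XGB_PREDICTOR_IMPORTANCE(USING PARAMETERS model_name = 'xgb_sales');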

K-prototypes

You can now use the k-prototypes clustering algorithm to cluster mixed data into different groups based on similarities between data points.
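
A hedged training sketch, assuming the function follows the KMEANS-style signature; names and the cluster count are illustrative:

=> SELECT KPROTOTYPES('kproto_model', 'customers', '*', 4
       USING PARAMETERS exclude_columns='customer_id');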

New MLSUPERVISOR role

The new predefined role MLSUPERVISOR allows the DBADMIN to delegate privileges for ML model management to other users.

3.4 - Partitioning

New drop partitions (d) lock

DROP_PARTITIONS now sets an exclusive D lock on the target table. This lock is only compatible with I-lock operations, so table load operations such as INSERT and COPY are allowed during the drop operation. All other operations that require a table lock, such as UPDATE or DELETE, are blocked until the drop partition operation is complete and the D-lock is released.

3.5 - Query optimization

Disabling projections

ALTER PROJECTION now supports a new DISABLE parameter, which marks a projection as unavailable to the optimizer.
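
For example (the projection name is illustrative):

=> ALTER PROJECTION public.sales_super DISABLE;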

3.6 - Security and authentication

Authentication filtering

Users can now specify the authentication record they want to authenticate with by providing the credentials required by that record. Previously, users could only authenticate with the highest priority authentication record or the records to which it fell through. For details, see Authentication filtering.

3.7 - SQL functions and statements

LIKE support for ANY/ALL

LIKE operators now support the keywords ANY and ALL, which let you specify multiple patterns to test against a string expression. For example, the following query qualifies the case-insensitive ILIKE with ALL to find all customer names that contain the strings media and ever:

=> SELECT DISTINCT (customer_name) FROM customer_dimension
     WHERE customer_name ILIKE ALL ('%media%','%ever%') ORDER BY customer_name;
 customer_name
---------------
 Evermedia
(1 row)

For details, see LIKE ANY/ALL Usage.

4 - New and changed in Vertica 12.0.2

4.1 - Client connectivity

Client labels

You can now set the client label for vsql connections at the start of the session with the --label option. Client sessions and their labels are listed in the SESSIONS system table. This option supplements the existing SET_CLIENT_LABEL function, which sets the client label for existing sessions.

JDBC database name

The database name parameter in DriverManager.getConnection() is now optional.

4.2 - Client drivers

OAuth: disable hostname validation

You can now disable hostname validation of the OAuth identity provider with the boolean oauthvalidatehostname in OAuthJsonConfig.

OAuthValidateHostname is set to true by default, requiring the identity provider's certificate to list the IP address or hostname of the identity provider. For details, see JDBC connection properties and ODBC DSN connection properties.

ODBC request timeout

You can now specify the timeout for requests sent by ODBC clients with the ConnectionTimeout parameter. For details on this and other parameters, see ODBC DSN connection properties.
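
A minimal odbc.ini sketch; the driver path and timeout value are illustrative:

[VerticaDSN]
Description = Vertica DSN
Driver = /opt/vertica/lib64/libverticaodbc.so
ConnectionTimeout = 30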

4.3 - Complex types

ARRAY_FIND and CONTAINS can now search for elements that satisfy a predicate by using Lambda functions. The new FILTER function applies a predicate to an array and returns an array containing only elements that satisfy the predicate. See Searching and Filtering.
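
For example (the arrays and predicates are illustrative):

=> SELECT CONTAINS(ARRAY[1,2,3,4], x -> x > 3);
=> SELECT FILTER(ARRAY[-1, 2, -3, 4], x -> x > 0);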

Parquet export

EXPORT TO PARQUET now supports the ARRAY and ROW types in any combination.

4.4 - Containers and Kubernetes

IAM profile authentication to Amazon EKS

You can authenticate to Amazon Elastic Kubernetes Service (EKS) with an Identity and Access Management (IAM) profile. For details, see Configuring communal storage.

TLS certificate management for helm charts

The VerticaDB operator can generate and manage TLS certificates with the webhook.certSource Helm chart parameter. The operator can generate certificates internally, or it can manage existing certificates.

For details about setting the webhook.certSource parameter, see Helm chart parameters.

For details about managing TLS certificates for Helm charts, see Installing the Vertica DB operator.

CA bundle management for helm charts

You can include your certificate authority (CA) bundle in the Secret provided to the webhook.tlsSecret Helm chart parameter. This parameter stores the CA bundle with its TLS certificate and keys.

Previously, you had to use the webhook.caBundle parameter to provide access to the CA bundle. This parameter is now deprecated.

For details, see Helm chart parameters.

Custom catalog path

You can create a custom catalog path in the container filesystem to store the catalog in a different location than the local data. Set the local.catalogPath Custom Resource Definition (CRD) parameter to define the custom catalog location.

For details, see Custom resource definition parameters.

Role and RoleBinding creation

The skipRoleAndRoleBindingCreation Helm chart parameter determines whether the Helm chart creates Roles and RoleBindings when you install the Helm chart. When set to true, you can install the Helm chart as a user that does not have privileges to create Roles or RoleBindings.

For details about the parameter, see Helm chart parameters.

For details about granting operator privileges, see Installing the Vertica DB operator.

4.5 - Documentation updates

New setup section

Documentation about setting up Vertica in on-premises and cloud environments has been combined into a new Setup section.

Information about setting up Vertica in Kubernetes remains in Containerized Vertica.

4.6 - Eon Mode

RESHARDING_EVENTS system table

The RESHARDING_EVENTS system table provides information about historic and ongoing resharding operations, such as the current status of a resharding process and the timestamps for resharding events.

4.7 - Installation and upgrade

Install binary packages in parallel

The new --parallel-no-prompts flag tells the install script to install the Vertica binaries on hosts in parallel rather than one at a time. Using this flag reduces the time it takes to install Vertica, especially in large clusters. See --parallel-no-prompts.
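
For example (hosts and RPM name are illustrative):

$ sudo /opt/vertica/sbin/install_vertica --hosts 192.0.2.10,192.0.2.11,192.0.2.12 \
      --rpm vertica-12.0.2.x86_64.rpm --parallel-no-prompts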

4.8 - Licensing and auditing

Improved handling of AutoPass licenses

Vertica now rejects AutoPass licenses on builds that do not include the AutoPass License Server, and returns an error code and notice that details why the license failed to parse.

4.9 - Loading data

Warning on unmatched data fields

The JSON and Avro parsers now produce warnings if the data being loaded contains fields that are not part of the table definition. New fields are logged in the UDX_EVENTS table and can be added to the target table definition using ALTER TABLE.
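
For example, after spotting a new field in UDX_EVENTS, you might extend the table; names and the type are illustrative:

=> ALTER TABLE target_table ADD COLUMN new_field VARCHAR(80);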

For more information, see Schema evolution and the parser reference pages, FJSONPARSER and FAVROPARSER.

4.10 - Machine learning

Autoregression models support the yule-walker algorithm

You can now train autoregression models with the Yule-Walker algorithm. The AUTOREGRESSOR training function provides a method parameter by which you can specify either the Ordinary Least Squares or Yule-Walker training algorithm.
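
A hedged sketch; the method value string should be checked against the AUTOREGRESSOR reference, and other names are illustrative:

=> SELECT AUTOREGRESSOR('ar_model', 'temp_data', 'temperature', 'time'
       USING PARAMETERS p=3, method='yule-walker');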

Jaro-Winkler distance

You can now calculate the Jaro Distance and Jaro-Winkler Distance between two strings.
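
For example, assuming the two-argument form of these functions:

=> SELECT JARO_DISTANCE('martha', 'marhta');
=> SELECT JARO_WINKLER_DISTANCE('martha', 'marhta');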

4.11 - Management Console

Localize user interface text

You can localize user interface (UI) text using language files in /opt/vconsole/temp/webapp/resources/i18n/lang. The language files include the locales.json file, a list of supported languages, and a resource bundle, a directory that stores files that contain UI text strings that you can translate. By default, Vertica provides resource bundles for the following:

  • American English

  • Simplified Chinese

For details, see Localizing user interface text.

Support for r6i and c6i instance types on AWS

Management Console supports the following c6i and r6i EBS instance types on Amazon Web Services (AWS) for both Eon and Enterprise Mode:

  • c6i.4xlarge

  • c6i.8xlarge

  • c6i.12xlarge

  • c6i.16xlarge

  • r6i.4xlarge

  • r6i.8xlarge

  • r6i.12xlarge

  • r6i.16xlarge

You can select these instance types when you perform any of the following operations with Management Console:

  • Provision a database

  • Revive a database

  • Scale up a database

  • Add a new subcluster

For details about EBS volume defaults, see Eon Mode volume configuration defaults for AWS and Enterprise Mode volume configuration defaults for AWS.

Scale down and terminate subclusters with stopped nodes

You can scale down or terminate subclusters with stopped nodes in the following cloud environments:

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

  • Microsoft Azure

For details, see Scaling subclusters in MC and Terminating subclusters in MC.

4.12 - Query optimization

Interpolate next value

Previously, event series joins allowed you to directly compare values from two series using INTERPOLATE PREVIOUS. This feature has now been expanded, allowing you to use INTERPOLATE NEXT as well.
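
For example, a sketch of an event series join using the new keyword; the tables are illustrative:

=> SELECT t.stock, t.ts, b.bid FROM ticks t
   LEFT OUTER JOIN bids b
   ON t.stock = b.stock AND t.ts INTERPOLATE NEXT VALUE b.ts;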

4.13 - Security and authentication

TLS configuration support

Notifiers

You can now specify a TLS Configuration for CREATE NOTIFIER and ALTER NOTIFIER. This parameter supersedes the existing TLSMODE, CA BUNDLE, and CERTIFICATE notifier parameters.

To maintain backward compatibility, Vertica automatically generates TLS Configurations for notifiers that use the deprecated parameters.

CONNECT TO VERTICA

You can now specify a TLS Configuration for CONNECT TO VERTICA. This parameter supersedes the existing TLSMODE PREFER parameter.
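
A hedged sketch; the clause placement should be checked against the CONNECT TO VERTICA reference, and all names are illustrative:

=> CONNECT TO VERTICA remote_db USER dbadmin PASSWORD 'secret'
   ON 'remote-host', 5433 TLS CONFIGURATION remote_tls;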

4.14 - SQL functions and statements

DROP TLS CONFIGURATION

You can now drop custom TLS Configurations.
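
For example (the configuration name is illustrative):

=> DROP TLS CONFIGURATION my_tls_config;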

Inverse hyperbolic functions

Vertica now supports the inverse hyperbolic functions ASINH, ACOSH, and ATANH.

Directed query enhancements

The meta-function SAVE_PLANS lets you create multiple optimizer-generated directed queries from the most frequently executed queries. New columns have been added to the system table DIRECTED_QUERIES to store SAVE_PLANS-generated metadata.

Several directed query statements—ACTIVATE DIRECTED QUERY, DEACTIVATE DIRECTED QUERY, and DROP DIRECTED QUERY—now support WHERE clauses that can resolve to multiple directed queries. For example:

DEACTIVATE DIRECTED QUERY WHERE save_plans_version = 21;

4.15 - Vertica on the cloud

Vertica AMIs support IMDSv2

Vertica AMIs can now use the AWS Instance Metadata Service Version 2 (IMDSv2) to authenticate to AWS services, including S3.

5 - New and changed in Vertica 12.0.1

5.1 - admintools

stop_db and stop_subcluster graceful shutdown

When you run the admintools commands stop_db and stop_subcluster on an Eon Mode database, the default behavior is now to gracefully shut down the specified subclusters. Both commands provide override options that force immediate shutdown of the target subclusters.

For examples, see Stopping the database and Stopping a Subcluster.

Fast startup

The admintools command start_db has a new --fast option for starting Eon Mode databases. When invoked on an unencrypted database, Vertica uses startup information from cluster_config.json to expedite the startup process.
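
For example (the database name is illustrative):

$ admintools -t stop_db -d VMart          # graceful shutdown is now the default
$ admintools -t start_db -d VMart --fast  # expedited startup for unencrypted Eon databases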

5.2 - Client drivers

OAuthJsonConfig precedence

The connection parameters oauthjsonconfig (ODBC) and OAuthJsonConfig (JDBC) now take precedence over the deprecated OAuth parameters:

  • OAuthAccessToken/oauthaccesstoken

  • OAuthRefreshToken/oauthrefreshtoken

  • OAuthClientId/oauthclientid

  • OAuthClientSecret/oauthclientsecret

  • OAuthTokenUrl/oauthtokenurl

  • OAuthDiscoveryUrl/oauthdiscoveryurl

  • OAuthScope/oauthscope

5.3 - Complex types

Add new fields to complex type columns

Previously, Vertica did not allow you to change the fields of complex type columns in native tables. Now, you can add new fields to an existing complex type column using ALTER TABLE. See Adding a new field to a complex type column for details.
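
As a hedged sketch, adding a field by widening the column's ROW type; the syntax should be checked against Adding a new field to a complex type column, and names are illustrative:

=> ALTER TABLE people ALTER COLUMN contact
   SET DATA TYPE ROW(name VARCHAR, email VARCHAR, phone VARCHAR);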

Field aliases in ROW literals

In ROW literals, you can now name individual fields using AS. The following two statements are equivalent:

=> SELECT ROW('Amy' AS name, 2 AS id, false AS current) AS student;

=> SELECT ROW('Amy', 2, false) AS student(name, id, current);

5.4 - Containers and Kubernetes

Elevate security context privileges for core files

The securityContext custom resource definition parameter can elevate Vertica server container privileges so that you can perform privileged actions, such as create core files that contain information about the Vertica server process.

For details, see Custom resource definition parameters and Metrics gathering. For additional details about security context, see the Kubernetes documentation.

Resize persistent volumes automatically on managed Kubernetes services

When you use a storageClass that allows volume expansion and you allocate depot storage with a percentage of the total disk space, the operator automatically adjusts the size of the PV and the depot when you update spec.local.requestSize.

For details, see Custom resource definition parameters.

Override helm chart object naming convention

The nameOverride Helm chart parameter sets the naming prefix for all objects that the Helm chart creates.

For details, see Helm chart parameters.

Java 8 support in the Vertica server image

The Vertica server image includes the Java 8 OpenJDK so that you can develop Java UDx and use the Java runtime in your Vertica environment.

For details about Java UDx development, see Java SDK. For a detailed list of Vertica images, see Containerized Vertica.

5.5 - Machine learning

Isolation forest

The APPLY_IFOREST function now supports a contamination parameter that specifies the approximate ratio of data points in the training data that are labeled as outliers. The function uses this contamination value to calculate an anomaly score threshold, a minimum limit that determines whether an input data point is marked as an outlier. You can set contamination instead of setting a threshold explicitly with the threshold parameter.
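
For example, a sketch that uses contamination instead of an explicit threshold; names and the value are illustrative:

=> SELECT APPLY_IFOREST(temperature, pressure
       USING PARAMETERS model_name='iforest_model', contamination=0.1) AS anomaly
   FROM sensor_readings;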

5.6 - Management Console

Eon Mode root volume encryption on AWS

On Eon Mode on Amazon Web Services (AWS), the root volume is encrypted by default when you perform any of the following operations with Management Console:

  • Provision a database

  • Revive a database

  • Scale up a database

  • Add a new subcluster

For details, see Amazon Web Services in MC and Subclusters in MC.

Support gp3 volume type on AWS

Vertica supports the gp3 volume type as the default volume type for root and EBS volumes on Amazon Web Services (AWS). The gp3 volume type is more cost-efficient and has better performance for input/output operations per second (IOPS) than the gp2 volume type. For details, see the Amazon gp3 product overview documentation.

Vertica uses gp3 volumes as the default volume type for the following operations in both Enterprise Mode and Eon Mode:

  • Create a database

  • Revive a database

  • Add a subcluster

  • Scale up a subcluster

Additionally, the Vertica CloudFormation Template (CFT) uses the gp3 volume as the default volume type.

For more information, see the Vertica on AWS documentation.

AWS cluster creation i4i instance type support

Management Console now supports three AWS i4i instance types for database cluster creation. For configuration details, see Enterprise Mode volume configuration defaults for AWS and Eon Mode volume configuration defaults for AWS.

5.7 - Performance improvements

REFRESH_COLUMNS

Refreshing a SET USING column where SET USING is set to a constant value such as null has been optimized. Compared to previous releases, calling REFRESH_COLUMNS in REBUILD mode now performs significantly faster when it repopulates a SET USING column with constant values.

One exception applies: this optimization does not apply to SET USING subqueries such as SELECT null.
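
For reference, a sketch of the call that this optimization affects; table and column names are illustrative:

=> SELECT REFRESH_COLUMNS('public.orders', 'status', 'REBUILD');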

Backup/restore

Better manifest processing and garbage collection have significantly improved performance of vbr backup and restore operations.

5.8 - S3 object store

Server-side encryption (SSE)

Vertica now supports all three types of S3 server-side encryption: SSE-S3, SSE-KMS, and SSE-C. See S3 object store.

5.9 - Security and authentication

Custom TLS configurations

You can now create custom TLS Configurations and assign them to the following database parameters to secure different connection types:

  • ServerTLSConfig

  • HttpsTLSConfig

  • LDAPLinkTLSConfig

  • LDAPAuthTLSConfig

  • InternodeTLSConfig

For details, see Security parameters.

InternodeTLSConfig TLSMODE

You can now set the TLSMODE for internode encryption with ALTER TLS CONFIGURATION.
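
A hedged sketch; the name of the internode TLS Configuration is an assumption, so check the Data channel TLS documentation:

=> ALTER TLS CONFIGURATION data_channel TLSMODE 'VERIFY_CA';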

For details on data channel TLS, see Data channel TLS.

5.10 - SQL functions and statements

CREATE and DROP TLS configurations

You can now create and drop TLS Configurations.

REFRESH_COLUMNS updates

The REFRESH_COLUMNS function now supports a Boolean flag that specifies whether to split ROS containers if the range of partition keys to refresh spans multiple containers or part of a single container. By default, this flag is set to true: split ROS containers as needed.

5.11 - Users and privileges

Expanded cryptographic privileges

Keys, certificates, and TLS Configurations now support granting and revoking privileges.

Superusers have limited privileges on cryptographic objects that they do not own. For details, see Database Object Privileges.

Cryptographic keys and certificates

You can now grant/revoke USAGE, DROP, and ALTER privileges on cryptographic keys.

You cannot explicitly grant privileges on certificates; rather, privileges on certificates come from ownership or from privileges on the underlying cryptographic key. For details, see GRANT (key).
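
For example (key and user names are illustrative):

=> GRANT USAGE ON KEY my_key TO alice;
=> REVOKE DROP ON KEY my_key FROM alice;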

TLS configurations

You can now grant/revoke DROP and USAGE privileges on TLS Configurations.

6 - New and changed in Vertica 12.0.0

6.1 - Client connectivity

Keepalive and heartbeat protocol settings

These new parameters let you adjust TCP keepalive and heartbeat protocol settings at the database and session levels.

  • KeepAliveIdleTime: The time, in seconds, before the first TCP keepalive probe is sent to ensure that the client is still connected.

  • KeepAliveProbeInterval: The time interval, in seconds, between keepalive probes.

  • KeepAliveProbeCount: The number of consecutive keepalive probes that must be unacknowledged by the client before the client connection is considered lost and closed.
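
For example, a sketch of setting one of these parameters at the database level; the value is illustrative:

=> ALTER DATABASE DEFAULT SET KeepAliveIdleTime = 120;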

For details, see Managing client connections.

6.2 - Client drivers

Node.js client driver

The vertica-nodejs client driver is now available.

Centralized OAuth configuration

To simplify OAuth configuration, you can now set OAuth parameters with a single JSON string with the new oauthjsonconfig (ODBC) and OAuthJsonConfig (JDBC) parameters.

To preserve existing configurations, the old parameters take precedence over the new JSON parameter, but the new JSON parameter is the recommended configuration method, and the old parameters have been deprecated.

JDBC DataSource user property

You can now get and set the user property of DataSource with getUser() and setUser(), respectively. For details, see JDBC API.

ODBC client driver enhancements

A number of enhancements have been made to the ODBC client driver to better conform to the ODBC specification. These changes may cause regressions in existing ODBC client programs:

  • If an application calls SQLSetDescField to set any field other than the following, the record becomes unbound:

    • SET_DESC_COUNT

    • The deferred fields:

      • SQL_DESC_DATA_PTR

      • SQL_DESC_INDICATOR_PTR

      • SQL_DESC_OCTET_LENGTH_PTR

  • SQLSetDescField must be called in the sequence order specified in the Microsoft documentation.

  • If an application wants to set the precision or scale field for a numeric or date type, it must use SQLSetDescField to explicitly set those fields rather than relying on SQLBindParameter. For details, see the Microsoft documentation.

  • If an application calls SQLGetData, the StrLen_or_IndPtr can return the actual length of the data available, SQL_NO_TOTAL, or SQL_NULL_DATA. If the data is truncated, Vertica returns SQL_NO_TOTAL. When checking the length of the available buffer, you must consider all cases, including SQL_NO_TOTAL and SQL_NULL_DATA. For details, see the Microsoft documentation on SQLGetData and Getting Long Data.

  • If an application calls SQLExecute, the response will always attempt to set the SQLSTATE, including the status and other metadata.

  • Many exception messages have been improved with more detailed error information.

JDBC: CallableStatement

The Vertica JDBC client driver now supports stored procedures through CallableStatement. For details, see Executing queries through JDBC.
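
A minimal Java sketch; the procedure name and argument are illustrative:

// Assumes an open java.sql.Connection conn and a stored procedure raise_salary(INT)
try (CallableStatement stmt = conn.prepareCall("CALL raise_salary(?)")) {
    stmt.setInt(1, 42);
    stmt.execute();
}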

6.3 - Containers and Kubernetes

Spread encryption on Vertica database

You can enable Spread encryption on the Vertica database. Use the encryptSpreadComm custom resource parameter to set the EncryptSpreadComm security parameter for the database.
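
A minimal CR sketch; the value mirrors the EncryptSpreadComm server parameter:

spec:
  encryptSpreadComm: vertica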

For details, see Custom resource definition parameters.

Prometheus operator integration

Vertica on Kubernetes integrates with the Prometheus operator to facilitate adoption in environments that currently use the Prometheus operator.

For details, see Prometheus integration.

6.4 - Database management

Disk quota for schemas and tables

You can set disk quotas for schemas, individual tables, or both. Disk quotas are checked by most user operations that increase storage size. Quotas do not affect recovery, rebalancing, or Tuple Mover operations.

In Eon Mode, disk usage is an aggregate of all disk space used by all shards for the schema or table. This value is computed for primary subscriptions only. In Enterprise Mode, disk usage is the sum of disk space used by all storage containers on all nodes for the schema or table. This sum excludes buddy projections but includes all other projections.

The DISK_QUOTA_USAGES system table records quotas and current usage for objects that have quotas.
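
For example, hedged sketches of setting quotas at both levels; names and sizes are illustrative:

=> CREATE SCHEMA analytics DISK_QUOTA '10T';
=> ALTER TABLE analytics.events SET DISK_QUOTA '500G';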

For details, see Disk quotas.

6.5 - Diagnostic tools

Scrutinize

scrutinize now collects the full time zone by running the following command:

timedatectl | grep "Time zone"

Output is directed to:

scrutinize-output-dir/VerticaScrutinize.timestamp/node-name/context/Commands/timezone

For example, after running scrutinize:

$ cat ./VerticaScrutinize.20220513110429/v_vmart_node0001/context/Commands/timezone
       Time zone: America/New_York (EDT, -0400)

6.6 - Documentation updates

Several organizational changes have been made to make information easier to find:

  • Top-level topics have been re-ordered to place similar information together. The SQL Reference has been moved near the bottom with the other reference documentation. The lists of configuration parameters have been moved into the SQL Reference.

  • In the SQL Reference, functions and meta-functions have been combined under Functions, and the alphabetical list includes all of them. (Previously, the alphabetical list included only meta-functions.) Meta-functions are labeled as such on their reference pages.

  • The documentation about client drivers has been refactored.

  • The information that was in the Concepts Guide has been placed with the other documentation on those topics. For example, the section about projections is now with the other documentation about projections.

  • Several duplicate topics were removed, and several collections of short related topics have been consolidated.

6.7 - Eon Mode

Re-shard an Eon Mode database

The new RESHARD_DATABASE function allows you to change the number of shards in an Eon Mode database. Previously, the shard count was set during database creation and could not be altered.
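
For example, changing the shard count to 24:

=> SELECT RESHARD_DATABASE(24);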

For more information about re-sharding, see Change the number of shards in the database.

Graceful shutdown

If you want to drain a subcluster's client connections before shutting it down, you can now gracefully shut down subclusters using SHUTDOWN_WITH_DRAIN. The function allows existing connections on the subcluster to continue their work until either all connections close or a user-specified timeout is reached. When one of these conditions is met, the function proceeds to shut down the subcluster.
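
For example, a sketch that drains a subcluster with a 300-second timeout; the subcluster name is illustrative:

=> SELECT SHUTDOWN_WITH_DRAIN('analytics', 300);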

For more information about the graceful shutdown process, see Graceful Shutdown.

6.8 - Loading data

Partitioned file paths

Data files are sometimes partitioned in the file system using the directory structure. Partitioning moves values out of the raw data, where they have to be included for each row, and into the directory structure, saving disk space. Partitioning can also improve query performance by allowing entire directories to be skipped.

Previously, only the Parquet and ORC parsers could take advantage of partitioned file paths. Now COPY supports partitioned file paths for all parsers using the new PARTITION COLUMNS option. The hive_partition_cols parameter for the Parquet and ORC parsers is deprecated.

The hive_partition_cols parameter has the following behavior changes from previous releases:

  • Nested partition directories must appear in consistent order in the file system. The following path pattern is invalid:

    /data/created=2022-01-01/region=north
    /data/region=south/created=2022-01-02
    
  • If the column value cannot be parsed from the directory name, COPY rejects the path instead of treating the value as null.

  • If the path is missing a declared partition column, COPY always returns an error. Previously, if do_soft_schema_match_by_name was true in the Parquet parser, the parser filled the column with nulls.

  • Partition columns are no longer required to be the last columns in the table definition.

See Partitioned file paths.
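
For example, a sketch of an external table over paths partitioned on created and region; the path and columns are illustrative:

=> CREATE EXTERNAL TABLE records (id INT, name VARCHAR(50), created DATE, region VARCHAR(50))
   AS COPY FROM 's3://datalake/sales/*/*/*'
   PARTITION COLUMNS created, region;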

6.9 - Machine learning

Isolation forest

Vertica now supports the isolation forest (iForest) algorithm. iForest is an unsupervised outlier detection algorithm that you can use for applications such as fraud detection and system health monitoring, or for removing outliers from your data. You can use the IFOREST function to train an iForest model and the APPLY_IFOREST function to apply the model to an input relation.

For an extended discussion on the iForest algorithm, see Isolation Forest.
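
A hedged training sketch; names are illustrative, and parameters are covered in the IFOREST reference:

=> SELECT IFOREST('iforest_model', 'sensor_readings', 'temperature, pressure');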

Option to build LINEAR_REG and LOGISTIC_REG models without calculating intercept

You now have the option of training linear and logistic regression models without calculating an intercept. See LINEAR_REG and LOGISTIC_REG for details.

6.10 - Management Console

Edit custom alerts

You can edit the following properties of an existing custom alert:

  • Alert Name

  • SQL Query

  • Variable values

For details, see Custom alerts.

Revive an Eon Mode database on Microsoft Azure

Management Console supports reviving Eon Mode database clusters on Microsoft Azure.

Alternate filesystem layouts for reviving and scaling on AWS

On Amazon Web Services (AWS), you can use alternate filesystem layouts for the depot, temp, and catalog Vertica directories for the following actions:

  • Reviving a database

  • Creating a subcluster

  • Scaling a subcluster

For details, see Amazon Web Services in MC and Subclusters in MC.

6.11 - Projections

Partition range support for aggregate projections

Partition ranges can now be specified for live aggregate and top-K projections.

6.12 - SDK updates

Polymorphic aggregate functions

User-defined aggregate functions (UDAFs) can now be polymorphic. A polymorphic function can accept any number and type of arguments. For information on how to write a polymorphic function, see Creating a polymorphic UDx.

Complex types in the Python SDK

You can now read and write complex types in user-defined extensions written in Python. The complex types support includes arrays, rows, and combinations of both. For details, see Arguments and return values. For examples of a Python UDx utilizing complex types, see Python example: matrix multiplication and Python example: complex types JSON parser.

6.13 - Security and authentication

Generic authentication errors

To improve security, Vertica no longer issues the following error when a user attempts to connect and authenticate:

Invalid username or password

In addition, Vertica no longer issues method-specific error messages, and all authentication failures will result in the same error message:

authentication failed for username "name"

This change may affect clients that decide to retry connection attempts based on the type of connection error code. One such example is vsql, which would previously attempt a plaintext connection if TLS failed (and TLSMODE was ENABLE), but not if vsql received an error code reporting incorrect user credentials.

Because the new, generic message and error code do not specify the cause of the failure, vsql cannot distinguish between an authentication error caused by an invalid TLS configuration and one caused by invalid user credentials, so it attempts to establish a plaintext connection in both cases.

Default authentication records

Vertica now creates three default authentication records and grants them to the public role. These authentication records have a priority of -1, so all user-created authentication records take priority over these default records.

In previous versions of Vertica, if no authentication records were enabled, the following implicit authentication records were applied:

  • Users without a password were authenticated with the trust method. This implicit authentication record continues to apply in 12.0.0 for users without a password.

  • Users with a password were authenticated with the password method. This implicit authentication record has been removed in favor of the default records.

Fallthrough authentication

You can now allow authentication records to fall through to the next record (in order of priority) upon failure. For details, see Fallthrough authentication.

Upgrade behavior

In previous versions, this fallthrough behavior was available only for the ident (could fall through to any other authentication method) and ldap (could fall through only to other ldap methods) authentication methods, and it could not be disabled. As of Vertica 12.0.0, this behavior is disabled by default for all new authentication records, including ident and ldap.

To preserve the behavior of existing databases that depended on ident's fallthrough, Vertica automatically enables fallthrough for ident authentication records on upgrade if all of the following are true:

  • The database contains an ident authentication record.

  • The ident authentication record has the highest priority.

  • The database contains another user-defined authentication record.

Again, in previous versions, ldap records only fell through to other ldap records and skipped records that used other methods. This is no longer the case; ldap records are fallthrough-compatible with many other methods.

Therefore, to replicate the old fallthrough behavior for ldap, your ldap records must be consecutive (in priority order) to fall through to each other.

6.14 - SQL functions and statements

INFER_TABLE_DDL supports JSON

You can now use the INFER_TABLE_DDL function to produce candidate table definitions from JSON files. Because a JSON file does not have an explicit schema, the function inspects the data itself. Because JSON data can vary from record to record or file to file, the function can return more than one candidate definition. In the following example, differences are highlighted:

=> SELECT INFER_TABLE_DDL ('/data/*.json'
    USING PARAMETERS table_name='restaurants', format='json',
    max_files=3, max_candidates=3);
WARNING 0:  This generated statement contains one or more float types which might lose precision
WARNING 0:  This generated statement contains one or more varchar/varbinary types which default to length 80

                INFER_TABLE_DDL
------------------------------------------------------------------------
 Candidate matched 1/2 of total files(s):
  create table "restaurants"(
    "cuisine" varchar,
    "location_city" Array[varchar],
    "menu" Array[Row(
      "item" varchar,
      "price" float
    )],
    "name" varchar
  );
Candidate matched 1/2 of total files(s):
  create table "restaurants"(
    "cuisine" varchar,
    "location_city" Array[varchar],
    "menu" Array[Row(
      "items" Array[Row(
        "item" varchar,
        "price" numeric
      )],
      "time" varchar
    )],
    "name" varchar
  );

(1 row)

Immutable tables

Immutable tables are insert-only tables in which existing data cannot be modified, regardless of user privileges. Updating row values and deleting rows are prohibited. Certain changes to table metadata—for example, renaming tables—are also prohibited, in order to prevent attempts to circumvent these restrictions.

You set an existing table to be immutable with ALTER TABLE:

ALTER TABLE table SET IMMUTABLE ROWS;

For details, see Immutable tables.
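
The following minimal sketch shows the expected behavior; the table is illustrative, and the exact error text may differ:

=> CREATE TABLE audit_log(event_time TIMESTAMP, msg VARCHAR);
=> ALTER TABLE audit_log SET IMMUTABLE ROWS;
=> INSERT INTO audit_log VALUES (NOW(), 'user login'); -- inserts still succeed
=> UPDATE audit_log SET msg = 'tampered';              -- fails: rows in an immutable table cannot be updated
=> DELETE FROM audit_log;                              -- fails: rows in an immutable table cannot be deleted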

6.15 - Users and privileges

By default, synchronizing LDAP users through the LDAP Link service automatically grants roles (derived from their LDAP groups) to the users.

By default, synchronizing LDAP users through the LDAP Link service automatically grants roles (derived from their LDAP groups) to the users. However, these are not default roles and therefore must be enabled manually with SET ROLE before they take effect.
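
For example, after a synchronization, a user can enable the granted roles for the current session (ldap_sales_group is a hypothetical role name):

=> SET ROLE ALL;             -- enable every role granted to you
=> SET ROLE ldap_sales_group; -- or enable one specific role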

The new LDAPLinkAddRolesAsDefault parameter (disabled by default) makes these roles default roles automatically:

=> ALTER DATABASE DEFAULT SET LDAPLinkAddRolesAsDefault = 1;

For details on this and other LDAP Link parameters, see LDAP link parameters.

6.16 - Vertica on the cloud

Vertica now supports additional EC2 instance types for use in database clusters: three i4i types, two r6i types, and one c6i type.

AWS i4i, r6i, and c6i instance types now supported

Vertica now supports additional EC2 instance types for use in database clusters: three i4i types, two r6i types, and one c6i type. See Supported AWS instance types.

7 - Deprecated and removed functionality

Vertica retires functionality in two phases.

Vertica retires functionality in two phases:

  • Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.

  • Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal (see History, below). Documentation that describes this functionality is removed, but remains in previous documentation versions.

Deprecated

The following functionality was deprecated and will be removed in future versions:

| Release | Functionality | Notes |
|---------|---------------|-------|
| 12.0.4 | Visual Studio 2012, 2013, and 2015 plug-ins and the Microsoft Connectivity Pack | |
| 12.0.3 | ADO.NET driver support for .NET 3.5 | |
| 12.0.3 | prometheus.createServiceMonitor Helm chart parameter | |
| 12.0.3 | webhook.caBundle Helm chart parameter | Use webhook.tlsSecret instead. For details, see Helm chart parameters. |
| 12.0.2 | JDBC 4.0 and 4.1 support | |
| 12.0.2 | Support for Visual Studio 2008 and 2010 plug-ins | |
| 12.0.2 | CA bundles | Use TLS configurations instead. |
| 12.0.2 | TLSMODE, CA BUNDLE, and CERTIFICATE parameters for CREATE NOTIFIER and ALTER NOTIFIER | Use the TLS CONFIGURATION parameter instead. For details, see Creating a Kafka notifier. |
| 12.0.2 | TLSMODE PREFER parameter for CONNECT TO VERTICA | Use the TLS CONFIGURATION parameter instead. |
| 12.0.2 | cert-manager for Helm chart TLS configuration | Use the webhook.certSource parameter to generate certificates internally or provide custom certificates. See Helm chart parameters. |
| 12.0.2 | webhook.caBundle Helm chart parameter | Provide the certificate authority (CA) bundle in a Secret to webhook.tlsSecret. |
| 12.0.0 | hive_partition_cols parameter for PARQUET and ORC parsers | Use COPY...PARTITION COLUMNS instead. See Partitioned file paths. |
| 12.0.0 | ODBC/JDBC OAuth parameters: OAuthAccessToken/oauthaccesstoken, OAuthRefreshToken/oauthrefreshtoken, OAuthClientId/oauthclientid, OAuthClientSecret/oauthclientsecret, OAuthTokenUrl/oauthtokenurl, OAuthDiscoveryUrl/oauthdiscoveryurl, OAuthScope/oauthscope | Replaced by OAuthJsonConfig (ODBC) and oauthjsonconfig (JDBC). |

Removed

The following functionality was removed:

| Release | Functionality | Notes |
|---------|---------------|-------|
| 12.0.4 | ADO.NET driver support for .NET 3.5 | The ADO.NET driver now requires the Microsoft .NET Framework 4.6 or higher. The Vertica Client Drivers and Tools for Windows prompts you to install .NET 4.6 if your system does not meet this requirement. |
| 12.0.3 | Support for Visual Studio 2008 and 2010 plug-ins | |
| 12.0.3 | vsql support for macOS 10.12-10.14 | vsql now requires macOS 10.15 or higher. |
| 12.0.1 | Internet Explorer 11 | The Internet Explorer 11 web browser is retired and out of support as of June 15, 2022. |
| 12.0.0 | macOS 10.12 ODBC and vsql client driver support | |
| 12.0.0 | AWS Library (S3EXPORT) | Replaced by File export. |

History

The following functionality or support has been deprecated or removed as indicated:

| Functionality | Component | Deprecated in: | Removed in: |
|---------------|-----------|----------------|-------------|
| Visual Studio 2012, 2013, and 2015 plug-ins and the Microsoft Connectivity Pack | Client drivers | 12.0.4 | |
| vsql support for macOS 10.12-10.14 | Client drivers | 12.0.3 | |
| CA bundles | Security | 12.0.2 | |
| TLSMODE, CA BUNDLE, and CERTIFICATE parameters for CREATE NOTIFIER and ALTER NOTIFIER | Security | 12.0.2 | |
| TLSMODE PREFER parameter for CONNECT TO VERTICA | Security | 12.0.2 | |
| JDBC 4.0 and 4.1 support | Client drivers | 12.0.2 | |
| Support for Visual Studio 2008 and 2010 plug-ins | Client drivers | 12.0.2 | 12.0.3 |
| Internet Explorer 11 support | Management Console | 12.0.1 | |
| ODBC support for macOS 10.12-10.14 | Client drivers | 12.0 | |
| ODBC/JDBC OAuth parameters: OAuthAccessToken/oauthaccesstoken, OAuthRefreshToken/oauthrefreshtoken, OAuthClientId/oauthclientid, OAuthClientSecret/oauthclientsecret, OAuthTokenUrl/oauthtokenurl, OAuthDiscoveryUrl/oauthdiscoveryurl, OAuthScope/oauthscope | Client drivers | 12.0 | |
| hive_partition_cols parameter for PARQUET and ORC parsers | Server | 12.0 | |
| INFER_EXTERNAL_TABLE_DDL function | Server | 11.1.1 | |
| Admission Controller Webhook image | Kubernetes | 11.0.1 | 11.0.2 |
| Admission Controller Helm chart | Kubernetes | 11.0.1 | |
| Shared DATA and DATA,TEMP storage locations | Server | 11.0.1 | |
| DESIGN_ALL option for EXPORT_CATALOG() | Server | 11.0 | |
| HDFSUseWebHDFS configuration parameter and LibHDFS++ | Server | 11.0 | |
| INFER_EXTERNAL_TABLE_DDL (path, table) syntax | Server | 11.0 | 11.1.1 |
| AWS library functions: AWS_GET_CONFIG, AWS_SET_CONFIG, S3EXPORT, S3EXPORT_PARTITION | Server | 11.0 | 12.0 |
| Vertica Spark connector V1 | Client | 11.0 | |
| admintools db_add_subcluster --is-secondary argument | Server | 11.0 | |
| Red Hat Enterprise Linux/CentOS 6.x | Server | 10.1.1 | 11.0 |
| STRING_TO_ARRAY(array,delimiter) syntax | Server | 10.1.1 | |
| Vertica JDBC API com.vertica.jdbc.kv package | Client drivers | 10.1 | |
| ARRAY_CONTAINS function | Server | 10.1 | |
| Client-server TLS parameters (SSLCertificate, SSLPrivateKey, SSLCA, EnableSSL); LDAP authentication parameters (tls_key, tls_cert, tls_cacert, tls_reqcert); LDAPLink and LDAPLink dry-run parameters (LDAPLinkTLSCACert, LDAPLinkTLSCADir, LDAPLinkStartTLS, LDAPLinkTLSReqCert) | Server | 10.1 | 11.0 |
| MD5 hashing algorithm for user passwords | Server | 10.1 | |
| Reading structs from ORC files as expanded columns | Server | 10.1 | 11.0 |
| vbr configuration section [S3] and S3 configuration parameters | Server | 10.1 | |
| flatten_complex_type_nulls parameter to the ORC and Parquet parsers | Server | 10.1 | 11.0 |
| System table WOS_CONTAINER_STORAGE | Server | 10.0.1 | 11.0.2 |
| skip_strong_schema_match parameter to the Parquet parser | Server | 10.0.1 | 10.1 |
| Specifying segmentation on specific nodes | Server | 10.0.1 | |
| DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE | Server | 10.0.1 | 11.0.1 |
| Meta-function ANALYZE_CORRELATIONS | Server | 10.0 | |
| Eon Mode meta-function BACKGROUND_DEPOT_WARMING | Server | 10.0 | |
| Reading structs from Parquet files as expanded columns | Server | 10.0 | 10.1 |
| Eon Mode meta-functions SET_DEPOT_PIN_POLICY and CLEAR_DEPOT_PIN_POLICY | Server | 10.0 | 10.1 |
| vbr configuration parameter SnapshotEpochLagFailureThreshold | Server | 10.0 | |
| Array-specific functions array_min, array_max, array_sum, array_avg | Server | 10.0 | 10.1 |
| DMLTargetDirect configuration parameter | Server | 10.0 | |
| HiveMetadataCacheSizeMB configuration parameter | Server | 10.0 | 10.1 |
| MoveOutInterval | Server | 10.0 | |
| MoveOutMaxAgeTime | Server | 10.0 | |
| MoveOutSizePct | Server | 10.0 | |
| Windows 7 | Client drivers | 9.3.1 | |
| DATABASE_PARAMETERS admintools command | Server | 9.3.1 | |
| Write-optimized store (WOS) | Server | 9.3 | 10.0 |
| 7.2_upgrade vbr task | Server | 9.3 | |
| DropFailedToActivateSubscriptions configuration parameter | Server | 9.3 | 10.0 |
| --skip-fs-checks | Server | 9.2.1 | |
| 32-bit ODBC Linux and OS X client drivers | Client | 9.2.1 | 9.3 |
| Vertica Python client | Client | 9.2.1 | 10.0 |
| macOS 10.11 | Client | 9.2.1 | |
| DisableDirectToCommunalStorageWrites configuration parameter | Server | 9.2.1 | |
| CONNECT_TO_VERTICA meta-function | Server | 9.2.1 | 9.3 |
| ReuseDataConnections configuration parameter | Server | 9.2.1 | 9.3 |
| Network interfaces (superseded by network addresses) | Server | 9.2 | |
| Database branching | Server | 9.2 | 10.0 |
| KERBEROS_HDFS_CONFIG_CHECK meta-function | Server | 9.2 | |
| Java 5 support | JDBC Client | 9.2 | 9.2.1 |
| Configuration parameters for enabling projections with aggregated data: EnableExprsInProjections, EnableGroupByProjections, EnableTopKProjections, EnableUDTProjections | Server | 9.2 | |
| DISABLE_ELASTIC_CLUSTER() | Server | 9.1.1 | 11.0 |
| eof_timeout parameter of KafkaSource | Server | 9.1.1 | 9.2 |
| Windows Server 2012 | Server | 9.1.1 | |
| Debian 7.6, 7.7 | Client driver | 9.1.1 | 9.2.1 |
| IdolLib function library | Server | 9.1 | 9.1.1 |
| SSL certificates that contain weak CA signatures such as MD5 | Server | 9.1 | |
| HCatalogConnectorUseLibHDFSPP configuration parameter | Server | 9.1 | |
| S3 UDSource | Server | 9.1 | 9.1.1 |
| HCatalog Connector support for WebHCat | Server | 9.1 | |
| partition_key column in system tables STRATA and STRATA_STRUCTURES | Server | 9.1 | 10.0.1 |
| Vertica Pulse | Server | 9.0.1 | 9.1.1 |
| Support for SQL Server 2008 | Server | 9.0.1 | 9.0.1 |
| SUMMARIZE_MODEL meta-function | Server | 9.0 | 9.1 |
| RestrictSystemTable parameter | Server | 9.0.1 | |
| S3EXPORT multipart parameter | Server | 9.0 | |
| EnableStorageBundling configuration parameter | Server | 9.0 | |
| Machine Learning for Predictive Analytics package parameter key_columns for data preparation functions | Server | 9.0 | 9.0.1 |
| DROP_PARTITION meta-function, superseded by DROP_PARTITIONS | Server | 9.0 | |
| Machine Learning for Predictive Analytics package parameter owner | Server | 8.1.1 | 9.0 |
| Backup and restore --setupconfig command | Server | 8.1 | 9.1.1 |
| SET_RECOVER_BY_TABLE meta-function (do not disable recovery by table) | Server | 8.0.1 | |
| Column rebalance_projections_status.duration_sec | Server | 8.0 | |
| HDFS Connector | Server | 8.0 | 9.0 |
| Prejoin projections | Server | 8.0 | 9.2 |
| Administration Tools option --compat21 | Server | 7.2.1 | |
| admin_tools -t config_nodes | Server | 7.2 | 11.0.1 |
| Projection buddies with inconsistent sort order | Server | 7.2 | 9.0 |
| backup.sh | Server | 7.2 | 9.0 |
| restore.sh | Server | 7.2 | 9.0 |
| copy_vertica_database.sh | Server | 7.2 | |
| JavaClassPathForUDx configuration parameter | Server | 7.1 | |
| ADD_LOCATION meta-function | Server | 7.1 | |
| bwlimit configuration parameter | Server | 7.1 | 9.0 |
| vbr configuration parameters retryCount and retryDelay | Server | 7.1 | 11.0 |
| EXECUTION_ENGINE_PROFILES counters file handles and memory allocated | Server | 7.0 | 9.3 |
| EXECUTION_ENGINE_PROFILES counter memory reserved | Server | 7.0 | |
| MERGE_PARTITIONS() meta-function | Server | 7.0 | |
| krb5 client authentication method | All clients | 7.0 | |
| range-segmentation-clause | Server | 6.1.1 | 9.2 |
| scope parameter of meta-function CLEAR_PROFILING | Server | 6.1 | |
| Projection creation type IMPLEMENT_TEMP_DESIGN | Server, clients | 6.1 | |