New and changed in Vertica 11.1.1
- 1: Configuration
- 2: Containers and Kubernetes
- 3: Data export
- 4: Data types
- 5: Database management
- 6: Geospatial analytics
- 7: Apache Hadoop integration
- 8: Loading data
- 9: Machine learning
- 10: Management Console
- 11: Query optimization
- 12: SDK updates
- 13: Security and authentication
- 14: SQL functions and statements
1 - Configuration
Session-level support for SaveDCEEProfileThresholdUS
Configuration parameter SaveDCEEProfileThresholdUS can now be set at the session level.
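For example, the following statement sets the parameter for the current session only (the value shown is illustrative):
=> ALTER SESSION SET PARAMETER SaveDCEEProfileThresholdUS = 100000;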
2 - Containers and Kubernetes
Operator helm chart privileges with service accounts
Users who do not have cluster administrator privileges can install and run the operator by passing a custom service account with operator privileges to the serviceAccountNameOverride Helm chart parameter. This parameter prevents the Helm chart from creating a default service account for the operator installation.
The cluster administrator must create the custom service account and grant it operator privileges before the non-admin user installs the Helm chart.
For details about granting operator privileges, see Installing the Vertica DB operator. For details about serviceAccountNameOverride, see Helm chart parameters.
Install the helm chart without the admission controller webhook
Users with limited permissions might want to install only the operator to manage Vertica on Kubernetes. The webhook.enable Helm chart parameter controls whether the Helm chart installs the admission controller webhook along with the operator.
For detailed installation steps, see Installing the Vertica DB operator. For details about webhook.enable, see Helm chart parameters.
Managed Kubernetes services support
Vertica supports managed Kubernetes services on Azure Kubernetes Service (AKS).
Air-gapped operator with helm chart
As a security feature, you can deploy the operator in an air-gapped cluster, which is a cluster that is disconnected from other networks. Use the following parameters to configure the isolated operator:
- image.repo
- rbac_proxy_image.name
- rbac_proxy_image.repo
- imagePullSecrets
For additional details about each of these parameters, see Helm chart parameters.
Customize LoadBalancer service types
Assign a static IP address for subclusters that use the LoadBalancer service type with the subclusters[i].loadBalancerIP parameter. This is useful for bare-metal Vertica on Kubernetes deployments.
Add custom annotations to implementation-specific services with the subclusters[i].serviceAnnotations parameter. Managed Kubernetes offerings use annotations on service objects to control logging and other actions.
For details about each parameter, see Custom resource definition parameters. For a list of supported Managed services, see Containerized environments.
Configure online upgrade traffic redirect wait time
During an upgrade, the operator runs a reconcile iteration and requeues any resource objects for additional work if their actual state does not match the desired state. Set the upgradeRequeueTime parameter to determine how long the operator waits to complete any requeued work.
For additional details, see Upgrading Vertica on Kubernetes and Custom resource definition parameters.
VerticaAutoscaler custom resource
The new VerticaAutoscaler custom resource automatically scales existing subclusters by subcluster or pod. You can select the metric and trigger that determines when the workload resource scales.
For details, see VerticaAutoscaler custom resource.
Prometheus integration
Vertica on Kubernetes integrates with Prometheus to collect metrics on the VerticaDB operator. Helm chart parameters configure role-based access control (RBAC) or allow external client connections without RBAC authorization.
For details, see Prometheus integration.
3 - Data export
JSON export
You can now export data to JSON files, including heterogeneous complex types. For details, see EXPORT TO JSON.
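For example, a minimal export of a table to a shared directory (the table name and path are hypothetical):
=> EXPORT TO JSON(directory='/mnt/shared/json_out') AS SELECT * FROM students;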
Export to single file
The following file exporters can now export all data to a single file:
- EXPORT TO PARQUET
- EXPORT TO ORC
- EXPORT TO JSON
- EXPORT TO DELIMITED
If you choose to use this option, a single node performs the export.
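For example, a sketch that assumes the single-file option is exposed as a filename parameter (the parameter name, path, and table are assumptions for illustration):
=> EXPORT TO PARQUET(directory='/mnt/shared/parquet_out', filename='sales')
   AS SELECT * FROM sales;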
4 - Data types
ROW and ARRAY literals support nested complex types
When directly constructing ROW or ARRAY values, for example to use in a WHERE clause, you can now include fields and elements that are themselves ROW or ARRAY types. For example:
=> SELECT id.name, id.num, GPA FROM students
WHERE major = ARRAY[ROW('Science','Physics')];
name | num | GPA
-------+-----+-----
bob | 121 | 3.3
carol | 123 | 3.4
(2 rows)
Adding and removing columns of complex types
You can now use ALTER TABLE to add and remove columns of complex types. Previously, you could create a table with complex-type columns but could not add them to existing tables.
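For example, a minimal sketch of adding and then dropping a complex-type column (the table and column names are hypothetical):
=> ALTER TABLE students ADD COLUMN addresses ARRAY[ROW(street VARCHAR, city VARCHAR)];
=> ALTER TABLE students DROP COLUMN addresses;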
5 - Database management
Syslog notifiers
You can now create notifiers that send messages to syslog.
Events monitored by this notifier type are not logged to MONITORING_EVENTS or vertica.log.
For details, see Configuring reporting for syslog.
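A minimal sketch of such a notifier follows; the name is hypothetical, and the MAXMEMORYSIZE and eventSeverity settings are assumptions about typical values:
=> CREATE NOTIFIER my_syslog_notifier ACTION 'syslog'
   ENABLE
   MAXMEMORYSIZE '10M'
   PARAMETERS 'eventSeverity = 5';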
6 - Geospatial analytics
ST_GeomFromGeoJSON ignore_errors parameter
The ST_GeomFromGeoJSON function has a new ignore_errors parameter. When set to true, the function returns NULL if the input GeoJSON fails to parse.
Use this setting when you load data that might contain faulty GeoJSON values: each faulty value yields NULL in the GEOMETRY column, and the load continues with the remaining records.
For details, see ST_GeomFromGeoJSON.
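For example, with ignore_errors enabled, a malformed GeoJSON value returns NULL instead of raising an error (the input here is deliberately invalid):
=> SELECT ST_GeomFromGeoJSON('{"type":"Point"}' USING PARAMETERS ignore_errors='true');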
7 - Apache Hadoop integration
WebHDFS access with mTLS
For loading and exporting data, Vertica can access HDFS clusters protected by mTLS through the swebhdfs scheme. You must create a certificate and key and set the WebhdfsClientCertConf configuration parameter. The parameter value is a JSON string listing name services or authorities and their corresponding keys, so you can configure access to more than one HDFS cluster.
You can use CREATE KEY and CREATE CERTIFICATE to create temporary, session-scoped values. These keys and certificates are stored in memory, not on disk. You can set the configuration parameter at the session or database level.
For details and examples, see HDFS file system.
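For example, a sketch of setting the parameter at the session level; the JSON field names (authority, certName) and their values are assumptions for illustration:
=> ALTER SESSION SET PARAMETER WebhdfsClientCertConf =
   '[{"authority": "hdfs1.example.com:50070", "certName": "hdfs1_cert"}]';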
8 - Loading data
Avro maps
You can now treat an Avro map as an ARRAY[ROW]. The ROW fields must be named key and value. These are the names that the Avro format uses for those fields in the data, and the Avro parser relies on field names to match data to table columns.
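For example, a hypothetical table and load for Avro data containing a map of string to string:
=> CREATE TABLE events(id INT, properties ARRAY[ROW(key VARCHAR, value VARCHAR)]);
=> COPY events FROM '/data/events.avro' PARSER FAVROPARSER();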
9 - Machine learning
Support for XGBoost PMML models
You can now import and export XGBoost PMML models. In PMML, an XGBoost model is an ensemble of TreeModels called a multiple model.
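For example, importing a PMML model exported from XGBoost (the path and file name are hypothetical):
=> SELECT IMPORT_MODELS('/models/xgb_tree_ensemble.pmml' USING PARAMETERS category='PMML');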
New probability_normalization parameter for PREDICT_XGB_CLASSIFIER and PREDICT_XGB_CLASSIFIER_CLASSES
Previously, the PREDICT_XGB_CLASSIFIER and PREDICT_XGB_CLASSIFIER_CLASSES functions used only the logit method to normalize the prediction probabilities of different classes. These functions still use logit by default, but the new probability_normalization parameter lets you switch the normalization method to softmax.
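For example, a sketch of switching to softmax normalization (the model, columns, and table are hypothetical):
=> SELECT PREDICT_XGB_CLASSIFIER(age, income USING PARAMETERS model_name='xgb_model', probability_normalization='softmax') FROM customers;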
10 - Management Console
Eon Mode subcluster and node actions support for Microsoft Azure
Management Console supports the following new features for Microsoft Azure:
- Create new subclusters
- Scale subclusters up and down
- View primary and secondary subcluster information
- Start, stop, create, and terminate nodes in a subcluster
For details, see Microsoft Azure in MC.
11 - Query optimization
Materialized WITH clause now supported using EE5 temp relation
Previously, Vertica implemented WITH clauses by either rewriting WITH statements using subqueries, or materializing WITH queries into local temp tables. By default, Vertica now executes WITH clause queries in sections of a multi-part plan and saves their output in EE5 temp relations. This update improves overall performance of WITH clauses. To learn more, see Materialization of WITH Clause.
12 - SDK updates
Reporting errors and warnings (Python)
You can now use a UDx written in Python to report errors, warnings, and other messages to the client. A message can contain a details string and a hint, like the messages that Vertica itself reports. The interface is similar to the C++ reporting API. For details, see Sending messages.
13 - Security and authentication
OAuth
Okta support
Vertica now supports Okta as an identity provider (IDP) for OAuth authentication. For details, see OAuth 2.0 authentication.
Empty username and password requirement removed (JDBC)
In 11.1.0, JDBC client applications that used OAuth had to pass empty strings for the user and password parameters in the connection string to authenticate. These parameters are no longer required when authenticating with OAuth.
Discovery URL parameter (Keycloak only)
You can now specify a discovery URL for both an authentication record and JDBC/ODBC. The discovery URL is an endpoint that contains your identity provider's OpenID Provider Configuration Document. This parameter is only supported for Keycloak.
If specified as a connection property in JDBC or ODBC, the client drivers will automatically retrieve the token URL from the discovery URL.
Similarly, if specified as an authentication parameter in Vertica, Vertica will automatically retrieve the introspect_url from the discovery_url.
Scope parameter
The optional scope parameter lets you define the extent of access granted by the access token. For details about OAuth scoping, see the OAuth Documentation.
For details on this and other parameters, see JDBC connection properties and DSN Parameters (ODBC).
GRANT/REVOKE ALTER on TLS CONFIGURATION
You can now delegate the management (adding or removing certificates, changing the TLSMODE, etc.) of TLS Configurations to users or roles by granting and revoking ALTER privileges.
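For example, granting and later revoking ALTER on the built-in server TLS Configuration (the role name is hypothetical):
=> GRANT ALTER ON TLS CONFIGURATION server TO tls_managers;
=> REVOKE ALTER ON TLS CONFIGURATION server FROM tls_managers;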
14 - SQL functions and statements
Multi-row INSERT
You can use INSERT to insert multiple rows into the target table by specifying multiple comma-delimited VALUES lists. For example:
=> CREATE TABLE public.t1(a int, b int, c varchar(16));
CREATE TABLE
=> INSERT INTO t1 VALUES (1,2, 'un, deux'), (3,4, 'trois, quartre');
OUTPUT
--------
2
(1 row)
=> COMMIT;
COMMIT
=> SELECT * FROM t1;
a | b | c
---+---+----------------
1 | 2 | un, deux
3 | 4 | trois, quartre
(2 rows)
UPDATE enhancements
The FROM clause of an UPDATE statement can now reference the target table, as follows:
FROM DEFAULT [join-type] JOIN table-reference [ ON join-predicate ]
DEFAULT specifies the table to update. This keyword can be used only once in the FROM clause, and it cannot be used elsewhere in the UPDATE statement.
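For example, a sketch that joins the target table to a source table and updates matching rows (both tables are hypothetical):
=> UPDATE target SET price = source.price
   FROM DEFAULT JOIN source ON target.id = source.id;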
UPDATE statements also now support the following hints:
- General: ALLNODES, EARLY_MATERIALIZATION, LABEL, SKIP_STATISTICS, VERBATIM
- Join: SYNTACTIC_JOIN, DISTRIB, GBYTYPE, JTYPE, UTYPE
- Projection: PROJS, SKIP_PROJS
INFER_TABLE_DDL for native and external tables
The INFER_TABLE_DDL function inspects a data file in Parquet, ORC, or Avro format and returns a draft table definition. It supports native and external tables and replaces INFER_EXTERNAL_TABLE_DDL, which is now deprecated.
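For example, a sketch of generating a native-table definition from Parquet data (the path and parameter values are illustrative):
=> SELECT INFER_TABLE_DDL('/data/sales/*.parquet' USING PARAMETERS format='parquet', table_name='sales', table_type='native');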
Aliases for collection functions
For compatibility with other database platforms, ARRAY_COUNT is now an alias of APPLY_COUNT and ARRAY_LENGTH is now an alias of APPLY_COUNT_ELEMENTS. ARRAY_COUNT and ARRAY_LENGTH are no longer deprecated.
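For example, ARRAY_LENGTH returns the total number of elements, like APPLY_COUNT_ELEMENTS:
=> SELECT ARRAY_LENGTH(ARRAY[1,2,3]);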