New and changed in Vertica 24.1
- 1: Admin
- 2: Client connectivity
- 3: Client drivers
- 4: Containers and Kubernetes
- 5: Data Collector
- 6: Data load
- 7: Database management
- 8: Directed Queries
- 9: Eon Mode
- 10: Machine learning
- 11: Management Console
- 12: Tables
1 - Admin
Grafana dashboards
Vertica provides the following dashboards to visualize Prometheus metrics:
- Vertica Overview (Prometheus)
- Vertica Queries (Prometheus)
- Vertica Resource Management (Prometheus)
- Vertica Depot (Prometheus)
You can also download the source for each dashboard from the vertica/grafana-dashboards repository.
For details about Vertica and Prometheus, see HTTPS endpoints and Prometheus metrics.
2 - Client connectivity
Workload routing
For details on workload routing, see Workload routing.
User and role-based workload routing
You can now grant or revoke USAGE privileges on a workload. For details, see Workload routing.
Workload routing rule priorities
You can now set the priority of a routing rule when you create or alter it. Priorities are used when multiple routing rules apply to a user or their enabled roles. For details, see Workload routing.
View available workloads
You can now view the workloads available to your user and enabled roles with SHOW AVAILABLE WORKLOADS.
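The following sketch combines the grant and view features above; the workload and role names are hypothetical, and the exact priority clause syntax is covered in Workload routing:
=> GRANT USAGE ON WORKLOAD analytics TO reporting_role;
=> SHOW AVAILABLE WORKLOADS;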
3 - Client drivers
ADO.NET: Read-only filesystem support
To better support read-only filesystems like on Kubernetes, the following changes have been made to the ADO.NET driver's logging behavior:
- The ADO.NET driver no longer creates a configuration file if one doesn't exist.
- The ADO.NET driver no longer modifies or reads from the Windows registry.
- The ADO.NET driver now uses the configuration file found in either the home or project directories, with the former having priority.
- The changes to logging behavior made by the following functions now last only for the lifetime of the application, and their respective bool persist parameters (which previously wrote changes to the configuration file) no longer have any effect:
  - SetLogPath(String path)
  - SetLogNamespace(String lognamespace)
  - SetLogLevel(VerticaLogLevel loglevel)
ADO.NET: Windows installer restored
In Vertica 23.4, the ADO.NET driver was removed from the Windows client driver installer. This functionality has been restored. While you can still use a package or local .dll reference to use the driver, you can also use the installer if your use case depends on certain tools interacting with the driver, like TIBCO Spotfire.
For details, see Installing the ADO.NET client driver.
OAuth configuration improvements
The following parameters can now be set in an OAuth authentication record. For details on these parameters, see OAuth authentication parameters:
- auth_url
- token_url
- scope
- validate_hostname
This feature centralizes OAuth configuration in the server and replaces the oauthjsonconfig (JDBC) and OAuthJsonConfig (ODBC) parameters, which required these and other parameters to be specified on a per-client basis. In general, clients are now only required to specify the following to authenticate to Vertica with OAuth:
- Client secret (for confidential clients)
- An access or refresh token
For a list of OAuth parameters for each client, see JDBC connection properties and ODBC DSN connection properties.
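For example, a sketch that sets two of these parameters on an existing authentication record (the record name and endpoint URLs are hypothetical):
=> ALTER AUTHENTICATION v_oauth SET auth_url = 'https://idp.example.com/oauth/authorize', token_url = 'https://idp.example.com/oauth/token';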
4 - Containers and Kubernetes
v1 API version
The VerticaDB CRD uses the v1 API version. This API version manages deployments with vclusterops, a Go library that uses a high-level REST interface to administer the database with the Node Management Agent and HTTPS service. The v1beta1 API version is deprecated.
To upgrade your VerticaDB CRs to API version v1 with 24.1.0, you must migrate API versions. For details, see Upgrading Vertica on Kubernetes.
VerticaDB operator 2.0.0
The VerticaDB operator 2.0.0 is a cluster-scoped operator that can watch objects in any namespace within the cluster. This operator is compatible with both the v1 API version and the deprecated v1beta1 API version. In addition, the cluster administrator's workflow for granting user privileges with the 2.0.0 operator is streamlined.
For details about VerticaDB operator 2.0.0, see the following:
- Vertica images
- Installing the VerticaDB operator
- Upgrading the VerticaDB operator
- Upgrading Vertica on Kubernetes
- vertica/verticadb-operator Docker Hub repository.
Image updates
The minimal and full Vertica on Kubernetes images no longer include Administration tools (admintools) or static SSH keys that encrypt internal communications between pods.
For a list of all available images, see Vertica images and the Vertica Docker Hub repositories.
Changes to VerticaDB parameters
The following lists detail the changes to the VerticaDB custom resource definition parameters. For a complete list of the current parameters and annotations, see Custom resource definition parameters and Helm chart parameters.
New parameters
The following custom resource definition parameters were added:
- tlsNMASecret
- serviceAccountName
The following Helm chart parameters were added:
- serviceAccountAnnotations
- serviceAccountNameOverride
- reconcileConcurrency.verticaautoscaler
- reconcileConcurrency.verticadb
- reconcileConcurrency.eventtrigger
Removed parameters
The following deprecated parameters were removed:
- communal.kerberosServiceName
- communal.kerberosRealm
You can use communal.additionalConfig in place of these parameters.
Renamed parameters
The following table describes the renamed parameters:
| Previous name | New name |
|---|---|
| communal.hadoopConfig | hadoopConfig |
| httpNodePort | verticaHTTPNodePort |
| subclusters.isPrimary | subclusters.type |
| subclusters.nodePort | subclusters.clientNodePort |
| superuserPasswordSecret | passwordSecret |
Converted to annotations
Some parameters were converted to annotations. The following table describes the annotation conversions:
| Parameter name | Annotation name |
|---|---|
| ignoreClusterLease | vertica.com/ignore-cluster-lease |
| communal.includeUIDInPath | vertica.com/include-uid-in-path |
| restartTimeout | vertica.com/restart-timeout |
New annotations
The following annotations were added:
- vertica.com/run-nma-in-sidecar
- vertica.com/superuser-name
scrutinize diagnostics
You can run scrutinize to collect diagnostic information about your VerticaDB custom resource instance. This command creates a tar file that you can upload to Vertica support for troubleshooting assistance.
For details about scrutinize in a containerized environment, see scrutinize for VerticaDB.
Specify ServiceAccount in VerticaDB CR
The serviceAccountName parameter lets you associate a VerticaDB CR instance with a service account. For details, see Custom resource definition parameters.
Support Google Secret Manager
The VerticaDB operator can access Secrets that you store in Google Secret Manager. This lets you maintain a single location for the sensitive information that you use with Google Cloud and Vertica on Kubernetes.
For details, see Secrets management.
Support anyuid in RedHat OpenShift
Vertica supports the anyuid security context constraint (SCC) to enforce enhanced security measures. For details about Vertica and OpenShift, see Red Hat OpenShift integration.
Add custom UID and GID in VerticaDB CR
Set the runAsUser and runAsGroup parameters to use any value for the user ID (UID) or group ID (GID) with the VerticaDB CR. You must nest them under podSecurityContext.
For details, see Custom resource definition parameters.
Spread encryption enabled by default
The encryptSpreadComm custom resource definition (CRD) parameter was updated to enable Spread TLS by default. In addition, the parameter accepts new values to enable or clear spread encryption.
For details about the CRD parameter, see Custom resource definition parameters. For details about spread encryption, see Control channel Spread TLS.
Custom superuser name
You can set the superuser-name annotation to use a custom superuser name with your VerticaDB custom resource. For details, see Custom resource definition parameters.
5 - Data Collector
SET_DATA_COLLECTOR_POLICY
The SET_DATA_COLLECTOR_POLICY (using parameters) function sets individual policies for the Data Collector. It supersedes SET_DATA_COLLECTOR_POLICY and SET_DATA_COLLECTOR_TIME_POLICY.
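For example, a sketch of the named-parameter form; the component value is a standard Data Collector component, but the retention parameter names shown are assumptions based on the older positional function, so check the function reference for exact names:
=> SELECT SET_DATA_COLLECTOR_POLICY(USING PARAMETERS component = 'ResourceAcquisitions', memory_kb = 1000, disk_kb = 10000);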
6 - Data load
Automatic data load performance improvement
During automatic data loads, Vertica now divides the load into batches for parallel execution. By default, Vertica chooses a batch size based on the total data volume and the number of executor nodes. You can override the default using the BATCH_SIZE option with EXECUTE DATA LOADER.
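For example, a sketch with a hypothetical loader name; see EXECUTE DATA LOADER for the exact form of the BATCH_SIZE option:
=> EXECUTE DATA LOADER sales_loader WITH BATCH_SIZE 10;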
Execute data loader with specific files
You can now call EXECUTE DATA LOADER with specific files to be loaded so that the loader does not check all files in the location. This option is useful if your workflow uses a "push" model where a notifier detects new files and executes the loader directly.
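For example, a sketch where a notifier has detected a single new file (the loader name and path are hypothetical; see EXECUTE DATA LOADER for the exact option syntax):
=> EXECUTE DATA LOADER sales_loader WITH FILES 's3://mybucket/sales/2024-01-15.parquet';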
DATA_LOADER_EVENTS table
The DATA_LOADER_EVENTS system table records events for all data loaders, including the file path, whether the load succeeded, and how many times the load has been retried. When querying the table, you see only events for the data loaders you have access to.
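For example, to review events for the data loaders you can access:
=> SELECT * FROM DATA_LOADER_EVENTS;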
Iceberg tables support fallback name mapping
Vertica can now read Iceberg data even if the Parquet files do not encode field IDs. If the Parquet files do not contain the needed information, Vertica uses the fallback name mappings from the Iceberg metadata. This process is automatic and does not require any changes to the table definition in Vertica.
7 - Database management
LogRotate service
You can now automatically rotate log files with the LogRotate service. Previously, this functionality depended on the Linux logrotate tool. The LogRotate service removes this dependency.
For details, see Rotating log files.
Write performance for S3
By default, Vertica performs writes using a single thread, but a single write usually includes multiple files or parts of files. For writes to S3, you can use a larger thread pool to perform writes in parallel. This thread pool is used for all file writes to S3, including file exports and writes to communal storage.
The size of the thread pool is controlled by the ObjStoreUploadParallelism configuration parameter. Each node has a single thread pool used for all file writes. In general, one or two threads per concurrent writer produces good results.
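For example, a sketch that sets the pool size at the database level; the value is illustrative, following the one-or-two-threads-per-concurrent-writer guideline above:
=> ALTER DATABASE DEFAULT SET ObjStoreUploadParallelism = 8;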
Node Management Agent: Improved error reporting
Most Node Management Agent (NMA) endpoints now return errors that conform to the RFC7807 specification. For details on the NMA, see Node Management Agent.
HTTPS service
For details on the HTTPS service, see HTTPS service.
Improved error reporting
All HTTPS endpoints now return errors that conform to the RFC7807 specification.
Subscription states
The new /v1/subscriptions endpoint returns information on your subscriptions, including:
- node_name
- shard_name
- subscription_state: Subscription status (ACTIVE, PENDING, PASSIVE, or REMOVING)
- is_primary: Whether the subscription is a primary subscription
For example:
$ curl -i -sk -w "\n" --user dbadmin:my_password "https://127.0.0.1:$HTTP_SERVER_PORT_1/v1/subscriptions"
{
"subscriptions_list":
[
{
"node_name": "node08",
"shard_name": "segment0004",
"subscription_state": "ACTIVE",
"is_primary": false
},
...
]
}
Depot and data paths
The following storage location fields have been added to the /v1/nodes and /v1/nodes/node_name endpoints:
- data_path: A list of paths used to store USAGE 'DATA,TEMP' data.
- depot_path: The path used to store USAGE 'DEPOT' data.
For example:
$ curl -i -sk --user dbadmin:my_password https://vmart.example.com:8443/v1/nodes
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 648
Connection: keep-alive
Server: oatpp/1.3.0
{
"detail": null,
"node_list": [
{
"name": "v_vmart_node0001",
"node_id": 45035996273704982,
"address": "192.0.2.0",
"state": "UP",
"database": "VMart",
"is_primary": true,
"is_readonly": false,
"catalog_path": "\/scratch_b\/VMart\/v_vmart_node0001_catalog\/Catalog",
"data_path": [
"\/scratch_b\/VMart\/v_vmart_node0001_data"
],
"depot_path": "\/scratch_b\/VMart/my_depot",
"subcluster_name": "",
"last_msg_from_node_at": "2023-12-01T12:38:37.009443",
"down_since": null,
"build_info": "v24.1.0-20231126-36ee8c3de77d43c6ad7bbef252302977952ac9d6"
}
]
}
$ curl -i -sk --user dbadmin:my_password https://vmart.example.com:8443/v1/nodes/v_vmart_node0001/
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 648
Connection: keep-alive
Server: oatpp/1.3.0
{
"detail": null,
"node_list": [
{
"name": "v_vmart_node0001",
"node_id": 45035996273704982,
"address": "192.0.2.0",
"state": "UP",
"database": "VMart",
"is_primary": true,
"is_readonly": false,
"catalog_path": "\/scratch_b\/VMart\/v_vmart_node0001_catalog\/Catalog",
"data_path": [
"\/scratch_b\/VMart\/v_vmart_node0001_data"
],
"depot_path": "\/scratch_b\/VMart/my_depot",
"subcluster_name": "",
"last_msg_from_node_at": "2023-12-01T12:38:37.009443",
"down_since": null,
"build_info": "v24.1.0-20231126-36ee8c3de77d43c6ad7bbef252302977952ac9d6"
}
]
}
8 - Directed Queries
New status table and function
The DIRECTED_QUERY_STATUS system table records information about directed queries that have been executed, including how many times each has been executed. You can use the CLEAR_DIRECTED_QUERY_USAGE function to reset the counter for an individual directed query or for all of them.
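For example, a sketch assuming CLEAR_DIRECTED_QUERY_USAGE accepts an optional directed query name (the name shown is hypothetical):
=> SELECT * FROM DIRECTED_QUERY_STATUS;
=> SELECT CLEAR_DIRECTED_QUERY_USAGE('findEmployeesCityJobTitle_OPT');  -- one directed query
=> SELECT CLEAR_DIRECTED_QUERY_USAGE();                                 -- all directed queries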
9 - Eon Mode
Namespace support
Eon Mode databases now support namespaces. Namespaces are a collection of schemas and tables in your database that are grouped under a common name and segmented into the number of shards defined by that namespace. In Eon Mode databases, namespaces represent the top-level data structure in the Vertica object hierarchy. Each table and schema in the database belongs to a namespace.
By default, databases contain a single namespace, default_namespace, which is formed on database creation with the shard count specified during setup. You can create additional namespaces with the CREATE NAMESPACE statement, and drop them with DROP NAMESPACE. When running Vertica statements and functions, such as CREATE TABLE and CREATE SCHEMA, you must specify the namespace to which the objects belong or under which to create them. If no namespace is specified, Vertica assumes the table or schema to be a member of the default_namespace. For details about namespaces, including an extended example, see Managing namespaces.
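For example, a minimal sketch; the object names are hypothetical, and the SHARD COUNT and CASCADE clauses are assumptions, so see CREATE NAMESPACE and DROP NAMESPACE for exact syntax:
=> CREATE NAMESPACE airline SHARD COUNT 6;
=> CREATE SCHEMA airline.flights;
=> CREATE TABLE airline.flights.routes (route_id INT, origin VARCHAR(3));
=> CREATE TABLE t1 (id INT);  -- no namespace given: created in default_namespace
=> DROP NAMESPACE airline CASCADE;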
10 - Machine learning
MLSUPERVISOR role privileges
Users with the MLSUPERVISOR role can now import and export models using the IMPORT_MODELS and EXPORT_MODELS meta-functions.
Export and Import to UDFS locations
You can now import and export models to any supported file system or object store, such as Amazon S3 buckets and Google Cloud Storage object stores. For more information, see IMPORT_MODELS and EXPORT_MODELS.
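For example, a sketch that exports a model to an S3 bucket and imports it back (the bucket, schema, and model names are hypothetical):
=> SELECT EXPORT_MODELS('s3://mybucket/models', 'my_schema.my_kmeans_model');
=> SELECT IMPORT_MODELS('s3://mybucket/models/my_kmeans_model' USING PARAMETERS category = 'VERTICA_MODELS');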
11 - Management Console
S3 bucket requirement for create and revive
When you create or revive a database, you must specify an S3 bucket that was authorized when you deployed the CloudFormation template.
12 - Tables
ALTER TABLE...ADD COLUMN
You can now use ALTER TABLE to add more than one column, using one ADD COLUMN clause for each.
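For example, a sketch with a hypothetical table, assuming the ADD COLUMN clauses are comma-separated:
=> ALTER TABLE orders
     ADD COLUMN discount NUMERIC(8,2) DEFAULT 0,
     ADD COLUMN note VARCHAR(100);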