New features
This guide briefly describes the new features introduced in the most recent releases of Vertica and provides references to detailed information in the documentation set.
For known and fixed issues in the most recent release, see the Vertica Release Notes.
2 - Deprecated and removed functionality
Vertica retires functionality in two phases:
- Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.
- Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal (see History, below). Documentation that describes this functionality is removed, but remains in previous documentation versions.
Deprecated
The following functionality was deprecated and will be removed in future versions:
Release | Functionality | Notes
24.1 | Integration with the Linux logrotate utility | Vertica now has a native meta-function and associated timer service for manually and automatically rotating logs. For details, see Rotating log files.
24.1 | webhook.enable Helm chart parameter | You must have cluster administrator privileges to install the VerticaDB operator and grant user privileges. For details, see Installing the VerticaDB operator.
24.1 | OAuth connection properties for ODBC and JDBC: OAuthRefreshToken/oauthrefreshtoken, OAuthClientSecret/oauthclientsecret | To simplify OAuth configuration, these properties will be removed in a future version of the drivers. Instead, use the OAuthAccessToken (ODBC) and oauthaccesstoken (JDBC) properties by themselves to configure OAuth.
24.1 | OAuth connection properties for JDBC: oauthtruststorepath, oauthtruststorepassword | These connection properties configure custom CA certificates used when the driver connects to the identity provider with TLS. When they are removed, the driver will no longer contact the IDP, making these properties unnecessary.
Removed
Release | Functionality | Notes
24.1.0 | webhook.caBundle Helm chart parameter | To add a CA bundle, use webhook.tlsSecret.
24.1.0 | serviceAccountNameOverride Helm chart parameter | No longer required. The cluster administrator deploys the operator and grants privileges to namespaces.
24.1.0 | skipRoleAndRoleBindingCreation Helm chart parameter | No longer required. The cluster administrator deploys the operator and grants privileges to namespaces.
24.1.0 | spec.communal.kerberosRealm VerticaDB custom resource definition parameter | Use spec.communal.additionalConfig instead.
24.1.0 | spec.communal.kerberosServiceName VerticaDB custom resource definition parameter | Use spec.communal.additionalConfig instead.
24.1.0 | Vertica Kubernetes (No keys) image | For a list of images, see Vertica images.
24.1.0 | Vertica Kubernetes admintools support | Vertica images no longer include the admintools Python client. For a list of images, see Vertica images.
24.1.0 | The following log search tokenizers: v_txtindex.AdvancedLogTokenizer, v_txtindex.BasicLogTokenizer, v_txtindex.WhitespaceLogTokenizer, and logWordITokenizerPositionFactory and logWordITokenizerFactory from the v_txtindex.logSearchLib library |
History
The following functionality or support has been deprecated or removed as indicated:
Functionality | Component | Deprecated in: | Removed in:
The following JDBC and ODBC connection properties: OAuthRefreshToken/oauthrefreshtoken, OAuthClientSecret/oauthclientsecret, oauthtruststorepath (JDBC only), oauthtruststorepassword (JDBC only) | Client drivers | 24.1.0 |
Integration with the Linux logrotate utility | Server | 24.1.0 |
v1beta1 VerticaDB custom resource definition API version | Kubernetes | 23.4.0 |
serviceAccountNameOverride Helm chart parameter | Kubernetes | 23.4.0 | 24.1.0
skipRoleAndRoleBindingCreation Helm chart parameter | Kubernetes | 23.4.0 | 24.1.0
spec.communal.kerberosRealm VerticaDB custom resource definition parameter | Kubernetes | 23.4.0 | 24.1.0
spec.communal.kerberosServiceName VerticaDB custom resource definition parameter | Kubernetes | 23.4.0 | 24.1.0
spec.temporarySubclusterRouting VerticaDB custom resource definition parameter | Kubernetes | 23.4.0 | 24.1.0
Vertica Kubernetes (No keys) image | Kubernetes | 23.4.0 | 24.1.0
Vertica Kubernetes admintools support | Kubernetes | 23.4.0 | 24.1.0
Oracle Enterprise Linux 6.x (Red Hat compatible kernels only) | Supported platforms | 23.4.0 | 24.1.0
Oracle Enterprise Linux 7.x (Red Hat compatible kernels only) | Supported platforms | 23.4.0 |
Red Hat Enterprise Linux 7.x (RHEL 7) support | Supported platforms | 23.4.0 | 24.1.0
The following log search tokenizers: v_txtindex.AdvancedLogTokenizer, v_txtindex.BasicLogTokenizer, v_txtindex.WhitespaceLogTokenizer, and logWordITokenizerPositionFactory and logWordITokenizerFactory from the v_txtindex.logSearchLib library | Server | 23.4.0 | 24.1.0
DHParams | Server | 23.3.0 |
OAuthJsonConfig and oauthjsonconfig | Client drivers | 23.3.0 |
Visual Studio 2012, 2013, and 2015 plug-ins and the Microsoft Connectivity Pack | Client drivers | 12.0.4 | 23.3.0
ADO.NET driver support for .NET 3.5 | Client drivers | 12.0.3 |
prometheus.createServiceMonitor Helm chart parameter | Kubernetes | 12.0.3 |
webhook.caBundle Helm chart parameter | Kubernetes | 12.0.3 | 24.1.0
cert-manager for Helm chart TLS configuration; use the webhook.certSource parameter to generate certificates internally or provide custom certificates (see Helm chart parameters) | Kubernetes | 12.0.2 | 23.3.0
The following Kafka user-defined session parameters: | Kafka | 12.0.3 |
vsql support for macOS 10.12-10.14 | Client drivers | | 12.0.3
CA bundles | Security | 12.0.2 |
The following parameters for CREATE NOTIFIER and ALTER NOTIFIER: TLSMODE, CA BUNDLE, CERTIFICATE | Security | 12.0.2 |
The TLSMODE PREFER parameter for CONNECT TO VERTICA | Security | 12.0.2 |
JDBC 4.0 and 4.1 support | Client drivers | 12.0.2 | 23.4.0
Support for Visual Studio 2008 and 2010 plug-ins | Client drivers | 12.0.2 | 12.0.3
Internet Explorer 11 support | Management Console | | 12.0.1
ODBC support for macOS 10.12-10.14 | Client drivers | | 12.0
The following ODBC/JDBC OAuth parameters: OAuthAccessToken/oauthaccesstoken, OAuthRefreshToken/oauthrefreshtoken, OAuthClientId/oauthclientid, OAuthClientSecret/oauthclientsecret, OAuthTokenUrl/oauthtokenurl, OAuthDiscoveryUrl/oauthdiscoveryurl, OAuthScope/oauthscope | Client drivers | 12.0 |
hive_partition_cols parameter for PARQUET and ORC parsers | Server | 12.0 |
INFER_EXTERNAL_TABLE_DDL function | Server | 11.1.1 |
Admission Controller Webhook image | Kubernetes | 11.0.1 | 11.0.2
Admission Controller Helm chart | Kubernetes | 11.0.1 |
Shared DATA and DATA,TEMP storage locations | Server | 11.0.1 |
DESIGN_ALL option for EXPORT_CATALOG() | Server | 11.0 |
HDFSUseWebHDFS configuration parameter and LibHDFS++ | Server | 11.0 |
INFER_EXTERNAL_TABLE_DDL (path, table) syntax | Server | 11.0 | 11.1.1
AWS library functions: AWS_GET_CONFIG, AWS_SET_CONFIG, S3EXPORT, S3EXPORT_PARTITION | Server | 11.0 | 12.0
Vertica Spark connector V1 | Client | 11.0 |
admintools db_add_subcluster --is-secondary argument | Server | 11.0 |
Red Hat Enterprise Linux/CentOS 6.x | Server | 10.1.1 | 11.0
STRING_TO_ARRAY(array,delimiter) syntax | Server | 10.1.1 |
Vertica JDBC API com.vertica.jdbc.kv package | Client Drivers | 10.1 |
ARRAY_CONTAINS function | Server | 10.1 |
Client-server TLS parameters (SSLCertificate, SSLPrivateKey, SSLCA, EnableSSL); LDAP authentication parameters (tls_key, tls_cert, tls_cacert, tls_reqcert); LDAPLink and LDAPLink dry-run parameters (LDAPLinkTLSCACert, LDAPLinkTLSCADir, LDAPLinkStartTLS, LDAPLinkTLSReqCert) | Server | 10.1 | 11.0
MD5 hashing algorithm for user passwords | Server | 10.1 |
Reading structs from ORC files as expanded columns | Server | 10.1 | 11.0
vbr configuration section [S3] and S3 configuration parameters | Server | 10.1 |
flatten_complex_type_nulls parameter to the ORC and Parquet parsers | Server | 10.1 | 11.0
System table WOS_CONTAINER_STORAGE | Server | 10.0.1 | 11.0.2
skip_strong_schema_match parameter to the Parquet parser | Server | 10.0.1 | 10.1
Specifying segmentation on specific nodes | Server | 10.0.1 |
DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE | Server | 10.0.1 | 11.0.1
Meta-function ANALYZE_CORRELATIONS | Server | 10.0 |
Eon Mode meta-function BACKGROUND_DEPOT_WARMING | Server | 10.0 |
Reading structs from Parquet files as expanded columns | Server | 10.0 | 10.1
Eon Mode meta-functions SET_DEPOT_PIN_POLICY and CLEAR_DEPOT_PIN_POLICY | Server | 10.0 | 10.1
vbr configuration parameter SnapshotEpochLagFailureThreshold | Server | 10.0 |
Array-specific functions: array_min, array_max, array_sum, array_avg | Server | 10.0 | 10.1
DMLTargetDirect configuration parameter | Server | 10.0 |
HiveMetadataCacheSizeMB configuration parameter | Server | 10.0 | 10.1
MoveOutInterval | Server | 10.0 |
MoveOutMaxAgeTime | Server | 10.0 |
MoveOutSizePct | Server | 10.0 |
Windows 7 | Client Drivers | | 9.3.1
DATABASE_PARAMETERS admintools command | Server | 9.3.1 |
Write-optimized store (WOS) | Server | 9.3 | 10.0
7.2_upgrade vbr task | Server | 9.3 |
DropFailedToActivateSubscriptions configuration parameter | Server | 9.3 | 10.0
--skip-fs-checks | Server | 9.2.1 |
32-bit ODBC Linux and OS X client drivers | Client | 9.2.1 | 9.3
Vertica Python client | Client | 9.2.1 | 10.0
macOS 10.11 | Client | 9.2.1 |
DisableDirectToCommunalStorageWrites configuration parameter | Server | 9.2.1 |
CONNECT_TO_VERTICA meta-function | Server | 9.2.1 | 9.3
ReuseDataConnections configuration parameter | Server | 9.2.1 | 9.3
Network interfaces (superseded by network addresses) | Server | 9.2 |
Database branching | Server | 9.2 | 10.0
KERBEROS_HDFS_CONFIG_CHECK meta-function | Server | 9.2 |
Java 5 support | JDBC Client | 9.2 | 9.2.1
Configuration parameters for enabling projections with aggregated data: EnableExprsInProjections, EnableGroupByProjections, EnableTopKProjections, EnableUDTProjections | Server | 9.2 |
DISABLE_ELASTIC_CLUSTER() | Server | 9.1.1 | 11.0
eof_timeout parameter of KafkaSource | Server | 9.1.1 | 9.2
Windows Server 2012 | Server | 9.1.1 |
Debian 7.6, 7.7 | Client driver | 9.1.1 | 9.2.1
IdolLib function library | Server | 9.1 | 9.1.1
SSL certificates that contain weak CA signatures such as MD5 | Server | 9.1 |
HCatalogConnectorUseLibHDFSPP configuration parameter | Server | 9.1 |
S3 UDSource | Server | 9.1 | 9.1.1
HCatalog Connector support for WebHCat | Server | 9.1 |
partition_key column in system tables STRATA and STRATA_STRUCTURES | Server | 9.1 | 10.0.1
Vertica Pulse | Server | 9.0.1 | 9.1.1
Support for SQL Server 2008 | Server | 9.0.1 | 9.0.1
SUMMARIZE_MODEL meta-function | Server | 9.0 | 9.1
RestrictSystemTable parameter | Server | 9.0.1 |
S3EXPORT multipart parameter | Server | 9.0 |
EnableStorageBundling configuration parameter | Server | 9.0 |
Machine Learning for Predictive Analytics package parameter key_columns for data preparation functions | Server | 9.0 | 9.0.1
DROP_PARTITION meta-function, superseded by DROP_PARTITIONS | Server | 9.0 |
Machine Learning for Predictive Analytics package parameter owner | Server | 8.1.1 | 9.0
Backup and restore --setupconfig command | Server | 8.1 | 9.1.1
SET_RECOVER_BY_TABLE meta-function. Do not disable recovery by table. | Server | 8.0.1 |
Column rebalance_projections_status.duration_sec | Server | 8.0 |
HDFS Connector | Server | 8.0 | 9.0
Prejoin projections | Server | 8.0 | 9.2
Administration Tools option --compat21 | Server | 7.2.1 |
admin_tools -t config_nodes | Server | 7.2 | 11.0.1
Projection buddies with inconsistent sort order | Server | 7.2 | 9.0
backup.sh | Server | 7.2 | 9.0
restore.sh | Server | 7.2 | 9.0
copy_vertica_database.sh | Server | 7.2 |
JavaClassPathForUDx configuration parameter | Server | 7.1 |
ADD_LOCATION meta-function | Server | 7.1 |
bwlimit configuration parameter | Server | 7.1 | 9.0
vbr configuration parameters retryCount and retryDelay | Server | 7.1 | 11.0
EXECUTION_ENGINE_PROFILES counters: file handles, memory allocated | Server | 7.0 | 9.3
EXECUTION_ENGINE_PROFILES counter memory reserved | Server | 7.0 |
MERGE_PARTITIONS() meta-function | Server | 7.0 |
krb5 client authentication method (use the Kerberos gss method for client authentication instead) | All clients | 7.0 |
range-segmentation-clause | Server | 6.1.1 | 9.2
scope parameter of meta-function CLEAR_PROFILING | Server | 6.1 |
Projection creation type IMPLEMENT_TEMP_DESIGN | Server, clients | 6.1 |
3 - New and changed in Vertica 24.1
New features and changes in Vertica 24.1.
3.1 - Admin
New features for administration.
Grafana dashboards
Vertica provides the following dashboards to visualize Prometheus metrics:
You can also download the source for each dashboard from the vertica/grafana-dashboards repository.
For details about Vertica and Prometheus, see HTTPS endpoints and Prometheus metrics.
3.2 - Client connectivity
New features for client connectivity.
Workload routing
For details on workload routing, see Workload routing.
User and role-based workload routing
You can now grant or revoke USAGE privileges on a workload. For details, see Workload routing.
Workload routing rule priorities
You can now set the priority of a routing rule when you create or alter it. Priorities are used when multiple routing rules apply to a user or their enabled roles. For details, see Workload routing.
View available workloads
You can now view the workloads available to your user and enabled roles with SHOW AVAILABLE WORKLOADS.
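A minimal sketch of the statements described above (the workload and role names are hypothetical, and routing-rule syntax should be checked against the Workload routing documentation):

```sql
-- Grant a role the right to use a workload (names are illustrative):
GRANT USAGE ON WORKLOAD reporting TO analyst_role;

-- List the workloads available to your user and enabled roles:
SHOW AVAILABLE WORKLOADS;
```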
3.3 - Client drivers
New features for client drivers.
ADO.NET: Read-only filesystem support
To better support read-only filesystems like on Kubernetes, the following changes have been made to the ADO.NET driver's logging behavior:
- The ADO.NET driver no longer creates a configuration file if one doesn't exist.
- The ADO.NET driver no longer modifies or reads from the Windows registry.
- The ADO.NET driver now uses the configuration file found in either the home or project directories, with the former having priority.
- The changes to logging behavior made by the following functions now last only for the lifetime of the application, and their respective bool persist parameters (which previously wrote changes to the configuration file) no longer have any effect:
  - SetLogPath(String path)
  - SetLogNamespace(String lognamespace)
  - SetLogLevel(VerticaLogLevel loglevel)
ADO.NET: Windows installer restored
In Vertica 23.4, the ADO.NET driver was removed from the Windows client driver installer. This functionality has been restored. While you can still reference the driver with a package or a local .dll, you can also use the installer if your use case depends on certain tools that interact with the driver, such as TIBCO Spotfire.
For details, see Installing the ADO.NET client driver.
OAuth configuration improvements
The following parameters can now be set in an OAuth authentication record. For details on these parameters, see OAuth authentication parameters:
auth_url
token_url
scope
validate_hostname
This feature centralizes OAuth configuration in the server and replaces the oauthjsonconfig (JDBC) and OAuthJsonConfig (ODBC) parameters, which required these and other parameters to be specified on a per-client basis. In general, clients are now only required to specify the following to authenticate to Vertica with OAuth:
- Client secret (for confidential clients)
- An access or refresh token
For a list of OAuth parameters for each client, see JDBC connection properties and ODBC DSN connection properties.
3.4 - Containers and Kubernetes
New features for containerized Vertica.
v1 API version
The VerticaDB CRD uses the v1 API version. This API version manages deployments with vclusterops, a Go library that uses a high-level REST interface to administer the database with the Node Management Agent and HTTPS service. The v1beta1 API version is deprecated.
To upgrade your VerticaDB CRs to API version v1 with 24.1.0, you must migrate API versions. For details, see Upgrading Vertica on Kubernetes.
VerticaDB operator 2.0.0
The VerticaDB operator 2.0.0 is a cluster-scoped operator that can watch objects in any namespace within the cluster. This operator is compatible with both the v1 API version and the deprecated v1beta1 API version. In addition, the cluster administrator's workflow for granting user privileges with the 2.0.0 operator is streamlined.
For details about VerticaDB operator 2.0.0, see the following:
Image updates
The minimal and full Vertica on Kubernetes images no longer include Administration tools (admintools) or static SSH keys that encrypt internal communications between pods.
For a list of all available images, see Vertica images and the Vertica Docker Hub repositories.
Changes to VerticaDB parameters
The following lists detail the changes to the VerticaDB custom resource definition parameters. For a complete list of the current parameters and annotations, see Custom resource definition parameters and Helm chart parameters.
New parameters
The following custom resource definition parameters were added:
tlsNMASecret
serviceAccountName
The following Helm chart parameters were added:
serviceAccountAnnotations
serviceAccountNameOverride
reconcileConcurrency.verticaautoscaler
reconcileConcurrency.verticadb
reconcileConcurrency.eventtrigger
Removed parameters
The following deprecated parameters were removed:
communal.kerberosServiceName
communal.kerberosRealm
You can use communal.additionalConfig in place of these parameters.
Renamed parameters
The following table describes the renamed parameters:
Previous name | New name
communal.hadoopConfig | hadoopConfig
httpNodePort | verticaHTTPNodePort
subclusters.isPrimary | subclusters.type
subclusters.nodePort | subclusters.clientNodePort
superuserPasswordSecret | passwordSecret
Converted to annotations
Some parameters were converted to annotations. The following table describes the annotation conversions:
Parameter name | Annotation name
ignoreClusterLease | vertica.com/ignore-cluster-lease
communal.includeUIDInPath | vertica.com/include-uid-in-path
restartTimeout | vertica.com/restart-timeout
New annotations
The following annotations were added:
vertica.com/run-nma-in-sidecar
vertica.com/superuser-name
scrutinize diagnostics
You can run scrutinize to collect diagnostic information about your VerticaDB custom resource instance. This command creates a tar file that you can upload to Vertica support for troubleshooting assistance.
For details about scrutinize in a containerized environment, see scrutinize for VerticaDB.
Specify ServiceAccount in VerticaDB CR
The serviceAccountName parameter lets you associate a VerticaDB CR instance with a service account. For details, see Custom resource definition parameters.
Support Google Secret Manager
The VerticaDB operator can access Secrets that you store in Google Secret Manager. This lets you maintain a single location for the sensitive information that you use with Google Cloud and Vertica on Kubernetes.
For details, see Secrets management.
Support anyuid in RedHat OpenShift
Vertica supports the anyuid security context constraint (SCC) to enforce enhanced security measures. For details about Vertica and OpenShift, see Red Hat OpenShift integration.
Add custom UID and GID in VerticaDB CR
Set the runAsUser and runAsGroup parameters to use any value for the user ID (UID) or group ID (GID) with the VerticaDB CR. You must nest them under podSecurityContext.
For details, see Custom resource definition parameters.
Spread encryption enabled by default
The encryptSpreadComm custom resource definition (CRD) parameter was updated to enable Spread TLS by default. In addition, the parameter accepts new values to enable or clear spread encryption.
For details about the CRD parameter, see Custom resource definition parameters. For details about spread encryption, see Control channel Spread TLS.
Custom superuser name
You can set the superuser-name annotation to use a custom superuser name with your VerticaDB custom resource. For details, see Custom resource definition parameters.
3.5 - Data Collector
New features related to Data Collector.
SET_DATA_COLLECTOR_POLICY
The SET_DATA_COLLECTOR_POLICY (using parameters) function sets individual policies for the Data Collector. It supersedes SET_DATA_COLLECTOR_POLICY and SET_DATA_COLLECTOR_TIME_POLICY.
3.6 - Data load
New features for loading data.
During automatic data loads, Vertica now divides the load into batches for parallel execution. By default, Vertica chooses a batch size based on the total data volume and the number of executor nodes. You can override the default using the BATCH_SIZE option with EXECUTE DATA LOADER.
Execute data loader with specific files
You can now call EXECUTE DATA LOADER with specific files to be loaded so that the loader does not check all files in the location. This option is useful if your workflow uses a "push" model, where a notifier detects new files and executes the loader directly.
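A sketch of the two loading options above. The loader name and file path are hypothetical, and the exact option syntax is an assumption to be confirmed against the EXECUTE DATA LOADER reference:

```sql
-- Override the automatically chosen batch size (option syntax assumed):
EXECUTE DATA LOADER sales_loader WITH BATCH_SIZE 8;

-- "Push" model: load only the file a notifier just reported,
-- instead of scanning the whole location (path is illustrative):
EXECUTE DATA LOADER sales_loader WITH FILES 's3://landing/2024/01/orders.parquet';
```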
DATA_LOADER_EVENTS table
The DATA_LOADER_EVENTS system table records events for all data loaders, including path, whether the load succeeded, and how many times it has been retried. When querying the table, you see only events for the data loaders you have access to.
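For example, a quick look at recent loader activity (no column names beyond those mentioned above are assumed):

```sql
-- Returns only events for the data loaders you have access to:
SELECT * FROM data_loader_events;
```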
Iceberg tables support fallback name mapping
Vertica can now read Iceberg data even if the Parquet files do not encode field IDs. If the Parquet files do not contain the needed information, Vertica uses the fallback name mappings from the Iceberg metadata. This process is automatic and does not require any changes to the table definition in Vertica.
3.7 - Database management
New features for database management.
LogRotate service
You can now automatically rotate log files with the LogRotate service. Previously, this functionality depended on the Linux logrotate tool. The LogRotate service removes this dependency.
For details, see Rotating log files.
Parallel writes to S3
By default, Vertica performs writes using a single thread, but a single write usually includes multiple files or parts of files. For writes to S3, you can use a larger thread pool to perform writes in parallel. This thread pool is used for all file writes to S3, including file exports and writes to communal storage.
The size of the thread pool is controlled by the ObjStoreUploadParallelism configuration parameter. Each node has a single thread pool used for all file writes. In general, one or two threads per concurrent writer produces good results.
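A minimal sketch of raising the pool size, assuming the usual ALTER DATABASE ... SET PARAMETER form for configuration parameters:

```sql
-- Allow up to four parallel S3 write threads per node
-- (roughly one or two threads per concurrent writer):
ALTER DATABASE DEFAULT SET PARAMETER ObjStoreUploadParallelism = 4;
```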
Node Management Agent: Improved error reporting
Most Node Management Agent (NMA) endpoints now return errors that conform to the RFC7807 specification. For details on the NMA, see Node Management Agent.
HTTPS service
For details on the HTTPS service, see HTTPS service.
Improved error reporting
All HTTPS endpoints now return errors that conform to the RFC7807 specification.
Subscription states
The new /v1/subscriptions endpoint returns information on your subscriptions, including:
- node_name
- shard_name
- subscription_state: Subscription status (ACTIVE, PENDING, PASSIVE, or REMOVING)
- is_primary: Whether the subscription is a primary subscription
For example:
$ curl -i -sk -w "\n" --user dbadmin:my_password "https://127.0.0.1:$HTTP_SERVER_PORT_1/v1/subscriptions"
{
"subscriptions_list":
[
{
"node_name": "node08",
"shard_name": "segment0004",
"subscription_state": "ACTIVE",
"is_primary": false
},
...
]
}
Depot and data paths
The following storage location fields have been added to the /v1/nodes and /v1/nodes/node_name endpoints:
- data_path: A list of paths used to store USAGE 'DATA,TEMP' data.
- depot_path: The path used to store USAGE 'DEPOT' data.
For example:
$ curl -i -sk --user dbadmin:my_password https://vmart.example.com:8443/v1/nodes
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 648
Connection: keep-alive
Server: oatpp/1.3.0
{
"detail": null,
"node_list": [
{
"name": "v_vmart_node0001",
"node_id": 45035996273704982,
"address": "192.0.2.0",
"state": "UP",
"database": "VMart",
"is_primary": true,
"is_readonly": false,
"catalog_path": "\/scratch_b\/VMart\/v_vmart_node0001_catalog\/Catalog",
"data_path": [
"\/scratch_b\/VMart\/v_vmart_node0001_data"
],
"depot_path": "\/scratch_b\/VMart/my_depot",
"subcluster_name": "",
"last_msg_from_node_at": "2023-12-01T12:38:37.009443",
"down_since": null,
"build_info": "v24.1.0-20231126-36ee8c3de77d43c6ad7bbef252302977952ac9d6"
}
]
}
$ curl -i -sk --user dbadmin:my_password https://vmart.example.com:8443/v1/nodes/v_vmart_node0001/
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 648
Connection: keep-alive
Server: oatpp/1.3.0
{
"detail": null,
"node_list": [
{
"name": "v_vmart_node0001",
"node_id": 45035996273704982,
"address": "192.0.2.0",
"state": "UP",
"database": "VMart",
"is_primary": true,
"is_readonly": false,
"catalog_path": "\/scratch_b\/VMart\/v_vmart_node0001_catalog\/Catalog",
"data_path": [
"\/scratch_b\/VMart\/v_vmart_node0001_data"
],
"depot_path": "\/scratch_b\/VMart/my_depot",
"subcluster_name": "",
"last_msg_from_node_at": "2023-12-01T12:38:37.009443",
"down_since": null,
"build_info": "v24.1.0-20231126-36ee8c3de77d43c6ad7bbef252302977952ac9d6"
}
]
}
3.8 - Directed Queries
New features related to directed queries.
New status table and function
The DIRECTED_QUERY_STATUS system table records information about directed queries that have been executed, including how many times each has been executed. You can use the CLEAR_DIRECTED_QUERY_USAGE function to reset the counter for an individual directed query or for all of them.
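A minimal sketch. The argument list accepted by CLEAR_DIRECTED_QUERY_USAGE is an assumption; check its reference page for the form that targets an individual directed query:

```sql
-- See how often each directed query has been executed:
SELECT * FROM directed_query_status;

-- Reset the usage counters:
SELECT CLEAR_DIRECTED_QUERY_USAGE();
```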
3.9 - Eon Mode
New features for Eon Mode
Namespace support
Eon Mode databases now support namespaces. A namespace is a collection of schemas and tables in your database that are grouped under a common name and segmented into the number of shards defined by that namespace. In Eon Mode databases, namespaces represent the top-level data structure in the Vertica object hierarchy. Each table and schema in the database belongs to a namespace.
By default, databases contain a single namespace, default_namespace, which is formed on database creation with the shard count specified during setup. You can create additional namespaces with the CREATE NAMESPACE statement and drop them with DROP NAMESPACE. When running Vertica statements and functions, such as CREATE TABLE and CREATE SCHEMA, you must specify the namespace to which the objects belong or under which to create them. If no namespace is specified, Vertica assumes the table or schema is a member of default_namespace. For details about namespaces, including an extended example, see Managing namespaces.
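A sketch of the workflow described above, with hypothetical object names; see CREATE NAMESPACE for the exact options (such as setting a shard count):

```sql
-- Create a namespace, then create objects under it using
-- namespace-qualified names:
CREATE NAMESPACE airline;
CREATE SCHEMA airline.ops;
CREATE TABLE airline.ops.routes (origin VARCHAR(3), dest VARCHAR(3));

-- Without an explicit namespace, objects belong to default_namespace:
CREATE TABLE flights (id INT);
```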
3.10 - Machine learning
New features related to machine learning.
MLSUPERVISOR role privileges
Users with the MLSUPERVISOR role can now import and export models using the IMPORT_MODELS and EXPORT_MODELS meta-functions.
Export and Import to UDFS locations
You can now import and export models to any supported file system or object store, such as Amazon S3 buckets and Google Cloud Storage object stores. For more information, see IMPORT_MODELS and EXPORT_MODELS.
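For example, exporting to and importing from an S3 bucket. The bucket and model names are hypothetical, and the full parameter lists are in the EXPORT_MODELS and IMPORT_MODELS reference pages:

```sql
SELECT EXPORT_MODELS('s3://ml-models/exports', 'public.my_kmeans_model');
SELECT IMPORT_MODELS('s3://ml-models/exports/my_kmeans_model');
```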
3.11 - Management Console
New features for Management Console.
S3 bucket requirement for create and revive
When you create or revive a database, you must specify an S3 bucket that was authorized when you deployed the CloudFormation template.
For details, see the following:
3.12 - Tables
New features related to tables and their definition.
ALTER TABLE...ADD COLUMN
You can now use ALTER TABLE to add more than one column, using one ADD COLUMN clause for each.
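For example, adding two columns in a single statement, with one ADD COLUMN clause for each (table and column names are illustrative):

```sql
ALTER TABLE public.orders
    ADD COLUMN discount NUMERIC(8,2) DEFAULT 0,
    ADD COLUMN coupon_code VARCHAR(20);
```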