New features

This guide briefly describes the new features introduced in the most recent releases of Vertica and provides references to detailed information in the documentation set.

For known and fixed issues in the most recent release, see the Vertica Release Notes.

2 - Deprecated and removed functionality

Vertica retires functionality in two phases:

  • Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are fully functional, and the published release documentation announces the deprecation on this page. When you use deprecated functionality, Vertica may return informational messages about its pending removal.

  • Removed: Vertica removes the feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page records the removal (see History, below). Documentation for removed functionality is deleted from the current release but remains available in previous documentation versions.

Deprecated

The following functionality was deprecated and will be removed in future versions:

Release Functionality Notes
24.2 OAuth2JITClient security parameter. Use OAuth2JITRolesClaimName instead.
24.2 The following Helm chart parameters:

  • logging.maxFileSize
  • logging.maxFileAge
  • logging.filePath
  • logging.maxFileRotation

Removed

History

The following functionality or support has been deprecated or removed as indicated:

Functionality Component Deprecated in: Removed in:
OAuth2JITClient security parameter. Server 24.2.0

The following Helm chart parameters:

  • logging.maxFileSize
  • logging.maxFileAge
  • logging.filePath
  • logging.maxFileRotation
Kubernetes 24.2.0

The following JDBC and ODBC connection properties:

  • OAuthRefreshToken/oauthrefreshtoken
  • OAuthClientSecret/oauthclientsecret
  • oauthtruststorepath (JDBC only)
  • oauthtruststorepassword (JDBC only)
Client drivers 24.1.0
Integration with the Linux logrotate utility Server 24.1.0
v1beta1 VerticaDB custom resource definition API version Kubernetes 23.4.0
serviceAccountNameOverride Helm chart parameter Kubernetes 23.4.0 24.1.0
skipRoleAndRoleBindingCreation Helm chart parameter Kubernetes 23.4.0 24.1.0
spec.communal.kerberosRealm VerticaDB custom resource definition parameter Kubernetes 23.4.0 24.1.0
spec.communal.kerberosServiceName VerticaDB custom resource definition parameter Kubernetes 23.4.0 24.1.0
spec.temporarySubclusterRouting VerticaDB custom resource definition parameter Kubernetes 23.4.0 24.1.0
Vertica Kubernetes (No keys) image Kubernetes 23.4.0 24.1.0
Vertica Kubernetes admintools support Kubernetes 23.4.0 24.1.0
Oracle Enterprise Linux 6.x (Red Hat compatible kernels only) Supported platforms 23.4.0 24.1.0
Oracle Enterprise Linux 7.x (Red Hat compatible kernels only) Supported platforms 23.4.0
Red Hat Enterprise Linux 7.x (RHEL 7) support Supported platforms 23.4.0 24.1.0

The following log search tokenizers:

  • v_txtindex.AdvancedLogTokenizer
  • v_txtindex.BasicLogTokenizer
  • v_txtindex.WhitespaceLogTokenizer
  • logWordITokenizerPositionFactory and logWordITokenizerFactory from the v_txtindex.logSearchLib library
Server 23.4.0 24.1.0
DHParams Server 23.3.0
OAuthJsonConfig and oauthjsonconfig Client drivers 23.3.0
Visual Studio 2012, 2013, and 2015 plug-ins and the Microsoft Connectivity Pack Client drivers 12.0.4 23.3.0
ADO.NET driver support for .NET 3.5 Client drivers 12.0.3
prometheus.createServiceMonitor Helm chart parameter Kubernetes 12.0.3
webhook.caBundle Helm chart parameter Kubernetes 12.0.3 24.1.0
cert-manager for Helm chart TLS configuration Kubernetes 12.0.2 23.3.0
Use webhook.certSource parameter to generate certificates internally or provide custom certificates. See Helm chart parameters. Kubernetes 12.0.2

The following Kafka user-defined session parameters:

  • kafka_SSL_CA
  • kafka_SSL_Certificate
  • kafka_SSL_PrivateKey_Secret
  • kafka_SSL_PrivateKeyPassword_secret
  • kafka_Enable_SSL
Kafka 12.0.3
vsql support for macOS 10.12-10.14 Client drivers 12.0.3
CA bundles Security 12.0.2

The following parameters for CREATE NOTIFIER and ALTER NOTIFIER:

  • TLSMODE
  • CA BUNDLE
  • CERTIFICATE
Security 12.0.2
The TLSMODE PREFER parameter for CONNECT TO VERTICA. Security 12.0.2
JDBC 4.0 and 4.1 support Client drivers 12.0.2 23.4.0
Support for Visual Studio 2008 and 2010 plug-ins Client drivers 12.0.2 12.0.3
Internet Explorer 11 support Management Console 12.0.1
ODBC support for macOS 10.12-10.14 Client drivers 12.0

The following ODBC/JDBC OAuth parameters:

  • OAuthAccessToken/oauthaccesstoken
  • OAuthRefreshToken/oauthrefreshtoken
  • OAuthClientId/oauthclientid
  • OAuthClientSecret/oauthclientsecret
  • OAuthTokenUrl/oauthtokenurl
  • OAuthDiscoveryUrl/oauthdiscoveryurl
  • OAuthScope/oauthscope
Client drivers 12.0
hive_partition_cols parameter for PARQUET and ORC parsers Server 12.0

INFER_EXTERNAL_TABLE_DDL function Server 11.1.1
Admission Controller Webhook image Kubernetes 11.0.1 11.0.2
Admission Controller Helm chart Kubernetes 11.0.1
Shared DATA and DATA,TEMP storage locations Server 11.0.1
DESIGN_ALL option for EXPORT_CATALOG() Server 11.0
HDFSUseWebHDFS configuration parameter and LibHDFS++ Server 11.0
INFER_EXTERNAL_TABLE_DDL (path, table) syntax Server 11.0 11.1.1

AWS library functions:

  • AWS_GET_CONFIG
  • AWS_SET_CONFIG
  • S3EXPORT
  • S3EXPORT_PARTITION
Server 11.0 12.0
Vertica Spark connector V1 Client 11.0
admintools db_add_subcluster --is-secondary argument Server 11.0
Red Hat Enterprise Linux/CentOS 6.x Server 10.1.1 11.0
STRING_TO_ARRAY(array,delimiter) syntax Server 10.1.1
Vertica JDBC API com.vertica.jdbc.kv package Client Drivers 10.1
ARRAY_CONTAINS function Server 10.1

Client-server TLS parameters:

  • SSLCertificate
  • SSLPrivateKey
  • SSLCA
  • EnableSSL

LDAP authentication parameters:

  • tls_key
  • tls_cert
  • tls_cacert
  • tls_reqcert

LDAPLink and LDAPLink dry-run parameters:

  • LDAPLinkTLSCACert
  • LDAPLinkTLSCADir
  • LDAPLinkStartTLS
  • LDAPLinkTLSReqCert
Server 10.1 11.0
MD5 hashing algorithm for user passwords Server 10.1
Reading structs from ORC files as expanded columns Server 10.1 11.0
vbr configuration section [S3] and S3 configuration parameters Server 10.1
flatten_complex_type_nulls parameter to the ORC and Parquet parsers Server 10.1 11.0
System table WOS_CONTAINER_STORAGE Server 10.0.1 11.0.2
skip_strong_schema_match parameter to the Parquet parser Server 10.0.1 10.1
Specifying segmentation on specific nodes Server 10.0.1
DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE Server 10.0.1 11.0.1
Meta-function ANALYZE_CORRELATIONS Server 10.0
Eon Mode meta-function BACKGROUND_DEPOT_WARMING Server 10.0
Reading structs from Parquet files as expanded columns Server 10.0 10.1

Eon Mode meta-functions:

  • SET_DEPOT_PIN_POLICY
  • CLEAR_DEPOT_PIN_POLICY
Server 10.0 10.1
vbr configuration parameter SnapshotEpochLagFailureThreshold Server 10.0

Array-specific functions:

  • array_min
  • array_max
  • array_sum
  • array_avg
Server 10.0 10.1
DMLTargetDirect configuration parameter Server 10.0
HiveMetadataCacheSizeMB configuration parameter Server 10.0 10.1
MoveOutInterval Server 10.0
MoveOutMaxAgeTime Server 10.0
MoveOutSizePct Server 10.0
Windows 7 Client Drivers 9.3.1
DATABASE_PARAMETERS admintools command Server 9.3.1
Write-optimized store (WOS) Server 9.3 10.0
7.2_upgrade vbr task Server 9.3
DropFailedToActivateSubscriptions configuration parameter Server 9.3 10.0
--skip-fs-checks Server 9.2.1
32-bit ODBC Linux and OS X client drivers Client 9.2.1 9.3
Vertica Python client Client 9.2.1 10.0
macOS 10.11 Client 9.2.1
DisableDirectToCommunalStorageWrites configuration parameter Server 9.2.1
CONNECT_TO_VERTICA meta-function Server 9.2.1 9.3
ReuseDataConnections configuration parameter Server 9.2.1 9.3
Network interfaces (superseded by network addresses) Server 9.2
Database branching Server 9.2 10.0
KERBEROS_HDFS_CONFIG_CHECK meta-function Server 9.2
Java 5 support JDBC Client 9.2 9.2.1

Configuration parameters for enabling projections with aggregated data:

  • EnableExprsInProjections
  • EnableGroupByProjections
  • EnableTopKProjections
  • EnableUDTProjections
Server 9.2
DISABLE_ELASTIC_CLUSTER() Server 9.1.1 11.0
eof_timeout parameter of KafkaSource Server 9.1.1 9.2
Windows Server 2012 Server 9.1.1
Debian 7.6, 7.7 Client driver 9.1.1 9.2.1
IdolLib function library Server 9.1 9.1.1
SSL certificates that contain weak CA signatures such as MD5 Server 9.1
HCatalogConnectorUseLibHDFSPP configuration parameter Server 9.1
S3 UDSource Server 9.1 9.1.1
HCatalog Connector support for WebHCat Server 9.1
partition_key column in system tables STRATA and STRATA_STRUCTURES Server 9.1 10.0.1
Vertica Pulse Server 9.0.1 9.1.1
Support for SQL Server 2008 Server 9.0.1 9.0.1
SUMMARIZE_MODEL meta-function Server 9.0 9.1
RestrictSystemTable parameter Server 9.0.1
S3EXPORT multipart parameter Server 9.0
EnableStorageBundling configuration parameter Server 9.0
Machine Learning for Predictive Analytics package parameter key_columns for data preparation functions. Server 9.0 9.0.1
DROP_PARTITION meta-function, superseded by DROP_PARTITIONS Server 9.0
Machine Learning for Predictive Analytics package parameter owner. Server 8.1.1 9.0
Backup and restore --setupconfig command Server 8.1 9.1.1
SET_RECOVER_BY_TABLE meta-function. Do not disable recovery by table. Server 8.0.1
Column rebalance_projections_status.duration_sec Server 8.0
HDFS Connector Server 8.0 9.0
Prejoin projections Server 8.0 9.2
Administration Tools option --compat21 Server 7.2.1
admin_tools -t config_nodes Server 7.2 11.0.1
Projection buddies with inconsistent sort order Server 7.2 9.0
backup.sh Server 7.2 9.0
restore.sh Server 7.2 9.0
copy_vertica_database.sh Server 7.2
JavaClassPathForUDx configuration parameter Server 7.1
ADD_LOCATION meta-function Server 7.1
bwlimit configuration parameter Server 7.1 9.0
vbr configuration parameters retryCount and retryDelay Server 7.1 11.0
EXECUTION_ENGINE_PROFILE counters: file handles, memory allocated Server 7.0 9.3
EXECUTION_ENGINE_PROFILES counter memory reserved Server 7.0
MERGE_PARTITIONS() meta-function Server 7.0

krb5 client authentication method All clients 7.0
range-segmentation-clause Server 6.1.1 9.2
scope parameter of meta-function CLEAR_PROFILING Server 6.1
Projection creation type IMPLEMENT_TEMP_DESIGN Server, clients 6.1

3 - New and changed in Vertica 24.2

New features and changes in Vertica 24.2.

3.1 - Backup and restore

New features for backup and restore.

vbr target namespace creation

For vbr restore and replicate tasks, if the namespace specified in the --target-namespace parameter does not exist in the target database, vbr creates a namespace with the name specified in --target-namespace and the shard count of the source namespace, and then replicates or restores the objects to that namespace. See Eon Mode database requirements for details.

3.2 - Client connectivity

New features for client connectivity.

Add and remove subclusters from routing rules

You can now alter a routing rule to add or remove subclusters. For details, see ALTER ROUTING RULE.

3.3 - Client drivers

New features for client drivers.

OAuth support for ADO.NET

You can now use OAuth to connect to Vertica with ADO.NET.

The ADO.NET driver uses a simplified configuration scheme and takes a single connection property: an access token that the client retrieves from the identity provider. Other flows, such as token refresh, must be handled outside of the driver.

The JDBC and ODBC drivers will follow a similar configuration scheme in future versions.

For details on OAuth, see OAuth 2.0 authentication.

3.4 - Configuration

New configuration parameters.

S3

Vertica now supports S3 access through a proxy. See the S3Proxy configuration parameter and the proxy field in S3BucketConfig.

3.5 - Containers and Kubernetes

New features for containerized Vertica.

AWS Secrets Manager support

The VerticaDB operator can access secrets that you store in Amazon Web Services (AWS) Secrets Manager. This lets you maintain a single location for all sensitive information that you share between AWS and Vertica on Kubernetes.

For details, see Secrets management.

VerticaRestorePointsQuery custom resource definition

The VerticaRestorePointsQuery custom resource definition (CRD) retrieves restore points from an archive so that you can restore objects or roll back your database to a previous state. The custom resource (CR) specifies an archive and an optional period of time, and the VerticaDB operator retrieves restore points saved to the archive.

For details, see VerticaRestorePointsQuery custom resource definition and Custom resource definition parameters.
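As a sketch only, a minimal CR might look like the following; the apiVersion and the spec field name here are assumptions, so check the CRD parameter reference for the exact spelling:

```yaml
# Hypothetical VerticaRestorePointsQuery CR that lists restore points saved
# to one archive; the field names are illustrative, not confirmed API.
apiVersion: vertica.com/v1beta1
kind: VerticaRestorePointsQuery
metadata:
  name: list-restore-points
spec:
  archiveName: nightly    # hypothetical archive to query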

VerticaDB CRD parameters

The VerticaDB custom resource definition provides the following CRD parameters to revive from a restore point:

  • restorePoint.archive
  • restorePoint.id
  • restorePoint.index

You can use the VerticaRestorePointsQuery custom resource definition to retrieve saved restore points. For details about the CRD parameters, see Custom resource definition parameters.
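A rough sketch of how these parameters might sit in a VerticaDB spec; the archive name and index value below are hypothetical (restorePoint.id instead identifies a restore point by its ID):

```yaml
# Sketch: reviving a VerticaDB from a restore point; values are hypothetical.
spec:
  restorePoint:
    archive: nightly    # archive that holds the restore point
    index: 1            # which restore point in the archive to revive from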

VerticaScrutinize custom resource definition

The VerticaScrutinize CRD runs scrutinize to collect diagnostic information about a VerticaDB CR. Vertica support might request you provide this diagnostic information while resolving support cases.

For details, see VerticaScrutinize custom resource definition.

Namespace-scoped operators

You can use Helm charts to deploy the VerticaDB operator to watch only resources within a namespace. This requires that you set the following Helm chart parameters during installation:

  • controllers.enable
  • controllers.scope

For installation instructions, see Installing the VerticaDB operator. For details about each parameter, see Helm chart parameters.
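For illustration, a Helm values fragment for a namespace-scoped deployment might look like this; the values shown (including the namespace scope value) are assumptions, so see Helm chart parameters for the accepted values:

```yaml
# values.yaml sketch for a namespace-scoped VerticaDB operator (assumed values).
controllers:
  enable: true       # run the controllers in this operator instance
  scope: namespace   # watch resources only within one namespace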

Node Management Agent (NMA) sidecar

Vertica on Kubernetes runs the Node Management Agent in a sidecar container. The NMA exposes a REST API that the VerticaDB operator uses to administer the cluster.

For details, see Containerized Vertica on Kubernetes.

Support OpenShift restricted-v2 SCC

Vertica on Kubernetes supports the restricted-v2 SCC for OpenShift. This is the most restrictive SCC available.

For details about the Vertica and OpenShift integration, see Red Hat OpenShift integration.

Containerized Kafka Scheduler

Vertica on Kubernetes supports the Kafka scheduler, a mechanism that automatically loads data from Kafka into a Vertica database table. Vertica packages the scheduler in a Helm chart so you can easily deploy a scheduler into your Kubernetes environment.

For details, see Containerized Kafka Scheduler.

3.6 - Data load

New features related to loading data.

Automatic load triggers from AWS

Data loaders can process messages from SQS (Simple Queue Service) queues on AWS to load new files that are added to an S3 bucket. You define a trigger when creating the data loader, and Vertica automatically runs EXECUTE DATA LOADER in response to events in the queue.

Iceberg version 2

CREATE EXTERNAL TABLE ICEBERG supports both version 1 and version 2 of the metadata format.

3.7 - Database management

New features for database management.

HTTPS service

For details on the HTTPS service, see HTTPS service.

View client connections

You can now use the /v1/node/connections endpoint to view the number of client connections to that node.

The following example shows that there are 11 total connections to the node at 127.0.0.1, 2 of which are just starting to initialize:

$ curl -sk -w "\n" --user dbadmin: "https://127.0.0.1:8443/v1/node/connections"
{
  "total_connections": 11,
  "user_sessions": 9,
  "initializing_connections": 2
} 
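If a JSON-aware client is not available, the counters can be pulled out with plain shell; the sketch below parses the example payload above from a local file rather than from a live endpoint:

```shell
# Save the example /v1/node/connections payload locally, then extract the
# three counters with grep (no jq required).
cat > connections.json <<'EOF'
{
  "total_connections": 11,
  "user_sessions": 9,
  "initializing_connections": 2
}
EOF
total=$(grep -o '"total_connections": *[0-9]*' connections.json | grep -o '[0-9]\+')
sessions=$(grep -o '"user_sessions": *[0-9]*' connections.json | grep -o '[0-9]\+')
init=$(grep -o '"initializing_connections": *[0-9]*' connections.json | grep -o '[0-9]\+')
# total_connections is the sum of user sessions and initializing connections.
echo "$sessions sessions, $init initializing, $total total"
```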

Drain subclusters

You can now use the /v1/subclusters/subcluster_name/drain endpoint to drain connections from a subcluster. This functionality was previously limited to the SHUTDOWN_WITH_DRAIN function. To verify the draining status of your subclusters, query DRAINING_STATUS.

To drain the connections from a subcluster, you can send a POST request with one of the following:

  • An empty body
  • {"cancel": false}

For example:

$ curl -i -X POST -d '{"cancel": false}' -ks -w "\n" --user dbadmin: https://127.0.0.1:8443/v1/subclusters/sc_01/drain
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 43
Connection: keep-alive
Server: oatpp/1.3.0 

To stop draining connections, send a POST request with {"cancel": true}. For example:

$ curl -i -X POST -d '{"cancel": true}' -ks -w "\n" --user dbadmin: https://127.0.0.1:8443/v1/subclusters/sc_01/drain
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 47
Connection: keep-alive
Server: oatpp/1.3.0

View subscription states

The new /v1/subscriptions endpoint returns information on a node's subscriptions to shards:

  • node_name: The name of the node
  • shard_name: The shard the node is subscribed to
  • subscription_state: Subscription status (ACTIVE, PENDING, PASSIVE, or REMOVING)
  • is_primary: Whether the subscription is a primary subscription

For example:

$ curl -i -sk -w "\n" --user dbadmin:my_password "https://127.0.0.1:8443/v1/subscriptions"

{
  "subscriptions_list":
   [
    {
      "node_name": "node08",
      "shard_name": "segment0004",
      "subscription_state": "ACTIVE",
      "is_primary": false
    },
    ...
  ]
} 
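As a sketch, plain shell can count a node's active subscriptions in this payload; the second entry below is invented to stand in for the elided ones:

```shell
# Count ACTIVE shard subscriptions in a /v1/subscriptions response saved
# locally (the PENDING entry is illustrative, not from a real cluster).
cat > subscriptions.json <<'EOF'
{
  "subscriptions_list": [
    {"node_name": "node08", "shard_name": "segment0004",
     "subscription_state": "ACTIVE", "is_primary": false},
    {"node_name": "node08", "shard_name": "segment0005",
     "subscription_state": "PENDING", "is_primary": false}
  ]
}
EOF
active=$(grep -c '"subscription_state": "ACTIVE"' subscriptions.json)
echo "active subscriptions: $active"
```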

Depot and data paths

The following storage location fields have been added to the /v1/nodes and /v1/nodes/node_name endpoints:

  • data_path: A list of paths used to store USAGE 'DATA,TEMP' data.
  • depot_path: The path used to store USAGE 'DEPOT' data.

For example:

$ curl -i -sk --user dbadmin:my_password https://127.0.0.1:8443/v1/nodes
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 648
Connection: keep-alive
Server: oatpp/1.3.0

{
  "detail": null,
  "node_list": [
    {
      "name": "v_vmart_node0001",
      "node_id": 45035996273704982,
      "address": "192.0.2.0",
      "state": "UP",
      "database": "VMart",
      "is_primary": true,
      "is_readonly": false,
      "catalog_path": "\/scratch_b\/VMart\/v_vmart_node0001_catalog\/Catalog",
      "data_path": [
        "\/scratch_b\/VMart\/v_vmart_node0001_data"
      ],
      "depot_path": "\/scratch_b\/VMart/my_depot",
      "subcluster_name": "",
      "last_msg_from_node_at": "2023-12-01T12:38:37.009443",
      "down_since": null,
      "build_info": "v24.1.0-20231126-36ee8c3de77d43c6ad7bbef252302977952ac9d6"
    }
  ]
}

$ curl -i -sk --user dbadmin:my_password https://127.0.0.1:8443/v1/nodes/v_vmart_node0001/
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 648
Connection: keep-alive
Server: oatpp/1.3.0

{
  "detail": null,
  "node_list": [
    {
      "name": "v_vmart_node0001",
      "node_id": 45035996273704982,
      "address": "192.0.2.0",
      "state": "UP",
      "database": "VMart",
      "is_primary": true,
      "is_readonly": false,
      "catalog_path": "\/scratch_b\/VMart\/v_vmart_node0001_catalog\/Catalog",
      "data_path": [
        "\/scratch_b\/VMart\/v_vmart_node0001_data"
      ],
      "depot_path": "\/scratch_b\/VMart/my_depot",
      "subcluster_name": "",
      "last_msg_from_node_at": "2023-12-01T12:38:37.009443",
      "down_since": null,
      "build_info": "v24.1.0-20231126-36ee8c3de77d43c6ad7bbef252302977952ac9d6"
    }
  ]
}

3.8 - Machine learning

New features related to machine learning.

Partial least squares (PLS) regression support

Vertica now supports PLS regression models.

Combining aspects of PCA (principal component analysis) and linear regression, the PLS regression algorithm extracts a set of latent components that explain as much covariance as possible between the predictor and response variables, and then performs a regression that predicts response values using the extracted components.

This technique is particularly useful when the number of predictor variables is greater than the number of observations or the predictor variables are highly collinear. If either of these conditions is true of the input relation, ordinary linear regression fails to converge to an accurate model.
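In one common textbook formulation (not Vertica-specific notation), the predictor matrix X and response matrix Y are decomposed over a shared matrix of latent component scores T:

```latex
% T: latent component scores, P and Q: loading matrices, E and F: residuals.
X = T P^{\top} + E, \qquad Y = T Q^{\top} + F
```

The regression step then predicts Y from the scores T rather than from the original, possibly collinear, columns of X.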

The PLS_REG function creates and trains a PLS model, and the PREDICT_PLS_REG function makes predictions on an input relation using a PLS model. For an in-depth example, see PLS regression.

Vector autoregression (VAR) support

Vertica now supports VAR models.

VAR is a multivariate autoregressive time series algorithm that captures the relationship between multiple time series variables over time. Unlike AR, which only considers a single variable, VAR models incorporate feedback between different variables in the model, enabling the model to analyze how variables interact across lagged time steps. For example, with two variables—atmospheric pressure and rain accumulation—a VAR model could determine whether a drop in pressure tends to result in rain at a future date.

The AUTOREGRESSOR function automatically executes the algorithm that fits your input data:

  • One value column: the function executes autoregression and returns a trained AR model.
  • Multiple value columns: the function executes vector autoregression and returns a trained VAR model.

To make predictions with a VAR model, use the PREDICT_AUTOREGRESSOR function. See VAR model example for an extended example.
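In standard notation (again, not Vertica-specific), a VAR model of lag order p expresses the vector of the k value columns at time t as a linear function of its own lags:

```latex
% y_t: k x 1 vector of the value columns at time t; c: intercept vector;
% A_i: k x k coefficient matrices; e_t: error term.
y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + e_t
```

The off-diagonal entries of the A_i matrices capture the feedback between variables, such as a pressure drop influencing future rain accumulation.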

3.9 - Security and authentication

New features for security and authentication.

OAuth2 security configuration parameters

There are new security configuration parameters that provide more control for users created with just-in-time (JIT) provisioning:

  • OAuth2JITRolesClaimName: Identifies a role claim in the IdP. JIT-provisioned users are automatically assigned the roles in this claim as default roles. This parameter replaces OAuth2JITClient.
  • OAuth2JITGroupsClaimName: Identifies a group claim in the IdP. JIT-provisioned users are automatically assigned the group claim name or the roles in the group claim as default roles.
  • OAuth2JITForbiddenRoles: Excludes the specified roles from automatic role assignment.

For details, see Security parameters.

OAuth authentication parameters

Vertica provides the following OAuth authentication parameters that configure an OAuth authentication record that uses JIT provisioning:

  • groups_claim_name
  • oauth2_jit_authorized_roles
  • role_group_suffix
  • roles_claim_name

For details about each parameter, see OAuth authentication parameters.

Automatic role assignment for JWT validation

Vertica supports automatic role assignment for just-in-time provisioned (JIT) users that use authentication records with the JWT validation type.

For details, see Just-in-time user provisioning.

Fixed schedules for LDAP link synchronization

The LDAP Link service now supports fixed schedules through the LDAPLinkCron parameter, an alternative to LDAPLinkInterval.

LDAPLinkInterval calculates the time of the next synchronization based on the completion time of the last synchronization. For example, suppose LDAPLinkInterval is set to 24 hours. If synchronization starts at 9:00 AM and finishes in 30 minutes, the next synchronization will occur at 9:30 AM the next day.

The new LDAPLinkCron parameter lets you designate an exact time for the synchronization with a cron expression so that the completion time doesn't affect the next runtime. Value separators are not currently supported.

For details, see LDAP link parameters.

For example, to run the LDAP Link synchronization operation at 7:00 PM on alternate days of the month (the 1st, 3rd, 5th, and so on):

=> ALTER DATABASE DEFAULT SET LDAPLinkCron='0 19 */2 * *';
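In standard cron semantics, the */2 step in the day-of-month field starts at the field minimum of 1 and advances by 2, so expanding it shows the days this schedule matches:

```shell
# Expand the day-of-month field "*/2" the way a standard cron matcher does:
# start at the field minimum (1) and step by 2, i.e. the odd-numbered days.
days=$(seq 1 2 31 | tr '\n' ' ')
echo "0 19 */2 * * fires at 19:00 on days: $days"
```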

3.10 - Stored procedures

New features for stored procedures in 24.2.0.

Schedule steps and ranges

Schedules now support cron expressions that use steps and ranges. For details, see Scheduled execution.
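For intuition on the two forms (standard cron semantics; Vertica's exact grammar is documented in Scheduled execution), a range such as 9-17 matches every value from 9 through 17, and a step such as */15 matches every fifteenth value starting from the field minimum:

```shell
# Expand a range (hour field "9-17") and a step (minute field "*/15") to the
# concrete values a standard cron matcher would fire on.
range_hours=$(seq 9 17 | tr '\n' ' ')
step_minutes=$(seq 0 15 59 | tr '\n' ' ')
echo "hours 9-17   -> $range_hours"
echo "minutes */15 -> $step_minutes"
```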