New features
This guide briefly describes the new features introduced in the most recent releases of Vertica and provides references to detailed information in the documentation set.
For known and fixed issues in the most recent release, see the Vertica Release Notes.
1 - Deprecated and removed functionality
Vertica retires functionality in two phases:
- Deprecated: Vertica announces deprecated features and functionality in a major or minor release. Deprecated features remain in the product and are functional. Published release documentation announces deprecation on this page. When users access this functionality, it may return informational messages about its pending removal.
- Removed: Vertica removes a feature in a major or minor release that follows the deprecation announcement. Users can no longer access the functionality, and this page is updated to verify removal (see History, below). Documentation that describes this functionality is removed, but remains in previous documentation versions.
Deprecated
The following functionality was deprecated and will be removed in future versions:
Removed
The following functionality was removed:
| Release | Functionality | Notes |
|---|---|---|
| 11.1.1 | INFER_EXTERNAL_TABLE_DDL (path, table) syntax | Replaced by USING PARAMETERS syntax and INFER_TABLE_DDL |
History
The following functionality or support has been deprecated or removed as indicated:
| Functionality | Component | Deprecated in | Removed in |
|---|---|---|---|
| INFER_EXTERNAL_TABLE_DDL function | Server | 11.1.1 | |
| Admission Controller Webhook image | Kubernetes | 11.0.1 | 11.0.2 |
| Admission Controller Helm chart | Kubernetes | 11.0.1 | |
| Shared DATA and DATA,TEMP storage locations | Server | 11.0.1 | |
| DESIGN_ALL option for EXPORT_CATALOG() | Server | 11.0 | |
| HDFSUseWebHDFS configuration parameter and LibHDFS++ | Server | 11.0 | |
| INFER_EXTERNAL_TABLE_DDL (path, table) syntax | Server | 11.0 | 11.1.1 |
| AWS library functions: AWS_GET_CONFIG, AWS_SET_CONFIG, S3EXPORT, S3EXPORT_PARTITION | Server | 11.0 | |
| Vertica Spark connector V1 | Client | 11.0 | |
| admintools db_add_subcluster --is-secondary argument | Server | 11.0 | |
| Red Hat Enterprise Linux/CentOS 6.x | Server | 10.1.1 | 11.0 |
| STRING_TO_ARRAY(array,delimiter) syntax | Server | 10.1.1 | |
| Vertica JDBC API com.vertica.jdbc.kv package | Client Drivers | 10.1 | |
| ARRAY_CONTAINS function | Server | 10.1 | |
| Client-server TLS parameters (SSLCertificate, SSLPrivateKey, SSLCA, EnableSSL); LDAP authentication parameters (tls_key, tls_cert, tls_cacert, tls_reqcert); LDAPLink and LDAPLink dry-run parameters (LDAPLinkTLSCACert, LDAPLinkTLSCADir, LDAPLinkStartTLS, LDAPLinkTLSReqCert) | Server | 10.1 | 11.0 |
| MD5 hashing algorithm for user passwords | Server | 10.1 | |
| Reading structs from ORC files as expanded columns | Server | 10.1 | 11.0 |
| vbr configuration section [S3] and S3 configuration parameters | Server | 10.1 | |
| flatten_complex_type_nulls parameter to the ORC and Parquet parsers | Server | 10.1 | 11.0 |
| System table WOS_CONTAINER_STORAGE | Server | 10.0.1 | 11.0.2 |
| skip_strong_schema_match parameter to the Parquet parser | Server | 10.0.1 | 10.1 |
| Specifying segmentation on specific nodes | Server | 10.0.1 | |
| DBD meta-function DESIGNER_SET_ANALYZE_CORRELATIONS_MODE | Server | 10.0.1 | 11.0.1 |
| Meta-function ANALYZE_CORRELATIONS | Server | 10.0 | |
| Eon Mode meta-function BACKGROUND_DEPOT_WARMING | Server | 10.0 | |
| Reading structs from Parquet files as expanded columns | Server | 10.0 | 10.1 |
| Eon Mode meta-functions: SET_DEPOT_PIN_POLICY, CLEAR_DEPOT_PIN_POLICY | Server | 10.0 | 10.1 |
| vbr configuration parameter SnapshotEpochLagFailureThreshold | Server | 10.0 | |
| Array-specific functions: array_min, array_max, array_sum, array_avg | Server | 10.0 | 10.1 |
| DMLTargetDirect configuration parameter | Server | 10.0 | |
| HiveMetadataCacheSizeMB configuration parameter | Server | 10.0 | 10.1 |
| MoveOutInterval | Server | 10.0 | |
| MoveOutMaxAgeTime | Server | 10.0 | |
| MoveOutSizePct | Server | 10.0 | |
| Windows 7 | Client Drivers | | 9.3.1 |
| DATABASE_PARAMETERS admintools command | Server | 9.3.1 | |
| Write-optimized store (WOS) | Server | 9.3 | 10.0 |
| 7.2_upgrade vbr task | Server | 9.3 | |
| DropFailedToActivateSubscriptions configuration parameter | Server | 9.3 | 10.0 |
| --skip-fs-checks | Server | 9.2.1 | |
| 32-bit ODBC Linux and OS X client drivers | Client | 9.2.1 | 9.3 |
| Vertica Python client | Client | 9.2.1 | 10.0 |
| Mac 10.11 | Client | 9.2.1 | |
| DisableDirectToCommunalStorageWrites configuration parameter | Server | 9.2.1 | |
| CONNECT_TO_VERTICA meta-function | Server | 9.2.1 | 9.3 |
| ReuseDataConnections configuration parameter | Server | 9.2.1 | 9.3 |
| Network interfaces (superseded by network addresses) | Server | 9.2 | |
| Database branching | Server | 9.2 | 10.0 |
| KERBEROS_HDFS_CONFIG_CHECK meta-function | Server | 9.2 | |
| Java 5 support | JDBC Client | 9.2 | 9.2.1 |
| Configuration parameters for enabling projections with aggregated data: EnableExprsInProjections, EnableGroupByProjections, EnableTopKProjections, EnableUDTProjections | Server | 9.2 | |
| DISABLE_ELASTIC_CLUSTER() | Server | 9.1.1 | 11.0 |
| eof_timeout parameter of KafkaSource | Server | 9.1.1 | 9.2 |
| Windows Server 2012 | Server | 9.1.1 | |
| Debian 7.6, 7.7 | Client driver | 9.1.1 | 9.2.1 |
| IdolLib function library | Server | 9.1 | 9.1.1 |
| SSL certificates that contain weak CA signatures such as MD5 | Server | 9.1 | |
| HCatalogConnectorUseLibHDFSPP configuration parameter | Server | 9.1 | |
| S3 UDSource | Server | 9.1 | 9.1.1 |
| HCatalog Connector support for WebHCat | Server | 9.1 | |
| partition_key column in system tables STRATA and STRATA_STRUCTURES | Server | 9.1 | 10.0.1 |
| Vertica Pulse | Server | 9.0.1 | 9.1.1 |
| Support for SQL Server 2008 | Server | 9.0.1 | 9.0.1 |
| SUMMARIZE_MODEL meta-function | Server | 9.0 | 9.1 |
| RestrictSystemTable parameter | Server | 9.0.1 | |
| S3EXPORT multipart parameter | Server | 9.0 | |
| EnableStorageBundling configuration parameter | Server | 9.0 | |
| Machine Learning for Predictive Analytics package parameter key_columns for data preparation functions | Server | 9.0 | 9.0.1 |
| DROP_PARTITION meta-function, superseded by DROP_PARTITIONS | Server | 9.0 | |
| Machine Learning for Predictive Analytics package parameter owner | Server | 8.1.1 | 9.0 |
| Backup and restore --setupconfig command | Server | 8.1 | 9.1.1 |
| SET_RECOVER_BY_TABLE meta-function. Do not disable recovery by table. | Server | 8.0.1 | |
| Column rebalance_projections_status.duration_sec | Server | 8.0 | |
| HDFS Connector | Server | 8.0 | 9.0 |
| Prejoin projections | Server | 8.0 | 9.2 |
| Administration Tools option --compat21 | Server | 7.2.1 | |
| admin_tools -t config_nodes | Server | 7.2 | 11.0.1 |
| Projection buddies with inconsistent sort order | Server | 7.2 | 9.0 |
| backup.sh | Server | 7.2 | 9.0 |
| restore.sh | Server | 7.2 | 9.0 |
| copy_vertica_database.sh | Server | 7.2 | |
| JavaClassPathForUDx configuration parameter | Server | 7.1 | |
| ADD_LOCATION meta-function | Server | 7.1 | |
| bwlimit configuration parameter | Server | 7.1 | 9.0 |
| vbr configuration parameters retryCount and retryDelay | Server | 7.1 | 11.0 |
| EXECUTION_ENGINE_PROFILE counters: file handles, memory allocated | Server | 7.0 | 9.3 |
| EXECUTION_ENGINE_PROFILES counter memory reserved | Server | 7.0 | |
| MERGE_PARTITIONS() meta-function | Server | 7.0 | |
| krb5 client authentication method (use the Kerberos gss method for client authentication instead) | All clients | 7.0 | |
| range-segmentation-clause | Server | 6.1.1 | 9.2 |
| scope parameter of meta-function CLEAR_PROFILING | Server | 6.1 | |
| Projection creation type IMPLEMENT_TEMP_DESIGN | Server, clients | 6.1 | |
3 - New and changed in Vertica 11.1.0
3.1 - Containers and Kubernetes
Red Hat OpenShift support
Vertica on Kubernetes supports OpenShift, a hybrid cloud platform that adds security features and additional support to Kubernetes. The Vertica DB operator is available in the OpenShift OperatorHub as a community-based operator that supports OpenShift versions 4.8 and higher.
For details, see Red Hat OpenShift integration.
Vertica server online upgrade
You can now upgrade the Vertica server version without taking your cluster offline. Set the new upgradePolicy custom resource parameter to upgrade one subcluster at a time without interrupting database activity.
For details, see Upgrading Vertica on Kubernetes.
Managed Kubernetes services support
Vertica supports managed Kubernetes services on Google Kubernetes Engine (GKE).
Improved logging with Helm chart installs
Vertica DB operators that were installed with Helm have additional logging features. You can specify whether the operator sends logs to standard output, or to a file in the operator pod filesystem. In addition, you can set logging levels with new Helm chart parameters.
For details about the new logging Helm chart parameters, see Helm chart parameters.
Add environment variables to the Vertica server container
Use annotations in the custom resource to add environment variables in the Vertica server container. For details, see Custom resource definition parameters.
3.2 - Data types
JSON and Avro complex types
The JSON and Avro parsers now support complex types with strong typing. This support is in addition to flexible complex types that use a VMap (LONG VARBINARY) column to hold complex types. For details, see JSON data and Avro data.
This change does not include KafkaAvroParser.
Casting complex types
Vertica now supports casting structs (the ROW type) and non-native arrays. When casting a ROW, you can change the names of the fields. When casting an array, you can change the bounds. When casting to a bounded native array, inputs that are too long are truncated. When casting to a non-native array (an array containing complex data types including other arrays), if the new bounds are too small for the data the cast fails.
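For example, a minimal sketch of both casts; the literal values are hypothetical, and the `::ROW(...)` and `ARRAY[type, bound]` syntax is assumed from Vertica's complex-type notation:
=> -- rename fields while casting a ROW
=> SELECT ROW('Science','Physics')::ROW(school VARCHAR, major VARCHAR);
=> -- cast to a bounded native array; elements beyond the bound are truncated
=> SELECT ARRAY[1,2,3,4,5]::ARRAY[INT,3];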
3.3 - Loading data
The Parquet parser now supports the following:
3.4 - Machine learning
TensorFlow: additional support for INT and FLOAT types
Previous versions of Vertica supported the following data types:
- Input: TF_FLOAT (default), TF_DOUBLE
- Output: TF_FLOAT (default), TF_INT64
Vertica 11.1 adds support for the following data types:
- Input: TF_INT8, TF_INT16, TF_INT32, TF_INT64
- Output: TF_DOUBLE, TF_INT8, TF_INT16, TF_INT32
For details, see tf_model_desc.json overview.
3.5 - SDK updates
Reporting errors and warnings (C++)
A new API allows a UDx written in C++ to report errors, warnings, and other messages to the client. A message can contain a details string and a hint, like the messages that Vertica itself reports. See Sending messages.
3.6 - Security and authentication
PasswordLockTimeUnit
The new PasswordLockTimeUnit security parameter lets you specify the time units for which an account is locked after a certain number of FAILED_LOGIN_ATTEMPTS. The number of time units is controlled by the profile parameter PASSWORD_LOCK_TIME.
Account locking is disabled by default and must be configured manually by creating or altering a profile. For details, see Account locking.
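As a hedged sketch, the following locks accounts for 30 time units after three failed logins; the profile and user names are hypothetical, and the unit value 'm' (minutes) is an assumption about the values PasswordLockTimeUnit accepts:
=> ALTER DATABASE DEFAULT SET PasswordLockTimeUnit = 'm';  -- 'm' assumed to denote minutes
=> CREATE PROFILE locking_profile LIMIT FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 30;
=> ALTER USER dbuser PROFILE locking_profile;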
OAuth 2.0 support
You can now create and manage OAuth 2.0 authentication records and allow ODBC and JDBC clients to verify with an identity provider and then authenticate to Vertica with an access token (rather than a username and password).
3.7 - SQL functions and statements
The INFER_EXTERNAL_TABLE_DDL function now supports input files in the Avro format, in addition to Parquet and ORC.
ALTER USER for subcluster-specific resource pool
The ALTER USER statement now supports assigning a subcluster resource pool to a user. For example:
=> ALTER USER user RESOURCE POOL resource-pool FOR SUBCLUSTER subcluster-name;
3.8 - Stored procedures
Execute with definer's user-level parameters
Stored procedures that use SECURITY DEFINER now execute with the definer's:
3.9 - System tables
New BUILD_INFO column in NODES table
The NODES system table contains a new column named BUILD_INFO. This column contains the version of the Vertica server binary that is running on the node.
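For example, to compare the server binary running on each node:
=> SELECT node_name, node_state, build_info FROM nodes;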
New status for NODE_STATE column in NODES and NODE_STATES tables
The NODE_STATE column in the NODES and NODE_STATES tables has a new status named UNKNOWN. A node can be in this state when Vertica identifies it as part of the cluster, but cannot communicate with the node to determine its current state.
4 - New and changed in Vertica 11.1.1
4.1 - Configuration
Session-level support for SaveDCEEProfileThresholdUS
Configuration parameter SaveDCEEProfileThresholdUS can now be set at the session level.
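For example, a minimal sketch using ALTER SESSION; the threshold value is arbitrary:
=> ALTER SESSION SET SaveDCEEProfileThresholdUS = 100000;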
4.2 - Containers and Kubernetes
Operator Helm chart privileges with service accounts
Users that do not have cluster administrator privileges can install and run the operator by passing a custom service account with operator privileges to the serviceAccountNameOverride Helm chart parameter. The serviceAccountNameOverride parameter prevents the Helm chart from creating a default service account for the operator installation.
The cluster administrator must create the custom service account and grant it operator privileges before the non-admin user installs the Helm chart.
For details about granting operator privileges, see Installing the Vertica DB operator. For details about serviceAccountNameOverride, see Helm chart parameters.
Install the Helm chart without the admission controller webhook
Users with limited permissions might want to install only the operator to manage Vertica on Kubernetes. The webhook.enable Helm chart parameter controls whether the Helm chart installs the admission controller webhook along with the operator.
For detailed installation steps, see Installing the Vertica DB operator. For details about webhook.enable, see Helm chart parameters.
Managed Kubernetes services support
Vertica supports managed Kubernetes services on Azure Kubernetes Service (AKS).
Air-gapped operator with Helm chart
As a security feature, you can deploy the operator in an air-gapped cluster, which is a cluster that is disconnected from other networks. Use the following parameters to configure the isolated operator:
- image.repo
- rbac_proxy_image.name
- rbac_proxy_image.repo
- imagePullSecrets
For additional details about each of these parameters, see Helm chart parameters.
Customize LoadBalancer service types
Assign a static IP address for subclusters that use the LoadBalancer service type with the subclusters[i].loadBalancerIP parameter. This is useful for bare metal Vertica on Kubernetes deployments.
Add custom annotations to implementation-specific services with the subclusters[i].serviceAnnotations parameter. Managed Kubernetes offerings use annotations on service objects to control logging and other actions.
For details about each parameter, see Custom resource definition parameters. For a list of supported Managed services, see Containerized environments.
During an upgrade, the operator runs a reconcile iteration and requeues any resource objects for additional work if their actual state does not match the desired state. Set the upgradeRequeueTime parameter to determine how long the operator waits to complete any requeued work.
For additional details, see Upgrading Vertica on Kubernetes and Custom resource definition parameters.
VerticaAutoscaler custom resource
The new VerticaAutoscaler custom resource automatically scales existing subclusters by subcluster or pod. You can select the metric and trigger that determines when the workload resource scales.
For details, see VerticaAutoscaler custom resource.
Prometheus integration
Vertica on Kubernetes integrates with Prometheus to collect metrics on the VerticaDB operator. Helm chart parameters configure role-based access control (RBAC) or allow external client connections without RBAC authorization.
For details, see Prometheus integration.
4.3 - Data export
JSON export
You can now export data to JSON files, including heterogeneous complex types. For details, see EXPORT TO JSON.
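For example, a minimal sketch; the output directory and table are hypothetical:
=> EXPORT TO JSON(directory='/scratch/sales_json') AS SELECT * FROM public.sales;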
Export to single file
The following file exporters can now export all data to a single file:
If you choose to use this option, a single node performs the export.
4.4 - Data types
ROW and ARRAY literals support nested complex types
When directly constructing ROW or ARRAY types, for example to use in a WHERE clause, you can now include fields and elements of these types. For example:
=> SELECT id.name, id.num, GPA FROM students
WHERE major = ARRAY[ROW('Science','Physics')];
name | num | GPA
-------+-----+-----
bob | 121 | 3.3
carol | 123 | 3.4
(2 rows)
Adding and removing columns of complex types
You can now use ALTER TABLE to add and remove columns of complex types. Previously, you could create a table with complex-type columns but could not add them to existing tables.
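For example, a sketch with hypothetical table and column names:
=> ALTER TABLE orders ADD COLUMN shipments ARRAY[ROW(street VARCHAR, zip VARCHAR)];
=> ALTER TABLE orders DROP COLUMN shipments;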
4.5 - Database management
Syslog notifiers
You can now create notifiers that send messages to syslog.
Events monitored by this notifier type are not logged to MONITORING_EVENTS or vertica.log.
For details, see Configuring reporting for syslog.
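A minimal sketch of creating a syslog notifier; the notifier name and the severity value in PARAMETERS are illustrative:
=> CREATE NOTIFIER v_syslog_notifier ACTION 'syslog'
   ENABLE
   MAXMEMORYSIZE '10M'
   PARAMETERS 'eventSeverity = 5';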
4.6 - Geospatial analytics
ST_GeomFromGeoJSON ignore_errors parameter
The ST_GeomFromGeoJSON function has a new ignore_errors parameter. When set to true, it returns NULL if the input GeoJSON fails to parse.
Use this setting when you are loading data that might contain faulty GeoJSON values. You can fill any faulty value's GEOMETRY column with NULL, and continue loading records.
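For example, a sketch assuming the parameter is passed with USING PARAMETERS, as with other geospatial functions; the coordinates are arbitrary:
=> SELECT ST_AsText(ST_GeomFromGeoJSON('{"type":"Point","coordinates":[35.3606, 138.7274]}'
       USING PARAMETERS ignore_errors='true'));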
For details, see ST_GeomFromGeoJSON.
4.7 - Apache Hadoop integration
WebHDFS access with mTLS
For loading and exporting data, Vertica can access HDFS clusters protected by mTLS through the swebhdfs scheme. You must create a certificate and key and set the WebhdfsClientCertConf configuration parameter. The parameter value is a JSON string listing name services or authorities and their corresponding keys, so you can configure access to more than one HDFS cluster.
You can use CREATE KEY and CREATE CERTIFICATE to create temporary, session-scoped values. These keys and certificates are stored in memory, not on disk. You can set the configuration parameter at the session or database level.
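A hedged sketch of setting the parameter at the session level; the certificate name is hypothetical, and the JSON field names (authority, nameservice, certName) are assumptions based on the description above:
=> ALTER SESSION SET WebhdfsClientCertConf =
   '[{"authority": "hdfs1.example.com:50070", "certName": "hdfs_cert"},
     {"nameservice": "prod", "certName": "prod_cert"}]';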
For details and examples, see HDFS file system.
4.8 - Loading data
Avro maps
You can now treat an Avro map as an ARRAY[ROW]. The ROW fields must be named key and value. These are the names that the Avro format uses for those fields in the data, and the Avro parser relies on field names to match data to table columns.
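For example, a sketch assuming an Avro field whose type is a map of string to string; the table, columns, and file path are hypothetical:
=> CREATE EXTERNAL TABLE readings(
       device VARCHAR,
       properties ARRAY[ROW(key VARCHAR, value VARCHAR)]
   ) AS COPY FROM '/data/readings.avro' PARSER FAVROPARSER();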
4.9 - Machine learning
Support for XGBoost PMML models
You can now import and export XGBoost PMML models. In PMML, an XGBoost model is an ensemble of TreeModels called a multiple model.
New probability_normalization parameter for PREDICT_XGB_CLASSIFIER and PREDICT_XGB_CLASSIFIER_CLASSES
Previously, the PREDICT_XGB_CLASSIFIER and PREDICT_XGB_CLASSIFIER_CLASSES functions used only the logit method to normalize the prediction probability of different classes. While these functions still use the logit method by default, this new parameter switches the normalization method to softmax.
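For example, a sketch selecting softmax normalization; the model, table, and input columns are hypothetical:
=> SELECT PREDICT_XGB_CLASSIFIER(age, bmi
       USING PARAMETERS model_name='xgb_model', probability_normalization='softmax')
   FROM patients;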
4.10 - Management Console
Eon Mode subcluster and node actions support for Microsoft Azure
Management Console supports the following new features for Microsoft Azure:
- Create new subclusters
- Scale subclusters up and down
- View primary and secondary subcluster information
- Start, stop, create, and terminate nodes in a subcluster
For details, see Microsoft Azure in MC.
4.11 - Query optimization
Materialized WITH clause now supported using EE5 temp relation
Previously, Vertica implemented WITH clauses by either rewriting WITH statements using subqueries, or materializing WITH queries into local temp tables. By default, Vertica now executes WITH clause queries in sections of a multi-part plan and saves their output in EE5 temp relations. This update improves overall performance of WITH clauses. To learn more, see Materialization of WITH Clause.
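As a hedged illustration, materialization can be toggled for a session before running a WITH query; the WithClauseMaterialization parameter name is an assumption carried over from prior releases, and the tables are hypothetical:
=> -- assumed parameter; enables WITH clause materialization for this session
=> ALTER SESSION SET WithClauseMaterialization = 1;
=> WITH store_totals AS (SELECT store_key, SUM(amount) AS total FROM daily_sales GROUP BY store_key)
   SELECT store_key FROM store_totals WHERE total > 100000;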
4.12 - SDK updates
Reporting errors and warnings (Python)
You can now use a UDx written in Python to report errors, warnings, and other messages to the client. A message can contain a details string and a hint, like the messages that Vertica itself reports. The interface is similar to the C++ reporting API. For details, see Sending messages.
4.13 - Security and authentication
OAuth
Okta support
Vertica now supports Okta as an identity provider (IDP) for OAuth authentication. For details, see OAuth 2.0 authentication.
Empty username and password requirement removed (JDBC)
In 11.1.0, JDBC client applications that used OAuth had to pass an empty string for the user and password parameters in the connection string to authenticate. These parameters are no longer required when authenticating with OAuth.
Discovery URL parameter (Keycloak only)
You can now specify a discovery URL for both an authentication record and JDBC/ODBC. The discovery URL is an endpoint that contains your identity provider's OpenID Provider Configuration Document. This parameter is only supported for Keycloak.
If specified as a connection property in JDBC or ODBC, the client drivers will automatically retrieve the token URL from the discovery URL.
Similarly, if specified as an authentication parameter in Vertica, Vertica will automatically retrieve the introspect_url from the discovery_url.
Scope parameter
The optional scope parameter lets you define the extent of access granted by the access token. For details about OAuth scoping, see the OAuth Documentation.
For details on this and other parameters, see JDBC connection properties and DSN Parameters (ODBC).
GRANT/REVOKE ALTER on TLS CONFIGURATION
You can now delegate the management (adding or removing certificates, changing the TLSMODE, etc.) of TLS Configurations to users or roles by granting and revoking ALTER privileges.
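For example, a sketch granting and revoking this privilege; the role name is hypothetical, and server is assumed to be the predefined TLS Configuration for client-server traffic:
=> GRANT ALTER ON TLS CONFIGURATION server TO tls_manager;
=> REVOKE ALTER ON TLS CONFIGURATION server FROM tls_manager;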
4.14 - SQL functions and statements
Multi-row INSERT
You can use INSERT to insert multiple rows in the target table, by specifying multiple comma-delimited VALUES lists. For example:
=> CREATE TABLE public.t1(a int, b int, c varchar(16));
CREATE TABLE
=> INSERT INTO t1 VALUES (1,2, 'un, deux'), (3,4, 'trois, quartre');
OUTPUT
--------
2
(1 row)
=> COMMIT;
COMMIT
=> SELECT * FROM t1;
a | b | c
---+---+----------------
1 | 2 | un, deux
3 | 4 | trois, quartre
(2 rows)
UPDATE enhancements
The FROM clause of an UPDATE statement can now reference the target table, as follows:
FROM DEFAULT [join-type] JOIN table-reference [ ON join-predicate ]
DEFAULT specifies the table to update. This keyword can be used only once in the FROM clause, and it cannot be used elsewhere in the UPDATE statement.
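For example, a sketch of this syntax with hypothetical tables, where DEFAULT stands in for the target table inventory:
=> UPDATE inventory
   SET qty_on_hand = qty_on_hand - s.qty_sold
   FROM DEFAULT JOIN daily_sales s ON inventory.product_id = s.product_id;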
UPDATE statements also now support the following hints:
INFER_TABLE_DDL for native and external tables
The INFER_TABLE_DDL function inspects a data file in Parquet, ORC, or Avro format and returns a draft table definition. It supports native and external tables and replaces INFER_EXTERNAL_TABLE_DDL, which is now deprecated.
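For example, a minimal sketch that drafts a native table definition from Parquet files; the path and table name are hypothetical:
=> SELECT INFER_TABLE_DDL('/data/people/*.parquet'
       USING PARAMETERS format='parquet', table_name='people', table_type='native');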
Aliases for collection functions
For compatibility with other database platforms, ARRAY_COUNT is now an alias of APPLY_COUNT and ARRAY_LENGTH is now an alias of APPLY_COUNT_ELEMENTS. ARRAY_COUNT and ARRAY_LENGTH are no longer deprecated.