Statements
The primary structure of a SQL query is its statement. Whether a statement stands on its own, or is part of a multi-statement query, each statement must end with a semicolon. The following example contains four common SQL statements—CREATE TABLE, INSERT, SELECT, and COMMIT:
=> CREATE TABLE comments (id INT, comment VARCHAR);
CREATE TABLE
=> INSERT INTO comments VALUES (1, 'Hello World');
OUTPUT
--------
1
(1 row)
=> SELECT * FROM comments;
id | comment
----+-------------
1 | Hello World
(1 row)
=> COMMIT;
COMMIT
=>
1 - ACTIVATE DIRECTED QUERY
Activates a directed query and makes it available to the query optimizer across all sessions.
Syntax
ACTIVATE DIRECTED QUERY { query-name | where-clause }
Arguments
query-name
- Name of the directed query to activate, as stored in the query_name column of system table DIRECTED_QUERIES. You can also use GET DIRECTED QUERY to obtain the names of all directed queries that map to an input query.
where-clause
- Resolves to one or more directed queries that are filtered from system table DIRECTED_QUERIES. For example, the following statement activates all directed queries with the same save_plans_version identifier:
=> ACTIVATE DIRECTED QUERY WHERE save_plans_version = 21;
Privileges
Superuser
Activation life cycle
After you activate a directed query, it remains active until it is explicitly deactivated by DEACTIVATE DIRECTED QUERY or removed from storage by DROP DIRECTED QUERY. If a directed query is active at the time of database shutdown, Vertica automatically reactivates it when you restart the database.
Examples
See Managing directed queries.
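A minimal sketch, assuming a directed query named findEmployeesCities is already saved in DIRECTED_QUERIES (the name is hypothetical):
=> ACTIVATE DIRECTED QUERY findEmployeesCities;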
2 - ALTER statements
ALTER statements let you change existing database objects.
2.1 - ALTER ACCESS POLICY
Performs one of the following actions on existing access policies:
- Modify an access policy by changing its expression, and by enabling or disabling the policy.
- Copy an access policy from one table to another.
Syntax
Modify policy:
ALTER ACCESS POLICY ON [[database.]schema.]table
{ FOR COLUMN column [ expression ] | FOR ROWS [ WHERE expression ] } [ GRANT TRUSTED ] [ ENABLE | DISABLE ]
Copy policy:
ALTER ACCESS POLICY ON [[database.]schema.]table
{ FOR COLUMN column | FOR ROWS } COPY TO TABLE table;
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- The name of the table that contains the access policy you want to enable, disable, or copy.
FOR COLUMN column [ expression ]
- Replaces the access policy expression that was previously set for this column. Omit expression from the FOR COLUMN clause to enable or disable this policy only, or to copy it to another table.
FOR ROWS [ WHERE expression ]
- Replaces the row access policy expression that was previously set for this table. Omit WHERE expression from the FOR ROWS clause to enable or disable this policy only, or to copy it to another table.
GRANT TRUSTED
- Specifies that GRANT statements take precedence over the access policy in determining whether users can perform DML operations on the target table. If omitted, users can only modify table data if the access policy allows them to see the stored data in its original, unaltered state. For more information, see Access policies and DML operations.
Important
GRANT TRUSTED only affects DML operations and does not enable users to see data that the access policy would otherwise mask. Specifying this option may allow users with certain grants to update data that they cannot see.
ENABLE | DISABLE
- Indicates whether to enable or disable the access policy at the table level.
COPY TO TABLE table
- Copies the existing access policy to the specified table. The copied access policy includes its enabled/disabled and GRANT TRUSTED statuses.
The following requirements apply:
Privileges
Modify access policy
Non-superuser: Ownership of the table
Copy access policy
Non-superuser: Ownership of the source and destination tables
Examples
See Managing access policies.
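A minimal sketch, assuming table public.customer_dimension already has a column access policy on customer_address (the table and column names are hypothetical):
=> ALTER ACCESS POLICY ON public.customer_dimension FOR COLUMN customer_address DISABLE;
=> ALTER ACCESS POLICY ON public.customer_dimension FOR COLUMN customer_address ENABLE;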
See also
CREATE ACCESS POLICY
2.2 - ALTER ARCHIVE
Eon Mode only
Modifies an existing restore point archive in one of the following ways:
- Assign a new owner to the archive.
- Adjust the maximum number of restore points an archive can hold.
Syntax
ALTER ARCHIVE archive-name { OWNER TO user-name | LIMIT max-restore-points }
Arguments
archive-name
- Name of the archive to alter.
OWNER TO user-name
- Reassigns ownership of the archive to user-name.
LIMIT max-restore-points
- Alters the maximum number of restore points that can be stored in the archive. If an archive contains more restore points than the new max-restore-points limit, the oldest restore points are deleted until the number of restore points equals max-restore-points. The default value is 'unlimited'.
Privileges
Superuser or archive owner
Examples
Change the restore point limit of the vertica-restore
archive to 5:
=> ALTER ARCHIVE vertica-restore LIMIT 5;
See also
2.3 - ALTER AUTHENTICATION
Modifies the settings for a specified authentication method.
Syntax
ALTER AUTHENTICATION auth_record {
| { ENABLE | DISABLE }
| { LOCAL | HOST [ { TLS | NO TLS } ] host_ip_address }
| RENAME TO new_auth_record_name
| METHOD value
| SET param=value[,...]
| PRIORITY value
| [ [ NO ] FALLTHROUGH ]
}
Parameters
auth_record
- Name of the authentication method to alter. Type: VARCHAR.
ENABLE | DISABLE
- Enable or disable the specified authentication method. Default: enabled.
When you perform an upgrade and use Kerberos authentication, you must manually set the authentication to ENABLE because it is disabled by default.
LOCAL | HOST [ { TLS | NO TLS } ] host_ip_address
- Specify that the authentication method applies to local or remote (HOST) connections.
For authentication methods that use LDAP, specify whether LDAP uses Transport Layer Security (TLS).
For remote (HOST) connections, you must specify the IP address of the host from which the user or application is connecting (VARCHAR). Vertica supports IPv4 and IPv6 addresses.
RENAME TO new_auth_record_name
- Rename the authentication record. Type: VARCHAR.
METHOD value
- The authentication method you are altering.
SET param=value[,...]
- Set a parameter name and value for the authentication method that you are altering. This is required for LDAP, Ident, and OAuth authentication methods.
PRIORITY value
- If the user is associated with multiple authentication methods, the priority value specifies which authentication method Vertica tries first. Default: 0. Type: INTEGER.
Greater values indicate higher priorities. For example, a priority of 10 is higher than a priority of 5; priority 0 is the lowest possible value. For details, see Authentication record priority.
[ [ NO ] FALLTHROUGH ]
- Specifies whether to enable authentication fallthrough. For details, see Client authentication.
Privileges
Superuser
Examples
Enabling and Disabling Authentication Methods
This example uses ALTER AUTHENTICATION to disable the v_ldap
authentication method and then enable it again:
=> ALTER AUTHENTICATION v_ldap DISABLE;
=> ALTER AUTHENTICATION v_ldap ENABLE;
Renaming Authentication Methods
This example renames the v_kerberos
authentication method to K5
. All users who have been granted the v_kerberos
authentication method now have the K5
method granted instead.
=> ALTER AUTHENTICATION v_kerberos RENAME TO K5;
Modifying Authentication Parameters
This example sets the system user for ident1 authentication to user1:
=> CREATE AUTHENTICATION ident1 METHOD 'ident' LOCAL;
=> ALTER AUTHENTICATION ident1 SET system_users='user1';
When you set or modify LDAP or Ident parameters using ALTER AUTHENTICATION, Vertica validates them.
This example changes the IP address and specifies the parameters for an LDAP authentication method named Ldap1
. Specify the bind parameters for the LDAP server. Vertica connects to the LDAP server, which authenticates the database client. If authentication succeeds, Vertica authenticates any users who have been associated with (granted) the Ldap1
authentication method on the designated LDAP server:
=> CREATE AUTHENTICATION Ldap1 METHOD 'ldap' HOST '172.16.65.196';
=> ALTER AUTHENTICATION Ldap1 SET host='ldap://172.16.65.177',
binddn_prefix='cn=', binddn_suffix=',dc=qa_domain,dc=com';
The next example specifies the parameters for an LDAP authentication method named Ldap2
. Specify the LDAP search and bind parameters. Sometimes, Vertica does not have enough information to create the distinguished name (DN) for a user attempting to authenticate. In such cases, you must specify to use LDAP search and bind:
=> CREATE AUTHENTICATION Ldap2 METHOD 'ldap' HOST '172.16.65.196';
=> ALTER AUTHENTICATION Ldap2 SET basedn='dc=qa_domain,dc=com',
binddn='cn=Manager,dc=qa_domain,
dc=com',search_attribute='cn',bind_password='secret';
Changing the Authentication Method
This example changes the localpwd
authentication from hash to trust:
=> CREATE AUTHENTICATION localpwd METHOD 'hash' LOCAL;
=> ALTER AUTHENTICATION localpwd METHOD 'trust';
Set Multiple Realms
This example sets another realm for the authentication method krb_local:
=> ALTER AUTHENTICATION krb_local set realm = 'COMPANY.COM';
See also
2.4 - ALTER CA BUNDLE
Adds and removes certificates from or changes the owner of a certificate authority (CA) bundle.
Deprecated
CA bundles are only usable with certain deprecated parameters in
Kafka notifiers. You should prefer using
TLS configurations and the TLS CONFIGURATION parameter for notifiers instead.
Syntax
ALTER CA BUNDLE name
[ ADD CERTIFICATES ca_cert[, ...] ]
[ REMOVE CERTIFICATES ca_cert[, ...] ]
[ OWNER TO user ]
Parameters
name
- The name of the CA bundle.
ca_cert
- The name of the CA certificate to add or remove from the bundle.
user
- The name of a database user.
Privileges
Ownership of the CA bundle.
Examples
See Managing CA bundles.
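A minimal sketch, assuming a CA bundle named ca_bundle and a CA certificate named ca_cert2 already exist (both names are hypothetical):
=> ALTER CA BUNDLE ca_bundle ADD CERTIFICATES ca_cert2;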
See also
2.5 - ALTER DATA LOADER
Changes the properties of a data loader created with CREATE DATA LOADER.
Syntax
ALTER DATA LOADER [schema.]name {
SET TO copy-statement
| RETRY LIMIT { NONE | DEFAULT | limit }
| RETENTION INTERVAL monitoring-retention
| RENAME TO new-name
| TRIGGERED BY integration-json
| DROP TRIGGER [IF EXISTS]
| OWNER TO user
| ENABLE | DISABLE
}
Arguments
schema
- Schema containing the data loader. The default schema is public.
name
- Name of the data loader to alter.
SET TO
copy-statement
- The new COPY statement that the loader executes. The FROM clause typically uses a glob.
RETRY LIMIT { NONE | DEFAULT |
limit
}
- Maximum number of times to retry a failing file. Each time the data loader is executed, it attempts to load all files that have not yet been successfully loaded, up to this per-file limit. If set to DEFAULT, at load time the loader uses the value of the DataLoaderDefaultRetryLimit configuration parameter.
Default: DEFAULT
RETENTION INTERVAL
monitoring-retention
- How long to keep records in the events table. DATA_LOADER_EVENTS records events for all data loaders, but each data loader has its own retention interval.
Default: 14 days
RENAME TO
new-name
- New name for the data loader.
TRIGGERED BY
integration-json
- New external trigger for the data loader. For details on the JSON fields, see CREATE DATA LOADER. Before altering the trigger you must disable the data loader using DISABLE. After making the change, enable it again using ENABLE.
DROP TRIGGER [IF EXISTS]
- Remove the trigger from this loader. You must first disable the data loader using DISABLE. After making the change, enable it again using ENABLE.
If the data loader does not have a trigger and you do not include IF EXISTS, the statement returns an error.
OWNER TO
user
- New owner for the data loader.
ENABLE | DISABLE
- Whether the data loader can be executed. Calling EXECUTE DATA LOADER on a disabled data loader returns an error. When created, data loaders with triggers are disabled by default; all others are enabled by default. If a data loader has a trigger, you must disable it before dropping it or altering the trigger.
If a data loader is disabled while it is executing, Vertica finishes the current execution.
Privileges
Non-superuser:
- USAGE on the schema.
- Owner or ALTER privilege on the data loader.
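Examples
A minimal sketch, assuming a data loader named sales_loader already exists (the loader name and the new values are hypothetical):
=> ALTER DATA LOADER sales_loader RETRY LIMIT 3;
=> ALTER DATA LOADER sales_loader RENAME TO daily_sales_loader;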
See also
2.6 - ALTER DATABASE
Use ALTER DATABASE to perform tasks such as dropping fault groups, selecting the network to use for import and export, resetting standby nodes, and setting or clearing database-level configuration parameters.
To see the current value of a parameter, query system table CONFIGURATION_PARAMETERS or use SHOW DATABASE.
Syntax
ALTER DATABASE db-spec {
DROP ALL FAULT GROUP
| EXPORT ON { subnet-name | DEFAULT }
| RESET STANDBY
| SET [PARAMETER] parameter=value [,...]
| CLEAR [PARAMETER] parameter[,...]
}
Parameters
db-spec
- Specifies the database to alter: either the database name, or DEFAULT to specify the current database.
DROP ALL FAULT GROUP
- Drops all fault groups defined on the specified database.
EXPORT ON
- Specifies the network to use for importing and exporting data: either subnet-name (a subnet of the public network) or DEFAULT (the default subnet of the public network).
For details, see Identify the database or nodes used for import/export, and Changing node export addresses.
RESET STANDBY
- Enterprise Mode only, restores all down nodes and reverts their replacement nodes to standby status. If any replaced nodes cannot resume activity, Vertica leaves their standby nodes in place.
SET [PARAMETER]
- Sets the specified parameters.
CLEAR [PARAMETER]
- Resets the specified parameters to their default values.
Privileges
Superuser
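Examples
A minimal sketch of setting and then clearing a database-level configuration parameter on the current database; the parameter shown is only illustrative of the SET/CLEAR syntax:
=> ALTER DATABASE DEFAULT SET MaxClientSessions = 100;
=> ALTER DATABASE DEFAULT CLEAR MaxClientSessions;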
2.7 - ALTER FAULT GROUP
Modifies an existing fault group. ALTER FAULT GROUP can add nodes or child fault groups to a fault group, drop nodes or child fault groups from it, or rename the fault group.
Syntax
ALTER FAULT GROUP fault-group-name {
| ADD NODE node-name
| DROP NODE node-name
| ADD FAULT GROUP child-fault-group-name
| DROP FAULT GROUP child-fault-group-name
| RENAME TO new-fault-group-name }
Parameters
fault-group-name
- The existing fault group name you want to modify.
Tip
For a list of all fault groups defined in the cluster, query the
FAULT_GROUPS system table.
node-name
- The node name you want to add to or drop from the existing (parent) fault group.
child-fault-group-name
- The name of the child fault group you want to add to or remove from an existing parent fault group.
new-fault-group-name
- The new name for the fault group you want to rename.
Privileges
Superuser
Examples
This example shows how to rename the parent0 fault group to parent100:
=> ALTER FAULT GROUP parent0 RENAME TO parent100;
ALTER FAULT GROUP
Verify the change by querying the FAULT_GROUPS system table:
=> SELECT member_name FROM fault_groups;
member_name
----------------------
v_exampledb_node0003
parent100
mygroup
(3 rows)
See also
2.8 - ALTER FUNCTION statements
Vertica provides ALTER statements for each type of user-defined extension. Each ALTER statement modifies the metadata of a user-defined function in the Vertica catalog:
Vertica also provides ALTER FUNCTION (SQL), which modifies the metadata of a user-defined SQL function.
2.8.1 - ALTER AGGREGATE FUNCTION
Alters a user-defined aggregate function.
Syntax
ALTER AGGREGATE FUNCTION [[db-name.]schema.]function-name( [ parameter-list ] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the aggregate function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
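Examples
A minimal sketch, assuming a user-defined aggregate function ag_avg that takes one numeric argument (both names are hypothetical):
=> ALTER AGGREGATE FUNCTION ag_avg(numeric) RENAME TO agg_avg;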
See also
CREATE AGGREGATE FUNCTION
2.8.2 - ALTER ANALYTIC FUNCTION
Alters a user-defined analytic function.
Syntax
ALTER ANALYTIC FUNCTION [[db-name.]schema.]function-name( [ parameter-list ] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET FENCED boolean-expr
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET FENCED { true | false }
- Specifies whether to enable fenced mode for this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
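Examples
A minimal sketch, assuming a user-defined analytic function an_rank with no arguments and a schema named analytics (both names are hypothetical):
=> ALTER ANALYTIC FUNCTION an_rank() SET SCHEMA analytics;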
See also
CREATE ANALYTIC FUNCTION
2.8.3 - ALTER FILTER
Alters a user-defined filter.
Syntax
ALTER FILTER [[db-name.]schema.]function-name( [ parameter-list ] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET FENCED boolean-expr
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET FENCED { true | false }
- Specifies whether to enable fenced mode for this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
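Examples
A minimal sketch, assuming a user-defined filter named my_filter with no arguments (the name is hypothetical):
=> ALTER FILTER my_filter() SET FENCED true;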
See also
CREATE FILTER
2.8.4 - ALTER FUNCTION (scalar)
Alters a user-defined scalar function.
Syntax
ALTER FUNCTION [[db-name.]schema.]function-name( [ parameter-list] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET FENCED boolean-expr
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET FENCED { true | false }
- Specifies whether to enable fenced mode for this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
Examples
Rename function UDF_one to UDF_two:
=> ALTER FUNCTION UDF_one (int, int) RENAME TO UDF_two;
Move function UDF_two to schema macros:
=> ALTER FUNCTION UDF_two (int, int) SET SCHEMA macros;
Disable fenced mode for function UDF_two:
=> ALTER FUNCTION UDF_two (int, int) SET FENCED false;
See also
CREATE FUNCTION (scalar)
2.8.5 - ALTER FUNCTION (SQL)
Alters a user-defined SQL function.
Syntax
ALTER FUNCTION [[db-name.]schema.]function-name( [arg-list] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- The name of the SQL function to alter.
arg-list
- A comma-delimited list of function argument names. If none, specify an empty list.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
Examples
Rename function SQL_one to SQL_two:
=> ALTER FUNCTION SQL_one (int, int) RENAME TO SQL_two;
Move function SQL_two to schema macros:
=> ALTER FUNCTION SQL_two (int, int) SET SCHEMA macros;
Reassign ownership of SQL_two:
=> ALTER FUNCTION SQL_two (int, int) OWNER TO user1;
See also
2.8.6 - ALTER PARSER
Alters a user-defined parser.
Syntax
ALTER PARSER [[db-name.]schema.]function-name( [ parameter-list ] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET FENCED boolean-expr
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET FENCED { true | false }
- Specifies whether to enable fenced mode for this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
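Examples
A minimal sketch, assuming a user-defined parser named my_parser with no arguments and a user named dbuser1 (both names are hypothetical):
=> ALTER PARSER my_parser() OWNER TO dbuser1;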
See also
CREATE PARSER
2.8.7 - ALTER SOURCE
Alters a user-defined load source function.
Syntax
ALTER SOURCE [[db-name.]schema.]function-name( [ parameter-list ] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET FENCED boolean-expr
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET FENCED { true | false }
- Specifies whether to enable fenced mode for this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
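Examples
A minimal sketch, assuming a user-defined load source named curl_source with no arguments (the name is hypothetical):
=> ALTER SOURCE curl_source() RENAME TO curl;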
See also
CREATE SOURCE
2.8.8 - ALTER TRANSFORM FUNCTION
Alters a user-defined transform function.
Syntax
ALTER TRANSFORM FUNCTION [[db-name.]schema.]function-name( [ parameter-list ] ) {
OWNER TO new-owner
| RENAME TO new-name
| SET FENCED { true | false }
| SET SCHEMA new-schema
}
Parameters
[db-name.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
- Name of the function to alter.
parameter-list
- Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
Note
Vertica supports function overloading, and uses the parameter list to identify the function to alter.
OWNER TO
new-owner
- Transfers function ownership to another user.
RENAME TO
new-name
- Renames this function.
SET FENCED { true | false }
- Specifies whether to enable fenced mode for this function.
SET SCHEMA
new-schema
- Moves the function to another schema.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must also have the following schema privileges:
- RENAME TO (rename function): CREATE and USAGE
- SET SCHEMA (move function to another schema): CREATE on the destination schema and USAGE on the current schema
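Examples
A minimal sketch, assuming a user-defined transform function tokenize that takes a varchar argument and a schema named text_funcs (both names are hypothetical):
=> ALTER TRANSFORM FUNCTION tokenize(varchar) SET SCHEMA text_funcs;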
See also
CREATE TRANSFORM FUNCTION
2.9 - ALTER HCATALOG SCHEMA
Alters parameter values on a schema that was created with CREATE HCATALOG SCHEMA. HCatalog schemas are used by the HCatalog Connector to access data stored in a Hive data warehouse. For more information, see Using the HCatalog Connector.
Some parameters cannot be altered after creation. If you need to change one of those values, delete and recreate the schema instead. You can use ALTER HCATALOG SCHEMA to change the following parameters:
- HOSTNAME
- PORT
- HIVESERVER2_HOSTNAME
- WEBSERVICE_HOSTNAME
- WEBSERVICE_PORT
- WEBHDFS_ADDRESS
- HCATALOG_CONNECTION_TIMEOUT
- HCATALOG_SLOW_TRANSFER_LIMIT
- HCATALOG_SLOW_TRANSFER_TIME
- SSL_CONFIG
- CUSTOM_PARTITIONS
Syntax
ALTER HCATALOG SCHEMA schema-name SET [param=value]+;
Parameters
schema-name
- The name of the schema in the Vertica catalog to alter. The tables in the Hive database are available through this schema.
param
- The name of the parameter to alter.
value
- The new value for the parameter. You must specify a value; this statement does not read default values from configuration files like CREATE HCATALOG SCHEMA does.
Privileges
One of the following:
Examples
The following example shows how to change the Hive metastore hostname and port for the "hcat" schema. In this example, Hive uses a High Availability metastore.
=> ALTER HCATALOG SCHEMA hcat SET HOSTNAME='thrift://ms1.example.com:9083,thrift://ms2.example.com:9083';
The following example shows the error you receive if you try to set an unalterable parameter.
=> ALTER HCATALOG SCHEMA hcat SET HCATALOG_USER='admin';
ERROR 4856: Syntax error at or near "HCATALOG_USER" at character 39
2.10 - ALTER LIBRARY
Replaces the library file that is currently associated with a UDx library in the Vertica catalog. Vertica automatically distributes copies of the updated file to all cluster nodes. UDxs defined in the catalog that reference the updated library automatically start using the updated library file. A UDx is considered to be the same if its name and signature match.
The current and replacement libraries must be written in the same language.
Caution
If a UDx function that is present in the original library is not present in the updated library, it is automatically dropped. This can result in loss of data if that function is in use, for example if a table depends on it to populate a column.
Syntax
ALTER LIBRARY [[database.]schema.]name [DEPENDS 'depends-path'] AS 'path';
Arguments
schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
name
- The name of an existing library created with CREATE LIBRARY.
DEPENDS 'depends-path'
- Files or libraries on which this library depends: one or more files or directories on the initiator node file system or other supported file systems or object stores. For a directory, end the path entry with a slash (/), optionally followed by a wildcard (*). To specify more than one file, separate entries with colons (:).
If any path entry contains colons, such as a URI, place brackets around the entire DEPENDS path and use double quotes for the individual path elements, as in the following example:
DEPENDS '["s3://mybucket/gson-2.3.1.jar"]'
To specify libraries with multiple directory levels, see Multi-level Library Dependencies.
DEPENDS has no effect for libraries written in R. R packages must be installed locally on each node, including external dependencies.
Important
The performance of CREATE LIBRARY can degrade in Eon Mode, in proportion to the number and depth of dependencies specified by the DEPENDS clause.
AS
path
- The absolute path on the initiator node file system of the replacement library file.
Privileges
Superuser, or UDXDEVELOPER and CREATE on the schema. Non-superusers must explicitly enable the UDXDEVELOPER role. See CREATE LIBRARY for examples.
Multi-level library dependencies
If a DEPENDS clause specifies a library with multiple directory levels, Vertica follows the library path to include all subdirectories of that library. For example, the following CREATE LIBRARY statement enables the UDx library mylib to import all Python packages and modules that it finds in subdirectories of site-packages:
=> CREATE LIBRARY mylib AS '/path/to/python_udx' DEPENDS '/path/to/python/site-packages' LANGUAGE 'Python';
Important
DEPENDS can specify Java library dependencies that are up to 100 levels deep.
Examples
This example shows how to update an already-defined library named myFunctions
with a new file.
=> ALTER LIBRARY myFunctions AS '/home/dbadmin/my_new_functions.so';
See also
Developing user-defined extensions (UDxs)
2.11 - ALTER LOAD BALANCE GROUP
Changes the configuration of a load balance group.
Syntax
ALTER LOAD BALANCE GROUP group-name {
RENAME TO new-name |
SET FILTER TO 'ip-cidr-addr' |
SET POLICY TO 'policy' |
ADD {ADDRESS | FAULT GROUP | SUBCLUSTER} add-list |
DROP {ADDRESS | FAULT GROUP | SUBCLUSTER} drop-list
}
Parameters
group-name
- Name of an existing load balance group to change.
RENAME TO
new-name
- Renames the group to new-name.
SET FILTER TO 'ip-cidr-addr'
- An IPv4 or IPv6 CIDR to replace the existing IP address filter that selects which members of a fault group or subcluster to include in the load balance group. This setting is only valid if the load balance group contains fault groups or subclusters.
SET POLICY TO 'policy'
- Changes the policy the load balance group uses to select the target node for the incoming connection, one of ROUNDROBIN, RANDOM, or NONE. See CREATE LOAD BALANCE GROUP for details.
ADD {ADDRESS | FAULT GROUP | SUBCLUSTER }
- Adds objects of the specified type to the load balance group. Load balance groups can only contain one type of object. For example, if you created the load balance group using a list of addresses, you can only add additional addresses, not fault groups or subclusters.
add-list
- A comma-delimited list of objects (addresses, fault groups, or subclusters) to add to the load balance group.
DROP {ADDRESS | FAULT GROUP | SUBCLUSTER}
- Removes objects of the specified type from the load balance group (addresses, fault groups, or subclusters). The object type must match the type of the objects already in the load balance group.
drop-list
- The list of objects to remove from the load balance group.
Privileges
Superuser
Examples
Remove an address from the load balance group named group_2.
=> SELECT * FROM LOAD_BALANCE_GROUPS;
name | policy | filter | type | object_name
---------+------------+--------+-----------------------+-------------
group_1 | ROUNDROBIN | | Network Address Group | node01
group_1 | ROUNDROBIN | | Network Address Group | node02
group_2 | ROUNDROBIN | | Network Address Group | node03
(3 rows)
=> ALTER LOAD BALANCE GROUP group_2 DROP ADDRESS node03;
ALTER LOAD BALANCE GROUP
=> SELECT * FROM LOAD_BALANCE_GROUPS;
name | policy | filter | type | object_name
---------+------------+--------+-----------------------+-------------
group_1 | ROUNDROBIN | | Network Address Group | node01
group_1 | ROUNDROBIN | | Network Address Group | node02
group_2 | ROUNDROBIN | | Empty Group |
(3 rows)
The following example adds three network addresses to the group named group_2:
=> ALTER LOAD BALANCE GROUP group_2 ADD ADDRESS node01,node02,node03;
ALTER LOAD BALANCE GROUP
=> SELECT * FROM load_balance_groups WHERE name = 'group_2';
-[ RECORD 1 ]----------------------
name | group_2
policy | ROUNDROBIN
filter |
type | Network Address Group
object_name | node01
-[ RECORD 2 ]----------------------
name | group_2
policy | ROUNDROBIN
filter |
type | Network Address Group
object_name | node02
-[ RECORD 3 ]----------------------
name | group_2
policy | ROUNDROBIN
filter |
type | Network Address Group
object_name | node03
See also
2.12 - ALTER MODEL
Allows users to rename an existing model, change its ownership, or move it to another schema.
Syntax
ALTER MODEL [[database.]schema.]model
{ OWNER TO owner
| RENAME TO new-name
| SET SCHEMA schema
| { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES }
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
model
- Identifies the model to alter.
OWNER TO owner
- Reassigns ownership of this model to owner. Non-superusers must be the current owner.
RENAME TO new-name
- Renames the model, where new-name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
SET SCHEMA
schema
- Moves the model from one schema to another. If privilege inheritance is enabled with INCLUDE SCHEMA PRIVILEGES, the model inherits the privileges of its new parent schema.
[ { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES ]
- The INCLUDE and EXCLUDE parameters determine whether the model inherits the privileges of its parent schema, overriding the schema-level setting:
- INCLUDE SCHEMA PRIVILEGES: The model inherits the privileges of its parent schema. This parameter has no effect while privilege inheritance is disabled at the database level with DisableInheritedPrivileges.
- EXCLUDE SCHEMA PRIVILEGES: The model does not inherit the privileges of its parent schema.
The MATERIALIZE parameter converts inherited privileges into explicit grants on the model. If privilege inheritance is disabled at the database level with DisableInheritedPrivileges, MATERIALIZE converts the set of inherited privileges that would be on the model if privilege inheritance was enabled.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must have the following schema privileges:
- Rename model: CREATE and USAGE
- Move model to another schema: CREATE on the destination schema and USAGE on the current schema
Examples
To move a model to another schema:
=> ALTER MODEL my_kmeans_model SET SCHEMA clustering_models;
ALTER MODEL
To rename a model:
=> ALTER MODEL my_kmeans_model RENAME TO kmeans_model;
To change the owner of the model:
=> ALTER MODEL kmeans_model OWNER TO analytics_user;
2.13 - ALTER NETWORK ADDRESS
Changes the configuration of an existing network address.
Syntax
ALTER NETWORK ADDRESS name {
RENAME TO new-name
| SET TO 'ip-addr' [PORT port-number]
| { ENABLE | DISABLE }
}
Parameters
name
- Name of an existing network address to change.
RENAME TO new-name
- Renames the network address to new-name. This name change has no effect on the network address's membership in load balance groups.
SET TO 'ip-addr'
- Changes the IP address assigned to the network address.
PORT
port-number
- Sets the port number for the network address. You must supply a network address when altering the port number.
ENABLE | DISABLE
- Enables or disables the network address.
Examples
Rename the network address from test_addr to alt_node1, then change its IP address to 192.168.1.200 with port number 4000:
=> ALTER NETWORK ADDRESS test_addr RENAME TO alt_node1;
ALTER NETWORK ADDRESS
=> ALTER NETWORK ADDRESS alt_node1 SET TO '192.168.1.200' PORT 4000;
ALTER NETWORK ADDRESS
See also
2.14 - ALTER NETWORK INTERFACE
This statement has been deprecated.
Renames a network interface.
Syntax
ALTER NETWORK INTERFACE network-interface-name RENAME TO new-network-interface-name
Parameters
network-interface-name
- The name of the existing network interface.
new-network-interface-name
- The new name for the network interface.
Privileges
Superuser
Examples
Rename a network interface:
=> ALTER NETWORK INTERFACE myNetwork RENAME TO myNewNetwork;
2.15 - ALTER NODE
Sets and clears node-level configuration parameters on the specified node. ALTER NODE also performs the following management tasks:
For information about removing a node, see
Syntax
ALTER NODE node-name {
EXPORT ON { network-interface | DEFAULT }
| [IS] node-type
| REPLACE [ WITH standby-node ]
| RESET
| SET [PARAMETER] parameter=value[,...]
| CLEAR [PARAMETER] parameter[,...]
}
Parameters
node-name
- The name of the node to alter.
[IS] node-type
- Changes the node type, where node-type is one of the following:
  - PERMANENT (default): A node that stores data.
  - EPHEMERAL: A node that is in transition from one type to another, typically from PERMANENT to either STANDBY or EXECUTE.
  - STANDBY: A node that is reserved to replace any node when it goes down. A standby node stores no segments or data until it is called to replace a down node. When used as a replacement node, Vertica changes its type to PERMANENT. For more information, see Active standby nodes.
  - EXECUTE: A node that is reserved for computation purposes only. An execute node contains no segments or data.
Note
STANDBY and EXECUTE node types are supported only in Enterprise Mode.
EXPORT ON
- Specifies the network to use for importing and exporting data, one of the following:
  - network-interface: The name of a network interface of the public network.
  - DEFAULT: Use the default network interface of the public network, as specified by ALTER DATABASE.
REPLACE [WITH standby-node]
- Enterprise Mode only, replaces the specified node with an available active standby node. If you omit the WITH clause, Vertica tries to find a replacement node from the same fault group as the down node.
If you specify a node that is not down, Vertica ignores this statement.
RESET
- Enterprise Mode only, restores the specified down node and returns its replacement to standby status. If the down node cannot resume activity, Vertica ignores this statement and leaves the standby node in place.
SET [PARAMETER]
- Sets one or more configuration parameters to the specified value at the node level.
CLEAR [PARAMETER]
- Clears one or more specified configuration parameters.
Privileges
Superuser
Examples
Specify to use the default network interface of public network on v_vmart_node0001
for import/export operations:
=> ALTER NODE v_vmart_node0001 EXPORT ON DEFAULT;
Replace down node v_vmart_node0001
with an active standby node, then restore it:
=> ALTER NODE v_vmart_node0001 REPLACE WITH standby1;
...
=> ALTER NODE v_vmart_node0001 RESET;
Set and clear configuration parameter MaxClientSessions:
=> ALTER NODE v_vmart_node0001 SET MaxClientSessions = 0;
...
=> ALTER NODE v_vmart_node0001 CLEAR MaxClientSessions;
Set the node type as EPHEMERAL:
=> ALTER NODE v_vmart_node0001 IS EPHEMERAL;
2.16 - ALTER NOTIFIER
Updates an existing notifier.
Note
To change the action URL associated with an existing notifier, drop the notifier and recreate it.
Syntax
ALTER NOTIFIER notifier-name
[ ENABLE | DISABLE ]
[ MAXPAYLOAD 'max-payload-size' ]
[ MAXMEMORYSIZE 'max-memory-size' ]
[ TLS CONFIGURATION tls-configuration ]
[ TLSMODE 'tls-mode' ]
[ CA BUNDLE bundle-name [ CERTIFICATE certificate-name ] ]
[ IDENTIFIED BY 'uuid' ]
[ [NO] CHECK COMMITTED ]
[ PARAMETERS 'adapter-params' ]
Parameters
notifier-name
- Specifies the notifier to update.
[NO] CHECK COMMITTED
- Specifies to wait for delivery confirmation before sending the next message in the queue. Not all messaging systems support delivery confirmation.
ENABLE | DISABLE
- Specifies whether to enable or disable the notifier.
MAXPAYLOAD
- The maximum size of the message, up to 2 TB, specified in kilobytes, megabytes, gigabytes, or terabytes as follows:
MAXPAYLOAD integer{K|M|G|T}
The default setting is adapter-specific—for example, 1 M for Kafka.
Changes to this parameter take effect either after the notifier is disabled and reenabled or after the database restarts.
MAXMEMORYSIZE
- The maximum size of the internal notifier, up to 2 TB, specified in kilobytes, megabytes, gigabytes, or terabytes as follows:
MAXMEMORYSIZE integer{K|M|G|T}
If the queue exceeds this size, the notifier drops excess messages.
TLS CONFIGURATION tls-configuration
- The TLS CONFIGURATION to use for TLS.
Notifiers support the following TLS modes:
If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
- SNSCAFile or AWSCAFile
- SNSCAPath or AWSCAPath
- SNSEnableHttps
TLSMODE 'tls-mode'
- Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. If you use this parameter while the TLS CONFIGURATION parameter is not set, Vertica automatically creates a new TLS Configuration for the notifier that uses the same values as the deprecated parameter.
Specifies the type of connection between the notifier and an endpoint, one of the following:
- disable (default): Plaintext connection.
- verify-ca: Encrypted connection, and the server's certificate is verified as being signed by a trusted CA.
If you set this parameter to verify-ca, the generated TLS Configuration will be set to TRY_VERIFY, which has the same behavior as VERIFY_CA.
If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
- SNSCAFile or AWSCAFile
- SNSCAPath or AWSCAPath
- SNSEnableHttps
CA BUNDLE bundle-name
- Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. If you use this parameter while the TLS CONFIGURATION parameter is not set, Vertica automatically creates a new TLS Configuration for the notifier that uses the same values as the deprecated parameter.
Specifies a CA bundle. The certificates inside the bundle are used to validate the Kafka server's certificate if the TLSMODE requires it.
If a CA bundle is specified for a notifier that currently uses disable, which doesn't validate the Kafka server's certificate, the bundle will go unused when connecting to the Kafka server. This behavior persists unless the TLSMODE is changed to one that validates server certificates.
Changes to contents of the CA bundle take effect either after the notifier is disabled and re-enabled or after the database restarts. However, changes to which CA bundle the notifier uses take effect immediately.
If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
- SNSCAFile or AWSCAFile
- SNSCAPath or AWSCAPath
- SNSEnableHttps
CERTIFICATE certificate-name
- Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. If you use this parameter while the TLS CONFIGURATION parameter is not set, Vertica automatically creates a new TLS Configuration for the notifier that uses the same values as the deprecated parameter.
Specifies a client certificate for validation by the endpoint.
If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
- SNSCAFile or AWSCAFile
- SNSCAPath or AWSCAPath
- SNSEnableHttps
IDENTIFIED BY 'uuid'
- Specifies the notifier's unique identifier. If set, all the messages published by this notifier have this attribute.
PARAMETERS 'adapter-params'
- Specifies one or more optional adapter parameters that are passed as a string to the adapter. Adapter parameters apply only to the adapter associated with the notifier.
Changes to this parameter take effect either after the notifier is disabled and reenabled or after the database restarts.
For Kafka notifiers, refer to Kafka and Vertica configuration settings.
Privileges
Superuser
Encrypted notifiers for SASL_SSL Kafka configurations
Follow this procedure to create or alter notifiers for Kafka endpoints that use SASL_SSL. Note that you must repeat this procedure whenever you change the TLSMODE, certificates, or CA bundle for a given notifier.
1. Create a TLS Configuration with the desired TLS mode, certificate, and CA certificates.
2. Use CREATE or ALTER to disable the notifier and set the TLS Configuration:
=> ALTER NOTIFIER encrypted_notifier
    DISABLE
    TLS CONFIGURATION kafka_tls_config;
3. ALTER the notifier and set the proper rdkafka adapter parameters for SASL_SSL:
=> ALTER NOTIFIER encrypted_notifier PARAMETERS
  'sasl.username=user;sasl.password=password;sasl.mechanism=PLAIN;security.protocol=SASL_SSL';
4. Enable the notifier:
=> ALTER NOTIFIER encrypted_notifier ENABLE;
Examples
Update the settings on an existing notifier:
=> ALTER NOTIFIER my_dc_notifier
ENABLE
MAXMEMORYSIZE '2G'
IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
CHECK COMMITTED;
Add a TLS Configuration to a notifier. To create a custom TLS Configuration, see TLS configurations:
=> ALTER NOTIFIER my_notifier TLS CONFIGURATION notifier_tls_config;
See also
2.17 - ALTER PROCEDURE (stored)
Alters a stored procedure, retaining any existing grants.
Syntax
ALTER PROCEDURE procedure ( [ [ parameter_mode ] [ parameter ] parameter_type [, ...] ] )
[ SECURITY { INVOKER | DEFINER }
| RENAME TO new_procedure_name
| OWNER TO new_owner
| SET SCHEMA new_schema
| SOURCE TO new_source
]
Parameters
procedure
- The procedure to alter.
parameter_mode
- The IN and INOUT parameters of the stored procedure.
parameter
- The name of the parameter.
parameter_type
- The type of the parameter.
SECURITY { INVOKER | DEFINER }
- Specifies whether to execute the procedure with the privileges of the invoker or its definer (owner).
For details, see Executing stored procedures.
RENAME TO
new_procedure_name
- The new name for the procedure.
OWNER TO
new_owner
- The new owner (definer) of the procedure.
SET SCHEMA
new_schema
- The new schema of the procedure.
SOURCE TO
new_source
- The new procedure source code. For details, see Scope and structure.
Privileges
OWNER TO
Superuser
RENAME and SCHEMA TO
Non-superuser:
Other operations
Non-superuser: Ownership of the procedure
Examples
See Altering stored procedures.
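A minimal sketch, assuming a stored procedure raiseXY with signature (IN x INT, IN y VARCHAR) and a user u1 already exist (all names are hypothetical):
=> ALTER PROCEDURE raiseXY(INT, VARCHAR) RENAME TO raiseAB;
=> ALTER PROCEDURE raiseAB(INT, VARCHAR) OWNER TO u1;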
2.18 - ALTER PROFILE
Changes a profile. All parameters that are not set in a profile inherit their setting from the default profile. You can use ALTER PROFILE
to change the default profile.
Syntax
ALTER PROFILE name LIMIT [
PASSWORD_LIFE_TIME setting
PASSWORD_MIN_LIFE_TIME setting
PASSWORD_GRACE_TIME setting
FAILED_LOGIN_ATTEMPTS setting
PASSWORD_LOCK_TIME setting
PASSWORD_REUSE_MAX setting
PASSWORD_REUSE_TIME setting
PASSWORD_MAX_LENGTH setting
PASSWORD_MIN_LENGTH setting
PASSWORD_MIN_LETTERS setting
PASSWORD_MIN_UPPERCASE_LETTERS setting
PASSWORD_MIN_LOWERCASE_LETTERS setting
PASSWORD_MIN_DIGITS setting
PASSWORD_MIN_SYMBOLS setting
PASSWORD_MIN_CHAR_CHANGE setting ]
Parameters
Note
To reset a parameter to inherit from the default profile, set its value to default.
name
- The name of the profile to alter, where name conforms to conventions described in Identifiers. To modify the default profile, set name to default. For example:
ALTER PROFILE DEFAULT LIMIT PASSWORD_MIN_SYMBOLS 1;
PASSWORD_LIFE_TIME
- Set to an integer value, one of the following:
After your password's lifetime and grace period expire, you must change your password on your next login, if you have not done so already.
PASSWORD_MIN_LIFE_TIME
- Set to an integer value, one of the following:
PASSWORD_GRACE_TIME
- Set to an integer value, one of the following:
FAILED_LOGIN_ATTEMPTS
- Set to an integer value, one of the following:
PASSWORD_LOCK_TIME
- Set to one of the following:
  - ≥ 1: The number of days (units configurable with PasswordLockTimeUnit) a user's account is locked after FAILED_LOGIN_ATTEMPTS number of login attempts. The account is automatically unlocked when the lock time elapses.
  - UNLIMITED: Account remains indefinitely inaccessible until a superuser manually unlocks it.
PASSWORD_REUSE_MAX
- Set to an integer value, one of the following:
PASSWORD_REUSE_TIME
- Set to an integer value, one of the following:
PASSWORD_MAX_LENGTH
- The maximum number of characters allowed in a password, one of the following:
  - Integer between 8 and 512, inclusive
PASSWORD_MIN_LENGTH
- The minimum number of characters required in a password, one of the following:
PASSWORD_MIN_LETTERS
- Minimum number of letters (a-z and A-Z) that must be in a password, one of the following:
PASSWORD_MIN_UPPERCASE_LETTERS
- Minimum number of uppercase letters (A-Z) that must be in a password, one of the following:
PASSWORD_MIN_LOWERCASE_LETTERS
- Minimum number of lowercase letters (a-z) that must be in a password, one of the following:
PASSWORD_MIN_DIGITS
- Minimum number of digits (0-9) that must be in a password, one of the following:
PASSWORD_MIN_SYMBOLS
- Minimum number of symbols (printable non-letter and non-digit characters, such as $, #, @) that must be in a password, one of the following:
PASSWORD_MIN_CHAR_CHANGE
- Minimum number of characters that must be different from the previous password:
Privileges
Superuser
Profile settings and client authentication
The following profile settings affect client authentication methods, such as LDAP or GSS:
- FAILED_LOGIN_ATTEMPTS
- PASSWORD_LOCK_TIME
All other profile settings are used only by Vertica to manage its passwords.
Examples
ALTER PROFILE sample_profile LIMIT FAILED_LOGIN_ATTEMPTS 3;
See also
2.19 - ALTER PROFILE RENAME
Rename an existing profile.
Syntax
ALTER PROFILE name RENAME TO new-name;
Parameters
name
- The current name of the profile.
new-name
- The new name for the profile.
Privileges
Superuser
Examples
This example shows how to rename an existing profile.
ALTER PROFILE sample_profile RENAME TO new_sample_profile;
See also
2.20 - ALTER PROJECTION
Changes the DDL of the specified projection.
Syntax
ALTER PROJECTION [[database.]schema.]projection
{ RENAME TO new-name | ON PARTITION RANGE BETWEEN min-val AND max-val | { ENABLE | DISABLE } }
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
projection
- The projection to change, where projection can be one of the following:
  - Projection base name: Rename all projections that share this base name.
  - Projection name: Rename the specified projection and its base name. If the projection is segmented, its buddies are unaffected by this change.
See Projection naming for projection name conventions.
RENAME TO
new-name
- The new projection name.
ON PARTITION RANGE BETWEEN min-val AND max-val
- Limits projection data to a range of partition keys. Valid only for projections that were created with a partition range. The minimum value must be less than or equal to the maximum value.
Values can be NULL. A null minimum value indicates no lower bound and a null maximum value indicates no upper bound. If both are NULL, this statement drops the projection endpoints, producing a regular projection instead of a range projection.
If the new range of keys is outside the previous range, Vertica throws a warning that the projection is out of date and must be refreshed before it can be used.
For other requirements and usage details, see Partition range projections.
ENABLE | DISABLE
- Specifies whether to mark this projection as unavailable for queries on its anchor table. If a projection is the queried table's only superprojection, attempts to disable it return with a rollback message. ENABLE restores the projection's availability to query planning. You can also mark a projection as unavailable for individual queries using the hint SKIP_PROJS.
Default: ENABLE
Privileges
Non-superuser: CREATE and USAGE on the schema and one of the following anchor table privileges:
Syntactic sugar
The statement
=> ALTER PROJECTION foo REMOVE PARTITION RANGE;
has the same effect as
=> ALTER PROJECTION foo ON PARTITION RANGE BETWEEN NULL AND NULL;
Examples
=> SELECT export_tables('','public.store_orders');
export_tables
---------------------------------------------
CREATE TABLE public.store_orders
(
order_no int,
order_date timestamp NOT NULL,
shipper varchar(20),
ship_date date NOT NULL
);
(1 row)
=> CREATE PROJECTION store_orders_p AS SELECT * from store_orders;
CREATE PROJECTION
=> ALTER PROJECTION store_orders_p RENAME to store_orders_new;
ALTER PROJECTION
=> ALTER PROJECTION store_orders_new DISABLE;
=> SELECT * FROM store_orders_new;
ERROR 3586: Insufficient projections to answer query
DETAIL: No projections eligible to answer query
HINT: Projection store_orders_new not used in the plan because the projection is disabled.
=> ALTER PROJECTION store_orders_new ENABLE;
See also
CREATE PROJECTION
2.21 - ALTER RESOURCE POOL
Modifies an existing resource pool by setting one or more parameters.
You can use ALTER RESOURCE POOL to modify some parameters in Vertica built-in resource pools. For details on default settings and restrictions, see Built-in resource pools configuration.
Important
Changes to parameters of the built-in
GENERAL resource pool take effect only when the database restarts.
Syntax
ALTER RESOURCE POOL pool-name [ FOR subcluster ] parameter-name setting[...]
Arguments
pool-name
- Name of the resource pool to modify.
FOR subcluster
- Eon Mode only, the subcluster to associate with this resource pool, where subcluster is one of the following:
  - SUBCLUSTER subcluster-name: Resource pool for an existing subcluster. You cannot be connected to this subcluster, otherwise Vertica returns an error.
  - CURRENT SUBCLUSTER: Resource pool for the subcluster that you are connected to.
Important
You cannot use ALTER RESOURCE POOL to convert a global resource pool to a subcluster-level resource pool.
parameter-name setting
- A resource pool parameter and its new setting. To reset this parameter to its default value, specify DEFAULT.
If you specify a subcluster, you can alter only the MAXMEMORYSIZE, MAXQUERYMEMORYSIZE, and MEMORYSIZE parameters for built-in pools.
Parameters
-
CASCADE TO
Secondary resource pool for executing queries that exceed the
RUNTIMECAP
setting of their assigned resource pool:
CASCADE TO secondary-pool
-
CPUAFFINITYMODE
Specifies whether the resource pool has exclusive or shared use of the CPUs specified in
CPUAFFINITYSET
:
CPUAFFINITYMODE { SHARED | EXCLUSIVE | ANY }
SHARED
: Queries that run in this resource pool share its CPUAFFINITYSET
CPUs with other Vertica resource pools.
EXCLUSIVE
: Dedicates CPUAFFINITYSET
CPUs to this resource pool only, and excludes other Vertica resource pools. If CPUAFFINITYSET
is set as a percentage, then that percentage of CPU resources available to Vertica is assigned solely for this resource pool.
ANY
: Queries in this resource pool can run on any CPU, invalid if CPUAFFINITYSET
designates CPU resources.
Default: ANY
-
CPUAFFINITYSET
CPUs available to this resource pool. All cluster nodes must have the same number of CPUs. The CPU resources assigned to this set are unavailable to general resource pools.
CPUAFFINITYSET {
'cpu-index[,...]'
| 'cpu-indexi-cpu-indexn'
| 'integer%'
| NONE
}
cpu-index
[,...]
: Dedicates one or more comma-delimited CPUs to this resource pool.
cpu-indexi-cpu-indexn
: Dedicates a range of contiguous CPU indexes i
through n
to this resource pool.
integer
%
: Percentage of all available CPUs to use for this resource pool. Vertica rounds this percentage down to include whole CPU units.
NONE
(empty string): No affinity set is assigned to this resource pool. Queries associated with this pool are executed on any CPU.
Default: NONE
Important
CPUAFFINITYSET
and CPUAFFINITYMODE
must be set together in the same statement.
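For example, the following sketch, assuming a user-defined pool named user_0 already exists, dedicates CPUs 0 through 3 exclusively to that pool by setting both parameters in one statement:
=> ALTER RESOURCE POOL user_0 CPUAFFINITYSET '0-3' CPUAFFINITYMODE EXCLUSIVE;
ALTER RESOURCE POOL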
-
EXECUTIONPARALLELISM
Number of threads used to process any single query issued in this resource pool.
EXECUTIONPARALLELISM { limit | AUTO }
limit
: An integer value between 1 and the number of cores. Setting this parameter to a reduced value increases throughput of short queries issued in the resource pool, especially if queries are executed concurrently.
AUTO
or 0
: Vertica calculates the setting from the number of cores, available memory, and amount of data in the system. Unless memory is limited, or the amount of data is very small, Vertica sets this parameter to the number of cores on the node.
Default: AUTO
-
MAXCONCURRENCY
Maximum number of concurrent execution slots available to the resource pool across the cluster:
MAXCONCURRENCY { integer | NONE }
NONE
(empty string): Unlimited number of concurrent execution slots.
Default: NONE
-
MAXMEMORYSIZE
Maximum size per node the resource pool can grow by borrowing memory from the
GENERAL
pool:
MAXMEMORYSIZE {
'integer%'
| 'integer{K|M|G|T}'
| NONE
}
integer
%
: Percentage of total memory
integer
{K|M|G|T}
: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
NONE
(empty string): Unlimited, resource pool can borrow any amount of available memory from the GENERAL
pool.
Default: NONE
MAXQUERYMEMORYSIZE
Maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, Vertica stops execution and returns an error.
Set this parameter as follows:
MAXQUERYMEMORYSIZE {
'integer%'
| 'integer{K|M|G|T}'
| NONE
}
integer
%
: Percentage of
MAXMEMORYSIZE
for this resource pool.
integer
{K|M|G|T}
: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of
MAXMEMORYSIZE
.
NONE
(empty string): Unlimited; resource pool can borrow any amount of available memory from the GENERAL pool, within the limits set by
MAXMEMORYSIZE
.
Default: NONE
Important
Changes to MAXQUERYMEMORYSIZE
are applied retroactively to queries that are currently executing. If you reduce this setting, queries that were budgeted with the previous memory size are liable to fail if they try to allocate more memory than the new setting allows.
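For example, the following sketch caps per-query memory in a hypothetical pool named user_0 at 10 gigabytes:
=> ALTER RESOURCE POOL user_0 MAXQUERYMEMORYSIZE '10G';
ALTER RESOURCE POOL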
-
MEMORYSIZE
Total per-node memory available to the Vertica resource manager that is allocated to this resource pool:
MEMORYSIZE {
'integer%'
| 'integer{K|M|G|T}'
}
integer
%
: Percentage of total memory
integer
{K|M|G|T}
: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
Default: 0%. No memory allocated, the resource pool borrows memory from the
GENERAL
pool.
-
PLANNEDCONCURRENCY
Preferred number of queries to execute concurrently in the resource pool. This setting applies to the entire cluster:
PLANNEDCONCURRENCY { num-queries | AUTO }
-
num-queries
: Integer value ≥ 1, the preferred number of queries to execute concurrently in the resource pool. When possible, query resource budgets are limited to allow this level of concurrent execution.
-
AUTO
: Value is calculated automatically at query runtime. Vertica sets this parameter to the lower of these two calculations, but never less than 4:
-
Number of logical cores
-
Memory divided by 2GB
If the number of logical cores on each node is different, AUTO
is calculated differently for each node. Distributed queries run with the minimal effective planned concurrency among participating nodes. Single-node queries run with the planned concurrency of the initiator.
Default: AUTO
Tip
Change this parameter only after evaluating performance over a period of time.
-
PRIORITY
Priority of queries in this resource pool when they compete for resources in the
GENERAL
pool:
PRIORITY { integer | HOLD }
-
integer
: Negative or positive integer value, where higher numbers denote higher priority:
-
User-defined resource pool: -100
to 100
-
Built-in resource pools
SYSQUERY
,
RECOVERY
, and
TM
: -110
to 110
-
HOLD
: Sets priority to -999
. Queries in this resource pool are queued until
QUEUETIMEOUT
is reached.
Default: 0
-
QUEUETIMEOUT
Maximum time a request can wait for pool resources before it is rejected, not more than one year:
QUEUETIMEOUT { integer | 'interval' | 'NONE' }
-
integer
: Maximum wait time in seconds
-
interval: Maximum wait time expressed in the following format:
num year num months num [days] HH:MM:SS.ms
-
NONE
(empty string): No maximum wait time, request can be queued indefinitely, up to one year.
If the value that you specify resolves to more than one year, Vertica returns with a warning and sets the parameter to 365 days:
=> ALTER RESOURCE POOL user_0 QUEUETIMEOUT '11 months 50 days 08:32';
WARNING 5693: Using 1 year for QUEUETIMEOUT
ALTER RESOURCE POOL
=> SELECT QUEUETIMEOUT FROM resource_pools WHERE name = 'user_0';
QUEUETIMEOUT
--------------
365
(1 row)
Default: 00:05 (5 minutes)
-
RUNTIMECAP
Maximum execution time allowed to queries in this resource pool, not more than one year; otherwise, Vertica returns an error. If a query exceeds this setting, it tries to cascade to a secondary pool:
RUNTIMECAP { 'interval' | NONE }
-
interval
: Maximum wait time expressed in the following format:
num year num month num [day] HH:MM:SS.ms
-
NONE
(empty string): No limit on how long queries in this resource pool can execute.
If the user or session also has a RUNTIMECAP
, the shorter limit applies.
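For example, the following sketch limits queries in a hypothetical pool named user_0 to 30 minutes of execution time, and then removes the limit by resetting the parameter to NONE:
=> ALTER RESOURCE POOL user_0 RUNTIMECAP '30 minutes';
ALTER RESOURCE POOL
=> ALTER RESOURCE POOL user_0 RUNTIMECAP NONE;
ALTER RESOURCE POOL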
-
RUNTIMEPRIORITY
Determines how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool:
RUNTIMEPRIORITY { HIGH | MEDIUM | LOW }
Default: MEDIUM
-
RUNTIMEPRIORITYTHRESHOLD
Maximum time (in seconds) in which query processing must complete before the resource manager assigns the resource pool's RUNTIMEPRIORITY to it. All queries begin execution with a priority of HIGH.
RUNTIMEPRIORITYTHRESHOLD seconds
Default: 2
SINGLEINITIATOR
Set to false for backward compatibility. Do not change this setting.
Privileges
Superuser
Examples
Set resource pool PRIORITY to 5:
=> ALTER RESOURCE POOL ceo_pool PRIORITY 5;
Designate a secondary resource pool:
=> CREATE RESOURCE POOL second_pool;
=> ALTER RESOURCE POOL ceo_pool CASCADE TO second_pool;
Set the MEMORYSIZE and MAXMEMORYSIZE settings of the dashboard subcluster's built-in TM resource pool to 0%. Changing these settings to 0 prevents the subcluster from running mergeout operations:
=> ALTER RESOURCE POOL TM FOR SUBCLUSTER dashboard MEMORYSIZE '0%'
MAXMEMORYSIZE '0%';
See Tuning tuple mover pool settings for more information.
See also
2.22 - ALTER ROLE
Renames an existing role.
Renames an existing role.
Note
You cannot use ALTER ROLE to rename a role that was added to the Vertica database with the LDAPLink service.
Syntax
ALTER ROLE name RENAME TO new-name
Parameters
name
- The role to rename.
new-name
- The role's new name.
Privileges
Superuser
Examples
=> ALTER ROLE applicationadministrator RENAME TO appadmin;
ALTER ROLE
See also
2.23 - ALTER ROUTING RULE
Changes an existing load balancing policy routing rule.
Changes an existing load balancing policy routing rule.
Syntax
ALTER ROUTING RULE { rule_name | FOR WORKLOAD workload_name } {
RENAME TO new_name |
SET ROUTE TO 'cidr_range'|
SET GROUP TO group_name |
SET WORKLOAD TO workload_name |
SET SUBCLUSTER TO subcluster_name [,...] |
SET PRIORITY TO priority |
{ ADD | REMOVE } SUBCLUSTER subcluster_name [,...]
}
Parameters
rule_name
- The name of the existing routing rule to change.
FOR WORKLOAD
workload_name
- The name of a workload.
RENAME TO new_name
- The new name of the routing rule.
SET ROUTE TO '
cidr_range
'
- An IPv4 or IPv6 address range in CIDR format. Changes the address range of client connections this rule applies to.
SET GROUP TO
group_name
- The load balancing group that handles the connections that match this rule.
SET WORKLOAD TO
workload_name
- The name of the workload.
SET SUBCLUSTER TO
subcluster_name
- One or more subclusters to route clients to.
SET PRIORITY TO
priority
- Sets the priority for the routing rule, where
priority
is a non-negative INTEGER. The greater the value, the greater the priority. If a user or their role is granted multiple routing rules, the one with the greatest priority applies. You can set the priority for a routing rule only when applying a routing rule to a workload.
{ADD | REMOVE} SUBCLUSTER
subcluster_name
- Add or remove one or more subclusters from the routing rule.
Examples
This example changes the routing rule named etl_rule so it uses the load balancing group named etl_group to handle incoming connections in the IP address range of 10.20.100.0 to 10.20.100.255.
=> ALTER ROUTING RULE etl_rule SET GROUP TO etl_group;
ALTER ROUTING RULE
=> ALTER ROUTING RULE etl_rule SET ROUTE TO '10.20.100.0/24';
ALTER ROUTING RULE
=> \x
Expanded display is on.
=> SELECT * FROM routing_rules WHERE NAME = 'etl_rule';
-[ RECORD 1 ]----+---------------
name | etl_rule
source_address | 10.20.100.0/24
destination_name | etl_group
This example routes analytics
workloads to the sc_analytics_2
subcluster:
=> ALTER ROUTING RULE FOR WORKLOAD analytics SET SUBCLUSTER TO sc_analytics_2;
This example changes the workload rule to handle reporting
instead of analytics
workloads:
=> ALTER ROUTING RULE FOR WORKLOAD analytics SET WORKLOAD TO reporting;
This example changes the priority of the workload rule for reporting
workloads:
=> ALTER ROUTING RULE FOR WORKLOAD reporting SET PRIORITY TO 5;
This example adds a subcluster to the routing rule:
=> ALTER ROUTING RULE FOR WORKLOAD analytics ADD SUBCLUSTER sc_01, sc_02;
This example removes a subcluster from the routing rule:
=> ALTER ROUTING RULE FOR WORKLOAD analytics REMOVE SUBCLUSTER sc_01, sc_02;
See also
2.24 - ALTER SCHEDULE
Modifies a schedule.
Modifies a schedule.
Syntax
ALTER SCHEDULE [[database.]schema.]schedule {
OWNER TO new_owner
| SET SCHEMA new_schema
| RENAME TO new_schedule
| USING CRON new_cron_expression
| USING DATETIMES new_timestamp_list
}
Arguments
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
schedule
- The schedule to modify.
new_owner
- The new owner of the schedule.
new_schema
- The new schema of the schedule.
new_schedule
- The new name for the schedule.
new_cron_expression
- A
cron
expression. You should use this for recurring tasks. Value separators are not currently supported.
new_timestamp_list
- A comma-separated list of timestamps. You should use this to schedule non-recurring events at arbitrary times.
Privileges
Superuser
Examples
To change the cron
expression for a schedule:
=> ALTER SCHEDULE daily_schedule USING CRON '0 8 * * *';
To change a schedule that uses a cron
expression to use a timestamp list instead:
=> ALTER SCHEDULE my_schedule USING DATETIMES('2023-10-01 12:30:00', '2022-11-01 12:30:00');
To rename a schedule:
=> ALTER SCHEDULE daily_schedule RENAME TO daily_8am_gmt;
2.25 - ALTER SCHEMA
Changes one or more schemas in one of the following ways:.
Changes one or more schemas in one of the following ways:
-
Enable or disable inheritance of schema privileges by tables created in the schemas.
-
Reassign schema ownership to another user.
-
Change schema disk quota.
-
Rename one or more schemas.
Syntax
ALTER SCHEMA [{ namespace. | database. }]schema
DEFAULT {INCLUDE | EXCLUDE} SCHEMA PRIVILEGES
| OWNER TO user-name [CASCADE]
| DISK_QUOTA { value | SET NULL }
You can rename more than one schema in a single operation:
ALTER SCHEMA [{ namespace. | database. }]schema[,...] RENAME TO [namespace.]new-schema-name[,...]
Parameters
namespace
- For Eon Mode databases, namespace of the schema to alter. If no namespace is specified, the namespace of the schema is assumed to be
default_namespace
.
database
- For Enterprise Mode databases, name of the database containing the schema. If specified, it must be the current database.
schema
- Name of the schema to modify.
DEFAULT {INCLUDE | EXCLUDE} SCHEMA PRIVILEGES
Specifies whether to enable or disable default inheritance of privileges for new tables in the specified schema:
-
EXCLUDE SCHEMA PRIVILEGES
(default): Disables inheritance of schema privileges.
-
INCLUDE SCHEMA PRIVILEGES
: Specifies to grant tables in the specified schema the same privileges granted to that schema. This option has no effect on existing tables in the schema.
See also Enabling schema inheritance.
OWNER TO
- Reassigns schema ownership to the specified user:
OWNER TO user-name [CASCADE]
By default, ownership of objects in the reassigned schema remains unchanged. To reassign ownership of schema objects to the new schema owner, qualify the OWNER TO clause with CASCADE. For details, see Cascading Schema Ownership below.
DISK_QUOTA
- One of the following:
-
A string, an integer followed by a supported unit: K, M, G, or T. If the new value is smaller than the current usage, the operation succeeds but no further disk space can be used until usage is reduced below the new quota.
-
SET NULL to remove a quota.
For more information, see Disk quotas.
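For example, the following sketch, assuming schema s1 exists, sets a 10-terabyte quota and then removes it:
=> ALTER SCHEMA s1 DISK_QUOTA '10T';
ALTER SCHEMA
=> ALTER SCHEMA s1 DISK_QUOTA SET NULL;
ALTER SCHEMA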
RENAME TO
- Renames one or more schemas:
RENAME TO [namespace.]new-schema-name[,...]
The following requirements apply:
-
The new schema name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, models, and schemas in the namespace.
-
If you specify multiple schemas to rename, the source and target lists must have the same number of names.
-
If you specified the namespace
of the schema to alter, you must provide the same namespace for the new-schema-name
. You cannot rename a schema to a different namespace.
Caution
Renaming a schema referenced by a view causes the view to fail unless another schema is created to replace it.
Privileges
One of the following:
Cascading schema ownership
By default, ALTER SCHEMA...OWNER TO does not affect ownership of objects in the target schema or the privileges granted on them. If you qualify the OWNER TO clause with CASCADE, Vertica also transfers ownership of objects in the target schema to the new owner.
If issued by non-superusers, ALTER SCHEMA...OWNER TO CASCADE ignores all objects that belong to other users and returns with notices on the objects that it cannot change. For example:
-
Schema ms
is owned by user mayday
, and contains two tables: ms.t1
owned by mayday, and ms.t2
owned by user joe
:
=> \dt
List of tables
Schema | Name | Kind | Owner | Comment
----------------+-----------------------+-------+---------+---------
ms | t1 | table | mayday |
ms | t2 | table | joe |
-
User mayday
transfers ownership of schema ms
to user dbadmin
, using CASCADE. On return, ALTER SCHEMA reports that it cannot transfer ownership of table ms.t2
and its projections, which are owned by user joe
:
=> \c - mayday
You are now connected as user "mayday".
=> ALTER SCHEMA ms OWNER TO dbadmin CASCADE;
NOTICE 3583: Insufficient privileges on ms.t2
NOTICE 3583: Insufficient privileges on ms.t2_b0
NOTICE 3583: Insufficient privileges on ms.t2_b1
ALTER SCHEMA
=> \c
You are now connected as user "dbadmin".
=> \dt
List of tables
Schema | Name | Kind | Owner | Comment
----------------+-----------------------+-------+---------+---------
ms | t1 | table | dbadmin |
ms | t2 | table | joe |
-
User dbadmin
transfers ownership of schema ms
to user pat
, again using CASCADE. This time, because dbadmin
is a superuser, ALTER SCHEMA transfers ownership of all ms
tables to user pat:
=> ALTER SCHEMA ms OWNER TO pat CASCADE;
ALTER SCHEMA
=> \dt
List of tables
Schema | Name | Kind | Owner | Comment
----------------+-----------------------+-------+---------+---------
ms | t1 | table | pat |
ms | t2 | table | pat |
Swapping schemas
Renaming schemas is useful for swapping schemas without actually moving data. To facilitate the swap, enter a non-existent, temporary placeholder schema. For example, the following ALTER SCHEMA
statement uses the temporary schema temps
to facilitate swapping schema S1 with schema S2. In this example, S1
is renamed to temps
. Then S2
is renamed to S1
. Finally, temps
is renamed to S2
.
=> ALTER SCHEMA S1, S2, temps RENAME TO temps, S1, S2;
Examples
The following example renames schemas S1
and S2
to S3
and S4
, respectively:
=> ALTER SCHEMA S1, S2 RENAME TO S3, S4;
This example sets the default behavior for new table t2
to automatically inherit the schema's privileges:
=> ALTER SCHEMA s1 DEFAULT INCLUDE SCHEMA PRIVILEGES;
=> CREATE TABLE s1.t2 (i int);
This example sets the default for new tables to not automatically inherit privileges from the schema:
=> ALTER SCHEMA s1 DEFAULT EXCLUDE SCHEMA PRIVILEGES;
In an Eon Mode database, the following statement renames the train
schema to ferry
in the transit
namespace:
=> ALTER SCHEMA transit.train RENAME TO transit.ferry;
See also
2.26 - ALTER SEQUENCE
Changes a sequence in two ways:.
Changes a sequence in two ways:
- Changes values that control sequence behavior—for example, its start value and range of minimum and maximum values. These changes take effect only when you start a new database session.
- Changes sequence name, schema, or ownership. These changes take effect immediately.
Syntax
Change sequence behavior:
ALTER SEQUENCE [[database.]schema.]sequence
[ INCREMENT [ BY ] integer ]
[ MINVALUE integer | NO MINVALUE ]
[ MAXVALUE integer | NO MAXVALUE ]
[ RESTART [ WITH ] integer ]
[ CACHE integer | NO CACHE ]
[ CYCLE | NO CYCLE ]
Change sequence name, schema, or ownership:
ALTER SEQUENCE [schema.]sequence-name {
RENAME TO name
| SET SCHEMA schema
| OWNER TO owner
}
Arguments
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
sequence
- Name of the sequence to alter.
In the case of IDENTITY table columns, Vertica generates the sequence name using the following convention:
table-name_col-name_seq
To obtain this name, query the SEQUENCES system table.
INCREMENT
integer
Positive or negative minimum value change on each call to NEXTVAL. The default is 1.
This value is a minimum increment size. The sequence can increment by more than this value unless you also specify NO CACHE
.
MINVALUE
integer
| NO MINVALUE
- Minimum value the sequence can generate. If this change would invalidate the current sequence position, the operation fails.
MAXVALUE
integer
| NO MAXVALUE
- Maximum value the sequence can generate. If this change would invalidate the current sequence position, the operation fails.
RESTART
integer
- New start value of the sequence. The next call to NEXTVAL returns the new start value.
Caution
Using ALTER SEQUENCE to set a sequence start value below its
current value can result in duplicate keys.
CACHE | NO CACHE
How many unique sequence numbers to pre-allocate and store in memory on each node for faster access. Vertica sets up caching for each session and distributes it across all nodes. A value of 0 or 1 is equivalent to NO CACHE
.
Caution
If sequence caching is set to a low number, nodes might request a new set of cache values more frequently. While it supplies a new cache, Vertica must lock the catalog. Until Vertica releases the lock, other database activities such as table inserts are blocked, which can adversely affect overall performance.
By default, the sequence cache is set to 250,000.
For details, see Distributing sequences.
CYCLE | NO CYCLE
- How to handle reaching the end of the sequence. The default is
NO CYCLE
, meaning that a call to NEXTVAL returns an error after the sequence reaches its upper or lower limit. CYCLE
instead wraps, as follows:
- Ascending sequence: wraps to the minimum value.
- Descending sequence: wraps to the maximum value.
RENAME TO
name
- New name in the current schema for the sequence. Supported only for named sequences.
SET SCHEMA
schema
- Moves the sequence to a new schema. Supported only for named sequences.
OWNER TO
owner
- Reassigns sequence ownership to another user.
Privileges
For named sequences, USAGE on the schema and one of the following:
-
Sequence owner
-
ALTER privilege on the sequence
-
For certain operations, non-superusers must have the following schema privileges:
- To rename a sequence: CREATE, USAGE
- To move a sequence to another schema: CREATE on the destination, USAGE on the source
For IDENTITY column sequences, USAGE on the table schema and one of the following:
-
Table owner
-
ALTER privileges
Non-superusers must also have SELECT privileges to enable or disable constraint enforcement, or remove partitioning.
Examples
See Altering sequences.
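As a quick sketch, assuming a named sequence my_seq exists, the following statements change the sequence increment and then rename it. Behavioral changes such as INCREMENT take effect only in new database sessions:
=> ALTER SEQUENCE my_seq INCREMENT BY 10;
ALTER SEQUENCE
=> ALTER SEQUENCE my_seq RENAME TO order_id_seq;
ALTER SEQUENCE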
See also
CREATE SEQUENCE
2.27 - ALTER SESSION
ALTER SESSION sets and clears session-level configuration parameter values for the current session.
ALTER SESSION sets and clears session-level configuration parameter values for the current session. To identify session-level parameters, query the CONFIGURATION_PARAMETERS system table.
Syntax
ALTER SESSION {
SET [PARAMETER] parameter=value[,...]
| CLEAR { [PARAMETER] parameter[,...] | PARAMETER ALL }
| SET UDPARAMETER [ FOR libname ] ud-parameter=value[,...]
| CLEAR UDPARAMETER { [ FOR libname ] ud-parameter[,...] | ALL }
}
Arguments
SET [PARAMETER]
parameter
=
value
- Sets one or more configuration parameters to the specified value.
CLEAR { [PARAMETER]
parameter
| PARAMETER ALL }
- Clears the specified configuration parameters, or all parameters, of changes that were set in the current session.
SET UDPARAMETER [ FOR
libname
]
ud-parameter
=
value
[,...]
- Sets one or more user-defined session parameters to be used with UDxs. The maximum length of a parameter name when set with ALTER SESSION is 128 characters.
If you specify a library, then only that library can access the parameter's value. Use this restriction to protect parameters that hold sensitive data, such as credentials.
CLEAR UDPARAMETER { [ FOR
libname
]
ud-parameter
[,...] | ALL }
- Clears the specified user-defined parameters, or all of them, of changes that were set in the current session.
Privileges
None
Examples
Set a configuration parameter:
=> ALTER SESSION SET ForceUDxFencedMode = 1;
ALTER SESSION
=> SELECT parameter_name, current_value, default_value
FROM CONFIGURATION_PARAMETERS
WHERE parameter_name = 'ForceUDxFencedMode';
parameter_name | current_value | default_value
--------------------+---------------+---------------
ForceUDxFencedMode | 1 | 0
(1 row)
Clear all session-level settings for configuration parameters:
=> ALTER SESSION CLEAR PARAMETER ALL;
ALTER SESSION
Set a UDx parameter in a single library:
=> ALTER SESSION SET UDPARAMETER FOR MyLibrary RowCount = 25;
ALTER SESSION
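To clear that user-defined parameter later in the same session, using the same hypothetical library, a statement like the following works:
=> ALTER SESSION CLEAR UDPARAMETER FOR MyLibrary RowCount;
ALTER SESSION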
2.28 - ALTER SUBCLUSTER
Changes the configuration of a subcluster.
Changes the configuration of a subcluster. You can use this statement to rename a subcluster or make it the default subcluster.
Syntax
ALTER SUBCLUSTER subcluster-name {
RENAME TO new-name |
SET DEFAULT
}
Parameters
subcluster-name
- The name of the subcluster to alter.
RENAME TO
new-name
- Changes the name of the subcluster to
new-name
.
SET DEFAULT
- Makes the subcluster the default subcluster. When you add new nodes to the database and do not specify a subcluster to contain them, Vertica adds them to the default subcluster. There can be only one default subcluster at a time. The subcluster that was previously the default subcluster becomes a non-default subcluster.
Privileges
Superuser
Examples
This example makes the analytics_cluster the default subcluster:
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
subcluster_name
--------------------
default_subcluster
(1 row)
=> ALTER SUBCLUSTER analytics_cluster SET DEFAULT;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
subcluster_name
-------------------
analytics_cluster
(1 row)
This example renames default_subcluster to load_subcluster:
=> ALTER SUBCLUSTER default_subcluster RENAME TO load_subcluster;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM subclusters;
subcluster_name
-------------------
load_subcluster
analytics_cluster
(2 rows)
See also
2.29 - ALTER SUBNET
Renames an existing subnet.
Renames an existing subnet.
Syntax
ALTER SUBNET subnet-name RENAME TO new-subnet-name
Parameters
subnet-name
- The name of the existing subnet.
new-subnet-name
- The new name for the subnet.
Privileges
Superuser
Examples
=> ALTER SUBNET mysubnet RENAME TO myNewSubnet;
2.30 - ALTER TABLE
Modifies the metadata of an existing table.
Modifies the metadata of an existing table. All changes are auto-committed.
Syntax
ALTER TABLE [[{ namespace. | database. } ]schema.]table {
ADD COLUMN [ IF NOT EXISTS ] column datatype
[ column-constraint ]
[ ENCODING encoding-type ]
[ PROJECTIONS (projections-list) | ALL PROJECTIONS ]
[, ...]
| ADD table-constraint
| ALTER COLUMN column {
ENCODING encoding-type PROJECTIONS (projection-list)
| { SET | DROP } expression }
| ALTER CONSTRAINT constraint-name { ENABLED | DISABLED }
| DISK_QUOTA { value | SET NULL }
| DROP CONSTRAINT constraint-name [ CASCADE | RESTRICT ]
| DROP [ COLUMN ] [ IF EXISTS ] column [ CASCADE | RESTRICT ]
| FORCE OUTER integer
| { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES
| OWNER TO owner
| partition-clause [ REORGANIZE ]
| REMOVE PARTITIONING
| RENAME [ COLUMN ] name TO new-name
| RENAME TO new-table-name[,...]
| REORGANIZE
| SET {
ActivePartitionCount { count | DEFAULT }
| IMMUTABLE ROWS
| MERGEOUT { 1 | 0 }
| SCHEMA [namespace.]schema }
}
Note
Several ALTER TABLE clauses cannot be specified with other clauses in the same statement (see Exclusive ALTER TABLE Clauses below). Otherwise, ALTER TABLE supports multiple comma-delimited clauses. For example, the following ALTER TABLE statement changes the my_table
table in two ways: reassigns ownership to Joe
, and sets a UNIQUE constraint on the b
column:
=> ALTER TABLE my_table OWNER TO Joe,
ADD CONSTRAINT unique_b UNIQUE (b) ENABLED;
Arguments
namespace
- For Eon Mode databases, namespace that contains the table to alter. If no namespace is specified, the namespace of the table is assumed to be
default_namespace
.
database
- For Enterprise Mode databases, name of the database containing the table. If specified, it must be the current database.
schema
- Name of the schema that contains the table to alter.
table
- The table to alter.
-
ADD COLUMN
- Adds one or more columns to the table and to the specified projections (or all projections, if not specified).
Adding a column with the same name as one that already exists is an error unless you use IF NOT EXISTS.
Restrictions on columns of complex types also apply to columns that you add using ADD COLUMN.
You can qualify the new column definition with one or more of these options:
-
A column constraint:
[ CONSTRAINT constraint-name ]
{ [NOT] NULL
| DEFAULT expression
| SET USING expression
| DEFAULT USING expression }
Columns that use DEFAULT with static values can be added in a single ALTER TABLE statement. Columns that use non-static DEFAULT values must be added in separate ALTER TABLE statements. For more information, see Column-constraint.
-
ENCODING
: the column encoding type, by default AUTO
.
-
PROJECTIONS
: a comma-separated list of base names of existing projections to add the column to. Vertica adds the column to all buddies of each projection. The projection list cannot include projections with pre-aggregated data such as live aggregate projections. Instead of specifying projections, you can use ALL PROJECTIONS
to add the column to all projections, excluding those with pre-aggregated data.
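For example, the following sketch adds a column to the public.store_orders table shown earlier and, because no projection list is given, to all of its projections:
=> ALTER TABLE public.store_orders ADD COLUMN IF NOT EXISTS expected_ship_date date;
ALTER TABLE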
-
ADD table-constraint
- Adds a constraint to a table that does not have any associated projections.
-
ALTER COLUMN
- You can alter an existing column in one of two ways:
-
Set encoding on a column for one or more projections of this table:
ENCODING encoding-type PROJECTIONS (projections-list)
where projections-list
is a comma-delimited list of projections to update with the new encoding. You can specify each projection in two ways:
-
Projection base name: Update all projections that share this base name.
-
Projection name: Update the specified projection. If the projection is segmented, the change is propagated to all buddies.
If one of the projections does not contain the target column, Vertica returns with a rollback error.
For details, see Projection Column Encoding.
-
Set or drop a setting for a column of scalar data, including primitive arrays:
SET { DEFAULT expression
| USING expression
| DEFAULT USING expression
| NOT NULL
| DATA TYPE datatype
}
DROP { DEFAULT
| SET USING
| DEFAULT USING
| NOT NULL
}
You cannot change the data type of a column of any complex type that is neither a scalar type nor an array of scalar types. One exception applies: in external tables, you can change a primitive column type to a complex type.
Setting a DEFAULT or SET USING expression has no effect on existing column values. To refresh the column with its DEFAULT or SET USING expression, update it as follows:
-
SET USING column: Call REFRESH_COLUMNS on the table.
-
DEFAULT column: update the column as follows:
UPDATE table-name SET column-name=DEFAULT;
Altering a column with DEFAULT or SET USING can increase disk usage, which can cause the operation to fail if it would violate the table or schema disk quota.
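For example, a sketch using the public.store_orders table from earlier: set a DEFAULT expression on an existing column, then refresh existing rows so they pick up the new default:
=> ALTER TABLE public.store_orders ALTER COLUMN shipper SET DEFAULT 'Standard';
ALTER TABLE
=> UPDATE public.store_orders SET shipper = DEFAULT;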
-
ALTER CONSTRAINT
- Specifies whether to enforce primary key, unique key, and check constraints:
ALTER CONSTRAINT constraint-name {ENABLED | DISABLED}
DISK_QUOTA
- One of the following:
-
A string, an integer followed by a supported unit: K, M, G, or T. If the new value is smaller than the current usage, the operation succeeds but no further disk space can be used until usage is reduced below the new quota.
-
SET NULL to remove a quota.
For more information, see Disk quotas.
-
DROP CONSTRAINT
- Drops the specified table constraint from the table:
DROP CONSTRAINT constraint-name [CASCADE | RESTRICT]
You can qualify DROP CONSTRAINT with CASCADE or RESTRICT.
Dropping a table constraint has no effect on views that reference the table.
-
DROP [COLUMN]
- Drops the specified column from the table and that column's ROS containers:
DROP [COLUMN] [IF EXISTS] column [CASCADE | RESTRICT]
You can qualify DROP COLUMN with one of these options:
-
IF EXISTS generates an informational message if the column does not exist. If you omit this option and the column does not exist, Vertica generates a ROLLBACK error message.
-
CASCADE is required if the column has dependencies.
-
RESTRICT drops the column only from the given table.
The column's table cannot be immutable.
See Dropping table columns.
FORCE OUTER
integer
- Specifies whether a table is joined to another as an inner or outer input. For details, see Controlling join inputs.
{INCLUDE | EXCLUDE | MATERIALIZE} [SCHEMA] PRIVILEGES
- Specifies default inheritance of schema privileges for this table:
-
EXCLUDE PRIVILEGES (default) disables inheritance of privileges from the schema.
-
INCLUDE PRIVILEGES grants the table the same privileges granted to its schema.
-
MATERIALIZE PRIVILEGES copies grants to the table and creates a GRANT object on the table. This disables the inherited privileges flag on the table, so you can:
-
Grant more specific privileges at the table level.
-
Use schema-level privileges as a template.
-
Move the table to a different schema.
-
Change schema privileges without affecting the table.
See also Setting privilege inheritance on tables and views.
OWNER TO
owner
- Changes the table owner.
-
partition-clause [REORGANIZE]
- Invalid for external tables, logically divides table data storage through a PARTITION BY clause:
PARTITION BY expression
[ GROUP BY expression ]
[ SET ACTIVEPARTITIONCOUNT integer ]
For details, see Partition clause.
If you qualify the partition clause with REORGANIZE and the table previously specified no partitioning, the Vertica Tuple Mover immediately implements the partition clause. If the table previously specified partitioning, the Tuple Mover evaluates ROS storage containers and reorganizes them as needed to conform with the new partition clause.
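For example, the following sketch partitions the public.store_orders table from earlier by order year and asks the Tuple Mover to reorganize existing storage immediately:
=> ALTER TABLE public.store_orders PARTITION BY EXTRACT(YEAR FROM order_date) REORGANIZE;
ALTER TABLE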
REMOVE PARTITIONING
- Specifies to remove partitioning from a table definition. The Tuple Mover subsequently removes existing partitions from ROS containers.
-
RENAME [COLUMN]
- Renames the specified column within the table. The column's table cannot be immutable.
-
RENAME TO
- Renames one or more tables:
RENAME TO new-table-name[,...]
The following requirements apply:
-
The renamed table must be in the same schema as the original table.
-
The new table name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
-
If you specify multiple tables to rename, the source and target lists must have the same number of names.
-
REORGANIZE
- Valid only for partitioned tables, invokes the Tuple Mover to reorganize ROS storage containers as needed to conform with the table's current partition clause. ALTER TABLE...REORGANIZE and Vertica meta-function PARTITION_TABLE operate identically.
REORGANIZE can also qualify a new partition clause.
SET
- Changes a table setting, one of the following:
-
ActivePartitionCount { count
| DEFAULT }, valid only for partitioned tables, specifies how many partitions are active for this table, one of the following:
-
count
: Unsigned integer, supersedes configuration parameter ActivePartitionCount.
-
DEFAULT: Removes the table-level active partition count. The table obtains its active partition count from the configuration parameter ActivePartitionCount.
For details on usage, see Active and inactive partitions.
-
IMMUTABLE ROWS prevents changes to table row values by blocking DML operations such as UPDATE and DELETE. Once set, table immutability cannot be reverted.
You cannot set a flattened table to be immutable. For details on all immutable table restrictions, see Immutable tables.
-
MERGEOUT { 1 | 0 } specifies whether to enable or disable mergeout to ROS containers that consolidate projection data of this table. By default, mergeout is enabled (1) on all tables.
-
SCHEMA [namespace.
]schema-name
moves the table from its current schema to [namespace.
]schema-name
. For Eon Mode databases, the new schema must be in the same namespace as the old schema. When no namespace
is provided, the namespace is set to default_namespace
.
Vertica automatically moves all projections that are anchored to the source table to the destination schema. It also moves all IDENTITY columns to the destination schema. For details, see Moving tables to another schema
Privileges
Non-superuser: USAGE on the schema and one of the following:
-
Table owner
-
ALTER privileges
Non-superusers must also have SELECT privileges to enable or disable constraint enforcement, or remove partitioning.
For certain operations, non-superusers must have the following schema privileges:
-
To rename a table: CREATE, USAGE
-
To move a table to another schema: USAGE on the source schema, CREATE on the destination schema
Restrictions for complex types
Complex types used in native tables have some restrictions, in addition to the restrictions for individual types listed on their reference pages:
-
A native table must have at least one column that is a primitive type or a native array (one-dimensional array of a primitive type). If a flex table has real columns, it must also have at least one column satisfying this restriction.
-
Complex type columns cannot be used in ORDER BY or PARTITION BY clauses nor as FILLER columns.
-
Complex type columns cannot have constraints.
-
Complex type columns cannot use DEFAULT or SET USING.
-
Expressions returning complex types cannot be used as projection columns, and projections cannot be segmented or ordered by columns of complex types.
Exclusive ALTER TABLE clauses
The following ALTER TABLE clauses cannot be combined with another ALTER TABLE clause:
-
ADD COLUMN
-
DROP COLUMN
-
RENAME COLUMN
-
SET SCHEMA
-
RENAME [TO]
Node down limitations
Enterprise Mode only
The following ALTER TABLE operations are not supported when one or more database cluster nodes are down:
-
ALTER COLUMN ... ADD table-constraint
-
ALTER COLUMN ... SET DATA TYPE
-
ALTER COLUMN ... { SET DEFAULT | DROP DEFAULT }
-
ALTER COLUMN ... { SET USING | DROP SET USING }
-
ALTER CONSTRAINT
-
DROP COLUMN
-
DROP CONSTRAINT
Pre-aggregated projection restrictions
You cannot modify the metadata of anchor table columns that are included in live aggregate or Top-K projections. You also cannot drop these columns. To make these changes, you must first drop all live aggregate and Top-K projections that are associated with it.
External table restrictions
Not all ALTER TABLE options pertain to external tables. For instance, you cannot add a column to an external table, but you can rename the table:
=> ALTER TABLE mytable RENAME TO mytable2;
ALTER TABLE
Locked tables
If the operation cannot obtain an O lock on the target table, Vertica tries to close any internal Tuple Mover sessions that are running on that table. If successful, the operation can proceed. Explicit Tuple Mover operations that are running in user sessions do not close. If an explicit Tuple Mover operation is running on the table, the ALTER TABLE operation proceeds only after the Tuple Mover operation completes.
Examples
In an Eon Mode database, the following statement alters the schema of the flights
table from schema airline1
in namespace airport
to schema airline2
in namespace airport
:
=> ALTER TABLE airport.airline1.flights SET SCHEMA airport.airline2;
See also
2.30.1 - Projection column encoding
After you create a table and its projections, you can call ALTER TABLE...ALTER COLUMN to set or change the encoding type of an existing column in one or more projections.
After you create a table and its projections, you can call ALTER TABLE...ALTER COLUMN to set or change the encoding type of an existing column in one or more projections. For example:
ALTER TABLE store.store_dimension ALTER COLUMN store_region
ENCODING rle PROJECTIONS (store.store_dimension_p1_b0, store.store_dimension_p2);
In this example, the ALTER TABLE statement specifies to set RLE encoding on column store_region
for two projections: store_dimension_p1_b0
and store_dimension_p2
. The PROJECTIONS
list references the two projections by their projection name and base name, respectively. You can reference a projection either way; in both cases, the change is propagated to all buddies of the projection and stored in its DDL accordingly:
=> select export_objects('','store.store_dimension');
export_objects
------------------------------------------------------------------
CREATE TABLE store.store_dimension
(
store_key int NOT NULL,
store_name varchar(64),
store_number int,
store_address varchar(256),
store_city varchar(64),
store_state char(2),
store_region varchar(64)
);
CREATE PROJECTION store.store_dimension_p1
(
store_key,
store_name,
store_number,
store_address,
store_city,
store_state,
store_region ENCODING RLE
)
AS
SELECT store_dimension.store_key,
store_dimension.store_name,
store_dimension.store_number,
store_dimension.store_address,
store_dimension.store_city,
store_dimension.store_state,
store_dimension.store_region
FROM store.store_dimension
ORDER BY store_dimension.store_key
SEGMENTED BY hash(store_dimension.store_key) ALL NODES KSAFE 1;
CREATE PROJECTION store.store_dimension_p2
(
store_key,
store_name,
store_number,
store_address,
store_city,
store_state,
store_region ENCODING RLE
)
AS
SELECT ...
Important
When you add or change a column's encoding type, it has no immediate effect on existing projection data. Vertica applies the encoding only to newly loaded data, and to existing data on
mergeout.
2.31 - ALTER TLS CONFIGURATION
Alters a specified TLS Configuration object.
Alters a specified TLS Configuration object. For information on existing TLS Configuration objects, query TLS_CONFIGURATIONS.
Syntax
ALTER TLS CONFIGURATION tls_config_name {
[ CERTIFICATE { NULL | cert_name } ]
[ ADD CA CERTIFICATES ca_cert_name [,...] ]
[ REMOVE CA CERTIFICATES ca_cert_name [,...] ]
[ CIPHER SUITES { '' | 'openssl_cipher [,...]' } ]
[ TLSMODE 'tlsmode' ]
[ OWNER TO user_name ]
}
Parameters
tls_config_name
- The TLS Configuration object to alter.
NULL
- Removes the non-CA certificate from the TLS Configuration.
cert_name
- A certificate created with CREATE CERTIFICATE.
You must have USAGE privileges on the certificate (either from ownership of the certificate or USAGE on its key, if any) to add it to a TLS Configuration.
ca_cert_name
- A CA certificate created with CREATE CERTIFICATE.
You must have USAGE privileges on the certificate (either from ownership of the certificate or USAGE on its key, if any) to add it to a TLS Configuration.
openssl_cipher
- A comma-separated list of cipher suites to use instead of the default set of cipher suites. Providing an empty string for this parameter clears the alternate cipher suite list and instructs the specified TLS Configuration to use the default set of cipher suites.
To view enabled cipher suites, use LIST_ENABLED_CIPHERS.
tlsmode
- How Vertica establishes TLS connections and handles certificates, one of the following, in order of ascending security:
-
DISABLE
: Disables TLS. All other options for this parameter enable TLS.
-
ENABLE
: Enables TLS. Vertica does not check client certificates.
-
TRY_VERIFY
: Establishes a TLS connection if the other host presents either a valid certificate or no certificate at all.
If the other host presents an invalid certificate, the connection uses plaintext.
-
VERIFY_CA
: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA. If the other host does not present a certificate, the connection uses plaintext.
-
VERIFY_FULL
: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA and the certificate's cn
(Common Name) or subjectAltName
attribute matches the hostname or IP address of the other host.
Note that for client certificates, cn
is used for the username, so subjectAltName
must match the hostname or IP address of the other host.
VERIFY_FULL
is unsupported for client-server TLS (the connection type handled by ServerTLSConfig) and behaves like VERIFY_CA
.
Note
Whether Vertica or the other party acts as the client or server depends on the type of connection. For connections between the Vertica database and an LDAP server for LDAP Link or LDAP authentication, the Vertica database is the client and the LDAP server is the server.
For all other connection types, Vertica is the server and the other party is the client.
Privileges
Non-superuser: ALTER privileges on the TLS Configuration.
Examples
To remove all certificates and CA certificates from the LDAPLink TLS Configuration:
=> SELECT * FROM tls_configurations WHERE name='LDAPLink';
name | owner | certificate | ca_certificate | cipher_suites | mode
----------+---------+-------------+----------------+---------------+---------
LDAPLink | dbadmin | server_cert | ca | | DISABLE
LDAPLink | dbadmin | server_cert | ica | | DISABLE
(2 rows)
=> ALTER TLS CONFIGURATION LDAPLink CERTIFICATE NULL REMOVE CA CERTIFICATES ca, ica;
ALTER TLS CONFIGURATION
=> SELECT * FROM tls_configurations WHERE name='LDAPLink';
name | owner | certificate | ca_certificate | cipher_suites | mode
----------+---------+-------------+----------------+---------------+---------
LDAPLink | dbadmin | | | | DISABLE
(1 row)
To use an alternate set of cipher suites for client-server TLS:
=> ALTER TLS CONFIGURATION server CIPHER SUITES
'DHE-PSK-AES256-CBC-SHA384,
DHE-PSK-AES128-GCM-SHA256,
PSK-AES128-CBC-SHA256';
ALTER TLS CONFIGURATION
=> SELECT name, cipher_suites FROM tls_configurations WHERE name='server';
  name  |                               cipher_suites
--------+----------------------------------------------------------------------------
 server | DHE-PSK-AES256-CBC-SHA384,DHE-PSK-AES128-GCM-SHA256,PSK-AES128-CBC-SHA256
(1 row)
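To change how the server TLS Configuration handles certificate verification, a sketch; choose the TLSMODE that matches your security requirements:
=> ALTER TLS CONFIGURATION server TLSMODE 'VERIFY_CA';
ALTER TLS CONFIGURATION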
For other examples, see:
2.32 - ALTER TRIGGER
Modifies a trigger.
Modifies a trigger.
Syntax
ALTER TRIGGER [[database.]schema.]trigger {
OWNER TO new_owner
| SET SCHEMA new_schema
| RENAME TO new_trigger
| PROCEDURE TO new_procedure
}
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
trigger
- The trigger to modify.
new_owner
- The new owner and definer of the trigger. This affects the behavior of AS DEFINER.
new_schema
- The new schema of the trigger.
new_trigger
- The new name for the trigger.
new_procedure
- The function signature of the stored procedure.
Privileges
Superuser
Examples
To attach a different procedure to the trigger:
=> ALTER TRIGGER daily_1am PROCEDURE TO log_user_actions(10, 20);
To rename a trigger:
=> ALTER TRIGGER daily_1am RENAME TO daily_1am_gmt;
To move the trigger to a different schema:
=> ALTER TRIGGER daily_1am_gmt
2.33 - ALTER USER
Changes user account parameters and user-level configuration parameters.
Changes user account parameters and user-level configuration parameters.
Syntax
ALTER USER user-name {
account-parameter value[,...]
| SET [PARAMETER] cfg-parameter=value[,...]
| CLEAR [PARAMETER] cfg-parameter[,...]
}
Parameters
user-name
- Name of the user. Names that contain special characters must be double-quoted. To enforce case-sensitivity, use double-quotes.
For details on name requirements, see Creating a database name and password.
account-parameter
value
- Specifies user account settings (see below).
Note
Changes to a user account apply only to the current session and to all later sessions launched by this user.
SET [PARAMETER]
- Sets the specified configuration parameters. The new setting applies only to the current session, and to all later sessions launched by this user. Concurrent user sessions are unaffected by new settings unless they call meta-function RESET_SESSION.
CLEAR [PARAMETER]
- Resets the specified configuration parameters to their default values.
User account parameters
Specify one or more user-account parameters and their settings as a comma-delimited list:
account-parameter value[,...]
Important
The following user-account parameters are invalid for a user who is added to the Vertica database with the LDAPLink service:
-
IDENTIFIED BY
-
PROFILE
-
SECURITY ALGORITHM
ACCOUNT
- Locks or unlocks user access to the database, one of the following:
Tip
To automate account locking, set a maximum number of failed login attempts with CREATE PROFILE.
DEFAULT ROLE
- Specifies what roles are the default roles for this user, set to one of the following:
-
NONE (default): Removes all default roles.
-
role [,...]: Comma-delimited list of roles.
-
ALL: Sets as default all user roles.
-
ALL EXCEPT role [,...]: Comma-delimited list of roles to exclude as default roles.
Default roles are automatically activated when a user logs in. The roles specified by this parameter supersede any roles assigned earlier.
Note
DEFAULT ROLE cannot be specified in combination with other ALTER USER parameters.
GRACEPERIOD
- Specifies how long a user query can block on any session socket, one of the following:
-
NONE (default): Removes any grace period previously set on session queries.
-
'interval': Specifies as an interval the maximum grace period for current session queries, up to 20 days.
For details, see Handling session socket blocking.
IDENTIFIED BY
- Changes the user's password:
IDENTIFIED BY '[new-password]'
| ['hashed-password' SALT 'hash-salt']
[REPLACE 'current-password']
-
new-password: ASCII password that Vertica then hashes for internal storage. An empty string enables this user to access the database with no password.
-
hashed-password: A pre-hashed password and its associated hex string hash-salt. Setting a password this way bypasses all password complexity requirements.
-
REPLACE: Required for non-superusers, who must supply their current password. Non-superusers can only change their own passwords.
For details, see Password guidelines and Creating a database name and password.
IDLESESSIONTIMEOUT
- The length of time the system waits before disconnecting an idle session, one of the following:
-
NONE (default): No limit set for this user. If you omit this parameter, no limit is set for this user.
-
'interval': An interval value, up to one year.
For details, see Managing client connections.
MAXCONNECTIONS
- Sets the maximum number of connections the user can have to the server, one of the following:
-
NONE (default): No limit set. If you omit this parameter, the user can have an unlimited number of connections across the database cluster.
-
integer ON DATABASE: Sets to integer the maximum number of connections across the database cluster.
-
integer ON NODE: Sets to integer the maximum number of connections to each node.
For details, see Managing client connections.
MEMORYCAP
- Sets how much memory can be allocated to user requests, one of the following:
PASSWORD EXPIRE
- Forces immediate expiration of the user's password. The user must change the password on the next login.
Note
PASSWORD EXPIRE has no effect when using external password authentication methods such as LDAP or Kerberos.
PROFILE
- Assigns a profile that controls password requirements for this user, one of the following:
RENAME TO
- Assigns the user a new user name. All privileges assigned to the user remain unchanged.
Note
RENAME TO cannot be specified in combination with other ALTER USER parameters.
RESOURCE POOL pool-name [FOR SUBCLUSTER sc-name]
- Assigns a resource pool to this user. The user must also be granted privileges to this pool, unless privileges to the pool are set to PUBLIC.
The FOR SUBCLUSTER clause assigns a subcluster-specific resource pool to the user. You can assign only one subcluster-specific resource pool to each user.
RUNTIMECAP
- Sets how long this user's queries can execute, one of the following:
-
NONE (default): No limit set for this user. If you omit this parameter, no limit is set for this user.
-
'interval': An interval value, up to one year.
A query's runtime limit can be set at three levels: the user's runtime limit, the user's resource pool, and the session setting. For more information, see Setting a runtime limit for queries.
SEARCH_PATH
- Specifies the user's default search path, which tells Vertica which schemas to search for unqualified references to tables and UDFs, one of the following:
-
DEFAULT (default): Sets the search path as follows:
"$user", public, v_catalog, v_monitor, v_internal
-
Comma-delimited list of schemas.
For details, see Setting Search Paths.
SECURITY_ALGORITHM 'algorithm'
- Sets the user-level security algorithm for hash authentication, where algorithm is one of the following:
The user's password expires when you change the SECURITY_ALGORITHM value and must be reset.
TEMPSPACECAP
- Sets how much temporary file storage is available for user requests, one of the following:
Privileges
Non-superusers can change the following options on their own user accounts:
-
IDENTIFIED BY
-
RESOURCE POOL
-
SEARCH_PATH
-
SECURITY_ALGORITHM
When changing another user's resource pool to one outside of the PUBLIC schema, the user must have USAGE privileges on the resource pool from at least one of the following:
Setting user-level configuration parameters
SET | CLEAR PARAMETER can specify only user-level configuration parameters, otherwise Vertica returns an error. Only superusers can set and clear user-level parameters, unless they are also supported at the session level.
To get the names of user-level parameters, query system table CONFIGURATION_PARAMETERS. For example:
=> SELECT parameter_name, allowed_levels FROM configuration_parameters
WHERE allowed_levels ilike '%USER%' AND parameter_name ilike '%depot%' ORDER BY parameter_name;
parameter_name | allowed_levels
-----------------------------+-------------------------
BackgroundDepotWarming | SESSION, USER, DATABASE
DepotOperationsForQuery | SESSION, USER, DATABASE
EnableDepotWarmingFromPeers | SESSION, USER, DATABASE
UseDepotForReads | SESSION, USER, DATABASE
UseDepotForWrites | SESSION, USER, DATABASE
(5 rows)
The following example sets the user-level configuration parameter UseDepotForWrites for two users, Yvonne and Ahmed:
=> SHOW USER Yvonne PARAMETER ALL;
user | parameter | setting
--------+-------------------------+---------
Yvonne | DepotOperationsForQuery | Fetches
(1 row)
=> ALTER USER Yvonne SET PARAMETER UseDepotForWrites = 0;
ALTER USER
=> SHOW USER Yvonne PARAMETER ALL;
user | parameter | setting
--------+-------------------------+---------
Yvonne | DepotOperationsForQuery | Fetches
Yvonne | UseDepotForWrites | 0
(2 rows)
=> ALTER USER Ahmed SET PARAMETER DepotOperationsForQuery = 'Fetches';
ALTER USER
=> SHOW USER ALL PARAMETER ALL;
user | parameter | setting
--------+-------------------------+---------
Ahmed | DepotOperationsForQuery | Fetches
Yvonne | DepotOperationsForQuery | Fetches
Yvonne | UseDepotForWrites | 0
(3 rows)
Examples
Set a user's password
=> CREATE USER user1;
=> ALTER USER user1 IDENTIFIED BY 'newpassword';
Set user's security algorithm and password
This example sets a user's security algorithm and password to SHA-512
and newpassword
, respectively. When you execute the ALTER USER
statement, Vertica hashes the password with the SHA-512 algorithm and saves the hash:
=> CREATE USER user1;
=> ALTER USER user1 SECURITY_ALGORITHM 'SHA512' IDENTIFIED BY 'newpassword';
Assign default roles to a user
This example makes a user's assigned roles their default roles. Default roles are automatically set (enabled) when a user logs in:
=> CREATE USER user1;
CREATE USER
=> GRANT role1, role2, role3 to user1;
=> ALTER USER user1 DEFAULT ROLE ALL;
You can pair ALL with EXCEPT to exclude certain roles:
=> CREATE USER user2;
CREATE USER
=> GRANT role1, role2, role3 to user2;
=> ALTER USER user2 DEFAULT ROLE ALL EXCEPT role1;
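Set time limits for a user
This sketch reuses the hypothetical user1 and limits how long the user's queries can run and how long idle sessions stay connected; both interval values must resolve to one year or less:
=> ALTER USER user1 RUNTIMECAP '10 minutes';
ALTER USER
=> ALTER USER user1 IDLESESSIONTIMEOUT '2 hours';
ALTER USER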
See also
2.34 - ALTER VIEW
Modifies the metadata of an existing view.
Modifies the metadata of an existing view. The changes are auto-committed.
Syntax
General usage:
ALTER VIEW [[database.]schema.]view {
OWNER TO owner
| SET SCHEMA schema
| { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES
}
Rename view:
ALTER VIEW [[database.]schema.]view[,...] RENAME TO new-view-name[,...]
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
view
- The view to alter.
SET SCHEMA
schema
- Moves the view from one schema to another.
OWNER TO
owner
- Changes the view owner.
Important
The new view owner should also have SELECT privileges on the objects that the view references; otherwise the view is inaccessible to that user.
-
{ INCLUDE | EXCLUDE | MATERIALIZE } [SCHEMA] PRIVILEGES
- Specifies default inheritance of schema privileges for this view:
-
EXCLUDE [SCHEMA] PRIVILEGES
(default) disables inheritance of privileges from the schema.
-
INCLUDE [SCHEMA] PRIVILEGES
grants the view the same privileges granted to its schema.
-
MATERIALIZE
: Copies grants to the view and creates a GRANT object on the view. This disables the inherited privileges flag on the view, so you can:
-
Grant more specific privileges at the view level
-
Use schema-level privileges as a template
-
Move the view to a different schema
-
Change schema privileges without affecting the view
See also Setting privilege inheritance on tables and views.
RENAME TO
- Renames one or more views:
RENAME TO new-view-name[,...]
The following requirements apply:
-
The new view name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
-
If you specify multiple views to rename, the source and target lists must have the same number of names.
-
Renaming a view requires USAGE
and CREATE
privileges on the schema that contains the view.
Privileges
Non-superuser: USAGE on the schema and one of the following:
For certain operations, non-superusers must have the following schema privileges:
Schema privileges required...                     | For these operations...
--------------------------------------------------+-----------------------------
CREATE, USAGE                                     | Rename view
CREATE: destination schema; USAGE: current schema | Move view to another schema
Examples
Rename view view1
to view2
:
=> CREATE VIEW view1 AS SELECT * FROM t;
CREATE VIEW
=> ALTER VIEW view1 RENAME TO view2;
ALTER VIEW
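The following sketch shows SET SCHEMA and OWNER TO on the renamed view; the schema s1 and user analyst are hypothetical names used only to illustrate the syntax, and the user must already exist:
=> CREATE SCHEMA s1;
CREATE SCHEMA
=> ALTER VIEW view2 SET SCHEMA s1;
ALTER VIEW
=> ALTER VIEW s1.view2 OWNER TO analyst;
ALTER VIEW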
3 - BEGIN
Starts a transaction block.
Starts a transaction block.
Syntax
BEGIN [ WORK | TRANSACTION ] [ isolation-level ] [ READ [ONLY] | WRITE ]
Parameters
WORK | TRANSACTION
- Optional keywords for readability only.
isolation-level
- Specifies the transaction's isolation level, which determines what data the transaction can access when other transactions are running concurrently, one of the following:
For details, see Transactions.
READ [ONLY] | WRITE
- Specifies the transaction mode, one of the following:
Setting the transaction session mode to read-only disallows the following SQL statements, but does not prevent all disk write operations:
-
INSERT, UPDATE, DELETE, and COPY if the target table is not a temporary table
-
All CREATE, ALTER, and DROP commands
-
GRANT, REVOKE, and EXPLAIN if the SQL to run is one of the statements cited above.
Privileges
None
Examples
Create a transaction with the isolation level set to READ COMMITTED and the transaction mode to READ WRITE:
=> BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED READ WRITE;
BEGIN
=> CREATE TABLE sample_table (a INT);
CREATE TABLE
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
(1 row)
=> END;
COMMIT
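The following sketch continues the example with a read-only transaction; in READ ONLY mode, an INSERT into a non-temporary table such as sample_table would be rejected:
=> BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED READ ONLY;
BEGIN
=> SELECT * FROM sample_table;
 a
---
 1
(1 row)
=> END;
COMMIT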
See also
4 - CALL
Invokes a stored procedure created with CREATE PROCEDURE (Stored).
Invokes a stored procedure created with CREATE PROCEDURE (stored).
Syntax
CALL [[database.]schema.]procedure( [ argument-list] );
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
procedure
- The name of the stored procedure, where
procedure
conforms to conventions described in Identifiers.
argument-list
- A comma-delimited list of arguments to pass to the stored procedure, whose types correspond to the types of the procedure's IN parameters.
Privileges
Non-superuser: EXECUTE on the procedure
Examples
See Executing stored procedures and Stored procedures: use cases and examples.
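As a minimal sketch (the procedure, its PLvSQL body, and the msgs table are illustrative assumptions rather than part of this reference), a stored procedure can be created and then invoked with CALL:
=> CREATE TABLE msgs (m VARCHAR);
CREATE TABLE
=> CREATE PROCEDURE log_msg(msg VARCHAR) LANGUAGE PLvSQL AS $$
BEGIN
    PERFORM INSERT INTO msgs VALUES (msg);
END;
$$;
CREATE PROCEDURE
=> CALL log_msg('hello');
=> SELECT * FROM msgs;
   m
-------
 hello
(1 row)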
See also
5 - COMMENT ON statements
COMMENT ON statements let you create comments on database objects, such as schemas, tables, and libraries.
COMMENT ON
statements let you create comments on database objects, such as schemas, tables, and libraries. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
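For example, after adding comments you can review them by querying the COMMENTS system table; the query below is a sketch, and the object shown is assumed to have been commented already:
=> SELECT object_type, object_name, comment FROM comments;
 object_type |     object_name     |     comment
-------------+---------------------+-----------------
 TABLE       | promotion_dimension | 2011 Promotions
(1 row)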
5.1 - COMMENT ON AGGREGATE FUNCTION
Adds, revises, or removes a comment on an aggregate function.
Adds, revises, or removes a comment on an aggregate function. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON AGGREGATE FUNCTION [[database.]schema.]function (function-args) IS { 'comment' | NULL };
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
- The name of the aggregate function with which to associate the comment.
function-args
- The function arguments.
comment
- Specifies the comment text to add. If a comment already exists for this function, this overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the APPROXIMATE_MEDIAN(x FLOAT)
function:
=> COMMENT ON AGGREGATE FUNCTION APPROXIMATE_MEDIAN(x FLOAT) IS 'alias of APPROXIMATE_PERCENTILE with 0.5 as its parameter';
The following example removes a comment from the APPROXIMATE_MEDIAN(x FLOAT)
function:
=> COMMENT ON AGGREGATE FUNCTION APPROXIMATE_MEDIAN(x FLOAT) IS NULL;
5.2 - COMMENT ON ANALYTIC FUNCTION
Adds, revises, or removes a comment on an analytic function.
Adds, revises, or removes a comment on an analytic function. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON ANALYTIC FUNCTION [[database.]schema.]function (function-args) IS { 'comment' | NULL };
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
- The name of the analytic function with which to associate the comment.
function-args
- The function arguments.
comment
- Specifies the comment text to add. If a comment already exists for this function, this overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the user-defined an_rank()
function:
=> COMMENT ON ANALYTIC FUNCTION an_rank() IS 'built from the AnalyticFunctions library';
The following example removes a comment from the user-defined an_rank()
function:
=> COMMENT ON ANALYTIC FUNCTION an_rank() IS NULL;
5.3 - COMMENT ON COLUMN
Adds, revises, or removes a comment on a column in a table or projection.
Adds, revises, or removes a comment on a column in a table or projection. Each object can have one comment. Comments are stored in the COMMENTS system table.
Syntax
COMMENT ON COLUMN [[database.]schema.]object.column IS {'comment' | NULL}
Arguments
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
object
- Name of the table or projection containing the column.
column
- Name of the column to which the comment applies.
comment
| NULL
- Comment text to add, or NULL to remove an existing comment. The new value replaces any existing comment for this column.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment for a table column:
=> COMMENT ON COLUMN store.store_sales_fact.transaction_time IS 'GMT';
You can also add comments to columns in projections:
=> COMMENT ON COLUMN customer_dimension_vmart_node01.customer_name IS 'Last name only';
To remove a comment, specify NULL:
=> COMMENT ON COLUMN store.store_sales_fact.transaction_time IS NULL;
5.4 - COMMENT ON CONSTRAINT
Adds, revises, or removes a comment on a constraint.
Adds, revises, or removes a comment on a constraint. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON CONSTRAINT constraint ON [[database.]schema.]table IS ... {'comment' | NULL };
Parameters
constraint
- The name of the constraint associated with the comment.
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
table
- The name of the table that contains the constraint with which to associate the comment.
comment
- Specifies the comment text to add. If a comment already exists for this constraint, this comment overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the constraint_x
constraint on the promotion_dimension table:
=> COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS 'Primary key';
The following example removes a comment from the constraint_x
constraint on the promotion_dimension table:
=> COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS NULL;
5.5 - COMMENT ON FUNCTION
Adds, revises, or removes a comment on a function.
Adds, revises, or removes a comment on a function. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON FUNCTION [[database.]schema.]function (function-args) IS { 'comment' | NULL };
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
- The name of the function with which to associate the comment.
function-args
- The function arguments.
comment
- Specifies the comment text to add. If a comment already exists for this function, this overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the macros.zerowhennull (x INT)
function:
=> COMMENT ON FUNCTION macros.zerowhennull(x INT) IS 'Returns a 0 if not NULL';
The following example removes a comment from the macros.zerowhennull (x INT)
function:
=> COMMENT ON FUNCTION macros.zerowhennull(x INT) IS NULL;
5.6 - COMMENT ON LIBRARY
Adds, revises, or removes a comment on a library.
Adds, revises, or removes a comment on a library. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON LIBRARY [[database.]schema.]library IS {'comment' | NULL}
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
library
- The name of the library associated with the comment.
comment
- Specifies the comment text to add. If a comment already exists for this library, this comment overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the library MyFunctions:
=> COMMENT ON LIBRARY MyFunctions IS 'In development';
The following example removes a comment from the library MyFunctions:
=> COMMENT ON LIBRARY MyFunctions IS NULL;
See also
5.7 - COMMENT ON NODE
Adds, revises, or removes a comment on a node.
Adds, revises, or removes a comment on a node. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Dropping an object drops all comments associated with the object.
Syntax
COMMENT ON NODE node-name IS { 'comment' | NULL }
Parameters
node-name
- The name of the node associated with the comment.
comment
- Specifies the comment text to add. If a comment already exists for this node, this comment overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment for the initiator node
:
=> COMMENT ON NODE initiator IS 'Initiator node';
The following example removes a comment from the initiator node
:
=> COMMENT ON NODE initiator IS NULL;
See also
COMMENTS
5.8 - COMMENT ON PROJECTION
Adds, revises, or removes a comment on a projection.
Adds, revises, or removes a comment on a projection. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Dropping an object drops all comments associated with the object.
Syntax
COMMENT ON PROJECTION [[database.]schema.]projection IS { 'comment' | NULL }
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
projection
- The name of the projection associated with the comment.
comment
- Specifies the text of the comment to add. If a comment already exists for this projection, the comment you enter here overwrites the previous comment.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the customer_dimension_vmart_node01
projection:
=> COMMENT ON PROJECTION customer_dimension_vmart_node01 IS 'Test data';
The following example removes a comment from the customer_dimension_vmart_node01
projection:
=> COMMENT ON PROJECTION customer_dimension_vmart_node01 IS NULL;
See also
COMMENTS
5.9 - COMMENT ON SCHEMA
Adds, revises, or removes a comment on a schema.
Adds, revises, or removes a comment on a schema. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON SCHEMA schema-name IS {'comment' | NULL}
Parameters
schema-name
- The schema associated with the comment.
comment
- Text of the comment to add. If a comment already exists for this schema, the comment you enter here overwrites the previous comment.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the public
schema:
=> COMMENT ON SCHEMA public IS 'All users can access this schema';
The following example removes a comment from the public
schema.
=> COMMENT ON SCHEMA public IS NULL;
5.10 - COMMENT ON SEQUENCE
Adds, revises, or removes a comment on a sequence.
Adds, revises, or removes a comment on a sequence. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON SEQUENCE [[database.]schema.]sequence IS { 'comment' | NULL }
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
sequence
- The name of the sequence associated with the comment.
comment
- Specifies the text of the comment to add. If a comment already exists for this sequence, this comment overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the sequence called prom_seq.
=> COMMENT ON SEQUENCE prom_seq IS 'Promotion codes';
The following example removes a comment from the prom_seq sequence.
=> COMMENT ON SEQUENCE prom_seq IS NULL;
5.11 - COMMENT ON TABLE
Adds, revises, or removes a comment on a table.
Adds, revises, or removes a comment on a table. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON TABLE [[database.]schema.]table IS { 'comment' | NULL }
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
table
- The name of the table with which to associate the comment.
comment
- Specifies the text of the comment to add. Enclose the text of the comment within single-quotes. If a comment already exists for this table, the comment you enter here overwrites the previous comment.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes a previously added comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the promotion_dimension table:
=> COMMENT ON TABLE promotion_dimension IS '2011 Promotions';
The following example removes a comment from the promotion_dimension table:
=> COMMENT ON TABLE promotion_dimension IS NULL;
5.12 - COMMENT ON TRANSFORM FUNCTION
Adds, revises, or removes a comment on a user-defined transform function.
Adds, revises, or removes a comment on a user-defined transform function. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON TRANSFORM FUNCTION [[database.]schema.]tfunction
...( [ tfunction-arg-name tfunction-arg-type ][,...] ) IS {'comment' | NULL}
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
tfunction
- The name of the transform function with which to associate the comment.
-
tfunction-arg-name tfunction-arg-type
- The names and data types of one or more transform function arguments. If you supply argument names and types, each type must match the type specified in the library used to create the original transform function.
comment
- Specifies the comment text to add. If a comment already exists for this transform function, this comment overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the macros.zerowhennull (x INT)
user-defined transform function:
=> COMMENT ON TRANSFORM FUNCTION macros.zerowhennull(x INT) IS 'Returns a 0 if not NULL';
The following example removes a comment from the macros.zerowhennull (x INT)
function by using the NULL
option:
=> COMMENT ON TRANSFORM FUNCTION macros.zerowhennull(x INT) IS NULL;
5.13 - COMMENT ON VIEW
Adds, revises, or removes a comment on a view.
Adds, revises, or removes a comment on a view. Each object can have one comment. Comments are stored in the system table
COMMENTS
.
Syntax
COMMENT ON VIEW [[database.]schema.]view IS { 'comment' | NULL }
Parameters
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
view
- The name of the view with which to associate the comment.
comment
- Specifies the text of the comment to add. If a comment already exists for this view, this comment overwrites the previous one.
Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
NULL
- Removes an existing comment.
Privileges
Non-superuser: object owner
Examples
The following example adds a comment to the curr_month_ship
view:
=> COMMENT ON VIEW curr_month_ship IS 'Shipping data for the current month';
The following example removes a comment from the curr_month_ship
view:
=> COMMENT ON VIEW curr_month_ship IS NULL;
6 - COMMIT
Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.
Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.
COMMIT
is a synonym for
END.
Syntax
COMMIT [ WORK | TRANSACTION ]
Parameters
-
WORK | TRANSACTION
- Optional keywords for readability only.
Privileges
None
Examples
This example shows how to commit an insert.
=> CREATE TABLE sample_table (a INT);
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
=> COMMIT;
See also
7 - CONNECT TO VERTICA
Connects to another Vertica database to enable importing and exporting data across Vertica databases, with COPY FROM VERTICA and EXPORT TO VERTICA, respectively.
Connects to another Vertica database to enable importing and exporting data across Vertica databases, with
COPY FROM VERTICA
and
EXPORT TO VERTICA
, respectively.
After you establish a connection to another database, the connection remains open in the current session until you explicitly close it with
DISCONNECT
. You can have only one connection to another database at a time. However, you can establish successive connections to different databases in the same session.
By default, invoking CONNECT TO VERTICA
occurs over the Vertica private network. For information about creating a connection over a public network, see Using public and private IP networks.
Important
Copy and export operations can fail if either side of the connection is a single-node cluster installed on localhost
.
Syntax
CONNECT TO VERTICA db-spec
[ USER username ]
[ PASSWORD 'password' ] ON 'host', port
[ TLS CONFIGURATION tls_configuration ]
[ TLSMODE PREFER ]
Parameters
db-spec
- The target database, either the database name or
DEFAULT
.
username
- The username to use when connecting to the other database. If omitted, the current user's username is used.
password
- The password of the user on the target database. If omitted, you can use credential forwarding or mutual TLS to authenticate to the target database. For details, see Passwordless authentication
host
- The host name of one of the nodes in the other database.
port
- The port number of the other database.
tls_configuration
- The TLS Configuration to use for TLS. The TLS Configuration is ignored if
ImportExportTLSMode
is set to any of the following:
-
REQUIRE_FORCE
-
VERIFY_CA_FORCE
-
VERIFY_FULL_FORCE
The effective TLS mode of CONNECT TO VERTICA changes depending on the TLSMODE of the TLS Configuration and the value of ImportExportTLSMode (for non-FORCE values). For details, see Effective TLSMode.
If the target database is configured for mutual TLS and the user on the target database can authenticate with TLS, you can use tls_configuration
to authenticate instead of password
.
- TLSMODE PREFER
-
Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. TLSMODE PREFER only takes effect if TLS CONFIGURATION is not set.
Overrides the value of configuration parameter
ImportExportTLSMode
for this connection to PREFER
. If TLS CONFIGURATION is set or ImportExportTLSMode
is set to REQUIRE_FORCE, VERIFY_CA_FORCE, or VERIFY_FULL_FORCE, then TLSMODE PREFER has no effect.
If TLSMODE PREFER and ImportExportTLSMode
are both not set, CONNECT TO VERTICA uses ENABLE.
Effective TLS mode
The effective TLS mode of CONNECT TO VERTICA is determined by the TLSMODE of the TLS Configuration and the value of ImportExportTLSMode. The following table summarizes this interaction for non-FORCE values of ImportExportTLSMode:
TLS Configuration     | ImportExportTLSMode    | Effective TLS mode
----------------------+------------------------+-------------------
ENABLE                | PREFER                 | PREFER
ENABLE                | Anything except PREFER | REQUIRE
TRY_VERIFY, VERIFY_CA | Anything               | VERIFY_CA
VERIFY_FULL           | Anything               | VERIFY_FULL
Privileges
None
Security requirements
When importing from or exporting to a Vertica database, you can connect only to a database that uses trusted (username only) or password-based authentication, as described in Security and authentication. OAuth and Kerberos authentication methods are not supported.
If configured with a certificate, Vertica encrypts data during transmission using TLS and attempts to encrypt plan metadata. You can set configuration parameter
ImportExportTLSMode
to require encryption for plan metadata.
Passwordless authentication
If you omit the password
in the call to CONNECT TO VERTICA, you can authenticate to the target database in the following ways:
- Credential forwarding
- TLS authentication
For brevity, in this and later sections, "caller" refers to the user that runs CONNECT TO VERTICA from the source database to connect to the target database.
Credential forwarding
Credential forwarding lets you authenticate as the current user on the target database by forwarding your hash or password, depending on the caller's authentication method on the target database. To do this, the following requirements must be met:
For an example, see Examples.
TLS authentication
TLS authentication lets you use the certificates and keys in the specified tls_configuration
parameter to authenticate instead of a password. To do this, the following requirements must be met:
- The caller exists in both databases.
- In the source database:
- A custom TLS configuration contains a CA certificate, a client certificate, and a client key that can be used to authenticate to the target database with TLS authentication.
- The caller has USAGE privileges on the TLS Configuration.
- In the target database:
For an example, see Examples.
Examples
Password authentication
The following example authenticates as the dbadmin to ExampleDB
on the host VerticaHost01
on port 5433
:
=> CONNECT TO VERTICA ExampleDB USER dbadmin PASSWORD 'Password123' ON 'VerticaHost01',5433;
CONNECT
Credential forwarding
In the following example, Penny wants to run CONNECT TO VERTICA from db_1
to connect to db_2
:
Note
In general, the
LDAP link service should be used instead to synchronize users between databases. For demonstration purposes, the example below manually configures the user
penny
on both databases. You can skip these steps if you use LDAP Link.
-
On db_1
, create the user penny
:
=> CREATE USER penny IDENTIFIED BY 'my_password';
-
On db_2
, create the user penny
with the same hash, salt, and security (hashing) algorithm:
- On
db_1
, retrieve the hash (from the password
column) and salt from PASSWORDS:
=> SELECT user_name, password, salt FROM passwords WHERE user_name='penny';
-[ RECORD 1 ]-------------------------------------------------------------------------------------------------------------------------------------
user_name | penny
password | sha512b2e0911954a79d9d419b7b42774d36d17dd8a663c966bf0dd8f4cd6aad00d3c7ee7ba74b9bf0e56071cd995ae04aeaae537b35903e97255ed5fe286b6ff0d00a
salt | eac4ad840264e8c590120b31b815318f
- On
db_1
, retrieve the effective security algorithm from PASSWORD_AUDITOR:
=> SELECT user_name, effective_security_algorithm FROM password_auditor WHERE user_name='penny';
user_name | effective_security_algorithm
-----------+------------------------------
penny | SHA512
(1 row)
- On
db_2
, create the user penny
:
=> CREATE USER penny;
- On
db_2
, use ALTER USER to set the security algorithm to the value in db_1
. For details, see Password hashing algorithm:
=> ALTER USER penny SECURITY_ALGORITHM 'SHA512';
- On
db_2
, set penny
's password using the hash and salt from db_1
:
=> ALTER USER penny IDENTIFIED BY
'sha512b2e0911954a79d9d419b7b42774d36d17dd8a663c966bf0dd8f4cd6aad00d3c7ee7ba74b9bf0e56071cd995ae04aeaae537b35903e97255ed5fe286b6ff0d00a'
SALT 'eac4ad840264e8c590120b31b815318f';
-
On db_1
, enable credential forwarding for penny
by setting EnableConnectCredentialForwarding (disabled by default):
=> ALTER USER penny SET EnableConnectCredentialForwarding=1;
-
On db_2
, create an authentication record with the hash
method:
Note
This example authentication record lets its grantees authenticate to Vertica without TLS. In general, you should also configure client-server TLS and then specify
HOST TLS
so the hashed password is forwarded to the target database over a secure connection. For example:
=> CREATE AUTHENTICATION v_hash_auth METHOD 'hash' HOST TLS '0.0.0.0/0';
=> CREATE AUTHENTICATION v_hash_auth METHOD 'hash' HOST '0.0.0.0/0';
-
Grant the authentication record to penny
:
=> GRANT AUTHENTICATION v_hash_auth TO penny;
-
On db_1
, run CONNECT TO VERTICA to connect to db_2
, omitting the password. This example uses the default port:
=> CONNECT TO VERTICA db_2 ON 'example.com', 5433;
TLS authentication
In the following example, Penny wants to run CONNECT TO VERTICA from db_1
to connect to db_2
without specifying her password:
-
Create the user penny
on both databases.
=> CREATE USER penny IDENTIFIED BY 'my_password';
-
On db_1
, create or import a CA certificate, server certificate, and client certificate.
-- Create a CA certificate
=> CREATE KEY mtls_root_key TYPE 'RSA' LENGTH 2048;
=> CREATE CA CERTIFICATE mtls_root_cert
SUBJECT '/C=US/ST=Massachusetts/L=Burlington/O=OpenText/OU=Vertica/CN=Vertica Root CA'
VALID FOR 3650
EXTENSIONS 'authorityKeyIdentifier' = 'keyid:always,issuer', 'nsComment' = 'Vertica generated root CA cert'
KEY mtls_root_key;
-- Create a client certificate, signing it with the CA certificate
=> CREATE KEY mtls_client_key TYPE 'RSA' LENGTH 2048;
=> CREATE CERTIFICATE mtls_client_cert
SUBJECT '/C=US/ST=Massachusetts/L=Burlington/O=OpenText/OU=Vertica/CN=penny'
SIGNED BY mtls_root_cert
EXTENSIONS 'nsComment' = 'Vertica client cert', 'extendedKeyUsage' = 'clientAuth'
KEY mtls_client_key;
-- Create a server certificate, signing it with the CA certificate
=> CREATE KEY mtls_server_key TYPE 'RSA' LENGTH 2048;
=> CREATE CERTIFICATE mtls_server_cert
SUBJECT '/C=US/ST=Massachusetts/L=Burlington/O=OpenText/OU=Vertica/CN=*.example.com'
SIGNED BY mtls_root_cert
EXTENSIONS 'extendedKeyUsage' = 'serverAuth'
KEY mtls_server_key;
-
On db_1
, create a custom TLS configuration to use with CONNECT TO VERTICA, adding the mtls_client_cert
and mtls_root_cert
as the certificates and setting TLSMODE to ENABLE
or higher:
=> CREATE TLS CONFIGURATION mtls;
=> ALTER TLS CONFIGURATION mtls CERTIFICATE mtls_client_cert ADD CA CERTIFICATES mtls_root_cert TLSMODE 'ENABLE';
-
On db_2
, import the CA certificate, server certificate, and server key from db_1
. You can retrieve the contents of a certificate and key from the CERTIFICATES and CRYPTOGRAPHIC_KEYS system tables:
=> CREATE CA CERTIFICATE mtls_root_cert AS '-----BEGIN CERTIFICATE-----certificate_text-----END CERTIFICATE-----';
=> CREATE KEY mtls_server_key TYPE 'RSA' AS '-----BEGIN PRIVATE KEY-----key-----END PRIVATE KEY-----';
=> CREATE CERTIFICATE mtls_server_cert AS '-----BEGIN CERTIFICATE-----certificate_text-----END CERTIFICATE-----';
-
On db_2
, configure mutual mode client-server TLS using mtls_root_cert
and mtls_server_cert
and setting the TLSMODE to TRY_VERIFY
or higher:
=> ALTER TLS CONFIGURATION server CERTIFICATE mtls_server_cert ADD CA CERTIFICATES mtls_root_cert TLSMODE 'TRY_VERIFY';
-
On db_2
, create an authentication record for TLS authentication.
=> CREATE AUTHENTICATION mtls_auth METHOD 'tls' HOST TLS '0.0.0.0/0';
-
On db_2
, grant mtls_auth
to penny
:
=> GRANT AUTHENTICATION mtls_auth TO penny;
-
On db_1
, run CONNECT TO VERTICA to connect to db_2
, specifying the mtls
TLS Configuration. This example uses the default port:
=> CONNECT TO VERTICA db_2 USER penny ON 'example.com', 5433 TLS CONFIGURATION mtls;
8 - COPY
COPY bulk-loads data into a Vertica database.
COPY bulk-loads data into a Vertica database. By default, COPY automatically commits itself and any current transaction except when loading temporary tables. If COPY is terminated or interrupted, Vertica rolls it back.
COPY reads data as UTF-8 encoding.
For information on loading one or more files or pipes on a cluster host or on a client system, see COPY LOCAL.
To define a data pipeline to automatically load new files, see Automatic load.
Syntax
COPY [ /*+ LABEL (label-string)*/ ] [[{namespace. | database. }]schema.]target-table
[ ( { column-as-expression | column }
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char' ]
[ ENFORCELENGTH ]
[ ESCAPE [ AS ] 'char' | NO ESCAPE ]
[ FILLER datatype]
[ FORMAT 'format' ]
[ NULL [ AS ] 'string' ]
[ TRIM 'byte' ]
[,...] ) ]
[ COLUMN OPTION (column
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char' ]
[ ENFORCELENGTH ]
[ ESCAPE [ AS ] 'char' | NO ESCAPE ]
[ FORMAT 'format' ]
[ NULL [ AS ] 'string' ]
[ TRIM 'byte' ]
[,...] ) ]
FROM {
[ LOCAL ] STDIN [ compression ]
| { 'path-to-data'
[ ON { nodename | (nodeset) | ANY NODE | EACH NODE } ] [ compression ] }[,...]
[ PARTITION COLUMNS column[,...] ]
| LOCAL 'path-to-data' [ compression ] [,...]
| VERTICA {source-namespace. | source-database. }[source-schema.]source-table[( source-column[,...] ) ]
}
[ NATIVE
| FIXEDWIDTH COLSIZES {( integer )[,...]}
| NATIVE VARCHAR
| ORC
| PARQUET
]
| [ WITH ] UDL-clause[...]
}
[ ABORT ON ERROR ]
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char'
[ ENFORCELENGTH ]
[ ERROR TOLERANCE ]
[ ESCAPE [ AS ] 'char' | NO ESCAPE ]
[ EXCEPTIONS 'path' [ ON nodename] [,...]
[ NULL [ AS ] 'string' ]
[ RECORD TERMINATOR 'string' ]
[ REJECTED DATA {'path' [ ON nodename] [,...] | AS TABLE reject-table} ]
[ REJECTMAX integer ]
[ SKIP integer ]
[ SKIP BYTES integer ]
[ STREAM NAME 'streamName']
[ TRAILING NULLCOLS ]
[ TRIM 'byte' ]
[ [ WITH ] PARSER parser ([ arg=value[,...] ]) ] ]
[ NO COMMIT ]
Parameters
See Parameters.
Restrictions
See Restrictions.
Privileges
Superusers have full COPY privileges. The following requirements apply to non-superusers:
-
INSERT privilege on table
-
USAGE privilege on schema
-
USER-accessible storage location
-
Applicable READ or WRITE privileges granted to the storage location where files are read or written
COPY can specify a path to store rejected data and exceptions. If the path resolves to a storage location, the following privileges apply to non-superusers:
8.1 - Parameters
COPY parameters and their descriptions are divided into the following sections:.
COPY parameters and their descriptions are divided into the following sections:
Target options
The following options apply to the target tables and their columns:
-
LABEL
Assigns a label to a statement to identify it for profiling and debugging.
{ database | namespace }
- Name of the database or namespace that contains
table
:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default
public
. If you specify the namespace or database name, you must provide the schema name, even if the schema is public
.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
Note
COPY ignores schema
when used in CREATE EXTERNAL TABLE or CREATE FLEX EXTERNAL TABLE statements.
target-table
- The target columnar or flexible table for loading new data. Vertica loads the data into all projections that include columns from the schema table.
column-as-expression
- An expression used to compute values for the target column, which must not be of a complex type. For example:
=> COPY t(year AS TO_CHAR(k, 'YYYY')) FROM 'myfile.dat'
Use this option to transform data when it is loaded into the target database.
For details, see Transforming data during loads.
column
- Restricts the load to one or more specified columns in the table. If you omit specifying columns, COPY loads all columns by default.
Table columns that you omit from the column list are assigned their DEFAULT or SET USING values, if any; otherwise, COPY inserts NULL.
If you leave the column
parameter blank to load all columns in the table, you can use the optional parameter COLUMN OPTION to specify parsing options for specific columns.
The data file must contain the same number of columns as the COPY command's column list.
COLUMN OPTION
- Specifies load metadata for one or more columns declared in the table column list. For example, you can specify that a column has its own DELIMITER, ENCLOSED BY, or NULL AS expression, and so on. You do not have to specify every column name explicitly in the COLUMN OPTION list, but each column you specify must correspond to a column in the table column list.
Column options
Depending on how they are specified, the following COPY options can qualify specific columns or all columns. Some parser-specific options can also apply to either specific columns or all columns. See Global and column-specific options for details about these two modes.
ENFORCELENGTH
- If specified, COPY rejects data rows of type CHAR, VARCHAR, BINARY, and VARBINARY, or elements of those types in collections, if they are larger than the declared size.
By default, COPY truncates offending rows of these data types and elements of these types in collections, but does not reject the rows. For more details, see Handling Messy Data.
If a collection does not fit with all of its elements, COPY rejects the row without truncating. It does not reduce the number of elements. This can happen if each element is individually within limits but the number of elements causes the collection to exceed the maximum size for the column.
FILLER
datatype
- Reads but does not copy the data of an input column. Use filler columns to ignore input columns that do not have columns in the table. You can also use filler columns to transform data (see Examples and Transforming data during loads). Filler columns cannot be of complex types.
If a filler column has the same name as a column in the table, you must explicitly scope other uses of the name in the COPY statement. To refer to an ambiguous filler column, prefix the name with "*FILLER*":
=> COPY names(first FILLER VARCHAR, last,
full AS "*FILLER*".first||' '||last) FROM ...;
FORMAT 'format'
- Input format, one of the following:
See Binary (native) data to learn more about these formats.
When loading date/time columns, using FORMAT significantly improves load performance. COPY supports the same formats as the TO_DATE function. See Template patterns for date/time formatting.
If you specify invalid format strings, the COPY operation returns an error.
NULL [AS]
- The string representing a null value. The default is an empty string (
''
). You can specify a null value as any ASCII value in the range E'\000'
to E'\177'
inclusive. You cannot use the same character for both the DELIMITER and NULL options. For details, see Delimited data.
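The following sketch combines the FORMAT and NULL options described above; the orders table and its data are assumptions used only for illustration:
=> CREATE TABLE orders (id INT, order_date DATE);
CREATE TABLE
=> COPY orders (id, order_date FORMAT 'DD/MM/YYYY') FROM STDIN NULL AS 'N/A';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1|25/12/2023
>> 2|N/A
>> \.
=> SELECT * FROM orders;
 id | order_date
----+------------
  1 | 2023-12-25
  2 |
(2 rows)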
The following options are available for specifying source data:
LOCAL
- Loads data files (up to 4,294,967,295 files) on a client system, rather than on a cluster host. LOCAL can qualify the STDIN and path-to-data arguments. For details, see COPY LOCAL.
Restrictions: Invalid for CREATE EXTERNAL TABLE AS COPY
STDIN
- Reads data from the client's standard input instead of from a file. STDIN takes one input source only. To load multiple input sources, use path-to-data.
User must have INSERT privileges on the table and USAGE privileges on its schema.
Restrictions: Invalid for CREATE EXTERNAL TABLE AS COPY
path-to-data
- Specifies the absolute path of the file (or files) containing the data, which can be from multiple input sources.
-
If the file is stored in HDFS, path-to-data is a URI in the webhdfs scheme, typically [[s]web]hdfs://[nameservice]/path. See HDFS file system.
-
If the file is stored in an S3 bucket, path-to-data is a URI in the format s3://bucket/path. See S3 object store.
-
If the file is stored in Google Cloud Storage, path-to-data is a URI in the format gs://bucket/path. See Google Cloud Storage (GCS) object store.
-
If the file is stored in Azure Blob Storage, path-to-data is a URI in the format azb://account/container/path. See Azure Blob Storage object store.
-
If the file is on the local Linux file system or an NFS mount, path-to-data
is a local absolute file path.
path-to-data
can optionally contain wildcards to match more than one file. The file or files must be accessible to the local client or the host on which the COPY statement runs. COPY skips empty files in the file list. A file list that includes directories causes the query to fail. See Specifying where to load data from. The supported patterns for wildcards are specified in the Linux Manual Page for Glob (7), and for ADO.net platforms, through the .NET Directory.getFiles method.
You can use variables to construct the pathname as described in Using load scripts.
If path-to-data
resolves to a storage location on a local file system, and the user invoking COPY is not a superuser, the following requirements apply:
Further, if a user has privileges but is not a superuser, and invokes COPY from that storage location, Vertica ensures that symbolic links do not result in unauthorized access.
PARTITION COLUMNS column[,...]
- Columns whose values are specified in the directory structure and not in the data itself.
- For Hive-style partitioning, paths contain directory names of the form colname=value, and COPY parses values for the specified partition columns. If a value is missing or cannot be coerced to the column data type, the source path is rejected.
- For other partition schemes, column expressions specify how to extract values using the CURRENT_LOAD_SOURCE function. If the expression cannot be evaluated, the source path is rejected.
For more information, see Partitioned data.
ON
nodename
- Specifies the node on which the data to copy resides and the node that should parse the load file. If you omit
nodename
, the location of the input file defaults to the initiator node. Use nodename
to copy and parse a load file from a node other than the COPY initiator node.
Note
nodename
is invalid with STDIN and LOCAL.
ON (nodeset)
- Specifies a set of nodes on which to perform the load. The same data must be available for load on all named nodes.
nodeset
is a comma-separated list of node names in parentheses. For example:
=> COPY t FROM 'file1.txt' ON (v_vmart_node0001, v_vmart_node0002);
Vertica apportions the load among all of the specified nodes. If you also specify ERROR TOLERANCE or REJECTMAX, Vertica instead chooses a single node on which to perform the load.
If the data is available on all nodes, you usually use ON ANY NODE, which is the default for loads from HDFS and cloud object stores. However, you can use ON nodeset
to do manual load-balancing among concurrent loads.
-
ON ANY NODE
- Specifies that the data to load is available on all nodes, so COPY opens the path and parses it from any node in the cluster. For an Eon Mode database, COPY uses nodes within the same subcluster as the initiator.
Caution
The data must be the same on all nodes. If the data differs on two nodes, an incorrect or incomplete result is returned, with no error or warning.
Vertica attempts to apportion the load among several nodes if a file is large enough to benefit from apportioning. It chooses a single node if ERROR TOLERANCE or REJECTMAX is specified.
You can use a wildcard or glob (such as *.dat
) to load multiple input files, combined with the ON ANY NODE clause. If you use a glob, COPY distributes the list of files to all cluster nodes and spreads the workload.
ON ANY NODE is invalid with STDIN and LOCAL. STDIN can only use the client host, and LOCAL indicates a client node.
ON ANY NODE is the default for loads from all paths other than the Linux file system (that is, for loads from HDFS and cloud object stores).
ON EACH NODE
- Loads data from the specified path on each node. Use this option when the path exists on all nodes but the data files it contains are different on each node. If the path is not valid on all nodes, COPY loads the valid paths and produces a warning. If the path is a shared location, COPY loads it only once as for ON ANY NODE.
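For example, the following sketch (the path is hypothetical) loads a file that exists at the same location on every node but holds different data on each node:
=> COPY t FROM '/data/local_part.dat' ON EACH NODE;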
-
compression
- The input compression type, one of the following:
-
UNCOMPRESSED (default)
-
BZIP
-
GZIP
-
LZO
-
ZSTD
Input files can be of any format. If you use wildcards, all qualifying input files must be in the same format. To load different file formats, specify the format types specifically.
The following requirements and restrictions apply:
-
When using concatenated BZIP or GZIP files, verify that all source files terminate with a record terminator before concatenating them.
-
Concatenated BZIP and GZIP files are not supported for NATIVE (binary) and NATIVE VARCHAR formats.
-
LZO files are assumed to be compressed with lzop
. Vertica supports the following lzop arguments:
-
BZIP, GZIP, ZSTD, and LZO compression cannot be used with ORC format.
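For example, the following sketch (the bucket and path are hypothetical) loads all matching GZIP-compressed files and lets any node in the cluster open and parse them:
=> COPY sales FROM 's3://examplebucket/data/*.dat.gz' ON ANY NODE GZIP DELIMITER '|';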
VERTICA
- See COPY FROM VERTICA.
[WITH]
UDL-clause
[...]
- Specifies one or more user-defined load functions—one source, and optionally one or more filters and one parser, as follows:
SOURCE source( [arg=value[,...] ] )
[ FILTER filter( [arg=value[,...] ] ) ]...
[ PARSER parser( [arg=value[,...] ] ) ]
To use a flex table parser for column tables, use the PARSER parameter followed by a flex table parser argument. For supported flex table parsers, see Bulk loading data into flex tables.
Handling options
The following options control how COPY handles different contingencies:
ABORT ON ERROR
- Specifies that COPY stops if any row is rejected. The statement is rolled back and no data is loaded.
COLSIZES (integer[,...])
- Specifies column widths when loading fixed-width data. COPY requires that you specify COLSIZES when using the FIXEDWIDTH parser. COLSIZES and the list of integers must correspond to the columns listed in the table column list. For details, see Fixed-width format data.
ERROR TOLERANCE
- Specifies that COPY treats each source during execution independently when loading data. The statement is not rolled back if a single source is invalid. The invalid source is skipped and the load continues.
Using this parameter disables apportioned load.
Restrictions: Invalid for ORC or Parquet data
EXCEPTIONS 'path' [ ON nodename ] [,...]
- Exceptions, used in combination with
REJECTED DATA
, describe why each rejected row was rejected. Each exception row corresponds to a rejection row.
This option specifies the file name or absolute path of the output file for exceptions. If the file already exists, it is overwritten.
Files are written on the node or nodes executing the load or, if ON
is used, on the designated node. You cannot specify a path on a shared file system or qualify the path with ON ANY NODE
. For more information, see Saving load exceptions (EXCEPTIONS).
To collect exceptions together with the rejected rows, use REJECTED DATA AS TABLE
. You cannot use EXCEPTIONS
with REJECTED DATA AS TABLE
.
If you use this option with COPY...ON ANY NODE, you must still specify the individual nodes for the exception files, as in the following example:
EXCEPTIONS '/home/ex01.txt' on v_db_node0001,'/home/ex02.txt'
on v_db_node0002,'/home/ex03.txt' on v_db_node0003
If path
resolves to a storage location, non-superusers have the following requirements:
-
The storage location must be created with the USER option (see CREATE LOCATION).
-
The user must have READ access to the storage location where the files exist, as described in GRANT (storage location).
REJECTED DATA { 'path' [ ON nodename ] [,...] | AS TABLE reject-table }
- Specifies where to write each row that failed to load. If this option is specified, records that failed due to parsing errors are always written. Records that failed due to an error during a transformation are written only if the CopyFaultTolerantExpressions configuration parameter is set.
Vertica can write rejected data to the local file system or to a table:
-
Local file system: Vertica writes rejected row data to the specified path on the node or nodes executing the load or, if ON
is used, on the designated node. You cannot specify a path on a shared file system or qualify the path with ON ANY NODE
. To make rejected data available on all nodes, write rejections to a table.
The value of path
can be a directory or a file prefix. If there are multiple load sources, path
is always treated as a directory. If there are not multiple load sources but path
ends with '/
', or if a directory of that name already exists, it is also treated as a directory. Otherwise, path
is treated as a file prefix.
If the output file already exists, it is overwritten.
When REJECTED DATA
with a path is used with LOCAL
, the output is written to the client.
-
Table: Vertica writes rejected row data and exceptions to a table. You cannot use this option with EXCEPTIONS
.
Writing rejections to a table is generally the better option, because they can then be queried from any node. If rejected data needs to be available from outside of Vertica, you can write it to a table and then export that table to a shared file system.
For more information about both options, see Handling messy data.
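For example, the following sketch (the table names are assumptions for illustration) writes rejections to a table and then inspects them from any node:
=> COPY t FROM '/data/input.txt' DELIMITER '|' REJECTED DATA AS TABLE t_rejects;
=> SELECT row_number, rejected_data, rejected_reason FROM t_rejects;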
REJECTMAX
integer
- The maximum number of logical records that can be rejected before a load fails. For details, see Handling messy data.
REJECTMAX disables apportioned load.
SKIP
integer
- The number of records to skip in a load file. For example, you can use the SKIP option to omit table header information.
Restrictions: Invalid for ORC or Parquet data
STREAM NAME
- Supplies a COPY load stream identifier. Using a stream name helps to quickly identify a particular load. The STREAM NAME value that you supply in the load statement appears in the STREAM_NAME column of system tables LOAD_STREAMS and LOAD_SOURCES.
A valid stream name can contain any combination of alphanumeric or special characters up to 128 bytes in length.
For example:
=> COPY mytable FROM myfile
DELIMITER '|' STREAM NAME 'My stream name';
-
WITH parser
- Specifies the parser to use when bulk loading columnar tables, one of the following:
By default, COPY uses the DELIMITER parser for UTF-8 format, delimited text input data. You do not specify the DELIMITER parser directly; absence of a specific parser indicates the default.
To use a flex table parser for column tables, use the PARSER parameter followed by a flex table parser argument. For supported flex table parsers, see Bulk loading data into flex tables.
When loading into flex tables, you must use a compatible parser. For supported flex table parsers, see Bulk loading data into flex tables.
COPY LOCAL does not support the NATIVE, NATIVE VARCHAR, ORC, and PARQUET parsers.
For parser support for complex data types, see the documentation of the specific parser.
For parser details, see Data formats in Data load.
-
NO COMMIT
- Prevents the COPY statement from committing its transaction automatically when it finishes copying data. This option must be the last COPY statement parameter. For details, see Using transactions to stage a load.
This option is ignored by CREATE EXTERNAL TABLE AS COPY.
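As a sketch combining several of the handling options above (the file path is hypothetical), the following load skips a header row, stops on the first rejected row, and leaves the transaction open for review before committing:
=> COPY inventory FROM '/data/inventory.csv' DELIMITER ',' SKIP 1 ABORT ON ERROR NO COMMIT;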
Parser-specific options
The following options apply only when using specific parsers.
DELIMITED parser
DELIMITER
Indicates the single ASCII character used to separate columns within each record of a file. You can use any ASCII value in the range E'\000' to E'\177', inclusive. You cannot use the same character for both the DELIMITER and NULL parameters. For more information, see Delimited data.
Default: Vertical bar ('|').
ENCLOSED [BY]
Sets the quote character within which to enclose data, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). By default, ENCLOSED BY has no value, meaning data is not enclosed by any sort of quote character.
ESCAPE [AS]
Sets the escape character. Once set, the character following the escape character is interpreted literally, rather than as a special character. You can define an escape character using any ASCII value in the range E'\001' to E'\177', inclusive (any ASCII character except NULL: E'\000').
The COPY statement does not interpret the data it reads in as String literals. It also does not follow the same escape rules as other SQL statements (including the COPY parameters). When reading data, COPY interprets only the characters defined by these options as special values:
-
ESCAPE [AS]
-
DELIMITER
-
ENCLOSED [BY]
-
RECORD TERMINATOR
-
All COLLECTION options
Default: Backslash ('\').
NO ESCAPE
Eliminates escape-character handling. Use this option if you do not need any escape character and you want to prevent characters in your data from being interpreted as escape sequences.
RECORD TERMINATOR
- Specifies the literal character string indicating the end of a data file record. For more information about using this parameter, see Delimited data.
TRAILING NULLCOLS
- Specifies that if Vertica encounters a record with insufficient data to match the columns in the table column list, COPY inserts the missing columns with NULL values. For other information and examples, see Fixed-width format data.
COLLECTIONDELIMITER
For columns of collection types, indicates the single ASCII character used to separate elements within each collection. You can use any ASCII value in the range E'\000' to E'\177', inclusive. No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.
Default: Comma (',').
COLLECTIONOPEN
, COLLECTIONCLOSE
For columns of collection types, these options indicate the characters that mark the beginning and end of the collection. It is an error to use these characters elsewhere within the list of elements without escaping them. No COLLECTION option may have the same value as any other COLLECTION option.
Default: Square brackets ('[' and ']').
COLLECTIONNULLELEMENT
The string representing a null element value in a collection. You can specify a null value as any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII value except NULL: E'\000'). No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.
Default: 'null'
COLLECTIONENCLOSE
For columns of collection types, sets the quote character within which to enclose individual elements, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000').
No COLLECTION option may have the same value as any other COLLECTION option.
Default: double quote ('"')
FIXEDWIDTH parser
SKIP BYTES
integer
- The total number of bytes in a record to skip.
TRIM
- Trims the number of bytes you specify from a column. This option is only available when loading fixed-width data. You can set TRIM at the table level for a column, or as part of the COLUMN OPTION parameter.
8.2 - Restrictions
COPY has the following restrictions:.
COPY has the following restrictions:
Invalid data
COPY considers the following data invalid:
-
Missing columns (an input line has fewer columns than the recipient table).
-
Extra columns (an input line has more columns than the recipient table).
-
Empty columns for an INTEGER or DATE/TIME data type. If a column is empty for either of these types, COPY
does not use the default value that was defined by CREATE TABLE. However, if you do not supply a column option as part of the COPY
statement, the default value is used.
-
Incorrect representation of a data type. For example, trying to load a non-numeric value into an INTEGER column is invalid.
Constraint violations
If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.
Empty line handling
When COPY
encounters an empty line while loading data, the line is neither inserted nor rejected, but COPY
increments the line record number. Consider this behavior when evaluating rejected records. If you return a list of rejected records and COPY
encountered empty rows while loading data, the reported positions of rejected records do not account for those empty rows, as demonstrated in the following example.
The example first loads values into a table that defines the first column as INT. Note the errors on rows 3, 4, and 8:
=> \! cat -n /home/dbadmin/test.txt
1 1|A|2
2 2|B|4
3 A|D|7
4 A|E|7
5
6
7 6|A|3
8 B|A|3
The empty rows (5 and 6) shift the reporting of the error on row 8:
=> SELECT row_number, rejected_data, rejected_reason FROM test_bad;
row_number | rejected_data | rejected_reason
------------+---------------+----------------------------------------------
3 | A|D|7 | Invalid integer format 'A' for column 1 (c1)
4 | A|E|7 | Invalid integer format 'A' for column 1 (c1)
6 | B|A|3 | Invalid integer format 'B' for column 1 (c1)
(3 rows)
Compressed file errors
When loading compressed files, COPY
might abort and report an error, if the file seems to be corrupted. For example, this behavior can occur if reading the header block fails.
Disk quota
Tables and schemas can have disk quotas. If a load would violate either quota, the operation fails. For more information, see Disk quotas.
8.3 - Parsers
Vertica supports several parsers to load different types of data.
Vertica supports several parsers to load different types of data. Some parsers are for use only with flex tables, as noted.
8.3.1 - DELIMITED
Use the DELIMITED parser, which is the default, to load delimited text data using COPY.
Use the DELIMITED parser, which is the default, to load delimited text data using COPY. You can specify the delimiter, escape characters, how to handle null values, and other parameters.
The DELIMITED parser supports both apportioned load and cooperative parse.
COPY options
The following options are specific to this parser. See Parameters for other applicable options.
DELIMITER
Indicates the single ASCII character used to separate columns within each record of a file. You can use any ASCII value in the range E'\000' to E'\177', inclusive. You cannot use the same character for both the DELIMITER and NULL parameters. For more information, see Delimited data.
Default: Vertical bar ('|').
ENCLOSED [BY]
Sets the quote character within which to enclose data, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). By default, ENCLOSED BY has no value, meaning data is not enclosed by any sort of quote character.
ESCAPE [AS]
Sets the escape character. Once set, the character following the escape character is interpreted literally, rather than as a special character. You can define an escape character using any ASCII value in the range E'\001' to E'\177', inclusive (any ASCII character except NULL: E'\000').
The COPY statement does not interpret the data it reads in as String literals. It also does not follow the same escape rules as other SQL statements (including the COPY parameters). When reading data, COPY interprets only the characters defined by these options as special values:
-
ESCAPE [AS]
-
DELIMITER
-
ENCLOSED [BY]
-
RECORD TERMINATOR
-
All COLLECTION options
Default: Backslash ('\').
NO ESCAPE
Eliminates escape-character handling. Use this option if you do not need any escape character and you want to prevent characters in your data from being interpreted as escape sequences.
RECORD TERMINATOR
- Specifies the literal character string indicating the end of a data file record. For more information about using this parameter, see Delimited data.
TRAILING NULLCOLS
- Specifies that if Vertica encounters a record with insufficient data to match the columns in the table column list, COPY inserts the missing columns with NULL values. For other information and examples, see Fixed-width format data.
COLLECTIONDELIMITER
For columns of collection types, indicates the single ASCII character used to separate elements within each collection. You can use any ASCII value in the range E'\000' to E'\177', inclusive. No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.
Default: Comma (',').
COLLECTIONOPEN, COLLECTIONCLOSE
For columns of collection types, these options indicate the characters that mark the beginning and end of the collection. It is an error to use these characters elsewhere within the list of elements without escaping them. No COLLECTION option may have the same value as any other COLLECTION option.
Default: Square brackets ('[' and ']').
COLLECTIONNULLELEMENT
The string representing a null element value in a collection. You can specify a null value as any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII value except NULL: E'\000'). No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.
Default: 'null'
COLLECTIONENCLOSE
For columns of collection types, sets the quote character within which to enclose individual elements, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000').
No COLLECTION option may have the same value as any other COLLECTION option.
Default: double quote ('"')
Data types
The DELIMITED parser supports reading one-dimensional collections (arrays or sets) of scalar types.
If the total size of an array exceeds the size defined by the target table, the parser rejects the row.
Examples
The following example shows the default behavior, in which the delimiter character is '|':
=> CREATE TABLE employees (id INT, name VARCHAR(50), department VARCHAR(50));
CREATE TABLE
=> COPY employees FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 42|Sheldon Cooper|Physics
>> 17|Howard Wolowitz|Astronomy
>> \.
=> SELECT * FROM employees;
id | name | department
----+-----------------+--------------
 17 | Howard Wolowitz | Astronomy
42 | Sheldon Cooper | Physics
(2 rows)
The following example shows loading array values with the default options.
=> CREATE TABLE researchers (id INT, name VARCHAR, grants ARRAY[VARCHAR], values ARRAY[INT]);
CREATE TABLE
=> COPY researchers FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 42|Sheldon Cooper|[US-7376,DARPA-1567]|[65000,135000]
>> 17|Howard Wolowitz|[NASA-1683,NASA-7867,SPX-76]|[16700,85000,45000]
>> \.
=> SELECT * FROM researchers;
id | name | grants | values
----+-----------------+------------------------------------+---------------------
17 | Howard Wolowitz | ["NASA-1683","NASA-7867","SPX-76"] | [16700,85000,45000]
42 | Sheldon Cooper | ["US-7376","DARPA-1567"] | [65000,135000]
(2 rows)
In the following example, collections are enclosed in braces and delimited by periods, and the arrays contain null values.
=> COPY researchers FROM STDIN COLLECTIONOPEN '{' COLLECTIONCLOSE '}' COLLECTIONDELIMITER '.';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 19|Leonard|{"us-1672".null."darpa-1963"}|{16200.null.16700}
>> \.
=> SELECT * FROM researchers;
id | name | grants | values
----+-----------------+------------------------------------+---------------------
17 | Howard Wolowitz | ["NASA-1683","NASA-7867","SPX-76"] | [16700,85000,45000]
42 | Sheldon Cooper | ["US-7376","DARPA-1567"] | [65000,135000]
19 | Leonard | ["us-1672",null,"darpa-1963"] | [16200,null,16700]
(3 rows)
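The ENCLOSED BY option lets the delimiter character appear inside a string value. The following minimal sketch reuses the employees table from the first example; the row values are hypothetical:
=> COPY employees FROM STDIN ENCLOSED BY '"';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 7|Raj Koothrappali|"Astro|physics"
>> \.
=> SELECT * FROM employees WHERE id = 7;
 id |       name       |  department
----+------------------+---------------
  7 | Raj Koothrappali | Astro|physics
(1 row)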
8.3.2 - FAVROPARSER
Parses data from an Avro file.
Parses data from an Avro file. The following requirements apply:
-
Avro files must be encoded in the Avro binary serialization encoding format, described in the Apache Avro standard. The parser also supports Snappy and deflate compression.
-
FAVROPARSER does not support Avro files with separate schema files. The Avro file must include the schema.
You can load complex types in the Avro source (arrays, structs, or combinations) with strong typing or as flexible complex types. A flexible complex type is loaded into a VMap column, as in flex tables. To load complex types as VMap columns, specify a column type of LONG VARBINARY. To preserve the indexing in complex types, set flatten_maps
to false.
This parser can notify you if it finds keys in the data that are not part of the table definition. See Unmatched Keys.
When loading into a flex table, Vertica loads all data into the __raw__
(VMap) column, including complex types found in the data.
This parser does not support apportioned load or cooperative parse.
Syntax
FAVROPARSER ( [parameter=value[,...]] )
Parameters
flatten_maps
- Boolean, whether to flatten all Avro maps. Key names are concatenated with nested levels. This value is recursive and affects all data in the load.
This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.
Default: true
flatten_arrays
- Boolean, whether to flatten all Avro arrays. Key names are concatenated with nested levels. This value is recursive and affects all data in the load.
This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.
Default: false
flatten_records
- Boolean, whether to flatten all Avro records. Key names are concatenated with nested levels. This value is recursive and affects all data in the load.
This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.
Default: true
reject_on_materialized_type_error
Boolean, whether to reject a data row that contains a materialized column value that cannot be coerced into a compatible data type. If the value is false and the type cannot be coerced, the parser sets the value in that column to NULL.
If the column is an array and the data to be loaded is too large, then false sets the column value to NULL and true rejects the row.
If the column is a strongly-typed complex type, as opposed to a flexible complex type, then a type mismatch anywhere in the complex type causes the entire column to be treated as a mismatch. The parser does not partially load complex types.
Default: false
suppress_warnings
- String, which warnings to suppress:
-
unmatched_key (see Unmatched Keys)
-
true or t (suppress all warnings)
-
false or f (do not suppress warnings)
Default: false
Primitive data types
FAVROPARSER supports the following primitive data types, including as element types and field values in complex types.
Note
Vertica does not have explicit 4-byte (32-bit integer) or smaller types. Instead, Vertica encoding and compression automatically eliminate the storage overhead of values that require less than 64 bits.
Avro logical types
FAVROPARSER supports the following Avro logical types. The target column must use a Vertica data type that supports the logical type. When you attempt to load data using an invalid logical type, the logical type is ignored and the underlying Avro type is used.
AVRO Logical Type | Base Avro Type | Supported Vertica Data Types
------------------+----------------+------------------------------
decimal (0 < precision ≤ 1024, 0 ≤ scale ≤ precision) | bytes or fixed | NUMERIC, Character. Vertica rejects the value if the Avro precision setting is greater than the precision setting for the target column, or, for fixed types, if the precision value is greater than what is allowed by the size attribute. If the data type for the target column uses the default precision setting, the precision setting in the Avro schema overrides the default.
date | integer | DATE, Character
time-micros | long | TIME/TIMETZ, Character. The time logical type does not provide a time zone value; for target columns that use the TIMETZ data type, Vertica uses UTC as the default.
time-millis | int | Same as time-micros.
timestamp-micros | long | TIMESTAMP/TIMESTAMPTZ, TIME/TIMETZ. For timestamp-millis only, the timezone is included and is represented as an offset to UTC. Additionally, the millisecond values are right-extended with padded zeros.
timestamp-millis | long | Same as timestamp-micros.
duration | fixed | INTERVAL, Character
Avro complex data types
The Avro format supports several complex data types. When loading into strongly-typed columns, you can use the ROW and ARRAY types to represent them. For example, Avro Record and Enums are structs (ROWs); see the Avro specification.
You can use ARRAY[ROW] to match an Avro map. You must name the ROW fields key and value. These are the names that the Avro format uses for those fields in the data, and the parser relies on field names to match data to table columns.
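For example, an Avro map from string keys to double values could be received with a table definition along these lines (the table and column names are hypothetical):
=> CREATE TABLE product_prices (
    id INT,
    prices ARRAY[ROW(key VARCHAR(50), value FLOAT)]
   );
CREATE TABLE
=> COPY product_prices FROM '/data/prices.avro' PARSER FAVROPARSER();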
If the total size of an array exceeds the size defined by the target table, the parser sets the value to null.
When loading into flex tables or using flexible complex types, this parser handles Avro complex types as follows:
Record
The name of each field is used as a virtual column name. If flatten_records
is true and several nesting levels are present, Vertica concatenates the record names to create the key name.
Map
The value of each map key is used as a virtual column name. If flatten_maps
is true and several nesting levels are present, Vertica concatenates the key names to create the key name.
Enum
Vertica treats Avro Enums like records, with the name of the Enum as the key and the value as the value.
Array
Vertica treats Avro Arrays as key/value pairs. By default, the index of each element is the key. In the following example, product_detail
is a Record with a field, product_category
, that is an Array:
=> CREATE FLEX TABLE products;
CREATE TABLE
=> COPY products FROM :datafile WITH PARSER FAVROPARSER();
Rows Loaded
-------------
2
(1 row)
=> SELECT MAPTOSTRING(__raw__) FROM products ORDER BY __identity__;
maptostring
--------------------------------------------------------------------------------
{
"__name__": "Order",
"customer_id": "111222",
"order_details": {
"0.__name__": "OrderDetail",
"0.product_detail.__name__": "Product",
"0.product_detail.price": "46.21",
"0.product_detail.product_category": {
"0": "electronics",
"1": "printers",
"2": "computers"
},
"0.product_detail.product_description": "hp printer X11ew description :\
P",
"0.product_detail.product_hash": "\u0000\u0001\u0002\u0003\u0004",
"0.product_detail.product_id": "999012",
"0.product_detail.product_map.one": "1.1",
"0.product_detail.product_map.two": "1.1",
"0.product_detail.product_name": "hp printer X11ew",
"0.product_detail.product_status": "ONLY_FEW_LEFT",
"0.quantity": "3",
"0.total": "354.34"
},
"order_id": "2389646",
"total": "132.43"
}
...
If flatten_arrays
is true and several nesting levels are present, Vertica concatenates the indices to create the key name.
=> COPY products FROM :datafile WITH PARSER FAVROPARSER(flatten_arrays=true);
Rows Loaded
-------------
2
(1 row)
=> SELECT MAPTOSTRING(__raw__) FROM products ORDER BY __identity__;
maptostring
--------------------------------------------------------------------------------
{
"__name__": "Order",
"customer_id": "111222",
"order_details.0.__name__": "OrderDetail",
"order_details.0.product_detail.__name__": "Product",
"order_details.0.product_detail.price": "46.21",
"order_details.0.product_detail.product_category.0": "electronics",
"order_details.0.product_detail.product_category.1": "printers",
"order_details.0.product_detail.product_category.2": "computers",
"order_details.0.product_detail.product_description": "hp printer X11ew des\
cription :P",
"order_details.0.product_detail.product_hash": "\u0000\u0001\u0002\u0003\u0\
004",
"order_details.0.product_detail.product_id": "999012",
"order_details.0.product_detail.product_map.one": "1.1",
"order_details.0.product_detail.product_map.two": "1.1",
"order_details.0.product_detail.product_name": "hp printer X11ew",
"order_details.0.product_detail.product_status": "ONLY_FEW_LEFT",
"order_details.0.quantity": "3",
"order_details.0.total": "354.34",
"order_id": "2389646",
"total": "132.43"
}
...
Union
Vertica treats Avro Unions as arrays.
Unmatched keys
Data being loaded can contain keys that are not part of the table definition. If you are loading into a flex table (or a flexible complex type column), no data is lost. For a table with strongly-defined columns, however, new keys cannot be loaded because the table does not have a place to put them.
This parser emits warnings if it finds new keys and if both of the following are true:
-
The target table is not a flex table.
-
The new key is not part of the table definition.
New keys are logged in the UDX_EVENTS system table. If a new key is a complex type with nested keys, only the top-level key is logged. When you see a warning about unmatched keys, you can query this table and then use ALTER TABLE to modify your table definition for future loads.
Querying an external table loads data and thus can trigger these warnings. To prevent them, set the suppress_warnings
parameter to 'unmatched_key' or 'true':
=> CREATE EXTERNAL TABLE restaurants(
name VARCHAR(50),
menu ARRAY[ROW(item VARCHAR(50), price NUMERIC(8,2)),100])
AS COPY FROM '/data/rest.avro'
PARSER FAVROPARSER(suppress_warnings='unmatched_key');
Examples
This example shows how to create and load a flex table with Avro data using favroparser
. After loading the data, you can query virtual columns:
=> CREATE FLEX TABLE avro_basic();
CREATE TABLE
=> COPY avro_basic FROM '/home/dbadmin/data/weather.avro' PARSER FAVROPARSER();
Rows Loaded
-------------
5
(1 row)
=> SELECT station, temp, time FROM avro_basic;
station | temp | time
---------+------+---------------
mohali | 0 | -619524000000
lucknow | 22 | -619506000000
norwich | -11 | -619484400000
ams | 111 | -655531200000
baddi | 78 | -655509600000
(5 rows)
For more information, see Avro data.
8.3.3 - FCEFPARSER
Parses ArcSight Common Event Format (CEF) log files.
Parses ArcSight Common Event Format (CEF) log files. This parser loads values directly into any table column with a column name that matches a source data key. The parser stores the data loaded into a flex table in a single VMap.
This parser is for use in Flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.
Syntax
FCEFPARSER ( [parameter-name='value'[,...]] )
Parameters
delimiter
- Single-character delimiter.
Default: ' '
record_terminator
- Single-character record terminator.
Default: newline
trim
- Boolean, specifies whether to trim white space from header names and key values.
Default: true
reject_on_unescaped_delimiter
- Boolean, specifies whether to reject rows containing unescaped delimiters. The CEF standard does not permit them.
Default: false
Examples
The following example illustrates creating a sample flex table for CEF data and loading it with the fcefparser.
-
Create a flex table cefdata
:
=> create flex table cefdata();
CREATE TABLE
-
Load some basic CEF data, using the flex parser fcefparser
:
=> copy cefdata from stdin parser fcefparser();
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> CEF:0|ArcSight|ArcSight|2.4.1|machine:20|New alert|High|
>> \.
-
Use the maptostring()
function to view the contents of your cefdata
flex table:
=> select maptostring(__raw__) from cefdata;
maptostring
-------------------------------------------------------------
{
"deviceproduct" : "ArcSight",
"devicevendor" : "ArcSight",
"deviceversion" : "2.4.1",
"name" : "New alert",
"severity" : "High",
"signatureid" : "machine:20",
"version" : "0"
}
(1 row)
-
Select some virtual columns from the cefdata
flex table:
=> select deviceproduct, severity, deviceversion from cefdata;
deviceproduct | severity | deviceversion
---------------+----------+---------------
ArcSight | High | 2.4.1
(1 row)
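Because the parser loads values directly into any real column whose name matches a source data key, you can also define some of the CEF keys as real columns. A minimal sketch, using an assumed table name:
=> create flex table cefdata2(deviceversion varchar, severity varchar);
CREATE TABLE
=> copy cefdata2 from stdin parser fcefparser();
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> CEF:0|ArcSight|ArcSight|2.4.1|machine:20|New alert|High|
>> \.
=> select deviceversion, severity from cefdata2;
 deviceversion | severity
---------------+----------
 2.4.1         | High
(1 row)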
For more information, see Common event format (CEF) data.
8.3.4 - FCSVPARSER
Parses CSV format (comma-separated values) data.
Parses CSV format (comma-separated values) data. Use this parser to load CSV data into columnar, flex, and hybrid tables. All data must be encoded in Unicode UTF-8 format. The fcsvparser
parser supports the RFC 4180 standard for CSV data, and other options, to accommodate variations in CSV file format definitions. Invalid records are rejected. For more information about data formats, see Handling Non-UTF-8 input.
This parser is for use in Flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.
Syntax
FCSVPARSER ( [parameter='value'[,...]] )
Parameters
type
- The default parameter values for the parser, one of the following strings:
-
rfc4180
-
traditional
You do not have to use the type parameter when loading data that conforms to the RFC 4180 standard (such as MS Excel files). See Loading CSV data for the RFC4180 default parameters, and other options you can specify for traditional CSV files.
Default: RFC4180
delimiter
- A single-character value used to separate fields in the CSV data.
Default: ,
(for rfc4180
and traditional
)
escape
- A single-character value used as an escape character to interpret the next character in the data literally.
Default:
-
rfc4180: "
-
traditional: \
enclosed_by
- A single-character value. Use enclosed_by to include a value that is identical to the delimiter, but should be interpreted literally. For example, if the data delimiter is a comma (,) and you want to use a comma within the data ("my name is jane, and his is jim").
Default: "
record_terminator
- A single-character value used to specify the end of a record.
Default:
-
rfc4180: \n
-
traditional: \r\n
header
- Boolean, specifies whether to use the first row of data as a header column. When header=true (default) and no header exists, fcsvparser uses a default column heading. The default header consists of ucoln, where n is the column offset number, starting with 0 for the first column. You can specify custom column heading names using the header_names parameter, described next.
If you specify header=false
, the fcsvparser
parses the first row of input as data, rather than as column headers.
Default: true
header_names
- A list of column header names, delimited by the character defined by the parser's delimiter parameter. Use this parameter to specify header names in a CSV file without a header row, or to override the column names present in the CSV source. To override one or more existing column names, specify the header names to use. This parameter overrides any header row in the data.
trim
- Boolean, specifies whether to trim white space from header names and key values.
Default: true
omit_empty_keys
- Boolean, specifies how the parser handles header keys without values. If true, keys with an empty value in the
header
row are not loaded.
Default: false
reject_on_duplicate
- Boolean, specifies whether to ignore duplicate records (false), or to reject duplicates (true). In either case, the load continues.
Default: false
reject_on_empty_key
- Boolean, specifies whether to reject any row containing a key without a value.
Default: false
reject_on_materialized_type_error
- Boolean, specifies whether to reject any materialized column value that the parser cannot coerce into a compatible data type. See Loading CSV data.
Default: false
Examples
The following example loads data that complies with RFC 4180:
=> CREATE TABLE sample(a INT, b VARCHAR, c INT);
CREATE TABLE
=> COPY sample FROM stdin PARSER FCSVPARSER(header='false');
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 10,10,20
>> 10,"10",30
>> 10,"20""5",90
>> \.
=> SELECT * FROM sample;
a | b | c
----+------+----
10 | 10 | 20
10 | 10 | 30
10 | 20"5 | 90
(3 rows)
Note the value 20"5
, which came from the input "20""5"
. Per RFC 4180, to include a double-quote character in a string, you must double-quote the entire string and then escape the double-quote with another double-quote.
For more information and examples using other parameters of this parser, see Loading CSV data.
8.3.5 - FDELIMITEDPAIRPARSER
Parses delimited data files.
Parses delimited data files. This parser provides a subset of the functionality in the parser fdelimitedparser
. Use the fdelimitedpairparser
when the data you are loading specifies pairs of column names with data in each row.
This parser is for use in Flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.
Syntax
FDELIMITEDPAIRPARSER ( [parameter-name='value'[,...]] )
Parameters
delimiter
- Specifies a single-character delimiter.
Default: ' '
record_terminator
- Specifies a single-character record terminator.
Default: newline
trim
- Boolean, specifies whether to trim white space from header names and key values.
Default: true
Examples
The following example illustrates creating a sample flex table for simple delimited data, with two real columns, eventId
and priority
.
-
Create a table:
=> create flex table CEFData(eventId int default(eventId::int), priority int default(priority::int) );
CREATE TABLE
-
Load a sample delimited OpenText ArcSight log file into the CEFData
table, using the fdelimitedpairparser:
=> copy CEFData from '/home/release/kmm/flextables/sampleArcSight.txt' parser fdelimitedpairparser();
Rows Loaded | 200
-
After loading the sample data file, use maptostring()
to display the virtual columns in the __raw__
column of CEFData
:
=> select maptostring(__raw__) from CEFData limit 1;
                        maptostring
-----------------------------------------------------------
{
"agentassetid" : "4-WwHuD0BABCCQDVAeX21vg==",
"agentzone" : "3083",
"agt" : "265723237",
"ahost" : "svsvm0176",
"aid" : "3tGoHuD0BABCCMDVAeX21vg==",
"art" : "1099267576901",
"assetcriticality" : "0",
"at" : "snort_db",
"atz" : "America/Los_Angeles",
"av" : "5.3.0.19524.0",
"cat" : "attempted-recon",
"categorybehavior" : "/Communicate/Query",
"categorydevicegroup" : "/IDS/Network",
"categoryobject" : "/Host",
"categoryoutcome" : "/Attempt",
"categorysignificance" : "/Recon",
"categorytechnique" : "/Scan",
"categorytupledescription" : "An IDS observed a scan of a host.",
"cnt" : "1",
"cs2" : "3",
"destinationgeocountrycode" : "US",
"destinationgeolocationinfo" : "Richardson",
"destinationgeopostalcode" : "75082",
"destinationgeoregioncode" : "TX",
"destinationzone" : "3133",
"device product" : "Snort",
"device vendor" : "Snort",
"device version" : "1.8",
"deviceseverity" : "2",
"dhost" : "198.198.121.200",
"dlat" : "329913940429",
"dlong" : "-966644973754",
"dst" : "3334896072",
"dtz" : "America/Los_Angeles",
"dvchost" : "unknown:eth1",
"end" : "1364676323451",
"eventid" : "1219383333",
"fdevice product" : "Snort",
"fdevice vendor" : "Snort",
"fdevice version" : "1.8",
"fdtz" : "America/Los_Angeles",
"fdvchost" : "unknown:eth1",
"lblstring2label" : "sig_rev",
"locality" : "0",
"modelconfidence" : "0",
"mrt" : "1364675789222",
"name" : "ICMP PING NMAP",
"oagentassetid" : "4-WwHuD0BABCCQDVAeX21vg==",
"oagentzone" : "3083",
"oagt" : "265723237",
"oahost" : "svsvm0176",
"oaid" : "3tGoHuD0BABCCMDVAeX21vg==",
"oat" : "snort_db",
"oatz" : "America/Los_Angeles",
"oav" : "5.3.0.19524.0",
"originator" : "0",
"priority" : "8",
"proto" : "ICMP",
"relevance" : "10",
"rt" : "1099267573000",
"severity" : "8",
"shost" : "198.198.104.10",
"signature id" : "[1:469]",
"slat" : "329913940429",
"slong" : "-966644973754",
"sourcegeocountrycode" : "US",
"sourcegeolocationinfo" : "Richardson",
"sourcegeopostalcode" : "75082",
"sourcegeoregioncode" : "TX",
"sourcezone" : "3133",
"src" : "3334891530",
"start" : "1364676323451",
"type" : "0"
}
(1 row)
-
Select the eventID
and priority
real columns, along with two virtual columns, atz
and destinationgeoregioncode
:
=> select eventID, priority, atz, destinationgeoregioncode from CEFData limit 10;
eventID | priority | atz | destinationgeoregioncode
------------+----------+---------------------+--------------------------
1218325417 | 5 | America/Los_Angeles |
1219383333 | 8 | America/Los_Angeles | TX
1219533691 | 9 | America/Los_Angeles | TX
1220034458 | 5 | America/Los_Angeles | TX
1220034578 | 9 | America/Los_Angeles |
1220067119 | 5 | America/Los_Angeles | TX
1220106960 | 5 | America/Los_Angeles | TX
1220142122 | 5 | America/Los_Angeles | TX
1220312009 | 5 | America/Los_Angeles | TX
1220321355 | 5 | America/Los_Angeles | CA
(10 rows)
8.3.6 - FDELIMITEDPARSER
Parses data using a delimiter character to separate values.
Parses data using a delimiter character to separate values. The fdelimitedparser
loads delimited data, storing it in a single-value VMap.
This parser is for use in Flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.
Note
By default, fdelimitedparser
treats empty fields as NULL
, rather than as an empty string (''
). This behavior makes casting easier. Casting a NULL to an integer (NULL::int
) is valid, while casting an empty string to an integer (''::int
) is not. If required, use the treat_empty_val_as_null
parameter to change the default behavior of fdelimitedparser
.
Syntax
FDELIMITEDPARSER ( [parameter-name='value'[,...]] )
Parameters
delimiter
- Single character delimiter.
Default: |
record_terminator
- Single-character record terminator.
Default: \n
trim
- Boolean, specifies whether to trim white space from header names and key values.
Default: true
header
- Boolean, specifies that a header column exists. The parser uses
col###
for the column names if you use this parameter but no header exists.
Default: true
omit_empty_keys
- Boolean, specifies how the parser handles header keys without values. If
omit_empty_keys=true
, keys with an empty value in the header
row are not loaded.
Default: false
reject_on_duplicate
- Boolean, specifies whether to ignore duplicate records (
false
), or to reject duplicates (true
). In either case, the load continues.
Default: false
reject_on_empty_key
- Boolean, specifies whether to reject any row containing a key without a value.
Default: false
reject_on_materialized_type_error
- Boolean, specifies whether to reject any row value for a materialized column that the parser cannot coerce into a compatible data type. See Using flex table parsers.
Default: false
treat_empty_val_as_null
- Boolean, specifies that empty fields become
NULLs
, rather than empty strings (''
).
Default: true
Examples
-
Create a flex table for delimited data:
=> CREATE FLEX TABLE delim_flex ();
CREATE TABLE
-
Use the fdelimitedparser
to load some delimited data from STDIN
, specifying a comma (,
) column delimiter:
=> COPY delim_flex FROM STDIN parser fdelimitedparser (delimiter=',');
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> deviceproduct, severity, deviceversion
>> ArcSight, High, 2.4.1
>> \.
You can now query virtual columns in the delim_flex
flex table:
=> SELECT deviceproduct, severity, deviceversion from delim_flex;
deviceproduct | severity | deviceversion
---------------+----------+---------------
ArcSight | High | 2.4.1
(1 row)
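By default, empty fields load as NULL. To load them as empty strings instead, set treat_empty_val_as_null to false, as in this minimal sketch (the table name is an assumption):
=> CREATE FLEX TABLE delim_flex2();
CREATE TABLE
=> COPY delim_flex2 FROM STDIN PARSER fdelimitedparser(delimiter=',', treat_empty_val_as_null=false);
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> deviceproduct,severity,deviceversion
>> ArcSight,,2.4.1
>> \.
With this setting, the empty severity field is stored as an empty string ('') rather than NULL.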
8.3.7 - FJSONPARSER
Parses and loads a JSON file.
Parses and loads a JSON file. This file can contain either repeated JSON data objects (including nested maps), or an outer list of JSON elements.
When loading into a flex or hybrid table, the parser stores the JSON data in a single-value VMap. When loading into a hybrid or columnar table, the parser loads data directly into any table column with a column name that matches a key in the JSON source data.
You can load complex types in the JSON source (arrays, structs, or combinations) with strong typing or as flexible complex types. A flexible complex type is loaded into a VMap column, as in flex tables. To load complex types as VMap columns, specify a column type of LONG VARBINARY. To preserve the indexing in complex types, set flatten_maps
to false.
This parser can notify you if it finds keys in the data that are not part of the table definition. See Unmatched Keys.
FJSONPARSER supports cooperative parse only if record_terminator
is specified. It does not support apportioned load.
Syntax
FJSONPARSER ( [parameter=value[,...]] )
Parameters
flatten_maps
- Boolean, whether to flatten sub-maps within the JSON data, separating map levels with a period (
.
). This value affects all data in the load, including nested maps.
This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.
Default: true
flatten_arrays
- Boolean, whether to convert lists to sub-maps with integer keys. When lists are flattened, key names are concatenated as for maps. Lists are not flattened by default. This value affects all data in the load, including nested lists.
This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.
Default: false
reject_on_duplicate
- Boolean, whether to ignore duplicate records (false), or to reject duplicates (true). In either case, the load continues.
Default: false
reject_on_empty_key
- Boolean, whether to reject any row containing a field key without a value.
Default: false
omit_empty_keys
- Boolean, whether to omit any field key from the data that does not have a value. Other fields in the same record are loaded.
Default: false
record_terminator
- When set, any invalid JSON records are skipped and parsing continues with the next record. Records must be terminated uniformly. For example, if your input file has JSON records terminated by newline characters, set this parameter to
E'\n'
. If any invalid JSON records exist, parsing continues after the next record_terminator
.
Even if the data does not contain invalid records, specifying an explicit record terminator can improve load performance by allowing cooperative parse and apportioned load to operate more efficiently.
When you omit this parameter, parsing ends at the first invalid JSON record.
reject_on_materialized_type_error
Boolean, whether to reject a data row that contains a materialized column value that cannot be coerced into a compatible data type. If the value is false and the type cannot be coerced, the parser sets the value in that column to NULL.
If the column is an array and the data to be loaded is too large, then false sets the column value to NULL and true rejects the row.
If the column is a strongly-typed complex type, as opposed to a flexible complex type, then a type mismatch anywhere in the complex type causes the entire column to be treated as a mismatch. The parser does not partially load complex types.
Default: false
start_point
- String, the name of a key in the JSON load data at which to begin parsing. The parser ignores all data before the
start_point
value. The value is loaded for each object in the file. The parser processes data after the first instance, and up to the second, ignoring any remaining data.
start_point_occurrence
- Integer, the nth occurrence of the value you specify with
start_point
. Use in conjunction with start_point
when the data has multiple start values and you know the occurrence at which to begin parsing.
Default: 1
suppress_nonalphanumeric_key_chars
- Boolean, whether to suppress non-alphanumeric characters in JSON key values. The parser replaces these characters with an underscore (
_
) when this parameter is true.
Default: false
key_separator
- Character for the parser to use when concatenating key names.
Default: period (.
)
suppress_warnings
- String, which warnings to suppress:
-
unmatched_key (see Unmatched Keys)
-
true or t (suppress all warnings)
-
false or f (do not suppress warnings)
Default: false
Data types
If the total size of an array exceeds the size defined by the target table, the parser sets the value to null.
Unmatched keys
Data being loaded can contain keys that are not part of the table definition. If you are loading into a flex table (or a flexible complex type column), no data is lost. For a table with strongly-defined columns, however, new keys cannot be loaded because the table does not have a place to put them.
This parser emits warnings if it finds new keys and if both of the following are true:
-
The target table is not a flex table.
-
The new key is not part of the table definition.
New keys are logged in the UDX_EVENTS system table. If a new key is a complex type with nested keys, only the top-level key is logged. When you see a warning about unmatched keys, you can query this table and then use ALTER TABLE to modify your table definition for future loads.
Querying an external table loads data and thus can trigger these warnings. To prevent them, set the suppress_warnings
parameter to 'unmatched_key' or 'true':
=> CREATE EXTERNAL TABLE restaurants(
name VARCHAR(50),
menu ARRAY[ROW(item VARCHAR(50), price NUMERIC(8,2)),100])
AS COPY FROM '/data/rest.json'
PARSER FJSONPARSER(suppress_warnings='unmatched_key');
Examples
The following example loads JSON data from STDIN using the default parameters:
=> CREATE TABLE people(age INT, name VARCHAR);
CREATE TABLE
=> COPY people FROM STDIN PARSER FJSONPARSER();
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> {"age": 5, "name": "Tim"}
>> {"age": 3}
>> {"name": "Fred"}
>> {"name": "Bob", "age": 10}
>> \.
=> SELECT * FROM people;
age | name
-----+------
| Fred
10 | Bob
5 | Tim
3 |
(4 rows)
The following example uses the reject_on_duplicate
parameter to reject duplicate values:
=> CREATE FLEX TABLE json_dupes();
CREATE TABLE
=> COPY json_dupes FROM stdin PARSER fjsonparser(reject_on_duplicate=true)
exceptions '/home/dbadmin/load_errors/json_e.out'
rejected data '/home/dbadmin/load_errors/json_r.out';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> {"a":"1","a":"2","b":"3"}
>> \.
=> \!cat /home/dbadmin/load_errors/json_e.out
COPY: Input record 1 has been rejected (Rejected by user-defined parser).
Please see /home/dbadmin/load_errors/json_r.out, record 1 for the rejected record.
COPY: Loaded 0 rows, rejected 1 rows.
The following example loads array data:
$ cat addrs.json
{"number": 301, "street": "Grant", "attributes": [1, 2, 3, 4]}
=> CREATE EXTERNAL TABLE customers(number INT, street VARCHAR, attributes ARRAY[INT])
AS COPY FROM 'addrs.json' PARSER fjsonparser();
=> SELECT number, street, attributes FROM customers;
 number | street | attributes
--------+--------+------------
    301 | Grant  | [1,2,3,4]
(1 row)
The following example loads a flexible complex type, rejecting rows that have empty keys within the nested records. Notice that while the data has two restaurants, one has a key name that is an empty string. This one is rejected:
$ cat rest1.json
{
"name" : "Bob's pizzeria",
"cuisine" : "Italian",
"location_city" : ["Cambridge", "Pittsburgh"],
"menu" : [{"item" : "cheese pizza", "" : "$8.25"},
{"item" : "spinach pizza", "price" : "$10.50"}]
}
{
"name" : "Bakersfield Tacos",
"cuisine" : "Mexican",
"location_city" : ["Pittsburgh"],
"menu" : [{"item" : "veggie taco", "price" : "$9.95"},
{"item" : "steak taco", "price" : "$10.95"}]
}
=> CREATE TABLE rest (name VARCHAR, cuisine VARCHAR, location_city LONG VARBINARY, menu LONG VARBINARY);
=> COPY rest FROM '/data/rest1.json'
PARSER fjsonparser(flatten_maps=false, reject_on_empty_key=true);
Rows Loaded
------------
1
(1 row)
=> SELECT maptostring(location_city), maptostring(menu) FROM rest;
maptostring | maptostring
---------------------------+-------------------------------------------------------
{
"0": "Pittsburgh"
} | {
"0": {
"item": "veggie taco",
"price": "$9.95"
},
"1": {
"item": "steak taco",
"price": "$10.95"
}
}
(1 row)
To instead load partial data, use omit_empty_keys
to bypass the missing keys while loading everything else:
=> COPY rest FROM '/data/rest1.json'
PARSER fjsonparser(flatten_maps=false, omit_empty_keys=true);
Rows Loaded
-------------
2
(1 row)
=> SELECT maptostring(location_city), maptostring(menu) from rest;
maptostring | maptostring
-------------------------------------------------+---------------------------------
{
"0": "Pittsburgh"
} | {
"0": {
"item": "veggie taco",
"price": "$9.95"
},
"1": {
"item": "steak taco",
"price": "$10.95"
}
}
{
"0": "Cambridge",
"1": "Pittsburgh"
} | {
"0": {
"item": "cheese pizza"
},
"1": {
"item": "spinach pizza",
"price": "$10.50"
}
}
(2 rows)
To instead load this data with strong typing, define the complex types in the table:
=> CREATE EXTERNAL TABLE restaurants
(name VARCHAR, cuisine VARCHAR,
location_city ARRAY[VARCHAR(80)],
menu ARRAY[ ROW(item VARCHAR(80), price FLOAT) ]
)
AS COPY FROM '/data/rest1.json' PARSER fjsonparser();
=> SELECT * FROM restaurants;
name | cuisine | location_city | \
menu
-------------------+---------+----------------------------+--------------------\
--------------------------------------------------------
Bob's pizzeria | Italian | ["Cambridge","Pittsburgh"] | [{"item":"cheese pi\
zza","price":0.0},{"item":"spinach pizza","price":0.0}]
Bakersfield Tacos | Mexican | ["Pittsburgh"] | [{"item":"veggie ta\
co","price":0.0},{"item":"steak taco","price":0.0}]
(2 rows)
In the following example, the data contains two new fields. One is a top-level field (a new column), and the other is a new field on an existing struct. The new fields are recorded in the UDX_EVENTS system table:
=> COPY rest FROM '/data/rest2.json' PARSER FJSONPARSER();
WARNING 10596: Warning in UDx call in user-defined object [FJSONParser], code: 0, message:
Data source contained keys which did not match table schema
HINT: SELECT key, sum(num_instances) FROM v_monitor.udx_events WHERE event_type = 'UNMATCHED_KEY' GROUP BY key
Rows Loaded
-------------
2
(1 row)
=> SELECT key, SUM(num_instances) FROM v_monitor.UDX_EVENTS
WHERE event_type = 'UNMATCHED_KEY' GROUP BY key;
key | SUM
------------------------+-----
chain | 1
menu.elements.calories | 7
(2 rows)
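The start_point parameter is useful when the records of interest are nested under a single key. The following minimal sketch assumes a hypothetical file in which the records to load appear under the data key:
$ cat items.json
{"id": "0001", "data": [{"item": "cake", "price": 3.95}, {"item": "donut", "price": 0.79}]}
=> CREATE FLEX TABLE items();
CREATE TABLE
=> COPY items FROM '/data/items.json' PARSER FJSONPARSER(start_point='data');
 Rows Loaded
-------------
           2
(1 row)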
For other examples, see JSON data.
8.3.8 - FREGEXPARSER
Parses a regular expression, matching columns to the contents of the named regular expression groups.
Parses a regular expression, matching columns to the contents of the named regular expression groups.
This parser is for use in Flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.
Syntax
FREGEXPARSER ( pattern='value' [, parameter-name='value'[,...]] )
Parameters
pattern
- Specifies the regular expression of data to match.
Default: Empty string (""
)
use_jit
- Boolean, specifies whether to use just-in-time compiling when parsing the regular expression.
Default: false
record_terminator
- Specifies the character used to separate input records.
Default: \n
logline_column
- A string that captures the destination column containing the full string that the regular expression matched.
Default: Empty string (""
)
Example
These examples use the following regular expression, which searches for information that includes the timestamp
, date
, thread_name
, and thread_id
strings.
Caution
For display purposes, this sample regular expression adds new line characters to split long lines of text. To use this expression in a query, first copy and edit the example to remove any new line characters.
This example expression loads any thread_id
hex value, regardless of whether it has a 0x
prefix, (<thread_id>(?:0x)?[0-9a-f]+)
.
'^(?<time>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d+)
(?<thread_name>[A-Za-z ]+):(?<thread_id>(?:0x)?[0-9a-f]+)
-?(?<transaction_id>[0-9a-f])?(?:[(?<component>\w+)]
\<(?<level>\w+)\> )?(?:<(?<elevel>\w+)> @[?(?<enode>\w+)]?: )
?(?<text>.*)'
-
Create a flex table (vlog1) to contain the contents of a Vertica log file. For this example, we made a copy of a log file as /home/dbadmin/tempdat/KMvertica.log:
=> CREATE FLEX TABLE vlog1();
CREATE TABLE
-
Use the fregexparser
with the sample regular expression to load data from the log file. Be sure to remove any newline characters before using the expression shown here:
=> COPY vlog1 FROM '/home/dbadmin/tempdat/KMvertica.log'
PARSER FREGEXPARSER(pattern='^(?<time>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d+)
(?<thread_name>[A-Za-z ]+):(?<thread_id>(?:0x)?[0-9a-f]+)
-?(?<transaction_id>[0-9a-f])?(?:[(?<component>\w+)]
\<(?<level>\w+)\> )?(?:<(?<elevel>\w+)> @[?(?<enode>\w+)]?: )
?(?<text>.*)'
);
Rows Loaded
-------------
31049
(1 row)
-
After successfully loading data, use the MAPTOSTRING() function with the table's __raw__
column. The four rows (LIMIT 4
) that the query returns are regular expression results of the KMvertica.log
file, parsed with fregexparser
. The output shows thread_id
values with a preceding 0x
or without:
=> SELECT maptostring(__raw__) FROM vlog1 LIMIT 4;
maptostring
-------------------------------------------------------------------------------------
{
"text" : " [Init] <INFO> Log /home/dbadmin/VMart/v_vmart_node0001_catalog/vertica.log
opened; #2",
"thread_id" : "0x7f2157e287c0",
"thread_name" : "Main",
"time" : "2017-03-21 23:30:01.704"
}
{
"text" : " [Init] <INFO> Processing command line: /opt/vertica/bin/vertica -D
/home/dbadmin/VMart/v_vmart_node0001_catalog -C VMart -n v_vmart_node0001 -h
10.20.100.247 -p 5433 -P 4803 -Y ipv4",
"thread_id" : "0x7f2157e287c0",
"thread_name" : "Main",
"time" : "2017-03-21 23:30:01.704"
}
{
"text" : " [Init] <INFO> Starting up Vertica Analytic Database v8.1.1-20170321",
"thread_id" : "7f2157e287c0",
"thread_name" : "Main",
"time" : "2017-03-21 23:30:01.704"
}
{
"text" : " [Init] <INFO> Compiler Version: 4.8.2 20140120 (Red Hat 4.8.2-15)",
"thread_id" : "7f2157e287c0",
"thread_name" : "Main",
"time" : "2017-03-21 23:30:01.704"
}
(4 rows)
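As with other flex parsers, you can also query the captured groups directly as virtual columns. A minimal sketch based on the output above:
=> SELECT time, thread_name, thread_id FROM vlog1 LIMIT 2;
          time           | thread_name |   thread_id
-------------------------+-------------+----------------
 2017-03-21 23:30:01.704 | Main        | 0x7f2157e287c0
 2017-03-21 23:30:01.704 | Main        | 0x7f2157e287c0
(2 rows)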
8.3.9 - ORC
Use the ORC clause with the COPY FROM statement to load data in the ORC format.
Use the ORC clause with the COPY statement to load data in the ORC format. When loading data into Vertica, you can read all primitive types, UUIDs, and complex types.
By default, the ORC parser uses strong schema matching, meaning that columns in the data must exactly match the columns in the table using the data. You can optionally use Loose (soft) Schema Matching.
If the table definition includes columns of primitive types and those columns are not in the data, the parser fills those columns with NULL. If the table definition includes columns of complex types, those columns must be present in the data.
This parser does not support apportioned load or cooperative parse. However, it enables parallel loading by dividing loads into individual sources, such as row groups or stripes, which are processed concurrently across multiple Vertica nodes using multi-threading. The number of threads is automatically determined based on the available memory in the resource pool.
Syntax
ORC ( [ parameter=value[,...] ] )
Parameters
All parameters are optional.
hive_partition_cols
- Comma-separated list of columns that are partition columns in the data.
Deprecated
Instead, use COPY PARTITION COLUMNS. See
Partitioned data. If you use both this parameter and PARTITION COLUMNS, COPY ignores the parameter.
allow_no_match
- Whether to accept a path containing a glob with no matching files and report zero rows in query results. If this parameter is not set, Vertica returns an error if the path in the FROM clause does not match at least one file.
do_soft_schema_match_by_name
- Whether to enable loose schema matching (true) instead of the strict matching based on column order in the table definition and ORC file (false, default). See Loose Schema Matching for more information.
Default: false.
reject_on_materialized_type_error
- Boolean, applies only if
do_soft_schema_match_by_name
is true. Specifies what to do when loose schema matching is being used and a value cannot be coerced from the data to the target column type. A value of true means to reject the row; a value of false means to use NULL for the value or, for strings that are too long, truncate. See the table of type coercions for coercible type mappings.
Default: true.
Loose schema matching
By default, the ORC parser uses strong schema matching. This means that all columns in the ORC data must be loaded, and they must be loaded in the same order as in the data. However, there are times when you only want to pull certain columns, or you want to be able to accommodate future changes in the ORC schema. This is called loose (or soft) schema matching.
Use the do_soft_schema_match_by_name
parameter to enable loose schema matching. This setting has the following effects:
-
Columns in the data are matched to columns in the table by their names. Names must exactly match but are case-insensitive.
-
Columns that exist in the ORC data but are not part of the table definition are ignored.
-
Columns that exist in the table definition but not the ORC data are filled with NULL.
-
If the same case-insensitive column name occurs more than once in the ORC data, the parser uses the last one. (This situation can arise when using data written by tools that are case-sensitive.)
-
Column types do not need to exactly match, so long as the data type in the ORC file can be coerced to the type used by the table.
While the Parquet parser applies loose schema matching to both column and field names, the ORC parser applies it only to column names.
Data types
This parser can read all primitive types, UUIDs, and complex types.
If the total size of an array exceeds the size defined by the target table, the parser rejects the row.
The following mappings are supported with type coercion and loose schema matching:
ORC Physical Type      | Coercible to Vertica Data Type
-----------------------+----------------------------------
BOOLEAN                | BOOLEAN
BYTE, SHORT, INT, LONG | INT
FLOAT, DOUBLE          | FLOAT
DECIMAL                | NUMERIC
DATE                   | DATE
TIMESTAMP              | TIMESTAMP, TIMESTAMPTZ
STRING, CHAR, VARCHAR  | CHAR, VARCHAR, LONG VARCHAR
BINARY                 | BINARY, VARBINARY, LONG VARBINARY
Examples
The ORC clause does not use the PARSER option:
=> CREATE EXTERNAL TABLE orders (...)
AS COPY FROM 's3://DataLake/orders.orc' ORC;
You can read a map column as an array of rows, as in the following example:
=> CREATE EXTERNAL TABLE orders
(orderkey INT,
custkey INT,
prods ARRAY[ROW(key VARCHAR(10), value DECIMAL(12,2))],
orderdate DATE
) AS COPY FROM '...' ORC;
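To load only some of the columns from an ORC file, or to tolerate extra columns in the data, enable loose schema matching. The following sketch assumes a hypothetical summary table whose column names match names in the ORC file:
=> CREATE EXTERNAL TABLE orders_summary (orderkey INT, orderdate DATE)
    AS COPY FROM 's3://DataLake/orders.orc'
    ORC(do_soft_schema_match_by_name='true');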
8.3.10 - PARQUET
Use the PARQUET parser with the COPY FROM statement to load data in the Parquet format.
Use the PARQUET
parser with the COPY statement to load data in the Parquet format. When loading data into Vertica you can read all primitive types, UUIDs, and complex types.
By default, the Parquet parser uses strong schema matching, meaning that columns in the data must exactly match the columns in the table using the data. You can optionally use Loose Schema Matching.
When loading Parquet data, Vertica caches the Parquet metadata to improve efficiency. This cache uses local TEMP storage and is not used if TEMP is remote. See the ParquetMetadataCacheSizeMB configuration parameter to change the size of the cache.
This parser does not support apportioned load or cooperative parse. However, it enables parallel loading by dividing loads into individual sources, such as row groups or stripes, which are processed concurrently across multiple Vertica nodes using multi-threading. The number of threads is automatically determined based on the available memory in the resource pool.
Syntax
PARQUET ( [ parameter=value[,...] ] )
Parameters
All parameters are optional.
hive_partition_cols
- Comma-separated list of columns that are partition columns in the data.
Deprecated
Instead, use COPY PARTITION COLUMNS. See
Partitioned data. If you use both this parameter and PARTITION COLUMNS, COPY ignores the parameter.
allow_no_match
- Boolean. Whether to accept a path containing a glob with no matching files and report zero rows in query results. If this parameter is not set, Vertica returns an error if the path in the FROM clause does not match at least one file.
allow_long_varbinary_match_complex_type
- Boolean. Whether to enable flexible column types (see Flexible complex types). If true, the Parquet parser allows a complex type in the data to match a table column defined as LONG VARBINARY. If false, the Parquet parser requires strong typing of complex types. With the parameter set you can still use strong typing. Set this parameter to false if you want use of flexible columns to be treated as an error.
do_soft_schema_match_by_name
- Whether to enable loose schema matching (true) instead of the strict matching based on column order in the table definition and parquet file (false, default). See Loose Schema Matching for more information.
Default: false.
reject_on_materialized_type_error
- Boolean, applies only if
do_soft_schema_match_by_name
is true. Specifies what to do when loose schema matching is being used and a value cannot be coerced from the data to the target column type. A value of true means to reject the row; a value of false means to use NULL for the value or, for strings that are too long, truncate. See the table of type coercions for coercible type mappings.
Default: true.
Loose schema matching
By default, the Parquet parser uses strong schema matching. This means that all columns and struct fields in the Parquet data must be loaded, and they must be loaded in the same order as in the data. However, there are times when you only want to pull certain columns or fields, or you want to be able to accommodate future changes in the Parquet schema. This is called loose (or soft) schema matching.
Use the do_soft_schema_match_by_name
parameter to enable loose schema matching. This setting has the following effects:
-
Columns or struct fields in the data are matched to columns or fields in the table by their names. Names must exactly match but are case-insensitive.
-
Columns or fields that exist in the Parquet data but are not part of the table definition are ignored.
-
Columns or fields that exist in the table definition but not the Parquet data are filled with NULL. The parser logs an UNMATCHED_TABLE_COLUMNS_PARQUETPARSER event in QUERY_EVENTS.
-
If the same case-insensitive column name occurs more than once in the Parquet data, the parser uses the last one. (This situation can arise when using data written by tools that are case-sensitive.)
-
Column and field types do not need to exactly match, so long as the data type in the Parquet file can be coerced to the type used by the table. If a type cannot be coerced, the parser logs a TYPE_MISMATCH_COLUMNS_PARQUETPARSER event in QUERY_EVENTS. If reject_on_materialized_type_error
is true then the parser rejects the row. If it is false, the parser uses NULL or, for string values that are too long, truncates the value.
Data types
The Parquet parser maps Parquet data types to Vertica data types as follows.
Parquet Logical Type  | Vertica Data Type
----------------------+-------------------
StringLogicalType     | VARCHAR
MapLogicalType        | ARRAY[ROW]
ListLogicalType       | ARRAY/SET
IntLogicalType        | INT/NUMERIC
DecimalLogicalType    | NUMERIC
DateLogicalType       | DATE
TimeLogicalType       | TIME
TimestampLogicalType  | TIMESTAMP
UUIDLogicalType       | UUID
Decimal precision must be <= 153.
If the total size of an array exceeds the size defined by the target table, the parser rejects the row.
The following logical types are not supported:
- EnumLogicalType
- IntervalLogicalType (Intervals exported using EXPORT TO PARQUET do not use this logical type)
- JSONLogicalType
- BSONLogicalType
- UnknownLogicalType
The Parquet parser supports the following mappings of physical types:
Parquet Physical Type            | Vertica Data Type
---------------------------------+------------------------------
BOOLEAN                          | BOOLEAN
INT32/INT64                      | INT
INT64 (a number of microseconds) | INTERVAL
INT96                            | Supported only for TIMESTAMP
FLOAT                            | DOUBLE
DOUBLE                           | DOUBLE
BYTE_ARRAY                       | VARBINARY
FIXED_LEN_BYTE_ARRAY             | BINARY
The following mappings are supported with type coercion and loose schema matching.
Parquet Physical Type                                            | Coercible to Vertica Data Type
-----------------------------------------------------------------+--------------------------------
BOOLEAN                                                          | BOOLEAN
INT32, INT64, BOOLEAN                                            | INT
FLOAT, DOUBLE                                                    | DOUBLE
INT32, INT96                                                     | DATE
INT64, INT96                                                     | TIMESTAMP, TIMESTAMPTZ
INT64; if precision > 0: INT32, BYTE_ARRAY, FIXED_LEN_BYTE_ARRAY | NUMERIC
BYTE_ARRAY                                                       | CHAR, VARCHAR, LONG VARCHAR, BINARY, VARBINARY, LONG VARBINARY
FIXED_LEN_BYTE_ARRAY                                             | UUID
Vertica supports only 3-level-encoded arrays, not 2-level-encoded.
Examples
The PARQUET clause does not use the PARSER option:
=> COPY sales FROM 's3://DataLake/sales.parquet' PARQUET;
In the following example, the data directory contains no files:
=> CREATE EXTERNAL TABLE customers (...)
AS COPY FROM 'webhdfs:///data/*.parquet' PARQUET;
=> SELECT COUNT(*) FROM customers;
ERROR 7869: No files match when expanding glob: [webhdfs:///data/*.parquet]
To read zero rows instead of producing an error, use the allow_no_match
parameter:
=> CREATE EXTERNAL TABLE customers (...)
AS COPY FROM 'webhdfs:///data/*.parquet'
PARQUET(allow_no_match='true');
=> SELECT COUNT(*) FROM customers;
count
-------
0
(1 row)
To allow reading a complex type (menu, in this example) as a flexible column type, use the allow_long_varbinary_match_complex_type
parameter:
=> CREATE EXTERNAL TABLE restaurants
(name VARCHAR, cuisine VARCHAR, location_city ARRAY[VARCHAR], menu LONG VARBINARY)
AS COPY FROM '/data/rest*.parquet'
PARQUET(allow_long_varbinary_match_complex_type='True');
To read only some columns from the restaurant data, use loose schema matching:
=> CREATE EXTERNAL TABLE restaurants(name VARCHAR, cuisine VARCHAR)
AS COPY FROM '/data/rest*.parquet'
PARQUET(allow_long_varbinary_match_complex_type='True',
do_soft_schema_match_by_name='True');
=> SELECT * from restaurants;
name | cuisine
-------------------+----------
Bob's pizzeria | Italian
Bakersfield Tacos | Mexican
(2 rows)
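As with ORC, you can read a Parquet map column as an array of rows whose fields are named key and value. The table definition below is a hypothetical sketch:
=> CREATE EXTERNAL TABLE orders
    (orderkey INT,
     prods ARRAY[ROW(key VARCHAR(10), value DECIMAL(12,2))]
    ) AS COPY FROM 's3://DataLake/orders.parquet' PARQUET;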
8.4 - Examples
For additional COPY examples, see the reference pages for specific parsers, including: DELIMITED (Parser), ORC (Parser), PARQUET (Parser), FJSONPARSER (Parser), and FAVROPARSER (Parser).
For additional COPY examples, see the reference pages for specific parsers, including: DELIMITED, ORC, PARQUET, FJSONPARSER, and FAVROPARSER.
Specifying string options
Use COPY with FORMAT, DELIMITER, NULL, and ENCLOSED BY options:
=> COPY public.customer_dimension (customer_since FORMAT 'YYYY')
FROM STDIN
DELIMITER ','
NULL AS 'null'
ENCLOSED BY '"';
This example sets and references a vsql
variable for the input file:
=> \set input_file ../myCopyFromLocal/large_table.gzip
=> COPY store.store_dimension
FROM :input_file
DELIMITER '|'
NULL ''
RECORD TERMINATOR E'\f';
Including multiple source files
Create a table and then copy multiple source files to it:
=> CREATE TABLE sampletab (a int);
CREATE TABLE
=> COPY sampletab FROM '/home/dbadmin/one.dat', '/home/dbadmin/two.dat';
Rows Loaded
-------------
2
(1 row)
Use wildcards to indicate a group of files:
=> COPY myTable FROM 'webhdfs:///mydirectory/ofmanyfiles/*.dat';
Wildcards can include regular expressions:
=> COPY myTable FROM 'webhdfs:///mydirectory/*_[0-9]';
Specify multiple paths in a single COPY statement:
=> COPY myTable FROM 'webhdfs:///data/sales/01/*.dat', 'webhdfs:///data/sales/02/*.dat',
'webhdfs:///data/sales/historical.dat';
Distributing a load
Load data that is shared across all nodes. Vertica distributes the load across all nodes, if possible:
=> COPY sampletab FROM '/data/file.dat' ON ANY NODE;
Load data from two files. Because the first load file does not specify nodes (or ON ANY NODE), the initiator performs the load. Loading the second file is distributed across all nodes:
=> COPY sampletab FROM '/data/file1.dat', '/data/file2.dat' ON ANY NODE;
Specify different nodes for each load file:
=> COPY sampletab FROM '/data/file1.dat' ON (v_vmart_node0001, v_vmart_node0002),
'/data/file2.dat' ON (v_vmart_node0003, v_vmart_node0004);
Loading data from shared storage
To load data from shared storage, use URLs in the corresponding schemes:
- HDFS: [[s]web]hdfs://[nameservice]/path
- S3: s3://bucket/path
- Google Cloud: gs://bucket/path
- Azure: azb://account/container/path
Note
Loads from HDFS, S3, GCS, and Azure default to ON ANY NODE; you do not need to specify it.
Load a file stored in HDFS using the default name node or name service:
=> COPY t FROM 'webhdfs:///opt/data/file1.dat';
Load data from a particular HDFS name service (testNS). You specify a name service if your database is configured to read from more than one HDFS cluster:
=> COPY t FROM 'webhdfs://testNS/opt/data/file2.csv';
Load data from an S3 bucket:
=> COPY t FROM 's3://AWS_DataLake/*' ORC;
Partitioned data
Data files can be partitioned using the directory structure, such as:
path/created=2016-11-01/region=northeast/*
path/created=2016-11-01/region=central/*
path/created=2016-11-01/region=southeast/*
path/created=2016-11-01/...
path/created=2016-11-02/region=northeast/*
path/created=2016-11-02/region=central/*
path/created=2016-11-02/region=southeast/*
path/created=2016-11-02/...
path/created=2016-11-03/...
path/...
Load partition columns using the PARTITION COLUMNS option:
=> CREATE EXTERNAL TABLE records (id int, name varchar(50), created date, region varchar(50))
AS COPY FROM 'webhdfs:///path/*/*/*'
PARTITION COLUMNS created, region;
Using filler columns
In the following example, the table has columns for first name, last name, and full name, but the data being loaded contains columns for first, middle, and last names. The COPY statement reads all of the source data but only loads the source columns for first and last names. It constructs the data for the full name by concatenating each of the source data columns, including the middle name. The middle name is read as a FILLER column so it can be used in the concatenation, but is ignored otherwise. (There is no table column for middle name.)
=> CREATE TABLE names(first VARCHAR(20), last VARCHAR(20), full VARCHAR(60));
CREATE TABLE
=> COPY names(first,
middle FILLER VARCHAR(20),
last,
full AS first||' '||middle||' '||last)
FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> Marc|Gregory|Smith
>> Sue|Lucia|Temp
>> Jon|Pete|Hamilton
>> \.
=> SELECT * from names;
first | last | full
-------+----------+--------------------
Jon | Hamilton | Jon Pete Hamilton
Marc | Smith | Marc Gregory Smith
Sue | Temp | Sue Lucia Temp
(3 rows)
In the following example, JSON data contains an unencrypted password. The JSON data and the table definition use the same name for the field. Instead of loading the value directly, the COPY statement uses a filler column and a (hypothetical) function to encrypt the value. Both the table column (users.password) and the filler column ("*FILLER*".password) are fully qualified:
=> CREATE TABLE users(user VARCHAR(32), password VARCHAR(32));
=> COPY users(user, password FILLER VARCHAR(32),
users.password AS encrypt("*FILLER*".password))
FROM ... PARSER FJSONPARSER();
Setting defaults for null values
When creating a table, you can specify a default value for a column. When loading data, Vertica uses the default if the column value is not present. However, defaults do not override null values.
To intercept null values and apply defaults, you can first stage the load with COPY NO COMMIT and then update the values:
=> CREATE TABLE sales (id INT, shipped TIMESTAMP(6) DEFAULT now());
=> COPY sales FROM ... NO COMMIT;
=> UPDATE sales SET shipped = DEFAULT WHERE epoch IS NULL;
=> COMMIT;
All Vertica tables have an implicit epoch column, which records when the data was committed. For data that is not yet committed, this value is null. You can use this column to identify the just-loaded data and update it before committing.
Loading data into a flex table
Create a Flex table and copy JSON data into it:
=> CREATE FLEX TABLE darkdata();
CREATE TABLE
=> COPY darkdata FROM '/myTest/Flexible/DATA/tweets_12.json' PARSER FJSONPARSER();
Rows Loaded
-------------
12
(1 row)
Using named pipes
COPY supports named pipes that follow the same naming conventions as file names on the given file system. Permissions are open, write, and close.
Create a named pipe and set two vsql variables:
=> \! mkfifo pipe1
=> \set dir `pwd`/
=> \set file '''':dir'pipe1'''
Copy an uncompressed file from the named pipe:
=> \! cat pf1.dat > pipe1 &
=> COPY large_tbl FROM :file delimiter '|';
=> SELECT * FROM large_tbl;
=> COMMIT;
Loading compressed data
Copy a GZIP file from a named pipe and uncompress it:
=> \! gzip pf1.dat
=> \! cat pf1.dat.gz > pipe1 &
=> COPY large_tbl FROM :file ON site01 GZIP delimiter '|';
=> SELECT * FROM large_tbl;
=> COMMIT;
=> \! gunzip pf1.dat.gz
9 - COPY FROM VERTICA
Imports data from another Vertica database.
Imports data from another Vertica database. COPY FROM VERTICA is similar to COPY, but supports only a subset of its parameters.
Important
The source database must be no more than one major release behind the target database.
Syntax
COPY [[{database | namespace }.]target-schema.]target-table
[( target-columns )]
FROM VERTICA {source-database | source-namespace }.[source-schema.]source-table
[( source-columns )]
[STREAM NAME 'stream name']
[NO COMMIT]
Arguments
{ database | namespace }
- Name of the database or namespace that contains table:
  - Database name: If specified, it must be the current database.
  - Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
target-table
- The target table for the imported data. Vertica loads the data into all projections that include columns from the schema table.
target-columns
- A comma-delimited list of columns in target-table to store the copied data. See Mapping Between Target and Source Columns below.
You cannot use FILLER columns or columns of complex types, except native arrays, as part of the column definition.
{ source-database | source-namespace }
- The source of the data to import. Namespaces are available only in Eon Mode.
A connection to the source database must already exist in the current session before starting the copy operation; otherwise Vertica returns an error. For details, see CONNECT TO VERTICA.
[source-schema.]source-table
- The table that is the source of the imported data. If schema is any schema other than public or if you specified a source-namespace, you must supply the schema name.
source-columns
- A comma-delimited list of the columns in the source table to import. If omitted, all columns are exported. Columns cannot be of complex types. See Mapping Between Target and Source Columns below.
STREAM NAME
- A COPY load stream identifier. Using a stream name helps to quickly identify a particular load. The STREAM NAME value that you specify in the load statement appears in the stream column of the LOAD_STREAMS system table.
NO COMMIT
- Prevents COPY from committing its transaction automatically when it finishes copying data. For details, see Using transactions to stage a load.
Privileges
Mapping between target and source columns
If you copy all table data from one database to another, COPY FROM VERTICA can omit specifying column lists if column definitions in both tables comply with the following conditions:
-
Same number of columns
-
Identical column names
-
Same sequence of columns
-
Matching or compatible column data types
-
No complex data types (ARRAY, SET, or ROW), except for native arrays
If any of these conditions is not true, the COPY FROM VERTICA statement must include column lists that explicitly map target and source columns to each other, as follows:
-
Contain the same number of columns.
-
List source and target columns in the same order.
-
Pair columns with the same (or compatible) data types.
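For example, the following sketch maps two source columns to two target columns by position. The table and column names here are hypothetical, and a connection to the source database must already exist in the session:
=> COPY public.sales_summary (emp_id, total_sales)
   FROM VERTICA vmart.store_sales_fact (employee_key, sales_dollar_amount);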
Node failure during COPY
See Handling node failure during copy/export.
Examples
The following example copies the contents of an entire table from the vmart
database to an identically-defined table in the current database:
=> CONNECT TO VERTICA vmart USER dbadmin PASSWORD 'myPassword' ON 'VertTest01',5433;
CONNECT
=> COPY customer_dimension FROM VERTICA vmart.customer_dimension;
Rows Loaded
-------------
500000
(1 row)
=> DISCONNECT vmart;
DISCONNECT
For more examples, see Copying data from another Vertica database.
See also
EXPORT TO VERTICA
10 - COPY LOCAL
Using the COPY statement with its LOCAL option lets you load a data file on a client system, rather than on a cluster host.
Using the COPY statement with its LOCAL option lets you load data files (up to 4,294,967,295 files) on a client system, rather than on a cluster host. COPY LOCAL supports the STDIN and 'pathToData'
parameters, but not the [ON nodename] clause. COPY LOCAL does not support multiple file batches in NATIVE or NATIVE VARCHAR formats. COPY LOCAL does not support reading ORC or Parquet files; use ON NODE instead. COPY LOCAL does not support CURRENT_LOAD_SOURCE().
The COPY LOCAL option is platform-independent. The statement works in the same way across all supported Vertica platforms and drivers. For more details about supported drivers, see Client drivers.
COPY LOCAL does not automatically create files detailing exceptions and rejections, even if exceptions occur.
COPY LOCAL is subject to the following restrictions when used with vsql and the client libraries:
- COPY LOCAL must be the first statement in any multi-statement query.
- COPY LOCAL cannot be used multiple times in the same query.
Note
On Windows clients, the path you supply for the COPY LOCAL file is limited to 216 characters due to limitations in the Windows API.
Privileges
User must have INSERT privilege on the table and USAGE privilege on the schema.
How COPY LOCAL works
COPY LOCAL loads data in a platform-neutral way. The COPY LOCAL statement loads all files from a local client system to the Vertica host, where the server processes the files. You can copy files in various formats: uncompressed, compressed, fixed-width format, in bzip or gzip format, or specified as a bash glob. Files of a single format (such as all bzip, or gzip) can be comma-separated in the list of input files. You can also use any of the applicable COPY statement options (as long as the data format supports the option). For instance, you can define a specific delimiter character, or how to handle NULLs, and so forth.
For more information about using the COPY LOCAL option to load data, see COPY for syntactical descriptions, and Specifying where to load data from for detailed examples.
The Vertica host uncompresses and processes the files as necessary, regardless of file format or the client platform from which you load the files. Once the server has the copied files, Vertica maintains performance by distributing file parsing tasks, such as encoding, compressing, and uncompressing, across nodes.
Viewing copy local operations in a query plan
When you use the COPY LOCAL option, the GraphViz query plan includes a label for Load-Client-File
, rather than Load-File
. Following is a section from a sample query plan:
-----------------------------------------------
PLAN: BASE BULKLOAD PLAN (GraphViz Format)
-----------------------------------------------
digraph G {
graph [rankdir=BT, label = " BASE BULKLOAD PLAN \nAll Nodes Vector:
\n\n node[0]=initiator (initiator) Up\n", labelloc=t, labeljust=l ordering=out]
.
.
.
10[label = "Load-Client-File(/tmp/diff) \nOutBlk=[UncTuple]",
color = "green", shape = "ellipse"];
Examples
The following example shows a load from a local file.
$ cat > t.dat
12
17
9
^C
=> CREATE TABLE numbers (value INT);
CREATE TABLE
=> COPY numbers FROM LOCAL 't.dat';
Rows Loaded
-------------
3
(1 row)
=> SELECT * FROM numbers;
value
-------
12
17
9
(3 rows)
11 - CREATE statements
CREATE statements let you create new database objects such as tables and users.
CREATE statements let you create new database objects such as tables and users.
11.1 - CREATE ACCESS POLICY
Creates an access policy that filters access to table data to users and roles.
Creates an access policy that filters access to table data to users and roles. You can create access policies for table rows and columns. Vertica applies the access policy filters with each query and returns only the data that is permissible for the current user or role.
You cannot set access policies on columns of complex data types other than native arrays. If the table contains complex-type columns, you can still set row access policies and column access policies on other columns.
Syntax
CREATE ACCESS POLICY ON [[database.]schema.]table
{ FOR COLUMN column | FOR ROWS WHERE } expression [GRANT TRUSTED] { ENABLE | DISABLE }
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
table
- The table with the target column or rows.
FOR COLUMN
column
- The column on which to apply this access policy. The column can be a native array, but other complex types are not supported. (See Complex types.)
FOR ROWS WHERE
- The rows on which to apply this access policy.
expression
- A SQL expression that specifies conditions for accessing row or column data:
-
Row access policies limit access to specific rows in a table, as specified by the policy's WHERE expression. Only rows that satisfy this expression are fetched from the table. For details and sample usage, see Creating row access policies.
-
Column access policies limit access to specific table columns. The access policy expression can also specify how to render column data to specific users and roles. For details and sample usage, see Creating column access policies.
GRANT TRUSTED
Specifies that GRANT statements take precedence over the access policy in determining whether users can perform DML operations on the target table. If omitted, users can only modify table data if the access policy allows them to see the stored data in its original, unaltered state. For more information, see Access policies and DML operations.
Important
GRANT TRUSTED only affects DML operations and does not enable users to see data that the access policy would otherwise mask. Specifying this option may allow users with certain grants to update data that they cannot see.
ENABLE | DISABLE
- Whether to enable the access policy. You can enable and disable existing access policies with ALTER ACCESS POLICY.
Privileges
Non-superuser: Ownership of the table
Restrictions
The following limitations apply to access policies:
-
A column can have only one access policy.
-
Column access policies cannot be set on columns of complex types other than native arrays.
-
Column access policies cannot be set for materialized columns on flex tables. While it is possible to set an access policy for the __raw__
column, doing so restricts access to the whole table.
-
Row access policies are invalid on temporary tables and tables with aggregate projections.
-
Access policy expressions cannot contain:
-
If the query optimizer cannot replace a deterministic expression that involves only constants with their computed values, it blocks all DML operations such as INSERT.
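As a minimal sketch, the following statement creates a column access policy on a hypothetical customers table, masking the ssn column for users who do not have the (also hypothetical) administrator role enabled:
=> CREATE ACCESS POLICY ON public.customers FOR COLUMN ssn
   CASE WHEN ENABLED_ROLE('administrator') THEN ssn
        ELSE NULL
   END
   ENABLE;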
See also
11.2 - CREATE ARCHIVE
Eon Mode only
Creates an archive for in-database restore points.
After you create an archive, you can save restore points to it using the SAVE RESTORE POINT TO ARCHIVE statement. The archive stores restore points in descending chronological order based on the time at which the restore points were saved, where an index of 1 specifies the most recently saved restore point.
Syntax
CREATE ARCHIVE archive-name [ LIMIT max-restore-points ]
Arguments
archive-name
- Name of the archive to create, where
archive-name
conforms to conventions described in Identifiers.
max-restore-points
- Maximum number of restore points to store in the archive. If an archive contains
max-restore-points
restore points and a new restore point is created, the oldest restore point in the archive is deleted. The default value is 'unlimited'.
Privileges
Superuser
Examples
The following statement creates an archive named verticadb
that stores a maximum of three restore points:
=> CREATE ARCHIVE verticadb LIMIT 3;
CREATE ARCHIVE
To view all archives in the database, query the ARCHIVES system table:
=> SELECT * FROM ARCHIVES;
name | limit_count
-----------+-------------
verticadb | 3
(1 row)
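With the archive in place, you can save restore points to it using the SAVE RESTORE POINT TO ARCHIVE statement described above, for example (a minimal sketch):
=> SAVE RESTORE POINT TO ARCHIVE verticadb;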
See also
11.3 - CREATE AUTHENTICATION
Creates and enables an authentication record associated with users or roles.
Creates and enables an authentication record associated with users or roles. Authentication records are automatically enabled after creation.
Syntax
CREATE AUTHENTICATION auth-record-name
METHOD 'auth-method'
access-method
[ FALLTHROUGH ]
Parameters
Name |
Description |
auth-record-name |
Name of the authentication record, where auth-record-name conforms to conventions described in Identifiers. |
auth-method |
The authentication method, one of the following:
-
trust : Users can authenticate with a valid username (that is, without a password).
-
reject : Rejects the connection attempt.
-
hash : Users must provide a valid username and password. For details, see Hash authentication.
-
gss : Authorizes clients that connect to Vertica with an MIT Kerberos implementation. The Key Distribution Center (KDC) must support Kerberos 5 using the GSS-API. Non-MIT Kerberos implementations must use the GSS-API. For details, see Kerberos authentication.
-
ident : Authenticates the client against a username on an Ident server. For details, see Ident authentication.
-
ldap : Authenticates a client and their username and password with an LDAP or Active Directory server. For details, see LDAP authentication.
-
tls : Authenticates clients that provide a certificate with a Common Name (CN) that specifies a valid database username. Vertica must be configured for mutual mode TLS to use this method. For details, see TLS authentication
-
oauth : Authenticates a client with an access token. For details, see OAuth 2.0 authentication.
For details, see Supported Client Authentication Methods.
|
access-method |
The access method the client uses to connect, specified in one of the following ways:
-
LOCAL : Matches connection attempts made using local domain sockets.
-
HOST [ TLS | NO TLS ] ' host-ip-address ' : Matches connection attempts made using TCP/IP, where host-ip-address can be an IPv4 or IPv6 address. You can qualify HOST with one of the following options:
|
[ FALLTHROUGH ] |
Whether to enable fallthrough authentication for this record. To disable fallthrough, see ALTER AUTHENTICATION.
Fallthrough cannot be enabled for authentication records that use the following authentication methods:
|
Privileges
DBADMIN
Examples
See Creating authentication records.
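As a minimal sketch (the record names and address range are placeholders), the following statements create a hash-based record for local connections and an LDAP record for a range of client addresses:
=> CREATE AUTHENTICATION v_hash_local METHOD 'hash' LOCAL;
=> CREATE AUTHENTICATION v_ldap_net METHOD 'ldap' HOST '10.0.0.0/8';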
See also
11.4 - CREATE CA BUNDLE
Creates a certificate authority (CA) bundle.
Deprecated
CA bundles are only usable with certain deprecated parameters in
Kafka notifiers. You should prefer using
TLS configurations and the TLS CONFIGURATION parameter for notifiers instead.
Creates a certificate authority (CA) bundle. These contain root CA certificates.
Syntax
CREATE CA BUNDLE name [CERTIFICATES ca_cert[, ca_cert[, ...]]]
Parameters
name
- The name of the CA bundle.
ca_cert
- The name of the CA certificate. If no certificates are specified, the bundle will be empty.
Privileges
Ownership of the CA certificates in the CA bundle.
Examples
See Managing CA bundles.
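As a minimal sketch, assuming CA certificates named root_ca and intermediate_ca already exist:
=> CREATE CA BUNDLE ca_bundle CERTIFICATES root_ca, intermediate_ca;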
See also
11.5 - CREATE CERTIFICATE
Creates or imports a certificate, Certificate Authority (CA), or intermediate CA.
Creates or imports a certificate, Certificate Authority (CA), or intermediate CA. These certificates can be used with ALTER TLS CONFIGURATION to set up client-server TLS, LDAPLink TLS, LDAPAuth TLS, and internode TLS.
CREATE CERTIFICATE generates x509v3 certificates.
Syntax
CREATE [TEMP[ORARY]] [CA] CERTIFICATE certificate_name
{AS cert [KEY key_name]
| SUBJECT subject
[ SIGNED BY ca_cert ]
[ VALID FOR days ]
[ EXTENSIONS ext = val[,...] ]
[ KEY private_key ]}
Parameters
TEMPORARY
- Create with session scope. The key is stored in memory and is valid only for the current session.
CA
- Designates the certificate as a CA or intermediate certificate. If omitted, the operation creates a normal certificate.
certificate_name
- The name of the certificate.
AS
cert
- The imported certificate (string).
This parameter should include the entire chain of certificates, excluding the CA certificate.
KEY
key_name
- The name of the key.
This parameter only needs to be set for client/server certificates and CA certificates that you intend to sign other certificates with in Vertica. If your imported CA certificate will only be used for validating other certificates, you do not need to specify a key.
SUBJECT
subject
- The entity to issue the certificate to (string).
SIGNED BY
ca_cert
- The name of the CA that signed the certificate.
When adding a CA certificate, this parameter is optional. Specifying it will create an intermediate CA that cannot be used to sign other CA certificates.
When creating a certificate, this parameter is required.
VALID FOR
days
- The number of days that the certificate is valid.
EXTENSIONS
ext
=
val
- Strings specifying certificate extensions. For a full list of extensions, see the OpenSSL documentation.
KEY
private_key
- The name of the certificate's private key.
When importing a certificate, this parameter is required.
Privileges
Superuser
Default extensions
CREATE CERTIFICATE generates x509v3 certificates and includes several extensions by default. These differ based on the type of certificate you create:
CA Certificate:
-
'basicConstraints' = 'critical, CA:true'
-
'keyUsage' = 'critical, digitalSignature, keyCertSign'
-
'nsComment' = 'Vertica generated [CA] certificate'
-
'subjectKeyIdentifier' = 'hash'
Certificate:
-
'basicConstraints' = 'CA:false'
-
'keyUsage' = 'critical, digitalSignature, keyEncipherment'
Examples
See Generating TLS certificates and keys.
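As a minimal sketch, the following statement creates a self-signed root CA certificate, assuming a private key named SSCA_key has already been created; the subject fields are placeholders:
=> CREATE CA CERTIFICATE SSCA_cert
   SUBJECT '/C=US/ST=Massachusetts/L=Cambridge/O=ExampleCorp/OU=Vertica/CN=Vertica Root CA'
   VALID FOR 3650
   EXTENSIONS 'nsComment' = 'Self-signed root CA certificate'
   KEY SSCA_key;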
See also
11.6 - CREATE DATA LOADER
CREATE DATA LOADER creates an automatic data loader that executes a COPY statement when new data files appear in a specified location. The loader records which files have already been successfully loaded and skips them.
CREATE DATA LOADER creates an automatic data loader that executes a COPY statement when new data files appear in a specified path. The loader records which files have already been successfully loaded and skips them. These events are recorded in the DATA_LOADER_EVENTS system table.
You can execute a data loader in the following ways:
- Manually, using EXECUTE DATA LOADER.
- Automatically, on a schedule, for example from a scheduled stored procedure.
- Automatically, in response to file notifications from AWS SQS (see Triggers).
Executing the loader automatically commits the transaction.
The DATA_LOADERS system table shows all defined loaders. The DATA_LOADER_EVENTS system table records paths that were attempted and their outcomes. To prevent unbounded growth, records in DATA_LOADER_EVENTS are purged after a specified retention interval. If previously-loaded files are still in the source path after this purge, the data loader sees them as new files and loads them again.
Syntax
CREATE [ OR REPLACE ] DATA LOADER [ IF NOT EXISTS ]
[schema.]name
[ RETRY LIMIT { NONE | DEFAULT | limit } ]
[ RETENTION INTERVAL monitoring-retention ]
[ TRIGGERED BY integration-json ]
AS copy-statement
Arguments
OR REPLACE
- If a data loader with the same name in the same schema exists, replace it. The original data loader's monitoring table is dropped.
This option cannot be used with IF NOT EXISTS
.
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
This option cannot be used with OR REPLACE
.
schema
- Schema containing the data loader. The default schema is
public
.
name
- Name of the data loader.
RETRY LIMIT { NONE | DEFAULT |
limit
}
- Maximum number of times to retry a failing file. Each time the data loader is executed, it attempts to load all files that have not yet been successfully loaded, up to this per-file limit. If set to DEFAULT, at load time the loader uses the value of the DataLoaderDefaultRetryLimit configuration parameter.
Default: DEFAULT
RETENTION INTERVAL
monitoring-retention
- How long to keep records in the events table. DATA_LOADER_EVENTS records events for all data loaders, but each data loader has its own retention interval.
Default: 14 days
TRIGGERED BY
integration-json
- Information about an external input for the data loader, a JSON object with the following keys:
provider
(required): name of the provider; the only supported value is "AWS-SQS".
resourceUrl
(required): endpoint for the integration.
timeout
: number of seconds to wait between loads. The default is 10.
minFileNumPerLoad
: if the number of files waiting to be loaded reaches this threshold, the load begins immediately instead of waiting for the next scheduled load. The default is 10 files.
minTotalSizePerLoad
: if the total size of files waiting to be loaded reaches this threshold, the load begins immediately. The value is an integer with a unit of measure, one of K
, M
, G
, or T
. The default is 1G
.
If a data loader has a trigger, the loader is initially disabled. When you are ready to use it, use ALTER DATA LOADER with the ENABLE option. When the loader is enabled, Vertica executes it automatically when SQS produces notifications for new files. For details, see Triggers.
AS
copy-statement
- The COPY statement that the loader executes. The FROM clause typically uses a glob.
Privileges
Non-superuser: CREATE privileges on the schema.
Restrictions
- COPY NO COMMIT is not supported.
- Data loaders are executed with COPY ABORT ON ERROR.
- The COPY statement must specify file paths. You cannot use COPY FROM VERTICA.
Triggers (AWS SQS only)
If the data to be loaded is stored on AWS and you have configured an SQS queue, then instead of executing a data loader explicitly using EXECUTE DATA LOADER, you can define the loader to respond to S3 events. When files are added to the AWS bucket, a service running on AWS adds a message to the SQS queue. Vertica reads from this queue, executes the data loader, and, if the load was successful, removes the message from the queue. After the initial setup, this process is automatic.
To configure the data loader to read from SQS, use the TRIGGERED BY option with a JSON object following this template:
{
"provider" : "AWS-SQS",
"resourceUrl" : "https://sqs.region.amazonaws.com/account_number/queue_name"
}
You can set additional values to control timing. See the description of the TRIGGERED BY
syntax.
Use the following SQS parameters to authenticate to SQS:
SQSAuth
, to set an ID and key for all queue access.
SQSQueueCredentials
, to set per-queue access. Queue-specific values take precedence over SQSAuth
.
A data loader with an SQS trigger is initially disabled. Use ALTER DATA LOADER with the ENABLE option when you are ready to begin processing events. To pause execution, call ALTER DATA LOADER with the DISABLE option.
You can alter the trigger JSON using ALTER DATA LOADER. You must disable the loader before changing the trigger and then enable it again.
If a data loader has a trigger, you must disable it before dropping it.
File systems
The source path can be any shared file system or object store that all database nodes can access. To use an HDFS or Linux file system safely, you must prevent the loader from reading a partially-written file. One way to achieve this is to only execute the loader when files are not being written to the loader's path. Another way to achieve this is to write new files in a temporary location and move them to the loader's path only when they are complete.
Examples
The following data loader can be executed manually or by using a scheduled stored procedure:
=> CREATE DATA LOADER s.dl1 RETRY LIMIT NONE
AS COPY s.local FROM 's3://b/data/*.dat';
=> SELECT name, schemaname, copystmt, retrylimit FROM data_loaders;
name | schemaname | copystmt | retrylimit
------+------------+---------------------------------------+------------
dl1 | s | COPY s.local FROM 's3://b/data/*.dat' | -1
(1 row)
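For example, to run the loader defined above on demand, execute it explicitly:
=> EXECUTE DATA LOADER s.dl1;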
The following data loader reads from an SQS queue on AWS and loads new files automatically at least every 10 seconds, or immediately when five files or 100MB of data accumulate:
=> CREATE DATA LOADER s.dl2
TRIGGERED BY '{ "provider" : "AWS-SQS",
"resourceUrl" : "https://sqs.us-east-1.amazonaws.com/accountID/queueName",
"minFileNumPerLoad" : "5",
"minTotalSizePerLoad" : "100M" }'
AS COPY s.local FROM 's3://b/data/*.dat';
See Automatic load for an extended example.
See also
11.7 - CREATE DIRECTED QUERY
Saves an association between an input query and a query that is annotated with optimizer hints.
Saves an association between an input query and a query that is annotated with optimizer hints.
CREATE DIRECTED QUERY has two variants:
- CREATE DIRECTED QUERY OPTIMIZER directs the query optimizer to generate annotated SQL from the specified input query. The annotated query contains hints that the optimizer can use to recreate its current query plan for that input query.
- CREATE DIRECTED QUERY CUSTOM specifies an annotated query supplied by the user. Vertica associates the annotated query with the input query specified by the last SAVE QUERY statement.
In both cases, Vertica associates the annotated query and input query, and registers their association in the system table DIRECTED_QUERIES under query_name
.
Syntax
Optimizer-generated
CREATE DIRECTED QUERY OPT[IMIZER] query-name [COMMENT 'comments'] input-query
User-defined (custom)
CREATE DIRECTED QUERY CUSTOM query-name [COMMENT 'comments'] annotated-query
Parameters
OPT[IMIZER]
- Directs the query optimizer to generate an annotated query from
input-query
, and associate both in the new directed query.
CUSTOM
- Specifies to associate
annotated-query
with the query previously specified by SAVE QUERY.
query-name
- A unique identifier for the directed query, a string that conforms to conventions described in Identifiers.
COMMENT '
comments
'
- Comment about the directed query, up to 128 characters. Comments can be useful for future reference—for example, to explain why a given directed query was created.
If you omit this argument, Vertica inserts one of the following comments:
input-query
- The input query to associate with an optimizer-generated directed query. The input query supports only one optimizer hint,
:v
(alias IGNORECONST).
annotated-query
- A query with embedded optimizer hints to associate with the input query most recently saved with SAVE QUERY.
Privileges
Superuser
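The following sketch shows the optimizer-generated variant. The query name is arbitrary, and the table and columns are from a hypothetical employee_dimension table:
=> CREATE DIRECTED QUERY OPTIMIZER findBostonCashiers
   SELECT employee_first_name, employee_last_name
   FROM public.employee_dimension
   WHERE employee_city = 'Boston' AND job_title = 'Cashier'
   ORDER BY employee_last_name;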
See also
Creating directed queries
11.8 - CREATE EXTERNAL TABLE AS COPY
CREATE EXTERNAL TABLE AS COPY creates a table definition for data external to your Vertica database.
CREATE EXTERNAL TABLE AS COPY creates a table definition for data external to your Vertica database. This statement is a combination of the CREATE TABLE and COPY statements, supporting a subset of each statement's parameters.
Canceling a CREATE EXTERNAL TABLE AS COPY statement can cause unpredictable results. If you need to make a change, allow the statement to complete, drop the table, and then retry.
You can use ALTER TABLE to change the data types of columns instead of dropping and recreating the table.
You can use CREATE EXTERNAL TABLE AS COPY with any types except types from the Place package.
Vertica also supports external tables backed by Iceberg data. See CREATE EXTERNAL TABLE ICEBERG.
Note
Vertica does not create superprojections for external tables, since external tables are not stored in the database.
Syntax
CREATE EXTERNAL TABLE [ IF NOT EXISTS ] [[{namespace. | database. }]schema.]table-name
( column-definition[,...] )
[{INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES]
AS COPY
[ ( { column-as-expression | column }
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char' ]
[ ENFORCELENGTH ]
[ ESCAPE [ AS ] 'char' | NO ESCAPE ]
[ FILLER datatype ]
[ FORMAT 'format' ]
[ NULL [ AS ] 'string' ]
[ TRIM 'byte' ]
[,...] ) ]
[ COLUMN OPTION ( column
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char' ]
[ ENFORCELENGTH ]
[ ESCAPE [ AS ] 'char' | NO ESCAPE ]
[ FORMAT 'format' ]
[ NULL [ AS ] 'string' ]
[ TRIM 'byte' ]
[,...] ) ]
FROM {
{ 'path-to-data'
[ ON { nodename | (nodeset) | ANY NODE | EACH NODE } ] [ compression ] }[,...]
[ PARTITION COLUMNS column[,...] ]
| LOCAL 'path-to-data' [ compression ] [,...]
| VERTICA {source-namespace. | source-database. }[source-schema.]source-table[( source-column[,...] ) ]
}
[ NATIVE
| FIXEDWIDTH COLSIZES {( integer )[,...]}
| NATIVE VARCHAR
| ORC
| PARQUET
]
[ ABORT ON ERROR ]
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char' ]
[ ENFORCELENGTH ]
[ ERROR TOLERANCE ]
[ ESCAPE AS 'char' | NO ESCAPE ]
[ EXCEPTIONS 'path' [ ON nodename ] [,...] ]
[ [ WITH ] FILTER filter( [ arg=value[,...] ] ) ]
[ NULL [ AS ] 'string' ]
[ [ WITH ] PARSER parser([arg=value [,...] ]) ]
[ RECORD TERMINATOR 'string' ]
[ REJECTED DATA 'path' [ ON nodename ] [,...] ]
[ REJECTMAX integer ]
[ SKIP integer ]
[ SKIP BYTES integer ]
[ TRAILING NULLCOLS ]
[ TRIM 'byte' ]
Parameters
For all supported parameters, see the CREATE TABLE and COPY statements. For information on using this statement with UDLs, see User-defined load (UDL).
For additional guidance on using COPY parameters, see Specifying where to load data from.
Privileges
Superuser, or non-superuser with the following privileges:
Partitioned data
Data can be partitioned using its directory structure, and Vertica can take advantage of that partitioning to improve query performance for external tables. For details, see Partitioned data.
If you see unexpected results when reading data, verify that globs in your file paths correctly align with the partition structure. See Troubleshooting external tables.
ORC and Parquet data
When using the ORC and Parquet formats, Vertica supports some additional options in the COPY
statement and data structures for columns. See ORC and PARQUET.
Examples
The following example defines an external table for delimited data stored in HDFS:
=> CREATE EXTERNAL TABLE sales (itemID INT, date DATE, price FLOAT)
AS COPY FROM 'hdfs:///data/ext1.csv' DELIMITER ',';
The following example uses data in the ORC format that is stored in S3. The data has two partition columns. For more information about partitions, see Partitioned data.
=> CREATE EXTERNAL TABLE records (id int, name varchar(50), created date, region varchar(50))
AS COPY FROM 's3://datalake/sales/*/*/*'
PARTITION COLUMNS created, region;
The following example shows how you can read from all Parquet files in a local directory, with no partitions and no globs:
=> CREATE EXTERNAL TABLE sales (itemID INT, date DATE, price FLOAT)
AS COPY FROM '/data/sales/*.parquet' PARQUET;
The following example creates an external table for data containing arrays:
=> CREATE EXTERNAL TABLE cust (cust_custkey int,
cust_custname varchar(50),
cust_custstaddress ARRAY[varchar(100)],
cust_custaddressln2 ARRAY[varchar(100)],
cust_custcity ARRAY[varchar(50)],
cust_custstate ARRAY[char(2)],
cust_custzip ARRAY[int],
cust_email varchar(50), cust_phone varchar(30))
AS COPY FROM 'webhdfs://data/*.parquet' PARQUET;
To allow users without superuser access to use external tables with data on the local file system, S3, or GCS, create a location for 'user' usage and grant access to it. This example shows granting access to a user named Bob to any external table whose data is located under /tmp (including in subdirectories to any depth):
=> CREATE LOCATION '/tmp' ALL NODES USAGE 'user';
=> GRANT ALL ON LOCATION '/tmp' to Bob;
The following example shows CREATE EXTERNAL TABLE using a user-defined source:
=> CREATE SOURCE curl AS LANGUAGE 'C++' NAME 'CurlSourceFactory' LIBRARY curllib;
=> CREATE EXTERNAL TABLE curl_table1 as COPY SOURCE CurlSourceFactory;
See also
Creating external tables
11.9 - CREATE EXTERNAL TABLE ICEBERG
Creates an external table for data stored by Apache Iceberg.
Creates an external table for data stored by Apache Iceberg. An Iceberg table consists of data files and metadata describing the schema. Unlike other external tables, an Iceberg external table need not specify column definitions (DDL). The information is read from Iceberg metadata at query time. For certain data types you can adjust column definitions, for example to specify VARCHAR sizes.
A single Iceberg table can have more than one metadata file, each describing a different version of the table. You can create an external table using either the base location of the table or a specific metadata file.
If a metadata file specifies columns or struct fields that are not present in the data, Vertica treats the missing values as NULL.
All Iceberg files, both data and metadata, must be accessible to all database nodes.
Iceberg can store data in several file formats. Vertica can read Iceberg data in the Parquet format only and with either version 1 or version 2 metadata. If the Parquet files encode field IDs, Vertica uses them directly. Otherwise, Vertica uses Iceberg fallback name mapping to read field IDs from the metadata (the schema.name-mapping.default
property). If the metadata does not contain a mapping for a field, Vertica sets the column values to NULL.
Syntax
CREATE EXTERNAL TABLE [[{namespace. | database. }]schema.]table
STORED BY ICEBERG LOCATION { path | metadata-file }
[COLUMN TYPES (column-name type[,...])] ;
Arguments
{ database | namespace }
- Name of the database or namespace that contains table:
  - Database name: If specified, it must be the current database.
  - Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table
- Name of the table to create, which must be unique among names of all sequences, tables, projections, views, and models within the schema.
STORED BY ICEBERG LOCATION { path | metadata-file }
- Location of Iceberg data, one of:
  - Base location of an Iceberg File System table. Vertica uses the latest metadata file.
  - Path to a metadata file with a name ending in .metadata.json. Vertica uses this metadata file even if it is not the latest.
On S3, an Iceberg table is not a File System table but a metastore. This means you cannot specify a base location on S3. You must specify the full path to a metadata file.
If the paths embedded in the Iceberg data are not accessible to Vertica, use the IcebergPathMapping configuration parameter to provide mappings. See Path Prefixes.
COLUMN TYPES ( column-name type [,...])
- Column names and types for VARCHAR, VARBINARY, or ARRAY columns or ROW fields only. You can specify types only to set lengths or array bounds, not type coercion. See Data Types. If you do not specify a type, the table uses the Vertica defaults.
You cannot specify any other column properties, such as defaults or constraints.
Columns that are specified but not found in the Iceberg schema are ignored.
Privileges
Superuser, or non-superuser with the following privileges:
Path prefixes
Iceberg tables store file paths in the metadata as absolute URIs (host and port). Sometimes this URI differs from the URI that Vertica can use to access the data. This can particularly be an issue for files stored on HDFS, where the metadata can use a different URI scheme and port number than what Vertica expects.
To change the URIs, set the IcebergPathMapping configuration parameter. The value is a list of one or more pairs of Iceberg URI prefixes and corresponding Vertica prefixes:
=> ALTER SESSION SET IcebergPathMapping=
'{"hdfs://node-196.example.com:9000":"webhdfs://node-196.example.com:9870"}';
Include only the URI prefix (up through the port), not complete paths. If IcebergPathMapping contains more than one mapping that could apply, Vertica uses the longest entry that matches.
You can set IcebergPathMapping at the database, session, or user level.
Data types
The following table shows the mappings of Iceberg data types to Vertica data types. For types that allow it, you can use the COLUMN TYPES clause to override these defaults.
Iceberg Type | Vertica Type | Allows Override?
boolean | BOOLEAN | No
int (32-bit) | INT | No
long (64-bit) | INT | No
float (32-bit) | FLOAT | No
double (64-bit) | FLOAT | No
decimal(precision, scale) | NUMERIC with same precision and scale | No
date | DATE | No
time | TIME | No
timestamp | TIMESTAMP | No
timestamptz | TIMESTAMP WITH TIMEZONE | No
string | VARCHAR(80) | VARCHAR or LONG VARCHAR with custom length
uuid | UUID | No
fixed(length) | BINARY(length) if length <= 65000, LONG VARBINARY(length) otherwise | No
binary (variable length) | VARBINARY(80) | VARBINARY or LONG VARBINARY with custom length
struct | ROW | No, but you can override individual fields if their types permit
list | ARRAY (default bound) | ARRAY with custom bound
map<key, value> | ARRAY[ROW(key, value)] (default bound) | ARRAY[ROW(key, value)] with custom bound
Restrictions
The following restrictions apply to external tables backed by Iceberg:
-
Data files must be in Parquet format.
-
Metadata files must use version 1 or version 2 format.
-
Iceberg column defaults are not supported.
-
Iceberg delete files are not supported.
-
Malformed data is an error that aborts the load. You cannot reject bad data and continue.
-
VARCHAR values are not truncated. If a string is too long, it is treated as an error.
The following restrictions apply to queries of Iceberg tables:
-
You cannot use a column in an Iceberg table as a DEFAULT or SET USING option in another table. The following example is an error:
=> CREATE TABLE t(
id INT DEFAULT (SELECT COUNT(*) FROM iceberg_table));
ERROR 0: Default and set using expressions cannot refer to external iceberg tables
-
Errors in Iceberg data or metadata, such as missing files or type mismatches, can manifest as query errors such as the following:
=> SELECT * FROM iceberg_table;
ERROR 0: Problem reading metadata for table iceberg_table. Detail: Could not determine type of column a. User specified type: int. Iceberg type: boolean
Examples
The following example creates a table based on the Parquet data files with no overrides:
=> CREATE EXTERNAL TABLE sales
STORED BY ICEBERG LOCATION 's3://sales/*';
In the following example, the data uses a struct for the shipping address, with fields for street address (string), city (string), and zip code (integer). The following table definition overrides the default VARCHAR lengths. Note that the zip code is not included in COLUMN TYPES overrides. The ROW column contains only the fields being changed, but all fields including the zip code are part of the table definition and are included in query results:
=> CREATE EXTERNAL TABLE sales
STORED BY ICEBERG LOCATION 's3://sales/*'
COLUMN TYPES (address ROW(street VARCHAR(50), city VARCHAR(50)));
See also
EXTERNAL_DATA_FILES_PRUNED event in QUERY_EVENTS.
11.10 - CREATE FAULT GROUP
Creates a fault group, which can contain the following:.
Enterprise Mode only
Creates a fault group, which can contain the following:
- One or more nodes
- One or more other fault groups
CREATE FAULT GROUP creates an empty fault group. Use ALTER FAULT GROUP to add nodes or other fault groups to an existing fault group.
Syntax
CREATE FAULT GROUP name
Parameters
name
- The name of the fault group to create, unique among all fault groups, where
name
conforms to conventions described in Identifiers.
Privileges
Superuser
Examples
The following command creates a fault group called parent0
:
=> CREATE FAULT GROUP parent0;
CREATE FAULT GROUP
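Because the new fault group is empty, a typical next step is to add nodes or other fault groups to it with ALTER FAULT GROUP; for example (the node name here is a placeholder):
=> ALTER FAULT GROUP parent0 ADD NODE v_vmart_node0004;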
See also
11.11 - CREATE FLEXIBLE EXTERNAL TABLE AS COPY
CREATE FLEXIBLE EXTERNAL TABLE AS COPY creates a flexible external table.
CREATE FLEXIBLE EXTERNAL TABLE AS COPY creates a flexible external table. This statement combines the CREATE FLEXIBLE TABLE and COPY statements, supporting a subset of each statement's parameters.
You can also use user-defined load functions (UDLs) to create external flex tables. For details about creating and using flex tables, see Using Flex Tables.
Note
Vertica does not create a superprojection for an external table when you create it.
For details about creating and using flex tables, see Creating flex tables in Using Flex Tables.
Caution
Canceling a CREATE FLEX EXTERNAL TABLE AS COPY statement can cause unpredictable results. Vertica recommends that you allow the statement to finish, then use DROP TABLE after the table exists.
Syntax
CREATE FLEX[IBLE] EXTERNAL TABLE [ IF NOT EXISTS ] [[database.]schema.]table-name
( [ column-definition[,...] ] )
[ INCLUDE | EXCLUDE [SCHEMA] PRIVILEGES ]
AS COPY [ ( { column-as-expression | column } [ FILLER datatype ] ]
FROM {
'path-to-data' [ ON nodename | ON ANY NODE | ON (nodeset) ] input-format [,...]
| [ WITH ] UDL-clause[...]
}
[ ABORT ON ERROR ]
[ DELIMITER [ AS ] 'char' ]
[ ENCLOSED [ BY ] 'char' ]
[ ENFORCELENGTH ]
[ ESCAPE [ AS ] 'char' | NO ESCAPE ]
[ EXCEPTIONS 'path' [ ON nodename ] [,...] ]
[ NULL [ AS ] 'string' ]
[ RECORD TERMINATOR 'string' ]
[ REJECTED DATA 'path' [ ON nodename ][,...] ]
[ REJECTMAX integer ]
[ SKIP integer ]
[ SKIP BYTES integer ]
[ TRAILING NULLCOLS ]
[ TRIM 'byte' ]
Parameters
For parameter descriptions, see CREATE TABLE and Parameters.
Note
CREATE FLEXIBLE EXTERNAL TABLE AS COPY supports only a subset of CREATE TABLE and COPY parameters.
Privileges
Superuser, or non-superuser with the following privileges:
Examples
To create an external flex table:
=> CREATE flex external table mountains() AS COPY FROM '/home/release/KData/kmm_ountains.json' PARSER fjsonparser();
CREATE TABLE
As with other flex tables, creating an external flex table produces two regular tables: the named table and its associated _keys
table. The keys table is not an external table:
=> \dt mountains
List of tables
Schema | Name | Kind | Owner | Comment
--------+-----------+-------+---------+---------
public | mountains | table | release |
(1 row)
You can use the helper function, COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW, to compute keys and create a view for the external table:
=> SELECT compute_flextable_keys_and_build_view ('appLog');
compute_flextable_keys_and_build_view
--------------------------------------------------------------------------------------------------
Please see public.appLog_keys for updated keys
The view public.appLog_view is ready for querying
(1 row)
Check the keys from the _keys
table for the results of running the helper application:
=> SELECT * FROM appLog_keys;
key_name | frequency | data_type_guess
----------------------------------------------------------+-----------+------------------
contributors | 8 | varchar(20)
coordinates | 8 | varchar(20)
created_at | 8 | varchar(60)
entities.hashtags | 8 | long varbinary(186)
.
.
.
retweeted_status.user.time_zone | 1 | varchar(20)
retweeted_status.user.url | 1 | varchar(68)
retweeted_status.user.utc_offset | 1 | varchar(20)
retweeted_status.user.verified | 1 | varchar(20)
(125 rows)
You can query the view:
=> SELECT "user.lang" FROM appLog_view;
user.lang
-----------
it
en
es
en
en
es
tr
en
(12 rows)
See also
11.12 - CREATE FLEXIBLE TABLE
Creates a flexible (flex) table in the logical schema.
Creates a flexible (flex) table in the logical schema.
When you create a flex table, Vertica automatically creates two dependent objects: a keys table and a view.
The flex table requires the keys table and view. Neither of these objects can exist independently of the flex table.
Syntax
Create with column definitions:
CREATE [[ scope ] TEMP[ORARY]] FLEX[IBLE] TABLE [ IF NOT EXISTS ]
[[database.]schema.]table-name
( [ column-definition[,...] [, table-constraint ][,...] ] )
[ ORDER BY column[,...] ]
[ segmentation-spec ]
[ KSAFE [k-num] ]
[ partition-clause]
[ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
[ DISK_QUOTA quota ]
Create from another table:
CREATE FLEX[IBLE] TABLE [[database.]schema.] table-name
[ ( column-name-list ) ]
[ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
AS query [ ENCODED BY column-ref-list ]
[ DISK_QUOTA quota ]
Parameters
For general parameter descriptions, see CREATE TABLE; for parameters specific to temporary flex tables, see CREATE TEMPORARY TABLE and Creating flex tables.
You cannot partition a flex table on any virtual column (key).
Privileges
Non-superuser: CREATE privilege on table schema
Default columns
The CREATE statement can omit specifying any column definitions. CREATE FLEXIBLE TABLE always creates two columns automatically:
__raw__
- LONG VARBINARY type column to store unstructured data that you load. By default, this column has a
NOT NULL
constraint.
__identity__
- IDENTITY column that is used for segmentation and sorting when no other column is defined.
Default projections
Vertica automatically creates superprojections for both the flex table and keys tables when you create them.
If you create a flex table with one or more of the ORDER BY, ENCODED BY, SEGMENTED BY, or KSAFE clauses, the clause information is used to create projections. If no clauses are in use, Vertica uses the following defaults:
Table | Sort order | Encoding | Segmentation | K-safety
Flexible table | ORDER BY *.__identity__ | none | SEGMENTED BY hash *.__identity__ ALL NODES OFFSET 0 | 1
Keys table | ORDER BY *._keys_frequency | none | UNSEGMENTED ALL NODES | 1
Note
When you build a view for a flex table (see BUILD_FLEXTABLE_VIEW), the view is ordered by frequency (descending), then key_name.
Examples
The following example creates a flex table named darkdata1 without specifying any column information. Vertica creates a default superprojection and buddy projection as part of creating the table:
=> CREATE FLEXIBLE TABLE darkdata1();
CREATE TABLE
CREATE TABLE
=> \dj darkdata1*
List of projections
Schema | Name | Owner | Node | Comment
--------+----------------------+---------+------------------+---------
public | darkdata1_b0 | dbadmin | |
public | darkdata1_b1 | dbadmin | |
public | darkdata1_keys_super | dbadmin | v_vmart_node0001 |
public | darkdata1_keys_super | dbadmin | v_vmart_node0002 |
public | darkdata1_keys_super | dbadmin | v_vmart_node0003 |
(5 rows)
=> SELECT export_objects('','darkdata1_b0');
CREATE PROJECTION public.darkdata1_b0 /*+basename(darkdata1),createtype(P)*/
(
__identity__,
__raw__
)
AS
SELECT darkdata1.__identity__,
darkdata1.__raw__
FROM public.darkdata1
ORDER BY darkdata1.__identity__
SEGMENTED BY hash(darkdata1.__identity__) ALL NODES OFFSET 0;
SELECT MARK_DESIGN_KSAFE(1);
(1 row)
=> select export_objects('','darkdata1_keys_super');
CREATE PROJECTION public.darkdata1_keys_super /*+basename(darkdata1_keys),createtype(P)*/
(
key_name,
frequency,
data_type_guess
)
AS
SELECT darkdata1_keys.key_name,
darkdata1_keys.frequency,
darkdata1_keys.data_type_guess
FROM public.darkdata1_keys
ORDER BY darkdata1_keys.frequency
UNSEGMENTED ALL NODES;
SELECT MARK_DESIGN_KSAFE(1);
(1 row)
The following example creates a table called darkdata1
with one column definition (date_col
). The statement specifies the partition by
clause to partition the data by year. Vertica creates a default superprojection and buddy projections as part of creating the table:
=> CREATE FLEX TABLE darkdata1 (date_col date NOT NULL) partition by
extract('year' from date_col);
CREATE TABLE
See also
11.13 - CREATE FUNCTION statements
Vertica provides CREATE statements for each type of user-defined extension.
Vertica provides CREATE statements for each type of user-defined extension. Each CREATE statement adds a user-defined function to the Vertica catalog:
Vertica also provides CREATE FUNCTION (SQL), which stores SQL expressions as functions that you can invoke in a query.
11.13.1 - CREATE AGGREGATE FUNCTION
Adds a user-defined aggregate function (UDAF) to the catalog.
Adds a user-defined aggregate function (UDAF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.
CREATE AGGREGATE FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading aggregate functions. When you call the SQL function, Vertica passes the input table to the function to process.
User-defined aggregate functions run in unfenced mode only.
Syntax
CREATE [ OR REPLACE ] AGGREGATE FUNCTION [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory'
LIBRARY library
[ NOT FENCED ];
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE '
language
'
- The language used to develop this function, currently
C++
only (the default).
NAME '
factory
'
- Name of the factory class that generates the function instance.
LIBRARY
library
- Name of the shared library that contains the function. This library must have already been loaded by CREATE LIBRARY.
NOT FENCED
- Indicates that the function runs in unfenced mode. Aggregate functions cannot be run in fenced mode.
Privileges
Non-superuser:
Examples
The following example demonstrates loading a library named AggregateFunctions
and then defining functions named ag_avg
and ag_cat
. The functions are mapped to the AverageFactory
and ConcatenateFactory
classes in the library:
=> CREATE LIBRARY AggregateFunctions AS '/opt/vertica/sdk/examples/build/AggregateFunctions.so';
CREATE LIBRARY
=> CREATE AGGREGATE FUNCTION ag_avg AS LANGUAGE 'C++' NAME 'AverageFactory'
library AggregateFunctions;
CREATE AGGREGATE FUNCTION
=> CREATE AGGREGATE FUNCTION ag_cat AS LANGUAGE 'C++' NAME 'ConcatenateFactory'
library AggregateFunctions;
CREATE AGGREGATE FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+------------------------------------------------------------------
schema_name | public
function_name | ag_avg
procedure_type | User Defined Aggregate
function_return_type | Numeric
function_argument_type | Numeric
function_definition | Class 'AverageFactory' in Library 'public.AggregateFunctions'
volatility |
is_strict | f
is_fenced | f
comment |
-[ RECORD 2 ]----------+------------------------------------------------------------------
schema_name | public
function_name | ag_cat
procedure_type | User Defined Aggregate
function_return_type | Varchar
function_argument_type | Varchar
function_definition | Class 'ConcatenateFactory' in Library 'public.AggregateFunctions'
volatility |
is_strict | f
is_fenced | f
comment |
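After registration, the aggregate can be called like a built-in aggregate function; for example, with a hypothetical table and numeric column:
=> SELECT ag_avg(sale_price) FROM sales_records;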
See also
11.13.2 - CREATE ANALYTIC FUNCTION
Adds a user-defined analytic function (UDAnF) to the catalog.
Adds a user-defined analytic function (UDAnF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.
CREATE ANALYTIC FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading analytic functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.
Syntax
CREATE [ OR REPLACE ] ANALYTIC FUNCTION [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory'
LIBRARY library
[ FENCED | NOT FENCED ]
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE 'language'
- Language used to develop this function, one of the following:
NAME 'factory'
- Name of the factory class that generates the function instance.
LIBRARY library
- Name of the library that contains the function. This library must already be loaded by CREATE LIBRARY.
FENCED | NOT FENCED
- Enables or disables fenced mode for this function.
Default: FENCED
Privileges
Non-superuser:
Examples
This example creates an analytic function named an_rank
based on the factory class named RankFactory
in the AnalyticFunctions
library:
=> CREATE ANALYTIC FUNCTION an_rank AS LANGUAGE 'C++'
NAME 'RankFactory' LIBRARY AnalyticFunctions;
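Like built-in analytic functions, a UDAnF is invoked with an OVER clause. The following is a minimal sketch that assumes a hypothetical table hits with columns site and page_views:
=> -- hypothetical table and columns; an_rank ranks rows within each partition
=> SELECT site, an_rank() OVER (PARTITION BY site ORDER BY page_views DESC) FROM hits;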
See also
Analytic functions (UDAnFs)
11.13.3 - CREATE FILTER
Adds a user-defined load filter function to the catalog.
Adds a user-defined load filter function to the catalog. The library containing the filter function must have been previously added using CREATE LIBRARY.
CREATE FILTER automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading load filter functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.
Important
Installing an untrusted UDL function can compromise the security of the server. UDxs can contain arbitrary code. In particular, user-defined parser functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDxs to untrusted users.
Syntax
CREATE [ OR REPLACE ] FILTER [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory' LIBRARY library
[ FENCED | NOT FENCED ]
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE 'language'
- The language used to develop this function, one of the following:
- C++ (default)
- Java
- Python
NAME 'factory'
- Name of the factory class that generates the function instance. This is the same name used by the RegisterFactory class.
LIBRARY library
- Name of the C++ library shared object file, Python file, or Java Jar file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
- Enables or disables fenced mode for this function.
Default: FENCED
Privileges
Superuser
Examples
The following example demonstrates loading a library named iConverterLib, then defining a filter function named Iconverter that is mapped to the iConverterFactory factory class in the library:
=> CREATE LIBRARY iConverterLib as '/opt/vertica/sdk/examples/build/IconverterLib.so';
CREATE LIBRARY
=> CREATE FILTER Iconverter AS LANGUAGE 'C++' NAME 'IconverterFactory' LIBRARY IconverterLib;
CREATE FILTER FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name | public
function_name | Iconverter
procedure_type | User Defined Filter
function_return_type |
function_argument_type |
function_definition |
volatility |
is_strict | f
is_fenced | f
comment |
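A load filter is invoked in the FILTER clause of a COPY statement. The following is a minimal sketch that assumes a hypothetical table t, a Latin-1 encoded input file, and a from_encoding parameter on the filter:
=> -- hypothetical table, file path, and filter parameter
=> COPY t FROM '/data/latin1.dat' WITH FILTER Iconverter(from_encoding='LATIN1');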
See also
11.13.4 - CREATE FUNCTION (scalar)
Adds a user-defined scalar function (UDSF) to the catalog.
Adds a user-defined scalar function (UDSF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.
A UDSF takes in a single row of data and returns a single value. These functions can be used anywhere a native Vertica function or statement can be used, except CREATE TABLE with its PARTITION BY or any segmentation clause.
CREATE FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading UDxs. When you call the function, Vertica passes the parameters to the function in the library to process.
Syntax
CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory'
LIBRARY library
[ FENCED | NOT FENCED ]
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE 'language'
- Language used to develop this function, one of the following:
- C++ (default)
- Python
- Java
- R
NAME 'factory'
- Name of the factory class that generates the function instance.
LIBRARY library
- Name of the C++ shared object file, Python file, Java Jar file, or R functions file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
- Enables or disables fenced mode for this function. Functions written in Java and R always run in fenced mode.
Default: FENCED
Privileges
Examples
The following example loads a library named ScalarFunctions and then defines a function named Add2Ints that is mapped to the Add2IntsFactory factory class in the library:
=> CREATE LIBRARY ScalarFunctions AS '/opt/vertica/sdk/examples/build/ScalarFunctions.so';
CREATE LIBRARY
=> CREATE FUNCTION Add2Ints AS LANGUAGE 'C++' NAME 'Add2IntsFactory' LIBRARY ScalarFunctions;
CREATE FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM USER_FUNCTIONS;
-[ RECORD 1 ]----------+----------------------------------------------------
schema_name | public
function_name | Add2Ints
procedure_type | User Defined Function
function_return_type | Integer
function_argument_type | Integer, Integer
function_definition | Class 'Add2IntsFactory' in Library 'public.ScalarFunctions'
volatility | volatile
is_strict | f
is_fenced | t
comment |
=> \x
Expanded display is off.
=> -- Try a simple call to the function
=> SELECT Add2Ints(23,19);
Add2Ints
----------
42
(1 row)
The following example uses a scalar function that returns a ROW:
=> CREATE FUNCTION div_with_rem AS LANGUAGE 'C++' NAME 'DivFactory' LIBRARY ScalarFunctions;
=> SELECT div_with_rem(18,5);
div_with_rem
------------------------------
{"quotient":3,"remainder":3}
(1 row)
See also
Developing user-defined extensions (UDxs)
11.13.5 - CREATE FUNCTION (SQL)
Stores SQL expressions as functions for use in queries.
Stores SQL expressions as functions for use in queries. User-defined SQL functions are useful for executing complex queries and combining Vertica built-in functions. You can call the function in a given query. If multiple SQL functions with the same name and argument types are in the search path, Vertica calls the first match that it finds.
SQL functions do not support complex types for arguments or return values.
SQL functions are flattened in all cases, including DDL.
Syntax
CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
[[database.]schema.]function( [ argname argtype[,...] ] )
RETURN return_type
AS
BEGIN
RETURN expression;
END;
Arguments
OR REPLACE
- If a function of the same name and arguments exists, replace it. If you only change the function arguments, Vertica ignores this option and maintains both functions under the same name.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create, which must conform to the conventions described in Identifiers.
argname argtype [,...]
- A comma-delimited list of argument names and their data types. Complex types are not supported.
return_type
- The data type that this function returns. Complex types are not supported.
RETURN expression
- The SQL function body, which can contain built-in functions, operators, and argument names specified in the CREATE FUNCTION statement. The expression must end with a semicolon.
Note
CREATE FUNCTION allows only one RETURN expression. Return expressions do not support the following:
- FROM, WHERE, GROUP BY, ORDER BY, and LIMIT clauses
- Aggregation, analytics, and meta-functions
Privileges
Non-superuser:
Strictness and volatility
Vertica infers the strictness and volatility (stable, immutable, or volatile) of a SQL function from its definition. Vertica then determines the correctness of usage, such as where an immutable function is expected but a volatile function is provided.
SQL functions and views
You can create views on the queries that use SQL functions and then query the views. When you create a view, a SQL function replaces a call to the user-defined function with the function body in a view definition. Therefore, when the body of the user-defined function is replaced, the view should also be replaced.
Examples
See Creating user-defined SQL functions.
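For illustration, the following minimal sketch defines a SQL function that maps NULL to zero and then calls it:
=> CREATE FUNCTION myzeroifnull(x INT) RETURN INT
   AS BEGIN
     RETURN (CASE WHEN (x IS NOT NULL) THEN x ELSE 0 END);
   END;
CREATE FUNCTION
=> SELECT myzeroifnull(NULL);
 myzeroifnull
--------------
            0
(1 row)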
See also
11.13.6 - CREATE PARSER
Adds a user-defined load parser function to the catalog.
Adds a user-defined load parser function to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.
CREATE PARSER automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading load parser functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.
Important
Installing an untrusted UDL function can compromise the security of the server. UDxs can contain arbitrary code. In particular, user-defined parser functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDxs to untrusted users.
Syntax
CREATE [ OR REPLACE ] PARSER [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory'
LIBRARY library
[ FENCED | NOT FENCED ]
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE 'language'
- The language used to develop this function, one of the following:
- C++ (default)
- Java
- Python
NAME 'factory'
- Name of the factory class that generates the function instance. This is the same name used by the RegisterFactory class.
LIBRARY library
- Name of the C++ library shared object file, Python file, or Java Jar file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
- Enables or disables fenced mode for this function.
Default: FENCED
Privileges
Superuser
Examples
The following example demonstrates loading a library named BasicIntegerParserLib, then defining a parser function named BasicIntegerParser that is mapped to the BasicIntegerParserFactory factory class in the library:
=> CREATE LIBRARY BasicIntegerParserLib as '/opt/vertica/sdk/examples/build/BasicIntegerParser.so';
CREATE LIBRARY
=> CREATE PARSER BasicIntegerParser AS LANGUAGE 'C++' NAME 'BasicIntegerParserFactory' LIBRARY BasicIntegerParserLib;
CREATE PARSER FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name | public
function_name | BasicIntegerParser
procedure_type | User Defined Parser
function_return_type |
function_argument_type |
function_definition |
volatility |
is_strict | f
is_fenced | f
comment |
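A parser is invoked in the PARSER clause of a COPY statement. The following is a minimal sketch that assumes a hypothetical single-column integer table:
=> CREATE TABLE t (i INTEGER);
CREATE TABLE
=> COPY t FROM STDIN WITH PARSER BasicIntegerParser();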
See also
11.13.7 - CREATE SOURCE
Adds a user-defined load source function to the catalog.
Adds a user-defined load source function to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.
CREATE SOURCE automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading load source functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.
Important
Installing an untrusted UDL function can compromise the security of the server. UDxs can contain arbitrary code. In particular, user-defined parser functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDxs to untrusted users.
Syntax
CREATE [ OR REPLACE ] SOURCE [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory'
LIBRARY library
[ FENCED | NOT FENCED ]
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE 'language'
- Language used to develop this function, one of the following:
NAME 'factory'
- Name of the factory class that generates the function instance. This is the same name used by the RegisterFactory class.
LIBRARY library
- Name of the C++ library shared object file or Java Jar file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
- Enables or disables fenced mode for this function.
Default: FENCED
Privileges
Superuser
Examples
The following example demonstrates loading a library named curllib, then defining a source function named curl that is mapped to the CurlSourceFactory factory class in the library:
=> CREATE LIBRARY curllib as '/opt/vertica/sdk/examples/build/cURLLib.so';
CREATE LIBRARY
=> CREATE SOURCE curl AS LANGUAGE 'C++' NAME 'CurlSourceFactory' LIBRARY curllib;
CREATE SOURCE
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name | public
function_name | curl
procedure_type | User Defined Source
function_return_type |
function_argument_type |
function_definition |
volatility |
is_strict | f
is_fenced | f
comment |
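A source is invoked in the SOURCE clause of a COPY statement. The following is a minimal sketch that assumes a hypothetical target table, URL, and a url parameter on the source:
=> -- hypothetical table and URL; the source streams the remote file into the load
=> COPY remote_data WITH SOURCE curl(url='http://example.com/data.txt');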
See also
11.13.8 - CREATE TRANSFORM FUNCTION
Adds a user-defined transform function (UDTF) to the catalog.
Adds a user-defined transform function (UDTF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.
CREATE TRANSFORM FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading transform functions. When you call the SQL function, Vertica passes the input table to the transform function in the library to process.
Syntax
CREATE [ OR REPLACE ] TRANSFORM FUNCTION [ IF NOT EXISTS ]
[[database.]schema.]function AS
[ LANGUAGE 'language' ]
NAME 'factory'
LIBRARY library
[ FENCED | NOT FENCED ]
Arguments
OR REPLACE
If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
IF NOT EXISTS
If a function with the same name and arguments exists, return without creating the function.
OR REPLACE and IF NOT EXISTS are mutually exclusive.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.
The function name must conform to the restrictions on Identifiers.
LANGUAGE 'language'
- The language used to develop this function, one of the following:
- C++ (default)
- Java
- R
- Python
NAME 'factory'
- Name of the factory class that generates the function instance.
LIBRARY library
- Name of the C++ shared object file, Python file, Java Jar file, or R functions file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
- Enables or disables fenced mode for this function. Functions written in Java and R always run in fenced mode.
Default: FENCED
Privileges
Non-superuser:
Restrictions
A query that includes a UDTF cannot:
- Include statements other than the SELECT statement that calls the UDTF and a PARTITION BY expression unless the UDTF is marked as a one-to-many UDTF
- Call an analytic function
- Call another UDTF
- Include one of the following clauses:
Examples
The following example loads a library named TransformFunctions
and then defines a function named tokenize
that is mapped to the TokenFactory
factory class in the library:
=> CREATE LIBRARY TransformFunctions AS
'/home/dbadmin/TransformFunctions.so';
CREATE LIBRARY
=> CREATE TRANSFORM FUNCTION tokenize
AS LANGUAGE 'C++' NAME 'TokenFactory' LIBRARY TransformFunctions;
CREATE TRANSFORM FUNCTION
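A UDTF is invoked in a SELECT statement with an OVER clause. The following is a minimal sketch that assumes a hypothetical table phrases with columns url and description:
=> -- hypothetical table and columns; tokenize emits one row per token
=> SELECT url, tokenize(description) OVER (PARTITION BY url) FROM phrases;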
See also
11.14 - CREATE HCATALOG SCHEMA
Define a schema for data stored in a Hive data warehouse using the HCatalog Connector.
Define a schema for data stored in a Hive data warehouse using the HCatalog Connector. For more information, see Using the HCatalog Connector.
Most of the optional parameters are read out of Hadoop configuration files if available. If you copied the Hadoop configuration files as described in Configuring Vertica for HCatalog, you can omit most parameters. By default this statement uses the values specified in those configuration files. If the configuration files are complete, the following is a valid statement:
=> CREATE HCATALOG SCHEMA hcat;
If a value is not specified in the configuration files and a default is shown in the parameter list, then that default value is used.
Some parameters apply only if you are using HiveServer2 (the default). Others apply only if you are using WebHCat, a legacy Hadoop service. When using HiveServer2, use HIVESERVER2_HOSTNAME to specify the server host. When using WebHCat, use WEBSERVICE_HOSTNAME to specify the server host.
If you need to use WebHCat you must also set the HCatalogConnectorUseHiveServer2 configuration parameter to 0. See Hadoop parameters.
After creating the schema, you can change many (but not all) parameters using ALTER HCATALOG SCHEMA.
Syntax
CREATE HCATALOG SCHEMA [IF NOT EXISTS] schemaName
[AUTHORIZATION user-id]
[WITH [param=value [,...] ] ]
Arguments
IF NOT EXISTS
- If given, the statement exits without an error when the schema named in schemaName already exists.
schemaName
- The name of the schema to create in the Vertica catalog. The tables in the Hive database will be available through this schema.
AUTHORIZATION user-id
- The name of a Vertica account to own the schema being created. This parameter is ignored if Kerberos authentication is being used; in that case the current vsql user is used.
Parameters
HOSTNAME
- The hostname, IP address, or URI of the database server that stores the Hive data warehouse's metastore information.
If you specify this parameter and do not also specify PORT, then this value must be in the URI format used for hive.metastore.uris in hive-site.xml.
If the Hive metastore supports High Availability, you can specify a comma-separated list of URIs for this value.
If this value is not specified, hive-site.xml must be available.
PORT
- The port number on which the metastore database is running. If you specify this parameter, you must also specify HOSTNAME and it must be a name or IP address (not a URI).
HIVESERVER2_HOSTNAME
- The hostname or IP address of the HiveServer2 service. This parameter is optional if in hive-site.xml you set one of the following properties:
This parameter is ignored if you are using WebHCat.
WEBSERVICE_HOSTNAME
- The hostname or IP address of the WebHCat service, if using WebHCat instead of HiveServer2. If this value is not specified, webhcat-site.xml must be available.
WEBSERVICE_PORT
- The port number on which the WebHCat service is running, if using WebHCat instead of HiveServer2. If this value is not specified, webhcat-site.xml must be available.
WEBHDFS_ADDRESS
- The host and port ("host:port") for the WebHDFS service. This parameter is used only for reading ORC and Parquet files. If this value is not set, hdfs-site.xml must be available to read these file types through the HCatalog Connector.
HCATALOG_SCHEMA
- The name of the Hive schema or database that the Vertica schema is being mapped to. The default is schemaName.
CUSTOM_PARTITIONS
- Whether the Hive schema uses custom partition locations ('YES' or 'NO'). If the schema uses custom partition locations, then Vertica queries Hive to get those locations when executing queries. These additional Hive queries can be expensive, so use this parameter only if you need to. The default is 'NO' (disabled). For more information, see Using Partitioned Data.
HCATALOG_USER
- The username of the HCatalog user to use when making calls to the HiveServer2 or WebHCat server. The default is the current database user.
HCATALOG_CONNECTION_TIMEOUT
- The number of seconds the HCatalog Connector waits for a successful connection to the HiveServer or WebHCat server. A value of 0 means wait indefinitely.
HCATALOG_SLOW_TRANSFER_LIMIT
- The lowest data transfer rate (in bytes per second) from the HiveServer2 or WebHCat server that the HCatalog Connector accepts. See HCATALOG_SLOW_TRANSFER_TIME for details.
HCATALOG_SLOW_TRANSFER_TIME
- The number of seconds the HCatalog Connector waits before enforcing the data transfer rate lower limit. After this time has passed, the HCatalog Connector tests whether the data transfer rate is at least as fast as the value set in HCATALOG_SLOW_TRANSFER_LIMIT. If it is not, then the HCatalog Connector breaks the connection and terminates the query.
SSL_CONFIG
- The path of the Hadoop ssl-client.xml configuration file. This parameter is required if you are using HiveServer2 and it uses SSL wire encryption. This parameter is ignored if you are using WebHCat.
The default values for HCATALOG_CONNECTION_TIMEOUT, HCATALOG_SLOW_TRANSFER_LIMIT, and HCATALOG_SLOW_TRANSFER_TIME are set by the database configuration parameters HCatConnectionTimeout, HCatSlowTransferLimit, and HCatSlowTransferTime. See Hadoop parameters for more information.
Configuration files
The HCatalog Connector uses the following values from the Hadoop configuration files if you do not override them when creating the schema.
hive-site.xml:
- hive.server2.thrift.bind.host (used for HIVESERVER2_HOSTNAME)
- hive.server2.thrift.port
- hive.server2.transport.mode
- hive.server2.authentication
- hive.server2.authentication.kerberos.principal
- hive.server2.support.dynamic.service.discovery
- hive.zookeeper.quorum (used as HIVESERVER2_HOSTNAME if dynamic service discovery is enabled)
- hive.zookeeper.client.port
- hive.server2.zookeeper.namespace
- hive.metastore.uris (used for HOSTNAME and PORT)
ssl-client.xml:
- ssl.client.truststore.location
- ssl.client.truststore.password
Privileges
The user must be a superuser or be granted all permissions on the database to use this statement.
The user also requires access to Hive data in one of the following ways:
- Have USAGE permissions on hcatalog_schema, if Hive does not use an authorization service (Sentry or Ranger) to manage access.
- Have permission through an authorization service, if Hive uses it to manage access. In this case you must either set EnableHCatImpersonation to 0, to access data as the Vertica principal, or grant users access to the HDFS data. For Sentry, you can use ACL synchronization to manage HDFS access.
- Be the dbadmin user, with or without an authorization service.
Examples
The following example shows how to use CREATE HCATALOG SCHEMA to define a new schema for tables stored in a Hive database and then query the system tables that contain information about those tables:
=> CREATE HCATALOG SCHEMA hcat WITH HOSTNAME='hcathost' PORT=9083
HCATALOG_SCHEMA='default' HIVESERVER2_HOSTNAME='hs.example.com'
SSL_CONFIG='/etc/hadoop/conf/ssl-client.xml' HCATALOG_USER='admin';
CREATE SCHEMA
=> \x
Expanded display is on.
=> SELECT * FROM v_catalog.hcatalog_schemata;
-[ RECORD 1 ]----------------+-------------------------------------------
schema_id | 45035996273748224
schema_name | hcat
schema_owner_id | 45035996273704962
schema_owner | admin
create_time | 2017-12-05 14:43:03.353404-05
hostname | hcathost
port | -1
hiveserver2_hostname | hs.example.com
webservice_hostname |
webservice_port | 50111
webhdfs_address | hs.example.com:50070
hcatalog_schema_name | default
ssl_config | /etc/hadoop/conf/ssl-client.xml
hcatalog_user_name | admin
hcatalog_connection_timeout | -1
hcatalog_slow_transfer_limit | -1
hcatalog_slow_transfer_time | -1
custom_partitions | f
=> SELECT * FROM v_catalog.hcatalog_table_list;
-[ RECORD 1 ]------+------------------
table_schema_id | 45035996273748224
table_schema | hcat
hcatalog_schema | default
table_name | nation
hcatalog_user_name | admin
-[ RECORD 2 ]------+------------------
table_schema_id | 45035996273748224
table_schema | hcat
hcatalog_schema | default
table_name | raw
hcatalog_user_name | admin
-[ RECORD 3 ]------+------------------
table_schema_id | 45035996273748224
table_schema | hcat
hcatalog_schema | default
table_name | raw_rcfile
hcatalog_user_name | admin
-[ RECORD 4 ]------+------------------
table_schema_id | 45035996273748224
table_schema | hcat
hcatalog_schema | default
table_name | raw_sequence
hcatalog_user_name | admin
The following example shows how to specify more than one metastore host.
=> CREATE HCATALOG SCHEMA hcat
WITH HOSTNAME='thrift://node1.example.com:9083,thrift://node2.example.com:9083';
The following example shows how to include custom partition locations:
=> CREATE HCATALOG SCHEMA hcat WITH HCATALOG_SCHEMA='default'
HIVESERVER2_HOSTNAME='hs.example.com'
CUSTOM_PARTITIONS='yes';
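After the schema is defined, Hive tables can be queried through it like native tables. For illustration, assuming the nation table listed in the hcatalog_table_list output above:
=> SELECT * FROM hcat.nation LIMIT 5;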
11.15 - CREATE KEY
Creates a private key.
Creates a private key.
Syntax
CREATE [TEMP[ORARY]] KEY name
{ 'AES' [ PASSWORD 'password' ] | 'RSA' }
{LENGTH length | AS key_text}
Parameters
TEMPORARY
- Create with session scope. The key is stored in memory and is valid only for the current session.
name
- The name of the key.
password
- Password for the key.
length
- Size of the key in bits.
Example: 2048
key_text
- The contents of the key to import.
Example:
-----BEGIN RSA PRIVATE KEY-----...ABCD1234...-----END RSA PRIVATE KEY-----
Privileges
Superuser
Examples
See Generating TLS certificates and keys.
See also
11.16 - CREATE LIBRARY
Loads a library containing user-defined extensions (UDxs) into the Vertica catalog.
Loads a library containing user-defined extensions (UDxs) into the Vertica catalog. Vertica automatically distributes copies of the library file and supporting libraries to all cluster nodes.
Because libraries are added to the database catalog, they persist across database restarts.
After loading a library in the catalog, you can use statements such as CREATE FUNCTION to define the extensions contained in the library. See Developing user-defined extensions (UDxs) for details.
Syntax
CREATE [OR REPLACE] LIBRARY
[[database.]schema.]name
AS 'path'
[ DEPENDS 'depends-path' ]
[ LANGUAGE 'language' ]
Arguments
OR REPLACE
- If a library with the same name exists, replace it. UDxs defined in the catalog that reference the updated library automatically start using the new library file.
If you do not use this directive and the library already exists, the CREATE statement returns with an error.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
name
- Name of the library to create. This is the name used when creating functions in the library (see Creating UDx Functions). While not required, it is good practice to match the file name.
AS path
- Path of the library to load, either an absolute path on the initiator node file system or a URI for another supported file system or object store.
DEPENDS 'depends-path'
- Files or libraries on which this library depends, one or more files or directories on the initiator node file system or other supported file systems or object stores. For a directory, end the path entry with a slash (/), optionally followed by a wildcard (*). To specify more than one file, separate entries with colons (:).
If any path entry contains colons, such as a URI, place brackets around the entire DEPENDS path and use double quotes for the individual path elements, as in the following example:
DEPENDS '["s3://mybucket/gson-2.3.1.jar"]'
DEPENDS has no effect for libraries written in R. R packages must be installed locally on each node, including external dependencies.
Important
The performance of CREATE LIBRARY can degrade in Eon Mode, in proportion to the number and depth of dependencies specified by the DEPENDS clause.
If a Java library depends on native libraries (SO files), use DEPENDS to specify the path and call System.loadLibrary()
in your UDx to load the native libraries from that path.
LANGUAGE 'language'
- The programming language of the functions in the library, one of:
- C++ (default)
- Python
- Java
- R
Privileges
Superuser, or UDXDEVELOPER and CREATE on the schema. Non-superusers must explicitly enable the UDXDEVELOPER role, as in the following example:
=> SET ROLE UDXDEVELOPER;
SET
-- Not required, but you can confirm the role as follows:
=> SHOW ENABLED ROLES;
name | setting
---------------+--------------
enabled roles | udxdeveloper
(1 row)
=> CREATE LIBRARY MyLib AS '/home/dbadmin/my_lib.so';
CREATE LIBRARY
-- Create functions...
-- UDXDEVELOPER also grants DROP (replace):
=> CREATE OR REPLACE LIBRARY MyLib AS '/home/dbadmin/my_lib.so';
Requirements
- Vertica makes its own copies of the library files. Later modification or deletion of the original files specified in the statement does not affect the library defined in the catalog. To update the library, use ALTER LIBRARY.
- Loading a library does not guarantee that it functions correctly. CREATE LIBRARY performs some basic checks on the library file to verify it is compatible with Vertica. The statement fails if it detects that the library was not correctly compiled or it finds other basic incompatibilities. However, CREATE LIBRARY cannot detect many other issues in shared libraries.
Multi-level library dependencies
If a DEPENDS clause specifies a library with multiple directory levels, Vertica follows the library path to include all subdirectories of that library. For example, the following CREATE LIBRARY statement enables the UDx library mylib to import all Python packages and modules that it finds in subdirectories of site-packages:
=> CREATE LIBRARY mylib AS '/path/to/python_udx' DEPENDS '/path/to/python/site-packages' LANGUAGE 'Python';
Important
DEPENDS can specify Java library dependencies that are up to 100 levels deep.
Examples
Load a library in the home directory of the dbadmin account:
=> CREATE LIBRARY MyFunctions AS '/home/dbadmin/my_functions.so';
Load a library located in the directory where you started vsql
:
=> \set libfile '\''`pwd`'/MyOtherFunctions.so\'';
=> CREATE LIBRARY MyOtherFunctions AS :libfile;
Load a library from the cloud:
=> CREATE LIBRARY SomeFunctions AS 'S3://mybucket/extensions.so';
Load a library that depends on multiple JAR files in the same directory:
=> CREATE LIBRARY DeleteVowelsLib AS '/home/dbadmin/JavaLib.jar'
DEPENDS '/home/dbadmin/mylibs/*' LANGUAGE 'Java';
Load a library with multiple explicit dependencies:
=> CREATE LIBRARY mylib AS '/path/to/java_udx'
DEPENDS '/path/to/jars/this.jar:/path/to/jars/that.jar' LANGUAGE 'Java';
Load a library with dependencies in the cloud:
=> CREATE LIBRARY s3lib AS 's3://mybucket/UdlLib.jar'
DEPENDS '["s3://mybucket/gson-2.3.1.jar"]' LANGUAGE 'Java';
11.17 - CREATE LOAD BALANCE GROUP
Creates a group of network addresses that can be targeted by a load balancing routing rule.
Creates a group of network addresses that can be targeted by a load balancing routing rule. You create a group either using a list of network addresses, or basing it on one or more fault groups or subclusters.
Note
You cannot add multiple network addresses for one node to the same load balancing group.
Syntax
CREATE LOAD BALANCE GROUP group_name WITH {
ADDRESS address[,...]
| FAULT GROUP fault_group[,...] FILTER 'IP_range'
| SUBCLUSTER subcluster[,...] FILTER 'IP_range'
}
[ POLICY 'policy_setting' ]
Parameters
group_name
- Name of the group to create. You use this name later when defining load balancing rules.
address[,...]
- Comma-delimited list of network addresses you created earlier.
fault_group[,...]
- Comma-delimited list of fault groups to use as the basis of the load balance group.
Note
Before you create your load balance group from a fault group, you must create network addresses on the nodes you want in your load balance group. Load balance groups only work with the network addresses you define on nodes, rather than IP addresses. See CREATE NETWORK ADDRESS.
subcluster[,...]
- Comma-delimited list of subclusters to use as the basis of the load balance group.
Note
As with fault groups, you must create network addresses on the nodes in the subcluster you want to be part of the load balance group.
IP_range
- Range of IP addresses in CIDR notation to include in the load balance group from the fault groups or subclusters. This range can be either IPv4 or IPv6. Only nodes that have a network address with an IP address that falls within this range are added to the load balancing group.
policy_setting
- Determines how the initially-contacted node chooses a target from the group, one of the following:
- ROUNDROBIN (default) rotates among the available members of the load balancing group. The initially-contacted node keeps track of which node it chose last time, and chooses the next one in the cluster.
Note
Each node in the cluster maintains its own round-robin pointer that indicates which node it should pick next for each load-balancing group. Therefore, if clients connect to different initial nodes, they may be redirected to the same node.
- RANDOM chooses an available node from the group randomly.
- NONE disables load balancing.
Privileges
Superuser
Examples
The following statement demonstrates creating a load balance group that contains several network addresses:
=> CREATE NETWORK ADDRESS addr01 ON v_vmart_node0001 WITH '10.20.110.21';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr02 ON v_vmart_node0002 WITH '10.20.110.22';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr03 on v_vmart_node0003 WITH '10.20.110.23';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr04 on v_vmart_node0004 WITH '10.20.110.24';
CREATE NETWORK ADDRESS
=> CREATE LOAD BALANCE GROUP group_1 WITH ADDRESS addr01, addr02;
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP group_2 WITH ADDRESS addr03, addr04;
CREATE LOAD BALANCE GROUP
=> SELECT * FROM LOAD_BALANCE_GROUPS;
name | policy | filter | type | object_name
------------+------------+-----------------+-----------------------+-------------
group_1 | ROUNDROBIN | | Network Address Group | addr01
group_1 | ROUNDROBIN | | Network Address Group | addr02
group_2 | ROUNDROBIN | | Network Address Group | addr03
group_2 | ROUNDROBIN | | Network Address Group | addr04
(4 rows)
This example demonstrates creating a load balancing group using a fault group:
=> CREATE FAULT GROUP fault_1;
CREATE FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE v_vmart_node0001;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE v_vmart_node0002;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE v_vmart_node0003;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE v_vmart_node0004;
ALTER FAULT GROUP
=> SELECT node_name,node_address,node_address_family,export_address
FROM v_catalog.nodes;
node_name | node_address | node_address_family | export_address
------------------+--------------+---------------------+----------------
v_vmart_node0001 | 10.20.110.21 | ipv4 | 10.20.110.21
v_vmart_node0002 | 10.20.110.22 | ipv4 | 10.20.110.22
v_vmart_node0003 | 10.20.110.23 | ipv4 | 10.20.110.23
v_vmart_node0004 | 10.20.110.24 | ipv4 | 10.20.110.24
(4 rows)
=> CREATE LOAD BALANCE GROUP group_all WITH FAULT GROUP fault_1 FILTER
'0.0.0.0/0';
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP group_some WITH FAULT GROUP fault_1 FILTER
'10.20.110.21/30';
CREATE LOAD BALANCE GROUP
=> SELECT * FROM LOAD_BALANCE_GROUPS;
name | policy | filter | type | object_name
----------------+------------+-----------------+-----------------------+-------------
group_all | ROUNDROBIN | 0.0.0.0/0 | Fault Group | fault_1
group_some | ROUNDROBIN | 10.20.110.21/30 | Fault Group | fault_1
(2 rows)
See also
11.18 - CREATE LOCAL TEMPORARY VIEW
Creates or replaces a local temporary view.
Creates or replaces a local temporary view. Views are read only, so they do not support insert, update, delete, or copy operations. Local temporary views are session-scoped, so they are visible only to their creator in the current session. Vertica drops the view when the session ends.
Note
Vertica does not support global temporary views.
Syntax
CREATE [OR REPLACE] LOCAL TEMP[ORARY] VIEW view [ (column[,...] ) ] AS query
Parameters
OR REPLACE
- Specifies to overwrite the existing view view-name. If you omit this option and view-name already exists, CREATE VIEW returns an error.
view
- Identifies the view to create, where view conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
column[,...]
- List of up to 9800 names to use as view column names. Vertica maps view column names to query columns according to the order of their respective lists. By default, the view uses column names as they are specified in the query.
AS query
- A SELECT statement that the temporary view executes. The SELECT statement can reference tables, temporary tables, and other views.
Privileges
See Creating views.
Examples
The following CREATE LOCAL TEMPORARY VIEW statement creates the temporary view myview. This view sums all individual incomes of customers listed in the store.store_sales_fact table, and groups results by state:
=> CREATE LOCAL TEMP VIEW myview AS
SELECT SUM(annual_income), customer_state FROM public.customer_dimension
WHERE customer_key IN (SELECT customer_key FROM store.store_sales_fact)
GROUP BY customer_state
ORDER BY customer_state ASC;
The following example uses the temporary view myview
with a WHERE
clause that limits the results to combined salaries greater than $2 billion:
=> SELECT * FROM myview WHERE SUM > 2000000000;
SUM | customer_state
-------------+----------------
2723441590 | AZ
29253817091 | CA
4907216137 | CO
3769455689 | CT
3330524215 | FL
4581840709 | IL
3310667307 | IN
2793284639 | MA
5225333668 | MI
2128169759 | NV
2806150503 | PA
2832710696 | TN
14215397659 | TX
2642551509 | UT
(14 rows)
See also
11.19 - CREATE LOCATION
Creates a storage location where Vertica can store data.
Creates a storage location where Vertica can store data. After you create the location, you create storage policies that assign the storage location to the database objects that will store data in the location.
Caution
While no technical issue prevents you from using CREATE LOCATION to add one or more Network File System (NFS) storage locations, Vertica does not support NFS data or catalog storage except for MapR mount points. You will be unable to run queries against any other NFS data. When creating locations on MapR file systems, you must specify ALL NODES SHARED.
If you use HDFS storage locations, the HDFS data must be available when you start Vertica. Your HDFS cluster must be operational, and the ROS files must be present. If you moved data files, or they are corrupted, or your HDFS cluster is not responsive, Vertica cannot start.
Syntax
CREATE LOCATION 'path'
[NODE 'node' | ALL NODES]
[ SHARED ]
[ COMMUNAL ]
[ USAGE 'usage']
[LABEL 'label']
[LIMIT 'size']
Arguments
path
- Where to store this location's data. The type of file system on which the location is based determines the path format:
HDFS storage locations have additional requirements.
Important
Vertica performs no validation on storage location paths. Confirm that the path value points to a valid location.
ALL NODES | NODE 'node'
- The node or nodes on which the storage location is defined, one of the following:
- ALL NODES (default): Create the storage location on each node that is currently part of the cluster. If you later add nodes, you must also create the location on those nodes. If SHARED is also specified, create the storage location once for use by all nodes.
- NODE 'node': Create the storage location on a single node, where node is the name of the node in the NODES system table. You cannot use this option with SHARED.
SHARED
- Indicates the location set by path is shared (used by all nodes) rather than local to each node. You cannot specify individual nodes with SHARED; you must use ALL NODES.
Most remote file systems such as HDFS and S3 are shared. For these file systems, the path argument represents a single location in the remote file system where all nodes store data. If using a remote file system, you must specify SHARED, even for one-node clusters.
If path is set to S3 communal storage, SHARED is always implied and can be omitted.
Deprecated
SHARED DATA and SHARED DATA,TEMP storage locations are deprecated.
COMMUNAL
- The location set by path is to be a communal storage location.
If you specify COMMUNAL, you do not need to provide the SHARED, ALL NODES, or NODE options because the storage location is necessarily shared by all nodes in the database. For the USAGE option, you must specify a value of DATA.
After the communal storage location is created, you can assign objects to it using the SET_OBJECT_STORAGE_POLICY function.
USAGE 'usage'
- The type of data the storage location can hold, where usage is one of the following:
- DATA,TEMP (default): The storage location can store persistent and temporary DML-generated data, and data for temporary tables.
- TEMP: A path-specified location to store DML-generated temporary data. If path is set to S3, then this location is used only when the RemoteStorageForTemp configuration parameter is set to 1, and TEMP must be qualified with ALL NODES SHARED. For details, see S3 Storage of Temporary Data.
- DATA: The storage location can only store persistent data.
- USER: Users with READ and WRITE privileges can access data and external tables of this storage location.
- DEPOT: The storage location is used in Eon Mode to store the depot. Only create DEPOT storage locations on local Linux file systems.
Vertica allows a single DEPOT storage location per node. If you want to move your depot to a different location (on a different file system, for example), you must first drop the old depot storage location, then create the new location.
LABEL 'label'
- A label for the storage location. You use this name later when assigning the storage location to data objects.
Important
You must supply a label for depot storage locations.
LIMIT 'size'
- Valid only if the storage location usage type is set to DEPOT, specifies the maximum amount of disk space that the depot can allocate from the storage location's file system.
You can specify size in two ways:
- integer%: Percentage of storage location disk size.
- integer{K|M|G|T}: Amount of storage location disk size in kilobytes, megabytes, gigabytes, or terabytes.
Important
The depot size cannot exceed 80 percent of the file system disk space where the depot is stored. If you specify a value that is too large, Vertica issues a warning and automatically changes the value to 80 percent of the file system size.
If you do not specify a limit, it is set to 60 percent.
Privileges
Superuser
File system access
The Vertica process must have read and write permissions to the location where data is to be stored. Each file system has its own requirements:
Linux
- Database superuser account (usually named dbadmin) must have full read and write access to the directory in the path argument.
HDFS without Kerberos
- A Hadoop user whose username matches the Vertica database administrator username (usually dbadmin) must have read and write access to the HDFS directory specified in the path argument. The UseServerIdentityOverUserIdentity configuration parameter must be set to true in the user session; otherwise Vertica tries to use the identity associated with the logged-in user.
HDFS with Kerberos
- A Hadoop user whose username matches the principal in the keytab file on each Vertica node must have read and write access to the HDFS directory stored in the path argument. This is not the same as the database administrator username. The UseServerIdentityOverUserIdentity configuration parameter must be set to true in the user session; otherwise Vertica tries to use the Kerberos principal associated with the logged-in user.
Object stores (S3, GCS, Azure)
- Database-level credentials must be specified and provide full read and write access to the location in the path argument. If session-level credentials are specified they are used, directly overriding the use of the storage location.
Examples
Create a storage location in the local Linux file system for temporary data storage:
=> CREATE LOCATION '/home/dbadmin/testloc' USAGE 'TEMP' LABEL 'tempfiles';
Create a storage location on HDFS. The HDFS cluster does not use Kerberos:
=> CREATE LOCATION 'hdfs://hadoopNS/vertica/colddata' ALL NODES SHARED
USAGE 'data' LABEL 'coldstorage';
Create the same storage location, but on a Hadoop cluster that uses Kerberos. Note the output that reports the principal being used:
=> CREATE LOCATION 'hdfs://hadoopNS/vertica/colddata' ALL NODES SHARED
USAGE 'data' LABEL 'coldstorage';
NOTICE 0: Performing HDFS operations using kerberos principal [vertica/hadoop.example.com]
CREATE LOCATION
Create a location for user data, grant access to it, and use it to create an external table:
=> CREATE LOCATION '/tmp' ALL NODES USAGE 'user';
CREATE LOCATION
=> GRANT ALL ON LOCATION '/tmp' to Bob;
GRANT PRIVILEGE
=> CREATE EXTERNAL TABLE ext1 (x integer) AS COPY FROM '/tmp/data/ext1.dat' DELIMITER ',';
CREATE TABLE
Create a user storage location on S3 and a role, so that users without their own S3 credentials can read data from S3 using the server credential:
--- set database-level credential (once):
=> ALTER DATABASE DEFAULT SET AWSAuth = 'myaccesskeyid123456:mysecretaccesskey123456789012345678901234';
=> CREATE LOCATION 's3://datalake' SHARED USAGE 'USER' LABEL 's3user';
=> CREATE ROLE ExtUsers;
--- Assign users to this role using GRANT (Role).
=> GRANT READ ON LOCATION 's3://datalake' TO ExtUsers;
Create a communal storage location on S3 with the label cold-storage
:
=> CREATE LOCATION 's3://bucket/s3' COMMUNAL USAGE 'DATA' LABEL 'cold-storage';
See also
11.20 - CREATE NAMESPACE
Eon Mode only
Creates a namespace for an Eon Mode database.
When you create a database, a namespace named default_namespace is created with the shard count defined during database creation. This statement lets you create additional namespaces. Each namespace has a distinct shard count that determines the segmentation of its member objects. For more information, see Managing namespaces and Namespaces and shards.
Every schema and table in an Eon Mode database belongs to a namespace. When you create a table or schema, you optionally specify the namespace under which to create the object. If no namespace is specified, the table or schema is created in default_namespace.
Important
If you create a non-default namespace, you cannot reshard default_namespace or save an in-database restore point.
Syntax
CREATE NAMESPACE namespace [ SHARD COUNT shards ]
Arguments
namespace
- Name of the namespace. Cannot be the same as the initial substring of any schema name containing a period. For example, if a schema named prefix.s exists, you cannot create a namespace named prefix.
SHARD COUNT shards
- Shard count of the namespace. If unspecified, the shard count is set to that of default_namespace.
Privileges
Superuser
Examples
Create the analytics
namespace with a shard count of 12:
=> CREATE NAMESPACE analytics SHARD COUNT 12;
CREATE NAMESPACE
You can view all namespaces in your database by querying the NAMESPACES system table:
=> SELECT namespace_name, is_default, default_shard_count FROM NAMESPACES;
namespace_name | is_default | default_shard_count
-------------------+------------+---------------------
default_namespace | t | 6
analytics | f | 12
(2 rows)
For a more in-depth example, see Managing namespaces.
See also
11.21 - CREATE NETWORK ADDRESS
Creates a network address that can be used as part of a connection load balancing policy.
Creates a network address that can be used as part of a connection load balancing policy. A network address creates a name in the Vertica catalog for an IP address and port number associated with a node. Nodes can have multiple network addresses, up to one for each IP address they have on the network.
Syntax
CREATE NETWORK ADDRESS name ON node WITH 'ip-address' [PORT port-number] [ENABLED | DISABLED]
Parameters
name
- The name of the new network address. Use this name when creating connection load balancing groups.
node
- The name of the node on which to create the network address. This should be the name of the node as it appears in the node_name column of system table NODES.
ip-address
- The IPv4 or IPv6 address on the node to associate with the network address.
Note
Vertica does not verify that the IP address you supply in this parameter is actually associated with the specified node. Be sure that the IP address actually belongs to the node. Otherwise, your load balancing policy is liable to send a client connection to the wrong node, or a non-Vertica host. Vertica rejects IP addresses that are invalid for a node. For example, it checks whether the IP address falls in the loopback address range of 127.0.0.0/8. If it finds that the IP address is invalid, CREATE NETWORK ADDRESS returns an error.
PORT port-number
- Sets the port number for the network address. You must supply a network address when altering the port number.
ENABLED | DISABLED
- Enables or disables the network address.
Privileges
Superuser
Examples
Create three network addresses, one for each node in a three-node cluster:
=> SELECT node_name,export_address from v_catalog.nodes;
node_name | export_address
---------------------+----------------
v_vmart_br_node0001 | 10.20.100.62
v_vmart_br_node0002 | 10.20.100.63
v_vmart_br_node0003 | 10.20.100.64
(3 rows)
=> CREATE NETWORK ADDRESS node01 ON v_vmart_br_node0001 WITH '10.20.100.62';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node02 ON v_vmart_br_node0002 WITH '10.20.100.63';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node03 ON v_vmart_br_node0003 WITH '10.20.100.64';
See also
11.22 - CREATE NETWORK INTERFACE
Identifies a network interface to which a node belongs.
Identifies a network interface to which a node belongs.
Use this statement when you want to configure import/export operations from individual nodes to other Vertica clusters. By default, when you install Vertica, it creates interfaces for all connected networks. You would only need CREATE NETWORK INTERFACE in situations where the network topology has changed since you installed Vertica.
Syntax
CREATE NETWORK INTERFACE network-interface-name ON node-name [WITH] 'node-IP-address' [PORT port-number] [ENABLED | DISABLED]
network-interface-name
- The name you assign to the network interface, where network-interface-name conforms to conventions described in Identifiers.
node-name
- The name of the node.
node-IP-address
- The node's IP address, either a public or private IP address. For more information, see Using Public and Private IP Networks.
PORT port-number
- Sets the port number for the network interface. You must supply a network interface when altering the port number.
ENABLED | DISABLED
- Enables or disables the network interface.
Privileges
Superuser
Examples
Create a network interface:
=> CREATE NETWORK INTERFACE mynetwork ON v_vmart_node0001 WITH '123.4.5.6' PORT 456 ENABLED;
11.23 - CREATE NOTIFIER
Creates a push-based notifier to send event notifications and messages out of Vertica.
Creates a push-based notifier to send event notifications and messages out of Vertica.
Syntax
CREATE NOTIFIER [ IF NOT EXISTS ] notifier-name ACTION 'notifier-type'
[ ENABLE | DISABLE ]
[ MAXPAYLOAD 'integer{K|M}' ]
MAXMEMORYSIZE 'integer{K|M|G|T}'
[ TLS CONFIGURATION tls-configuration ]
[ TLSMODE 'tls-mode' ]
[ CA BUNDLE bundle-name [ CERTIFICATE certificate-name ] ]
[ IDENTIFIED BY 'uuid' ]
[ [NO] CHECK COMMITTED ]
[ PARAMETERS 'adapter-params' ]
Parameters
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
notifier-name
- This notifier's unique identifier.
ACTION '
notifier-type
'
- String, the type of notifier, one of the following:
-
URL, with the following format, that identifies one or more target Kafka servers:
kafka://kafka-server-ip-address:port-number
To enable failover when a Kafka server is unavailable, specify additional hosts in a comma-delimited list. For example:
kafka://192.0.2.0:9092,192.0.2.1:9092,192.0.2.2:9092
-
syslog
: Notifications are sent to syslog. To use notifiers of this type, you must set the SyslogEnabled
parameter:
=> ALTER DATABASE DEFAULT SET SyslogEnabled = 1
Events monitored by this notifier type are not logged to MONITORING_EVENTS or vertica.log.
-
sns
: Notifications are sent to a Simple Notification Service (SNS) endpoint.
ENABLE | DISABLE
- Specifies whether to enable or disable the notifier.
Default: ENABLE
.
MAXPAYLOAD
'integer
{K|M}
'
- The maximum size of the message, up to 10^9 bytes, specified in kilobytes or megabytes.
The following restrictions apply:
-
MAXPAYLOAD
cannot be greater than MAXMEMORYSIZE
.
-
If you configure syslog to send messages to a remote destination, ensure that MaxMessageSize
(in /etc/rsyslog
for rsyslog
) is greater than or equal to MAXPAYLOAD
.
-
The MAXPAYLOAD
for SNS notifiers cannot exceed 256KB.
Defaults:
-
Kafka: 1M
-
syslog: 1M
-
SNS: 256K
MAXMEMORYSIZE
'integer
{K|M|G|T}
'
- The maximum size of the internal notifier, up to 2 TB, specified in kilobytes, megabytes, gigabytes, or terabytes.
MAXMEMORYSIZE
must be greater than MAXPAYLOAD
.
If the size of the message queue exceeds MAXMEMORYSIZE
, the notifier drops excess messages.
- TLS CONFIGURATION
tls-configuration
The TLS CONFIGURATION to use for TLS.
Notifiers support the following TLS modes:
If the notifier ACTION
is 'syslog'
or 'sns'
, this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
-
SNSCAFile or AWSCAFile
-
SNSCAPath or AWSCAPath
-
SNSEnableHttps
TLSMODE '
tls-mode
'
-
Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. If you use this parameter while the TLS CONFIGURATION parameter is not set, Vertica automatically creates a new TLS Configuration for the notifier that uses the same values as the deprecated parameter.
Specifies the type of connection between the notifier and an endpoint, one of the following:
-
disable
(default): Plaintext connection.
-
verify-ca
: Encrypted connection, and the server's certificate is verified as being signed by a trusted CA.
If you set this parameter to verify-ca
, the generated TLS Configuration will be set to TRY_VERIFY, which has the same behavior as VERIFY_CA.
If the notifier ACTION
is 'syslog'
or 'sns'
, this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
-
SNSCAFile or AWSCAFile
-
SNSCAPath or AWSCAPath
-
SNSEnableHttps
CA BUNDLE
bundle-name
-
Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. If you use this parameter while the TLS CONFIGURATION parameter is not set, Vertica automatically creates a new TLS Configuration for the notifier that uses the same values as the deprecated parameter.
Specifies a CA bundle. The certificates inside the bundle are used to validate the Kafka server's certificate if the TLSMODE
requires it.
If a CA bundle is specified for a notifier that currently uses disable
, which doesn't validate the Kafka server's certificate, the bundle will go unused when connecting to the Kafka server. This behavior persists unless the TLSMODE
is changed to one that validates server certificates.
Changes to the contents of the CA bundle take effect either after the notifier is disabled and re-enabled or after the database restarts. However, changes to which CA bundle the notifier uses take effect immediately.
If the notifier ACTION
is 'syslog'
or 'sns'
, this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
-
SNSCAFile or AWSCAFile
-
SNSCAPath or AWSCAPath
-
SNSEnableHttps
CERTIFICATE
certificate-name
-
Deprecated
This parameter has been superseded by the TLS CONFIGURATION parameter. If you use this parameter while the TLS CONFIGURATION parameter is not set, Vertica automatically creates a new TLS Configuration for the notifier that uses the same values as the deprecated parameter.
Specifies a client certificate for validation by the endpoint.
If the notifier ACTION
is 'syslog'
or 'sns'
, this parameter has no effect.
To encrypt messages sent to syslog, you must configure syslog for TLS.
To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:
-
SNSCAFile or AWSCAFile
-
SNSCAPath or AWSCAPath
-
SNSEnableHttps
IDENTIFIED BY
uuid
- Specifies the notifier's unique identifier. If set, all the messages published by this notifier have this attribute.
[NO] CHECK COMMITTED
- Specifies to wait for delivery confirmation before sending the next message in the queue.
Some messaging systems, like syslog, do not support delivery confirmation.
For SNS notifiers, this clause must be specified; NO CHECK COMMITTED behaves identically to CHECK COMMITTED.
PARAMETERS '
adapter-params
'
- Specifies one or more optional adapter parameters that are passed as a string to the adapter. Adapter parameters apply only to the adapter associated with the notifier.
For Kafka notifiers, refer to Kafka and Vertica configuration settings.
For syslog notifiers, specify the severity of the event with eventSeverity=
severity
, where severity
is one of the following:
-
0
: Emergency
-
1
: Alert
-
2
: Critical
-
3
: Error
-
4
: Warning
-
5
: Notice
-
6
: Informational
-
7
: Debug
Most syslog implementations, by default, do not log events with a severity level of 7. You must configure syslog to record these types of events.
Parameters cannot be set for SNS notifiers.
Privileges
Superuser
Encrypted notifiers for SASL_SSL Kafka configurations
Follow this procedure to create or alter notifiers for Kafka endpoints that use SASL_SSL. Note that you must repeat this procedure whenever you change the TLSMODE, certificates, or CA bundle for a given notifier.
-
Create a TLS Configuration with the desired TLS mode, certificate, and CA certificates.
-
Use CREATE or ALTER to disable the notifier and set the TLS Configuration:
=> ALTER NOTIFIER encrypted_notifier
DISABLE
TLS CONFIGURATION kafka_tls_config;
-
ALTER the notifier and set the proper rdkafka adapter parameters for SASL_SSL:
=> ALTER NOTIFIER encrypted_notifier PARAMETERS
'sasl.username=user;sasl.password=password;sasl.mechanism=PLAIN;security.protocol=SASL_SSL';
-
Enable the notifier:
=> ALTER NOTIFIER encrypted_notifier ENABLE;
Examples
Kafka notifiers
Create a Kafka notifier:
=> CREATE NOTIFIER my_dc_notifier
ACTION 'kafka://172.16.20.10:9092'
MAXMEMORYSIZE '1G'
IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
NO CHECK COMMITTED;
Create a notifier with an adapter-specific parameter:
=> CREATE NOTIFIER my_notifier
ACTION 'kafka://127.0.0.1:9092'
MAXMEMORYSIZE '10M'
PARAMETERS 'queue.buffering.max.ms=1000';
Create a notifier that uses an encrypted connection and verifies the Kafka server's certificate with the CA certificates in the notifier_tls_config object:
=> CREATE NOTIFIER encrypted_notifier
ACTION 'kafka://127.0.0.1:9092'
MAXMEMORYSIZE '10M'
TLS CONFIGURATION 'notifier_tls_config';
Syslog notifiers
The following example creates a notifier that writes a message to syslog when the Data collector (DC) component LoginFailures
updates:
-
Enable syslog notifiers for the current database:
=> ALTER DATABASE DEFAULT SET SyslogEnabled = 1;
-
Create and enable a syslog notifier v_syslog_notifier
:
=> CREATE NOTIFIER v_syslog_notifier ACTION 'syslog'
ENABLE
MAXMEMORYSIZE '10M'
IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
PARAMETERS 'eventSeverity = 5';
-
Configure the syslog notifier v_syslog_notifier
for updates to the LoginFailures
DC component with SET_DATA_COLLECTOR_NOTIFY_POLICY:
=> SELECT SET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures','v_syslog_notifier', 'Login failed!', true);
This notifier writes the following message to syslog (default location: /var/log/messages
) when a user fails to authenticate as the user Bob
:
Apr 25 16:04:58
vertica_host_01
vertica:
Event Posted:
Event Code:21
Event Id:0
Event Severity: Notice [5]
PostedTimestamp: 2022-04-25 16:04:58.083063
ExpirationTimestamp: 2022-04-25 16:04:58.083063
EventCodeDescription: Notifier
ProblemDescription: (Login failed!)
{
"_db":"VMart",
"_schema":"v_internal",
"_table":"dc_login_failures",
"_uuid":"f8b0278a-3282-4e1a-9c86-e0f3f042a971",
"authentication_method":"Reject",
"client_authentication_name":"default: Reject",
"client_hostname":"::1",
"client_label":"",
"client_os_user_name":"dbadmin",
"client_pid":523418,
"client_version":"",
"database_name":"dbadmin",
"effective_protocol":"3.8",
"node_name":"v_vmart_node0001",
"reason":"REJECT",
"requested_protocol":"3.8",
"ssl_client_fingerprint":"",
"ssl_client_subject":"",
"time":"2022-04-25 16:04:58.082568-05",
"user_name":"Bob"
}#012
DatabaseName: VMart
Hostname: vertica_host_01
For details on syslog notifiers, see Configuring reporting for syslog.
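SNS notifiers
The following statement is a minimal sketch of an SNS notifier based on the parameter descriptions above; the notifier name and sizes are hypothetical, and the target topic, region, and credentials are assumed to be supplied separately through the SNS configuration parameters:
=> CREATE NOTIFIER sns_notifier
    ACTION 'sns'
    MAXMEMORYSIZE '10M'
    CHECK COMMITTED;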
See also
11.24 - CREATE PROCEDURE (external)
Adds an external procedure to Vertica.
Enterprise Mode only
Adds an external procedure to Vertica. See External procedures for more information.
Syntax
CREATE PROCEDURE [ IF NOT EXISTS ]
[[database.]schema.]procedure( [ argument-list ] )
AS executable
LANGUAGE 'EXTERNAL'
USER OS-user
Parameters
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
This option cannot be used with OR REPLACE
.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
procedure
- Specifies a name for the external procedure, where
procedure-name
conforms to conventions described in Identifiers.
argument-list
- A comma-delimited list of procedure arguments, where each argument is specified as follows:
[ argname ] argtype
executable
- The name of the executable program in the procedures directory, a string.
OS-user
- The owner of the file, a string. The owner:
Privileges
Superuser
System security
-
The procedure file must be owned by the database administrator (OS account) or by a user in the same group as the administrator. The procedure file must also have the set UID attribute enabled, and allow read and execute permission for the group.
-
External procedures that you create with CREATE PROCEDURE (external) are always run with Linux dbadmin privileges. If a dbadmin or pseudosuperuser grants a non-dbadmin permission to run a procedure using GRANT (procedure), be aware that the non-dbadmin user runs the procedure with full Linux dbadmin privileges.
Examples
The following example shows how to create a procedure named helloplanet
for the procedure file helloplanet.sh
. This file accepts one VARCHAR argument.
Create the file:
#!/bin/bash
echo "hello planet argument: $1" >> /tmp/myprocedure.log
Create the procedure with the following SQL:
=> CREATE PROCEDURE helloplanet(arg1 varchar) AS 'helloplanet.sh' LANGUAGE 'external' USER 'dbadmin';
See also
11.25 - CREATE PROCEDURE (stored)
Creates a stored procedure.
Creates a stored procedure.
Syntax
CREATE [ OR REPLACE ] PROCEDURE [ IF NOT EXISTS ]
[[{namespace. | database. }]schema.]procedure( [ parameter-list ] )
[ LANGUAGE 'language-name' ]
[ SECURITY { DEFINER | INVOKER } ]
AS $$ source $$;
Parameters
OR REPLACE
- If a procedure with the same name already exists, replace it. Users and roles with privileges on the original procedure retain these privileges on the new procedure.
This option cannot be used with IF NOT EXISTS
.
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
This option cannot be used with OR REPLACE
.
{
database
|
namespace
}
- Name of the database or namespace that contains the procedure:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default
public
. If you specify the namespace or database name, you must provide the schema name, even if the schema is public
.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
procedure
- The name of the stored procedure, where procedure-name conforms to conventions described in Identifiers.
parameter-list
- A comma-delimited list of formal parameters, each specified as follows:
[ parameter-mode ] parameter-name parameter-type
-
parameter-name
: the name of the parameter.
-
parameter-type
: Any SQL data type, with the following exceptions:
-
DECIMAL
-
NUMERIC
-
NUMBER
-
MONEY
-
UUID
-
GEOGRAPHY
-
GEOMETRY
-
Complex types
language-name
- Specifies the language of the procedure source, one of the following (both options refer to PLvSQL; PLpgSQL is included to maintain compatibility with existing scripts):
- PLvSQL
- PLpgSQL
Default: PLvSQL
SECURITY { DEFINER | INVOKER }
- Determines whose privileges to use when the procedure is called, executing it as one of the following users:
- DEFINER: The user who defined the procedure.
- INVOKER: The user who calls the procedure.
A procedure with SECURITY DEFINER effectively executes the procedure as the defining user, so changes to the database appear to be performed by the procedure's definer rather than its caller. For an illustration of the clause, see the last example in Examples below.
Caution
Improper use of SECURITY DEFINER can lead to the
confused deputy problem and introduce vulnerabilities into your system like SQL injection.
For more information, see Executing stored procedures.
source
- The procedure source code. For details, see Scope and structure.
Privileges
Non-superuser: CREATE on the procedure's schema
Examples
For more complex examples, see Stored procedures: use cases and examples
The following procedure echoes its inputs as a result set:
=> CREATE PROCEDURE echo_int_varchar(INOUT x INT, INOUT y VARCHAR) LANGUAGE PLvSQL AS $$
BEGIN
RAISE NOTICE 'This procedure outputs a result set of its inputs:';
END
$$;
=> CALL echo_int_varchar(3, 'a string');
NOTICE 2005: This procedure outputs a result set of its inputs:
x | y
---+----------
3 | a string
(1 row)
The following example uses a set of procedures that convert a given Fahrenheit temperature to Celsius and Kelvin and return them all as a result set. The procedure f_to_c_and_k()
calls the helper procedures f_to_c()
and f_to_k()
to convert to Celsius and Kelvin, respectively. The f_to_k()
procedure uses the output of f_to_c()
for part of the conversion:
=> CREATE PROCEDURE f_to_c_and_k(IN f_temp DOUBLE PRECISION, OUT c_temp DOUBLE PRECISION, OUT k_temp DOUBLE PRECISION) AS $$
BEGIN
c_temp := CALL f_to_c(f_temp);
k_temp := CALL f_to_k(f_temp);
END;
$$;
=> CREATE PROCEDURE f_to_c(INOUT temp DOUBLE PRECISION) AS $$
BEGIN
temp := (temp - 32) * 5/9;
END;
$$;
=> CREATE PROCEDURE f_to_k(INOUT temp DOUBLE PRECISION) AS $$
BEGIN
temp := CALL f_to_c(temp);
temp := temp + 273.15;
END;
$$;
=> CALL f_to_c_and_k(80);
c_temp | k_temp
------------------+------------------
26.6666666666667 | 299.816666666667
(1 row)
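The following sketch shows where the SECURITY clause appears in a procedure definition; the procedure name and body are hypothetical:
=> CREATE PROCEDURE log_definer_note() LANGUAGE PLvSQL SECURITY DEFINER AS $$
BEGIN
    RAISE NOTICE 'Running with the privileges of the procedure definer';
END;
$$;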
See also
11.26 - CREATE PROFILE
Creates a profile that controls password requirements for users.
Creates a profile that controls password requirements for users.
Syntax
CREATE PROFILE profile-name LIMIT [
PASSWORD_LIFE_TIME setting
PASSWORD_MIN_LIFE_TIME setting
PASSWORD_GRACE_TIME setting
FAILED_LOGIN_ATTEMPTS setting
PASSWORD_LOCK_TIME setting
PASSWORD_REUSE_MAX setting
PASSWORD_REUSE_TIME setting
PASSWORD_MAX_LENGTH setting
PASSWORD_MIN_LENGTH setting
PASSWORD_MIN_LETTERS setting
PASSWORD_MIN_UPPERCASE_LETTERS setting
PASSWORD_MIN_LOWERCASE_LETTERS setting
PASSWORD_MIN_DIGITS setting
PASSWORD_MIN_SYMBOLS setting
PASSWORD_MIN_CHAR_CHANGE setting ]
Parameters
Note
All parameters that are not explicitly set in a new profile are set to default
, and inherit their settings from the default profile.
name
- The name of the profile to create, where name conforms to conventions described in Identifiers.
To modify the default profile, set name to default. For example:
ALTER PROFILE DEFAULT LIMIT PASSWORD_MIN_SYMBOLS 1;
PASSWORD_LIFE_TIME
- Set to an integer value, one of the following:
After your password's lifetime and grace period expire, you must change your password on your next login, if you have not done so already.
PASSWORD_MIN_LIFE_TIME
- Set to an integer value, one of the following:
PASSWORD_GRACE_TIME
- Set to an integer value, one of the following:
FAILED_LOGIN_ATTEMPTS
- Set to an integer value, one of the following:
PASSWORD_LOCK_TIME
- ≥ 1: The number of days (units configurable with PasswordLockTimeUnit) that a user's account is locked after the number of login attempts specified by FAILED_LOGIN_ATTEMPTS. The account is automatically unlocked when the lock time elapses.
- UNLIMITED: Account remains indefinitely inaccessible until a superuser manually unlocks it.
PASSWORD_REUSE_MAX
- Set to an integer value, one of the following:
PASSWORD_REUSE_TIME
- Set to an integer value, one of the following:
PASSWORD_MAX_LENGTH
- The maximum number of characters allowed in a password, one of the following:
- Integer between 8 and 512, inclusive
PASSWORD_MIN_LENGTH
- The minimum number of characters required in a password, one of the following:
PASSWORD_MIN_LETTERS
- Minimum number of letters (a-z and A-Z) that must be in a password, one of the following:
PASSWORD_MIN_UPPERCASE_LETTERS
- Minimum number of uppercase letters (A-Z) that must be in a password, one of the following:
PASSWORD_MIN_LOWERCASE_LETTERS
- Minimum number of lowercase letters (a-z) that must be in a password, one of the following:
PASSWORD_MIN_DIGITS
- Minimum number of digits (0-9) that must be in a password, one of the following:
PASSWORD_MIN_SYMBOLS
- Minimum number of symbols (printable non-letter and non-digit characters such as $, #, @) that must be in a password, one of the following:
PASSWORD_MIN_CHAR_CHANGE
- Minimum number of characters that must be different from the previous password:
Privileges
Superuser
Profile settings and client authentication
The following profile settings affect client authentication methods, such as LDAP or GSS:
-
FAILED_LOGIN_ATTEMPTS
-
PASSWORD_LOCK_TIME
All other profile settings are used only by Vertica to manage its passwords.
Examples
=> CREATE PROFILE sample_profile LIMIT PASSWORD_MAX_LENGTH 20;
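The following sketch (with hypothetical limit values) sets several password complexity and lockout limits in a single profile:
=> CREATE PROFILE strict_profile LIMIT
    PASSWORD_MIN_LENGTH 12
    PASSWORD_MIN_SYMBOLS 1
    FAILED_LOGIN_ATTEMPTS 5
    PASSWORD_LOCK_TIME 1;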
See also
11.27 - CREATE PROJECTION
Creates metadata for a projection in the Vertica catalog.
Creates metadata for a projection in the Vertica catalog. Vertica supports four types of projections:
-
Standard projection: Stores a collection of table data in a format that optimizes execution of certain queries on that table.
-
Live aggregate projection: Stores the grouped results of queries that invoke aggregate functions (such as SUM) on table columns.
-
Top-K projection: Stores the top k rows from partitions of selected rows.
-
UDTF projection: Stores newly-loaded data after it is transformed and/or aggregated by user-defined transformation functions (UDTFs).
Complex data types have additional restrictions when used within a projection:
-
Each projection must include at least one column that is a primitive type or native array.
-
An AS SELECT clause can use a complex-type column, but any other expression must be of a scalar type or native array.
-
The ORDER BY, PARTITION BY, and GROUP BY clauses cannot use complex types.
-
If a projection does not include an ORDER BY or segmentation clause, Vertica uses only the primitive columns from the select list to order or segment data.
-
Projection columns cannot be complex types returned from functions such as ARRAY_CAT.
-
TopK and UDTF projections do not support complex types.
11.27.1 - Encoding types
Vertica supports various encoding and compression types, specified by the following ENCODING parameter arguments:.
Vertica supports various encoding and compression types, specified by the following ENCODING
parameter arguments:
Note
Vertica supports the following encoding for numeric data types:
-
Precision ≤ 18: AUTO
, BLOCK_DICT
, BLOCKDICT_COMP
, COMMONDELTA_COMP
, DELTAVAL
, GCDDELTA
, and RLE
-
Precision > 18: AUTO
, BLOCK_DICT
, BLOCKDICT_COMP
, RLE
You can set encoding types on a projection column when you create the projection. You can also change the encoding of one or more projection columns for a given table with ALTER TABLE...ALTER COLUMN.
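For example, the following sketch (hypothetical table and projection names) applies RLE encoding to the sorted sensor_id column and ZSTD_COMP to the reading column when creating a projection:
=> CREATE TABLE sensor_readings (sensor_id INT, reading FLOAT);
=> CREATE PROJECTION sensor_readings_p (sensor_id ENCODING RLE, reading ENCODING ZSTD_COMP)
    AS SELECT sensor_id, reading FROM sensor_readings ORDER BY sensor_id;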
AUTO (default)
AUTO encoding is ideal for sorted, many-valued columns such as primary keys. It is also suitable for general purpose applications for which no other encoding or compression scheme is applicable. Therefore, it serves as the default if no encoding/compression is specified.
Default encoding type by column data type:
- BINARY/VARBINARY, BOOLEAN, CHAR/VARCHAR, FLOAT: Lempel-Ziv-Oberhumer-based (LZO) compression
- DATE/TIME/TIMESTAMP, INTEGER, INTERVAL: Compression scheme based on the delta between consecutive column values.
The CPU requirements for this type are relatively small. In the worst case, data might expand by eight percent (8%) for LZO and twenty percent (20%) for integer data.
BLOCK_DICT
For each block of storage, Vertica compiles distinct column values into a dictionary and then stores the dictionary and a list of indexes to represent the data block.
BLOCK_DICT is ideal for few-valued, unsorted columns where saving space is more important than encoding speed. Certain kinds of data, such as stock prices, are typically few-valued within a localized area after the data is sorted, such as by stock symbol and timestamp, and are good candidates for BLOCK_DICT. By contrast, long CHAR/VARCHAR columns are not good candidates for BLOCK_DICT encoding.
CHAR and VARCHAR columns that contain 0x00 or 0xFF characters should not be encoded with BLOCK_DICT. Also, BINARY/VARBINARY columns do not support BLOCK_DICT encoding.
BLOCK_DICT encoding requires significantly higher CPU usage than default encoding schemes. The maximum data expansion is eight percent (8%).
BLOCKDICT_COMP
This encoding type is similar to BLOCK_DICT except dictionary indexes are entropy coded. This encoding type requires significantly more CPU time to encode and decode and has a poorer worst-case performance. However, if the distribution of values is extremely skewed, using BLOCKDICT_COMP encoding can lead to space savings.
BZIP_COMP
BZIP_COMP encoding uses the bzip2 compression algorithm on the block contents. See bzip web site for more information. This algorithm results in higher compression than the automatic LZO and gzip encoding; however, it requires more CPU time to compress. This algorithm is best used on large string columns such as VARCHAR, VARBINARY, CHAR, and BINARY. Choose this encoding type when you are willing to trade slower load speeds for higher data compression.
COMMONDELTA_COMP
This compression scheme builds a dictionary of all deltas in the block and then stores indexes into the delta dictionary using entropy coding.
This scheme is ideal for sorted FLOAT and INTEGER-based (DATE/TIME/TIMESTAMP/INTERVAL) data columns with predictable sequences and only occasional sequence breaks, such as timestamps recorded at periodic intervals or primary keys. For example, the following sequence compresses well: 300, 600, 900, 1200, 1500, 600, 1200, 1800, 2400. The following sequence does not compress well: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55.
If delta distribution is excellent, columns can be stored in less than one bit per row. However, this scheme is very CPU intensive. If you use this scheme on data with arbitrary deltas, it can cause significant data expansion.
DELTARANGE_COMP
This compression scheme is primarily used for floating-point data; it stores each value as a delta from the previous one.
This scheme is ideal for many-valued FLOAT columns that are sorted or confined to a range. Do not use this scheme for unsorted columns that contain NULL values, as the storage cost for representing a NULL value is high. This scheme has a high cost for both compression and decompression.
To determine if DELTARANGE_COMP is suitable for a particular set of data, compare it to other schemes. Be sure to use the same sort order as the projection, and select sample data that will be stored consecutively in the database.
DELTAVAL
For INTEGER and DATE/TIME/TIMESTAMP/INTERVAL columns, data is recorded as a difference from the smallest value in the data block. This encoding has no effect on other data types.
DELTAVAL is best used for many-valued, unsorted integer or integer-based columns. CPU requirements for this encoding type are minimal, and data never expands.
GCDDELTA
For INTEGER and DATE/TIME/TIMESTAMP/INTERVAL columns, and NUMERIC columns with 18 or fewer digits, data is recorded as the difference from the smallest value in the data block divided by the greatest common divisor (GCD) of all entries in the block. This encoding has no effect on other data types.
ENCODING GCDDELTA is best used for many-valued, unsorted, integer or integer-based columns, when the values are a multiple of a common factor. For example, timestamps are stored internally in microseconds, so values that are precise only to the millisecond are all multiples of 1000. The CPU requirements for decoding GCDDELTA encoding are minimal, and the data never expands, but GCDDELTA may take more encoding time than DELTAVAL.
GZIP_COMP
This encoding type uses the gzip compression algorithm. See the gzip web site for more information. This algorithm results in better compression than the automatic LZO compression, but lower compression than BZIP_COMP. It requires more CPU time to compress than LZO, but less CPU time than BZIP_COMP. This algorithm is best used on large string columns such as VARCHAR, VARBINARY, CHAR, and BINARY. Use this encoding when you want better compression than LZO at a lower CPU cost than bzip2.
RLE
RLE (run length encoding) replaces sequences (runs) of identical values with a single pair that contains the value and number of occurrences. Therefore, it is best used for low cardinality columns that are present in the ORDER BY clause of a projection.
The Vertica execution engine processes RLE encoding run-by-run and the Vertica optimizer gives it preference. Use it only when run length is large, such as when low-cardinality columns are sorted.
Zstandard compression
Vertica supports three ZSTD compression types:
-
ZSTD_COMP
provides high compression ratios. This encoding type has a higher compression than gzip. Use this when you want a better compression than gzip. For general use cases, use this or the ZSTD_FAST_COMP
encoding type.
-
ZSTD_FAST_COMP
uses the fastest compression level that the zstd library provides. It is the fastest encoding type of the zstd library, but takes up more space than the other two encoding types. For general use cases, use this or the ZSTD_COMP
encoding type.
-
ZSTD_HIGH_COMP
offers the best compression in the zstd library. It is slower than the other two encoding types. Use this type when you need the best compression, with slower CPU time.
11.27.2 - GROUPED clause
Groups two or more columns into a single disk file.
Enterprise Mode only
Groups two or more columns into a single disk file. Doing so minimizes file I/O for the following tasks:
-
Read a large percentage of the columns in a table.
-
Perform single row look-ups.
-
Query against many small columns.
-
Frequently update data in these columns.
You can improve query performance by grouping columns that are always accessed together and are not used in predicates. Once columns are grouped, queries can no longer retrieve records for one column from disk independently of the others.
Note
RLE encoding is reduced when an RLE column is grouped with one or more non-RLE columns.
You can group columns in several ways:
-
Group some of the columns:
(a, GROUPED(b, c), d)
-
Group all of the columns:
(GROUPED(a, b, c, d))
-
Create multiple groupings in the same projection:
(GROUPED(a, b), GROUPED(c, d))
Note
Vertica performs dynamic column grouping. For example, to provide better read and write efficiency for small loads, Vertica ignores any projection-defined column grouping (or lack thereof) and groups all columns together by default.
Grouping columns
The following example shows how to group columns bid
and ask
. The stock
column is stored separately.
=> CREATE TABLE trades (stock CHAR(5), bid INT, ask INT);
=> CREATE PROJECTION tradeproj (stock ENCODING RLE,
GROUPED(bid ENCODING DELTAVAL, ask))
AS (SELECT * FROM trades) KSAFE 1;
The following example shows how to create a projection that uses expressions in the column definition. The projection contains two integer columns a
and b
, and a third column product_value
that stores the product of a
and b
:
=> CREATE TABLE values (a INT, b INT);
=> CREATE PROJECTION product (a, b, product_value) AS
SELECT a, b, a*b FROM values ORDER BY a KSAFE;
11.27.3 - Hash segmentation clause
Specifies how to segment projection data for distribution across all cluster nodes.
Specifies how to segment projection data for distribution across all cluster nodes. You can specify segmentation for a table and a projection. If a table definition specifies segmentation, Vertica uses it for that table's auto-projections.
It is strongly recommended that you use Vertica's built-in
HASH
function, which distributes data evenly across the cluster, and facilitates optimal query execution.
Syntax
SEGMENTED BY expression ALL NODES [ OFFSET offset ]
Parameters
SEGMENTED BY
expression
- A general SQL expression. Hash segmentation is the preferred method of segmentation. Vertica recommends using its built-in
HASH
function, whose arguments resolve to table columns. If you use an expression other than HASH
, Vertica issues a warning.
The segmentation expression should specify columns with a large number of unique data values and acceptable skew in their data distribution. In general, primary key columns that meet these criteria are good candidates for hash segmentation.
For details, see Expression Requirements below.
ALL NODES
- Automatically distributes data evenly across all nodes when the projection is created. Node ordering is fixed.
OFFSET
offset
- A zero-based offset that indicates on which node to start segmentation distribution.
This option is not valid for
CREATE TABLE
and
CREATE TEMPORARY TABLE
.
Important
If you create a projection for a table with the OFFSET
option, be sure to create enough copies of each projection segment to satisfy system K-safety; otherwise, Vertica regards the projection as unsafe and cannot use it to query the table.
You can ensure K-safety compliance when you create projections by combining OFFSET
and
KSAFE
options in the CREATE PROJECTION
statement. On executing this statement, Vertica automatically creates the necessary number of projection copies.
Expression requirements
A segmentation expression must specify table columns as they are defined in the source table. Projection column names are not supported.
The following restrictions apply to segmentation expressions:
-
All leaf expressions must be constants or column references to a column in the CREATE PROJECTION
's SELECT
list.
-
The expression must return the same value over the life of the database.
-
Aggregate functions are not allowed.
-
The expression must return non-negative INTEGER
values in the range
0 <= x < 2^63
, and values are generally distributed uniformly over that range.
Note
If the expression produces a value outside the expected range—for example, a negative value—no error occurs, and the row is added to the projection's first segment.
Examples
The following CREATE PROJECTION
statement creates projection public.employee_dimension_super
. It specifies to include all columns in table public.employee_dimension
. The hash segmentation clause invokes the Vertica HASH
function to segment projection data on the column employee_key
; it also includes the ALL NODES
clause, which specifies to distribute projection data evenly across all nodes in the cluster:
=> CREATE PROJECTION public.employee_dimension_super
AS SELECT * FROM public.employee_dimension
ORDER BY employee_key
SEGMENTED BY hash(employee_key) ALL NODES;
11.27.4 - Live aggregate projection
Stores the grouped results of queries that invoke aggregate functions (such as SUM) on table columns.
Stores the grouped results of queries that invoke aggregate functions (such as SUM) on table columns. For details, see Live aggregate projections.
Vertica does not regard live aggregate projections as superprojections, even those that include all table columns. For other requirements and restrictions, see Creating live aggregate projections.
Syntax
CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
{ projection-column | grouped-clause
[ ENCODING encoding-type ]
[ ACCESSRANK integer ] }[,...]
) ]
AS SELECT { table-column | expr-with-table-columns }[,...] FROM [[database.]schema.]table [ [AS] alias]
GROUP BY column-expr[,...]
[ KSAFE [ k-num ] ]
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.
projection
Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
projection-column
The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.
If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.
grouped-clause
- See GROUPED clause.
ENCODING
encoding-type
The column encoding type, by default set to AUTO.
ACCESSRANK
integer
Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.
AS SELECT
Specifies the table data to query:
{table-column | expr-with-table-columns } [ [AS] alias] }[,...]
You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.
If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.
GROUP BY
column-expr
[,...]
- One or more column expressions from the SELECT list. The first expression must be the first column expression in the SELECT list, the second expression must be the second column expression in the SELECT list, and so on.
Privileges
Non-superusers:
Examples
See Live aggregate projection example.
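The following minimal sketch (hypothetical clicks table) assumes the anchor table is segmented on the GROUP BY column so that the grouped data is co-located; see Creating live aggregate projections for the full requirements:
=> CREATE TABLE clicks (user_id INT, page_id INT, click_time TIMESTAMP)
    SEGMENTED BY HASH(page_id) ALL NODES;
=> CREATE PROJECTION clicks_agg AS
    SELECT page_id, COUNT(*) num_clicks
    FROM clicks GROUP BY page_id;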
11.27.5 - Standard projection
Stores a collection of table data in a format that optimizes execution of certain queries on that table.
Stores a collection of table data in a format that optimizes execution of certain queries on that table. For details, see Projections .
Syntax
CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
{ projection-column | grouped-clause
[ ENCODING encoding-type ]
[ ACCESSRANK integer ] }[,...]
) ]
AS SELECT { * | { MATCH_COLUMNS('pattern') | expression [ [AS] alias ] }[,...] }
FROM [[database.]schema.]table [ [AS] alias]
[ ORDER BY column[,...] ]
[ SEGMENTED BY expression ALL NODES [ OFFSET offset ] | UNSEGMENTED ALL NODES ]
[ KSAFE [ k-num ]
[ ON PARTITION RANGE BETWEEN min-val AND max-val ] ]
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.
projection
Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
projection-column
The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.
If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.
grouped-clause
- See GROUPED clause.
ENCODING
encoding-type
The column encoding type, by default set to AUTO.
ACCESSRANK
integer
Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.
AS SELECT
- Columns or column expressions to select from the specified table:
-
*
(asterisk): Returns all columns in the queried tables.
-
MATCH_COLUMNS('pattern'): Returns the names of all columns in the queried anchor table that match pattern
.
-
expression
[[AS]
alias
]
: Resolves to column data from the queried anchor table. You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement—for example, in the ORDER BY or segmentation clause.
If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.
ORDER BY
column
- Columns from the SELECT list on which to sort the projection. The ORDER BY clause can only be set to ASC (the default). Vertica always stores projection data in ascending sort order.
If you order by a column with a collection data type (ARRAY or SET), queries that use that column in an ORDER BY clause perform the sort again. This is because projections and queries perform the ordering differently.
If you omit the ORDER BY clause, Vertica uses the SELECT list to sort the projection.
SEGMENTED BY
expression
ALL NODES [ OFFSET
offset
] | UNSEGMENTED ALL NODES
-
How to distribute projection data:
If neither the anchor table nor the projection specifies segmentation, Vertica uses hash segmentation using all columns:
SEGMENTED BY HASH(column[,...]) ALL NODES OFFSET 0;
Tip
Vertica recommends segmenting large tables.
KSAFE
k-num
Specifies K-safety for the projection. The value must be equal to or greater than database K-safety. Vertica ignores this option if set for unsegmented projections.
If you omit this clause, Vertica uses database K-safety.
For general information, see K-safety in an Enterprise Mode database.
ON PARTITION RANGE BETWEEN
min-val
AND
max-val
Limits projection data to a range of partition keys. The minimum value must be less than or equal to the maximum value.
Values can be NULL. A null minimum value indicates no lower bound and a null maximum value indicates no upper bound. If both are NULL, this statement drops the projection endpoints, producing a regular projection instead of a range projection.
For other requirements and usage details, see Partition range projections.
Privileges
Non-superusers:
Examples
See:
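In addition to the linked examples, the following minimal sketch (hypothetical table and projection names) creates a standard projection that sorts and segments order data by customer:
=> CREATE TABLE public.orders (order_id INT, customer_id INT, order_date DATE, amount NUMERIC(10,2));
=> CREATE PROJECTION public.orders_by_customer AS
    SELECT customer_id, order_date, amount FROM public.orders
    ORDER BY customer_id, order_date
    SEGMENTED BY HASH(customer_id) ALL NODES;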
11.27.6 - Top-k projection
Stores the top k rows from partitions of selected rows.
Stores the top k rows from partitions of selected rows. For details, see Top-k projections.
Vertica does not regard Top-K projections as superprojections, even those that include all table columns. For other requirements and restrictions, see Creating top-k projections.
Syntax
CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
{ projection-column | grouped-clause
[ ENCODING encoding-type ]
[ ACCESSRANK integer ] }[,...]
) ]
AS SELECT { table-column | expr-with-table-columns }[,...] FROM [[database.]schema.]table [ [AS] alias]
LIMIT num-rows OVER ( window-partition-clause window-order-clause )
[ KSAFE [ k-num ] ]
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.
projection
Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
projection-column
The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.
If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.
grouped-clause
- See GROUPED clause.
ENCODING
encoding-type
The column encoding type, by default set to AUTO.
ACCESSRANK
integer
Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.
AS SELECT
Specifies the table data to query:
{table-column | expr-with-table-columns } [ [AS] alias] }[,...]
You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.
If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.
LIMIT
num-rows
- The number of rows to return from the specified partition.
OVER (
window-partition-clause
window-order-clause
)
- Specifies window partitioning by one or more comma-delimited column expressions from the SELECT list. The first partition expression must be the first SELECT list item, the second partition expression the second SELECT list item, and so on.
The order clause is required, and specifies the order in which the top k
rows are returned. The default is ascending (ASC) order. All column expressions must be from the SELECT list, where the first window order expression must be the first SELECT list item not specified in the window partition clause.
Top-K projections support ORDER BY NULLS FIRST/LAST.
Privileges
Non-superusers:
Examples
See Top-k projection examples.
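The following minimal sketch (hypothetical meter_readings table) keeps the five most recent readings per meter; see Top-k projection examples and Creating top-k projections for the full requirements:
=> CREATE TABLE meter_readings (meter_id INT, reading_time TIMESTAMP, reading_value FLOAT)
    SEGMENTED BY HASH(meter_id) ALL NODES;
=> CREATE PROJECTION readings_topk (meter_id, recent_time, recent_value) AS
    SELECT meter_id, reading_time, reading_value FROM meter_readings
    LIMIT 5 OVER (PARTITION BY meter_id ORDER BY reading_time DESC);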
11.27.7 - UDTF projection
Stores newly-loaded data after it is transformed and/or aggregated by user-defined transformation functions (UDTFs).
Stores newly-loaded data after it is transformed and/or aggregated by user-defined transformation functions (UDTFs). For details and examples, see Pre-aggregating UDTF results.
Syntax
CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
{ projection-column | grouped-clause
[ ENCODING encoding-type ]
[ ACCESSRANK integer ] }[,...]
) ]
AS { batch-query FROM { prepass-query sq-ref | table [[AS] alias] }
| prepass-query }
batch-query:
SELECT { table-column | expr-with-table-columns }[,...], batch-udtf(batch-args)
OVER (PARTITION BATCH BY partition-column-expr[,...])
[ AS (output-columns) ]
prepass-query:
SELECT { table-column | expr-with-table-columns }[,...], prepass-udtf(prepass-args)
OVER (PARTITION PREPASS BY partition-column-expr[,...])
[ AS (output-columns) ] FROM table
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.
projection
Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
projection-column
The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.
If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.
grouped-clause
- See GROUPED clause.
ENCODING
encoding-type
The column encoding type, by default set to AUTO.
ACCESSRANK
integer
Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.
SELECT
Specifies the table data to query:
{table-column | expr-with-table-columns } [ [AS] alias] }[,...]
You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.
If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.
batch-udtf
(
batch-args
)
- The batch UDTF to invoke each time the following events occur:
Important
If the projection definition includes a pre-pass subquery, batch-args
must exactly match the pre-pass UDTF output columns, in name and order.
prepass-udtf
(
prepass-args
)
- The pre-pass UDTF to invoke on each load operation such as COPY or INSERT.
If specified in a subquery, the pre-pass UDTF returns transformed data to the batch query for further processing. Otherwise, the pre-pass query results are added to projection data storage.
OVER (PARTITION { BATCH | PREPASS } BY
partition-column-expr
[,...])
- Specifies the UDTF type and how to partition the data it returns:
In both cases, the OVER clause specifies partitioning with one or more column expressions from the SELECT list. The first expression is the first column expression in the SELECT list, the second expression is the second column expression in the SELECT list, and so on.
Note
The projection is implicitly segmented and ordered on PARTITION BY columns.
AS
output-columns
- Optionally names columns that are returned by the UDTF.
If a pre-pass subquery omits this clause, the outer batch query UDTF arguments (batch-args
) must reference the column names as they are defined in the pre-pass UDTF.
table
[[AS]
alias
]
- The projection's anchor table, optionally qualified by an alias.
sq-results
- Subquery result set that is returned to the outer batch UDTF.
Privileges
Non-superusers:
Examples
See Pre-aggregating UDTF results.
11.27.8 - Unsegmented clause
Specifies to distribute identical copies of table or projection data on all nodes across the cluster.
Specifies to distribute identical copies of table or projection data on all nodes across the cluster. Use this clause to facilitate distributed query execution on tables and projections that are too small to benefit from segmentation.
Vertica uses the same name to identify all instances of an unsegmented projection. For more information about projection name conventions, see Projection naming.
Syntax
UNSEGMENTED ALL NODES
Examples
This example creates an unsegmented projection for table store.store_dimension
:
=> CREATE PROJECTION store.store_dimension_proj (storekey, name, city, state)
AS SELECT store_key, store_name, store_city, store_state
FROM store.store_dimension
UNSEGMENTED ALL NODES;
CREATE PROJECTION
=> SELECT anchor_table_name anchor_table, projection_name, node_name
FROM PROJECTIONS WHERE projection_basename='store_dimension_proj';
anchor_table | projection_name | node_name
-----------------+----------------------+------------------
store_dimension | store_dimension_proj | v_vmart_node0001
store_dimension | store_dimension_proj | v_vmart_node0002
store_dimension | store_dimension_proj | v_vmart_node0003
(3 rows)
11.28 - CREATE RESOURCE POOL
Creates a user-defined resource pool.
Creates a user-defined resource pool.
Syntax
CREATE RESOURCE POOL pool-name [ FOR subcluster ] [ parameter-name setting ]...
Arguments
pool-name
- Name of the resource pool. If you specify a resource pool name with uppercase letters, Vertica converts them to lowercase letters.
The following naming requirements apply:
-
Built-in pool names cannot be used for user-defined pools.
-
All user-defined global resource pools must have unique names.
-
Names of global and subcluster-assigned resource pools cannot be the same. However, the same name can be shared by resource pools of different subclusters.
FOR
subcluster
Eon Mode only, the subcluster to associate with this resource pool, where subcluster
is one of the following:
SUBCLUSTER
subcluster-name
: Resource pool for an existing subcluster. You cannot be connected to this subcluster; otherwise, Vertica returns an error.
CURRENT SUBCLUSTER
: Resource pool for the subcluster that you are connected to.
If omitted, the resource pool is created globally.
parameter-name setting
- A resource pool parameter and its initial value. If you omit this argument, Vertica sets this resource pool's parameters to their default values (see Parameters).
Parameters
Default values specified here pertain only to user-defined resource pools. For built-in pool default values, see Built-in resource pools configuration, or query system table RESOURCE_POOL_DEFAULTS.
-
CASCADE TO
Secondary resource pool for executing queries that exceed the
RUNTIMECAP
setting of their assigned resource pool:
CASCADE TO secondary-pool
-
CPUAFFINITYMODE
Specifies whether the resource pool has exclusive or shared use of the CPUs specified in
CPUAFFINITYSET
:
CPUAFFINITYMODE { SHARED | EXCLUSIVE | ANY }
SHARED
: Queries that run in this resource pool share its CPUAFFINITYSET
CPUs with other Vertica resource pools.
EXCLUSIVE
: Dedicates CPUAFFINITYSET
CPUs to this resource pool only, and excludes other Vertica resource pools. If CPUAFFINITYSET
is set as a percentage, then that percentage of CPU resources available to Vertica is assigned solely for this resource pool.
ANY
: Queries in this resource pool can run on any CPU, invalid if CPUAFFINITYSET
designates CPU resources.
Default: ANY
-
CPUAFFINITYSET
CPUs available to this resource pool. All cluster nodes must have the same number of CPUs. The CPU resources assigned to this set are unavailable to general resource pools.
CPUAFFINITYSET {
'cpu-index[,...]'
| 'cpu-indexi-cpu-indexn'
| 'integer%'
| NONE
}
cpu-index
[,...]
: Dedicates one or more comma-delimited CPUs to this resource pool.
cpu-indexi-cpu-indexn
: Dedicates a range of contiguous CPU indexes i
through n
to this resource pool.
integer
%
: Percentage of all available CPUs to use for this resource pool. Vertica rounds this percentage down to include whole CPU units.
NONE
(empty string): No affinity set is assigned to this resource pool. Queries associated with this pool are executed on any CPU.
Default: NONE
Important
CPUAFFINITYSET
and CPUAFFINITYMODE
must be set together in the same statement.
-
EXECUTIONPARALLELISM
Number of threads used to process any single query issued in this resource pool.
EXECUTIONPARALLELISM { limit | AUTO }
limit
: An integer value between 1 and the number of cores. Setting this parameter to a reduced value increases throughput of short queries issued in the resource pool, especially if queries are executed concurrently.
AUTO
or 0
: Vertica calculates the setting from the number of cores, available memory, and amount of data in the system. Unless memory is limited, or the amount of data is very small, Vertica sets this parameter to the number of cores on the node.
Default: AUTO
-
MAXCONCURRENCY
Maximum number of concurrent execution slots available to the resource pool across the cluster:
MAXCONCURRENCY { integer | NONE }
NONE
(empty string): Unlimited number of concurrent execution slots.
Default: NONE
-
MAXMEMORYSIZE
Maximum size per node the resource pool can grow by borrowing memory from the
GENERAL
pool:
MAXMEMORYSIZE { 'integer%' | 'integer{K|M|G|T}' | NONE }
integer
%
: Percentage of total memory
integer
{K|M|G|T}
: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
NONE
(empty string): Unlimited, resource pool can borrow any amount of available memory from the GENERAL
pool.
Default: NONE
MAXQUERYMEMORYSIZE
Maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, Vertica stops execution and returns an error.
Set this parameter as follows:
MAXQUERYMEMORYSIZE {
'integer%'
| 'integer{K|M|G|T}'
| NONE
}
integer
%
: Percentage of
MAXMEMORYSIZE
for this resource pool.
integer
{K|M|G|T}
: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of
MAXMEMORYSIZE
.
NONE
(empty string): Unlimited; resource pool can borrow any amount of available memory from the GENERAL pool, within the limits set by
MAXMEMORYSIZE
.
Default: NONE
-
MEMORYSIZE
Total per-node memory available to the Vertica resource manager that is allocated to this resource pool:
MEMORYSIZE {
'integer%'
| 'integer{K|M|G|T}'
}
integer
%
: Percentage of total memory
integer
{K|M|G|T}
: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
Default: 0%. No memory allocated, the resource pool borrows memory from the
GENERAL
pool.
-
PLANNEDCONCURRENCY
Preferred number of queries to execute concurrently in the resource pool. This setting applies to the entire cluster:
PLANNEDCONCURRENCY { num-queries | AUTO }
-
num-queries
: Integer value ≥ 1, the preferred number of queries to execute concurrently in the resource pool. When possible, query resource budgets are limited to allow this level of concurrent execution.
-
AUTO
: Value is calculated automatically at query runtime. Vertica sets this parameter to the lower of these two calculations, but never less than 4:
-
Number of logical cores
-
Memory divided by 2GB
If the number of logical cores on each node is different, AUTO
is calculated differently for each node. Distributed queries run as if with the minimal effective planned concurrency. Single-node queries run with the planned concurrency of the initiator.
Default: AUTO
Tip
Change this parameter only after evaluating performance over a period of time.
-
PRIORITY
Priority of queries in this resource pool when they compete for resources in the
GENERAL
pool:
PRIORITY { integer | HOLD }
-
integer
: Negative or positive integer value, where higher numbers denote higher priority:
-
User-defined resource pool: -100
to 100
-
Built-in resource pools
SYSQUERY
,
RECOVERY
, and
TM
: -110
to 110
-
HOLD
: Sets priority to -999
. Queries in this resource pool are queued until
QUEUETIMEOUT
is reached.
Default: 0
-
QUEUETIMEOUT
Maximum time a request can wait for pool resources before it is rejected, not more than one year:
QUEUETIMEOUT { integer | 'interval' | 'NONE' }
-
integer
: Maximum wait time in seconds
-
interval: Maximum wait time expressed in the following format:
num year num months num [days] HH:MM:SS.ms
-
NONE
(empty string): No maximum wait time, request can be queued indefinitely, up to one year.
If the value that you specify resolves to more than one year, Vertica returns with a warning and sets the parameter to 365 days:
=> ALTER RESOURCE POOL user_0 QUEUETIMEOUT '11 months 50 days 08:32';
WARNING 5693: Using 1 year for QUEUETIMEOUT
ALTER RESOURCE POOL
=> SELECT QUEUETIMEOUT FROM resource_pools WHERE name = 'user_0';
QUEUETIMEOUT
--------------
365
(1 row)
Default: 00:05 (5 minutes)
RUNTIMECAP
- Maximum execution time allowed to queries in this resource pool, not more than one year; otherwise, Vertica returns an error. If a query exceeds this setting, it tries to cascade to a secondary pool:
RUNTIMECAP { 'interval' | NONE }
- interval: Maximum execution time expressed in the following format:
  num year num months num [days] HH:MM:SS.ms
- NONE (empty string): No time limit on queries running in this resource pool.
If the user or session also has a RUNTIMECAP, the shorter limit applies.
RUNTIMEPRIORITY
- Determines how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool:
RUNTIMEPRIORITY { HIGH | MEDIUM | LOW }
Default: MEDIUM
RUNTIMEPRIORITYTHRESHOLD
- Maximum time (in seconds) in which query processing must complete before the resource manager assigns the resource pool's RUNTIMEPRIORITY to it. All queries begin execution with a priority of HIGH.
RUNTIMEPRIORITYTHRESHOLD seconds
Default: 2
SINGLEINITIATOR
Set to false for backward compatibility. Do not change this setting.
Privileges
Superuser
Examples
This example shows how to create a resource pool with a MEMORYSIZE of 1800 MB.
=> CREATE RESOURCE POOL ceo_pool MEMORYSIZE '1800M' PRIORITY 10;
CREATE RESOURCE POOL
Assuming the CEO report user already exists, associate this user with the preceding resource pool using the ALTER USER statement.
=> GRANT USAGE ON RESOURCE POOL ceo_pool to ceo_user;
GRANT PRIVILEGE
=> ALTER USER ceo_user RESOURCE POOL ceo_pool;
ALTER USER
Issue the following command to confirm that the ceo_user is associated with the ceo_pool:
=> SELECT * FROM users WHERE user_name ='ceo_user';
-[ RECORD 1 ]-----+--------------------------------------------------
user_id | 45035996273733402
user_name | ceo_user
is_super_user | f
profile_name | default
is_locked | f
lock_time |
resource_pool | ceo_pool
memory_cap_kb | unlimited
temp_space_cap_kb | unlimited
run_time_cap | unlimited
all_roles |
default_roles |
search_path | "$user", public, v_catalog, v_monitor, v_internal
This example shows how to create and designate secondary resource pools.
=> CREATE RESOURCE POOL rp3 RUNTIMECAP '5 minutes';
=> CREATE RESOURCE POOL rp2 RUNTIMECAP '3 minutes' CASCADE TO rp3;
=> CREATE RESOURCE POOL rp1 RUNTIMECAP '1 minute' CASCADE TO rp2;
=> SET SESSION RESOURCE_POOL = rp1;
This Eon Mode example confirms the current subcluster name, then creates a resource pool for the current subcluster:
=> SELECT CURRENT_SUBCLUSTER_NAME();
CURRENT_SUBCLUSTER_NAME
-------------------------
analytics_1
(1 row)
=> CREATE RESOURCE POOL dashboard FOR SUBCLUSTER analytics_1;
CREATE RESOURCE POOL
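The following sketch combines several of the parameters described above in a single pool definition; the pool name and all values are illustrative only and should be adjusted to your own workload:
=> CREATE RESOURCE POOL adhoc_pool
     MAXMEMORYSIZE '20%'
     MAXQUERYMEMORYSIZE '10%'
     PLANNEDCONCURRENCY 8
     QUEUETIMEOUT '00:10'
     RUNTIMECAP '30 minutes';
CREATE RESOURCE POOL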
See also
11.28.1 - Built-in pools
Vertica is preconfigured with built-in pools for various system tasks.
Vertica is preconfigured with built-in pools for various system tasks:
For details on resource pool settings, see ALTER RESOURCE POOL.
GENERAL
Catch-all pool used to answer requests that have no specific resource pool associated with them. Any memory left over after memory has been allocated to all other pools is automatically allocated to the GENERAL pool. The MEMORYSIZE parameter of the GENERAL pool is undefined (variable); however, the GENERAL pool must be at least 1GB in size and cannot be smaller than 25% of the memory in the system.
The MAXMEMORYSIZE parameter of the GENERAL pool has special meaning; when set as a % value it represents the percent of total physical RAM on the machine that the Resource manager can use for queries. By default, it is set to 95%. MAXMEMORYSIZE governs the total amount of RAM that the Resource Manager can use for queries, regardless of whether it is set to a percent or to a specific value (for example, '10GB').
User-defined pools can borrow memory from the GENERAL pool to satisfy requests that need extra memory until the MAXMEMORYSIZE parameter of that pool is reached. If the pool is configured to have MEMORYSIZE equal to MAXMEMORYSIZE, it cannot borrow any memory from the GENERAL pool. When multiple pools request memory from the GENERAL pool, they are granted access to general pool memory according to their priority setting. In this manner, the GENERAL pool provides some elasticity to account for point-in-time deviations from normal usage of individual resource pools.
Vertica recommends reducing the GENERAL pool MAXMEMORYSIZE if your catalog uses over 5 percent of overall memory. You can calculate what percentage of GENERAL pool memory the catalog uses as follows:
=> WITH memory_use_metadata AS (SELECT node_name, memory_size_kb FROM resource_pool_status WHERE pool_name='metadata'),
memory_use_general AS (SELECT node_name, memory_size_kb FROM resource_pool_status WHERE pool_name='general')
SELECT m.node_name, ((m.memory_size_kb/g.memory_size_kb) * 100)::NUMERIC(4,2) pct_catalog_usage
FROM memory_use_metadata m JOIN memory_use_general g ON m.node_name = g.node_name;
node_name | pct_catalog_usage
------------------+-------------------
v_vmart_node0001 | 0.41
v_vmart_node0002 | 0.37
v_vmart_node0003 | 0.36
(3 rows)
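If the catalog percentage is high, you might lower the GENERAL pool ceiling with a statement like the following; the 90% value is illustrative only:
=> ALTER RESOURCE POOL general MAXMEMORYSIZE '90%';
ALTER RESOURCE POOL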
BLOBDATA
Controls resource usage for in-memory blobs. In-memory blobs are objects used by a number of the machine learning SQL functions. You should adjust this pool if you plan on processing large machine learning workloads. For information about tuning the pool, see Tuning for machine learning.
If a query using the BLOBDATA pool exceeds its query planning budget, then it spills to disk. For more information about tuning your query budget, see Query budgeting.
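For example, a hypothetical adjustment for a cluster that runs large machine learning jobs might raise the pool's memory ceiling; the value shown is illustrative only:
=> ALTER RESOURCE POOL blobdata MAXMEMORYSIZE '10%';
ALTER RESOURCE POOL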
DBD
Controls resource usage for Database Designer processing. Use of this pool is enabled by configuration parameter DBDUseOnlyDesignerResourcePool, by default set to false.
By default, QUEUETIMEOUT is set to 0 for this pool. When resources are under pressure, this setting causes the DBD to time out immediately, and not be queued to run later. Database Designer then requests the user to run the designer later, when resources are more available.
Important
Do not change QUEUETIMEOUT or any DBD resource pool parameters.
JVM
Controls Java Virtual Machine resources used by Java User Defined Extensions. When a Java UDx starts the JVM, it draws resources from those specified in the JVM resource pool. Vertica does not reserve memory in advance for the JVM pool. When needed, the pool can expand to 10% of physical memory or 2 GB of memory, whichever is smaller. If you are buffering large amounts of data, you may need to increase the size of the JVM resource pool.
You can adjust the size of your JVM resource pool by changing its configuration settings. Unlike other resource pools, the JVM resource pool does not release resources until a session is closed.
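For example, one possible adjustment raises the JVM pool ceiling to a fixed size; the value shown is illustrative only:
=> ALTER RESOURCE POOL jvm MAXMEMORYSIZE '4G';
ALTER RESOURCE POOL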
METADATA
Tracks memory allocated for catalog data and storage data structures. This pool increases in size as Vertica metadata consumes additional resources. Memory assigned to the METADATA pool is subtracted from the GENERAL pool, enabling the Vertica resource manager to make more effective use of available resources. If the METADATA resource pool reaches 75% of the GENERAL pool, Vertica stops updating METADATA memory size and displays a warning message in vertica.log. You can enable or disable the METADATA pool with the configuration parameter EnableMetadataMemoryTracking.
If you created a "dummy" or "swap" resource pool to protect resources for use by your operating system, you can replace that pool with the METADATA pool.
Users cannot change the parameters of the METADATA resource pool.
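As a sketch, disabling the pool's memory tracking at the database level might look like the following; set the parameter back to 1 to re-enable it:
=> ALTER DATABASE DEFAULT SET PARAMETER EnableMetadataMemoryTracking = 0;
ALTER DATABASE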
RECOVERY
Used by queries issued when recovering another node of the database. The MAXCONCURRENCY parameter is used to determine how many concurrent recovery threads to use. You can use the PLANNEDCONCURRENCY parameter (by default, set to twice the MAXCONCURRENCY) to tune how to apportion memory to recovery queries.
See Tuning for recovery.
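For instance, a hypothetical tuning statement might look like the following; the values are illustrative only, so see the linked topic for guidance:
=> ALTER RESOURCE POOL recovery MAXCONCURRENCY 2 PLANNEDCONCURRENCY 4;
ALTER RESOURCE POOL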
REFRESH
Used by queries issued by PROJECTION_REFRESHES operations. Refresh does not currently use multiple concurrent threads; thus, changes to the MAXCONCURRENCY values have no effect.
See Scenario: Tuning for Refresh.
SYSQUERY
Runs queries against all system monitoring and catalog tables. The SYSQUERY pool reserves resources for system table queries so that they are never blocked by contention for available resources.
TM
The Tuple Mover (TM) pool. You can set the MAXCONCURRENCY parameter for the TM pool to allow concurrent TM operations.
See Tuning tuple mover pool settings.
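For example, a minimal sketch that allows more concurrent Tuple Mover operations; the value 5 is illustrative only:
=> ALTER RESOURCE POOL tm MAXCONCURRENCY 5;
ALTER RESOURCE POOL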
11.28.2 - Built-in resource pools configuration
To view the current and default configuration for built-in resource pools, query the system tables RESOURCE_POOLS and RESOURCE_POOL_DEFAULTS, respectively.
To view the current and default configuration for built-in resource pools, query the system tables RESOURCE_POOLS and RESOURCE_POOL_DEFAULTS, respectively. The sections below provide this information, and also indicate which built-in pool parameters can be modified with ALTER RESOURCE POOL:
GENERAL
Important
Changes to GENERAL resource pool parameters take effect only when the database restarts.
Parameter | Settings
MEMORYSIZE | Empty / cannot be set
MAXMEMORYSIZE | The maximum memory to use for all resource pools, one of the following:
  MAXMEMORYSIZE { 'integer%' | 'integer{K|M|G|T}' }
  - integer%: Percentage of total system RAM, must be ≥ 25%. Caution: setting this parameter to 100% generates a warning of potential swapping.
  - integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, must be ≥ 1GB
  For example, if your node has 64GB of memory, setting MAXMEMORYSIZE to 50% allocates half of available memory. Thus, the maximum amount of memory available to all resource pools is 32GB.
  Default: 95%
MAXQUERYMEMORYSIZE | The maximum amount of memory allocated by this pool to process any query:
  MAXQUERYMEMORYSIZE { 'integer%' | 'integer{K|M|G|T}' }
  - integer%: Percentage of MAXMEMORYSIZE for this pool.
  - integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
EXECUTIONPARALLELISM | Default: AUTO
PRIORITY | Default: 0
RUNTIMEPRIORITY | Default: Medium
RUNTIMEPRIORITYTHRESHOLD | Default: 2
QUEUETIMEOUT | Default: 00:05 (minutes)
RUNTIMECAP | Prevents runaway queries by setting the maximum time a query in the pool can execute. If a query exceeds this setting, it tries to cascade to a secondary pool:
  RUNTIMECAP { 'interval' | NONE }
PLANNEDCONCURRENCY | The number of concurrent queries you expect to run against the resource pool, an integer ≥ 4. If set to AUTO (default), Vertica automatically sets PLANNEDCONCURRENCY at query runtime, choosing the lower of these two values:
  - Number of cores
  - Memory/2GB
  Important: In systems with a large number of cores, the default AUTO setting of PLANNEDCONCURRENCY is liable to be too low. In this case, set the parameter to the actual number of cores:
  ALTER RESOURCE POOL general PLANNEDCONCURRENCY #cores;
  Default: AUTO
MAXCONCURRENCY | Default: Empty. Caution: must be set ≥ 1, otherwise Vertica generates a warning that system queries might be unable to execute.
SINGLEINITIATOR | Default: False. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Default: Empty
CPUAFFINITYMODE | Default: ANY
CASCADETO | Default: Empty
BLOBDATA
Parameter | Default Setting
MEMORYSIZE | 0%
MAXMEMORYSIZE | 10
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM |
PRIORITY |
RUNTIMEPRIORITY |
RUNTIMEPRIORITYTHRESHOLD |
QUEUETIMEOUT |
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | AUTO
MAXCONCURRENCY | Empty / cannot be set
SINGLEINITIATOR |
CPUAFFINITYSET |
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
DBD
Parameter | Default Setting
MEMORYSIZE | 0%
MAXMEMORYSIZE | Unlimited
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | 0
RUNTIMEPRIORITY | MEDIUM
RUNTIMEPRIORITYTHRESHOLD | 0
QUEUETIMEOUT | 0
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | AUTO
MAXCONCURRENCY | NONE
SINGLEINITIATOR | True. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
JVM
Parameter | Default Setting
MEMORYSIZE | 0%
MAXMEMORYSIZE | 10% of memory or 2 GB, whichever is smaller
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | 0
RUNTIMEPRIORITY | MEDIUM
RUNTIMEPRIORITYTHRESHOLD | 2
QUEUETIMEOUT | 00:05 (minutes)
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | AUTO
MAXCONCURRENCY | Empty / cannot be set
SINGLEINITIATOR | FALSE. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
METADATA
Parameter | Default Setting
MEMORYSIZE | 0%
MAXMEMORYSIZE | Unlimited
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | 108
RUNTIMEPRIORITY | HIGH
RUNTIMEPRIORITYTHRESHOLD | 0
QUEUETIMEOUT | 0
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | AUTO
MAXCONCURRENCY | 0
SINGLEINITIATOR | FALSE. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
RECOVERY
Parameter | Default Setting
MEMORYSIZE | 0%
MAXMEMORYSIZE | Maximum size per node the resource pool can grow by borrowing memory from the GENERAL pool:
  MAXMEMORYSIZE { 'integer%' | 'integer{K|M|G|T}' | NONE }
  - integer%: Percentage of total memory
  - integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
  - NONE (empty string): Unlimited; the resource pool can borrow any amount of available memory from the GENERAL pool.
  Default: NONE
  Caution: setting must resolve to ≥ 25%. Otherwise, Vertica generates a warning that system queries might be unable to execute.
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | One of the following:
  - Enterprise Mode: 107
  - Eon Mode: 110
  Caution: change these settings only under guidance from Vertica technical support.
RUNTIMEPRIORITY | MEDIUM
RUNTIMEPRIORITYTHRESHOLD | 60
QUEUETIMEOUT | 00:05 (minutes)
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | AUTO
MAXCONCURRENCY | By default, set as follows: (numberCores / 2) + 1. Thus, given a system with four cores, MAXCONCURRENCY has a default setting of 3. Note: 0 or NONE (unlimited) are invalid settings.
SINGLEINITIATOR | True. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
REFRESH
Parameter | Default Setting
MEMORYSIZE | 0%
MAXMEMORYSIZE | NONE (unlimited)
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | -10
RUNTIMEPRIORITY | MEDIUM
RUNTIMEPRIORITYTHRESHOLD | 60
QUEUETIMEOUT | 00:05 (minutes)
RUNTIMECAP | NONE (unlimited)
PLANNEDCONCURRENCY | AUTO (4)
MAXCONCURRENCY | 3. This parameter must be set ≥ 1.
SINGLEINITIATOR | True. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
SYSQUERY
Parameter | Default Setting
MEMORYSIZE | 1G. Caution: setting must resolve to ≥ 20M, otherwise Vertica generates a warning that system queries might be unable to execute, and diagnosing problems might be difficult.
MAXMEMORYSIZE | Empty (unlimited)
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | 110
RUNTIMEPRIORITY | HIGH
RUNTIMEPRIORITYTHRESHOLD | 0
QUEUETIMEOUT | 00:05 (minutes)
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | AUTO
MAXCONCURRENCY | Empty. Caution: must be set ≥ 1, otherwise Vertica generates a warning that system queries might be unable to execute.
SINGLEINITIATOR | False. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE |
CASCADETO |
TM
Parameter | Default Setting
MEMORYSIZE | 5% (of the GENERAL pool's MAXMEMORYSIZE setting) + 2GB
  Important: you can estimate the optimal amount of RAM for the TM resource pool as follows:
  GbRAM / (6 * #table-cols) > 10
  where #table-cols is the number of columns in the largest database table. For example, given a 100-column table, MEMORYSIZE needs at least 6GB of RAM:
  6144MB / (6 * 100) = 10.24
MAXMEMORYSIZE | Unlimited
MAXQUERYMEMORYSIZE | Empty / cannot be set
EXECUTIONPARALLELISM | AUTO
PRIORITY | 105
RUNTIMEPRIORITY | MEDIUM
RUNTIMEPRIORITYTHRESHOLD | 60
QUEUETIMEOUT | 00:05 (minutes)
RUNTIMECAP | NONE
PLANNEDCONCURRENCY | 7
MAXCONCURRENCY | Sets across all nodes the maximum number of concurrent execution slots available to the TM pool. In databases created in Vertica releases ≥ 9.3, the default value is 7. In databases created in earlier versions, the default is 3. This setting specifies the maximum number of merges that can occur simultaneously on multiple threads. Note: 0 or NONE (unlimited) are invalid settings.
SINGLEINITIATOR | True. Important: included for backwards compatibility. Do not change.
CPUAFFINITYSET | Empty / cannot be set
CPUAFFINITYMODE | ANY / cannot be set
CASCADETO | Empty / cannot be set
11.29 - CREATE ROLE
Creates a role.
Creates a role. After creating a role, use GRANT statements to specify role permissions.
Syntax
CREATE ROLE role
Parameters
role
- The name for the new role, where
role
conforms to conventions described in Identifiers.
Privileges
Superuser
Examples
This example shows how to create an empty role called roleA.
=> CREATE ROLE roleA;
CREATE ROLE
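Continuing the example, you might then grant a privilege to the new role and grant the role to a user; the schema and user names here are illustrative only:
=> GRANT USAGE ON SCHEMA public TO roleA;
GRANT PRIVILEGE
=> GRANT roleA TO Joan;
GRANT ROLE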
See also
11.30 - CREATE ROUTING RULE
Creates a load balancing routing rule that directs incoming client connections from an IP address range to a group of Vertica nodes.
Creates one of the following rules:
- A load balancing routing rule that directs incoming client connections from an IP address range to a group of Vertica nodes. This group of Vertica nodes is defined by a load balance group. Once you create a routing rule, any client connection originating from the rule's IP address range is redirected to one of the nodes in the load balance group if the client opts into load balancing.
- A workload routing rule that routes incoming client connections to the specified subcluster. To view existing workload routing rules, see WORKLOAD_ROUTING_RULES.
Syntax
CREATE ROUTING RULE
{
rule_name ROUTE 'address_range' TO group_name |
ROUTE WORKLOAD workload_name TO SUBCLUSTER subcluster_name [,...] [ PRIORITY priority]
}
Arguments
rule_name
- A name for the load balancing routing rule.
address_range
- An IPv4 or IPv6 address range in CIDR format, the address range of client connections that this rule applies to.
group_name
- The name of the load balance group to handle the client connections from the address range. You create this group with CREATE LOAD BALANCE GROUP.
workload_name
- The name of the workload.
subcluster_name
- A list of subclusters to route client connections to.
priority
- A non-negative INTEGER, the priority of the workload (default: 0). The greater the value, the greater the priority. If a user or their role is granted multiple routing rules, the one with the greatest priority applies.
Privileges
Superuser
Examples
The following example creates a routing rule that routes all client connections from 192.168.1.0 to 192.168.1.255 to a load balance group named internal_clients:
=> CREATE ROUTING RULE internal_clients ROUTE '192.168.1.0/24' TO internal_clients;
CREATE ROUTING RULE
The following example creates a routing rule that routes analytics workloads to subclusters sc_analytics and sc_analytics_2 with a priority of 5. For details, see Workload routing:
=> CREATE ROUTING RULE ROUTE WORKLOAD analytics TO SUBCLUSTER sc_analytics, sc_analytics_2 PRIORITY 5;
See also
11.31 - CREATE SCHEDULE
Creates a schedule.
Creates a schedule. For details, see Scheduled execution.
To view existing schedules, query USER_SCHEDULES.
Syntax
CREATE SCHEDULE [ IF NOT EXISTS ] [[database.]schema.]schedule
{ USING CRON 'cron_expression' | USING DATETIMES timestamp_list }
Arguments
- IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
schedule
- The name of the schedule.
cron_expression
- A
cron
expression. You should use this for recurring tasks. Value separators are not currently supported.
timestamp_list
- A comma-separated list of timestamps. You should use this to schedule non-recurring events at arbitrary times.
Privileges
Superuser
Examples
To create a schedule for October 2, 2022 and November 2, 2022:
=> CREATE SCHEDULE oct_nov_2 USING DATETIMES('2022-10-02 12:00:00', '2022-11-02 12:00:00');
To create a schedule for January 1 at 12:00 PM:
=> CREATE SCHEDULE annual_schedule USING CRON '0 12 1 1 *';
To create a schedule for the first of every month:
=> CREATE SCHEDULE monthly_schedule USING CRON '0 0 1 * *';
To create a schedule for Sunday at 12:00 AM:
=> CREATE SCHEDULE weekly_schedule USING CRON '0 0 * * 0';
To create a schedule for every day at 1:00 PM:
=> CREATE SCHEDULE daily_schedule USING CRON '0 13 * * *';
To create a schedule for every five minutes:
=> CREATE SCHEDULE every_5_minutes USING CRON '*/5 * * * *';
To create a schedule for every hour from 9:00 AM to 4:59 PM:
=> CREATE SCHEDULE 9_to_5 USING CRON '0 9-16 * * *';
11.32 - CREATE SCHEMA
Defines a schema.
Defines a schema.
Syntax
CREATE SCHEMA [ IF NOT EXISTS ] [{ database | namespace }.]schema
[ AUTHORIZATION username]
[ DEFAULT { INCLUDE | EXCLUDE } [ SCHEMA ] PRIVILEGES ]
[ DISK_QUOTA quota ]
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
database
- Name of the database. If specified, it must be the current database.
namespace
- For Eon Mode databases, namespace under which to create the schema. If unspecified, the schema is created in the default namespace.
schema
- Name of the schema to create, with the following requirements:
- Must be unique among all other schema names in the namespace under which the schema is created.
- Must comply with keyword restrictions and rules for Identifiers.
- Cannot begin with v_; this prefix is reserved for Vertica system tables.
- If schema contains a period, the part before the period cannot be the same as the name of an existing namespace.
AUTHORIZATION
username
- Valid only for superusers, assigns ownership of the schema to another user. By default, the user who creates a schema is also assigned ownership.
After you create a schema, you can reassign ownership to another user with
ALTER SCHEMA
.
DEFAULT {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES
Specifies whether to enable or disable default inheritance of privileges for new tables in the specified schema:
- EXCLUDE SCHEMA PRIVILEGES (default): Disables inheritance of schema privileges.
- INCLUDE SCHEMA PRIVILEGES: Specifies to grant tables in the specified schema the same privileges granted to that schema. This option has no effect on existing tables in the schema.
If you omit INCLUDE PRIVILEGES, you must explicitly grant schema privileges on the desired tables.
For more information see Enabling schema inheritance.
DISK_QUOTA
quota
- String, an integer followed by a supported unit: K, M, G, or T. Data-load, DML, and ILM operations that increase the schema's usage beyond the set quota fail. For details, see Disk quotas.
If not specified, the schema has no quota.
Privileges
Supported sub-statements
CREATE SCHEMA
can include one or more sub-statements—for example, to create tables or projections within the new schema. Supported sub-statements include:
The CREATE SCHEMA statement and all sub-statements are treated as a single transaction. If any statement fails, Vertica rolls back the entire transaction. The owner of the new schema is assigned ownership of all objects that are created within this transaction.
For example, the following CREATE SCHEMA
statement also grants privileges on the new schema, and creates a table and view of that table:
=> \c - Joan
You are now connected as user "Joan".
=> CREATE SCHEMA s1
GRANT USAGE, CREATE ON SCHEMA s1 TO public
CREATE TABLE s1.t1 (a varchar)
CREATE VIEW s1.t1v AS SELECT * FROM s1.t1;
CREATE SCHEMA
=> \dtv s1.*
List of tables
Schema | Name | Kind | Owner | Comment
--------+------+-------+-------+---------
s1 | t1 | table | Joan |
s1 | t1v | view | Joan |
(2 rows)
Examples
Create schema s1:
=> CREATE SCHEMA s1;
Create schema s2
if it does not already exist:
=> CREATE SCHEMA IF NOT EXISTS s2;
In an Eon Mode database, create schema s3
in namespace n1
:
=> CREATE SCHEMA n1.s3;
CREATE SCHEMA
In an Eon Mode database, create schema s3
in the default_namespace
:
=> CREATE SCHEMA s3;
CREATE SCHEMA
If the schema already exists, Vertica returns a rollback message:
=> CREATE SCHEMA IF NOT EXISTS s2;
NOTICE 4214: Object "s2" already exists; nothing was done
Create table t1
in schema s1
, then grant users Fred
and Aniket
access to all existing tables and all privileges on table t1
:
=> CREATE TABLE s1.t1 (c INT);
CREATE TABLE
=> GRANT USAGE ON SCHEMA s1 TO Fred, Aniket;
GRANT PRIVILEGE
=> GRANT ALL PRIVILEGES ON TABLE s1.t1 TO Fred, Aniket;
GRANT PRIVILEGE
Enable inheritance on new schema s3
so all tables created in it automatically inherit its privileges. In this case, new table s3.t2
inherits USAGE, CREATE, and SELECT privileges, which are automatically granted to all database users:
=> CREATE SCHEMA s3 DEFAULT INCLUDE SCHEMA PRIVILEGES;
CREATE SCHEMA
=> GRANT USAGE, CREATE, SELECT, INSERT ON SCHEMA S3 TO PUBLIC;
GRANT PRIVILEGE
=> CREATE TABLE s3.t2(i int);
WARNING 6978: Table "t2" will include privileges from schema "s3"
CREATE TABLE
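As a superuser, you can also assign ownership and a disk quota when you create a schema. The following sketch uses an illustrative schema name and quota:
=> CREATE SCHEMA finance AUTHORIZATION Joan DISK_QUOTA '5T';
CREATE SCHEMA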
See also
11.33 - CREATE SEQUENCE
Defines a named sequence number generator object.
Defines a named sequence number generator. Named sequences let you set the default values of primary key columns. Sequences guarantee uniqueness and avoid constraint enforcement issues.
A sequence has minimum and maximum values, a starting position, and an increment. The increment can be positive (ascending) or negative (descending). By default, an ascending sequence starts at 1, its minimum value, and a descending sequence starts at -1, its maximum value. You can specify minimum, maximum, increment, and start values for a new sequence, so long as the values are mathematically consistent.
For more information about sequence types and usage, see Sequences.
Syntax
CREATE SEQUENCE [ IF NOT EXISTS ] [[database.]schema.]sequence
[ INCREMENT [ BY ] integer ]
[ MINVALUE integer | NO MINVALUE ]
[ MAXVALUE integer | NO MAXVALUE ]
[ START [ WITH ] integer ]
[ CACHE integer | NO CACHE ]
[ CYCLE | NO CYCLE ]
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
sequence
- Name of the sequence to create, where the name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
INCREMENT
integer
Positive or negative minimum value change on each call to NEXTVAL. The default is 1.
This value is a minimum increment size. The sequence can increment by more than this value unless you also specify NO CACHE
.
MINVALUE
integer
| NO MINVALUE
- Minimum value the sequence can generate. The default depends on the direction of the sequence:
  - Ascending sequence: 1
  - Descending sequence: -2^63
MAXVALUE
integer
| NO MAXVALUE
- Maximum value the sequence can generate. The default depends on the direction of the sequence:
  - Ascending sequence: 2^63
  - Descending sequence: -1
START [WITH]
integer
- Initial value of the sequence. The next call to NEXTVAL returns this start value. By default, an ascending sequence begins with the minimum value and a descending sequence begins with the maximum value.
CACHE
integer
| NO CACHE
How many unique sequence numbers to pre-allocate and store in memory on each node for faster access. Vertica sets up caching for each session and distributes it across all nodes. A value of 0 or 1 is equivalent to NO CACHE
.
Caution
If sequence caching is set to a low number, nodes might request a new set of cache values more frequently. While it supplies a new cache, Vertica must lock the catalog. Until Vertica releases the lock, other database activities such as table inserts are blocked, which can adversely affect overall performance.
By default, the sequence cache is set to 250,000.
For details, see Distributing sequences.
CYCLE | NO CYCLE
- How to handle reaching the end of the sequence. The default is
NO CYCLE
, meaning that a call to NEXTVAL returns an error after the sequence reaches its upper or lower limit. CYCLE
instead wraps, as follows:
- Ascending sequence: wraps to the minimum value.
- Descending sequence: wraps to the maximum value.
Privileges
Non-superusers: CREATE privilege on the schema
Examples
See Creating and using named sequences.
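For a quick illustration, the following sequence starts at 100 and uses the default increment of 1; the sequence name and start value are hypothetical:
=> CREATE SEQUENCE public.my_seq START 100;
CREATE SEQUENCE
=> SELECT NEXTVAL('public.my_seq');
 nextval
---------
     100
(1 row)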
See also
11.34 - CREATE SUBNET
Identifies the subnet to which the nodes of a Vertica database belong.
Identifies the subnet to which the nodes of a Vertica database belong. Use this statement to configure import/export from a database to other Vertica clusters.
Syntax
CREATE SUBNET subnet-name WITH 'subnet-prefix'
Parameters
subnet-name
- A name you assign to the subnet, where
subnet-name
conforms to conventions described in Identifiers.
subnet-prefix
- The subnet prefix in either a dotted-quad number format for IPv4 addresses, or four colon-delimited four-digit hexadecimal numbers for IPv6 addresses. Refer to system table
NETWORK_INTERFACES
to get the prefix of all available IP networks.
You can then configure the database to use the subnet for import/export. For details, see Identify the database or nodes used for import/export.
Privileges
Superuser
Examples
=> CREATE SUBNET mySubnet WITH '123.4.5.6';
=> CREATE SUBNET mysubnet WITH 'fd9b:1fcc:1dc4:78d3::';
11.35 - CREATE TABLE
Creates a table in the logical schema.
Creates a table in the logical schema. You can specify the column definitions directly, or you can base a table on an existing table using AS or LIKE.
Syntax
Create with column definitions
CREATE TABLE [ IF NOT EXISTS ] [[ { namespace. | database. } ]schema.]table
( column-definition[,...] [, table-constraint [,...]] )
[ ORDER BY column[,...] ]
[ segmentation-spec ]
[ KSAFE [safety] ]
[ partition-clause]
[ { INCLUDE | EXCLUDE } [SCHEMA] PRIVILEGES ]
[ DISK_QUOTA quota ]
Create from another table
CREATE TABLE [ IF NOT EXISTS ] [[{ namespace. | database. } ]schema.]table
{ AS-clause | LIKE-clause }
[ DISK_QUOTA quota ]
AS-clause:
[ ( column-name-list ) ]
[ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
AS [ /*+ LABEL */ ] [ AT epoch ] query [ ENCODED BY column-ref-list ] [ segmentation-spec ]
LIKE-clause:
LIKE [[{ namespace. | database. } ]schema.]existing-table
[ { INCLUDING | EXCLUDING } PROJECTIONS ]
[ { INCLUDE | EXCLUDE } [SCHEMA] PRIVILEGES ]
Arguments
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
namespace
- For Eon Mode databases, name of the namespace under which to create the table. If unspecified, the table is created under the
default_namespace
.
database
- For Enterprise Mode databases, name of the database. If specified, it must be the current database.
schema
- Name of the schema under which to create the table. If no schema is specified, the table is created in the
public
schema.
table
- Name of the table to create, which must be unique among names of all sequences, tables, projections, views, and models within the schema.
- column-definition
- Column name, data type, and optional constraints. A table can have up to 9800 columns. At least one column in the table must be of a scalar type or native array.
- table-constraint
- Table-level constraint, as opposed to column constraints.
ORDER BY column[,...]
Invalid for external tables, specifies columns from the SELECT list on which to sort the superprojection that is automatically created for this table. The ORDER BY clause cannot include qualifiers ASC or DESC. Vertica always stores projection data in ascending sort order.
If you omit the ORDER BY clause, Vertica uses the SELECT list order as the projection sort order.
segmentation-spec
Invalid for external tables, specifies how to distribute data for auto-projections of this table. Supply one of the following clauses:
If this clause is omitted, Vertica generates auto-projections with default hash segmentation.
KSAFE [
safety
]
Invalid for external tables, specifies K-safety of auto-projections created for this table, where k-num
must be equal to or greater than system K-safety. If you omit this option, the projection uses the system K-safety level.
- partition-clause
- Invalid for external tables, logically divides table data storage through a PARTITION BY clause:
PARTITION BY partition-expression
[ GROUP BY group-expression ] [ ACTIVEPARTITIONCOUNT integer ]
- column-name-list
Valid only when creating a table from a query (AS query), defines column names that map to the query output. If you omit this list, Vertica uses the query output column names.
This clause and the ENCODED BY clause are mutually exclusive. Column name lists are invalid for external tables.
The names in column-name-list and queried columns must be the same in number.
For example:
CREATE TABLE customer_occupations (name, profession)
AS SELECT customer_name, occupation FROM customer_dimension;
{INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES
Default inheritance of schema privileges for this table:
- INCLUDE PRIVILEGES specifies that the table inherits privileges that are set on its schema. This is the default behavior if privileges inheritance is enabled for the schema.
- EXCLUDE PRIVILEGES disables inheritance of privileges from the schema.
For details, see Inherited privileges.
AS
query
Creates and loads a table from the results of a query, specified as follows:
AS [ /*+ LABEL */ ] [ AT epoch ] query
The query cannot include complex type columns.
ENCODED BY
column-ref-list
A comma-delimited list of columns from the source table, where each column is qualified by one or both of the following encoding options:
- ACCESSRANK integer: Overrides the default access rank for a column, useful for prioritizing access to a column. See Prioritizing column access speed.
- ENCODING encoding-type: Specifies the type of encoding to use on the column. The default encoding type is AUTO.
This option and column-name-list are mutually exclusive. This option is invalid for external tables.
LIKE
existing-table
- Creates the table by replicating an existing table. The user issuing the CREATE TABLE LIKE statement owns the new table, not the original owner.
You can qualify the LIKE clause with one of the following options:
- EXCLUDING PROJECTIONS (default): Do not copy projections from the source table.
- INCLUDING PROJECTIONS: Copy current projections from the source table for the new table.
- {INCLUDE|EXCLUDE} [SCHEMA] PRIVILEGES: See description above.
DISK_QUOTA
quota
- String, an integer followed by a supported unit: K, M, G, or T. Data-load, DML, and ILM operations that increase the table's usage beyond the set quota fail. For details, see Disk quotas.
If not specified, the table has no quota.
Privileges
Superuser to set disk quota.
Non-superuser:
Restrictions for complex types
Complex types used in native tables have some restrictions, in addition to the restrictions for individual types listed on their reference pages:
- A native table must have at least one column that is a primitive type or a native array (one-dimensional array of a primitive type). If a flex table has real columns, it must also have at least one column satisfying this restriction.
- Complex type columns cannot be used in ORDER BY or PARTITION BY clauses nor as FILLER columns.
- Complex type columns cannot have constraints.
- Complex type columns cannot use DEFAULT or SET USING.
- Expressions returning complex types cannot be used as projection columns, and projections cannot be segmented or ordered by columns of complex types.
Examples
The following example creates a table in the public schema:
CREATE TABLE public.Premium_Customer
(
ID IDENTITY ,
lname varchar(25),
fname varchar(25),
store_membership_card int
);
In an Eon Mode database, the following example creates the same table from the previous example, but in the store
schema of the company
namespace:
CREATE TABLE company.store.Premium_Customer
(
ID IDENTITY ,
lname varchar(25),
fname varchar(25),
store_membership_card int
);
The following example uses LIKE to create a new table from the Premium_Customer
table:
=> CREATE TABLE All_Customers LIKE Premium_Customer;
CREATE TABLE
The following example selects columns from one table to use in a new table, using an AS clause:
=> CREATE TABLE cust_basic_profile AS SELECT
customer_key, customer_gender, customer_age, marital_status, annual_income, occupation
FROM customer_dimension WHERE customer_age>18 AND customer_gender !='';
CREATE TABLE
=> SELECT customer_age, annual_income, occupation
FROM cust_basic_profile
WHERE customer_age > 23 ORDER BY customer_age;
customer_age | annual_income | occupation
--------------+---------------+--------------------
24 | 469210 | Hairdresser
24 | 140833 | Butler
24 | 558867 | Lumberjack
24 | 529117 | Mechanic
24 | 322062 | Acrobat
24 | 213734 | Writer
...
The following example creates a table using array columns:
=> CREATE TABLE orders(
orderkey INT,
custkey INT,
prodkey ARRAY[VARCHAR(10)],
orderprices ARRAY[DECIMAL(12,2)],
orderdate DATE
);
The following example uses a ROW complex type:
=> CREATE TABLE inventory
(store INT, products ROW(name VARCHAR, code VARCHAR));
The following example uses quotas:
=> CREATE SCHEMA internal DISK_QUOTA '10T';
CREATE SCHEMA
=> CREATE TABLE internal.sales (...) DISK_QUOTA '5T';
CREATE TABLE
=> CREATE TABLE internal.leads (...) DISK_QUOTA '12T';
WARNING 0: Table leads has disk quota greater than its schema internal
The following statement creates a flights
table in the airline
schema under the airport
namespace:
=> CREATE TABLE airport.airline.flights(
flight_number INT,
leaving_from VARCHAR,
arriving_at VARCHAR,
expected_departure DATE,
gate VARCHAR
);
See also
11.35.1 - Column-constraint
Adds a constraint to a column's metadata.
Adds a constraint to a column's metadata.
Syntax
[ { AUTO_INCREMENT | IDENTITY } [ (args) ] ]
[ CONSTRAINT constraint-name ] {
[ CHECK (expression) [ ENABLED | DISABLED ] ]
[ [ DEFAULT expression ] [ SET USING expression } | DEFAULT USING expression ]
[ NULL | NOT NULL ]
[ { PRIMARY KEY [ ENABLED | DISABLED ] REFERENCES table [( column )] } ]
[ UNIQUE [ ENABLED | DISABLED ] ]
}
Arguments
You can specify enforcement of several constraints by qualifying them with the keywords ENABLED
or DISABLED
. See Enforcing Constraints below.
{ AUTO_INCREMENT | IDENTITY } [ (
args
) ]
- Creates a table column whose values are automatically generated by and managed by the database. You cannot change or load values in this column. You can set this constraint on only one table column. You cannot set this constraint in a temporary table.
AUTO_INCREMENT
and IDENTITY
are synonyms.
The following arguments can be added:
( { cache-size | start }, increment, [ cache-size ] )
- start: First value to set for this column (default 1).
- increment: How much to change the value for each new row insertion, a positive or negative value (default 1).
- cache-size: How many unique values each node caches per session, or 0 or 1 to disable (default 250,000).
For more information about these arguments, see IDENTITY sequences.
CONSTRAINT
constraint-name
- Assigns a name to the constraint, valid for the following constraints:
- PRIMARY KEY
- REFERENCES (foreign key)
- CHECK
- UNIQUE
If you omit assigning a name to these constraints, Vertica assigns its own name. For details, see Naming constraints.
Vertica recommends that you name all constraints.
CHECK (
expression
)
- Sets a Boolean condition to check.
DEFAULT
default-expr
- Sets this column's default value. Vertica evaluates the expression and sets the column on load operations, if the operation omits a value for the column. For details about valid expressions, see Defining column values.
SET USING
using-expr
- Sets column values from an expression. Vertica evaluates the expression and refreshes column values only when REFRESH_COLUMNS is invoked. For details about valid expressions, see Defining column values.
DEFAULT USING
expression
- Defines the column with
DEFAULT
and SET USING
constraints, specifying the same expression for both. DEFAULT USING
columns support the same expressions as SET USING
columns, and are subject to the same restrictions.
NULL | NOT NULL
- Whether the column allows null values.
- NULL: Allows null values in the column. If you set this constraint on a primary key column, Vertica ignores it and sets it to NOT NULL.
- NOT NULL: Requires the column to be set to a value during insert and update operations. If the column has no default value and no value is provided, INSERT or UPDATE returns an error.
The default is NULL for all columns except primary key columns, which Vertica always sets to NOT NULL.
For an external table, data violating a NOT NULL
constraint can produce unexpected results when the table is queried. Use NOT NULL
for an external table column only if you are sure that the data does not contain nulls.
PRIMARY KEY
- Identifies this column as the table's primary key.
REFERENCES
table
[ (
column
) ]
- Identifies this column as a foreign key to a column in another table. If you omit the column, Vertica uses the table's primary key.
UNIQUE
- Requires column data to be unique with respect to all table rows.
Privileges
Table owner or user WITH GRANT OPTION is grantor.
Enforcing constraints
The following constraints can be qualified with the keyword ENABLED or DISABLED:
If you omit ENABLED or DISABLED, Vertica determines whether to enable the constraint automatically by checking the appropriate constraints configuration parameter:
- EnableNewPrimaryKeysByDefault
- EnableNewUniqueKeysByDefault
- EnableNewCheckConstraintsByDefault
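The following sketch combines several of these column constraints in one table definition; the table, column, and constraint names are hypothetical:
=> CREATE TABLE public.product_dim (
       product_key IDENTITY(1,1),
       sku VARCHAR(20) NOT NULL CONSTRAINT sku_unique UNIQUE ENABLED,
       list_price NUMERIC(10,2) DEFAULT 0 CONSTRAINT price_nonneg CHECK (list_price >= 0) ENABLED
   );
CREATE TABLE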
See also
11.35.2 - Column-definition
Specifies the name, data type, and constraints to be applied to a column.
Specifies the name, data type, and constraints to be applied to a column.
Syntax
column-name data-type
[ column-constraint ][...]
[ ENCODING encoding-type ]
[ ACCESSRANK integer ]
Parameters
column-name
- The name of a column to be created or added.
data-type
- A Vertica-supported data type.
Tip
When specifying the maximum column width in a CREATE TABLE statement, use the width in bytes (octets) for any of the string types. Each UTF-8 character might require four bytes, but European languages generally require a little over one byte per character, while East Asian languages generally require a little under three bytes per character.
column-constraint
- A constraint type that Vertica supports, for example, NOT NULL or UNIQUE. For general information, see Constraints.
ENCODING encoding-type
- The column encoding type, by default set to AUTO.
ACCESSRANK
integer
Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.
Examples
The following example creates a table named Employee_Dimension
and its associated superprojection in the public
schema. The Employee_key
column is designated as a primary key, and RLE encoding is specified for the Employee_gender
column definition:
=> CREATE TABLE public.Employee_Dimension (
Employee_key integer PRIMARY KEY NOT NULL,
Employee_gender varchar(8) ENCODING RLE,
Courtesy_title varchar(8),
Employee_first_name varchar(64),
Employee_middle_initial varchar(8),
Employee_last_name varchar(64)
);
11.35.3 - Column-name-list
Used to rename columns when creating a table or temporary table from a query; also used to specify the column's encoding type and access rank.
Syntax
column-name-list
[ ENCODING encoding-type ]
[ ACCESSRANK integer ]
[ GROUPED ( column-reference[,...] ) ]
Parameters
column-name
- Specifies the new name for the column.
ENCODING
[encoding-type](/en/sql-reference/statements/create-statements/create-projection/encoding-types/)
- Specifies the type of encoding to use on the column. The default encoding type is
AUTO
.
ACCESSRANK
integer
- Overrides the default access rank for a column, useful for prioritizing access to a column. See Prioritizing column access speed.
GROUPED
- Groups two or more columns . For detailed information, see GROUPED clause.
Requirements
-
A column in the list can not specify the column's data type or any constraint. These are derived from the queried table.
-
If the query output has expressions other than simple columns (for example, constants or functions) then an alias must be specified for that expression, or the column name list must include all queried columns.
-
CREATE TABLE can specify encoding types and access ranks in the column name list or the query's ENCODED BY clause, but not in both. For example, the following CREATE TABLE statement sets encoding and access rank on two columns in the column name list:
=> CREATE TABLE promo1 (state ENCODING RLE ACCESSRANK 1, zip ENCODING RLE,...)
AS SELECT * FROM customer_dimension ORDER BY customer_state;
The next statement specifies the same encoding and access rank in the query's ENCODED BY clause.
=> CREATE TABLE promo2
AS SELECT * FROM customer_dimension ORDER BY customer_state
ENCODED BY customer_state ENCODING RLE ACCESSRANK 1, customer_zip ENCODING RLE;
11.35.4 - Partition clause
Specifies partitioning of table data through a PARTITION BY clause in the table definition.
Specifies partitioning of table data, through a PARTITION BY clause in the table definition:
PARTITION BY partition-expression [ GROUP BY group-expression ] [ active-partition-count-expr ]
PARTITION BY
partition-expression
- For each table row, resolves to a partition key that is derived from one or more table columns.
Caution
Avoid partitioning tables on LONG VARBINARY and LONG VARCHAR columns. Doing so can adversely impact performance.
GROUP BY
group-expression
- For each table row, resolves to a partition group key that is derived from the partition key. Vertica uses group keys to merge partitions into separate partition groups. GROUP BY must use the same expression as PARTITION BY. For example:
...PARTITION BY (i+j) GROUP BY (
   CASE WHEN (i+j) < 5 THEN 1
        WHEN (i+j) < 10 THEN 2
        ELSE 3
   END);
For details on partitioning table data by groups, see Partition grouping and Hierarchical partitioning.
active-partition-count-expr
- Specifies how many partitions are active for this table, specified as follows:
- In the partition clause of CREATE TABLE:
  ACTIVEPARTITIONCOUNT integer
- In the partition clause of ALTER TABLE:
  SET ACTIVEPARTITIONCOUNT integer
This setting supersedes configuration parameter ActivePartitionCount. For details on usage, see Active and inactive partitions.
Partitioning requirements and restrictions
PARTITION BY expressions can specify leaf expressions, functions, and operators. The following requirements and restrictions apply:
- All table projections must include all columns referenced in the expression; otherwise, Vertica cannot resolve the expression.
- The expression can reference multiple columns, but it must resolve to a single non-null value for each row.
Note
You can avoid null-related errors with the function ZEROIFNULL. This function can check a PARTITION BY expression for null values and evaluate them to 0. For example:
CREATE TABLE t1 (a int, b int) PARTITION BY (ZEROIFNULL(a));
CREATE TABLE
- All leaf expressions must be constants or table columns.
- All other expressions must be functions and operators. The following restrictions apply to functions:
  - They must be immutable, that is, they return the same value regardless of time and locale and other session- or environment-specific conditions.
  - They cannot be aggregate functions.
  - They cannot be Vertica meta-functions.
- The expression cannot include queries.
- The expression cannot include user-defined data types such as Geometry.
GROUP BY expressions do not support modulo (%) operations.
Examples
The following statements create the store_orders
table and load data into it. The CREATE TABLE statement includes a simple partition clause that specifies to partition data by year:
=> CREATE TABLE public.store_orders
(
order_no int,
order_date timestamp NOT NULL,
shipper varchar(20),
ship_date date
)
UNSEGMENTED ALL NODES
PARTITION BY YEAR(order_date);
CREATE TABLE
=> COPY store_orders FROM '/home/dbadmin/export_store_orders_data.txt';
41834
As COPY loads the new table data into ROS storage, the Tuple Mover executes the table's partition clause by dividing orders for each year into separate partitions, and consolidating these partitions in ROS containers.
In this case, the Tuple Mover creates four partition keys for the loaded data—2017, 2016, 2015, and 2014—and divides the data into separate ROS containers accordingly:
=> SELECT dump_table_partition_keys('store_orders');
... Partition keys on node v_vmart_node0001
Projection 'store_orders_super'
Storage [ROS container]
No of partition keys: 1
Partition keys: 2017
Storage [ROS container]
No of partition keys: 1
Partition keys: 2016
Storage [ROS container]
No of partition keys: 1
Partition keys: 2015
Storage [ROS container]
No of partition keys: 1
Partition keys: 2014
Partition keys on node v_vmart_node0002
Projection 'store_orders_super'
Storage [ROS container]
No of partition keys: 1
Partition keys: 2017
...
(1 row)
As new data is loaded into store_orders
, the Tuple Mover merges it into the appropriate partitions, creating partition keys as needed for new years.
See also
Partitioning tables
11.35.5 - Table-constraint
Adds a constraint to table metadata. You can specify table constraints with CREATE TABLE, or add a constraint to an existing table with ALTER TABLE. For details, see Setting constraints.
Adding a constraint to a table that is referenced in a view does not affect the view.
Syntax
[ CONSTRAINT constraint-name ]
{
... PRIMARY KEY (column[,... ]) [ ENABLED | DISABLED ]
... | FOREIGN KEY (column[,... ] ) REFERENCES table [ (column[,...]) ]
... | UNIQUE (column[,...]) [ ENABLED | DISABLED ]
... | CHECK (expression) [ ENABLED | DISABLED ]
}
Parameters
CONSTRAINT
constraint‑name
- Assigns a name to the constraint. Vertica recommends that you name all constraints.
PRIMARY KEY
- Defines one or more
NOT NULL
columns as the primary key as follows:
PRIMARY KEY (column[,...]) [ ENABLED | DISABLED]
You can qualify this constraint with the keyword ENABLED
or DISABLED
. See Enforcing Constraints below.
If you do not name a primary key constraint, Vertica assigns the name C_PRIMARY
.
FOREIGN KEY
- Adds a referential integrity constraint defining one or more columns as foreign keys as follows:
FOREIGN KEY (column[,... ]) REFERENCES table [(column[,... ])]
If you omit column
, Vertica references the primary key in table
.
If you do not name a foreign key constraint, Vertica assigns the name C_FOREIGN
.
Important
Adding a foreign key constraint requires the following privileges (in addition to privileges also required by ALTER TABLE):
UNIQUE
- Specifies that the data in a column or group of columns is unique with respect to all table rows, as follows:
UNIQUE (column[,...]) [ENABLED | DISABLED]
You can qualify this constraint with the keyword ENABLED
or DISABLED
. See Enforcing Constraints below.
If you do not name a unique constraint, Vertica assigns the name C_UNIQUE
.
CHECK
- Specifies a check condition as an expression that returns a Boolean value, as follows:
CHECK (expression) [ENABLED | DISABLED]
You can qualify this constraint with the keyword ENABLED
or DISABLED
. See Enforcing Constraints below.
If you do not name a check constraint, Vertica assigns the name C_CHECK
.
Privileges
Non-superusers: table owner, or the following privileges:
Enforcing constraints
A table can specify whether Vertica automatically enforces a primary key, unique key, or check constraint with the keyword ENABLED or DISABLED. If you omit ENABLED or DISABLED, Vertica determines whether to enable the constraint automatically by checking the appropriate configuration parameter:
- EnableNewPrimaryKeysByDefault
- EnableNewUniqueKeysByDefault
- EnableNewCheckConstraintsByDefault
For details, see Constraint enforcement.
Examples
The following example creates a table (t01
) with a primary key constraint.
CREATE TABLE t01 (id int CONSTRAINT sampleconstraint PRIMARY KEY);
CREATE TABLE
This example creates the same table without the constraint, and then adds the constraint with ALTER TABLE ADD CONSTRAINT
CREATE TABLE t01 (id int);
CREATE TABLE
ALTER TABLE t01 ADD CONSTRAINT sampleconstraint PRIMARY KEY(id);
WARNING 2623: Column "id" definition changed to NOT NULL
ALTER TABLE
The following example creates a table (addapk
) with two columns, adds a third column to the table, and then adds a primary key constraint on the third column.
=> CREATE TABLE addapk (col1 INT, col2 INT);
CREATE TABLE
=> ALTER TABLE addapk ADD COLUMN col3 INT;
ALTER TABLE
=> ALTER TABLE addapk ADD CONSTRAINT col3constraint PRIMARY KEY (col3) ENABLED;
WARNING 2623: Column "col3" definition changed to NOT NULL
ALTER TABLE
Using the sample table addapk
, check that the primary key constraint is enabled (is_enabled
is t
).
=> SELECT constraint_name, column_name, constraint_type, is_enabled FROM PRIMARY_KEYS WHERE table_name IN ('addapk');
constraint_name | column_name | constraint_type | is_enabled
-----------------+-------------+-----------------+------------
col3constraint | col3 | p | t
(1 row)
This example disables the constraint using ALTER TABLE ALTER CONSTRAINT
.
=> ALTER TABLE addapk ALTER CONSTRAINT col3constraint DISABLED;
Check that the primary key is now disabled (is_enabled
is f
).
=> SELECT constraint_name, column_name, constraint_type, is_enabled FROM PRIMARY_KEYS WHERE table_name IN ('addapk');
constraint_name | column_name | constraint_type | is_enabled
-----------------+-------------+-----------------+------------
col3constraint | col3 | p | f
(1 row)
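The FOREIGN KEY constraint is not shown above. The following sketch adds a hypothetical foreign key that references the t01 table from the first example:
=> CREATE TABLE orders_fact (
       order_id INT,
       t01_id INT,
       CONSTRAINT fk_t01 FOREIGN KEY (t01_id) REFERENCES t01 (id)
   );
CREATE TABLE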
For a general discussion of constraints, see Constraints. For additional examples of creating and naming constraints, see Naming constraints.
11.36 - CREATE TEMPORARY TABLE
Creates a table whose data persists only during the current session.
Creates a table whose data persists only during the current session. By default, temporary table data is not visible to other sessions.
Syntax
Create with column definitions:
CREATE [ scope ] TEMP[ORARY] TABLE [ IF NOT EXISTS ] [[{namespace. | database. }]schema.]table-name
( column-definition[,...] )
[ table-constraint ]
[ ON COMMIT { DELETE | PRESERVE } ROWS ]
[ NO PROJECTION ]
[ ORDER BY table-column[,...] ]
[ segmentation-spec ]
[ KSAFE [safety-level] ]
[ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
[ DISK_QUOTA quota ]
Create from another table:
CREATE TEMP[ORARY] TABLE [ IF NOT EXISTS ] [[{namespace. | database. }]schema.]table-name
[ ( column-name-list ) ]
[ ON COMMIT { DELETE | PRESERVE } ROWS ]
AS [ /*+ LABEL */ ] [ AT epoch ] query [ ENCODED BY column-ref-list ]
[ DISK_QUOTA quota ]
Parameters
scope
- Visibility of the table definition:
- GLOBAL: The table definition is visible to all sessions, and persists until you explicitly drop the table.
- LOCAL: The table definition is visible only to the session in which it is created, and is dropped when the session ends.
If no scope is specified, Vertica uses the default that is set by the DefaultTempTableLocal configuration parameter.
Regardless of this setting, retention of temporary table data is set by the keywords ON COMMIT DELETE and ON COMMIT PRESERVE (see below).
For more information, see Creating temporary tables.
IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
{
namespace.
|
database.
}
- Qualifies the
schema
name with one of the following names, depending on the mode of the database:
- Eon Mode: name of the namespace to which the schema belongs. If unspecified, the namespace is assumed to be
default_namespace
. If scope
is set to LOCAL, the namespace must be default_namespace
.
- Enterprise Mode: name of the database. If specified, it must be the current database.
schema
- Name of the schema, by default
public
. If you specify the namespace
or database
name, you must specify the schema name, even if the schema is public
.
table-name
- Name of the table to create, where
table-name
conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
-
column-definition
- Column names and types. A table can have up to 9800 columns.
-
table-constraint
- Adds a constraint to table metadata.
ON COMMIT
- Whether data is transaction- or session-scoped:
ON COMMIT {PRESERVE | DELETE} ROWS
-
DELETE (default) marks the temporary table for transaction-scoped data. Vertica removes all table data after each commit.
-
PRESERVE marks the temporary table for session-scoped data, which is preserved beyond the lifetime of a single transaction. Vertica removes all table data when the session ends.
NO PROJECTION
- Prevents Vertica from creating auto-projections for this table. A superprojection is created only when data is explicitly loaded into this table.
NO PROJECTION is invalid with the following clauses:
{INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES
Default inheritance of schema privileges for this table:
-
INCLUDE PRIVILEGES specifies that the table inherits privileges that are set on its schema. This is the default behavior if privileges inheritance is enabled for the schema.
-
EXCLUDE PRIVILEGES disables inheritance of privileges from the schema.
For details, see Inherited privileges.
ORDER BY
table-column
[,...]
Invalid for external tables, specifies columns from the SELECT
list on which to sort the superprojection that is automatically created for this table. The ORDER BY
clause cannot include qualifiers ASC
or DESC
. Vertica always stores projection data in ascending sort order.
If you omit the
`ORDER BY`
clause, Vertica uses the SELECT
list order as the projection sort order.
segmentation-spec
Invalid for external tables, specifies how to distribute data for auto-projections of this table. Supply one of the following clauses:
If this clause is omitted, Vertica generates auto-projections with default hash segmentation.
KSAFE [
safety-level
]
Invalid for external tables, specifies K-safety of auto-projections created for this table, where safety-level
must be equal to or greater than system K-safety. If you omit this option, the projection uses the system K-safety level.
Eon Mode: K-safety of temporary tables is always set to 0, regardless of system K-safety. If a CREATE TEMPORARY TABLE
statement sets safety-level
greater than 0, Vertica returns a warning.
- column-name-list
Valid only when creating a table from a query (AS
query
), defines column names that map to the query output. If you omit this list, Vertica uses the query output column names.
This clause and the ENCODED BY
clause are mutually exclusive. Column name lists are invalid for external tables.
The names in column-name-list
and queried columns must be the same in number.
For example:
CREATE TEMP TABLE customer_occupations (name, profession)
AS SELECT customer_name, occupation FROM customer_dimension;
AS
query
Creates and loads a table from the results of a query, specified as follows:
AS [ /*+ LABEL */ ] [ AT epoch ] query
The query cannot include complex type columns.
ENCODED BY
column-ref-list
A comma-delimited list of columns from the source table, where each column is qualified by one or both of the following encoding options:
-
ACCESSRANK
integer
: Overrides the default access rank for a column, useful for prioritizing access to a column. See Prioritizing column access speed.
-
ENCODING encoding-type
: Specifies the type of encoding to use on the column. The default encoding type is AUTO
.
This option and column-name-list
are mutually exclusive. This option is invalid for external tables.
DISK_QUOTA
quota
- String, an integer followed by a supported unit: K, M, G, or T. If the schema has a quota, this value must be smaller than the schema quota. Data-load and ILM operations that increase the table's usage beyond the set quota fail. For details, see Disk quotas.
If not specified, the table has no quota.
Disk quota is valid for global temporary tables but not local ones.
Privileges
The following privileges are required:
Restrictions
-
Queries on temporary tables are subject to the same restrictions on SQL support as persistent tables.
-
You cannot add projections to non-empty, global temporary tables (ON COMMIT PRESERVE ROWS). Make sure that projections exist before you load data. See Auto-projections.
-
While you can add projections to temporary tables defined with ON COMMIT DELETE ROWS, be aware that you might lose all data.
-
Mergeout operations cannot be used on session-scoped temporary data.
-
In general, session-scoped temporary table data is not visible using system (virtual) tables.
-
Temporary tables do not recover. If a node fails, queries that use the temporary table also fail. Restart the session and populate the temporary table.
-
Local temporary tables cannot have disk quotas.
Examples
See Creating temporary tables.
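For instance, the following minimal sketch creates a global temporary table whose rows persist across transactions for the rest of the session; the table name and columns are illustrative:
=> CREATE GLOBAL TEMPORARY TABLE session_results (id INT, total NUMERIC(10,2))
   ON COMMIT PRESERVE ROWS;
CREATE TABLE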
See also
11.37 - CREATE TEXT INDEX
Creates a text index used to perform text searches.
Creates a text index used to perform text searches. If data within a table is partitioned, then an extra column appears in the text index, showing the partition.
Syntax
CREATE TEXT INDEX [[database.]schema.]txtindex-name
ON [schema.]source-table (unique-id, text-field [, column-name,...])
[STEMMER {stemmer-name(stemmer-input-data-type)| NONE}]
[TOKENIZER tokenizer-name(tokenizer-input-data-type)];
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
If you do not specify a schema, the table is created in the default schema.
txtindex-name
- The text index name.
source-table
- The source table to index.
unique-id
- The name of the column in the source table that contains a unique identifier. Any data type is permissible. The column must be the primary key in the source table.
text-field
- The name of the column in the source table that contains the text field. Valid data types are:
-
CHAR
-
VARCHAR
-
LONG VARCHAR
-
VARBINARY
-
LONG VARBINARY
Nulls are allowed.
column-name
- The name of a column or columns to be included as additional columns.
stemmer-name
- The name of the stemmer.
stemmer-input-data-type
- The input data type of the
stemmer-name
function.
tokenizer-name
- Specifies the name of the tokenizer.
tokenizer-input-data-type
- This value is the input data type of the
tokenizer-name
function. It can accept any number of arguments.
If a Vertica tokenizer is used, this parameter can be omitted.
Privileges
The text index automatically inherits the query permissions of its source table. Only the table owner and dbadmin can create or modify the index.
Important
Do not alter the contents or definition of the text index. If you do, the results will no longer accurately match the source table.
Requirements
-
Requires there be a column with a unique identifier set as the primary key.
-
The source table must have an associated projection, and must be both sorted and segmented by the primary key.
Examples
The following example shows how to create a text index with an additional unindexed column on the table t_log using the CREATE TEXT INDEX statement:
=> CREATE TEXT INDEX t_log_index ON t_log (id, text, day_of_week);
CREATE INDEX
=> SELECT * FROM t_log_index;
token | doc_id | day_of_week
-----------------------+--------+-------------
'catalog | 1 | Monday
'dbadmin' | 2 | Monday
2014-06-04 | 1 | Monday
2014-06-04 | 2 | Monday
2014-06-04 | 3 | Monday
2014-06-04 | 4 | Monday
2014-06-04 | 5 | Monday
2014-06-04 | 6 | Monday
2014-06-04 | 7 | Monday
2014-06-04 | 8 | Monday
45035996273704966 | 3 | Tuesday
45035996273704968 | 4 | Tuesday
<INFO> | 1 | Tuesday
<INFO> | 6 | Tuesday
<INFO> | 7 | Tuesday
<INFO> | 8 | Tuesday
<WARNING> | 2 | Tuesday
<WARNING> | 3 | Tuesday
<WARNING> | 4 | Tuesday
<WARNING> | 5 | Tuesday
...
(97 rows)
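With the index in place, one way to search the source table is to filter on a token and join back through doc_id. The following is a sketch only, using the table and index from the example above; the token value is illustrative:
=> SELECT * FROM t_log WHERE id IN (SELECT doc_id FROM t_log_index WHERE token = '<WARNING>');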
The following example shows a text index, tpart_index, created from a partitioned source table:
=> SELECT * FROM tpart_index;
token | doc_id | partition
------------------------+--------+-----------
0 | 4 | 2014
0 | 5 | 2014
11:00:49.568 | 4 | 2014
11:00:49.568 | 5 | 2014
11:00:49.569 | 6 | 2014
<INFO> | 6 | 2014
<WARNING> | 4 | 2014
<WARNING> | 5 | 2014
Database | 6 | 2014
Execute: | 6 | 2014
Object | 4 | 2014
Object | 5 | 2014
[Catalog] | 4 | 2014
[Catalog] | 5 | 2014
'catalog | 1 | 2013
'dbadmin' | 2 | 2013
0 | 3 | 2013
11:00:49.568 | 1 | 2013
11:00:49.568 | 2 | 2013
11:00:49.568 | 3 | 2013
11:00:49.570 | 7 | 2013
11:00:49.571 | 8 | 2013
45035996273704966 | 3 | 2013
...
(89 rows)
See also
11.38 - CREATE TLS CONFIGURATION
Creates a TLS Configuration object.
Creates a TLS Configuration object. For information on existing TLS Configuration objects, query TLS_CONFIGURATIONS.
To modify an existing TLS Configuration object, see ALTER TLS CONFIGURATION.
Syntax
CREATE TLS CONFIGURATION tls_config_name {
[ CERTIFICATE { NULL | cert_name } ]
[ CA CERTIFICATES ca_cert_name [,...] ]
[ CIPHER SUITES { '' | 'openssl_cipher [,...]' } ]
[ TLSMODE 'tlsmode' ]
}
Parameters
tls_config_name
- The name of the TLS Configuration object.
cert_name
- A certificate created with CREATE CERTIFICATE.
ca_cert_name
- A CA certificate created with CREATE CERTIFICATE.
openssl_cipher
- A comma-separated list of cipher suites to use instead of the default set of cipher suites. Providing an empty string for this parameter clears the alternate cipher suite list and instructs the specified TLS Configuration to use the default set of cipher suites.
To view enabled cipher suites, use LIST_ENABLED_CIPHERS.
tlsmode
- How Vertica establishes TLS connections and handles client certificates, one of the following, in order of ascending security:
-
DISABLE
: Disables TLS. All other options for this parameter enable TLS.
-
ENABLE
: Enables TLS. Vertica does not check client certificates.
-
TRY_VERIFY
: Establishes a TLS connection if one of the following is true:
If the other host presents an invalid certificate, the connection will use plaintext.
-
VERIFY_CA
: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA. If the other host does not present a certificate, the connection uses plaintext.
-
VERIFY_FULL
: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA and the certificate's cn
(Common Name) or subjectAltName
attribute matches the hostname or IP address of the other host.
Note that for client certificates, cn
is used for the username, so subjectAltName
must match the hostname or IP address of the other host.
VERIFY_FULL
is unsupported for client-server TLS (the server
TLS Configuration context) and behaves as VERIFY_CA
.
11.39 - CREATE TRIGGER
Creates a trigger.
Creates a trigger. For details, see Triggers.
Syntax
CREATE TRIGGER [ IF NOT EXISTS ] [[database.]schema.]trigger
ON SCHEDULE [[database.]schema.]schedule
EXECUTE PROCEDURE procedure AS DEFINER
Parameters
- IF NOT EXISTS
If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.
The IF NOT EXISTS
clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
trigger
- The name of the trigger.
schedule
- The schedule with which to associate the trigger.
procedure
- The function signature of the stored procedure.
- AS DEFINER
- The user to execute the stored procedure as. Currently, the only option is DEFINER, which executes the stored procedure as the definer of the trigger.
Privileges
Superuser
Examples
CREATE TRIGGER requires a schedule and stored procedure:
=> CREATE PROCEDURE revoke_all_on_table(table_name VARCHAR, user_name VARCHAR)
LANGUAGE PLvSQL
AS $$
BEGIN
EXECUTE 'REVOKE ALL ON ' || QUOTE_IDENT(table_name) || ' FROM ' || QUOTE_IDENT(user_name);
END;
$$;
=> CREATE SCHEDULE 24_hours_later USING DATETIMES('2022-12-16 12:00:00');
To create the trigger with schedule 24_hours_later
and stored procedure revoke_all_on_table()
with arguments customer_dimension
and Bob
:
=> CREATE TRIGGER revoke_trigger ON SCHEDULE 24_hours_later EXECUTE PROCEDURE revoke_all_on_table('customer_dimension', 'Bob') AS DEFINER;
11.40 - CREATE USER
Adds a name to the list of authorized database users.
Adds a name to the list of authorized database users.
Syntax
CREATE USER user-name [ parameter value[,...] ]
Arguments
user-name
- Name of the new user, where the value conforms to conventions described in Identifiers.
parameter
value
- One or more user account parameter settings (see below). Note that the parameter name and value are separated by a space, not
=
.
Parameters
ACCOUNT
Locks or unlocks user access to the database, one of the following:
Tip
To automate account locking, set a maximum number of failed login attempts with
CREATE PROFILE.
GRACEPERIOD
Specifies how long a user query can block on any session socket, one of the following:
-
NONE
(default): Removes any grace period previously set on session queries.
-
'
interval
'
: Specifies as an interval the maximum grace period for current session queries, up to 20 days.
For details, see Handling session socket blocking.
IDENTIFIED BY { '
password
' | '
hashed-password
' SALT '
hash-salt
' }
- Sets the user's password. Options are:
-
password
: ASCII password that Vertica then hashes for internal storage. An empty string enables this user to access the database with no password.
-
hashed-password
: A pre-hashed password and its associated hex string hash-salt
. Setting a password this way bypasses all password complexity requirements.
Important
If you omit this parameter, this user can access the database with no password.
For details, see Password guidelines and Creating a database name and password.
IDLESESSIONTIMEOUT
The length of time the system waits before disconnecting an idle session, one of the following:
-
NONE
(default): No limit set for this user. If you omit this parameter, no limit is set for this user.
-
'interval'
: An interval value, up to one year.
For details, see Managing client connections.
MAXCONNECTIONS
Sets the maximum number of connections the user can have to the server, one of the following:
-
NONE
(default): No limit set. If you omit this parameter, the user can have an unlimited number of connections across the database cluster.
-
integer
ON DATABASE
: Sets to integer
the maximum number of connections across the database cluster.
-
integer
ON NODE
: Sets to integer
the maximum number of connections to each node.
For details, see Managing client connections.
MEMORYCAP
Sets how much memory can be allocated to user requests, one of the following:
PASSWORD EXPIRE
Forces immediate expiration of the user's password. The user must change the password on the next login.
Note
PASSWORD EXPIRE
has no effect when using external password authentication methods such as LDAP or Kerberos.
PROFILE
Assigns a profile that controls password requirements for this user, one of the following:
If you omit this parameter, the user is assigned the default profile.
RESOURCE POOL
pool-name
[FOR SUBCLUSTER
sc-name
]
Assigns a resource pool to this user. The user must also be granted privileges to this pool, unless privileges to the pool are set to PUBLIC
.
The FOR SUBCLUSTER
clause assigns a subcluster-specific resource pool to the user. You can assign only one subcluster-specific resource pool to each user.
RUNTIMECAP
Sets how long this user's queries can execute, one of the following:
-
NONE
(default): No limit set for this user. If you omit this parameter, no limit is set for this user.
-
'interval'
: An interval value, up to one year.
A query's runtime limit can be set at three levels: the user's runtime limit, the user's resource pool, and the session setting. For more information, see Setting a runtime limit for queries.
SEARCH_PATH
Specifies the user's default search path, which tells Vertica which schemas to search for unqualified references to tables and UDFs, one of the following:
-
DEFAULT
(default): Sets the search path as follows:
"$user", public, v_catalog, v_monitor, v_internal
-
Comma-delimited list of schemas.
For details, see Setting Search Paths.
TEMPSPACECAP
Sets how much temporary file storage is available for user requests, one of the following:
Privileges
Superuser
User name best practices
Vertica database user names are logically separate from user names of the operating system in which the server runs. If all the users of a particular server also have accounts on the server's machine, it makes sense to assign database user names that match their operating system user names. However, a server that accepts remote connections might have many database users with no local operating system account. In this case, there is no need to match database and system user names.
Examples
=> CREATE USER Fred IDENTIFIED BY 'Mxyzptlk';
=> GRANT USAGE ON SCHEMA PUBLIC to Fred;
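The following sketch combines several of the parameters described above; the user name, password, connection and runtime limits, and schema names are illustrative:
=> CREATE USER analyst1 IDENTIFIED BY 'S3curePwd!'
   MAXCONNECTIONS 5 ON DATABASE
   RUNTIMECAP '30 minutes'
   SEARCH_PATH staging, public;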
See also
11.41 - CREATE VIEW
Defines a view.
Defines a view. Views are read only, so they do not support insert, update, delete, or copy operations.
Syntax
CREATE [ OR REPLACE ] VIEW [[database.]schema.]view [ (column[,...]) ]
[ {INCLUDE|EXCLUDE} [SCHEMA] PRIVILEGES ] AS query
Parameters
OR REPLACE
- Specifies to overwrite the existing view
view-name
. If you omit this option and view-name
already exists, CREATE VIEW
returns an error.
Any grants assigned to the view before you execute a CREATE OR REPLACE remain on the updated view. See GRANT (view).
[
database
]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
view
- Identifies the view to create, where
view
conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
column
[,...]
- List of up to 9800 names to use as view column names. Vertica maps view column names to query columns according to the order of their respective lists. By default, the view uses column names as they are specified in the query.
query
- A
SELECT
statement that the view executes. The SELECT
statement can reference tables, temporary tables, and other views.
{INCLUDE|EXCLUDE}[SCHEMA] PRIVILEGES
- Specifies whether this view inherits schema privileges:
-
INCLUDE PRIVILEGES
specifies that the view inherits privileges that are set on its schema. This is the default behavior if privileges inheritance is enabled for the schema.
-
EXCLUDE PRIVILEGES
disables inheritance of privileges from the schema.
For details, see Inherited privileges.
Privileges
See Creating views.
Examples
The following example shows how to create a view that contains data from multiple tables.
=> CREATE VIEW temp_t0 AS SELECT * from t0_p1 UNION ALL
SELECT * from t0_p2 UNION ALL
SELECT * from t0_p3 UNION ALL
SELECT * from t0_p4 UNION ALL
SELECT * from t0_p5;
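The following sketch replaces that view and assigns explicit column names; the view column names and source columns are illustrative:
=> CREATE OR REPLACE VIEW temp_t0 (a, b) AS SELECT col1, col2 FROM t0_p1;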
See also
12 - DEACTIVATE DIRECTED QUERY
Deactivates one or more directed queries previously activated by ACTIVATE DIRECTED QUERY.
Deactivates one or more directed queries previously activated by ACTIVATE DIRECTED QUERY.
Syntax
DEACTIVATE DIRECTED QUERY { query-name | input-query | where-clause }
Arguments
query-name
- Name of the directed query to deactivate, as stored in the DIRECTED_QUERIES column
query_name
.
input-query
- The input query of the directed queries to deactivate. Use this argument to deactivate multiple directed queries that map to the same input query.
where-clause
- Resolves to one or more directed queries that are filtered from system table DIRECTED_QUERIES. For example, the following statement specifies to deactivate all directed queries with the same
save_plans_version
identifier:
=> DEACTIVATE DIRECTED QUERY WHERE save_plans_version = 21;
Privileges
Superuser
Examples
See Managing directed queries.
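For instance, a minimal sketch that deactivates a single directed query by name (the name is an assumption):
=> DEACTIVATE DIRECTED QUERY sales_plan_dq;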
13 - DELETE
Removes the specified rows from a table and returns a count of the deleted rows.
Removes the specified rows from a table and returns a count of the deleted rows. A count of 0 is not an error, but indicates that no rows matched the condition. An unqualified DELETE statement (one that omits a WHERE clause) removes all rows but leaves intact table columns, projections, and constraints.
DELETE supports subqueries and joins, so you can delete values in a table based on values in other tables.
Important
The Vertica implementation of DELETE differs from traditional databases: it does not delete data from disk storage, but instead marks rows as deleted so they are available for historical queries. Deleted data remains on disk, and counts against disk quota, until purged.
Syntax
DELETE [ /*+LABEL (label-string)*/ ]
FROM [[{database | namespace }.]schema.]table [ where-clause ]
Arguments
-
LABEL
Assigns a label to a statement to identify it for profiling and debugging.
{
database
|
namespace
}
- Name of the database or namespace that contains
table
:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default
public
. If you specify the namespace or database name, you must provide the schema name, even if the schema is public
.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table
- Any table, including temporary tables.
- where-clause
- Which rows to mark for deletion. If you omit this clause, DELETE behavior varies depending on whether the table is persistent or temporary. See below for details.
Privileges
Table owner or user with GRANT OPTION is grantor.
-
DELETE privilege on table
-
USAGE privilege on the schema of the target table
-
SELECT privilege on a table when the DELETE statement includes a WHERE or SET clause that specifies columns from that table.
Restrictions
You cannot execute DELETE on an immutable table.
Committing successive table changes
Vertica follows the SQL-92 transaction model, so successive INSERT, UPDATE, and DELETE statements are included in the same transaction. You do not need to explicitly start this transaction; however, you must explicitly end it with COMMIT, or implicitly end it with COPY. Otherwise, Vertica discards all changes that were made within the transaction.
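For example, a sketch of a delete made permanent with an explicit COMMIT; the table and condition are illustrative:
=> DELETE FROM retail.customer WHERE state = 'VT';
=> COMMIT;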
Persistent and temporary tables
When deleting from a persistent table, DELETE removes data directly from the ROS.
DELETE execution on temporary tables varies, depending on whether the table was created with ON COMMIT DELETE ROWS (default) or ON COMMIT PRESERVE ROWS:
-
If DELETE contains a WHERE clause that specifies which rows to remove, behavior is identical: DELETE marks the rows for deletion. In both cases, you cannot roll back to an earlier savepoint.
-
If DELETE omits a WHERE clause and the table was created with ON COMMIT PRESERVE ROWS, Vertica marks all table rows for deletion. If the table was created with ON COMMIT DELETE ROWS, DELETE behaves like TRUNCATE TABLE and removes all rows from storage.
Note
If you issue an unqualified DELETE statement on a temporary table created with ON COMMIT DELETE ROWS, Vertica removes all rows from storage but does not end the transaction.
Examples
The following statement removes all rows from a temporary table:
=> DELETE FROM temp1;
The following statement deletes all records from a schema-qualified table where a condition is satisfied:
=> DELETE FROM retail.customer WHERE state IN ('MA', 'NH');
For examples that show how to nest a subquery within a DELETE statement, see Subqueries in UPDATE and DELETE.
See also
14 - DISCONNECT
Closes a connection to another Vertica database that was opened in the same session with CONNECT TO VERTICA.
Closes a connection to another Vertica database that was opened in the same session with
CONNECT TO VERTICA
.
Note
Closing your session also closes the database connection. However, it is a good practice to explicitly close the connection to the other database, both to free up resources and to prevent issues with other SQL scripts that might be running in your session. Always closing the connection prevents potential errors if you run a script in the same session that attempts to open a connection to the same database, since each session can only have one connection to a given database at a time.
Syntax
DISCONNECT db-spec
Parameters
db-spec
- Specifies the target database, either the database name or
DEFAULT
.
Privileges
None
Examples
=> DISCONNECT DEFAULT;
DISCONNECT
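A fuller sketch that opens a connection and then explicitly closes it; the database name, credentials, host, and port are assumptions:
=> CONNECT TO VERTICA vmart USER dbadmin PASSWORD 'password' ON 'example01', 5433;
CONNECT
=> DISCONNECT vmart;
DISCONNECT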
15 - DO
Executes an anonymous (unnamed) stored procedure without saving it.
Executes an anonymous (unnamed) stored procedure without saving it.
Syntax
DO [ LANGUAGE 'language-name' ] $$
source
$$;
Parameters
language-name
- Specifies the language of the procedure
source
, one of the following (both options refer to PLvSQL; PLpgSQL is included to maintain compatibility with existing scripts):
Default: PLvSQL
source
- The source code of the procedure.
Privileges
None
Examples
For more complex examples, see Stored procedures: use cases and examples
This procedure prints the variables in the DECLARE block:
DO LANGUAGE PLvSQL $$
DECLARE
x int := 3;
y varchar := 'some string';
BEGIN
RAISE NOTICE 'x = %', x;
RAISE NOTICE 'y = %', y;
END;
$$;
NOTICE 2005: x = 3
NOTICE 2005: y = some string
For more information on RAISE NOTICE, see Errors and diagnostics.
See also
16 - DROP statements
DROP statements let you delete database objects such as schemas, tables, and users.
DROP statements let you delete database objects such as schemas, tables, and users.
16.1 - DROP ACCESS POLICY
Removes an access policy from a column or row.
Removes an access policy from a column or row.
Syntax
DROP ACCESS POLICY ON table FOR { COLUMN column | ROWS}
Parameters
table
- Name of the table that contains the column access policy to remove
column
- Name of the column that contains the access policy to remove
Privileges
Non-superuser: Ownership of the table
Examples
These examples show various cases where you can drop an access policy.
Drop column access policy:
=> DROP ACCESS POLICY ON customer FOR COLUMN Customer_Number;
Drop row access policy on a table:
=> DROP ACCESS POLICY ON customer_info FOR ROWS;
16.2 - DROP AGGREGATE FUNCTION
Drops a user-defined aggregate function (UDAnF) from the Vertica catalog.
Drops a user-defined aggregate function (UDAnF) from the Vertica catalog.
Syntax
DROP AGGREGATE FUNCTION [ IF EXISTS ] [[database.]schema.]function( [ arglist ] )
Parameters
IF EXISTS
- Specifies not to report an error if the function to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
- Specifies a name of the SQL function to drop. If the function name is schema-qualified, the function is dropped from the specified schema (as noted above).
arglist
- A comma delimited list of argument names and data types that are passed to the function, formatted as follows:
{ [argname] argtype }[,...]
-
argname
optionally specifies the argument name, typically a column name.
-
argtype
specifies the argument's data type, where argtype
matches a Vertica data type.
Privileges
Non-superuser: Owner
Requirements
-
To drop a function, you must specify the argument types because several functions might share the same name with different parameters.
-
Vertica does not check for dependencies, so if you drop a SQL function where other objects reference it (such as views or other SQL functions), Vertica returns an error when those objects are used and not when the function is dropped.
Examples
The following command drops the ag_avg
function:
=> DROP AGGREGATE FUNCTION ag_avg(numeric);
DROP AGGREGATE FUNCTION
See also
Aggregate functions (UDAFs)
16.3 - DROP ANALYTIC FUNCTION
Drops a user-defined analytic function from the Vertica catalog.
Drops a user-defined analytic function from the Vertica catalog.
Syntax
DROP ANALYTIC FUNCTION [ IF EXISTS ] [[database.]schema.]function( [ arglist ] )
Parameters
IF EXISTS
- Specifies not to report an error if the function to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
- Specifies a name of the SQL function to drop. If the function name is schema-qualified, the function is dropped from the specified schema (as noted above).
arglist
- A comma delimited list of argument names and data types that are passed to the function, formatted as follows:
{ [argname] argtype }[,...]
-
argname
optionally specifies the argument name, typically a column name.
-
argtype
specifies the argument's data type, where argtype
matches a Vertica data type.
Privileges
Non-superuser: Owner
Requirements
-
To drop a function, you must specify the argument types because several functions might share the same name with different parameters.
-
Vertica does not check for dependencies, so if you drop a SQL function where other objects reference it (such as views or other SQL functions), Vertica returns an error when those objects are used and not when the function is dropped.
Examples
The following command drops the analytic_avg
function:
=> DROP ANALYTIC FUNCTION analytic_avg(numeric);
DROP ANALYTIC FUNCTION
See also
Analytic functions (UDAnFs)
16.4 - DROP ARCHIVE
Eon Mode only
Drops an in-database restore point archive.
Syntax
DROP ARCHIVE archive-name [ CASCADE ]
Arguments
archive-name
- Name of the archive to drop. If the archive contains any restore points and the CASCADE argument is not provided, the statement returns an error. If you specify CASCADE, the statement drops the archive and any existing restore points.
Privileges
Superuser or archive owner
Examples
Drop the eon
archive and any restore points that it contains:
=> DROP ARCHIVE eon CASCADE;
See also
16.5 - DROP AUTHENTICATION
Drops an authentication method.
Drops an authentication method.
Syntax
DROP AUTHENTICATION [ IF EXISTS ] auth-method-name [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if the authentication method to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
auth-method-name
- Name of the authentication method to drop.
CASCADE
- Required if the authentication method to drop is granted to users. In this case, omission of this option causes the drop operation to fail.
Privileges
Superuser
Examples
Delete authentication method md5_auth
:
=> DROP AUTHENTICATION md5_auth;
Use CASCADE
to drop authentication method that was granted to a user:
=> CREATE AUTHENTICATION localpwd METHOD 'password' LOCAL;
=> GRANT AUTHENTICATION localpwd TO jsmith;
=> DROP AUTHENTICATION localpwd CASCADE;
See also
16.6 - DROP CA BUNDLE
Drops a certificate authority (CA) bundle.
Deprecated
CA bundles are only usable with certain deprecated parameters in
Kafka notifiers. You should prefer using
TLS configurations and the TLS CONFIGURATION parameter for notifiers instead.
Drops a certificate authority (CA) bundle.
Syntax
DROP CA BUNDLE [ IF EXISTS ] name [,...] [ CASCADE ]
Parameters
IF EXISTS
- Vertica does not report an error if the CA bundle to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
name
- The name of the CA bundle.
CASCADE
- Drops dependent objects before dropping the CA bundle.
Privileges
Ownership of the CA bundle
Examples
See Managing CA bundles.
See also
16.7 - DROP CERTIFICATE
Drops a TLS certificate from the database.
Drops a TLS certificate from the database.
To view existing certificates, query CERTIFICATES.
Syntax
DROP CERTIFICATE [ IF EXISTS ] certificate-name [,...] [ CASCADE ]
Parameters
IF EXISTS
- Vertica does not report an error if the certificate to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
certificate-name
- The name of the certificate to drop.
CASCADE
- Drops dependent objects before dropping the certificate.
Predefined TLS Configurations and TLS Configurations that manage a connection type cannot be dropped, nor can the keys and certificates referenced by such TLS Configurations. For details, see TLS configurations.
Privileges
Non-superuser, one of the following:
Examples
Drop server_cert
, if it exists:
=> DROP CERTIFICATE server_cert;
DROP CERTIFICATE;
Drop a CA certificate and its dependencies (typically the certificates that it has signed):
=> DROP CERTIFICATE ca_cert CASCADE;
DROP CERTIFICATE;
See also
16.8 - DROP DATA LOADER
DROP DATA LOADER drops a data loader and its associated monitoring table.
DROP DATA LOADER drops a data loader created with CREATE DATA LOADER. It also drops the associated monitoring table.
If the data loader was created with a trigger, you must disable it before dropping it. Call ALTER DATA LOADER with the DISABLE option.
Syntax
DROP DATA LOADER [ IF EXISTS ] [schema.]name
Arguments
IF EXISTS
- Do not report an error if the data loader does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
schema
- Schema containing the data loader. The default schema is
public
.
name
- Name of the data loader.
Privileges
Non-superuser: owner or DROP privilege on the data loader.
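Examples
The following sketch disables a trigger-based loader and then drops it; the loader name and the exact ALTER DATA LOADER syntax shown are assumptions:
=> ALTER DATA LOADER sales_loader DISABLE;
=> DROP DATA LOADER sales_loader;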
See also
16.9 - DROP DIRECTED QUERY
Removes a directed query from the database.
Removes a directed query from the database. If the directed query is active, Vertica deactivates it before removal.
Syntax
DROP DIRECTED QUERY { query-name | where-clause }
Arguments
query-name
- Name of the directed query to remove from the database, as stored in the DIRECTED_QUERIES column
query_name
. You can also use GET DIRECTED QUERY to obtain names of all directed queries that map to an input query.
where-clause
- Resolves to one or more directed queries that are filtered from system table DIRECTED_QUERIES. For example, the following statement specifies to drop all directed queries with the same
save_plans_version
identifier:
=> DROP DIRECTED QUERY WHERE save_plans_version = 21;
Privileges
Superuser
Examples
See Managing directed queries.
16.10 - DROP FAULT GROUP
Removes the specified fault group and its child fault groups, placing all nodes under the parent of the dropped fault group.
Removes the specified fault group and its child fault groups, placing all nodes under the parent of the dropped fault group.
To drop all fault groups, use ALTER DATABASE...DROP ALL FAULT GROUP.
To add an orphaned node back to a fault group, you must manually reassign it to a new or existing fault group with CREATE FAULT GROUP and ALTER FAULT GROUP...ADD NODE.
Tip
For a list of all fault groups defined in the cluster, query system table
FAULT_GROUPS .
Syntax
DROP FAULT GROUP [ IF EXISTS ] fault-group
Parameters
IF EXISTS
- Specifies not to report an error if
fault-group
does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
fault-group
- Specifies the name of the fault group to drop.
Privileges
Superuser
Examples
=> DROP FAULT GROUP group2;
DROP FAULT GROUP
See also
16.11 - DROP FILTER
Drops a User Defined Load Filter function from the Vertica catalog.
Drops a User Defined Load Filter function from the Vertica catalog.
Syntax
DROP FILTER [[database.]schema.]filter()
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
filter
()
- Specifies the filter function to drop. You must append empty parentheses to the function name.
Privileges
Non-superuser:
Examples
The following command drops the Iconverter
filter function:
=> drop filter Iconverter();
DROP FILTER
See also
16.12 - DROP FUNCTION
Drops an SQL function or user-defined functions (UDFs) from the Vertica catalog.
Drops an SQL function or user-defined functions (UDFs) from the Vertica catalog.
Syntax
DROP FUNCTION [ IF EXISTS ] [[database.]schema.]function[,...] ( [ arg-list ] )
Parameters
IF EXISTS
- Specifies not to report an error if the function to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function
- The SQL or user-defined function (UDF) to drop, where UDFs can be one of the following types:
arg-list
A comma-delimited list of arguments as defined for this function when it was created, specified as follows:
[
arg-name
]
arg-type
[,...]
where arg-name
optionally qualifies arg-type
:
Privileges
Non-superuser, one of the following:
Requirements
-
To drop a function, you must specify the argument types because several functions might share the same name with different parameters.
-
Vertica does not check for dependencies when you drop a SQL function, so if other objects reference it (such as views or other SQL functions), Vertica returns an error only when those objects are used.
Examples
The following command drops the zerowhennull
function in the macros
schema:
=> DROP FUNCTION macros.zerowhennull(x INT);
DROP FUNCTION
See also
16.13 - DROP KEY
Drops a cryptographic key and its certificate, if any, from the database.
Drops a cryptographic key and its certificate, if any, from the database.
To view existing cryptographic keys, query CRYPTOGRAPHIC_KEYS.
Syntax
DROP KEY [ IF EXISTS ] key-name [,...] [ CASCADE ]
Parameters
IF EXISTS
- Vertica does not report an error if the key to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
key-name
- The name of the cryptographic key to drop.
CASCADE
- Drops dependent objects before dropping the key.
Predefined TLS Configurations and TLS Configurations that manage a connection type cannot be dropped, nor can the keys and certificates referenced by such TLS Configurations. For details, see TLS configurations.
Privileges
Non-superuser, one of the following:
-
Ownership of the key
-
DROP privileges
Examples
Drop k_ca
, if it exists:
=> DROP KEY IF EXISTS k_ca;
DROP KEY;
Drop k_client
and its dependencies (the certificate it's associated with):
=> DROP KEY k_client CASCADE;
DROP KEY;
See also
16.14 - DROP LIBRARY
Removes a UDx library from the database.
Removes a UDx library from the database. The library file is deleted from managed directories on the Vertica nodes. The user-defined functions (UDFs) in the library are no longer available. See Developing user-defined extensions (UDxs) for details.
Syntax
DROP LIBRARY [ IF EXISTS ] [[database.]schema.]library [ CASCADE]
Arguments
IF EXISTS
- Execute this command only if the library exists. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
library
- The name of the library to drop, the same name used in CREATE LIBRARY to load the library.
CASCADE
- Also drop any functions that were defined using the library. DROP LIBRARY fails if CASCADE is omitted and one or more UDxs use the target library.
Privileges
One of:
Examples
A superuser can drop any library:
=> DROP LIBRARY ml.MyLib CASCADE;
Users with the UDXDEVELOPER role can drop libraries that they created:
=> GRANT UDXDEVELOPER TO alice, bob;
GRANT ROLE
=> \c - alice;
You are now connected as user "alice".
-- Must enable the role before using:
=> SET ROLE UDXDEVELOPER;
SET
-- Create and use ml.mylib...
-- Drop library and dependencies:
DROP LIBRARY ml.mylib CASCADE;
DROP LIBRARY
A user can be granted explicit permission to drop a library:
=> \c - alice
You are now connected as user "alice".
=> GRANT DROP ON LIBRARY ml.mylib to bob;
GRANT PRIVILEGE
=> \c - bob
You are now connected as user "bob".
=> SET ROLE UDXDEVELOPER;
SET
=> DROP LIBRARY ml.mylib cascade;
DROP LIBRARY
16.15 - DROP LOAD BALANCE GROUP
Deletes a load balancing group.
Deletes a load balancing group.
Syntax
DROP LOAD BALANCE GROUP [ IF EXISTS ] group_name [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if the load balance group to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
-
group_name
- The name of the group to drop.
[CASCADE]
- Also drops all load balancing routing rules that target this group. If you do not supply this keyword and one or more routing rules target
group_name
, this statement fails with an error message.
Privileges
Superuser
Examples
The following statement demonstrates the error you get if the load balancing group has a dependent routing rule, and the use of the CASCADE keyword:
=> DROP LOAD BALANCE GROUP group_all;
NOTICE 4927: The RoutingRule catch_all depends on LoadBalanceGroup group_all
ROLLBACK 3128: DROP failed due to dependencies
DETAIL: Cannot drop LoadBalanceGroup group_all because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too
=> DROP LOAD BALANCE GROUP group_all CASCADE;
DROP LOAD BALANCE GROUP
See also
16.16 - DROP MODEL
Removes one or more models from the Vertica database.
Removes one or more models from the Vertica database.
Syntax
DROP MODEL [ IF EXISTS ] [[database.]schema.]model[,...]
Parameters
IF EXISTS
- Specifies not to report an error if the models to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
- [
database
.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
model
- The model to drop.
Privileges
One of the following:
Examples
See Dropping models.
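For instance, a minimal sketch (the model name is an assumption):
=> DROP MODEL IF EXISTS mySvmClassModel;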
16.17 - DROP NAMESPACE
Eon Mode only
Drops a namespace from an Eon Mode database. The statement errors if there are any schemata or tables in the namespace.
Syntax
DROP NAMESPACE [ IF EXISTS ] name
Arguments
IF EXISTS
- Do not report an error if the namespace does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
name
- Name of the namespace to drop. You cannot drop the
default_namespace
.
Privileges
Superuser
Examples
Drop the analytics
namespace:
=> DROP NAMESPACE analytics;
DROP NAMESPACE
See also
16.18 - DROP NETWORK ADDRESS
Deletes a network address from the catalog.
Deletes a network address from the catalog. A network address is a name for an IP address and port on a node for use in connection load balancing policies.
Syntax
DROP NETWORK ADDRESS [ IF EXISTS ] address-name [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if the network address to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
address-name
- Name of the network address to drop.
CASCADE
- Removes the network address from any load balancing groups that target it. If you do not supply this keyword and one or more load balance groups include this address, this statement fails with an error message.
Privileges
Superuser
Examples
The following statement demonstrates the error you get if the network address has a dependent load balance group, and the use of the CASCADE keyword:
=> DROP NETWORK ADDRESS node01;
NOTICE 4927: The LoadBalanceGroup group_1 depends on NetworkInterface node01
NOTICE 4927: The LoadBalanceGroup group_random depends on NetworkInterface node01
ROLLBACK 3128: DROP failed due to dependencies
DETAIL: Cannot drop NetworkInterface node01 because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too
=> DROP NETWORK ADDRESS node01 CASCADE;
DROP NETWORK ADDRESS
16.19 - DROP NETWORK INTERFACE
Removes a network interface from Vertica.
Removes a network interface from Vertica. You can use the CASCADE option to also remove the network interface from any node definition. (See Identify the database or nodes used for import/export for more information.)
Syntax
DROP NETWORK INTERFACE [ IF EXISTS ] network-interface-name [ CASCADE ]
Parameters
The parameters are defined as follows:
IF EXISTS
- Specifies not to report an error if the network interface to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
network-interface-name
- The network interface to remove.
CASCADE
- Removes the network interface from all node definitions.
Privileges
Superuser
Examples
=> DROP NETWORK INTERFACE myNetwork;
16.20 - DROP NOTIFIER
Drops a push-based notifier created by CREATE NOTIFIER.
Drops a push-based notifier created by
CREATE NOTIFIER
.
Syntax
DROP NOTIFIER [ IF EXISTS ] notifier-name [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if notifier to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
notifier-name
- The notifier's unique identifier.
CASCADE
- Removes the notifier from any data collector (DC) table policies before dropping the notifier. If the notifier is set for a DC table and CASCADE is not specified, the DROP command fails.
To manually remove the notifier from DC table policies, use the SET_DATA_COLLECTOR_NOTIFY_POLICY function.
Examples
Drop the requests_issued
notifier, specifying CASCADE to remove it from any DC table policies:
DROP NOTIFIER requests_issued CASCADE;
16.21 - DROP PARSER
Drops a User Defined Load Parser function from the Vertica catalog.
Drops a User Defined Load Parser function from the Vertica catalog.
Syntax
DROP PARSER[[database.]schema.]parser()
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
parser
()
- The name of the parser function to drop. You must append empty parentheses to the function name.
Privileges
Non-superuser:
Examples
=> DROP PARSER BasicIntegerParser();
DROP PARSER
See also
16.22 - DROP PROCEDURE (external)
Removes an external procedure from Vertica.
Enterprise Mode only
Removes an external procedure from Vertica. Only the reference to the procedure inside Vertica is removed. The external file remains in the database
/
procedures
directory of each database node.
Syntax
DROP PROCEDURE [ IF EXISTS ] [[database.]schema.]procedure( [ parameter-list ] )
Parameters
IF EXISTS
- Specifies not to report an error if the procedure to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
procedure
- Specifies the procedure to drop.
parameter-list
- A comma-delimited list of formal parameters defined for this procedure, specified as follows:
[
parameter-name
]
parameter-type
[,...]
where parameter-name
optionally qualifies parameter-type
.
Privileges
Non-superuser:
Examples
=> DROP PROCEDURE helloplanet(arg1 varchar);
See also
CREATE PROCEDURE (external)
16.23 - DROP PROCEDURE (stored)
Drops a stored procedure.
Drops a stored procedure.
Syntax
DROP PROCEDURE [ IF EXISTS ] [[database.]schema.]procedure( [ parameter-type-list] ) [ CASCADE ];
Parameters
IF EXISTS
- Specifies not to report an error if the procedure to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
procedure
- The name of the stored procedure, where
procedure
conforms to conventions described in Identifiers.
parameter-type-list
- A comma-delimited list of the IN and INOUT parameters' types.
- CASCADE
- Drops the trigger that references the stored procedure, if any.
Privileges
Non-superuser:
Examples
Given the following procedure:
=> CREATE PROCEDURE raiseXY(IN x INT, y VARCHAR) LANGUAGE PLvSQL AS $$
BEGIN
RAISE NOTICE 'x = %', x;
RAISE NOTICE 'y = %', y;
-- some processing statements
END;
$$;
CALL raiseXY(3, 'some string');
NOTICE 2005: x = 3
NOTICE 2005: y = some string
You can drop it with:
=> DROP PROCEDURE raiseXY(INT, VARCHAR);
DROP PROCEDURE
For details on RAISE NOTICE, see Errors and diagnostics.
See also
16.24 - DROP PROFILE
Removes a user-defined profile (created by CREATE PROFILE) from the database.
Removes a user-defined profile (created by
CREATE PROFILE
) from the database. You cannot drop the DEFAULT
profile.
Syntax
DROP PROFILE [ IF EXISTS ] profile-name[,...] [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if the profile to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
profile-name
- The profile to drop.
CASCADE
- Moves all users assigned to the dropped profiles to the DEFAULT profile. If you omit this option and a targeted profile has users assigned to it, Vertica returns an error.
Privileges
Superuser
Examples
=> DROP PROFILE sample_profile;
16.25 - DROP PROJECTION
Marks a projection to drop from the catalog so it is unavailable to user queries.
Marks a projection to drop from the catalog so it is unavailable to user queries.
Syntax
DROP PROJECTION [ IF EXISTS ] { [[database.]schema.]projection[,...] } [ RESTRICT | CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if the projection to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
projection
- Specifies a projection to drop:
-
If the projection is unsegmented, all projection replicas in the database cluster are dropped.
-
If the projection is segmented, drop all buddy projections by specifying the projection base name. You can also specify the name of a specific buddy projection, as long as dropping it does not violate system K-safety.
See Projection naming for projection name conventions.
RESTRICT | CASCADE
- Specifies whether to drop the projection when it contains objects:
Privileges
Non-superuser: owner of the anchor table
Restrictions
The following restrictions apply to dropping a projection:
-
The projection cannot be the anchor table's superprojection.
-
You cannot drop a buddy projection if doing so violates system K-safety.
-
Another projection must be available to enforce the same primary or unique key constraint.
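Examples
For example, a minimal sketch that drops a projection and its buddies by base name (the projection name is an assumption):
=> DROP PROJECTION IF EXISTS store.store_sales_super;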
See also
16.26 - DROP RESOURCE POOL
Drops a user-created resource pool.
Drops a user-created resource pool. All memory allocated to the pool is returned to the GENERAL pool.
Syntax
DROP RESOURCE POOL resource-pool [ FOR { SUBCLUSTER subcluster | CURRENT SUBCLUSTER } ]
Parameters
resource-pool
- Name of the resource pool to drop.
FOR {SUBCLUSTER
subcluster
| CURRENT SUBCLUSTER}
- Eon Mode only, drops
resource-pool
from the specified subcluster, one of the following:
SUBCLUSTER
subcluster
: Drops resource-pool
from the named subcluster. You cannot be connected to this subcluster; otherwise, Vertica returns an error.
CURRENT SUBCLUSTER
: Drops resource-pool
from the subcluster you are connected to.
If you omit this parameter, the resource pool is dropped from all subclusters. If a resource pool was created for an individual subcluster, you must explicitly drop it from that subcluster with this parameter; otherwise, Vertica returns an error.
Privileges
Superuser
Resource pool transfers
Requests that are queued against the dropped pool are transferred to the GENERAL pool according to the priority of the pool compared to the GENERAL pool. If the pool’s priority is higher than the GENERAL pool, the requests are placed at the head of the queue; otherwise, transferred requests are placed at the end of the queue.
Users who are assigned to the dropped pool are reassigned to the default user resource pool as set by DefaultResourcePoolForUser. The DROP request returns with a notice like this:
NOTICE: Switched the following users to the <name> pool: <username>
If any user lacks permission to use the default user resource pool, Vertica rolls back the drop operation.
Existing sessions are transferred to the GENERAL pool regardless of whether the session user has privileges to use that pool. This can result in additional user privileges if access to the dropped pool is more restrictive than the GENERAL pool. In this case, you can prevent giving users additional privileges as follows:
-
Revoke permissions on the target resource pool from all users.
-
Close any sessions that had permissions on the resource pool.
-
Drop the resource pool.
Restrictions
-
If you try to drop a resource pool that is the secondary pool for another resource pool, Vertica returns an error. The error lists the resource pools that depend on the secondary pool you tried to drop. To drop a secondary resource pool, first set the CASCADE TO parameter to DEFAULT
on the primary resource pool, and then drop the secondary pool.
For example, you can drop resource pool rp2
, which is a secondary pool for rp1
, as follows:
=> ALTER RESOURCE POOL rp1 CASCADE TO DEFAULT;
=> DROP RESOURCE POOL rp2;
-
You cannot drop a resource pool that is configured as the default user resource pool by the DefaultResourcePoolForUsers parameter.
Examples
Drop a user-defined resource pool:
=> DROP RESOURCE POOL ceo_pool;
Get the name of the current subcluster for an Eon Mode database, then drop its resource pool:
=> SELECT CURRENT_SUBCLUSTER_NAME();
CURRENT_SUBCLUSTER_NAME
-------------------------
analytics_1
(1 row)
=> DROP RESOURCE POOL dashboard FOR CURRENT SUBCLUSTER;
DROP RESOURCE POOL
See also
16.27 - DROP ROLE
Removes a role from the database.
Removes a role from the database.
Note
You cannot use DROP ROLE on a role added to the Vertica database with the LDAPLink service.
Syntax
DROP ROLE [ IF EXISTS ] role-name[,...] [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if the roles to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
role-name
- The name of the role to drop
CASCADE
- Revoke the role from users and other roles before dropping the role
Privileges
Superuser
Examples
=> DROP ROLE appadmin;
NOTICE: User bob depends on Role appadmin
ROLLBACK: DROP ROLE failed due to dependencies
DETAIL: Cannot drop Role appadmin because other objects depend on it
HINT: Use DROP ROLE ... CASCADE to remove granted roles from the dependent users/roles
=> DROP ROLE appadmin CASCADE;
DROP ROLE
See also
16.28 - DROP ROUTING RULE
Deletes a routing rule from the catalog.
Deletes a routing rule from the catalog.
Syntax
DROP ROUTING RULE [ IF EXISTS ] { rule_name | FOR WORKLOAD workload_name }
Parameters
IF EXISTS
- Specifies not to report an error if the routing rule to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
rule_name
- Name of the rule to drop.
workload_name
- The name of a workload.
Privileges
Superuser
Examples
To drop a routing rule:
=> DROP ROUTING RULE internal_clients;
DROP ROUTING RULE
To drop a rule for a workload:
=> DROP ROUTING RULE FOR WORKLOAD analytics;
See also
16.29 - DROP SCHEDULE
Drops schedules.
Drops schedules.
Syntax
DROP SCHEDULE [[database.]schema.]schedule[,...] [ CASCADE ]
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
schedule
- The schedule to drop.
CASCADE
- Drops triggers that use this schedule, if any.
Privileges
Superuser
Examples
=> DROP SCHEDULE monthly_schedule;
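If triggers use the schedule, a minimal sketch that also drops them (same schedule name as above):
=> DROP SCHEDULE monthly_schedule CASCADE;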
16.30 - DROP SCHEMA
Permanently removes a schema from the database.
Permanently removes a schema from the database. Be sure that you want to remove the schema before you drop it, because DROP SCHEMA is an irreversible process. Use the CASCADE parameter to drop a schema containing one or more objects.
Syntax
DROP SCHEMA [ IF EXISTS ] [{namespace. | database. }]schema[,...] [ CASCADE | RESTRICT ]
Parameters
IF EXISTS
- Specifies not to report an error if the schemas to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
{ database | namespace }
- Name of the database or namespace that contains schema:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
CASCADE
- Specifies to drop the schema and all objects in it, regardless of who owns those objects.
Caution
Objects in other schemas that depend on objects in the dropped schema—for example, user-defined functions—also are silently dropped.
RESTRICT
- Drops the schema only if it is empty (default).
Privileges
Non-superuser: schema owner
Restrictions
-
You cannot drop the PUBLIC schema.
-
If a user is accessing an object within a schema that is in the process of being dropped, the schema is not deleted until the transaction completes.
-
Canceling a DROP SCHEMA statement can cause unpredictable results.
Examples
The following example drops schema S1 only if it doesn't contain any objects:
=> DROP SCHEMA S1;
The following example drops schema S1 whether or not it contains objects:
=> DROP SCHEMA S1 CASCADE;
16.31 - DROP SEQUENCE
Removes the specified named sequence number generator.
Removes the specified named sequence number generator.
Syntax
DROP SEQUENCE [ IF EXISTS ] [[database.]schema.]sequence[,...]
Parameters
IF EXISTS
- Specifies not to report an error if the sequences to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
sequence
- Name of the sequence to drop.
Privileges
Non-superusers: sequence or schema owner
Restrictions
-
If you drop a sequence that is specified in a table's default expression, the default expression fails the next time you try to load data. Vertica does not check for these dependencies.
-
DROP SEQUENCE does not support the CASCADE keyword. Sequences used in a default expression of a column cannot be dropped until all references to the sequence are removed from the default expression.
Examples
The following command drops the sequence named sequential.
=> DROP SEQUENCE sequential;
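If a column default still references the sequence, a minimal sketch of removing that reference before the drop (table and column names are hypothetical):
=> ALTER TABLE orders ALTER COLUMN order_id DROP DEFAULT;
=> DROP SEQUENCE sequential;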
See also
16.32 - DROP SOURCE
Drops a User Defined Load Source function from the Vertica catalog.
Drops a User Defined Load Source function from the Vertica catalog.
Syntax
DROP SOURCE [[database.]schema.]source()
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
source()
- Specifies the source function to drop. You must append empty parentheses to the function name.
Privileges
Non-superuser:
Examples
The following command drops the curl source function:
=> DROP SOURCE curl();
DROP SOURCE
See also
16.33 - DROP SUBNET
Removes a subnet from Vertica.
Removes a subnet from Vertica.
Syntax
DROP SUBNET [ IF EXISTS ] subnet-name[,...] [ CASCADE ]
Parameters
The parameters are defined as follows:
IF EXISTS
- Specifies not to report an error if the subnets to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
subnet-name
- A subnet to remove.
CASCADE
- Removes the specified subnets from all database definitions.
Privileges
Superuser
Examples
=> DROP SUBNET mySubnet;
See also
Identify the database or nodes used for import/export
16.34 - DROP TABLE
Removes one or more tables and their projections.
Removes one or more tables and their projections. When you run DROP TABLE
, the change is auto-committed.
Syntax
DROP TABLE [ IF EXISTS ] [[{namespace. | database. }]schema.]table[,...] [ CASCADE ]
Parameters
IF EXISTS
- Specifies not to report an error if one or more of the tables to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table
- The table to drop.
CASCADE
- Specifies to drop all projections of the target tables.
CASCADE is optional if the target tables have only auto-projections. If you omit this option and any of the tables has non-superprojections, Vertica returns an error and rolls back the entire drop operation.
This option is not valid for external tables.
Privileges
Non-superuser:
Requirements
-
Do not cancel an executing DROP TABLE
. Doing so can leave the database in an inconsistent state.
-
Check that the target table is not in use, either directly or indirectly—for example, in a view.
-
If you drop and restore a table that is referenced by a view, the new table must have the same name and column definitions.
Examples
See Dropping tables
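As a minimal sketch (the table name is hypothetical), the following drops a table and all of its projections:
=> DROP TABLE IF EXISTS sales_summary CASCADE;
DROP TABLE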
See also
16.35 - DROP TEXT INDEX
Drops a text index used to perform text searches.
Drops a text index used to perform text searches.
Note
When a source table is dropped that has a text index associated with it, the text index is also dropped.
Syntax
DROP TEXT INDEX [ IF EXISTS ] [[database.]schema.]idx-table
Parameters
IF EXISTS
- Specifies not to report an error if the text index to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
idx-table
- Specifies the text index name. When using more than one schema, specify the schema that contains the index in the DROP TEXT INDEX statement.
Privileges
Examples
=> DROP TEXT INDEX t_text_index;
DROP INDEX
See also
16.36 - DROP TLS CONFIGURATION
Drops an existing TLS Configuration.
Drops an existing TLS Configuration.
You cannot drop a TLS Configuration if it is set as a configuration parameter. For details, see TLS configurations.
Syntax
DROP TLS CONFIGURATION tls_config_name
Parameters
tls_config_name
- The name of the TLS Configuration object to drop.
Privileges
Non-superuser, one of the following:
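Examples
A minimal sketch, assuming a user-created TLS Configuration named client_tls_config (a hypothetical name):
=> DROP TLS CONFIGURATION client_tls_config;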
16.37 - DROP TRANSFORM FUNCTION
Drops a user-defined transform function (UDTF) from the Vertica catalog.
Drops a user-defined transform function (UDTF) from the Vertica catalog.
Syntax
DROP TRANSFORM FUNCTION [ IF EXISTS ] [[database.]schema.]function( [ arg-list ] )
Parameters
IF EXISTS
- Specifies not to report an error if the function to drop does not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Specifies the transform function to drop.
arg-list
- A comma-delimited list of arguments as defined for this function when it was created, specified as follows:
[arg-name] arg-type[,...]
where arg-name optionally qualifies arg-type.
Note
You can omit arg-list when dropping a polymorphic function.
Privileges
One of the following:
Examples
The following command drops the tokenize UDTF in the macros schema:
=> DROP TRANSFORM FUNCTION macros.tokenize(varchar);
DROP TRANSFORM FUNCTION
The following command drops the Pagerank polymorphic function in the online schema:
=> DROP TRANSFORM FUNCTION online.Pagerank();
DROP TRANSFORM FUNCTION
See also
CREATE TRANSFORM FUNCTION
16.38 - DROP TRIGGER
Drops triggers.
Drops triggers. Dropping a trigger disables its associated schedule, if any.
Syntax
DROP TRIGGER [[database.]schema.]trigger[,...]
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
trigger
- The trigger to drop.
Privileges
Superuser
Examples
To drop a trigger:
=> DROP TRIGGER revoke_trigger;
16.39 - DROP USER
Removes a name from the list of authorized database users.
Removes a name from the list of authorized database users.
Note
DROP USER cannot remove a user that was added to the Vertica database with the LDAPLink service.
Syntax
DROP USER [ IF EXISTS ] user-name[,...] [ CASCADE ]
Parameters
IF EXISTS
- Do not report an error if the users to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
user-name
- Name of a user to drop.
CASCADE
- Drop all user-defined objects created by user-name, including schemas, tables, all views that reference those tables, and the tables' projections.
Caution
Tables owned by the dropped user cannot be recovered after you issue DROP USER CASCADE.
Privileges
Superuser
Examples
DROP USER succeeds if no user-defined objects exist:
=> CREATE USER user2;
CREATE USER
=> DROP USER IF EXISTS user2;
DROP USER
DROP USER fails if objects that the user created still exist:
=> DROP USER IF EXISTS user1;
NOTICE: Table T_tbd1 depends on User user1
ROLLBACK: DROP failed due to dependencies
DETAIL: Cannot drop User user1 because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too
DROP USER CASCADE succeeds regardless of any existing user-defined objects. The statement forcibly drops all user-defined objects, such as schemas, tables and their associated projections:
=> DROP USER IF EXISTS user1 CASCADE;
DROP USER
See also
16.40 - DROP VIEW
Removes the specified view.
Removes the specified view. Vertica does not check for dependencies on the dropped view. After dropping a view, other views that reference it fail.
If you drop a view and replace it with another view or table with the same name and column names, other views that reference that name use the new view. If you change the column data type in the new view, the server coerces the old data type to the new one if possible; otherwise, it returns an error.
Syntax
DROP VIEW [ IF EXISTS ] [[database.]schema.]view[,...]
Parameters
IF EXISTS
- Specifies not to report an error if the views to drop do not exist. Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
view
- Name of a view to drop.
Privileges
One of the following
Examples
=> DROP VIEW myview;
17 - END
Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.
Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.
Note
COMMIT is a synonym for END.
Syntax
END [ WORK | TRANSACTION ]
Parameters
WORK | TRANSACTION
- Optional keywords that have no effect, for readability only.
Privileges
None
Examples
=> BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED READ WRITE;
BEGIN
=> CREATE TABLE sample_table (a INT);
CREATE TABLE
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
(1 row)
=> END;
COMMIT
See also
18 - EXECUTE DATA LOADER
EXECUTE DATA LOADER runs a data loader to load new files from a specified path.
EXECUTE DATA LOADER executes a data loader created with CREATE DATA LOADER. It attempts to load all files in the specified location that have not already been loaded and that have not reached the retry limit. By default, executing the loader commits the transaction.
Data loaders are executed with COPY ABORT ON ERROR.
EXECUTE DATA LOADER records events in the DATA_LOADER_EVENTS system table.
EXECUTE DATA LOADER runs the data loader once. To execute it periodically, you can use a scheduled stored procedure. To suspend execution, call ALTER DATA LOADER with the DISABLE option.
Syntax
EXECUTE DATA LOADER [schema.]name
[ WITH FILES file [{ EXACT_PATH | EXPAND_GLOB }][,...] ]
[ FORCE RETRY ]
[ BATCH_SIZE { 'size{K|M|G|T}' | '0' | 'UNLIMITED' } ]
[ NO COMMIT ]
Arguments
schema
- Schema containing the data loader. The default schema is public.
name
- Name of the data loader.
WITH FILES file
- Load only the specified files from the data loader's COPY path. Files that do not also match the COPY path are ignored. Files that have already been loaded are not reloaded. Both of these events are recorded in the loader's associated monitoring table.
EXACT_PATH | EXPAND_GLOB
- How to match the named file:
-
EXACT_PATH (default): file names must match exactly. Any special characters in the file name are treated as literal parts of the name. For example, sales*.parquet specifies a single file whose name includes the * character.
-
EXPAND_GLOB: glob characters in file names are interpreted as globs. For example, sales*.parquet specifies one or more Parquet files beginning with sales.
FORCE RETRY
- Ignore the retry limit.
BATCH_SIZE { 'size{K|M|G|T}' | '0' | 'UNLIMITED' }
- When dividing a load for parallel execution, override the Vertica default and set the target total size of files to include in one batch. This value is a hint and batches could be larger, depending on the sizes of individual files.
If this option is omitted, Vertica chooses a size based on the total data volume and the number of executor nodes. The maximum batch size that Vertica automatically chooses is 10GB on each node, but you can specify a larger value with this option.
To set a target size, specify an integer with a unit of measure, such as '100M' or '2G'. To disable batch loads and load files one at a time, specify '0'. To load all files in a single batch, specify 'UNLIMITED'.
NO COMMIT
- Do not commit the transaction after execution. Use when execution is part of an operation that must be atomic. SQS triggers use this option so that execution and queue management are part of the same transaction.
Privileges
Read privileges to the source location.
Non-superuser:
- Owner or EXECUTE privilege on the data loader.
- Write privilege on the table.
Examples
Execute a data loader once:
=> EXECUTE DATA LOADER sales_dl;
Rows Loaded
-------------
5
(1 row)
Schedule a recurring execution:
=> CREATE SCHEDULE daily_load USING CRON '0 13 * * *';
--- statement must be quoted:
=> CREATE PROCEDURE dl_runner() as $$ begin EXECUTE 'EXECUTE DATA LOADER sales_dl'; end; $$;
=> CREATE TRIGGER t_load ON SCHEDULE daily_load EXECUTE PROCEDURE dl_runner() AS DEFINER;
Load specific files with globbing:
--- no file named '09*.dat' so nothing loaded by default:
=> EXECUTE DATA LOADER sales_dl
WITH FILES 's3://MyBucket/sales/2023/09*.dat';
Rows Loaded
-------------
0
(1 row)
--- loads '09-01.dat', '09-02.dat', etc:
=> EXECUTE DATA LOADER sales_dl
WITH FILES 's3://MyBucket/sales/2023/09*.dat' EXPAND_GLOB;
Rows Loaded
-------------
20
(1 row)
--- loads one glob and one specific file:
=> EXECUTE DATA LOADER sales_dl
WITH FILES 's3://MyBucket/sales/2023/09*.dat' EXPAND_GLOB,
's3://MyBucket/sales/2023/specials.dat' EXACT_PATH;
Rows Loaded
-------------
32
(1 row)
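As a further sketch (the batch size shown is illustrative), the following ignores the retry limit and sets a target batch size:
=> EXECUTE DATA LOADER sales_dl FORCE RETRY BATCH_SIZE '500M';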
See Automatic load for an extended example.
See also
19 - EXPLAIN
Returns a formatted description of the Vertica optimizer's plan for executing the specified statement.
Returns a formatted description of the Vertica optimizer's plan for executing the specified statement.
Syntax
EXPLAIN [/*+ ALLNODES */] [explain-options] sql-statement
Parameters
/*+ALLNODES*/
- Specifies to create a query plan that assumes all nodes are active; not valid with the LOCAL option.
explain-options
- One or more EXPLAIN options, specified in the order shown:
[ LOCAL ] [ VERBOSE ] [ JSON ] [ ANNOTATED ]
-
LOCAL: On a multi-node database, shows the local query plans assigned to each node, which together comprise the total (global) query plan. If you omit this option, Vertica shows only the global query plan. Local query plans are shown only in DOT language source, which can be rendered in Graphviz.
This option is incompatible with the hint /*+ALLNODES*/. If you specify both, EXPLAIN returns an error.
-
VERBOSE: Increases the level of detail in the rendered query plan.
-
JSON: Renders the query plan in JSON format. This option is compatible only with VERBOSE.
-
ANNOTATED: Embeds optimizer hints that encapsulate the query plan for this query. This option is compatible with LOCAL and VERBOSE.
sql-statement
- A query or DML statement—for example, SELECT, INSERT, UPDATE, COPY, and MERGE.
Privileges
The same privileges required by the specified statement.
Requirements
The following requirements apply to EXPLAIN's ability to produce useful information:
- Reasonably representative statistics of your data must be available. See Collecting Statistics for details.
- EXPLAIN produces useful output only if projections are available for the queried tables.
- Qualifier options must be specified in the order shown earlier; otherwise, EXPLAIN returns an error. If an option is incompatible with any preceding options, EXPLAIN ignores them.
Examples
See Viewing query plans.
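As a minimal sketch (table and column names are hypothetical):
=> EXPLAIN SELECT customer_name, customer_state FROM customer_dimension WHERE customer_state = 'MA';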
20 - EXPORT TO DELIMITED
Exports a table, columns from a table, or query results to delimited files.
Exports a table, columns from a table, or query results to delimited files. The files can be read back in using DELIMITED. Several exporter parameters have corresponding parser parameters, allowing you to change delimiters, null indicators, and other formatting.
There are some limitations on the queries you can use in an export statement. See Query Restrictions.
You can export data stored in Vertica in ROS format and data from external tables.
File exporters are UDxs that cannot be run in fenced mode. If the ForceUDxFencedMode configuration parameter is set to true to support other UDxs, you can set it to false at the session level before performing the export.
This statement returns the number of rows written and logs information about exported files in a system table. See Monitoring exports.
During an export to HDFS or an NFS mount point, Vertica writes files to a temporary directory in the same location as the destination and renames the directory when the export is complete. Do not attempt to use the files in the temporary directory. During an export to S3, GCS, or Azure, Vertica writes files directly to the destination path, so you must wait for the export to finish before reading the files. For more information, see Exporting to object stores.
Syntax
EXPORT [ /*+LABEL (label-string)*/ ] TO DELIMITED
( directory='path'[, param=value[,...] ] )
[ OVER (over-clause ) ] AS SELECT query-expression
Arguments
-
LABEL
- Assigns a label to a statement to identify it for profiling and debugging.
over-clause
- Specifies how to partition table data using PARTITION BY. Within partitions you can sort using ORDER BY. See SQL analytics. This clause may contain column references but not expressions.
If you partition data, Vertica creates a partition directory structure, transforming column names to lowercase. See Partitioned data for a description of the directory structure. If you use the fileName
parameter, you cannot use partitioning.
If you omit this clause, Vertica optimizes for maximum parallelism.
query-expression
- Specifies the data to export. See Query Restrictions for important limitations.
Parameters
directory
The destination directory for the output files. The current user must have permission to write it. The destination can be on any of the following file systems:
See also: ifDirExists
.
ifDirExists
What to do if directory
already exists, one of:
-
fail
(default)
-
overwrite
: replace the entire directory
-
append
: export new files into the existing directory with a prepended hash that identifies all files in the exported batch
-
appendNoHash
: export new files into the existing directory without a prepended hash; with this option, Vertica cannot clean up partial exports
If you specify overwrite
for an export to an object store, the existing directory is deleted recursively at the beginning of the operation and is not restored if the operation fails. Be careful not to export to a directory containing data you want to keep. For an export to a Linux file system or HDFS, the directory is only overwritten if the export succeeds.
Do not do concurrent exports to the same directory. In particular, if you do so with a value of overwrite
, all operations appear to succeed, but the results are incorrect.
When using append
, be careful to use the same table schema. Otherwise, queries of external tables using this data path could fail.
filename
If specified, all output is written to a single file of this name in the location specified by directory
. While the query can be processed by multiple nodes, only a single node generates the output data. The fileSizeMB
parameter is ignored, and the query cannot use partitioning in the OVER() clause.
filePrefix
Prefix to use for exported files instead of the default hash. If used with ifDirExists=appendNoHash
, choose a unique prefix to avoid overwriting existing files.
addHeader
- If true, add a header row to the beginning of each file.
Default: false
delimiter
- Column delimiter character. To produce CSV in accordance with RFC 4180, set the delimiter character to , (comma).
Default: | (vertical bar)
escapeDelimitersInsideEnclosures
- If true, delimiter characters are escaped within values that are enclosed by
enclosedBy
. This parameter is most relevant when enclosedBy
and escapeAs
have the same value. A value of false only applies when enclosedBy
has a value (is not an empty string).
Default: true
recordTerminator
- Character that marks the record end.
Default: \n
enclosedBy
- Character to use to enclose string and date/time data. If you omit this parameter, no character encloses these data types.
Default: ''
(empty string, no enclosing character)
escapeAs
- Character to use to escape values in exported data that must be escaped, including the
enclosedBy
value.
Default: \
(backslash)
nullAs
- String to represent null values in the data. If this parameter is included, the exporter exports all null values as this value. Otherwise, the exporter exports null values as zero-length strings.
binaryTypesFormat
- Format for exported binary data type (BINARY, VARBINARY, and LONG VARBINARY) values, one of the following:
-
Default: Printable ASCII characters where possible and escaped octal representations of the non-printable bytes. The DELIMITED parser reads this format.
-
Hex: Base 16 (hexadecimal) representation; the value is preceded by '0x' and bytes are not escaped.
-
Octal: Base 8 (octal) representation, without escaping.
-
Bitstring: Binary representation, without escaping.
For example, the value a\000b\001c can be exported in any of these formats.
compression
- Compression type, one of:
Default: Uncompressed
fileExtension
- Output file extension. If using compression, a compression-specific extension such as
.bz2
is appended.
Default: csv
fileSizeMB
The maximum file size of a single output file. This value is a hint, not a hard limit. A value of 0 specifies no limit. If filename
is also specified, fileSizeMB
is ignored.
This value affects the size of individual output files, not the total output size. For smaller values, Vertica divides the output into more files; all data is still exported.
Default: 10GB
fileMode
For writes to HDFS only, permission to apply to all exported files. You can specify the value in Unix octal format (such as 665
) or user
-
group
-
other
format—for example, rwxr-xr-x
. The value must be formatted as a string even if using the octal format.
Valid octal values range between 0
and 1777
, inclusive. See HDFS Permissions in the Apache Hadoop documentation.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 660, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
dirMode
For writes to HDFS only, permission to apply to all exported directories. Values follow the same rules as those for fileMode
. Further, you must give the Vertica HDFS user full permission, at least rwx------
or 700
.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 755, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
Privileges
Non-superusers:
Query restrictions
You must provide an alias column label for selected column targets that are expressions.
If you partition the output, you cannot specify schema and table names in the SELECT statement. Specify only the column name.
The query can contain only a single outer SELECT statement. For example, you cannot use UNION:
=> EXPORT TO DELIMITED(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash;
ERROR 8975: Only a single outer SELECT statement is supported
HINT: Please use a subquery for multiple outer SELECT statements
Instead, rewrite the query to use a subquery:
=> EXPORT TO DELIMITED(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT
account_id,
json
FROM
(
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash
) a;
Rows Exported
---------------
2
(1 row)
To use composite statements such as UNION, INTERSECT, and EXCEPT, rewrite them as subqueries.
Data types
EXPORT TO DELIMITED does not support ARRAY, ROW, and SET types.
This operation exports raw Flex columns as binary data.
Output
The export operation always creates (or appends to) an output directory, even if all output is written to a single file or the query produces zero rows.
By default, output file names follow the pattern: prefix-nodename-threadId[-sequenceNumber].extension. The prefix is typically an 8-character hash, but can be longer if the export appended to an existing directory. A sequence number is added if an export needs to be broken into pieces to satisfy fileSizeMB.
You can change the prefix using the filePrefix parameter or, for appends, by setting ifDirExists=appendNoHash. You can write all output to a single named file using the filename parameter.
Column names in partition directories are lowercase.
Files exported to a local file system by any Vertica user are owned by the Vertica superuser. Files exported to HDFS or object stores are owned by the Vertica user who exported the data.
Making concurrent exports to the same output destination is an error and can produce incorrect results.
Exports to the local file system can be to an NFS mount (shared) or to the Linux file system on each node (non-shared). For details, see Exporting to the Linux file system. Exports to non-shared local file systems have the following restrictions:
-
The output directory must not exist on any node.
-
You must have a USER storage location or superuser privileges.
-
You cannot override the permissions mode of 700 for directories and 600 for files.
Exports to object-store file systems are not atomic. Be careful to wait for the export to finish before using the data. For details, see Exporting to object stores.
Examples
The following example exports uncompressed comma-separated values (CSV) with a header row in each file:
=> EXPORT TO DELIMITED(directory='webhdfs:///user1/data', delimiter=',', addHeader='true')
AS SELECT * FROM public.sales;
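As a further sketch (the directory is illustrative), the following produces RFC 4180-style CSV with quoted values:
=> EXPORT TO DELIMITED(directory='s3://mybucket/sales_csv', delimiter=',', enclosedBy='"', addHeader='true')
   AS SELECT * FROM public.sales;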
21 - EXPORT TO JSON
Exports a table, columns from a table, or query results to JSON files.
Exports a table, columns from a table, or query results to JSON files. The files can be read back into Vertica using FJSONPARSER.
There are some limitations on the queries you can use in an export statement. See Query Restrictions.
You can export data stored in Vertica in ROS format and data from external tables.
File exporters are UDxs that cannot be run in fenced mode. If the ForceUDxFencedMode configuration parameter is set to true to support other UDxs, you can set it to false at the session level before performing the export.
This statement returns the number of rows written and logs information about exported files in a system table. See Monitoring exports.
During an export to HDFS or an NFS mount point, Vertica writes files to a temporary directory in the same location as the destination and renames the directory when the export is complete. Do not attempt to use the files in the temporary directory. During an export to S3, GCS, or Azure, Vertica writes files directly to the destination path, so you must wait for the export to finish before reading the files. For more information, see Exporting to object stores.
Syntax
EXPORT [ /*+LABEL (label)*/ ] TO JSON
( directory='path'[, param=value[,...] ] )
[ OVER (over-clause ) ] AS SELECT query-expression
Arguments
-
LABEL
- Assigns a label to a statement to identify it for profiling and debugging.
over-clause
- Specifies how to partition table data using PARTITION BY. Within partitions you can sort using ORDER BY. See SQL analytics. This clause may contain column references but not expressions.
If you partition data, Vertica creates a partition directory structure, transforming column names to lowercase. See Partitioned data for a description of the directory structure. If you use the fileName
parameter, you cannot use partitioning.
If you omit this clause, Vertica optimizes for maximum parallelism.
query-expression
- Specifies the data to export. See Query Restrictions for important limitations.
Parameters
directory
The destination directory for the output files. The current user must have permission to write it. The destination can be on any of the following file systems:
See also: ifDirExists
.
ifDirExists
What to do if directory
already exists, one of:
-
fail
(default)
-
overwrite
: replace the entire directory
-
append
: export new files into the existing directory with a prepended hash that identifies all files in the exported batch
-
appendNoHash
: export new files into the existing directory without a prepended hash; with this option, Vertica cannot clean up partial exports
If you specify overwrite
for an export to an object store, the existing directory is deleted recursively at the beginning of the operation and is not restored if the operation fails. Be careful not to export to a directory containing data you want to keep. For an export to a Linux file system or HDFS, the directory is only overwritten if the export succeeds.
Do not do concurrent exports to the same directory. In particular, if you do so with a value of overwrite
, all operations appear to succeed, but the results are incorrect.
When using append
, be careful to use the same table schema. Otherwise, queries of external tables using this data path could fail.
filename
If specified, all output is written to a single file of this name in the location specified by directory
. While the query can be processed by multiple nodes, only a single node generates the output data. The fileSizeMB
parameter is ignored, and the query cannot use partitioning in the OVER() clause.
filePrefix
Prefix to use for exported files instead of the default hash. If used with ifDirExists=appendNoHash
, choose a unique prefix to avoid overwriting existing files.
omitNullFields
- Boolean, whether to omit ROW fields with null values.
Default: false
compression
- Compression type, one of:
Default: Uncompressed
fileSizeMB
The maximum file size of a single output file. This value is a hint, not a hard limit. A value of 0 specifies no limit. If filename
is also specified, fileSizeMB
is ignored.
This value affects the size of individual output files, not the total output size. For smaller values, Vertica divides the output into more files; all data is still exported.
Default: 10GB
fileMode
For writes to HDFS only, permission to apply to all exported files. You can specify the value in Unix octal format (such as 665
) or user
-
group
-
other
format—for example, rwxr-xr-x
. The value must be formatted as a string even if using the octal format.
Valid octal values range between 0
and 1777
, inclusive. See HDFS Permissions in the Apache Hadoop documentation.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 660, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
dirMode
For writes to HDFS only, permission to apply to all exported directories. Values follow the same rules as those for fileMode
. Further, you must give the Vertica HDFS user full permission, at least rwx------
or 700
.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 755, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
Privileges
Non-superusers:
Query restrictions
You must provide an alias column label for selected column targets that are expressions.
If you partition the output, you cannot specify schema and table names in the SELECT statement. Specify only the column name.
The query can contain only a single outer SELECT statement. For example, you cannot use UNION:
=> EXPORT TO JSON(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash;
ERROR 8975: Only a single outer SELECT statement is supported
HINT: Please use a subquery for multiple outer SELECT statements
Instead, rewrite the query to use a subquery:
=> EXPORT TO JSON(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT
account_id,
json
FROM
(
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash
) a;
Rows Exported
---------------
2
(1 row)
To use composite statements such as UNION, INTERSECT, and EXCEPT, rewrite them as subqueries.
Data types
EXPORT TO JSON can export ARRAY and ROW types in any combination.
EXPORT TO JSON does not support binary output (VARBINARY).
Output
The export operation always creates (or appends to) an output directory, even if all output is written to a single file or the query produces zero rows.
By default, output file names follow the pattern: prefix-nodename-threadId[-sequenceNumber].extension. The prefix is typically an 8-character hash, but can be longer if the export appended to an existing directory. A sequence number is added if an export needs to be broken into pieces to satisfy fileSizeMB.
You can change the prefix using the filePrefix parameter or, for appends, by setting ifDirExists=appendNoHash. You can write all output to a single named file using the filename parameter.
Column names in partition directories are lowercase.
Files exported to a local file system by any Vertica user are owned by the Vertica superuser. Files exported to HDFS or object stores are owned by the Vertica user who exported the data.
Making concurrent exports to the same output destination is an error and can produce incorrect results.
Exports to the local file system can be to an NFS mount (shared) or to the Linux file system on each node (non-shared). For details, see Exporting to the Linux file system. Exports to non-shared local file systems have the following restrictions:
-
The output directory must not exist on any node.
-
You must have a USER storage location or superuser privileges.
-
You cannot override the permissions mode of 700 for directories and 600 for files.
Exports to object-store file systems are not atomic. Be careful to wait for the export to finish before using the data. For details, see Exporting to object stores.
Examples
In the following example, one of the ROW elements has a null value, which is omitted in the output. EXPORT TO JSON writes each JSON record on one line; line breaks have been inserted into the following output for readability:
=> SELECT name, menu FROM restaurants;
name | menu
-------------------+------------------------------------------------------------------------------
Bob's pizzeria | [{"item":"cheese pizza","price":null},{"item":"spinach pizza","price":10.5}]
Bakersfield Tacos | [{"item":"veggie taco","price":9.95},{"item":"steak taco","price":10.95}]
(2 rows)
=> EXPORT TO JSON (directory='/output/json', omitNullFields=true)
AS SELECT * FROM restaurants;
Rows Exported
---------------
2
(1 row)
=> \! cat /output/json/*.json
{"name":"Bob's pizzeria","cuisine":"Italian","location_city":["Cambridge","Pittsburgh"],
"menu":[{"item":"cheese pizza"},{"item":"spinach pizza","price":10.5}]}
{"name":"Bakersfield Tacos","cuisine":"Mexican","location_city":["Pittsburgh"],
"menu":[{"item":"veggie taco","price":9.95},{"item":"steak taco","price":10.95}]}
22 - EXPORT TO ORC
Exports a table, columns from a table, or query results to files in the ORC format.
Exports a table, columns from a table, or query results to files in the ORC format.
You can use an OVER() clause to partition the data before export. You can partition data instead of or in addition to exporting the column data. Partitioning data can improve query performance by enabling partition pruning. See Partitioned data.
There are some limitations on the queries you can use in an export statement. See Query Restrictions.
You can export data stored in Vertica in ROS format and data from external tables.
File exporters are UDxs that cannot be run in fenced mode. If the ForceUDxFencedMode configuration parameter is set to true to support other UDxs, you can set it to false at the session level before performing the export.
This statement returns the number of rows written and logs information about exported files in a system table. See Monitoring exports.
During an export to HDFS or an NFS mount point, Vertica writes files to a temporary directory in the same location as the destination and renames the directory when the export is complete. Do not attempt to use the files in the temporary directory. During an export to S3, GCS, or Azure, Vertica writes files directly to the destination path, so you must wait for the export to finish before reading the files. For more information, see Exporting to object stores.
Syntax
EXPORT [ /*+LABEL (label-string)*/ ] TO ORC
( directory='path'[, param=value[,...] ] )
[ OVER (over-clause ) ] AS SELECT query-expression
Arguments
-
LABEL
- Assigns a label to a statement to identify it for profiling and debugging.
over-clause
- Specifies how to partition table data using PARTITION BY. Within partitions you can sort using ORDER BY. See SQL analytics. This clause may contain column references but not expressions.
If you partition data, Vertica creates a partition directory structure, transforming column names to lowercase. See Partitioned data for a description of the directory structure. If you use the fileName
parameter, you cannot use partitioning.
If you omit this clause, Vertica optimizes for maximum parallelism.
query-expression
- Specifies the data to export. See Query Restrictions for important limitations.
Parameters
directory
The destination directory for the output files. The current user must have permission to write it. The destination can be on any of the following file systems:
See also: ifDirExists
.
ifDirExists
What to do if directory
already exists, one of:
-
fail
(default)
-
overwrite
: replace the entire directory
-
append
: export new files into the existing directory with a prepended hash that identifies all files in the exported batch
-
appendNoHash
: export new files into the existing directory without a prepended hash; with this option, Vertica cannot clean up partial exports
If you specify overwrite
for an export to an object store, the existing directory is deleted recursively at the beginning of the operation and is not restored if the operation fails. Be careful not to export to a directory containing data you want to keep. For an export to a Linux file system or HDFS, the directory is only overwritten if the export succeeds.
Do not do concurrent exports to the same directory. In particular, if you do so with a value of overwrite
, all operations appear to succeed, but the results are incorrect.
When using append
, be careful to use the same table schema. Otherwise, queries of external tables using this data path could fail.
filename
If specified, all output is written to a single file of this name in the location specified by directory
. While the query can be processed by multiple nodes, only a single node generates the output data. The fileSizeMB
parameter is ignored, and the query cannot use partitioning in the OVER() clause.
filePrefix
Prefix to use for exported files instead of the default hash. If used with ifDirExists=appendNoHash
, choose a unique prefix to avoid overwriting existing files.
compression
- Column compression type, one of:
Default: Zlib
stripeSizeMB
- The uncompressed size of exported stripes in MB, an integer value between 1 and 1024, inclusive.
Default: 250
rowIndexStride
- Integer that specifies how frequently the exporter builds indexing statistics in the output, between 1 and 1000000 (1 million), inclusive. A value of 0 disables indexing. The exporter builds statistics after every rowIndexStride rows in each stripe, or once for stripes < rowIndexStride.
Default: 1000
fileSizeMB
The maximum file size of a single output file. This value is a hint, not a hard limit. A value of 0 specifies no limit. If filename
is also specified, fileSizeMB
is ignored.
This value affects the size of individual output files, not the total output size. For smaller values, Vertica divides the output into more files; all data is still exported.
Default: 10GB
fileMode
For writes to HDFS only, permission to apply to all exported files. You can specify the value in Unix octal format (such as 665
) or user
-
group
-
other
format—for example, rwxr-xr-x
. The value must be formatted as a string even if using the octal format.
Valid octal values range between 0
and 1777
, inclusive. See HDFS Permissions in the Apache Hadoop documentation.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 660, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
dirMode
For writes to HDFS only, permission to apply to all exported directories. Values follow the same rules as those for fileMode
. Further, you must give the Vertica HDFS user full permission, at least rwx------
or 700
.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 755, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
Privileges
Non-superusers:
Query restrictions
You must provide an alias column label for selected column targets that are expressions.
If you partition the output, you cannot specify schema and table names in the SELECT statement. Specify only the column name.
The query can contain only a single outer SELECT statement. For example, you cannot use UNION:
=> EXPORT TO ORC(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash;
ERROR 8975: Only a single outer SELECT statement is supported
HINT: Please use a subquery for multiple outer SELECT statements
Instead, rewrite the query to use a subquery:
=> EXPORT TO ORC(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT
account_id,
json
FROM
(
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash
) a;
Rows Exported
---------------
2
(1 row)
To use composite statements such as UNION, INTERSECT, and EXCEPT, rewrite them as subqueries.
Data types
EXPORT TO ORC converts Vertica data types to Hive data types as shown in the following table.
Vertica Data Type | Hive Data Type
INTEGER, BIGINT | BIGINT
FLOAT, DECIMAL, SMALLINT, TINYINT, CHAR, BOOLEAN | Corresponding Hive type
VARCHAR, LONG VARCHAR | VARCHAR (max 64KB) or STRING (can be read as either)
BINARY, VARBINARY, LONG VARBINARY | BINARY
DATE | DATE if supported by your version of Hive, otherwise INT96 (can be read as TIMESTAMP)
TIMESTAMP, TIMESTAMPTZ | TIMESTAMP. Vertica does not convert TIMESTAMP values to UTC. To avoid problems arising from time zones, use TIMESTAMPTZ instead of TIMESTAMP.
TIME, TIMETZ, INTERVAL, UUID | Not supported
ARRAY, SET | Not supported
ROW | Not supported
Decimal precision must be <= 38.
The exported Hive types might not be identical to the Vertica types. For example, a Vertica INT is exported as a Hive BIGINT. When defining Hive external tables to read exported data, you might have to adjust column definitions.
This operation exports raw Flex columns as binary data.
Output
The export operation always creates (or appends to) an output directory, even if all output is written to a single file or the query produces zero rows.
By default, output file names follow the pattern: prefix-nodename-threadId[-sequenceNumber].extension. The prefix is typically an 8-character hash, but can be longer if the export appended to an existing directory. A sequence number is added if an export needs to be broken into pieces to satisfy fileSizeMB.
You can change the prefix using the filePrefix parameter or, for appends, by setting ifDirExists=appendNoHash. You can write all output to a single named file using the filename parameter.
Column names in partition directories are lowercase.
Files exported to a local file system by any Vertica user are owned by the Vertica superuser. Files exported to HDFS or object stores are owned by the Vertica user who exported the data.
Making concurrent exports to the same output destination is an error and can produce incorrect results.
Exports to the local file system can be to an NFS mount (shared) or to the Linux file system on each node (non-shared). For details, see Exporting to the Linux file system. Exports to non-shared local file systems have the following restrictions:
-
The output directory must not exist on any node.
-
You must have a USER storage location or superuser privileges.
-
You cannot override the permissions mode of 700 for directories and 600 for files.
Exports to object-store file systems are not atomic. Be careful to wait for the export to finish before using the data. For details, see Exporting to object stores.
Examples
The following example demonstrates partitioning and exporting data. EXPORT TO ORC first partitions the data on region and then, within each partition, sorts by store.
=> EXPORT TO ORC(directory='gs://DataLake/user2/data')
OVER(PARTITION BY store.region ORDER BY store.ID)
AS SELECT sale.price, sale.date, store.ID
FROM public.sales sale
JOIN public.vendor store ON sale.distribID = store.ID;
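As a further sketch (the directory and table are illustrative), the following tunes stripe size and indexing frequency:
=> EXPORT TO ORC(directory='webhdfs:///user2/orcdata', stripeSizeMB=64, rowIndexStride=5000)
   AS SELECT * FROM public.sales;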
For more examples, see EXPORT TO PARQUET, which (aside from a few parameters) behaves the same as EXPORT TO ORC.
23 - EXPORT TO PARQUET
Exports a table, columns from a table, or query results to files in the Parquet format.
Exports a table, columns from a table, or query results to files in the Parquet format.
You can use an OVER() clause to partition the data before export. You can partition data instead of or in addition to exporting the column data. Partitioning data can improve query performance by enabling partition pruning. See Partitioned data.
There are some limitations on the queries you can use in an export statement. See Query Restrictions.
You can export data stored in Vertica in ROS format and data from external tables.
File exporters are UDxs that cannot be run in fenced mode. If the ForceUDxFencedMode configuration parameter is set to true to support other UDxs, you can set it to false at the session level before performing the export.
This statement returns the number of rows written and logs information about exported files in a system table. See Monitoring exports.
During an export to HDFS or an NFS mount point, Vertica writes files to a temporary directory in the same location as the destination and renames the directory when the export is complete. Do not attempt to use the files in the temporary directory. During an export to S3, GCS, or Azure, Vertica writes files directly to the destination path, so you must wait for the export to finish before reading the files. For more information, see Exporting to object stores.
After you export data, you can use the GET_METADATA function to inspect the results.
Syntax
EXPORT [ /*+LABEL (label-string)*/ ] TO PARQUET
( directory='path'[, param=value[,...] ] )
[ OVER (over-clause ) ] AS SELECT query-expression
Arguments
-
LABEL
- Assigns a label to a statement to identify it for profiling and debugging.
over-clause
- Specifies how to partition table data using PARTITION BY. Within partitions you can sort using ORDER BY. See SQL analytics. This clause may contain column references but not expressions.
If you partition data, Vertica creates a partition directory structure, transforming column names to lowercase. See Partitioned data for a description of the directory structure. If you use the fileName
parameter, you cannot use partitioning.
If you omit this clause, Vertica optimizes for maximum parallelism.
query-expression
- Specifies the data to export. See Query Restrictions for important limitations.
Parameters
directory
The destination directory for the output files. The current user must have permission to write it. The destination can be on any of the following file systems:
See also: ifDirExists
.
ifDirExists
What to do if directory
already exists, one of:
-
fail
(default)
-
overwrite
: replace the entire directory
-
append
: export new files into the existing directory with a prepended hash that identifies all files in the exported batch
-
appendNoHash
: export new files into the existing directory without a prepended hash; with this option, Vertica cannot clean up partial exports
If you specify overwrite
for an export to an object store, the existing directory is deleted recursively at the beginning of the operation and is not restored if the operation fails. Be careful not to export to a directory containing data you want to keep. For an export to a Linux file system or HDFS, the directory is only overwritten if the export succeeds.
Do not do concurrent exports to the same directory. In particular, if you do so with a value of overwrite
, all operations appear to succeed, but the results are incorrect.
When using append
, be careful to use the same table schema. Otherwise, queries of external tables using this data path could fail.
filename
If specified, all output is written to a single file of this name in the location specified by directory
. While the query can be processed by multiple nodes, only a single node generates the output data. The fileSizeMB
parameter is ignored, and the query cannot use partitioning in the OVER() clause.
filePrefix
Prefix to use for exported files instead of the default hash. If used with ifDirExists=appendNoHash
, choose a unique prefix to avoid overwriting existing files.
compression
- Column compression type, one of:
-
Snappy
-
GZIP
-
Brotli
-
ZSTD
-
Uncompressed
Default: Snappy
rowGroupSizeMB
- The uncompressed size of exported row groups, in MB, an integer value between 1 and fileSizeMB, inclusive, or unlimited if fileSizeMB is 0.
The row groups in the exported files are smaller than this value because Parquet files are compressed on write. For best performance when exporting to HDFS, set this size to be smaller than the HDFS block size.
Row-group size affects memory consumption during export. An export thread consumes at least double the row-group size. The default value of 512MB is a compromise between writing larger row groups and allowing enough free memory for other Vertica operations. If you perform exports when the database is not otherwise under heavy load, you can improve read performance on the exported data by increasing row-group size on export. However, row groups that span multiple blocks on HDFS decrease read performance by requiring more I/O, so do not set the row-group size to be larger than your HDFS block size.
Default: 512
fileSizeMB
The maximum file size of a single output file. This value is a hint, not a hard limit. A value of 0 specifies no limit. If filename
is also specified, fileSizeMB
is ignored.
This value affects the size of individual output files, not the total output size. For smaller values, Vertica divides the output into more files; all data is still exported.
Default: 10GB
fileMode
For writes to HDFS only, permission to apply to all exported files. You can specify the value in Unix octal format (such as 665
) or user
-
group
-
other
format—for example, rwxr-xr-x
. The value must be formatted as a string even if using the octal format.
Valid octal values range between 0
and 1777
, inclusive. See HDFS Permissions in the Apache Hadoop documentation.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 660, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
dirMode
For writes to HDFS only, permission to apply to all exported directories. Values follow the same rules as those for fileMode
. Further, you must give the Vertica HDFS user full permission, at least rwx------
or 700
.
When writing files to any destination other than HDFS, this parameter has no effect.
Default: 755, regardless of the value of fs.permissions.umask-mode
in hdfs-site.xml
.
int96AsTimestamp
- Boolean, specifies whether to export timestamps as int96 physical type (true) or int64 physical type (false).
Default: true
Privileges
Non-superusers:
Query restrictions
You must provide an alias column label for selected column targets that are expressions.
If you partition the output, you cannot specify schema and table names in the SELECT statement. Specify only the column name.
The query can contain only a single outer SELECT statement. For example, you cannot use UNION:
=> EXPORT TO PARQUET(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash;
ERROR 8975: Only a single outer SELECT statement is supported
HINT: Please use a subquery for multiple outer SELECT statements
Instead, rewrite the query to use a subquery:
=> EXPORT TO PARQUET(directory = '/mnt/shared_nfs/accounts/rm')
OVER(PARTITION BY hash)
AS
SELECT
account_id,
json
FROM
(
SELECT 1 as account_id, '{}' as json, 0 hash
UNION ALL
SELECT 2 as account_id, '{}' as json, 1 hash
) a;
Rows Exported
---------------
2
(1 row)
To use composite statements such as UNION, INTERSECT, and EXCEPT, rewrite them as subqueries.
Data types
EXPORT TO PARQUET converts Vertica data types to Hive data types as shown in the following table.
Vertica Data Type | Hive Data Type
INTEGER, BIGINT | BIGINT
FLOAT, DECIMAL, SMALLINT, TINYINT, CHAR, BOOLEAN | Corresponding Hive type
VARCHAR, LONG VARCHAR | VARCHAR (max 64KB) or STRING (can be read as either)
BINARY, VARBINARY, LONG VARBINARY | BINARY
DATE | DATE if supported by your version of Hive, otherwise INT96 (can be read as TIMESTAMP)
TIMESTAMP, TIMESTAMPTZ | TIMESTAMP. Vertica does not convert TIMESTAMP values to UTC. To avoid problems arising from time zones, use TIMESTAMPTZ instead of TIMESTAMP.
INTERVAL | INT64
TIME | INT64
TIMETZ | Not supported
ARRAY | ARRAY
SET | Not supported
ROW | STRUCT
Decimal precision must be <= 153.
The exported Hive types might not be identical to the Vertica types. For example, a Vertica INT is exported as a Hive BIGINT. When defining Hive external tables to read exported data, you might have to adjust column definitions.
This operation exports raw Flex columns as binary data.
Output
The export operation always creates (or appends to) an output directory, even if all output is written to a single file or the query produces zero rows.
By default, output file names follow the pattern prefix-nodename-threadId[-sequenceNumber].extension. prefix is typically an 8-character hash, but can be longer if the export appended to an existing directory. A sequence number is added if an export needs to be broken into pieces to satisfy fileSizeMB. You can change the prefix using the filePrefix parameter or, for appends, by setting ifDirExists=appendNoHash. You can write all output to a single named file using the filename parameter.
Column names in partition directories are lowercase.
Files exported to a local file system by any Vertica user are owned by the Vertica superuser. Files exported to HDFS or object stores are owned by the Vertica user who exported the data.
Making concurrent exports to the same output destination is an error and can produce incorrect results.
Exports to the local file system can be to an NFS mount (shared) or to the Linux file system on each node (non-shared). For details, see Exporting to the Linux file system. Exports to non-shared local file systems have the following restrictions:
-
The output directory must not exist on any node.
-
You must have a USER storage location or superuser privileges.
-
You cannot override the permissions mode of 700 for directories and 600 for files.
Exports to object-store file systems are not atomic. Be careful to wait for the export to finish before using the data. For details, see Exporting to object stores.
Examples
The following example demonstrates exporting all columns from the T1 table in the public schema, using GZIP compression.
=> EXPORT TO PARQUET(directory='webhdfs:///user1/data', compression='gzip')
AS SELECT * FROM public.T1;
The following example demonstrates exporting the results of a query using more than one table.
=> EXPORT TO PARQUET(directory='s3://DataLake/sales_by_region')
AS SELECT sale.price, sale.date, store.region
FROM public.sales sale
JOIN public.vendor store ON sale.distribID = store.ID;
The following example demonstrates partitioning and exporting data. EXPORT TO PARQUET first partitions the data on region and then, within each partition, sorts by store.
=> EXPORT TO PARQUET(directory='gs://DataLake/user2/data')
OVER(PARTITION BY store.region ORDER BY store.ID)
AS SELECT sale.price, sale.date, store.ID
FROM public.sales sale
JOIN public.vendor store ON sale.distribID = store.ID;
The following example uses an alias column label for a selected column target that is an expression.
=> EXPORT TO PARQUET(directory='webhdfs:///user3/data')
OVER(ORDER BY col1) AS SELECT col1 + col1 AS A, col2
FROM public.T3;
The following example sets permissions for the output.
=> EXPORT TO PARQUET(directory='webhdfs:///user1/data',
fileMode='432', dirMode='rwxrw-r-x')
AS SELECT * FROM public.T1;
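The following sketch combines the fileSizeMB and int96AsTimestamp parameters described above; the output directory is a hypothetical path, so adjust it for your environment:
=> EXPORT TO PARQUET(directory='webhdfs:///user1/data_smallfiles',
     fileSizeMB=64, int96AsTimestamp=false)
   AS SELECT * FROM public.T1;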
24 - EXPORT TO VERTICA
Exports table data from one Vertica database to another.
Exports table data from one Vertica database to another. A connection to the target database must exist in the current session before starting the copy operation. Otherwise, Vertica returns an error. For details, see CONNECT TO VERTICA.
Important
The source database must be no more than one major release behind the target database.
Syntax
EXPORT [ /*+LABEL (label-string)*/ ] TO VERTICA
[[ { target-db | target-ns }.]target-schema.]target-table [ ( target-columns ) ]
{ AS SELECT query-expression | FROM [[namespace.]schema.]source-table[ ( source-columns ) ] }
Arguments
-
LABEL
Assigns a label to a statement to identify it for profiling and debugging.
target-db
- The target database to which data is exported. A connection to this database must already exist in the current session before starting the copy operation; otherwise Vertica returns an error. For details, see CONNECT TO VERTICA.
target-ns
- For Eon Mode databases, namespace in the target database to which data is exported. If unspecified, the target namespace is default_namespace.
target-schema
- The target schema to which data is exported, or public if not specified.
target-table
- The table in the target database to store the exported data. The table cannot have columns of complex data types other than native arrays.
target-columns
- A comma-delimited list of columns in target-table in which to store the exported data. See Mapping Between Source and Target Columns, below.
query-expression
- The data to export.
[[namespace.]schema.]source-table
- The table in the source database that contains the data to export. The namespace option only applies to Eon Mode databases and defaults to default_namespace.
source-columns
- A comma-delimited list of the columns in the source table to export. The table cannot have columns of complex data types. See Mapping Between Source and Target Columns, below.
Privileges
Non-superusers:
Mapping between source and target columns
If you export all table data from one database to another, EXPORT TO VERTICA can omit specifying column lists if column definitions in both tables comply with the following conditions:
-
Same number of columns
-
Identical column names
-
Same sequence of columns
-
Matching or compatible column data types
-
No complex data types (ARRAY, SET, or ROW), except for native arrays
If any of these conditions is not true, the EXPORT TO VERTICA statement must include column lists that explicitly map source and target columns to each other, as follows:
-
Contain the same number of columns.
-
List source and target columns in the same order.
-
Pair columns with the same (or compatible) data types.
Examples
See Exporting data to another database.
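For illustration, a minimal sketch of the overall flow, assuming a hypothetical target database named targetdb and source and target tables whose columns do not match by name, so both column lists are given explicitly:
=> CONNECT TO VERTICA targetdb USER dbadmin PASSWORD 'mypassword' ON 'target-host.example.com', 5433;
=> EXPORT TO VERTICA targetdb.public.sales_summary (order_id, amount)
   FROM public.sales_fact (id, total);
=> DISCONNECT targetdb;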
See also
25 - GET DIRECTED QUERY
Queries system table DIRECTED_QUERIES on the specified input query, and returns details of all directed queries that map to the input query.
Queries the DIRECTED_QUERIES system table on the specified input query, and returns details of all directed queries that map to the input query. For details about output, see Managing directed queries.
Syntax
GET DIRECTED QUERY input-query
Arguments
input-query
- An input query that is associated with one or more directed queries.
Privileges
None
Examples
See Managing directed queries.
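For illustration, a minimal sketch, assuming a directed query has already been saved for this (hypothetical) input query:
=> GET DIRECTED QUERY SELECT employee_first_name, employee_last_name FROM public.employee_dimension WHERE employee_city = 'Boston';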
26 - GRANT statements
GRANT statements grant privileges on database objects to users and roles.
GRANT statements grant privileges on database objects to users and roles.
Important
In a database with trust authentication, GRANT statements appear to work as expected but have no real effect on database security.
26.1 - GRANT (authentication)
Associates an authentication record to one or more users and roles.
Associates an authentication record to one or more users and roles.
Syntax
GRANT AUTHENTICATION auth-method-name TO grantee[,...]
Parameters
auth-method-name
- Name of the authentication method to associate with one or more users or roles.
grantee
- Specifies who is associated with the authentication method, one of the following:
Privileges
Superuser
Examples
-
Associate v_ldap authentication with user jsmith:
=> GRANT AUTHENTICATION v_ldap TO jsmith;
-
Associate v_gss authentication with the role DBprogrammer:
=> CREATE ROLE DBprogrammer;
=> GRANT AUTHENTICATION v_gss TO DBprogrammer;
-
Associate client authentication method v_localpwd with role PUBLIC, which is assigned by default to all users:
=> GRANT AUTHENTICATION v_localpwd TO PUBLIC;
See also
26.2 - GRANT (data loader)
Grants data-loader privileges to users and roles.
Grants privileges on automatic data loaders to users and roles. By default, only superusers and the owner can execute or alter a data loader.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] }
ON DATA LOADER [schema.]name
TO grantee[,...]
[ WITH GRANT OPTION ]
Arguments
privilege
- One of the following privileges:
ALL [PRIVILEGES]
- Grants all privileges. Inherited privileges must be granted explicitly.
schema
- Schema containing the data loader. The default schema is
public
.
name
- Name of the data loader.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers, one of the following:
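Examples
A minimal sketch, assuming a data loader named client_load exists in the public schema (hypothetical name) and that EXECUTE is among the grantable data-loader privileges:
=> GRANT EXECUTE ON DATA LOADER public.client_load TO u1;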
See also
26.3 - GRANT (database)
Grants database privileges to users and roles.
Grants database privileges to users and roles.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] }
ON DATABASE db-spec
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- The following privileges are valid for a database:
ALL [PRIVILEGES]
- Grants all database privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
db-spec
Specifies the current database, set to the database name or DEFAULT
.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers can grant these privileges to other users or roles only if they received the privileges themselves WITH GRANT OPTION.
Examples
The following example grants user Fred the right to create schemas in the current database.
=> GRANT CREATE ON DATABASE DEFAULT TO Fred;
See also
26.4 - GRANT (key)
Grants privileges on a cryptographic key to a user or role.
Grants privileges on a cryptographic key to a user or role.
Important
Because certificates depend on their underlying key, DROP privileges on a key effectively act as DROP privileges on its associated certificate when used with
DROP KEY...CASCADE.
To revoke granted privileges, see REVOKE (key).
Superusers have limited access to cryptographic objects that they do not own. For details, see Database object privileges.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] } ON KEY
key_name[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
A privilege, one of the following:
-
USAGE: Allows a user to perform the following actions:
USAGE on the key also gives implicit USAGE privileges on a certificate that uses it as its private key. Users can also get these privileges from ownership of the key or certificate. USAGE privileges on a certificate allow a user to perform the following actions:
-
View the contents of the certificate.
-
Add (with CREATE or ALTER) the certificate to a TLS Configuration.
-
Reuse the CA certificate when importing certificates signed by it. For example, if a user imports a chain of certificates A > B > C and has USAGE on B, the database reuses B (as opposed to creating a duplicate of B).
-
Specify that the CA certificate signed an imported certificate. For example, if certificate B signed certificate C, USAGE on B allows a user to import C and specify that it was SIGNED BY B.
-
DROP
-
ALTER: Allows a user to see the key and its associated certificates in their respective system tables, but not their contents.
key_name
- The target key.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superuser:
Examples
The following example grants USAGE privileges on a private key to a user, which then allows the user to add the self-signed CA certificate to the server
TLS Configuration:
=> CREATE KEY new_ca_key TYPE 'RSA' LENGTH 2048;
=> CREATE CA CERTIFICATE new_ca_cert
SUBJECT '/C=US/ST=Massachusetts/L=Cambridge/O=Micro Focus/OU=Vertica/CN=Vertica example CA'
VALID FOR 3650
EXTENSIONS 'authorityKeyIdentifier' = 'keyid:always,issuer', 'nsComment' = 'new CA'
KEY new_ca_key;
=> CREATE USER u1;
=> GRANT USAGE ON KEY new_ca_key TO u1;
=> GRANT ALTER ON TLS CONFIGURATION data_channel TO u1;
=> \c - u1
=> ALTER TLS CONFIGURATION data_channel ADD CA CERTIFICATES new_ca_cert;
-- clean up:
=> \c
=> ALTER TLS CONFIGURATION data_channel REMOVE CA CERTIFICATES new_ca_cert;
=> DROP KEY new_ca_key CASCADE;
=> DROP USER u1;
26.5 - GRANT (library)
Grants privileges on one or more libraries to users and roles.
Grants privileges on one or more libraries to users and roles.
For example, when working with the Connector Framework Service, you might need to grant a user USAGE privileges on a library so that they can set UDSession parameters. For more information, see Implementing CFS.
Syntax
GRANT privilege
ON LIBRARY [[database.]schema.]library[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Arguments
privilege
- Privilege to grant, one of:
-
USAGE
: Grants access to functions in the specified libraries.
-
DROP
: Grants permission to drop libraries that the grantee created.
-
ALL [PRIVILEGES] [EXTEND]
: Grants all library privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include DROP privileges. An unqualified ALL
excludes this privilege. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
Important
To execute functions inside the library, users must also have separate EXECUTE privileges on them, and USAGE privileges on their respective schemas.
[database.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
library
- The target library.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers require USAGE on the schema and one of the following:
Examples
Grant USAGE privileges on the MyFunctions library to Fred:
=> GRANT USAGE ON LIBRARY MyFunctions TO Fred;
See also
26.6 - GRANT (model)
Grants usage privileges on a model to users and roles.
Grants usage privileges on a model to users and roles.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] }
ON MODEL [[database.]schema.]model-name[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- The following privileges are valid for models:
ALL [PRIVILEGES][EXTEND]
- Grants all model privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include ALTER and DROP privileges. An unqualified ALL
excludes these two privileges. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
[database.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
model-name
- The model on which to grant the privilege.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers require USAGE on the schema and one of the following:
Examples
This example grants USAGE privileges on the mySvmClassModel model to user1:
=> GRANT USAGE ON MODEL mySvmClassModel TO user1;
See also
26.7 - GRANT (procedure)
Grants privileges on a stored procedure or external procedure to a user or role.
Grants privileges on a stored procedure or external procedure to a user or role.
Important
External procedures that you create with
CREATE PROCEDURE (external) are always run with Linux dbadmin privileges. If a dbadmin or pseudosuperuser grants a non-dbadmin permission to run a procedure using
GRANT (procedure), be aware that the non-dbadmin user runs the procedure with full Linux dbadmin privileges.
Syntax
GRANT { EXECUTE | ALL [ PRIVILEGES ] }
ON PROCEDURE [[database.]schema.]procedure( [arg-list] )[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
EXECUTE
- Enables grantees to run the specified
procedure
.
ALL [PRIVILEGES]
- Grants all procedure privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
[database.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
procedure
- The target procedure.
arg-list
- A comma-delimited list of procedure arguments, where each argument is specified as follows:
[ argname ] argtype
If the procedure is defined with no arguments, supply an empty argument list.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superuser, one of the following:
Examples
Grant EXECUTE privileges on the tokenize procedure to users Bob and Jules, and to the role Operator:
=> GRANT EXECUTE ON PROCEDURE tokenize(varchar) TO Bob, Jules, Operator;
See also
26.8 - GRANT (Resource pool)
Grants USAGE privileges on resource pools to users and roles.
Grants USAGE privileges on resource pools to users and roles. Users can access their resource pools with ALTER USER or SET SESSION RESOURCE POOL.
Syntax
GRANT USAGE
ON RESOURCE POOL resource-pool[,...]
[FOR SUBCLUSTER subcluster | FOR CURRENT SUBCLUSTER]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
USAGE
- Enables grantees to access the specified resource pools.
ALL [PRIVILEGES]
- Grants all resource pool privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
resource-pool
- A resource pool on which to grant the specified privileges.
subcluster
- The subcluster for the resource pool.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superuser, one of the following:
Examples
Grant user Joe USAGE privileges on resource pool Joe_pool.
=> CREATE USER Joe;
CREATE USER
=> CREATE RESOURCE POOL Joe_pool;
CREATE RESOURCE POOL
=> GRANT USAGE ON RESOURCE POOL Joe_pool TO Joe;
GRANT PRIVILEGE
Grant user Joe USAGE privileges on resource pool Joe_pool for subcluster sub1.
=> GRANT USAGE on RESOURCE POOL Joe_pool FOR SUBCLUSTER sub1 TO Joe;
GRANT PRIVILEGE
See also
26.9 - GRANT (Role)
Assigns roles to users or other roles.
Assigns roles to users or other roles.
Note
Granting a role does not activate the role automatically; you must enable it with the SET ROLE statement or specify it as a default role to enable it automatically.
Syntax
GRANT role[,...] TO grantee[,...] [ WITH ADMIN OPTION ]
Arguments
role
- A role to grant
grantee
- User or role to be granted the specified roles, one of the following:
WITH ADMIN OPTION
- Gives grantee the privilege to grant the specified roles to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers can grant roles to other users or roles only if they were granted those roles WITH ADMIN OPTION.
Examples
See Granting database roles.
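For illustration, a minimal sketch, assuming user Bob exists and using a hypothetical role name:
=> CREATE ROLE appdata;
=> GRANT appdata TO Bob WITH ADMIN OPTION;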
See also
REVOKE (Role)
26.10 - GRANT (schema)
Grants schema privileges to users and roles.
Grants schema privileges to users and roles. By default, only superusers and the schema owner have the following schema privileges:
Note
By default, new users cannot access schema PUBLIC. You must explicitly grant all new users USAGE privileges on the PUBLIC schema.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] }
ON SCHEMA [{namespace. | database. }]schema[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- One of the following privileges:
-
USAGE: Enables access to objects in the specified schemas. Grantees can then be granted privileges on individual objects in these schemas in order to access them, for example, with GRANT TABLE and GRANT VIEW.
-
CREATE: Create and rename objects in the specified schemas, and move objects from other schemas.
You can also grant the following privileges on a schema, to be inherited by tables and their projections, and by views of that schema. If inheritance is enabled for the database and schema, these privileges are automatically granted to those objects on creation:
-
SELECT: Query tables and views. SELECT privileges are granted by default to the PUBLIC role.
-
INSERT: Insert rows and load data into tables with COPY.
Note
COPY FROM STDIN is allowed for users with INSERT privileges, while COPY FROM file requires admin privileges.
-
UPDATE: Update table rows.
-
DELETE: Delete table rows.
-
REFERENCES: Create foreign key constraints on this table. This privilege must be set on both referencing and referenced tables.
-
TRUNCATE: Truncate table contents. Non-owners of tables can also execute the following partition operations on them:
-
ALTER: Modify the DDL of tables and views with
ALTER TABLE
and
ALTER VIEW
, respectively.
-
DROP: Drop tables and views.
ALL [PRIVILEGES][EXTEND]
- Grants USAGE and CREATE privileges. Inherited privileges must be granted explicitly.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include ALTER and DROP privileges. An unqualified ALL
excludes these two privileges. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
{ database | namespace }
- Name of the database or namespace that contains schema:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers, one of the following:
Examples
Grant user Joe USAGE privilege on schema online_sales.
=> CREATE USER Joe;
CREATE USER
=> GRANT USAGE ON SCHEMA online_sales TO Joe;
GRANT PRIVILEGE
See also
26.11 - GRANT (sequence)
Grants sequence privileges to users and roles.
Grants sequence privileges to users and roles.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] }
ON {
SEQUENCE [[database.]schema.]sequence[,...]
| ALL SEQUENCES IN SCHEMA [database.]schema[,...] }
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- The following privileges are valid for sequences:
ALL [PRIVILEGES][EXTEND]
- Grants all sequence privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include ALTER and DROP privileges. An unqualified ALL
excludes these two privileges. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
[database.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
SEQUENCE sequence
- Specifies the sequence on which to grant privileges.
ALL SEQUENCES IN SCHEMA schema
- Grants the specified privileges on all sequences in schema schema.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers require USAGE on the schema and one of the following:
Examples
This example shows how to grant user Joe all privileges on sequence my_seq.
=> CREATE SEQUENCE my_seq START 100;
CREATE SEQUENCE
=> GRANT ALL PRIVILEGES ON SEQUENCE my_seq TO Joe;
GRANT PRIVILEGE
See also
26.12 - GRANT (storage location)
Grants privileges to users and roles on a USER-defined storage location.
Grants privileges to users and roles on a USER-defined storage location. For details, see Creating storage locations.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] }
ON LOCATION 'path' [ ON node ]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- The following privileges are valid for storage locations:
-
READ
: Copy data from files in the storage location into a table.
-
WRITE
: Export data from the database to the storage location. With WRITE
privileges, grantees can also save COPY
statement rejected data and exceptions files to the storage location.
ALL [PRIVILEGES]
- Grants all storage location privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
ON LOCATION 'path' [ ON node ]
- Specifies the path name mount point of the storage location. If qualified by ON NODE, Vertica grants access to the storage location residing on node.
If no node is specified, the grant operation applies to all nodes on the specified path. All nodes must be on the specified path; otherwise, the entire grant operation rolls back.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superuser, one of the following:
Note
Only a superuser can add, alter, retire, drop, and restore a location.
Examples
Create a storage location:
=> CREATE LOCATION '/home/dbadmin/UserStorage/BobStore' NODE 'v_mcdb_node0007' USAGE 'USER';
CREATE LOCATION
Grant user Bob all available privileges to the /BobStore location:
=> GRANT ALL ON LOCATION '/home/dbadmin/UserStorage/BobStore' TO Bob;
GRANT PRIVILEGE
Revoke all storage location privileges from Bob:
=> REVOKE ALL ON LOCATION '/home/dbadmin/UserStorage/BobStore' FROM Bob;
REVOKE PRIVILEGE
Grant privileges to Bob on the BobStore location again, specifying a node:
=> GRANT ALL ON LOCATION '/home/dbadmin/UserStorage/BobStore' ON v_mcdb_node0007 TO Bob;
GRANT PRIVILEGE
Revoke all storage location privileges from Bob:
=> REVOKE ALL ON LOCATION '/home/dbadmin/UserStorage/BobStore' ON v_mcdb_node0007 FROM Bob;
REVOKE PRIVILEGE
See also
26.13 - GRANT (table)
Grants table privileges to users and roles.
Grants table privileges to users and roles. Users must also be granted USAGE on the table schema.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] }
ON {
[ TABLE ] [[{namespace. | database. }]schema.]table[,...]
| ALL TABLES IN SCHEMA [{namespace. | database. }]schema[,...] }
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- The following privileges are valid for tables:
Important
Only SELECT privileges are valid for system tables.
-
SELECT: Query tables. SELECT privileges are granted by default to the PUBLIC role.
-
INSERT: Insert table rows with INSERT, and load data with COPY.
Note
COPY FROM STDIN is allowed for users with INSERT privileges, while COPY FROM file requires admin privileges.
-
UPDATE: Update table rows.
-
DELETE: Delete table rows.
-
REFERENCES: Create foreign key constraints on this table. This privilege must be set on both referencing and referenced tables.
-
TRUNCATE: Truncate table contents. Non-owners of tables can also execute the following partition operations on them:
-
ALTER: Modify a table's DDL with
ALTER TABLE
.
-
DROP: Drop a table.
ALL [PRIVILEGES][EXTEND]
- Invalid for system tables, grants all table privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include ALTER and DROP privileges. An unqualified ALL
excludes these two privileges. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
TABLE table
- Specifies the table on which to grant privileges.
ON ALL TABLES IN SCHEMA schema
- Grants the specified privileges on all tables and views in schema schema.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers require USAGE on the schema and one of the following:
Examples
Grant user Joe all privileges on table customer_dimension:
=> CREATE USER Joe;
CREATE USER
=> GRANT ALL PRIVILEGES ON TABLE customer_dimension TO Joe;
GRANT PRIVILEGE
Grant user Joe SELECT privileges on all system tables:
=> GRANT SELECT on all tables in schema V_MONITOR, V_CATALOG TO Joe;
GRANT PRIVILEGE
See also
26.14 - GRANT (TLS configuration)
Grants privileges on a TLS Configuration to a user or role.
Grants privileges on a TLS Configuration to a user or role.
To revoke granted privileges, see REVOKE (TLS configuration).
Superusers have limited access to cryptographic objects that they do not own. For details, see Database object privileges.
Syntax
GRANT { privilege[,...] } ON TLS CONFIGURATION
tls_configuration[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
A privilege, one of the following:
tls_configuration
- The target TLS Configuration.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superuser:
Examples
You can grant ALTER on a TLS Configuration to a user or role to delegate management of that TLS context, which includes adding and removing certificates, setting the TLSMODE, etc. For example, the following statement grants ALTER privileges on the TLS CONFIGURATION server to the role client_server_tls_manager:
=> GRANT ALTER ON TLS CONFIGURATION server TO client_server_tls_manager;
26.15 - GRANT (user defined extension)
Grants privileges on a user-defined extension (UDx) to users and roles.
Grants privileges on a user-defined extension (UDx) to users and roles.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] }
ON {
UDx-type [[database.]schema.]function( [arg-list] )[,...]
| ALL FUNCTIONS IN SCHEMA schema[,...] }
TO grantee[,...]
[ WITH GRANT OPTION ]
Arguments
privilege
- The following privileges are valid for user-defined extensions:
Note
Users can only call a UDx function on which they have EXECUTE privilege, and USAGE privilege on its schema.
ALL [PRIVILEGES] [EXTEND]
- Grants all function privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include ALTER and DROP privileges. An unqualified ALL
excludes these two privileges. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
ON
UDx-type
- Type of the user-defined extension (UDx), one of the following:
[database.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function
- Name of the user-defined function on which to grant privileges.
ON ALL FUNCTIONS IN SCHEMA schema
- Grants privileges on all functions in the specified schema.
arg-list
- Required for all polymorphic functions, a comma-delimited list of function arguments, where each argument is specified as follows:
[ argname ] argtype
If the procedure is defined with no arguments, supply an empty argument list.
grantee
Who is granted privileges, one of the following:
Note
Grantees must have USAGE privileges on the schema.
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers require USAGE on the schema and one of the following:
Examples
Grant EXECUTE privileges on the myzeroifnull SQL function to users Bob and Jules, and to the role Operator. The function takes one integer argument:
=> GRANT EXECUTE ON FUNCTION myzeroifnull (x INT) TO Bob, Jules, Operator;
Grant EXECUTE privileges on all functions in the zero-schema schema to user Bob:
=> GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA zero-schema TO Bob;
Grant EXECUTE privileges on the tokenize transform function to user Bob and the role Operator:
=> GRANT EXECUTE ON TRANSFORM FUNCTION tokenize(VARCHAR) TO Bob, Operator;
Grant EXECUTE privileges on the ExampleSource() source to user Alice:
=> CREATE USER Alice;
=> GRANT USAGE ON SCHEMA hdfs TO Alice;
=> GRANT EXECUTE ON SOURCE ExampleSource() TO Alice;
Grant all privileges on the ExampleSource() source to user Alice:
=> GRANT ALL ON SOURCE ExampleSource() TO Alice;
Grant all privileges on polymorphic function Pagerank to the dbadmin role:
=> GRANT ALL ON TRANSFORM FUNCTION Pagerank(z varchar) to dbadmin;
See also
26.16 - GRANT (view)
Grants view privileges to users and roles.
Grants view privileges to users and roles.
Syntax
GRANT { privilege[,...] | ALL [ PRIVILEGES ] [ EXTEND ] }
ON [[database.]schema.]view[,...]
TO grantee[,...]
[ WITH GRANT OPTION ]
Parameters
privilege
- The following privileges are valid for views:
ALL [PRIVILEGES][EXTEND]
- Grants all view privileges that also belong to the grantor. Grantors cannot grant privileges that they themselves lack.
You can qualify ALL
with two optional keywords:
-
PRIVILEGES
conforms with the SQL standard.
-
EXTEND
extends the semantics of ALL
to include ALTER and DROP privileges. An unqualified ALL
excludes these two privileges. This option enables backward compatibility with GRANT ALL
usage in pre-9.2.1 Vertica releases.
[database.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
view
- The target view.
grantee
Who is granted privileges, one of the following:
WITH GRANT OPTION
Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
Privileges
Non-superusers require USAGE on the schema and one of the following:
Note
As view owner, you can grant other users SELECT privilege on the view only if one of the following is true:
Examples
Grant user Joe all privileges on view ship.
=> CREATE VIEW ship AS SELECT * FROM public.shipping_dimension;
CREATE VIEW
=> GRANT ALL PRIVILEGES ON ship TO Joe;
GRANT PRIVILEGE
See also
REVOKE (view)
26.17 - GRANT (workload)
Grants privileges to a user or role to use a workload, routing its queries to its associated subclusters.
Grants privileges to a user or role to use a workload, routing its queries to its associated subclusters.
Syntax
GRANT USAGE ON WORKLOAD workload TO {role | user}
Parameters
workload
- A workload defined by a routing rule. For details, see Workload routing.
user
| role
- The user or role permitted to use the workload.
Privileges
Superuser
Examples
The following example grants the USAGE privilege on the analytics workload to users with the analytics_role role:
=> GRANT USAGE ON WORKLOAD analytics TO analytics_role;
Similarly, this example grants the USAGE privilege on the analytics workload to the user jacob:
=> GRANT USAGE ON WORKLOAD analytics TO jacob;
For more examples, see Workload routing.
27 - INSERT
Inserts values into all projections of the specified table.
Inserts values into all projections of the specified table. You must insert one complete tuple at a time. If no projections are associated with the target table, Vertica creates a superprojection to store the inserted values.
INSERT works for flex tables as well as regular native tables. If the table has real columns, inserted data of scalar types and native arrays of scalar types is added to both the real column and the __raw__
column. For data of complex types, the values are not added to the __raw__
column.
Syntax
INSERT [ /*+LABEL (label-string)*/ ]
INTO [[{ database | namespace }.]schema.]table-name
[ ( column-list ) ]
{ DEFAULT VALUES | VALUES ( values-list )[,...] | SELECT query-expression }
Arguments
-
LABEL
Assigns a label to a statement to identify it for profiling and debugging.
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table-name
- The target table. You cannot invoke INSERT on a projection.
column-list
- A comma-delimited list of one or more target columns in this table, listed in any order. VALUES clause values are mapped to columns in the same order. If you omit this list, Vertica maps VALUES clause values to columns according to column order in the table definition.
A list of target columns is invalid with DEFAULT VALUES.
DEFAULT VALUES
- Fills all columns with their default values as specified in the table definition. If no default value is specified for a column, Vertica inserts a NULL value.
You cannot specify a list of target columns with this option.
VALUES (
values-list
)
- A comma-delimited list of one or more values to insert in the target columns, where each value is one of the following:
-
expression
resolves to a value to insert in the target column. The expression must not nest other expressions, include Vertica meta-functions, or use mixed complex types. Values may include native array or ROW types if Vertica can coerce the element or field types.
-
DEFAULT inserts the default value as specified in the table definition.
If no value is supplied for a column, Vertica implicitly adds a DEFAULT value, if defined. Otherwise Vertica inserts a NULL value. If the column is defined as NOT NULL, INSERT returns an error.
You can use INSERT to insert multiple rows in the target table, by specifying multiple comma-delimited VALUES lists:
INSERT INTO table-name
VALUES ( values-list ), ( values-list )[,...]
For details, see Multi-Row INSERT below.
SELECT
query-expression
- A query that returns the rows to insert. Isolation level applies only to the SELECT clauses and works like any query. Restrictions on use of complex types apply as in other queries.
Privileges
-
Table owner or user with GRANT OPTION is grantor
-
INSERT privilege on table
-
USAGE privilege on schema that contains the table
Committing successive table changes
Vertica follows the SQL-92 transaction model, so successive INSERT, UPDATE, and DELETE statements are included in the same transaction. You do not need to explicitly start this transaction; however, you must explicitly end it with COMMIT, or implicitly end it with COPY. Otherwise, Vertica discards all changes that were made within the transaction.
Multi-row INSERT
You can use INSERT to insert multiple rows in the target table, by specifying multiple comma-delimited VALUES lists. For example:
=> CREATE TABLE public.t1(a int, b int, c varchar(16));
CREATE TABLE
=> INSERT INTO t1 VALUES (1,2, 'un, deux'), (3,4, 'trois, quatre');
OUTPUT
--------
2
(1 row)
=> COMMIT;
COMMIT
=> SELECT * FROM t1;
a | b | c
---+---+---------------
1 | 2 | un, deux
3 | 4 | trois, quatre
(2 rows)
Restrictions
-
Vertica does not support subqueries as the target of an INSERT statement.
-
Restrictions on the use of complex types in SELECT statements apply equally to INSERT. Using complex values that cannot be coerced to the column type results in an error.
-
If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.
-
If an insert would violate a table or schema disk quota, the operation fails. For more information, see Disk quotas.
Examples
=> INSERT INTO t1 VALUES (101, 102, 103, 104);
=> INSERT INTO customer VALUES (10, 'male', 'DPR', 'MA', 35);
=> INSERT INTO start_time VALUES (12, 'film','05:10:00:01');
=> INSERT INTO retail.t1 (C0, C1) VALUES (1, 1001);
=> INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
Vertica does not support subqueries or nested expressions as the target of an INSERT statement. For example, the following query returns an error message:
=> INSERT INTO t1 (col1, col2) VALUES ('abc', (SELECT mycolumn FROM mytable));
ERROR 4821: Subqueries not allowed in target of insert
You can rewrite the above query as follows:
=> INSERT INTO t1 (col1, col2) (SELECT 'abc', mycolumn FROM mytable);
OUTPUT
--------
0
(1 row)
The following example shows how to use INSERT...VALUES with flex tables:
=> CREATE FLEX TABLE flex1();
CREATE TABLE
=> INSERT INTO flex1(a,b) VALUES (1, 'x');
OUTPUT
--------
1
(1 row)
=> SELECT MapToString(__raw__) FROM flex1;
MapToString
---------------------------------
{
"a" : "1",
"b" : "x"
}
(1 row)
The following example shows how to use INSERT...SELECT with flex tables:
=> CREATE FLEX TABLE flex2();
CREATE TABLE
=> INSERT INTO flex2(a, b) SELECT a, b, '2016-08-10 11:10' c, 'Hello' d, 3.1415 e, f from flex1;
OUTPUT
--------
1
(1 row)
=> SELECT MapToString(__raw__) FROM flex2;
MapToString
---------------------------------
{
"a" : "1",
"b" : "x",
"c" : "2016-08-10",
"d" : "Hello",
"e" : 3.1415,
"f" : null
}
(1 row)
The following examples use complex types:
=> CREATE TABLE inventory(storeID INT, product ROW(name VARCHAR, code VARCHAR));
CREATE TABLE
--- LookUpProducts() returns a row(varchar, int), which is cast to row(varchar, varchar):
=> INSERT INTO inventory(product) SELECT LookUpProducts();
OUTPUT
--------
5
(1 row)
--- Cannot use with select...values:
=> INSERT INTO inventory(product) VALUES(LookUpProducts());
ERROR 2631: Column "product" is of type "row(varchar,varchar)" but expression is of type "row(varchar,int)"
--- Literal values are supported:
=> INSERT INTO inventory(product) VALUES(ROW('xbox',165));
OUTPUT
--------
1
(1 row)
=> SELECT product FROM inventory;
product
------------------------------
{"name":"xbox","code":"125"}
(1 row)
28 - LOCK TABLE
Locks a table, giving the caller's session exclusive access to certain operations.
Locks a table, giving the caller's session exclusive access to certain operations. Tables are automatically unlocked after the current transaction ends—that is, after COMMIT or ROLLBACK. LOCK TABLE can be useful for preventing deadlocks.
To view existing locks, see LOCKS.
Note
READ COMMITTED isolation (default) and SERIALIZABLE isolation automatically handle locks for you, and the vast majority of users can rely on them exclusively; LOCK TABLE is only for advanced users who need granular control over locks for more complex workloads.
To implement pessimistic concurrency without manually locking tables, you can use SELECT...FOR UPDATE to acquire an EXCLUSIVE (X) lock on the table.
Syntax
LOCK [ TABLE ] [[{database | namespace }.]schema.]table [,...]
IN { lock_type } MODE
[ NOWAIT ]
Arguments
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table
- The table to lock.
lock-type
- The type of lock, one of the following:
-
SHARE
-
INSERT
-
INSERT VALIDATE
-
SHARE INSERT
-
EXCLUSIVE
-
NOT DELETE
-
USAGE
-
OWNER
Note
LOCK TABLE does not currently support D (drop partition) locks.
NOWAIT
- If specified, LOCK TABLE returns and reports an error immediately if it cannot acquire the lock. Otherwise, LOCK TABLE waits for incompatible locks to be released by their respective sessions, returning an error if the lock is not released after a certain amount of time, as defined by LockTimeout.
Privileges
Required privileges depend on the type of lock requested:
Lock                 | Privileges
---------------------+----------------
SHARED (S)           | SELECT
INSERT (I)           | INSERT
SHARE INSERT         | SELECT, INSERT
INSERT VALIDATE (IV) | SELECT, INSERT
EXCLUSIVE (X)        | UPDATE, DELETE
NOT DELETE (T)       | SELECT
USAGE                | All privileges
OWNER                | All privileges
Examples
See Lock examples.
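For illustration, a minimal sketch that takes an EXCLUSIVE lock on the table public.customer_dimension and returns immediately if the lock cannot be acquired; the lock is released when the transaction ends:
=> LOCK TABLE public.customer_dimension IN EXCLUSIVE MODE NOWAIT;
=> COMMIT;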
29 - MERGE
Performs update and insert operations on a target table based on the results of a join with another data set, such as a table or view.
Performs update and insert operations on a target table based on the results of a join with another data set, such as a table or view. The join can match a source row with only one target row; otherwise, Vertica returns an error.
If a merge would violate a table or schema disk quota, the operation fails. For more information, see Disk quotas.
The target table cannot have columns of complex data types. The source table can, so long as those columns are not included in the merge operation.
Syntax
MERGE [ /*+LABEL (label-string)*/ ]
INTO [[ { database | namespace }.]schema.]target-table [ [AS] alias ]
USING source-dataset
ON join-condition matching-clause[ matching-clause ]
Returns
Number of target table rows updated or inserted
Arguments
-
LABEL
Assigns a label to a statement to identify it for profiling and debugging.
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
target-table
- The table on which to perform update and insert operations. MERGE takes an X (exclusive) lock on the target table during the operation. The table must not contain columns of complex types.
Important
The total number of target table columns cannot exceed 831.
source-dataset
- The data to join to target-table, one of the following:
The specified data set typically supplies the data used to update the target table and populate new rows. You can specify an external table.
ON
join-condition
- The conditions on which to join the target table and source data set.
Tip
The Vertica query optimizer can create an optimized query plan for a
MERGE statement only if the target table join column has a unique or primary key constraint. For details, see
MERGE optimization.
matching-clause
- One of the following clauses:
MERGE supports one instance of each clause, and must include at least one.
-
WHEN MATCHED THEN UPDATE
- For each target-table row that is joined (matched) to source-dataset, specifies to update one or more columns:
WHEN MATCHED [ AND update-filter ] THEN UPDATE
SET { column = expression }[,...]
update-filter optionally filters the set of matching rows. The update filter can specify any number of conditions. Vertica evaluates each matching row against this filter, and updates only the rows that evaluate to true. For details, see Update and insert filters.
Note
Vertica also supports Oracle syntax for specifying update filters:
WHEN MATCHED THEN UPDATE
SET { column = expression }[,...]
[ WHERE update-filter ]
The following requirements apply:
For details, see Merging table data.
-
WHEN NOT MATCHED THEN INSERT
- For each source-dataset row that is not joined (not matched) to target-table, specifies to insert a new row:
WHEN NOT MATCHED [ AND insert-filter ] THEN INSERT
[ ( column-list ) ] VALUES ( values-list )
column-list is a comma-delimited list of one or more target columns in the target table, listed in any order. MERGE maps column-list columns to values-list values in the same order, and each column-value pair must be compatible. If you omit column-list, Vertica maps values-list values to columns according to column order in the table definition.
insert-filter optionally filters the set of non-matching rows. The insert filter can specify any number of conditions. Vertica evaluates each non-matching source row against this filter. For each row that evaluates to true, Vertica inserts a new row in the target table. For details, see Update and insert filters.
Note
Vertica also supports Oracle syntax for specifying insert filters:
WHEN NOT MATCHED THEN INSERT
[ ( column-list ) ] VALUES ( values-list )
[ WHERE insert-filter ]
The following requirements apply:
-
A MERGE statement can contain only one WHEN NOT MATCHED clause.
-
column-list can only specify column names in the target table. It cannot be qualified with a table name.
-
Insert filter conditions can only reference the source data. If any condition references the target table, Vertica returns an error.
For details, see Merging table data.
Privileges
MERGE requires the following privileges:
-
SELECT permissions on the source data and INSERT, UPDATE, and DELETE permissions on the target table.
-
Automatic constraint enforcement requires SELECT permissions on the table containing the constraint.
-
SELECT permissions on the target table if the condition in the syntax reads data from the target table.
For example, the following GRANT statement grants user1 access to the t2 table. This allows user1 to run the MERGE statement that follows:
=> GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE t2 to user1;
GRANT PRIVILEGE
=>\c - user1
You are now connected as user "user1".
=> MERGE INTO t2 USING t1 ON t1.a = t2.a
WHEN MATCHED THEN UPDATE SET b = t1.b
WHEN NOT MATCHED THEN INSERT (a, b) VALUES (t1.a, t1.b);
You can improve MERGE performance in several ways:
For details, see MERGE optimization.
Constraint enforcement
If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.
Caution
If you run MERGE multiple times using the same target and source table, each iteration is liable to introduce duplicate values into the target columns and return with an error.
Columns prohibited from merge
The following columns cannot be specified in a merge operation; attempts to do so return with an error:
-
IDENTITY columns, or columns whose default value is set to a named sequence.
-
Vmap columns such as __raw__
in flex tables.
-
Columns of complex types ARRAY, SET, or ROW.
Examples
See:
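For illustration, a minimal sketch using hypothetical tables, where matched rows are updated and unmatched source rows are inserted:
=> CREATE TABLE target_orders (order_id INT PRIMARY KEY, qty INT);
=> CREATE TABLE new_orders (order_id INT, qty INT);
=> MERGE INTO target_orders t USING new_orders s ON t.order_id = s.order_id
   WHEN MATCHED THEN UPDATE SET qty = s.qty
   WHEN NOT MATCHED THEN INSERT (order_id, qty) VALUES (s.order_id, s.qty);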
See also
30 - PROFILE
Profiles a single SQL statement.
Profiles a single SQL statement.
Syntax
PROFILE { sql-statement }
Parameters
sql-statement
- A query (SELECT) statement or DML statement. For example, you can profile INSERT, UPDATE, COPY, and MERGE.
Output
Writes profile summary to stderr, saves details to system catalog V_MONITOR.EXECUTION_ENGINE_PROFILES.
Privileges
The same privileges required to run the profiled statement
Description
PROFILE generates detailed information about how the target statement executes, and saves that information in the system catalog V_MONITOR.EXECUTION_ENGINE_PROFILES. Query output is preceded by a profile summary: profile identifiers transaction_id and statement_id, initiator memory for the query, and total memory required. For example:
=> PROFILE SELECT customer_name, annual_income FROM public.customer_dimension WHERE (customer_gender, annual_income) IN (SELECT customer_gender, MAX(annual_income) FROM public.customer_dimension GROUP BY customer_gender);
NOTICE 4788: Statement is being profiled
HINT: Select * from v_monitor.execution_engine_profiles where transaction_id=45035996274683334 and statement_id=7;
NOTICE 3557: Initiator memory for query: [on pool general: 708421 KB, minimum: 554324 KB]
NOTICE 5077: Total memory required by query: [708421 KB]
customer_name | annual_income
------------------+---------------
Emily G. Vogel | 999998
James M. McNulty | 999979
(2 rows)
Use profile identifiers to query the table for profile information on a given query.
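For example, the following sketch uses the identifiers from the profile summary above to look at the largest counter values recorded for that statement (the column choices are illustrative):
=> SELECT operator_name, counter_name, counter_value
   FROM v_monitor.execution_engine_profiles
   WHERE transaction_id = 45035996274683334 AND statement_id = 7
   ORDER BY counter_value DESC LIMIT 5;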
See also
Profiling single statements
31 - RELEASE SAVEPOINT
Destroys a savepoint without undoing the effects of commands executed after the savepoint was established.
Destroys a savepoint without undoing the effects of commands executed after the savepoint was established.
Syntax
RELEASE [ SAVEPOINT ] savepoint_name
Parameters
savepoint_name
- Specifies the name of the savepoint to destroy.
Privileges
None
Notes
Once destroyed, the savepoint is unavailable as a rollback point.
Examples
The following example establishes and then destroys a savepoint called my_savepoint. The values 101 and 102 are both inserted at commit.
=> INSERT INTO product_key VALUES (101);
=> SAVEPOINT my_savepoint;
=> INSERT INTO product_key VALUES (102);
=> RELEASE SAVEPOINT my_savepoint;
=> COMMIT;
See also
32 - REMOVE RESTORE POINT FROM ARCHIVE
Eon Mode only
Removes an in-database restore point from an archive.
To remove a specific restore point, you must specify either its index in the archive or its unique object ID. You can find both of these identifiers by querying the ARCHIVE_RESTORE_POINTS system table. If neither identifier is provided, the oldest restore point in the archive is removed.
Syntax
REMOVE RESTORE POINT FROM ARCHIVE archive-name { INDEX restore-point-index | ID restore-point-id }
Arguments
archive-name
- Name of the archive from which to remove the restore point.
restore-point-index
- Index of the restore point to remove, where index 1 specifies the most recently saved restore point.
restore-point-id
- Unique object ID of the restore point to remove.
Privileges
Superuser or archive owner
Examples
To view all restore points, including unique object IDs and indexes, query the ARCHIVE_RESTORE_POINTS system table:
=> SELECT * FROM ARCHIVE_RESTORE_POINTS;
id | save_time | index | archive | state
--------------------------------------+----------------------------+-------+-----------+----------
2616f6cd-e98c-472c-8d5c-a7f459dff82b | 2021-12-10 10:19:58.565311 | 1 | verticadb | COMPLETE
116066cc-29cb-4ce5-9a27-170538ba95cf | 2021-12-06 11:02:48.059941 | 2 | verticadb | COMPLETE
(2 rows)
The following statement removes the most recently saved restore point in the verticadb archive:
=> REMOVE RESTORE POINT FROM ARCHIVE verticadb index 1;
REMOVE RESTORE POINT
If you query the ARCHIVE_RESTORE_POINTS table again, the restore point either appears with a state of REMOVING or, if the removal process has completed, is no longer listed:
=> SELECT * FROM ARCHIVE_RESTORE_POINTS;
id | save_time | index | archive | state
--------------------------------------+----------------------------+-------+-----------+----------
2616f6cd-e98c-472c-8d5c-a7f459dff82b | 2021-12-10 10:19:58.565311 | | verticadb | REMOVING
116066cc-29cb-4ce5-9a27-170538ba95cf | 2021-12-06 11:02:48.059941 | 1 | verticadb | COMPLETE
(2 rows)
See also
33 - REPLICATE
Copies data directly from one Eon Mode database's communal store to another.
Eon Mode only
Copies table or schema data directly from one Eon Mode database's communal storage to another. REPLICATE copies all table data and metadata. Each REPLICATE call can only replicate tables and schemas to a single namespace in the target database. When called multiple times, this statement only copies data that has changed in the source database since the last call. The tables do not have to exist in the target database before replication. If the table does exist in the target, it is overwritten.
Caution
Replication overwrites any data in the target table. You do not receive any notification that the data in the target database will be overwritten, even if the target table's design does not match the source table. Verify that the tables you want to replicate either do not exist in the target database or do not contain data that you need. Be especially cautious when replicating entire schemas to prevent overwriting data in the target database.
Similarly, changes to a replicated table in the target database are overwritten each time you replicate the table. When performing scheduled replications, only grant read access to replicated tables in the target database. Limiting these tables to read-only access will help prevent confusion if a user makes changes to a replicated table.
Syntax
REPLICATE { table | schema | "[.namespace.]schema.table"
| INCLUDE "inc_pattern" [ EXCLUDE "ex_pattern" ] }
{ FROM | TO } database_name
[ TARGET_NAMESPACE namespace-name ]
Arguments
table | schema | "[.namespace.]schema.table"
- The name of a single table or schema to replicate. If specified, the namespace name must be front-qualified with a period. For example, to specify table t in schema s in namespace n, you would write ".n.s.t". If you do not specify a namespace, the object is assumed to be in the default_namespace.
inc_pattern
- A string containing a wildcard pattern of the schemas and/or tables to include in the replication. Namespace names must be front-qualified with a period.
ex_pattern
- A string containing a wildcard pattern of the schemas and/or tables to exclude from the set of tables matched by the include pattern. Namespace names must be front-qualified with a period.
Note
When using wildcard patterns, be sure to anchor the patterns with a period to indicate the separation between the schema and the table name and between the namespace and schema. For example, suppose you wanted to replicate all tables in the public
schema in the n1
namespace that started with the letter t
, except the table public.t3
. This pattern statement does not work because the exclude wildcard only applies to schemas:
=> REPLICATE INCLUDE ".n1.public.t*" EXCLUDE "*3" FROM verticadb;
The following statement does work because the periods in the EXCLUDE wildcard force the wildcard to apply to the table:
=> REPLICATE INCLUDE ".n1.public.t*" EXCLUDE ".n1.*.*3" FROM verticadb;
database_name
- The name of the database from or to which data is replicated. The replication steps depend on whether the call is initiated from the source or target database:
- Target-initiated replication: use the CONNECT TO VERTICA statement to create a connection to the source database, and specify the source database in the REPLICATE statement using a FROM source_db clause.
- Source-initiated replication: use the CONNECT TO VERTICA statement to connect to the target database, and specify the target database in the REPLICATE statement using a TO target_db clause.
Both of these methods copy the shard data directly from the source communal storage location to the target communal storage location.
TARGET_NAMESPACE
namespace-name
- Namespace in the target cluster to which objects are replicated. The target namespace must have the same shard count as the source namespace in the source cluster.
If you do not specify a target namespace, objects are replicated to a namespace with the same name as the source namespace. If no such namespace exists in the target cluster, it is created with the same name and shard count as the source namespace. You can only replicate tables in the public
schema to the default_namespace
in the target cluster.
Privileges
Superuser
Examples
Connect to the source database from the target database and then replicate the source database's customers
table to the target database:
=> CONNECT TO VERTICA source_db USER dbadmin
PASSWORD 'mypassword' ON 'vertica_node01', 5433;
CONNECT
=> REPLICATE customers FROM source_db;
REPLICATE
The following example performs the same data replication as the previous example, but the following replication is initiated from the source database instead of the target database:
=> CONNECT TO VERTICA target_db USER dbadmin
PASSWORD 'mypassword' ON 'vertica_node01', 5433;
CONNECT
=> REPLICATE customers TO target_db;
REPLICATE
Replicate all tables in the public
schema that start with the letter t:
=> REPLICATE INCLUDE "public.t*" FROM source_db;
Replicate all tables in the public
schema except those that start with the string customer_
:
=> REPLICATE INCLUDE "public.*" EXCLUDE '*.customer_*' FROM source_db;
Replicate the flights
table in the airline
schema of the airport
namespace to the airport2
namespace in the target cluster:
=> REPLICATE ".airport.airline.flights" FROM source_db TARGET_NAMESPACE airport2;
Replicate all schemas and tables in the default_namespace
of the source database to the default_namespace
in the target database:
=> REPLICATE INCLUDE ".default_namespace.*.*" FROM source_db;
See also
34 - REVOKE statements
REVOKE statements let you revoke privileges on database objects from users and roles.
REVOKE statements let you revoke privileges on database objects from users and roles.
Important
In a database with trust authentication, REVOKE statements appear to work as expected but have no real effect on database security.
34.1 - REVOKE (authentication)
Revokes privileges on an authentication method from users and roles.
Revokes privileges on an authentication method from users and roles.
Syntax
REVOKE AUTHENTICATION auth-method-name FROM grantee[,...]
Parameters
auth-method-name
- Name of the target authentication method.
grantee
Whose privileges are revoked, one of the following:
Privileges
Superuser
Examples
-
Revoke v_ldap
authentication from user jsmith
:
=> REVOKE AUTHENTICATION v_ldap FROM jsmith;
-
Revoke v_gss
authentication from the role DBprogrammer
:
=> REVOKE AUTHENTICATION v_gss FROM DBprogrammer;
-
Revoke localpwd
as the default client authentication method:
=> REVOKE AUTHENTICATION localpwd FROM PUBLIC;
See also
34.2 - REVOKE (data loader)
Revokes data-loader privileges from users and roles.
Revokes privileges on automatic data loaders from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON DATA LOADER [schema.]name
FROM grantee[,...]
[ CASCADE ]
Arguments
privilege
- One of the following privileges:
ALL [PRIVILEGES]
- Revokes all privileges.
schema
- Schema containing the data loader. The default schema is
public
.
name
- Name of the data loader.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
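Examples
For illustration, a minimal sketch that revokes all data-loader privileges; the loader name client_loader and user Fred are hypothetical:
=> REVOKE ALL PRIVILEGES ON DATA LOADER public.client_loader FROM Fred;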
See also
34.3 - REVOKE (database)
Revokes database privileges from users and roles.
Revokes database privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON DATABASE db-spec
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
- The database privilege to revoke, one of the following:
ALL [PRIVILEGES]
- Revokes all database privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
is supported to comply with the SQL standard.
ON DATABASE
db-spec
Specifies the current database, set to the database name or DEFAULT
.
*`grantee`*
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke user Fred
's privilege to create schemas in the current database:
=> REVOKE CREATE ON DATABASE DEFAULT FROM Fred;
Revoke user Fred
's privilege to create temporary tables in the current database:
=> REVOKE TEMP ON DATABASE DEFAULT FROM Fred;
See also
34.4 - REVOKE (key)
Revokes privileges on a cryptographic key from a user or role.
Revokes privileges on a cryptographic key from a user or role.
Important
Because certificates depend on their underlying key, DROP privileges on a key effectively act as DROP privileges on its associated certificate when used with
DROP KEY...CASCADE.
To grant privileges on a key, see GRANT (key).
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] } ON KEY
key_name[,...]
FROM user[,...]
Parameters
-
`GRANT OPTION FOR`
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
A privilege, one of the following:
-
USAGE: Allows a user to perform the following actions:
USAGE on the key also gives implicit USAGE privileges on a certificate that uses it as its private key. Users can also get these privileges from ownership of the key or certificate. USAGE privileges on a certificate allow a user to perform the following actions:
-
View the contents of the certificate.
-
Add (with CREATE or ALTER) the certificate to a TLS Configuration.
-
Reuse the CA certificate when importing certificates signed by it. For example, if a user imports a chain of certificates A > B > C
and have USAGE on B
, the database reuses B
(as opposed to creating a duplicate of B
).
-
Specify that the CA certificate signed an imported certificate. For example, if certificate B
signed certificate C
, USAGE on B
allows a user to import C
and specify that it was SIGNED BY B
.
-
DROP
-
ALTER: Allows a user to see the key and its associated certificates in their respective system tables, but not their contents.
key_name
- The target key.
user
Who is granted privileges, one of the following:
Privileges
Non-superuser:
Examples
The following example revokes USAGE privileges on a key (and, by extension, on a certificate that uses it as its private key) from a user:
=> REVOKE USAGE ON KEY new_key FROM u1;
REVOKE PRIVILEGE
34.5 - REVOKE (library)
Revokes library privileges from users and roles.
Revokes library privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { USAGE | ALL [ PRIVILEGES ] }
ON LIBRARY [[database.]schema.]library[,...]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
USAGE
- Revokes access to the specified libraries.
Important
Privileges on functions in these libraries must be separately revoked.
ALL [PRIVILEGES]
- Revokes all library privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack. The optional keyword
PRIVILEGES
conforms with the SQL standard.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
library
- The target library.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Examples
These commands show how to create a new library, and then grant and revoke user Fred
's USAGE privilege on that library.
=> CREATE LIBRARY MyFunctions AS 'home/dbadmin/my_functions.so';
=> GRANT USAGE ON LIBRARY MyFunctions TO Fred;
=> REVOKE USAGE ON LIBRARY MyFunctions FROM Fred;
See also
34.6 - REVOKE (model)
Revokes model privileges from users and roles.
Revokes model privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON MODEL [[database.]schema.]model-name [,...]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
USAGE
- One of the following privileges:
ALL [PRIVILEGES]
- Revokes all model privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
model-name
- Name of the target model.
-
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke user Fred
's USAGE privilege on model mySvmClassModel
:
=> REVOKE USAGE ON mySvmClassModel FROM Fred;
See also
34.7 - REVOKE (procedure)
Revokes procedure privileges from users and roles.
Revokes procedure privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL PRIVILEGES }
ON PROCEDURE [[database.]schema.]procedure( [argument-list] )[,...]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
EXECUTE
- Revokes the grantees' ability to run the specified procedures.
ALL [PRIVILEGES]
- Revokes all procedure privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
- [database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
-
procedure
- The target procedure.
argument-list
- A comma-delimited list of procedure arguments, where each argument is specified as follows:
[argname] argtype
If the procedure is defined with no arguments, supply an empty argument list.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
This example revokes user Bob
's execute privilege on the tokenize
procedure.
=> REVOKE EXECUTE ON PROCEDURE tokenize(varchar) FROM Bob;
See also
34.8 - REVOKE (Resource pool)
Revokes resource pool access privileges from users and roles.
Revokes resource pool access privileges from users and roles.
Vertica checks resource pool privileges at runtime. Revoking a user's privileges for a resource pool can have an immediate effect on the user's current session. For example, a user query might require USAGE privileges on a resource pool. If you revoke those privileges from that user, subsequent attempts by the user to execute that query fail and return an error message.
Syntax
REVOKE [ GRANT OPTION FOR ] { USAGE | ALL PRIVILEGES }
ON RESOURCE POOL resource-pool[,...]
[FOR SUBCLUSTER subcluster | FOR CURRENT SUBCLUSTER]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
USAGE
- Revokes grantee's access to the specified resource pool.
ALL PRIVILEGES
- Revokes all resource pool privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
resource-pool
- The target resource pool.
subcluster
- The subcluster for the resource pool.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke user Joe
's USAGE privileges on resource pool Joe_pool
.
=> REVOKE USAGE ON RESOURCE POOL Joe_pool FROM Joe;
REVOKE PRIVILEGE
Revoke user Joe
's USAGE privileges on resource pool Joe_pool
for subcluster sub1
.
=> REVOKE USAGE ON RESOURCE POOL Joe_pool FOR SUBCLUSTER sub1 FROM Joe;
REVOKE PRIVILEGE
See also
34.9 - REVOKE (Role)
Revokes a role from users and roles.
Revokes a role from users and roles.
Syntax
REVOKE [ ADMIN OPTION FOR ] role[,...]
FROM grantee[,...]
[ CASCADE ]
Parameters
ADMIN OPTION FOR
- Revokes from the grantees the authority to assign the specified roles to other users or roles. Current roles for grantees remain unaffected. If you omit this clause, Vertica revokes both role assignment privileges and the current roles.
role
- Role to revoke.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
One of the following:
Examples
This example shows the revocation of the pseudosuperuser role from the dbadmin user:
=> REVOKE pseudosuperuser from dbadmin;
This example shows the revocation of administration access for the pseudosuperuser role from the dbadmin user. Revoking ADMIN OPTION does not remove the pseudosuperuser role itself.
=> REVOKE ADMIN OPTION FOR pseudosuperuser FROM dbadmin;
Notes
If the role you are trying to revoke was not already granted to the user, Vertica returns a NOTICE:
=> REVOKE commentor FROM Sue;
NOTICE 2022: Role "commentor" was not already granted to user "Sue"
REVOKE ROLE
See also
34.10 - REVOKE (schema)
Revokes schema privileges from users and roles.
Revokes schema privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON SCHEMA [{namespace. | database. }]schema[,...]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
- The schema privilege to revoke, one of the following:
You can also revoke privileges from tables and views that they inherited on creation from this schema. When you revoke inherited privileges at the schema level, Vertica automatically applies the revocation to all tables and views that inherited these privileges.
-
SELECT: Query tables and views. SELECT privileges are granted by default to the PUBLIC role.
-
INSERT: Insert rows, and load data into tables with
COPY
.
Note
COPY FROM STDIN
is allowed for users with INSERT privileges, while COPY FROM
file
requires admin privileges.
-
UPDATE: Update table rows.
-
DELETE: Delete table rows.
-
REFERENCES: Create foreign key constraints on this table. This privilege must be set on both referencing and referenced tables.
-
TRUNCATE: Truncate table contents. Non-owners of tables can also execute the following partition operations on them:
-
ALTER: Modify the DDL of tables and views with
ALTER TABLE
and
ALTER VIEW
, respectively.
-
DROP: Drop tables and views.
ALL [PRIVILEGES]
- Revokes USAGE and CREATE privileges. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
Important
Inherited privileges must be explicitly revoked.
{
database
|
namespace
}
- Name of the database or namespace that contains
schema
:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default
public
.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke user Joe
's USAGE privilege on schema online_sales
.
=> REVOKE USAGE ON SCHEMA online_sales FROM Joe;
REVOKE PRIVILEGE
See also
34.11 - REVOKE (sequence)
Revokes sequence privileges from users and roles.
Revokes sequence privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON {
SEQUENCE [[database.]schema.]sequence[,...]
| ALL SEQUENCES IN SCHEMA [database.]schema[,...] }
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
- One of the following privileges:
ALL [PRIVILEGES]
- Revokes all sequence privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
is supported to comply with the SQL standard.
-
[database.]schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
SEQUENCE
sequence
- Specifies the sequence on which to revoke privileges.
-
ALL SEQUENCES IN SCHEMA schema
- Revokes the specified privileges on all sequences in schema
schema
.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke user Joe
's privileges on sequence my_seq
.
=> REVOKE ALL PRIVILEGES ON SEQUENCE my_seq FROM Joe;
REVOKE PRIVILEGE
See also
34.12 - REVOKE (storage location)
Revokes privileges on a USER-defined storage location from users and roles.
Revokes privileges on a USER-defined storage location from users and roles. For more information, see Creating storage locations.
Note
If the storage location is dropped, Vertica automatically revokes all privileges on it.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON LOCATION 'path' [ ON node ]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
- One of the following privileges:
-
READ
: Copy data from files in the storage location into a table.
-
WRITE
: Export data from the database to the storage location. With WRITE
privileges, grantees can also save COPY
statement rejected data and exceptions files to the storage location.
ALL [PRIVILEGES]
- Revokes all storage location privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
is supported to comply with the SQL standard.
ON LOCATION 'path' [ ON node ]
- Specifies the path name mount point of the storage location. If qualified by ON NODE, Vertica revokes access to the storage location residing on node.
If no node is specified, the revoke operation applies to all nodes on the specified path. All nodes must be on the specified path; otherwise, the entire revoke operation rolls back.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
See GRANT (storage location).
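For illustration, a minimal sketch that revokes all privileges on a USER storage location; the path, node name, and user are hypothetical:
=> REVOKE ALL ON LOCATION '/home/dbadmin/UserStorage/BobStore' ON v_vmart_node0001 FROM Bob;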
See also
Granting and revoking privileges
34.13 - REVOKE (table)
Revokes table privileges from users and roles.
Revokes table privileges from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON {
[ TABLE ] [[{namespace. | database. }]schema.]table[,...]
| ALL TABLES IN SCHEMA [database.]schema[,...] }
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
- One of the following privileges:
-
SELECT: Query tables. SELECT privileges are granted by default to the PUBLIC role.
-
INSERT: Insert table rows with INSERT, and load data with
COPY
.
Note
COPY FROM STDIN
is allowed for users with INSERT privileges, while COPY FROM
file
requires admin privileges.
-
UPDATE: Update table rows.
-
DELETE: Delete table rows.
-
REFERENCES: Create foreign key constraints on this table. This privilege must be set on both referencing and referenced tables.
-
TRUNCATE: Truncate table contents. Non-owners of tables can also execute the following partition operations on them:
-
ALTER: Modify a table's DDL with
ALTER TABLE
.
-
DROP: Drop a table.
ALL [PRIVILEGES]
- Revokes all table privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
is supported to comply with the SQL standard.
{
database
|
namespace
}
- Name of the database or namespace that contains
table
:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default
public
. If you specify the namespace or database name, you must provide the schema name, even if the schema is public
.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
TABLE
table
- Specifies the table on which to revoke privileges.
ON ALL TABLES IN SCHEMA
schema
- Revokes the specified privileges on all tables and views in schema
schema
.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke user Joe
's privileges on table customer_dimension
.
=> REVOKE ALL PRIVILEGES ON TABLE customer_dimension FROM Joe;
REVOKE PRIVILEGE
See also
34.14 - REVOKE (TLS configuration)
Revokes privileges granted on one or more TLS Configurations from users and roles.
Revokes privileges granted on one or more TLS Configurations from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ]
{ ALL | [ privilege[,...] ]}
ON TLS CONFIGURATION tls_configuration[,...]
FROM grantee [,...]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
A privilege, one of the following:
tls_configuration
- The TLS Configuration on which to revoke privileges.
grantee
Who is granted privileges, one of the following:
Privileges
Non-superusers require USAGE on the schema and one of the following:
Examples
To revoke ALTER privileges on the TLS Configuration server
from the role client_server_tls_manager
:
=> REVOKE ALTER ON TLS CONFIGURATION server FROM client_server_tls_manager;
34.15 - REVOKE (user defined extension)
Revokes privileges on one or more user-defined extensions from users and roles.
Revokes privileges on one or more user-defined extensions from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { EXECUTE | ALL PRIVILEGES }
ON {
UDx-type [[database.]schema.]function-name( [argument-list] )[,...]
| ALL FUNCTIONS IN SCHEMA schema[,...] }
FROM grantee[,...]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
EXECUTE
- Revokes the grantees' ability to run the specified functions.
ALL [PRIVILEGES]
- Revokes all function privileges that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
-
ON UDx-type
- Specifies the function's user-defined extension (UDx) type, where
UDx-type
is one of the following:
-
FUNCTION
-
AGGREGATE FUNCTION
-
ANALYTIC FUNCTION
-
TRANSFORM FUNCTION
-
FILTER
-
PARSER
-
SOURCE
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
function-name
- The name of the user-defined function on which to revoke privileges.
ON ALL FUNCTIONS IN SCHEMA
schema
- Revokes privileges on all functions in the specified schema.
argument-list
- Required for all polymorphic functions, a comma-delimited list of function arguments, where each argument is specified as follows:
[argname] argtype
If the procedure is defined with no arguments, supply an empty argument list.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Privileges
Non-superuser, one of the following:
Examples
Revoke EXECUTE
privileges from user Bob
on function myzeroifnull
:
=> REVOKE EXECUTE ON FUNCTION myzeroifnull (x INT) FROM Bob;
Revoke all privileges from user Doug
on function Pagerank
:
=> REVOKE ALL ON TRANSFORM FUNCTION Pagerank (t float) FROM Doug;
Revoke EXECUTE
privileges on all functions in the zero-schema
schema from user Bob:
=> REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA zero-schema FROM Bob;
Revoke EXECUTE
privileges from user Bob on the tokenize
function:
=> REVOKE EXECUTE ON TRANSFORM FUNCTION tokenize(VARCHAR) FROM Bob;
Revoke all privileges on the ExampleSource()
source from user Alice:
=> REVOKE ALL ON SOURCE ExampleSource() FROM Alice;
See also
34.16 - REVOKE (view)
Revokes privileges on a view from users and roles.
Revokes privileges on a view from users and roles.
Syntax
REVOKE [ GRANT OPTION FOR ] { privilege[,...] | ALL [ PRIVILEGES ] }
ON [[database.]schema.]view[,...]
FROM grantee[,...]
[ CASCADE ]
Parameters
GRANT OPTION FOR
Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
privilege
- One of the following:
-
SELECT: Query the specified views.
-
ALTER: Modify a view's DDL with ALTER VIEW
-
DROP: Drop this view with DROP VIEW.
ALL PRIVILEGES
- Revokes all privileges that pertain to views that also belong to the revoker. Users cannot revoke privileges that they themselves lack.
The optional keyword PRIVILEGES
conforms with the SQL standard.
[
database.
]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
view
- The view on which to revoke privileges.
grantee
Whose privileges are revoked, one of the following:
CASCADE
Revoke privileges from users who received them from the grantee through WITH GRANT OPTION
.
Examples
Revoke SELECT privileges from user Joe
on view test_view
.
=> REVOKE SELECT ON test_view FROM Joe;
REVOKE PRIVILEGE
See also
34.17 - REVOKE (workload)
Revokes privileges from a role to use a workload.
Revokes privileges from a user or role to use a workload.
Syntax
REVOKE USAGE ON WORKLOAD workload FROM {user | role}
Parameters
workload
- A workload defined by a routing rule. For details, see Workload routing.
user
| role
- The user or role to revoke privileges from.
Privileges
Superuser
Examples
The following example revokes the USAGE privilege on the analytics
workload from users with the analytics_role
role:
=> REVOKE USAGE ON WORKLOAD analytics FROM analytics_role;
Similarly, this example revokes the USAGE privilege on the analytics
workload from the user jacob
:
=> REVOKE USAGE ON WORKLOAD analytics FROM jacob;
For more examples, see Workload routing.
35 - ROLLBACK
Ends the current transaction and discards all changes that occurred during the transaction.
Ends the current transaction and discards all changes that occurred during the transaction.
Syntax
ROLLBACK [ WORK | TRANSACTION ]
Parameters
WORK | TRANSACTION
- Have no effect; they are optional keywords for readability.
Privileges
None
Notes
When an operation is rolled back, any locks that are acquired by the operation are also rolled back.
ABORT is a synonym for ROLLBACK.
Examples
This example shows how to roll back from a DELETE transaction.
=> SELECT * FROM sample_table;
a
---
1
(1 row)
=> DELETE FROM sample_table WHERE a = 1;
=> SELECT * FROM sample_table;
a
---
(0 rows)
=> ROLLBACK;
=> SELECT * FROM sample_table;
a
---
1
(1 row)
This example shows how to roll back the changes you made since the BEGIN statement.
=> BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED READ ONLY;
BEGIN
=> ROLLBACK TRANSACTION;
ROLLBACK
See also
36 - ROLLBACK TO SAVEPOINT
Rolls back all commands that have been entered within the transaction since the given savepoint was established.
Rolls back all commands that have been entered within the transaction since the given savepoint was established.
Syntax
ROLLBACK TO [SAVEPOINT] savepoint_name
Parameters
savepoint_name
- Specifies the name of the savepoint to roll back to.
Privileges
None
Notes
-
The savepoint remains valid and can be rolled back to again later if needed.
-
When an operation is rolled back, any locks that are acquired by the operation are also rolled back.
-
ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that were established after the named savepoint.
Examples
The following example rolls back the values 102 and 103 that were entered after the savepoint, my_savepoint
, was established. Only the values 101 and 104 are inserted at commit.
=> INSERT INTO product_key VALUES (101);
=> SAVEPOINT my_savepoint;
=> INSERT INTO product_key VALUES (102);
=> INSERT INTO product_key VALUES (103);
=> ROLLBACK TO SAVEPOINT my_savepoint;
=> INSERT INTO product_key VALUES (104);
=> COMMIT;
See also
37 - SAVE QUERY
Saves an input query to associate with a custom directed query.
Saves an input query to associate with a custom directed query.
Syntax
SAVE QUERY input-query
Arguments
input-query
- The input query to associate with a custom directed query. The input query supports only one optimizer hint,
:v
.
Privileges
Superuser
Usage
SAVE QUERY saves the specified input query for use by the next invocation of CREATE DIRECTED QUERY CUSTOM. CREATE DIRECTED QUERY CUSTOM pairs the saved query with its annotated query argument to create a directed query. Both statements must be issued in the same user session.
The saved query remains available until one of the following events occurs:
-
The next invocation of CREATE DIRECTED QUERY, whether invoked with CUSTOM or OPTIMIZER.
-
Another invocation of SAVE QUERY.
-
The session ends.
Caution
Vertica associates a saved query with a directed query without checking whether the input and annotated queries are compatible. Be careful to sequence SAVE QUERY and CREATE DIRECTED QUERY CUSTOM so the saved and directed queries are correctly matched.
Examples
See Custom directed queries.
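For illustration, a minimal sketch of the pairing described above; the employee_dimension table and the directed-query name are hypothetical:
=> SAVE QUERY SELECT employee_first_name, employee_last_name
   FROM employee_dimension WHERE employee_city = 'Boston';
=> CREATE DIRECTED QUERY CUSTOM 'findBostonEmployees'
   SELECT employee_first_name, employee_last_name
   FROM employee_dimension WHERE employee_city = 'Boston';
Both statements must be issued in the same session. You can then make the directed query available to the optimizer with ACTIVATE DIRECTED QUERY findBostonEmployees.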
38 - SAVE RESTORE POINT TO ARCHIVE
Eon Mode only
Saves an in-database restore point to an archive. After the restore point is saved, you can revive the database to the state at which the restore point was saved. For details on reviving your database from a restore point, see Revive from a restore point.
Syntax
SAVE RESTORE POINT TO ARCHIVE archive-name
Arguments
archive-name
- Name of the archive to which the restore point is saved.
Privileges
Superuser or archive owner
Examples
The following statement saves a restore point to the verticadb
archive:
=> SAVE RESTORE POINT TO ARCHIVE verticadb;
SAVE RESTORE POINT
To view all restore points in the database, query the ARCHIVE_RESTORE_POINTS system table:
=> SELECT * FROM ARCHIVE_RESTORE_POINTS;
id | save_time | index | archive | state
--------------------------------------+----------------------------+-------+-----------+----------
2616f6cd-e98c-472c-8d5c-a7f459dff82b | 2021-12-10 10:19:58.565311 | 1 | verticadb | COMPLETE
116066cc-29cb-4ce5-9a27-170538ba95cf | 2021-12-06 11:02:48.059941 | 2 | verticadb | COMPLETE
(2 rows)
See also
39 - SAVEPOINT
SAVEPOINT is a transaction control command that creates a special mark, called a savepoint, inside a transaction.
Creates a special mark, called a savepoint, inside a transaction. A savepoint allows all commands executed after it was established to be rolled back, restoring the transaction to the state it was in when the savepoint was established.
Tip
Savepoints are useful when creating nested transactions. For example, a savepoint could be created at the beginning of a subroutine. That way, the result of the subroutine could be rolled back if necessary.
Syntax
SAVEPOINT savepoint_name
Parameters
savepoint_name
- Specifies the name of the savepoint to create.
Privileges
None
Notes
-
Savepoints are local to a transaction and can only be established when inside a transaction block.
-
Multiple savepoints can be defined within a transaction.
-
If a savepoint with the same name already exists, it is replaced with the new savepoint.
Examples
The following example illustrates how a savepoint determines which values within a transaction can be rolled back. The values 102 and 103 that were entered after the savepoint, my_savepoint
, was established are rolled back. Only the values 101 and 104 are inserted at commit.
=> INSERT INTO T1 (product_key) VALUES (101);
=> SAVEPOINT my_savepoint;
=> INSERT INTO T1 (product_key) VALUES (102);
=> INSERT INTO T1 (product_key) VALUES (103);
=> ROLLBACK TO SAVEPOINT my_savepoint;
=> INSERT INTO T1 (product_key) VALUES (104);
=> COMMIT;
=> SELECT product_key FROM T1;
.
.
.
101
104
(2 rows)
See also
40 - SELECT
Returns a result set from one or more data sources—tables, views, joined tables, and named subqueries.
Returns a result set from one or more data sources—tables, views, joined tables, and named subqueries.
Syntax
[ AT epoch ] [ WITH-clause ] SELECT [ ALL | DISTINCT ]
{ * | { MATCH_COLUMNS('pattern') | expression [ [AS] alias ] }[,...] }
[ into-table-clause ]
[ from-clause ]
[ where-clause ]
[ time-series-clause ]
[ group-by-clause[,...] ]
[ having-clause[,...] ]
[ match-clause ]
[ union-clause ]
[ intersect-clause ]
[ except-clause ]
[ order-by-clause [ offset-clause ]]
[ limit-clause ]
[ FOR UPDATE [ OF table-name[,...] ] ]
Note
SELECT statements can also embed various directives, or hints, that let you control how a given query is handled—for example, join hints such as JTYPE, which enforces the join type (merge or hash join).
For details on using Vertica hints, see Hints.
Parameters
AT
epoch
- Returns data from the specified epoch, where
epoch
is one of the following:
-
EPOCH LATEST
: Return data up to but not including the current epoch. The result set includes data from the latest committed DML transaction.
-
EPOCH
integer
: Return data up to and including the specified epoch.
-
TIME '
timestamp
'
: Return data from the epoch at the specified timestamp.
These options are ignored if used to query temporary or external tables.
See Epochs for additional information about how Vertica uses epochs.
For details, see Historical queries.
ALL | DISTINCT
-
The
ALL
or DISTINCT
qualifier must immediately follow the SELECT
keyword. Only one instance of this keyword can appear in the select list.
*
- Lists all columns in the queried tables.
Caution
Selecting all columns from the queried tables can produce a very large wide set, which can adversely affect performance.
-
MATCH_COLUMNS('pattern')
- Returns all columns in the queried tables that match
pattern
.
expression
[[AS]
alias
]
- An expression that typically resolves to column data from the queried tables—for example, names of columns that are specified in the FROM clause; also:
You can optionally assign a temporary alias to each column expression and reference that alias elsewhere in the SELECT statement—for example, in the query predicate or ORDER BY clause. Vertica uses the alias as the column heading in query output.
FOR UPDATE
- Specifies to obtain an X lock on all tables specified in the query, most often used from
READ COMMITTED
isolation.
FOR UPDATE
requires update/delete permissions on the queried tables and cannot be issued from a read-only transaction.
Privileges
Non-superusers:
Note
As view owner, you can grant other users SELECT privilege on the view only if one of the following is true:
Examples
When multiple clients run transactions as in the following example query, deadlocks can occur if FOR UPDATE
is not used. Two transactions acquire an S lock, and when both attempt to upgrade to an X lock, they encounter deadlocks:
=> SELECT balance FROM accounts WHERE account_id=3476 FOR UPDATE;
...
=> UPDATE accounts SET balance = balance+10 WHERE account_id=3476;
=> COMMIT;
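Historical queries prepend the AT epoch clause described above. A minimal sketch that returns committed data from the latest epoch, assuming the VMart customer_dimension table:
=> AT EPOCH LATEST SELECT customer_key, customer_name FROM public.customer_dimension LIMIT 2;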
See also
40.1 - EXCEPT clause
Combines two or more SELECT queries.
Combines two or more SELECT queries. EXCEPT returns distinct results of the left-hand query that are not also found in the right-hand query.
Note
MINUS is an alias for EXCEPT.
Syntax
SELECT
EXCEPT except-query[...]
[ ORDER BY { column-name | ordinal-number } [ ASC | DESC ] [,...] ]
[ LIMIT { integer | ALL } ]
[ OFFSET integer ]
Notes
-
Use the EXCEPT clause to filter out specific results from a SELECT statement. The EXCEPT query operates on the results of two or more SELECT queries. It returns only those rows in the left-hand query that are not also present in the right-hand query.
-
Vertica evaluates multiple EXCEPT clauses in the same SELECT query from left to right, unless parentheses indicate otherwise.
-
You cannot use the ALL keyword with an EXCEPT query.
-
The results of each SELECT statement must be union compatible. Each statement must return the same number of columns, and the corresponding columns must have compatible data types. For example, you cannot use the EXCEPT clause on a column of type INTEGER and a column of type VARCHAR. If statements do not meet these criteria, Vertica returns an error.
Note
The
Data type coercion chart lists the data types that can be cast to other data types. If one data type can be cast to the other, those two data types are compatible.
-
You can use EXCEPT in FROM, WHERE, and HAVING clauses.
-
You can order the results of an EXCEPT operation by including an ORDER BY operation in the statement. When you write the ORDER BY list, specify the column names from the leftmost SELECT statement, or specify integers that indicate the position of the columns by which to sort.
-
The rightmost ORDER BY, LIMIT, or OFFSET clauses in an EXCEPT query do not need to be enclosed in parentheses, because the rightmost query specifies that Vertica perform the operation on the results of the EXCEPT operation. Any ORDER BY, LIMIT, or OFFSET clauses contained in SELECT queries that appear earlier in the EXCEPT query must be enclosed in parentheses.
-
Vertica supports EXCEPT noncorrelated subquery predicates. For example:
=> SELECT * FROM T1
WHERE T1.x IN
(SELECT MAX(c1) FROM T2
EXCEPT
SELECT MAX(cc1) FROM T3
EXCEPT
SELECT MAX(d1) FROM T4);
Examples
Consider the following three tables:
Company_A
Id | emp_lname | dept | sales
------+-----------+----------------+-------
1234 | Stephen | auto parts | 1000
5678 | Alice | auto parts | 2500
9012 | Katherine | floral | 500
3214 | Smithson | sporting goods | 1500
(4 rows)
Company_B
Id | emp_lname | dept | sales
------+-----------+-------------+-------
4321 | Marvin | home goods | 250
8765 | Bob | electronics | 20000
9012 | Katherine | home goods | 500
3214 | Smithson | home goods | 1500
(4 rows)
Company_C
Id | emp_lname | dept | sales
------+-----------+----------------+-------
3214 | Smithson | sporting goods | 1500
5432 | Madison | sporting goods | 400
7865 | Cleveland | outdoor | 1500
1234 | Stephen | floral | 1000
(4 rows)
The following query returns the IDs and last names of employees that exist in Company_A, but not in Company_B:
=> SELECT id, emp_lname FROM Company_A
EXCEPT
SELECT id, emp_lname FROM Company_B;
id | emp_lname
------+-----------
1234 | Stephen
5678 | Alice
(2 rows)
The following query sorts the results of the previous query by employee last name:
=> SELECT id, emp_lname FROM Company_A
EXCEPT
SELECT id, emp_lname FROM Company_B
ORDER BY emp_lname ASC;
id | emp_lname
------+-----------
5678 | Alice
1234 | Stephen
(2 rows)
If you order by the column position, the query returns the same results:
=> SELECT id, emp_lname FROM Company_A
EXCEPT
SELECT id, emp_lname FROM Company_B
ORDER BY 2 ASC;
id | emp_lname
------+-----------
5678 | Alice
1234 | Stephen
(2 rows)
The following query returns the IDs and last names of employees that exist in Company_A, but not in Company_B or Company_C:
=> SELECT id, emp_lname FROM Company_A
EXCEPT
SELECT id, emp_lname FROM Company_B
EXCEPT
SELECT id, emp_lname FROM Company_C;
id | emp_lname
------+-----------
5678 | Alice
(1 row)
The following query shows the results of mismatched data types:
=> SELECT id, emp_lname FROM Company_A
EXCEPT
SELECT emp_lname, id FROM Company_B;
ERROR 3429: For 'EXCEPT', types int and varchar are inconsistent
DETAIL: Columns: id and emp_lname
Using the VMart example database, the following query returns information about all Connecticut-based customers who bought items through stores and whose purchases amounted to more than $500, except for those customers who paid cash:
=> SELECT customer_key, customer_name FROM public.customer_dimension
WHERE customer_key IN (SELECT customer_key FROM store.store_sales_fact
WHERE sales_dollar_amount > 500
EXCEPT
SELECT customer_key FROM store.store_sales_fact
WHERE tender_type = 'Cash')
AND customer_state = 'CT';
customer_key | customer_name
--------------+----------------------
15084 | Doug V. Lampert
21730 | Juanita F. Peterson
24412 | Mary U. Garnett
25840 | Ben Z. Taylor
29940 | Brian B. Dobisz
32225 | Ruth T. McNulty
33127 | Darlene Y. Rodriguez
40000 | Steve L. Lewis
44383 | Amy G. Jones
46495 | Kevin H. Taylor
(10 rows)
See also
40.2 - FROM clause
A comma-separated list of data sources to query.
A comma-separated list of data sources to query.
Syntax
FROM dataset[,...] [ TABLESAMPLE(percent) ]
Parameters
dataset
- A set of data to query, one of the following:
-
`TABLESAMPLE(percent)`
- Specifies to return a random sampling of records, where
percent
specifies the approximate sampling size. The percent
value must be between 0 and 100, exclusive, and can include decimal values. The number of records returned is not guaranteed to be the exact percentage specified.
All rows of the data have equal opportunities to be selected. Vertica performs sampling before applying other query filters.
Examples
Count all records in customer_dimension
table:
=> SELECT COUNT(*) FROM customer_dimension;
COUNT
-------
50000
(1 row)
Return a small sampling of rows in table customer_dimension
:
=> SELECT customer_name, customer_state FROM customer_dimension TABLESAMPLE(0.5) WHERE customer_state='IL';
customer_name | customer_state
---------------------+----------------
Amy Y. McNulty | IL
Daniel C. Nguyen | IL
Midori O. Greenwood | IL
Meghan U. Lampert | IL
Tiffany Y. Lang | IL
Laura S. King | IL
Steve T. Nguyen | IL
Craig S. Webber | IL
Luigi A. Lewis | IL
Mark W. Williams | IL
(10 rows)
40.2.1 - Joined-table
Specifies how to join tables.
Specifies how to join tables.
Syntax
table-reference [ join-type ] JOIN table-reference [ TABLESAMPLE(percent) ] [ ON join-predicate ]
Arguments
table-reference
- A table name, optionally qualified.
join-type
- One of the following:
TABLESAMPLE(
percent
)
- Use simple random sampling to return an approximate percentage of records. The percentage value must be greater than 0 and less than 100. All rows in the total potential return set are equally eligible to be included in the sampling. Vertica performs this sampling before other filters in the query are applied. The number of records returned is not guaranteed to be exactly
percent
.
The TABLESAMPLE
option is valid only with user-defined tables and Data Collector (DC) tables. Views and system tables are not supported.
ON
join-predicate
- Specifies the columns to join on.
Invalid for
NATURAL
and CROSS
joins, required for all other join types.
Alternative JOIN syntax options
Vertica supports two older join syntax conventions:
For details, see Join Syntax.
Examples
The following SELECT
statement qualifies its JOIN
clause with the TABLESAMPLE
option:
=> SELECT user_id.id, user_name.name FROM user_name TABLESAMPLE(50)
JOIN user_id TABLESAMPLE(50) ON user_name.id = user_id.id;
id | name
------+--------
489 | Markus
2234 | Cato
763 | Pompey
(3 rows)
40.2.2 - Table-reference
A temporary name used for references to table.
Syntax
[[database.]schema.]table[ [AS] alias]
Parameters
[
database
.]
schema
Database and schema. The default schema is public
. If you specify a database, it must be the current database.
table
- A table in the logical schema.
[AS] alias
- A temporary name used for references to table.
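For illustration, a minimal sketch that qualifies a table by schema and assigns it an alias; the VMart customer_dimension table is assumed:
=> SELECT c.customer_name FROM public.customer_dimension AS c LIMIT 1;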
40.3 - GROUP BY clause
Use the GROUP BY clause with aggregate functions in a SELECT statement to collect data across multiple records.
Use the GROUP BY
clause with aggregate functions in a SELECT
statement to collect data across multiple records. Vertica groups the results into one or more sets of rows that match an expression.
The GROUP BY
clause without aggregates is similar to using SELECT DISTINCT
.
ROLLUP
is an extension to the GROUP BY
clause. ROLLUP
performs subtotal aggregations.
Syntax
GROUP BY [/*+GBYTYPE(algorithm)*/] { expression | aggregate-expression }[,...]
Arguments
-
/*+GBYTYPE(algorithm)*/
- Specifies which algorithm has precedence for implementing this
GROUP BY
clause, over the algorithm the Vertica query optimizer might otherwise choose. You can set algorithm
to one of the following values:
For more information about both algorithms, see GROUP BY implementation options.
expression
- Any expression, including constants and column references in the tables specified in the FROM clause. For example:
column,... column, (expression)
aggregate-expression
- An ordered list of columns, expressions,
CUBE, GROUPING SETS
, or ROLLUP
aggregates.
You can include CUBE
and ROLLUP
aggregates within a GROUPING SETS
aggregate. CUBE
and ROLLUP
aggregates can result in a large amount of output. In that case, use GROUPING SETS
to return only certain results.
You cannot include any aggregates within a CUBE
or ROLLUP
expression.
You can append multiple GROUPING SETS
, CUBE
, or ROLLUP
aggregates in the same query. For example:
GROUP BY a,b,c,d, ROLLUP(a,b)
GROUP BY a,b,c,d, CUBE((a,b),c,d)
GROUP BY a,b,c,d, CUBE(a,b), ROLLUP (c,d)
GROUP BY ROLLUP(a), CUBE(b), GROUPING SETS(c)
GROUP BY a,b,c,d, GROUPING SETS ((a,d),(b,c),CUBE(a,b))
GROUP BY a,b,c,d, GROUPING SETS ((a,d),(b,c),(a,b),(a),(b),())
Usage considerations
-
expression cannot include aggregate functions. However, you can use the GROUP BY clause with CUBE, GROUPING SETS, and ROLLUP
to return summary values for each group.
-
When you create a GROUP BY clause, you must include all non-aggregated columns that appear in the SELECT
list.
-
If the GROUP BY
clause includes a WHERE
clause, Vertica ignores all rows that do not satisfy the WHERE
clause.
Examples
This example shows how to use the WHERE
clause with GROUP BY
. In this case, the example retrieves all employees whose last name begins with S, and ignores all rows that do not meet this criteria. The GROUP BY
clause uses the ILIKE
function to retrieve only last names beginning with S. The aggregate function SUM
computes the total vacation days for each group.
=> SELECT employee_last_name, SUM(vacation_days)
FROM employee_dimension
WHERE employee_last_name ILIKE 'S%'
GROUP BY employee_last_name;
employee_last_name | SUM
--------------------+------
Sanchez | 2892
Smith | 2672
Stein | 2660
(3 rows)
The GROUP BY
clause in the following example groups results by vendor region, and vendor region's biggest deal:
=> SELECT vendor_region, MAX(deal_size) AS "Biggest Deal"
FROM vendor_dimension
GROUP BY vendor_region;
vendor_region | Biggest Deal
---------------+--------------
East | 990889
MidWest | 699163
NorthWest | 76101
South | 854136
SouthWest | 609807
West | 964005
(6 rows)
The following query modifies the previous one with a HAVING
clause, which specifies to return only groups whose maximum deal size exceeds $900,000:
=> SELECT vendor_region, MAX(deal_size) as "Biggest Deal"
FROM vendor_dimension
GROUP BY vendor_region
HAVING MAX(deal_size) > 900000;
vendor_region | Biggest Deal
---------------+--------------
East | 990889
West | 964005
(2 rows)
You can use the GROUP BY
clause with one-dimensional arrays of scalar types. In the following example, grants is an ARRAY[VARCHAR] and grant_values is an ARRAY[INT].
=> SELECT department, grants, SUM(apply_sum(grant_values))
FROM employees
GROUP BY grants, department;
department | grants | SUM
------------+--------------------------+--------
Physics | ["US-7376","DARPA-1567"] | 235000
Astronomy | ["US-7376","DARPA-1567"] | 9000
Physics | ["US-7376"] | 30000
(3 rows)
The GROUP BY
clause without aggregates is similar to using SELECT DISTINCT
. For example, the following two queries return the same results:
=> SELECT DISTINCT household_id FROM customer_dimension;
=> SELECT household_id FROM customer_dimension GROUP BY household_id;
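The GBYTYPE hint shown in the syntax can force a particular GROUP BY implementation. A minimal sketch, assuming the HASH algorithm is desired; see GROUP BY implementation options for the valid algorithm values:
=> SELECT employee_last_name, SUM(vacation_days)
   FROM employee_dimension
   GROUP BY /*+GBYTYPE(HASH)*/ employee_last_name;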
See also
40.3.1 - CUBE aggregate
Automatically performs all possible aggregations of the specified columns, as an extension to the GROUP BY clause.
Automatically performs all possible aggregations of the specified columns, as an extension to the GROUP BY clause.
You can use the CUBE clause with three grouping functions:
Syntax
GROUP BY group-expression[,...]
Parameters
group-expression
- One or both of the following:
-
An expression that is not an aggregate or a grouping function that includes constants and column references in FROM
-specified tables. For example:
column1, (column2+1), column3+column4
-
A multilevel expression, one of the following:
-
ROLLUP
-
CUBE
-
GROUPING SETS
Restrictions
- GROUP BY CUBE does not order data. If you want to sort data, use the ORDER BY clause. The ORDER BY clause must come after the GROUP BY clause.
- You can use CUBE inside a GROUPING SETS expression, but not inside a ROLLUP expression or another CUBE expression.
Levels of CUBE aggregation
If n is the number of grouping columns, CUBE creates 2^n levels of aggregations. For example:
CUBE (A, B, C) creates all possible groupings, resulting in eight groups:
- (A, B, C)
- (A, B)
- (A, C)
- (B, C)
- (A)
- (B)
- (C)
- ()
If you increase the number of CUBE columns, the number of CUBE groupings increases exponentially. The CUBE query may be resource intensive and produce combinations that are not of interest. In that case, consider using the GROUPING SETS aggregate, which allows you to choose specific groupings.
Examples
Using CUBE to return all groupings
Suppose you have a table that contains information about family expenses for books and electricity:
=> SELECT * FROM expenses ORDER BY Category, Year;
Year | Category | Amount
------+-------------+--------
2005 | Books | 39.98
2007 | Books | 29.99
2008 | Books | 29.99
2005 | Electricity | 109.99
2006 | Electricity | 109.99
2007 | Electricity | 229.98
To aggregate the data by both Category and Year using the CUBE aggregate:
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY CUBE(Category, Year) ORDER BY 1, 2, GROUPING_ID();
Category | Year | SUM
-------------+------+--------
Books | 2005 | 39.98
Books | 2007 | 29.99
Books | 2008 | 29.99
Books | | 99.96
Electricity | 2005 | 109.99
Electricity | 2006 | 109.99
Electricity | 2007 | 229.98
Electricity | | 449.96
| 2005 | 149.97
| 2006 | 109.99
| 2007 | 259.97
| 2008 | 29.99
| | 549.92
The results include subtotals for each category and year, and a grand total ($549.92).
Using CUBE with the HAVING clause
This example shows how you can restrict the GROUP BY results by using the HAVING clause with the CUBE aggregate. This query returns only the category totals and the full total:
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY CUBE(Category,Year) HAVING GROUPING(Year)=1;
Category | Year | SUM
-------------+------+--------
Books | | 99.96
Electricity | | 449.96
| | 549.92
The next query returns only the aggregations for the two categories for each year. The GROUPING_ID function specifies to omit the grand total ($549.92):
=> SELECT Category, Year, SUM (Amount) FROM expenses
GROUP BY CUBE(Category,Year) HAVING GROUPING_ID(Category,Year)<2
ORDER BY 1, 2, GROUPING_ID();
Category | Year | SUM
------------+------+--------
Books | 2005 | 39.98
Books | 2007 | 29.99
Books | 2008 | 29.99
Books | | 99.96
Electrical | 2005 | 109.99
Electrical | 2006 | 109.99
Electrical | 2007 | 229.98
Electrical | | 449.96
See also
40.3.2 - GROUPING SETS aggregate
The GROUPING SETS aggregate is an extension to the GROUP BY clause that automatically performs subtotal aggregations on groupings that you specify.
The GROUPING SETS aggregate is an extension to the GROUP BY clause that automatically performs subtotal aggregations on groupings that you specify.
You can use the GROUPING SETS clause with three grouping functions: GROUPING, GROUP_ID, and GROUPING_ID.
To sort data, use the ORDER BY clause. The ORDER BY clause must follow the GROUP BY clause.
Syntax
GROUP BY group-expression[,...]
Parameters
group-expression
- One or both of the following:
  - An expression that is not an aggregate or a grouping function and that includes constants and column references in FROM-specified tables. For example:
    column1, (column2+1), column3+column4
  - A multilevel expression, one of the following:
    - ROLLUP
    - CUBE
    - GROUPING SETS
Defining the groupings
GROUPING SETS allows you to specify exactly which groupings you want in the results. You can also concatenate the groupings as follows:
The following example clauses result in the groupings shown:
This clause...                                               | Defines groupings...
...GROUP BY GROUPING SETS(A,B,C,D)...                        | (A), (B), (C), (D)
...GROUP BY GROUPING SETS((A),(B),(C),(D))...                | (A), (B), (C), (D)
...GROUP BY GROUPING SETS((A,B,C,D))...                      | (A, B, C, D)
...GROUP BY GROUPING SETS(A,B),GROUPING SETS(C,D)...         | (A, C), (B, C), (A, D), (B, D)
...GROUP BY GROUPING SETS((A,B)),GROUPING SETS(C,D)...       | (A, B, C), (A, B, D)
...GROUP BY GROUPING SETS(A,B),GROUPING SETS(ROLLUP(C,D))... | (A,B), (A,B,C), (A,B,C,D)
...GROUP BY A,B,C,GROUPING SETS(ROLLUP(C, D))...             | (A, B, C, D), (A, B, C), (A, B, C)
The last clause contains two identical (A, B, C) groups. In the HAVING clause, use the GROUP_ID function as a predicate to eliminate the duplicate grouping.
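As a minimal sketch of the concatenation (cross-product) rule, the following clause concatenates two single-column grouping sets that each also include the empty grouping. The cross product yields (Category, Year), (Category), (Year), and (), so the query produces the same groupings as GROUP BY CUBE(Category, Year):
=> SELECT Category, Year, SUM(Amount) FROM expenses
   GROUP BY GROUPING SETS(Category, ()), GROUPING SETS(Year, ())
   ORDER BY 1, 2, GROUPING_ID();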
Example: selecting groupings
This example shows how to select only those groupings you want. Suppose you want to aggregate only on specific column groupings, and you do not need the grand total. The first query omits the total. In the second query, you add () to the GROUPING SETS list to get the total. Use the ORDER BY clause to sort the results by grouping:
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY GROUPING SETS((Category, Year), (Year))
ORDER BY 1, 2, GROUPING_ID();
Category | Year | SUM
------------+------+--------
Books | 2005 | 39.98
Books | 2007 | 29.99
Books | 2008 | 29.99
 Electricity | 2005 | 109.99
 Electricity | 2006 | 109.99
 Electricity | 2007 | 229.98
| 2005 | 149.97
| 2006 | 109.99
| 2007 | 259.97
| 2008 | 29.99
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY GROUPING SETS((Category, Year), (Year), ())
ORDER BY 1, 2, GROUPING_ID();
Category | Year | SUM
------------+------+--------
Books | 2005 | 39.98
Books | 2007 | 29.99
Books | 2008 | 29.99
 Electricity | 2005 | 109.99
 Electricity | 2006 | 109.99
 Electricity | 2007 | 229.98
| 2005 | 149.97
| 2006 | 109.99
| 2007 | 259.97
| 2008 | 29.99
| | 549.92
See also
40.3.3 - ROLLUP aggregate
Automatically performs subtotal aggregations as an extension to the GROUP BY clause.
Automatically performs subtotal aggregations as an extension to the GROUP BY clause. ROLLUP performs these aggregations across multiple dimensions, at different levels, within a single SQL query.
You can use the ROLLUP clause with three grouping functions: GROUPING, GROUP_ID, and GROUPING_ID.
Syntax
ROLLUP grouping-expression[,...]
Parameters
group-expression
- One or both of the following:
  - An expression that is not an aggregate or a grouping function and that includes constants and column references in FROM-specified tables. For example:
    column1, (column2+1), column3+column4
  - A multilevel expression, one of the following:
    - ROLLUP
    - CUBE
    - GROUPING SETS
Restrictions
GROUP BY ROLLUP does not sort results. To sort data, an ORDER BY clause must follow the GROUP BY clause.
Levels of aggregation
If n is the number of grouping columns, ROLLUP creates n+1 levels of subtotals plus a grand total. Because ROLLUP removes the right-most column at each step, specify column order carefully.
If you enclose two or more columns in parentheses, GROUP BY treats them as a single entity. For example:
- ROLLUP(A, B, C) creates four groups:
  (A, B, C)
  (A, B)
  (A)
  ()
  Because ROLLUP removes the right-most column at each step, there are no groups for (A, C) and (B, C).
- ROLLUP((A, B), C) treats (A, B) as a single entity and creates three groups:
  (A, B, C)
  (A, B)
  ()
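As a sketch against the expenses table used in the following example, treating (Category, Year) as a single entity produces only the detail rows and the grand total, with no per-category subtotals:
=> SELECT Category, Year, SUM(Amount) FROM expenses
   GROUP BY ROLLUP((Category, Year)) ORDER BY 1, 2, GROUPING_ID();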
Example: aggregating the full data set
The following example shows how to use the GROUP BY clause to determine family expenses for electricity and books over several years. The SUM aggregate function computes the total amount of money spent in each category per year.
Suppose you have a table that contains information about family expenses for books and electricity:
=> SELECT * FROM expenses ORDER BY Category, Year;
Year | Category | Amount
------+-------------+--------
2005 | Books | 39.98
2007 | Books | 29.99
2008 | Books | 29.99
2005 | Electricity | 109.99
2006 | Electricity | 109.99
2007 | Electricity | 229.98
For the expenses table, ROLLUP computes the subtotals in each category across all years:
- Books: $99.96
- Electricity: $449.96
- Grand total: $549.92
Use the ORDER BY clause to sort the results:
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY ROLLUP(Category, Year) ORDER BY 1,2, GROUPING_ID();
Category | Year | SUM
-------------+------+--------
Books | 2005 | 39.98
Books | 2007 | 29.99
Books | 2008 | 29.99
Books | | 99.96
Electricity | 2005 | 109.99
Electricity | 2006 | 109.99
Electricity | 2007 | 229.98
Electricity | | 449.96
| | 549.92
Example: using ROLLUP with the HAVING clause
This example shows how to use the HAVING clause with ROLLUP to restrict the GROUP BY results. The following query produces only those ROLLUP categories where year is subtotaled, based on the expression in the GROUPING function:
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY ROLLUP(Category,Year) HAVING GROUPING(Year)=1
ORDER BY 1, 2, GROUPING_ID();
Category | Year | SUM
-------------+------+--------
Books | | 99.96
Electricity | | 449.96
| | 549.92
The next example rolls up on (Category, Year) but excludes the grand total. The HAVING condition GROUPING_ID(Category, Year) < 3 omits the row where both columns are aggregated:
=> SELECT Category, Year, SUM(Amount) FROM expenses
GROUP BY ROLLUP(Category,Year) HAVING GROUPING_ID(Category,Year)<3
ORDER BY 1, 2, GROUPING_ID();
Category | Year | SUM
-------------+------+--------
Books | 2005 | 39.98
Books | 2007 | 29.99
Books | 2008 | 29.99
Books | | 99.96
Electricity | 2005 | 109.99
Electricity | 2006 | 109.99
Electricity | 2007 | 229.98
Electricity | | 449.96
See also
40.4 - HAVING clause
Filters the results of a GROUP BY clause.
Filters the results of a GROUP BY clause. Semantically, the HAVING clause occurs after the GROUP BY operation. It was added to the SQL standard because a WHERE clause cannot specify aggregate functions.
Syntax
HAVING condition[,...]
Parameters
condition
- Unambiguously references a grouping column, unless the reference appears in an aggregate function.
Examples
The following example returns the employees with salaries greater than $800,000:
=> SELECT employee_last_name, MAX(annual_salary) as highest_salary FROM employee_dimension
GROUP BY employee_last_name HAVING MAX(annual_salary) > 800000 ORDER BY highest_salary DESC;
employee_last_name | highest_salary
--------------------+----------------
Sanchez | 992363
Vogel | 983634
Vu | 977716
Lewis | 957949
Taylor | 953373
King | 937765
Gauthier | 927335
Garnett | 903104
Bauer | 901181
Jones | 885395
Rodriguez | 861647
Young | 846657
Greenwood | 837543
Overstreet | 831317
Garcia | 811231
(15 rows)
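Because a WHERE clause cannot reference aggregates, filter individual rows with WHERE before grouping and filter groups with HAVING afterward. The following sketch uses the same table; the ILIKE filter is illustrative, not part of the original example. It restricts the input rows to last names starting with S, then keeps only groups whose maximum salary exceeds $800,000:
=> SELECT employee_last_name, MAX(annual_salary) AS highest_salary FROM employee_dimension
   WHERE employee_last_name ILIKE 'S%'
   GROUP BY employee_last_name HAVING MAX(annual_salary) > 800000
   ORDER BY highest_salary DESC;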
40.5 - INTERSECT clause
Calculates the intersection of the results of two or more SELECT queries.
Calculates the intersection of the results of two or more SELECT queries. INTERSECT returns only distinct values that are returned by both the query on the left and the query on the right of the INTERSECT operand.
Syntax
select-stmt
INTERSECT query[...]
[ order-by-clause [ offset-clause ]]
[ limit-clause ]
Notes
-
Use the INTERSECT clause to return all elements that are common to the results of all the SELECT queries. The INTERSECT query operates on the results of two or more SELECT queries. INTERSECT returns only the rows that are returned by all the specified queries.
-
You cannot use the ALL keyword with an INTERSECT query.
-
The results of each SELECT query must be union compatible; they must return the same number of columns, and the corresponding columns must have compatible data types. For example, you cannot use the INTERSECT clause on a column of type INTEGER and a column of type VARCHAR. If the SELECT queries do not meet these criteria, Vertica returns an error.
Note
The Data type coercion chart lists the data types that can be cast to other data types. If one data type can be cast to the other, those two data types are compatible.
-
Order the results of an INTERSECT operation by using an ORDER BY clause. In the ORDER BY list, specify the column names from the leftmost SELECT statement or specify integers that indicate the position of the columns by which to sort.
-
You can use INTERSECT in FROM, WHERE, and HAVING clauses.
-
The rightmost ORDER BY, LIMIT, or OFFSET clauses in an INTERSECT query do not need to be enclosed in parentheses because the rightmost query specifies that Vertica perform the operation on the results of the INTERSECT operation. Any ORDER BY, LIMIT, or OFFSET clauses contained in SELECT queries that appear earlier in the INTERSECT query must be enclosed in parentheses.
-
The ORDER BY column names must come from the first (leftmost) SELECT statement.
-
Vertica supports INTERSECT noncorrelated subquery predicates. For example:
=> SELECT * FROM T1
WHERE T1.x IN
(SELECT MAX(c1) FROM T2
INTERSECT
SELECT MAX(cc1) FROM T3
INTERSECT
SELECT MAX(d1) FROM T4);
Examples
Consider the following three tables:
Company_A
  id  | emp_lname  |      dept      | sales
------+------------+----------------+-------
1234 | Stephen | auto parts | 1000
5678 | Alice | auto parts | 2500
9012 | Katherine | floral | 500
3214 | Smithson | sporting goods | 1500
Company_B
  id  | emp_lname  |    dept     | sales
------+------------+-------------+-------
4321 | Marvin | home goods | 250
9012 | Katherine | home goods | 500
8765 | Bob | electronics | 20000
3214 | Smithson | home goods | 1500
Company_C
id | emp_lname | dept | sales
------+-----------+----------------+-------
3214 | Smithson | sporting goods | 1500
5432 | Madison | sporting goods | 400
7865 | Cleveland | outdoor | 1500
1234 | Stephen | floral | 1000
The following query returns the IDs and last names of employees that exist in both Company_A and Company_B:
=> SELECT id, emp_lname FROM Company_A
INTERSECT
SELECT id, emp_lname FROM Company_B;
id | emp_lname
------+-----------
3214 | Smithson
9012 | Katherine
(2 rows)
The following query returns the same two employees in descending order of sales:
=> SELECT id, emp_lname, sales FROM Company_A
INTERSECT
SELECT id, emp_lname, sales FROM Company_B
ORDER BY sales DESC;
id | emp_lname | sales
------+-----------+-------
3214 | Smithson | 1500
9012 | Katherine | 500
(2 rows)
The following query returns the employee who works for both companies whose sales in Company_B are greater than 1000:
=> SELECT id, emp_lname, sales FROM Company_A
INTERSECT
(SELECT id, emp_lname, sales FROM company_B WHERE sales > 1000)
ORDER BY sales DESC;
id | emp_lname | sales
------+-----------+-------
3214 | Smithson | 1500
(1 row)
The following query returns the ID and last name of the employee who works for all three companies:
=> SELECT id, emp_lname FROM Company_A
INTERSECT
SELECT id, emp_lname FROM Company_B
INTERSECT
SELECT id, emp_lname FROM Company_C;
id | emp_lname
------+-----------
3214 | Smithson
(1 row)
The following query shows the result of mismatched data types; these two queries are not union compatible:
=> SELECT id, emp_lname FROM Company_A
INTERSECT
SELECT emp_lname, id FROM Company_B;
ERROR 3429: For 'INTERSECT', types int and varchar are inconsistent
DETAIL: Columns: id and emp_lname
Using the VMart example database, the following query returns information about all Connecticut-based customers who bought items online and whose purchase amounts were between $400 and $500:
=> SELECT customer_key, customer_name from public.customer_dimension
WHERE customer_key IN (SELECT customer_key
FROM online_sales.online_sales_fact
WHERE sales_dollar_amount > 400
INTERSECT
SELECT customer_key FROM online_sales.online_sales_fact
WHERE sales_dollar_amount > 500)
AND customer_state = 'CT' ORDER BY customer_key;
customer_key | customer_name
--------------+------------------------
39 | Sarah S. Winkler
44 | Meghan H. Overstreet
70 | Jack X. Cleveland
103 | Alexandra I. Vu
110 | Matt . Farmer
173 | Mary R. Reyes
188 | Steve G. Williams
233 | Theodore V. McNulty
250 | Marcus E. Williams
294 | Samantha V. Young
313 | Meghan P. Pavlov
375 | Sally N. Vu
384 | Emily R. Smith
387 | Emily L. Garcia
...
The previous query and the next one are equivalent, and return the same results:
=> SELECT customer_key,customer_name FROM public.customer_dimension
WHERE customer_key IN (SELECT customer_key
FROM online_sales.online_sales_fact
WHERE sales_dollar_amount > 400
AND sales_dollar_amount < 500)
AND customer_state = 'CT' ORDER BY customer_key;
See also
40.6 - INTO TABLE clause
Creates a table from a query result set.
Creates a table from a query result set.
Syntax
Permanent table:
INTO [TABLE] [[{namespace. | database. }]schema.]table
Temporary table:
INTO [scope] TEMP[ORARY] [TABLE] [[database.]schema.]table
[ ON COMMIT { DELETE | PRESERVE } ROWS ]
Parameters
scope
- Specifies visibility of a temporary table definition:
  - GLOBAL (default): The table definition is visible to all sessions, and persists until you explicitly drop the table.
  - LOCAL: The table definition is visible only to the session in which it is created, and is dropped when the session ends.
  Regardless of this setting, retention of temporary table data is set by the keywords ON COMMIT DELETE ROWS and ON COMMIT PRESERVE ROWS (see below).
  For more information, see Creating temporary tables.
{ database | namespace }
- Name of the database or namespace that contains table:
  - Database name: If specified, it must be the current database.
  - Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
  If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table
- The name of the table to create.
ON COMMIT { DELETE | PRESERVE } ROWS
- Specifies whether data is transaction- or session-scoped:
  - DELETE (default) marks the temporary table for transaction-scoped data. Vertica removes all table data after each commit.
  - PRESERVE marks the temporary table for session-scoped data, which is preserved beyond the lifetime of a single transaction. Vertica removes all table data when the session ends.
Examples
The following SELECT
statement has an INTO TABLE
clause that creates table newTable
from customer_dimension
:
=> SELECT * INTO TABLE newTable FROM customer_dimension;
The following SELECT
statement creates temporary table newTempTable
. By default, temporary tables are created at a global scope, so the table definition is visible to other sessions and persists until it is explicitly dropped. No customer_dimension
data is copied into the new table, and Vertica issues a warning accordingly:
=> SELECT * INTO TEMP TABLE newTempTable FROM customer_dimension;
WARNING 4102: No rows are inserted into table "public"."newTempTable" because
ON COMMIT DELETE ROWS is the default for create temporary table
HINT: Use "ON COMMIT PRESERVE ROWS" to preserve the data in temporary table
CREATE TABLE
The following SELECT
statement creates local temporary table newTempTableLocal
. This table is visible only to the session in which it was created, and is automatically dropped when the session ends. The INTO TABLE
clause includes ON COMMIT PRESERVE ROWS
, so Vertica copies all selection data into the new table:
=> SELECT * INTO LOCAL TEMP TABLE newTempTableLocal ON COMMIT PRESERVE ROWS
FROM customer_dimension;
CREATE TABLE
40.7 - LIMIT clause
Specifies the maximum number of result set rows to return, either from the entire result set, or from windows of a partitioned result set.
Specifies the maximum number of result set rows to return, either from the entire result set, or from windows of a partitioned result set.
Syntax
LIMIT { num-rows [ OVER ( [ PARTITION BY partition-expr[,...] ] ORDER BY order-expr[,...] ) ] | ALL }
Arguments
num-rows
- The maximum number of rows to return.
OVER()
- Specifies how to partition and sort input data with respect to the current row. The input data is the result set that the query returns after it evaluates FROM, WHERE, GROUP BY, and HAVING clauses.
For details, see Using LIMIT with Window Partitioning below.
partition-expr
- Column or expression returning a column to partition results by.
order-expr
[ ASC | DESC ] [ NULLS { FIRST | LAST } ]
- Column or expression returning the column to order results by. Each column can be sorted in ascending (default) or descending order, with nulls first or last.
ALL
- Returns all rows, valid only when LIMIT is applied to the entire result set.
Limiting returned rows
LIMIT returns the specified number of rows from the queried dataset. Row precedence is determined by the query's ORDER BY clause. If you do not use an ORDER BY clause, the query returns an undefined subset of the result set.
When a SELECT statement specifies both LIMIT and OFFSET, Vertica first processes the OFFSET, and then applies LIMIT to the remaining rows.
The following query returns the first 10 rows of data, as ordered first by region and then by number of employees:
=> SELECT store_region, store_city||', '||store_state location, store_name, number_of_employees
FROM store.store_dimension WHERE number_of_employees <= 12
ORDER BY store_region, number_of_employees LIMIT 10;
store_region | location | store_name | number_of_employees
--------------+----------------+------------+---------------------
East | Stamford, CT | Store219 | 12
East | New Haven, CT | Store66 | 12
East | New York, NY | Store122 | 12
MidWest | South Bend, IN | Store134 | 10
MidWest | Evansville, IN | Store30 | 11
MidWest | Green Bay, WI | Store27 | 12
South | Mesquite, TX | Store124 | 10
South | Cape Coral, FL | Store18 | 11
South | Beaumont, TX | Store226 | 11
South | Houston, TX | Store33 | 11
(10 rows)
Using LIMIT with window partitioning
You can use LIMIT to apply window partitioning on query results and limit the number of rows that are returned in each window.
For example, the following query specifies window partitioning on the result set. LIMIT is set to 2, so each window partition can return no more than two rows. The OVER
clause partitions the result set by store_region
, so in each region the query returns the two stores with the smallest number of employees:
=> SELECT store_region, store_city||', '||store_state location, store_name, number_of_employees
FROM store.store_dimension
LIMIT 2
OVER (PARTITION BY store_region ORDER BY number_of_employees ASC);
store_region | location | store_name | number_of_employees
--------------+---------------------+------------+---------------------
West | Norwalk, CA | Store43 | 10
West | Lancaster, CA | Store95 | 11
East | Stamford, CT | Store219 | 12
East | New York, NY | Store122 | 12
SouthWest | North Las Vegas, NV | Store170 | 10
SouthWest | Phoenix, AZ | Store228 | 11
NorthWest | Bellevue, WA | Store200 | 19
NorthWest | Portland, OR | Store39 | 22
MidWest | South Bend, IN | Store134 | 10
MidWest | Evansville, IN | Store30 | 11
South | Mesquite, TX | Store124 | 10
South | Beaumont, TX | Store226 | 11
(12 rows)
40.8 - MATCH clause
A SQL extension that lets you screen large amounts of historical data in search of event patterns, the MATCH clause provides subclasses for analytic partitioning and ordering and matches rows from the result table based on a pattern you define.
A SQL extension that lets you screen large amounts of historical data in search of event patterns, the MATCH clause provides subclasses for analytic partitioning and ordering and matches rows from the result table based on a pattern you define.
You specify a pattern as a regular expression, which is composed of event types defined in the DEFINE subclause, where each event corresponds to a row in the input table. Then you can search for the pattern within a sequence of input events. Pattern matching returns the contiguous sequence of rows that conforms to the PATTERN subclause. For example, pattern P (A B* C) consists of three event types: A, B, and C. When Vertica finds a match in the input table, the associated pattern instance must be an event of type A, followed by zero or more events of type B, and an event of type C.
Pattern matching is particularly useful for clickstream analysis where you might want to identify users' actions based on their Web browsing behavior (page clicks). For details, see Event series pattern matching.
Syntax
MATCH ( [ PARTITION BY table-column ] ORDER BY table-column
DEFINE event-name AS boolean-expr [,...]
PATTERN pattern-name AS ( regexp)
[ rows-match-clause ] )
Arguments
PARTITION BY
- Defines the window data scope in which the pattern, defined in the PATTERN subclause, is matched. The partition clause partitions the data by the specified columns. For each partition, data is sorted by the ORDER BY clause. If the partition clause is omitted, the entire data set is considered a single partition.
ORDER BY
- Defines the window data scope in which the pattern, defined in the PATTERN subclause, is matched. For each partition, the order clause specifies how the input data is ordered for pattern matching.
Note
The ORDER BY clause is mandatory.
DEFINE
- Defines the boolean expressions that make up the event types in the regular expressions. For example:
DEFINE
Entry AS RefURL NOT ILIKE '%website2.com%' AND PageURL ILIKE
'%website2.com%',
Onsite AS PageURL ILIKE '%website2.com%' AND Action='V',
Purchase AS PageURL ILIKE '%website2.com%' AND Action='P'
The DEFINE subclause accepts a maximum of 52 events. See Event series pattern matching for examples.
event-name
- Name of the event to evaluate for each row—in the earlier example, Entry, Onsite, and Purchase.
Note
Event names are case insensitive and follow the same naming conventions as those used for tables and columns.
boolean-expr
- Expression that returns true or false. boolean-expr can include logical operators and relational (comparison) operators. For example:
Purchase AS PageURL ILIKE '%website2.com%' AND Action = 'P'
PATTERN
pattern-name
- Name of the pattern defined in the PATTERN subclause; for example, P is the pattern name defined below:
PATTERN P AS (...)
A PATTERN is a search pattern that is comprised of a name and a regular expression.
Note
Vertica supports one pattern per query.
regexp
- A regular expression composed of event types defined in the DEFINE subclause and one or more of the quantifiers below. When Vertica evaluates the MATCH clause, the regular expression identifies the rows that meet the expression criteria.
-
*
: Match 0 or more times
-
*?
: Match 0 or more times, not greedily
-
+
: Match 1 or more times
-
+?
: Match 1 or more times, not greedily
-
?
: Match 0 or 1 time
-
??
: Match 0 or 1 time, not greedily
-
*+
: Match 0 or more times, possessive
-
++
: Match 1 or more times, possessive
-
?+
: Match 0 or 1 time, possessive
-
|
: Alternation. Matches expression before or after the vertical bar. Similar to a Boolean OR.
rows-match-clause
- Specifies how to resolve more than one event evaluating to true for a single row, one of the following:
-
ROWS MATCH ALL EVENTS
: If more than one event evaluates to true for a single row, Vertica returns this error:
ERROR: pattern events must be mutually exclusive
HINT: try using ROWS MATCH FIRST EVENT
-
ROWS MATCH FIRST EVENT
: If more than one event evaluates to true for a given row, Vertica uses the first event in the SQL statement for that row.
Pattern semantic evaluation
-
The semantic evaluation order of the SQL clauses is: FROM -> WHERE -> PATTERN MATCH -> SELECT.
-
Data is partitioned as specified in the PARTITION BY clause. If the partition clause is omitted, the entire data set is considered a single partition.
-
For each partition, the order clause specifies how the input data is ordered for pattern matching.
-
Events are evaluated for each row. A row could have 0, 1, or N
events evaluate to true. If more than one event evaluates to true for the same row, Vertica returns a run-time error unless you specify ROWS MATCH FIRST EVENT. If you specify ROWS MATCH FIRST EVENT and more than one event evaluates to TRUE for a single row, Vertica chooses the event that was defined first in the SQL statement to be the event it uses for the row.
-
Vertica performs pattern matching by finding the contiguous sequence of rows that conforms to the pattern defined in the PATTERN subclause.
For each match, Vertica outputs the rows that contribute to the match. Rows not part of the match (do not satisfy one or more predicates) are not output.
-
Vertica reports only non-overlapping matches. If an overlap occurs, Vertica chooses the first match found in the input stream. After finding the match, Vertica looks for the next match, starting at the end of the previous match.
-
Vertica reports the longest possible match, not a subset of a match. For example, consider pattern A*B with input AAAB. Because A* uses the greedy regular expression quantifier (*), Vertica reports all A inputs (AAAB), not AAB, AB, or B.
Notes and restrictions
-
DISTINCT and GROUP BY/HAVING clauses are not allowed in pattern match queries.
-
The following expressions are not allowed in the DEFINE subclause:
-
Subqueries, such as DEFINE X AS c IN (SELECT c FROM table1)
-
Analytic functions, such as DEFINE X AS c < LEAD(1) OVER (ORDER BY 1)
-
Aggregate functions, such as DEFINE X AS c < MAX(1)
-
You cannot use the same event name to define a different event; for example, the following is not allowed for X:
DEFINE X AS c1 < 3
X AS c1 >= 3
-
Used with the MATCH clause, Vertica MATCH clause functions provide additional data about the patterns they find. For example, you can use the functions to return values representing the name of the event that matched the input row, the sequential number of the match, or a partition-wide unique identifier for the instance of the pattern that matched.
Examples
For examples, see Event series pattern matching.
See also
40.8.1 - Event series pattern matching
The SQL MATCH clause syntax lets you screen large amounts of historical data in search of event patterns.
The SQL MATCH clause syntax lets you screen large amounts of historical data in search of event patterns. You specify a pattern as a regular expression and can then search for the pattern within a sequence of input events. MATCH provides subclauses for analytic data partitioning and ordering, and the pattern matching occurs on a contiguous set of rows.
Pattern matching is particularly useful for clickstream analysis where you might want to identify users' actions based on their Web browsing behavior (page clicks). A typical online clickstream funnel is:
Company home page -> product home page -> search -> results -> purchase online
Using this clickstream funnel, you can search for a match on the user's sequence of web clicks and identify that user:
-
Landed on the company home page
-
Navigated to the product page
-
Ran a search
-
Clicked a link from the search results
-
Made a purchase
Clickstream funnel schema
The examples in this topic use this clickstream funnel and the following clickstream_log
table schema:
=> CREATE TABLE clickstream_log (
uid INT, --user ID
sid INT, --browsing session ID, produced by previous sessionization computation
ts TIME, --timestamp that occurred during the user's page visit
refURL VARCHAR(20), --URL of the page referencing PageURL
pageURL VARCHAR(20), --URL of the page being visited
action CHAR(1) --action the user took after visiting the page ('P' = Purchase, 'V' = View)
);
INSERT INTO clickstream_log VALUES (1,100,'12:00','website1.com','website2.com/home', 'V');
INSERT INTO clickstream_log VALUES (1,100,'12:01','website2.com/home','website2.com/floby', 'V');
INSERT INTO clickstream_log VALUES (1,100,'12:02','website2.com/floby','website2.com/shamwow', 'V');
INSERT INTO clickstream_log values (1,100,'12:03','website2.com/shamwow','website2.com/buy', 'P');
INSERT INTO clickstream_log values (2,100,'12:10','website1.com','website2.com/home', 'V');
INSERT INTO clickstream_log values (2,100,'12:11','website2.com/home','website2.com/forks', 'V');
INSERT INTO clickstream_log values (2,100,'12:13','website2.com/forks','website2.com/buy', 'P');
COMMIT;
Here's the clickstream_log table's output:
=> SELECT * FROM clickstream_log;
uid | sid | ts | refURL | pageURL | action
-----+-----+----------+----------------------+----------------------+--------
1 | 100 | 12:00:00 | website1.com | website2.com/home | V
1 | 100 | 12:01:00 | website2.com/home | website2.com/floby | V
1 | 100 | 12:02:00 | website2.com/floby | website2.com/shamwow | V
1 | 100 | 12:03:00 | website2.com/shamwow | website2.com/buy | P
2 | 100 | 12:10:00 | website1.com | website2.com/home | V
2 | 100 | 12:11:00 | website2.com/home | website2.com/forks | V
2 | 100 | 12:13:00 | website2.com/forks | website2.com/buy | P
(7 rows)
Examples
This example includes the Vertica MATCH clause functions to analyze users' browsing history over website2.com. It identifies patterns where the user performed the following tasks:
-
Landed on website2.com from another web site (Entry)
-
Browsed to any number of other pages (Onsite)
-
Made a purchase (Purchase)
In the following statement, pattern P (Entry Onsite* Purchase) consists of three event types: Entry, Onsite, and Purchase. When Vertica finds a match in the input table, the associated pattern instance must be an event of type Entry, followed by zero or more events of type Onsite, and an event of type Purchase.
=> SELECT uid,
sid,
ts,
refurl,
pageurl,
action,
event_name(),
pattern_id(),
match_id()
FROM clickstream_log
MATCH
(PARTITION BY uid, sid ORDER BY ts
DEFINE
Entry AS RefURL NOT ILIKE '%website2.com%' AND PageURL ILIKE '%website2.com%',
Onsite AS PageURL ILIKE '%website2.com%' AND Action='V',
Purchase AS PageURL ILIKE '%website2.com%' AND Action = 'P'
PATTERN
P AS (Entry Onsite* Purchase)
ROWS MATCH FIRST EVENT);
In the output below, the first four rows represent the pattern for user 1's browsing activity, while the following three rows show user 2's browsing habits.
uid | sid | ts | refurl | pageurl | action | event_name | pattern_id | match_id
-----+-----+----------+----------------------+----------------------+--------+------------+------------+----------
1 | 100 | 12:00:00 | website1.com | website2.com/home | V | Entry | 1 | 1
1 | 100 | 12:01:00 | website2.com/home | website2.com/floby | V | Onsite | 1 | 2
1 | 100 | 12:02:00 | website2.com/floby | website2.com/shamwow | V | Onsite | 1 | 3
1 | 100 | 12:03:00 | website2.com/shamwow | website2.com/buy | P | Purchase | 1 | 4
2 | 100 | 12:10:00 | website1.com | website2.com/home | V | Entry | 1 | 1
2 | 100 | 12:11:00 | website2.com/home | website2.com/forks | V | Onsite | 1 | 2
2 | 100 | 12:13:00 | website2.com/forks | website2.com/buy | P | Purchase | 1 | 3
(7 rows)
See also
40.9 - MINUS clause
MINUS is an alias for EXCEPT.
MINUS is an alias for EXCEPT.
40.10 - OFFSET clause
Omits a specified number of rows from the beginning of the result set.
Omits a specified number of rows from the beginning of the result set.
Syntax
OFFSET rows
Parameters
rows
- The number of rows to omit from the beginning of the result set. The result set begins with the first row after the omitted rows.
Dependencies
-
Use an ORDER BY clause with OFFSET. Otherwise, the query returns an undefined subset of the result set.
-
OFFSET must follow the ORDER BY clause in a SELECT statement or UNION clause.
-
When a SELECT statement or UNION clause specifies both LIMIT and OFFSET, Vertica first processes the OFFSET statement, and then applies the LIMIT statement to the remaining rows.
Examples
The following query returns 14 rows from the customer_dimension
table:
=> SELECT customer_name, customer_gender FROM customer_dimension
WHERE occupation='Dancer' AND customer_city = 'San Francisco' ORDER BY customer_name;
customer_name | customer_gender
----------------------+-----------------
Amy X. Lang | Female
Anna H. Li | Female
Brian O. Weaver | Male
Craig O. Pavlov | Male
Doug Z. Goldberg | Male
Harold S. Jones | Male
Jack E. Perkins | Male
Joseph W. Overstreet | Male
Kevin . Campbell | Male
Raja Y. Wilson | Male
Samantha O. Brown | Female
Steve H. Gauthier | Male
William . Nielson | Male
William Z. Roy | Male
(14 rows)
If you modify the previous query to specify an offset of 8 (OFFSET 8), Vertica skips the first eight rows of the previous result set. The query returns the following results:
=> SELECT customer_name, customer_gender FROM customer_dimension
WHERE occupation='Dancer' AND customer_city = 'San Francisco' ORDER BY customer_name OFFSET 8;
customer_name | customer_gender
-------------------+-----------------
Kevin . Campbell | Male
Raja Y. Wilson | Male
Samantha O. Brown | Female
Steve H. Gauthier | Male
William . Nielson | Male
William Z. Roy | Male
(6 rows)
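Because Vertica processes OFFSET before LIMIT, you can combine the two clauses to page through an ordered result set. The following sketch skips eight rows and then returns three; given the 14-row result set shown earlier, it should return Kevin . Campbell, Raja Y. Wilson, and Samantha O. Brown:
=> SELECT customer_name, customer_gender FROM customer_dimension
   WHERE occupation='Dancer' AND customer_city = 'San Francisco'
   ORDER BY customer_name LIMIT 3 OFFSET 8;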
40.11 - ORDER BY clause
Sorts a query result set on one or more columns or column expressions.
Sorts a query result set on one or more columns or column expressions. Vertica uses the current locale and collation sequence to compare and sort string values.
Note
Vertica projection data is always stored sorted by the ASCII (binary) collating sequence.
Syntax
ORDER BY expression [ ASC | DESC ] [,...]
Parameters
expression
- One of the following:
-
Name or ordinal number of a SELECT list item. The ordinal number refers to the position of the result column, counting from the left beginning at one. Use them to order by a column whose name is not unique. Ordinal numbers are invalid for an ORDER BY clause of an analytic function's OVER clause.
-
Arbitrary expression formed from columns that do not appear in the SELECT list
-
CASE expression.
Note
You cannot use DISTINCT on a collection column if it is also included in the sort order.
- ASC | DESC
- Specifies whether to sort values in ascending or descending order. NULL values are either first or last in the sort order, depending on data type:
-
INTEGER, INT, DATE/TIME: NULL has the smallest value.
-
FLOAT, BOOLEAN, CHAR, VARCHAR, ARRAY, SET: NULL has the largest value.
Examples
The following example returns the city and deal size for customer Metamedia, sorted by deal size in descending order.
=> SELECT customer_city, deal_size FROM customer_dimension WHERE customer_name = 'Metamedia'
ORDER BY deal_size DESC;
customer_city | deal_size
------------------+-----------
El Monte | 4479561
Athens | 3815416
Ventura | 3792937
Peoria | 3227765
Arvada | 2671849
Coral Springs | 2643674
Fontana | 2374465
Rancho Cucamonga | 2214002
Wichita Falls | 2117962
Beaumont | 1898295
Arvada | 1321897
Waco | 1026854
Joliet | 945404
Hartford | 445795
(14 rows)
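The same ordering can be expressed with an ordinal number that refers to the second item in the SELECT list. This sketch returns the rows in the same order as the previous query:
=> SELECT customer_city, deal_size FROM customer_dimension WHERE customer_name = 'Metamedia'
   ORDER BY 2 DESC;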
The following example uses a transform function. It returns an error because the ORDER BY column is not in the window partition.
=> CREATE TABLE t(geom geometry(200), geog geography(200));
=> SELECT PolygonPoint(geom) OVER(PARTITION BY geom)
AS SEL_0 FROM t ORDER BY geog;
ERROR 2521: Cannot specify anything other than user defined transforms and partitioning expressions in the ORDER BY list
The following example, using the same table, corrects this error.
=> SELECT PolygonPoint(geom) OVER(PARTITION BY geom)
AS SEL_0 FROM t ORDER BY geom;
The following example uses an array in the ORDER BY clause.
=> SELECT * FROM employees
ORDER BY grant_values;
id | department | grants | grant_values
----+------------+--------------------------+----------------
36 | Astronomy | ["US-7376","DARPA-1567"] | [5000,4000]
36 | Physics | ["US-7376","DARPA-1567"] | [10000,25000]
33 | Physics | ["US-7376"] | [30000]
42 | Physics | ["US-7376","DARPA-1567"] | [65000,135000]
(4 rows)
40.12 - TIMESERIES clause
Provides gap-filling and interpolation (GFI) computation, an important component of time series analytics computation.
Provides gap-filling and interpolation (GFI) computation, an important component of time series analytics computation. See Time series analytics for details and examples.
Syntax
TIMESERIES slice-time AS 'length-and-time-unit-expr' OVER (
[ PARTITION BY column-expr[,...] ] ORDER BY time-expr ) [ ORDER BY table-column[,...] ]
Parameters
slice-time
- A time column produced by the
TIMESERIES
clause, which stores the time slice start times generated from gap filling.
Note: This parameter is an alias, so you can use any name that an alias would take.
length-and-time-unit-expr
- An
INTERVAL DAY TO SECOND
literal that specifies the length of time unit of time slice computation. For example:
TIMESERIES slice_time AS '3 seconds' ...
OVER()
- Specifies partitioning and ordering for the function.
OVER()
also specifies that the time series function operates on a query result set—that is, the rows that are returned after the FROM
, WHERE
, GROUP BY
, and HAVING
clauses are evaluated.
-
PARTITION BY column-expr[,...]
- Partitions the data by the specified column expressions. Gap filling and interpolation is performed on each partition separately.
ORDER BY
time-expr
- Sorts the data by the
TIMESTAMP
expression time-expr
, which computes the time information of the time series data.
Note
The TIMESERIES
clause requires an ORDER BY
operation on the timestamp column.
Notes
If the window-partition-clause is not specified in TIMESERIES OVER(), exactly one output record is produced for each defined time slice; otherwise, one output record is produced per partition per time slice. Interpolation is computed there.
Given a query block that contains a TIMESERIES clause, the following are the semantic phases of execution (after evaluating the FROM and the optional WHERE clauses):
1. Compute time-expression.
2. Perform the same computation as the TIME_SLICE() function on each input record, based on the result of time-expr and 'length-and-time-unit-expr'.
3. Perform gap filling to generate time slices missing from the input.
4. Name the result of this computation slice_time, which represents the generated "time series" column (alias) after gap filling.
5. Partition the data by expression, slice-time. For each partition, do step 6.
6. Sort the data by time-expr. Interpolation is computed here.
There is semantic overlap between the TIMESERIES clause and the TIME_SLICE function with the following key differences:
-
TIMESERIES
only supports the interval qualifier DAY TO SECOND
; it does not allow YEAR TO MONTH
.
-
Unlike TIME_SLICE, the time slice length and time unit expressed in length-and-time-unit-expr must be constants so that gaps in the time slices are well-defined.
-
TIMESERIES
performs gap filling; the TIME_SLICE
function does not.
-
TIME_SLICE
can return the start or end time of a time slice, depending on the value of its fourth input parameter (start-or-end
). TIMESERIES
, on the other hand, always returns the start time of each time slice. To output the end time of each time slice, write a SELECT
statement like the following:
=> SELECT slice_time + <slice_length>;
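For example, the following sketch assumes a Tickstore table like the one in the restrictions example below, with columns symbol, ts, and bid1; slice_end is an illustrative alias. It returns both the start and end of each 5-second slice along with the interpolated first bid:
=> SELECT slice_time,
          slice_time + INTERVAL '5 seconds' AS slice_end,
          TS_FIRST_VALUE(bid1) AS first_bid
   FROM Tickstore
   TIMESERIES slice_time AS '5 seconds' OVER (PARTITION BY symbol ORDER BY ts);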
Restrictions
-
When the TIMESERIES
clause occurs in a SQL query block, only the following clauses can be used in the same query block:
-
SELECT
-
FROM
-
WHERE
-
ORDER BY
GROUP BY
and HAVING
clauses are not allowed. If a GROUP BY
operation is needed before or after gap-filling and interpolation (GFI), use a subquery and place the GROUP BY
in the outer query. For example:
=> SELECT symbol, AVG(first_bid) as avg_bid FROM (
SELECT symbol, slice_time, TS_FIRST_VALUE(bid1) AS first_bid
FROM Tickstore
WHERE symbol IN ('MSFT', 'IBM')
TIMESERIES slice_time AS '5 seconds' OVER (PARTITION BY symbol ORDER BY ts)
) AS resultOfGFI
GROUP BY symbol;
-
When the TIMESERIES
clause is present in the SQL query block, the SELECT
list can include only the following:
For example, the following two queries return a syntax error because bid is not a PARTITION BY or GROUP BY column:
=> SELECT bid, symbol, TS_FIRST_VALUE(bid) FROM Tickstore
TIMESERIES slice_time AS '5 seconds' OVER (PARTITION BY symbol ORDER BY ts);
ERROR: column "Tickstore.bid" must appear in the PARTITION BY list of Timeseries clause or be used in a Timeseries Output function
=> SELECT bid, symbol, AVG(bid) FROM Tickstore
GROUP BY symbol;
ERROR: column "Tickstore.bid" must appear in the GROUP BY clause or be used in an aggregate function
Examples
For examples, see Gap filling and interpolation (GFI).
See also
40.13 - UNION clause
Combines the results of multiple SELECT statements.
Combines the results of multiple SELECT statements. You can include UNION in FROM, WHERE, and HAVING clauses.
Syntax
select { UNION [ ALL | DISTINCT ] select }[...]
[ order-by-clause [ offset-clause ]]
[ limit-clause ]
Arguments
select
- A SELECT statement that returns one or more rows, depending on whether you specify keywords DISTINCT or ALL.
The first SELECT statement can include the LABEL hint. Vertica ignores LABEL hints in subsequent SELECT statements.
Each SELECT statement can specify its own ORDER BY, LIMIT, and OFFSET clauses. A SELECT statement with one or more of these clauses must be enclosed by parentheses. See also: ORDER BY, LIMIT, and OFFSET Clauses in UNION.
DISTINCT
, ALL
- How to return duplicate rows:
-
DISTINCT (default) returns only unique rows.
-
ALL concatenates all rows, including duplicates. For best performance, use UNION ALL.
UNION ALL supports columns of complex types; UNION DISTINCT does not.
Requirements
-
Each row of the UNION result set must be in the result set of at least one of its SELECT statements.
-
Each SELECT statement must specify the same number of columns.
-
Data types of corresponding SELECT statement columns must be compatible, otherwise Vertica returns an error.
ORDER BY, LIMIT, and OFFSET clauses in UNION
A UNION statement can specify its own ORDER BY, LIMIT, and OFFSET clauses, as in the following example:
=> SELECT id, emp_name FROM company_a UNION ALL SELECT id, emp_name FROM company_b ORDER BY emp_name LIMIT 2;
id | emp_name
------+----------
5678 | Alice
8765 | Bob
(2 rows)
Each SELECT statement in a UNION clause can specify its own ORDER BY, LIMIT, and OFFSET clauses. Vertica processes the SELECT statement clauses before it processes the UNION clauses. In the following example, Vertica processes the individual queries and then concatenates the two result sets:
=> (SELECT id, emp_name FROM company_a ORDER BY emp_name LIMIT 2)
UNION ALL
(SELECT id, emp_name FROM company_b ORDER BY emp_name LIMIT 2);
id | emp_name
------+-----------
5678 | Alice
9012 | Katherine
8765 | Bob
9012 | Katherine
(4 rows)
The following requirements and restrictions determine how Vertica processes a UNION clause that contains ORDER BY, LIMIT, and OFFSET clauses:
-
A UNION's ORDER BY clause must specify columns from the first (leftmost) SELECT statement.
-
ORDER BY must precede LIMIT and OFFSET.
-
When a SELECT or UNION statement specifies both LIMIT and OFFSET, Vertica first processes the OFFSET statement, and then applies the LIMIT statement to the remaining rows.
Vertica supports UNION in noncorrelated subquery predicates. For example:
=> SELECT DISTINCT customer_key, customer_name FROM public.customer_dimension
WHERE customer_key IN
(SELECT customer_key FROM store.store_sales_fact WHERE sales_dollar_amount > 500
UNION ALL
SELECT customer_key FROM online_sales.online_sales_fact WHERE sales_dollar_amount > 500)
AND customer_state = 'CT';
customer_key | customer_name
--------------+------------------------
7021 | Luigi T. Dobisz
1971 | Betty V. Dobisz
46284 | Ben C. Gauthier
33885 | Tanya Y. Taylor
5449 | Sarah O. Robinson
29059 | Sally Z. Fortin
11200 | Foodhope
15582 | John J. McNulty
24638 | Alexandra F. Jones
...
UNION ALL with complex types
You can use UNION ALL with complex types. Consider a table with the following definition:
=> CREATE TABLE restaurants(
name VARCHAR, cuisine VARCHAR,
locations ARRAY[ROW(city VARCHAR(50), state VARCHAR(2)),50],
menu ARRAY[ROW(item VARCHAR(50), price FLOAT),100] );
Suppose you are in a new city looking for a place to eat. The database has information about the following restaurants:
=> SELECT name, cuisine FROM restaurants
WHERE CONTAINS(locations,ROW('Pittsburgh', 'PA'));
name | cuisine
-------------------+----------
Bakersfield Tacos | Mexican
Bob's pizzeria | Italian
Succulent Steaks | American
Sushi House | Asian
Villa Milano | Italian
(5 rows)
Suppose you are hungry for Italian food. If you cannot have Italian, you want something inexpensive. The following query uses two SELECT clauses from the same table, one finding menu items for Italian restaurants and one finding menu items under $10:
=> WITH menu_entries AS
(SELECT name, cuisine,
EXPLODE(menu USING PARAMETERS skip_partitioning=true) AS (idx, menu_entry)
FROM restaurants WHERE CONTAINS(locations,ROW('Pittsburgh', 'PA')))
SELECT name, cuisine, menu_entry FROM menu_entries WHERE cuisine = 'Italian'
UNION ALL
SELECT name, cuisine, menu_entry FROM menu_entries WHERE menu_entry.price <= 10;
name | cuisine | menu_entry
-------------------+---------+--------------------------------------------
Bob's pizzeria | Italian | {"item":"cheese pizza","price":8.25}
Bob's pizzeria | Italian | {"item":"spinach pizza","price":10.5}
Villa Milano | Italian | {"item":"pasta carbonara","price":24.99}
Villa Milano | Italian | {"item":"eggplant parmesan","price":23.49}
Villa Milano | Italian | {"item":"herbed salmon","price":28.99}
Bakersfield Tacos | Mexican | {"item":"veggie taco","price":9.95}
Bob's pizzeria | Italian | {"item":"cheese pizza","price":8.25}
(7 rows)
You cannot use LIMIT OVER with UNION ALL if the selected columns are of complex types. In this case, the statement returns an error like "Multi-value expressions are not supported in this context". You can still use LIMIT OVER in a single SELECT statement by using parentheses to make the scoping explicit.
Examples
The examples that follow use these two tables:
=> SELECT * FROM company_a;
  id  | emp_name   |    dept     | sales
------+------------+-------------+-------
1234 | Stephen | auto parts | 1000
5678 | Alice | auto parts | 2500
9012 | Katherine | floral | 500
=> SELECT * FROM company_b;
  id  | emp_name   |    dept     | sales
------+------------+-------------+-------
4321 | Marvin | home goods | 250
9012 | Katherine | home goods | 500
8765 | Bob | electronics | 20000
The following query finds all employee IDs and names from the two tables. The UNION statement uses DISTINCT to combine unique IDs and last names of employees. Katherine works for both companies, so she appears only once in the result set. DISTINCT is the default and can be omitted:
=> SELECT id, emp_name FROM company_a
UNION DISTINCT SELECT id, emp_name FROM company_b ORDER BY id;
id | emp_name
------+-----------
1234 | Stephen
4321 | Marvin
5678 | Alice
8765 | Bob
9012 | Katherine
(5 rows)
If the UNION statement instead uses ALL, the query returns two records for Katherine:
=> SELECT id, emp_name FROM company_a
UNION ALL SELECT id, emp_name FROM company_b ORDER BY id;
id | emp_name
------+-----------
1234 | Stephen
5678 | Alice
9012 | Katherine
4321 | Marvin
9012 | Katherine
8765 | Bob
(6 rows)
The following query returns the top two salespeople in each company. Each SELECT statement specifies its own ORDER BY and LIMIT clauses, and the UNION statement concatenates the result sets as returned by each query:
=> (SELECT id, emp_name, sales FROM company_a ORDER BY sales DESC LIMIT 2)
UNION ALL
(SELECT id, emp_name, sales FROM company_b ORDER BY sales DESC LIMIT 2);
id | emp_name | sales
------+-----------+-------
8765 | Bob | 20000
5678 | Alice | 2500
1234 | Stephen | 1000
9012 | Katherine | 500
(4 rows)
The following query returns all employees in both companies with an overall ordering. The ORDER BY clause is part of the UNION statement:
=> SELECT id, emp_name, sales FROM company_a
UNION
SELECT id, emp_name, sales FROM company_b
ORDER BY sales;
id | emp_name | sales
------+-----------+-------
4321 | Marvin | 250
9012 | Katherine | 500
1234 | Stephen | 1000
5678 | Alice | 2500
8765 | Bob | 20000
(5 rows)
The following query groups total sales by department within each company. Each SELECT statement has its own GROUP BY clause. UNION combines the aggregate results from each query:
=> (SELECT 'Company A' as company, dept, SUM(sales) FROM company_a
GROUP BY dept)
UNION
(SELECT 'Company B' as company, dept, SUM(sales) FROM company_b
GROUP BY dept)
ORDER BY 1;
company | dept | sum
-----------+-------------+-------
Company A | auto parts | 3500
Company A | floral | 500
Company B | electronics | 20000
Company B | home goods | 750
(4 rows)
See also
40.14 - WHERE clause
Specifies which rows to include in a query's result set.
Specifies which rows to include in a query's result set.
Syntax
WHERE boolean-expression [ subquery ]...
Arguments
boolean-expression
- An expression that returns true or false. The result set only includes rows that evaluate to true. The expression can include boolean operators and the following predicate elements:
Use parentheses to group expressions, predicates, and boolean operators. For example:
... WHERE NOT (A=1 AND B=2) OR C=3;
Examples
The following example returns the names of all customers in the Eastern region whose name starts with the string Amer
:
=> SELECT DISTINCT customer_name
FROM customer_dimension
WHERE customer_region = 'East'
AND customer_name ILIKE 'Amer%';
customer_name
---------------
Americare
Americom
Americore
Americorp
Ameridata
Amerigen
Amerihope
Amerimedia
Amerishop
Ameristar
Ameritech
(11 rows)
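The following sketch shows how parentheses control predicate grouping so that the OR is evaluated before the AND; the region values are illustrative:
=> SELECT DISTINCT customer_name
   FROM customer_dimension
   WHERE (customer_region = 'East' OR customer_region = 'West')
   AND customer_name ILIKE 'Amer%';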
40.15 - WITH clause
A WITH clause defines one or more named common table expressions (CTEs), where each CTE encapsulates a result set that can be referenced by another CTE in the same WITH clause, or by the primary query.
A WITH clause defines one or more named common table expressions (CTEs), where each CTE encapsulates a result set that can be referenced by another CTE in the same WITH clause, or by the primary query. Vertica can evaluate WITH clauses in two ways:
- Inline expansion (default): Vertica evaluates each WITH clause every time it is referenced by the primary query.
- Materialization: Vertica evaluates each WITH clause once, stores results in a temporary table, and references this table as often as the query requires.
In both cases, WITH clauses can help simplify complicated queries and avoid statement repetition.
Syntax
WITH [ /*+ENABLE_WITH_CLAUSE_MATERIALIZATION */ ] [ RECURSIVE ] {
cte-identifier [ ( column-aliases ) ] AS (
[ subordinate-WITH-clause ]
query-expression )
} [,...]
Arguments
/*+ENABLE_WITH_CLAUSE_MATERIALIZATION*/
- Enables materialization of all queries in the current WITH clause. Otherwise, materialization is set by configuration parameter WithClauseMaterialization, by default set to 0 (disabled). If WithClauseMaterialization is disabled, materialization is automatically cleared when the primary query of the WITH clause returns. For details, see Materialization of WITH clause.
RECURSIVE
- Specifies to iterate over the WITH clause's own result set, through repeated execution of an embedded UNION or UNION ALL statement. For details, see WITH clause recursion.
cte-identifier
- Identifies a common table expression (CTE) within a WITH clause. This identifier is available to CTEs of the same WITH clause, and of parent and child WITH clauses (if any). CTE identifiers of the outermost (primary) WITH clause are also available to the primary query.
All CTE identifiers of the same WITH clause must be unique. For example, the following WITH clause defines two CTEs, so they require unique identifiers: regional_sales
and top_regions
:
WITH
-- query sale amounts for each region
regional_sales AS (SELECT ... ),
top_regions AS ( SELECT ... )
column-aliases
- A comma-delimited list of result set column aliases. The list of aliases must map to all column expressions in the CTE query. If omitted, result set columns can only be referenced by the names used in the query.
In the following example, the revenue
CTE specifies two column aliases: vkey
and total_revenue
. These map to column vendor_key
and aggregate expression SUM(total_order_cost)
, respectively. The primary query references these aliases:
WITH revenue ( vkey, total_revenue ) AS (
SELECT vendor_key, SUM(total_order_cost)
FROM store.store_orders_fact
GROUP BY vendor_key ORDER BY vendor_key)
SELECT v.vendor_name, v.vendor_address, v.vendor_city, r.total_revenue
FROM vendor_dimension v JOIN revenue r ON v.vendor_key = r.vkey
WHERE r.total_revenue = (SELECT MAX(total_revenue) FROM revenue )
ORDER BY vendor_name;
subordinate-WITH-clause
- A WITH clause that is nested within the current one. CTEs of this WITH clause can only reference CTEs of the same clause, and of parent and child WITH clauses.
Important
The primary query can only reference CTEs in the outermost WITH clause. It cannot reference the CTEs of any nested WITH clause.
query-expression
- The query of a given CTE.
Restrictions
WITH clauses only support SELECT and INSERT statements. They do not support UPDATE or DELETE statements.
Examples
Single WITH clause with single CTE
The following SQL defines a WITH clause with one CTE, revenue
, which aggregates data in table store.store_orders_fact
. The primary query references the WITH clause result set twice: in its JOIN
clause and predicate:
-- define WITH clause
WITH revenue ( vkey, total_revenue ) AS (
SELECT vendor_key, SUM(total_order_cost)
FROM store.store_orders_fact
GROUP BY vendor_key ORDER BY 1)
-- End WITH clause
-- primary query
SELECT v.vendor_name, v.vendor_address, v.vendor_city, r.total_revenue
FROM vendor_dimension v JOIN revenue r ON v.vendor_key = r.vkey
WHERE r.total_revenue = (SELECT MAX(total_revenue) FROM revenue )
ORDER BY vendor_name;
vendor_name | vendor_address | vendor_city | total_revenue
------------------+----------------+-------------+---------------
Frozen Suppliers | 471 Mission St | Peoria | 49877044
(1 row)
Single WITH clause and multiple CTEs
In the following example, the WITH clause contains two CTEs, regional_sales and top_regions.
The primary query aggregates sales by region and departments in the top_regions
result set:
WITH
-- query sale amounts for each region
regional_sales (region, total_sales) AS (
SELECT sd.store_region, SUM(of.total_order_cost) AS total_sales
FROM store.store_dimension sd JOIN store.store_orders_fact of ON sd.store_key = of.store_key
GROUP BY store_region ),
-- query previous result set
top_regions AS (
SELECT region, total_sales
FROM regional_sales ORDER BY total_sales DESC LIMIT 3
)
-- primary query
-- aggregate sales in top_regions result set
SELECT sd.store_region AS region, pd.department_description AS department, SUM(of.total_order_cost) AS product_sales
FROM store.store_orders_fact of
JOIN store.store_dimension sd ON sd.store_key = of.store_key
JOIN public.product_dimension pd ON of.product_key = pd.product_key
WHERE sd.store_region IN (SELECT region FROM top_regions)
GROUP BY ROLLUP (region, department) ORDER BY region, product_sales DESC, GROUPING_ID();
region | department | product_sales
---------+----------------------------------+---------------
East | | 1716917786
East | Meat | 189837962
East | Produce | 170607880
East | Photography | 162271618
East | Frozen Goods | 141077867
East | Gifts | 137604397
East | Bakery | 136497842
East | Liquor | 130410463
East | Canned Goods | 128683257
East | Cleaning supplies | 118996326
East | Dairy | 118866901
East | Seafood | 109986665
East | Medical | 100404891
East | Pharmacy | 71671717
MidWest | | 1287550770
MidWest | Meat | 141446607
MidWest | Produce | 125156100
MidWest | Photography | 122666753
MidWest | Frozen Goods | 105893534
MidWest | Gifts | 103088595
MidWest | Bakery | 102844467
MidWest | Canned Goods | 97647270
MidWest | Liquor | 97306898
MidWest | Cleaning supplies | 90775242
MidWest | Dairy | 89065443
MidWest | Seafood | 82541528
MidWest | Medical | 76674814
MidWest | Pharmacy | 52443519
West | | 2159765937
West | Meat | 235841506
West | Produce | 215277204
West | Photography | 205949467
West | Frozen Goods | 178311593
West | Bakery | 172824555
West | Gifts | 172134780
West | Liquor | 164798022
West | Canned Goods | 163330813
West | Cleaning supplies | 148776443
West | Dairy | 145244575
West | Seafood | 139464407
West | Medical | 126184049
West | Pharmacy | 91628523
| | 5164234493
(43 rows)
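Recursive WITH clause
The RECURSIVE option iterates over the CTE's own result set through an embedded UNION ALL, as described in the arguments above. The following minimal sketch (the CTE nums and its column n are illustrative names) generates the integers 1 through 5:
=> WITH RECURSIVE nums (n) AS (
     SELECT 1
     UNION ALL
     SELECT n + 1 FROM nums WHERE n < 5)
   SELECT n FROM nums ORDER BY n;
For details and more realistic examples, see WITH clause recursion.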
INSERT statement that includes WITH clause
The following SQL uses a WITH clause to insert data from a JOIN query into table total_store_sales
:
CREATE TABLE total_store_sales (store_key int, region VARCHAR(20), store_sales numeric (12,2));
INSERT INTO total_store_sales
WITH store_sales AS (
SELECT sd.store_key, sd.store_region::VARCHAR(20), SUM (of.total_order_cost)
FROM store.store_dimension sd JOIN store.store_orders_fact of ON sd.store_key = of.store_key
GROUP BY sd.store_region, sd.store_key ORDER BY sd.store_region, sd.store_key)
SELECT * FROM store_sales;
=> SELECT * FROM total_store_sales ORDER BY region, store_key;
store_key | region | store_sales
-----------+-----------+-------------
2 | East | 47668303.00
6 | East | 48136354.00
12 | East | 46673113.00
22 | East | 48711211.00
24 | East | 48603836.00
31 | East | 46836469.00
36 | East | 48461449.00
37 | East | 48018279.00
41 | East | 48713084.00
44 | East | 47808362.00
49 | East | 46990023.00
50 | East | 47643329.00
9 | MidWest | 46851087.00
15 | MidWest | 48787354.00
27 | MidWest | 48497620.00
29 | MidWest | 47639234.00
30 | MidWest | 49013483.00
38 | MidWest | 48856012.00
42 | MidWest | 47297912.00
45 | MidWest | 48544521.00
46 | MidWest | 48887255.00
4 | NorthWest | 47580215.00
39 | NorthWest | 47136892.00
47 | NorthWest | 48477574.00
8 | South | 48131455.00
13 | South | 47605422.00
17 | South | 46054367.00
...
(50 rows)
40.15.1 - Inline expansion of WITH clause
By default, Vertica uses inline expansion to evaluate WITH clauses.
By default, Vertica uses inline expansion to evaluate WITH clauses. Vertica evaluates each WITH clause every time it is referenced by the primary query. Inline expansion often works best if the query does not reference the same WITH clause multiple times, or if some local optimizations are possible after inline expansion.
Example
The following example shows a WITH clause that is a good candidate for inline expansion. The WITH clause is used in a query that obtains order information for all 2007 orders shipped between December 01-07:
-- Begin WITH
WITH store_orders_fact_new AS(
SELECT * FROM store.store_orders_fact WHERE date_shipped between '2007-12-01' and '2007-12-07')
-- End WITH
-- Begin primary query
SELECT store_key, product_key, product_version, SUM(quantity_ordered*unit_price) AS total_price
FROM store_orders_fact_new
GROUP BY store_key, product_key, product_version
ORDER BY total_price DESC;
store_key | product_key | product_version | total_price
-----------+-------------+-----------------+-------------
232 | 1855 | 2 | 29008
125 | 8500 | 4 | 28812
139 | 3707 | 2 | 28812
212 | 3203 | 1 | 28000
236 | 8023 | 4 | 27548
123 | 10598 | 2 | 27146
34 | 8888 | 4 | 27100
203 | 2243 | 1 | 27027
117 | 13932 | 2 | 27000
84 | 768 | 1 | 26936
123 | 1038 | 1 | 26885
106 | 18932 | 1 | 26864
93 | 10395 | 3 | 26790
162 | 13073 | 1 | 26754
15 | 3679 | 1 | 26675
52 | 5957 | 5 | 26656
190 | 8114 | 3 | 26611
5 | 7772 | 1 | 26588
139 | 6953 | 3 | 26572
202 | 14735 | 1 | 26404
133 | 2740 | 1 | 26312
198 | 8545 | 3 | 26287
221 | 7582 | 2 | 26280
127 | 9468 | 3 | 26224
63 | 8115 | 4 | 25960
171 | 2088 | 1 | 25650
250 | 11210 | 3 | 25608
...
Vertica processes the query as follows:
-
Expands the WITH clause reference to store_orders_fact_new
within the primary query.
-
After expanding the WITH clause, evaluates the primary query.
40.15.2 - Materialization of WITH clause
When materialization is enabled, Vertica evaluates each WITH clause once, stores results in a temporary table, and references this table as often as the query requires.
When materialization is enabled, Vertica evaluates each WITH clause once, stores results in a temporary table, and references this table as often as the query requires. Vertica drops the temporary table after primary query execution completes.
Note
If the primary query returns with an error, temporary tables might be dropped only after the client’s session ends.
Materialization can facilitate better performance when WITH clauses are complex—for example, when the WITH clauses contain JOIN and GROUP BY clauses, and are referenced multiple times in the primary query.
If materialization is enabled, WITH statements perform an auto-commit of the user transaction. This occurs even when using EXPLAIN with the WITH statement.
Enabling WITH clause materialization
WITH materialization is set by configuration parameter WithClauseMaterialization, by default set to 0 (disabled). You can enable and disable materialization by setting WithClauseMaterialization at database and session levels, with ALTER DATABASE and ALTER SESSION, respectively:
-
Database:
=> ALTER DATABASE db-spec SET PARAMETER WithClauseMaterialization={ 0 | 1 };
=> ALTER DATABASE db-spec CLEAR PARAMETER WithClauseMaterialization;
-
Session: Parameter setting remains in effect until you explicitly clear it, or the session ends.
=> ALTER SESSION SET PARAMETER WithClauseMaterialization={ 0 | 1 };
=> ALTER SESSION CLEAR PARAMETER WithClauseMaterialization;
You can also enable WITH materialization for individual queries with the hint ENABLE_WITH_CLAUSE_MATERIALIZATION. Materialization is automatically cleared when the query returns. For example:
=> WITH /*+ENABLE_WITH_CLAUSE_MATERIALIZATION */ revenue AS (
SELECT vendor_key, SUM(total_order_cost) AS total_revenue
FROM store.store_orders_fact
GROUP BY vendor_key ORDER BY 1)
...
Processing WITH clauses using EE5 temp relations
By default, when WITH clause queries are reused, Vertica saves those WITH clause query outputs in EE5 temp relations. However, this option can be changed. EE5 temp relation support is set by configuration parameter EnableWITHTempRelReuseLimit, which can be set in the following ways:
-
0: Disables this feature.
-
1: Force-saves all WITH clause queries into EE5 temp relations, whether or not they are reused.
-
2 (default): Saves only reused WITH clause queries into EE5 temp relations.
-
3 or more: Saves WITH clause queries into EE5 temp relations only when they are used at least this number of times.
EnableWITHTempRelReuseLimit can be set at database and session levels, with ALTER DATABASE and ALTER SESSION, respectively. When WithClauseMaterialization is set to 1, that setting overrides any EnableWITHTempRelReuseLimit settings.
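For example, the following sketch (the threshold value of 3 is illustrative) tells Vertica to save WITH clause outputs into EE5 temp relations only when a clause is referenced at least three times:
=> ALTER SESSION SET PARAMETER EnableWITHTempRelReuseLimit=3;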
Note that for WITH queries with complex types, temp relations are disabled.
Example
The following example shows a WITH clause that is a good candidate for materialization. The query obtains data for the vendor who has the highest combined order cost for all orders:
-- Enable materialization
=> ALTER SESSION SET PARAMETER WithClauseMaterialization=1;
-- Define WITH clause
=> WITH revenue AS (
SELECT vendor_key, SUM(total_order_cost) AS total_revenue
FROM store.store_orders_fact
GROUP BY vendor_key ORDER BY 1)
-- End WITH clause
-- Primary query
=> SELECT vendor_name, vendor_address, vendor_city, total_revenue
FROM vendor_dimension v, revenue r
WHERE v.vendor_key = r.vendor_key AND total_revenue = (SELECT MAX(total_revenue) FROM revenue )
ORDER BY vendor_name;
vendor_name | vendor_address | vendor_city | total_revenue
------------------+----------------+-------------+---------------
Frozen Suppliers | 471 Mission St | Peoria | 49877044
(1 row)
Vertica processes this query as follows:
-
WITH clause revenue evaluates its SELECT statement from table store.store_orders_fact.
-
Results of the revenue clause are stored in a local temporary table.
-
Whenever the revenue clause statement is referenced, the results stored in the table are used.
-
The temporary table is dropped when query execution is complete.
40.15.3 - WITH clause recursion
A WITH clause that includes the RECURSIVE option iterates over its own output through repeated execution of a UNION or UNION ALL query.
A WITH clause that includes the RECURSIVE option iterates over its own output through repeated execution of a UNION or UNION ALL query. Recursive queries are useful when working with self-referential data—hierarchies such as manager-subordinate relationships, or tree-structured data such as taxonomies.
The configuration parameter WithClauseRecursionLimit—by default set to 8—sets the maximum depth of recursion. You can set this parameter at database and session scopes with ALTER DATABASE and ALTER SESSION, respectively. Recursion continues until it reaches the configured maximum depth, or until the last iteration returns with no data.
You specify a recursive WITH clause as follows:
WITH [ /*+ENABLE_WITH_CLAUSE_MATERIALIZATION*/ ] RECURSIVE
cte-identifier [ ( column-aliases ) ] AS (
non-recursive-term
UNION [ ALL ]
recursive-term
)
Non-recursive and recursive terms are separated by UNION or UNION ALL:
-
The non-recursive-term query sets its result set in cte-identifier, which is subject to recursion in recursive-term.
-
The UNION statement's recursive-term recursively iterates over its own output. When recursion is complete, the results of all iterations are compiled and set in cte-identifier.
For example:
=> ALTER SESSION SET PARAMETER WithClauseRecursionLimit=4; -- maximum recursion depth = 4
=> WITH RECURSIVE nums (n) AS (
SELECT 1 -- non-recursive (base) term
UNION ALL
SELECT n+1 FROM nums -- recursive term
)
SELECT n FROM nums; -- primary query
This simple query executes as follows:
-
Executes the WITH RECURSIVE clause:
-
Evaluates the non-recursive term SELECT 1, and places the result set—1—in nums.
-
Iterates over the UNION ALL query (SELECT n+1) until the number of iterations is greater than the configuration parameter WithClauseRecursionLimit.
-
Combines the results of all UNION queries and sets the result set in nums, and then exits to the primary query.
-
Executes the primary query SELECT n FROM nums:
n
---
1
2
3
4
5
(5 rows)
In this case, the WITH RECURSIVE clause exits after four iterations, as set by WithClauseRecursionLimit. If you restore WithClauseRecursionLimit to its default value of 8, then the clause exits after eight iterations:
=> ALTER SESSION CLEAR PARAMETER WithClauseRecursionLimit;
=> WITH RECURSIVE nums (n) AS (
SELECT 1
UNION ALL
SELECT n+1 FROM nums
)
SELECT n FROM nums;
n
---
1
2
3
4
5
6
7
8
9
(9 rows)
Important
Be careful to set WithClauseRecursionLimit only as high as needed to traverse the deepest hierarchies. Vertica sets no limit on this parameter; however, a high value can incur considerable overhead that adversely affects performance and exhausts system resources.
If a high recursion count is required, then consider enabling materialization. For details, see WITH RECURSIVE Materialization.
Restrictions
The following restrictions apply:
-
The SELECT list of a non-recursive term cannot include the wildcard *
(asterisk) or the function MATCH_COLUMNS.
-
A recursive term can reference the target CTE only once.
-
Recursive reference cannot appear within an outer join.
-
Recursive reference cannot appear within a subquery.
-
WITH clauses do not support UNION options ORDER BY, LIMIT, and OFFSET.
Examples
A small software company maintains the following data on employees and their managers:
=> SELECT * FROM personnel.employees ORDER BY emp_id;
emp_id | fname | lname | section_id | section_name | section_leader | leader_id
--------+-----------+-----------+------------+---------------------+------------------+-----------
0 | Stephen | Mulligan | 0 | | |
1 | Michael | North | 201 | Development | Zoe Black | 3
2 | Megan | Berry | 202 | QA | Richard Chan | 18
3 | Zoe | Black | 101 | Product Development | Renuka Patil | 24
4 | Tim | James | 203 | IT | Ebuka Udechukwu | 17
5 | Bella | Tucker | 201 | Development | Zoe Black | 3
6 | Alexandra | Climo | 202 | QA | Richard Chan | 18
7 | Leonard | Gray | 203 | IT | Ebuka Udechukwu | 17
8 | Carolyn | Henderson | 201 | Development | Zoe Black | 3
9 | Ryan | Henderson | 201 | Development | Zoe Black | 3
10 | Frank | Tucker | 205 | Sales | Benjamin Glover | 29
11 | Nathan | Ferguson | 102 | Sales Marketing | Eric Redfield | 28
12 | Kevin | Rampling | 101 | Product Development | Renuka Patil | 24
13 | Tuy Kim | Duong | 201 | Development | Zoe Black | 3
14 | Dwipendra | Sing | 204 | Tech Support | Sarah Feldman | 26
15 | Dylan | Wijman | 206 | Documentation | Kevin Rampling | 12
16 | Tamar | Sasson | 207 | Marketing | Nathan Ferguson | 11
17 | Ebuka | Udechukwu | 101 | Product Development | Renuka Patil | 24
18 | Richard | Chan | 101 | Product Development | Renuka Patil | 24
19 | Maria | del Rio | 201 | Development | Zoe Black | 3
20 | Hua | Song | 204 | Tech Support | Sarah Feldman | 26
21 | Carmen | Lopez | 204 | Tech Support | Sarah Feldman | 26
22 | Edgar | Mejia | 206 | Documentation | Kevin Rampling | 12
23 | Riad | Salim | 201 | Development | Zoe Black | 3
24 | Renuka | Patil | 100 | Executive Office | Stephen Mulligan | 0
25 | Rina | Dsouza | 202 | QA | Richard Chan | 18
26 | Sarah | Feldman | 101 | Product Development | Renuka Patil | 24
27 | Max | Mills | 102 | Sales Marketing | Eric Redfield | 28
28 | Eric | Redfield | 100 | Executive Office | Stephen Mulligan | 0
29 | Benjamin | Glover | 102 | Sales Marketing | Eric Redfield | 28
30 | Dominic | King | 205 | Sales | Benjamin Glover | 29
32 | Ryan | Metcalfe | 206 | Documentation | Kevin Rampling | 12
33 | Piers | Paige | 201 | Development | Zoe Black | 3
34 | Nicola | Kelly | 207 | Marketing | Nathan Ferguson | 11
(34 rows)
You can query this data for employee-manager relationships through WITH RECURSIVE. For example, the following query's WITH RECURSIVE clause gets employee-manager relationships for employee Eric Redfield, including all employees who report directly and indirectly to him:
WITH RECURSIVE managers (employeeID, employeeName, sectionID, section, lead, leadID)
AS (SELECT emp_id, fname||' '||lname, section_id, section_name, section_leader, leader_id
FROM personnel.employees WHERE fname||' '||lname = 'Eric Redfield'
UNION
SELECT emp_id, fname||' '||lname AS employee_name, section_id, section_name, section_leader, leader_id FROM personnel.employees e
JOIN managers m ON m.employeeID = e.leader_id)
SELECT employeeID, employeeName, lead AS 'Reports to', section, leadID from managers ORDER BY sectionID, employeeName;
The WITH RECURSIVE clause defines the CTE managers, and then executes in two phases:
-
The non-recursive term populates managers with data that it queries from personnel.employees.
-
The recursive term's UNION query iterates over its own output until, on the fourth cycle, it finds no more data. The results of all iterations are then compiled and set in managers, and the WITH clause exits to the primary query.
The primary query returns three levels of data from managers—one for each recursive iteration:
Similarly, the following query iterates over the same data to get all employee-manager relationships for employee Richard Chan, who is one level lower in the company chain of command:
WITH RECURSIVE managers (employeeID, employeeName, sectionID, section, lead, leadID)
AS (SELECT emp_id, fname||' '||lname, section_id, section_name, section_leader, leader_id
FROM personnel.employees WHERE fname||' '||lname = 'Richard Chan'
UNION
SELECT emp_id, fname||' '||lname AS employee_name, section_id, section_name, section_leader, leader_id FROM personnel.employees e
JOIN managers m ON m.employeeID = e.leader_id)
SELECT employeeID, employeeName, lead AS 'Reports to', section, leadID from managers ORDER BY sectionID, employeeName;
The WITH RECURSIVE clause executes as before, except this time it finds no more data after two iterations and exits. Accordingly, the primary query returns two levels of data from managers:
WITH RECURSIVE materialization
By default, materialization is disabled. In this case, Vertica rewrites the WITH RECURSIVE query into subqueries, as many as necessary for the required level of recursion.
If recursion is very deep, the high number of query rewrites is liable to incur considerable overhead that adversely affects performance and exhausts system resources. In this case, consider enabling materialization, either with the configuration parameter WithClauseMaterialization, or the hint ENABLE_WITH_CLAUSE_MATERIALIZATION. In either case, intermediate result sets from all recursion levels are written to local temporary tables. When recursion is complete, the intermediate results in all temporary tables are compiled and passed on to the primary query.
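For example, a minimal sketch that applies the hint to the nums query shown earlier; the materialization setting is cleared when the query returns:
=> WITH /*+ENABLE_WITH_CLAUSE_MATERIALIZATION*/ RECURSIVE nums (n) AS (
    SELECT 1           -- non-recursive (base) term
    UNION ALL
    SELECT n+1 FROM nums  -- recursive term
)
SELECT n FROM nums;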
Note
If materialization is not possible, you can improve throughput on a resource pool that handles recursive queries by setting its
EXECUTIONPARALLELISM parameter to 1.
41 - SET statements
SET statements let you change how the database operates, such as changing the autocommit settings or the resource pool your session uses.
SET statements let you change how the database operates, such as changing the autocommit settings or the resource pool your session uses.
41.1 - SET DATESTYLE
Specifies how to format date/time output for the current session.
Specifies how to format date/time output for the current session. Use
SHOW DATESTYLE
to verify the current output settings.
Syntax
SET DATESTYLE TO { arg | 'arg' }[, arg | 'arg' ]
Parameters
SET DATESTYLE
has a single parameter, which can be set to one or two arguments that specify date ordering and style. Each argument can be specified singly or in combination with the other; if combined, they can be specified in any order.
The following table describes each style and the date ordering arguments it supports:
 Date style arguments        | Order arguments | Example
-----------------------------+-----------------+--------------------------
 ISO (ISO 8601/SQL standard) | n/a             | 2016-03-16 00:00:00
 GERMAN                      | n/a             | 16.03.2016 00:00:00
 SQL                         | MDY             | 03/16/2016 00:00:00
 SQL                         | DMY (default)   | 16/03/2016 00:00:00
 POSTGRES                    | MDY (default)   | Wed Mar 16 00:00:00 2016
 POSTGRES                    | DMY             | Wed 16 Mar 00:00:00 2016
Vertica ignores the order argument for date styles ISO and GERMAN. If the date style is SQL or POSTGRES, the order setting determines whether dates are output in MDY or DMY order. Neither SQL nor POSTGRES support YMD order. If you specify YMD for SQL or POSTGRES, Vertica ignores it and uses their default MDY order.
Date styles and ordering can also affect how Vertica interprets input values. For more information, see Date/time literals.
Privileges
None
In some cases, input format can determine output, regardless of date style and order settings:
-
Vertica ISO output for DATESTYLE is ISO long form, but several input styles are accepted. If the year appears first in the input, YMD is used for input and output, regardless of the DATESTYLE value.
-
INTERVAL input and output share the same format, with the following exceptions:
If the date style is set to ISO, output follows this format:
[ quantity unit [...] ] [ days ] [ hours:minutes:seconds ]
Examples
=> CREATE TABLE t(a DATETIME);
CREATE TABLE
=> INSERT INTO t values ('3/16/2016');
OUTPUT
--------
1
(1 row)
=> SHOW DATESTYLE;
name | setting
-----------+----------
datestyle | ISO, MDY
(1 row)
=> SELECT * FROM t;
a
---------------------
2016-03-16 00:00:00
(1 row)
=> SET DATESTYLE TO German;
SET
=> SHOW DATESTYLE;
name | setting
-----------+-------------
datestyle | German, DMY
(1 row)
=> SELECT * FROM t;
a
---------------------
16.03.2016 00:00:00
(1 row)
=> SET DATESTYLE TO SQL;
SET
=> SHOW DATESTYLE;
name | setting
-----------+----------
datestyle | SQL, DMY
(1 row)
=> SELECT * FROM t;
a
---------------------
16/03/2016 00:00:00
(1 row)
=> SET DATESTYLE TO Postgres, MDY;
SET
=> SHOW DATESTYLE;
name | setting
-----------+---------------
datestyle | Postgres, MDY
(1 row)
=> SELECT * FROM t;
a
--------------------------
Wed Mar 16 00:00:00 2016
(1 row)
41.2 - SET ESCAPE_STRING_WARNING
Issues a warning when a backslash is used in a string literal during the current session.
Issues a warning when a backslash is used in a string literal during the current session.
Syntax
SET ESCAPE_STRING_WARNING TO { ON | OFF }
Parameters
ON
- [Default] Issues a warning when a backslash is used in a string literal.
Tip: Organizations that have upgraded from earlier versions of Vertica can use this as a debugging tool for locating backslashes that used to be treated as escape characters, but are now treated as literals.
OFF
- Ignores backslashes within string literals.
Privileges
None
Notes
Examples
The following example shows how to turn OFF escape string warnings for the session.
=> SET ESCAPE_STRING_WARNING TO OFF;
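To restore the default behavior and resume warnings for the session:
=> SET ESCAPE_STRING_WARNING TO ON;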
See also
41.3 - SET INTERVALSTYLE
Specifies whether to include units in interval output for the current session.
Specifies whether to include units in interval output for the current session.
Syntax
SET INTERVALSTYLE TO [ plain | units ]
Parameters
- plain
- (default) Sets the default interval output to omit units.
- units
- Enables interval output to include subtype unit identifiers. When INTERVALSTYLE is set to units, the DATESTYLE parameter controls output. If you enable units and they do not display in the output, check the DATESTYLE parameter value, which must be set to ISO or POSTGRES for interval units to display.
Privileges
None
Examples
See Setting interval unit display.
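A minimal sketch of enabling unit output and verifying the session setting; the SHOW output below is illustrative and follows the layout of the other SHOW examples in this section:
=> SET INTERVALSTYLE TO units;
SET
=> SHOW INTERVALSTYLE;
     name      | setting
---------------+---------
 intervalstyle | units
(1 row)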
41.4 - SET LOCALE
Specifies locale for the current session.
Specifies locale for the current session.
You can also set the current locale with the vsql command
\locale
.
Syntax
SET LOCALE TO ICU-locale-identifier
Parameters
locale-identifier
- Specifies the ICU locale identifier to use, by default set to:
en_US@collation=binary
If set to an empty string, Vertica sets locale to en_US_POSIX.
The following requirements apply:
Privileges
None
Commonly used locales
For details on identifier options, see About locale. For a complete list of locale identifiers, see the ICU Project.
de_DE
- German (Germany)
en_GB
- English (Great Britain)
es_ES
- Spanish (Spain)
fr_FR
- French (France)
pt_BR
- Portuguese (Brazil)
pt_PT
- Portuguese (Portugal)
ru_RU
- Russian (Russia)
ja_JP
- Japanese (Japan)
zh_CN
- Chinese (China, simplified Han)
zh_Hant_TW
- Chinese (Taiwan, traditional Han)
Examples
Set session locale to en_GB:
=> SET LOCALE TO en_GB;
INFO 2567: Canonical locale: 'en_GB'
Standard collation: 'LEN'
English (United Kingdom)
SET
Use the short form of a locale:
=> SET LOCALE TO LEN;
INFO 2567: Canonical locale: 'en'
Standard collation: 'LEN'
English
SET
Specify collation:
=> SET LOCALE TO 'tr_tr@collation=standard';
INFO 2567: Canonical locale: 'tr_TR@collation=standard'
Standard collation: 'LTR'
Turkish (Turkey, collation=standard) Türkçe (Türkiye, Sıralama=standard)
SET
See also
41.5 - SET ROLE
Enables a role for the user's current session.
Enables a role for the user's current session. The user can access privileges that have been granted to the role. Enabling a role has no effect on roles that are currently enabled.
Syntax
SET ROLE roles-expression
Parameters
-
roles-expression
- Specifies the roles to enable for the current session, with one of the following expressions:
-
NONE (default): Disables all roles.
-
roles-list: A comma-delimited list of roles to enable. You can only set roles that are currently granted to you.
-
ALL [EXCEPT roles-list]: Enables all roles currently granted to this user, excluding any comma-delimited roles specified in the optional EXCEPT clause.
-
DEFAULT: Enables all default roles. Default roles are, by definition, enabled automatically, but this option might be useful for re-enabling them if they are disabled with SET ROLE NONE.
Privileges
None
Examples
This example shows the following:
-
SHOW AVAILABLE_ROLES; lists the roles available to the user, but not enabled.
-
SET ROLE applogs; enables the applogs role for the user.
-
SHOW ENABLED_ROLES; lists the applogs role as enabled (SET) for the user.
-
SET ROLE appuser; enables the appuser role for the user.
-
SHOW ENABLED_ROLES now lists both applogs and appuser as enabled roles for the user.
-
SET ROLE NONE disables all of the user's enabled roles.
-
SHOW ENABLED_ROLES shows that no roles are enabled for the user.
=> SHOW AVAILABLE_ROLES;
name | setting
-----------------+----------------------------
available roles | applogs, appadmin, appuser
(1 row)
=> SET ROLE applogs;
SET
=> SHOW ENABLED_ROLES;
name | setting
---------------+---------
enabled roles | applogs
(1 row)
=> SET ROLE appuser;
SET
=> SHOW ENABLED_ROLES;
name | setting
---------------+------------------
enabled roles | applogs, appuser
(1 row)
=> SET ROLE NONE;
SET
=> SHOW ENABLED_ROLES;
name | setting
---------------+---------
enabled roles |
(1 row)
Set User Default Roles
Though the DBADMIN user is normally responsible for setting a user's default roles, as a user you can set your own role. For example, suppose you run SET ROLE NONE, which disables all of your enabled roles, and you then determine that you need access to role1 as a default role. The DBADMIN uses ALTER USER to assign you a default role:
=> ALTER USER user1 default role role1;
This example sets role1 as user1's default role because the DBADMIN assigned this default role using ALTER USER.
user1 => SET ROLE default;
user1 => SHOW ENABLED_ROLES;
name | setting
-----------------------
enabled roles | role1
(1 row)
Set All Roles as Default
This example makes all roles granted to user1 default roles:
user1 => SET ROLE all;
user1 => show enabled roles;
name | setting
----------------------------------
enabled roles | role1, role2, role3
(1 row)
Set All Roles as Default With EXCEPT
This example makes all the roles granted to the user default roles with the exception of role1.
user1 => set role all except role1;
user1 => SHOW ENABLED_ROLES
name | setting
----------------------------
enabled roles | role2, role3
(1 row)
41.6 - SET SEARCH_PATH
Specifies the order in which Vertica searches schemas when a SQL statement specifies a table name that is unqualified by a schema name.
Specifies the order in which Vertica searches schemas when a SQL statement specifies a table name that is unqualified by a schema name. SET SEARCH_PATH
overrides the current session's search path, which is initially set from the user profile. This search path remains in effect until the next SET SEARCH_PATH
statement, or the session ends. For details, see Setting search paths.
To view the current search path, use SHOW SEARCH_PATH.
Syntax
SET SEARCH_PATH { TO | = } { schema-list | DEFAULT }
Parameters
schema-list
- A comma-delimited list of schemas that indicates the order in which Vertica searches schemas for a table whose name is unqualified by a schema name.
If the search path includes a schema that does not exist, or for which the user lacks access privileges, Vertica silently skips over that schema.
DEFAULT
- Sets the search path to the database default:
"$user", public, v_catalog, v_monitor, v_internal
Privileges
None
Examples
Show the current search path:
=> SHOW SEARCH_PATH;
name | setting
-------------+---------------------------------------------------
search_path | "$user", public, v_catalog, v_monitor, v_internal
(1 row)
Reset the search path to schemas store and public:
=> SET SEARCH_PATH TO store, public;
=> SHOW SEARCH_PATH;
name | setting
-------------+-------------------------------------------------
search_path | store, public, v_catalog, v_monitor, v_internal
(1 row)
Reset the search path to the database default settings:
=> SET SEARCH_PATH TO DEFAULT;
SET
=> SHOW SEARCH_PATH;
name | setting
-------------+---------------------------------------------------
search_path | "$user", public, v_catalog, v_monitor, v_internal
(1 row)
41.7 - SET SESSION AUTHORIZATION
Sets the current and session user for the current database connection. You can change session authorization to execute queries as another user for testing or debugging purposes, or to limit query access.
Syntax
SET SESSION AUTHORIZATION { username | DEFAULT }
Parameters
username
- The name of the user that you want to authorize for the current SQL session.
DEFAULT
- Sets session authorization to the dbadmin user.
Privileges
Superuser
Examples
In the following example, the dbadmin
gives the debuguser
user session authorization, and then changes the session authorization back to the dbadmin
user.
-
Verify the current user and session user:
=> SELECT CURRENT_USER(), SESSION_USER();
current_user | session_user
--------------+--------------
dbadmin | dbadmin
(1 row)
-
Set authorization for the current session to debuguser, and verify the changes:
=> SET SESSION AUTHORIZATION debuguser;
SET
=> SELECT CURRENT_USER(), SESSION_USER();
current_user | session_user
--------------+--------------
debuguser | debuguser
(1 row)
-
After you complete debugging tasks, set the session authorization to DEFAULT
to set the current and session user back to dbadmin
user, and verify the changes:
=> SET SESSION AUTHORIZATION DEFAULT;
SET
=> SELECT CURRENT_USER(), SESSION_USER();
current_user | session_user
--------------+--------------
dbadmin | dbadmin
(1 row)
41.8 - SET SESSION AUTOCOMMIT
Sets whether statements automatically commit their transactions on completion.
Sets whether statements automatically commit their transactions on completion. This statement is primarily used by the client drivers to enable and disable autocommit; you should never have to call it directly.
Syntax
SET SESSION AUTOCOMMIT TO { ON | OFF }
Parameters
ON
- Enable autocommit. Statements automatically commit their transactions when they complete. This is the default setting for connections made using the Vertica client libraries.
OFF
- Disable autocommit. Transactions are not automatically committed. This is the default for interactive sessions (connections made through vsql).
Privileges
None
Examples
These examples show how to set AUTOCOMMIT to 'on' and then to 'off'.
=> SET SESSION AUTOCOMMIT TO on;
SET
=> SET SESSION AUTOCOMMIT TO off;
SET
See also
41.9 - SET SESSION CHARACTERISTICS AS TRANSACTION
Sets the isolation level and access mode of all transactions that start after this statement is issued.
Sets the isolation level and access mode of all transactions that start after this statement is issued.
A transaction retains its isolation level until it completes, even if the session's isolation level changes during the transaction. Vertica internal processes (such as the Tuple Mover and refresh operations) and DDL operations always run at the SERIALIZABLE isolation level to ensure consistency.
Syntax
SET SESSION CHARACTERISTICS AS TRANSACTION settings
settings
- One or both of the following:
ISOLATION LEVEL argument
READ { WRITE | ONLY }
ISOLATION LEVEL arguments
The ISOLATION LEVEL clause determines what data the transaction can access when other transactions run concurrently. You cannot change the isolation level after the first query (SELECT) or DML statement (INSERT, DELETE, UPDATE) of a transaction has run.
Set ISOLATION LEVEL to one of the following arguments:
SERIALIZABLE
- Sets the strictest level of SQL transaction isolation. This level emulates transactions serially, rather than concurrently. It holds locks and blocks write operations until the transaction completes.
Applications that use SERIALIZABLE
must be prepared to retry transactions in the event of serialization failures. This isolation level is not recommended for normal query operations.
Setting the transaction isolation level to SERIALIZABLE
does not apply to temporary tables. Temporary tables are isolated by their transaction scope.
REPEATABLE READ
- Automatically converted to SERIALIZABLE.
READ COMMITTED
- Default, allows concurrent transactions.
READ UNCOMMITTED
- Automatically converted to READ COMMITTED.
READ WRITE/READ ONLY
You can set the transaction access mode with one of the following:
READ WRITE
- Default, allows read/write access to SQL statements.
READ ONLY
- Disallows SQL statements that require write access:
-
INSERT, UPDATE, DELETE, and COPY operations on any non-temporary table.
-
CREATE, ALTER, and DROP
-
GRANT, REVOKE
-
EXPLAIN if the SQL statement to explain requires write access.
Note
Setting the transaction session mode to read-only does not prevent all write operations.
Privileges
None
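For example, the following statements (a sketch; per the syntax above, each clause is set here in its own statement) make all subsequent transactions in this session serializable and read-only, which matches the SHOW output in the next section:
=> SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET
=> SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY;
SET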
Viewing session transaction characteristics
SHOW TRANSACTION_ISOLATION and SHOW TRANSACTION_READ_ONLY show the transaction settings for the current session:
=> SHOW TRANSACTION_ISOLATION;
name | setting
-----------------------+--------------
transaction_isolation | SERIALIZABLE
(1 row)
=> SHOW TRANSACTION_READ_ONLY;
name | setting
-----------------------+---------
transaction_read_only | true
(1 row)
41.10 - SET SESSION GRACEPERIOD
Sets how long a session socket remains blocked while awaiting client input or output for a given query.
Sets how long a session socket remains blocked while awaiting client input or output for a given query. If the socket is blocked for a continuous period that exceeds the grace period setting, the server shuts down the socket and throws a fatal error. The session is then terminated. If no grace period is set, the query can maintain its block on the socket indefinitely.
Vertica applies a session's grace period and
RUNTIMECAP
settings independently. If no grace period is set, a query can continue to block indefinitely on a session socket, regardless of the query's RUNTIMECAP
setting.
Syntax
SET SESSION GRACEPERIOD duration
Parameters
duration
- Specifies how long a query can block on any session socket, one of the following:
-
'interval': Specifies as an interval the maximum grace period for current session queries, up to 20 days.
-
=DEFAULT: Sets the grace period for queries in this session to the user's GRACEPERIOD value. A new session is initially set to this value.
-
NONE: Valid only for superusers, removes any grace period previously set on session queries.
Privileges
-
Superusers can increase session grace period to any value, regardless of database or node settings.
-
Non-superusers can only set the session grace period to a value equal to or lower than their own user setting. If no grace period is explicitly set for a user, the grace period for that user is inherited from the node or database settings.
Examples
See Handling session socket blocking in the Administrator's Guide.
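A minimal sketch of setting and then restoring the session grace period, following the duration arguments described above (the interval value is illustrative):
=> SET SESSION GRACEPERIOD '10 minutes';
SET
=> SET SESSION GRACEPERIOD =DEFAULT;
SET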
41.11 - SET SESSION IDLESESSIONTIMEOUT
Sets the maximum amount of time that a session can remain idle before it exits.
Sets the maximum amount of time that a session can remain idle before it exits.
Note
An idle session has no queries running.
Syntax
SET SESSION IDLESESSIONTIMEOUT duration
Parameters
duration
- Specifies the amount of time a session can remain idle before it exits:
-
NONE (default): No idle timeout set on the session.
-
'interval': Specifies as an interval the maximum amount of time a session can remain idle.
-
=DEFAULT: Sets the idle timeout period for this session to the user's IDLESESSIONTIMEOUT value.
Privileges
-
Superusers can increase the time a session can remain idle to any value, regardless of database or node settings.
-
Non-superusers can only set the session idle time to a value equal to or lower than their own user setting. If no session idle time is explicitly set for a user, the session idle time for that user is inherited from the node or database settings.
Examples
See Managing client connections in the Administrator's Guide.
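A minimal sketch of setting a session idle timeout and then reverting to your user default (the interval value is illustrative):
=> SET SESSION IDLESESSIONTIMEOUT '30 minutes';
SET
=> SET SESSION IDLESESSIONTIMEOUT =DEFAULT;
SET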
41.12 - SET SESSION MEMORYCAP
Limits how much memory can be allocated to any request in the current session.
Limits how much memory can be allocated to any request in the current session. This limit only applies to the current session; it does not limit the total amount of memory used by multiple sessions.
Syntax
SET SESSION MEMORYCAP limit
Parameters
limit
- One of the following:
-
'max-expression': A string value that specifies the memory limit, one of the following:
-
int% — Expresses the maximum as a percentage of total memory available to the Resource Manager, where int is an integer value between 0 and 100. For example:
MEMORYCAP '40%'
-
int{K|M|G|T} — Expresses memory allocation in kilobytes, megabytes, gigabytes, or terabytes. For example:
MEMORYCAP '10G'
-
=DEFAULT: Sets the memory cap for queries in this session to the user's MEMORYCAP value. A new session is initially set to this value.
-
NONE: Removes the memory cap for this session.
Privileges
Examples
Set the session memory cap to 2 gigabytes:
=> SET SESSION MEMORYCAP '2G';
SET
=> SHOW MEMORYCAP;
name | setting
-----------+---------
memorycap | 2097152
(1 row)
Revert the memory cap to the default setting as specified in the user profile:
=> SET MEMORYCAP=DEFAULT;
SET
=> SHOW MEMORYCAP;
name | setting
-----------+---------
memorycap | 2013336
(1 row)
See also
Managing workloads
41.13 - SET SESSION MULTIPLEACTIVERESULTSETS
Enables or disables the execution of multiple active result sets (MARS) on a single JDBC connection.
Enables or disables the execution of multiple active result sets (MARS) on a single JDBC connection. Using this option requires an active JDBC connection.
Syntax
SET SESSION MULTIPLEACTIVERESULTSETS TO { ON | OFF }
Parameters
ON
- Enable MultipleActiveResultSets. Allows you to execute multiple result sets on a single connection.
OFF
- Disable MultipleActiveResultSets. Allows only one active result set per connection. (Default value.)
Privileges
None
Examples
This example shows how you can set MultipleActiveResultSets to on and then to off:
=> SET SESSION MULTIPLEACTIVERESULTSETS TO on;
SET
=> SET SESSION MULTIPLEACTIVERESULTSETS TO off;
SET
41.14 - SET SESSION RESOURCE_POOL
Associates the user with the specified resource pool.
Associates the user session with the specified resource pool.
Syntax
SET SESSION RESOURCE_POOL = { pool-name | DEFAULT }
Parameters
pool-name
- The name of an existing resource pool to associate with the current session. Non-superusers must have USAGE privileges on the specified resource pool.
DEFAULT
- Sets the session's resource pool to the resource pool assigned to this user.
Privileges
None
Examples
This example sets ceo_pool
as the session resource pool:
=> SET SESSION RESOURCE_POOL = ceo_pool;
SET
See also
41.15 - SET SESSION RUNTIMECAP
Sets the maximum amount of time queries and stored procedures can run in a given session.
Sets the maximum amount of time queries and stored procedures can run in a given session. If a query or stored procedure exceeds its session's RUNTIMECAP
, Vertica terminates it and returns an error. You cannot increase the RUNTIMECAP
beyond the limit that is set in your user profile.
Note
Vertica does not strictly enforce session RUNTIMECAP
settings. If you time a query or stored procedure, you might discover that it runs longer than the RUNTIMECAP
setting.
Syntax
SET SESSION RUNTIMECAP duration
Parameters
duration
- Specifies how long a given query can run in the current session, one of the following:
-
NONE (default): Removes a runtime limit for all current session queries.
-
'interval': Specifies as an interval the maximum runtime for current session queries, up to one year—for example, 1 minute or 100 seconds.
-
=DEFAULT: Sets maximum runtime for queries in this session to the user's RUNTIMECAP value.
Privileges
Examples
Set the maximum query runtime for the current session to 10 minutes:
=> SET SESSION RUNTIMECAP '10 minutes';
Revert the session RUNTIMECAP
to your user default setting:
=> SET SESSION RUNTIMECAP =DEFAULT;
SET
=> SHOW RUNTIMECAP;
name | setting
------------+-----------
runtimecap | UNLIMITED
(1 row)
Set the RUNTIMECAP to 1 SECOND
and run an anonymous procedure with an infinite loop:
=> SET SESSION RUNTIMECAP '1 SECOND';
SET
=> DO $$
BEGIN
LOOP
END LOOP;
END;
$$;
ERROR 0: Query exceeded maximum runtime
HINT: Change the maximum runtime using SET SESSION RUNTIMECAP
See also
41.16 - SET SESSION TEMPSPACECAP
Sets the maximum amount of temporary file storage that any request issued by the session can consume.
Sets the maximum amount of temporary file storage that any request issued by the session can consume. If a query's execution plan requires more storage space than the session TEMPSPACECAP, it returns an error.
Syntax
SET SESSION TEMPSPACECAP limit
Arguments
limit
- The maximum amount of temporary file storage to allocate to the current session, one of the following:
-
NONE (default): Unlimited temporary storage
-
=DEFAULT: Session TEMPSPACECAP is set to the user's TEMPSPACECAP value.
-
String that specifies the storage limit, one of the following:
-
int% expresses the maximum as a percentage of total temporary storage available to the Resource Manager, where int is an integer value between 0 and 100. For example:
SET SESSION TEMPSPACECAP '40%';
-
int{K|M|G|T} expresses storage allocation in kilobytes, megabytes, gigabytes, or terabytes. For example:
SET SESSION TEMPSPACECAP '10G';
Privileges
Non-superusers:
Examples
Set the session TEMPSPACECAP to 20 gigabytes:
=> SET SESSION TEMPSPACECAP '20G';
SET
=> SHOW TEMPSPACECAP;
name | setting
--------------+----------
tempspacecap | 20971520
(1 row)
Note
SHOW displays the TEMPSPACECAP in kilobytes.
Set the session TEMPSPACECAP to unlimited:
=> SET SESSION TEMPSPACECAP NONE;
SET
=> SHOW TEMPSPACECAP;
name | setting
--------------+-----------
tempspacecap | UNLIMITED
(1 row)
See also
41.17 - SET SESSION WORKLOAD
Sets the workload for the session.
Sets the workload for the current session.
For details on this and other session parameters, see SHOW.
Syntax
SET SESSION WORKLOAD TO { workload_name | DEFAULT | NONE }
Parameters
workload_name
- The workload to use for the current session. The specified workload must be associated with a workload routing rule.
DEFAULT
- Sets the workload to the workload granted to the current user or their roles. If they have more than one workload, the one with the highest priority is used.
NONE
- Removes the workload for the current session.
Privileges
None
Examples
The following example sets analytics
as the workload for the current session:
=> CREATE ROUTING RULE analytic_rule ROUTE WORKLOAD analytics TO SUBCLUSTER my_subcluster;
=> SET SESSION WORKLOAD analytics;
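You can verify the session workload with SHOW WORKLOAD; the output should resemble the following (the layout is a sketch that follows the other SHOW examples in this section):
=> SHOW WORKLOAD;
   name   | setting
----------+-----------
 workload | analytics
(1 row)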
41.18 - SET STANDARD_CONFORMING_STRINGS
Specifies whether to treat backslashes as escape characters for the current session.
Specifies whether to treat backslashes as escape characters for the current session. By default, Vertica conforms to the SQL standard and supports SQL:2008 string literals within Unicode escapes.
Syntax
SET STANDARD_CONFORMING_STRINGS TO { ON | OFF }
Parameters
ON
- (Default) Treat backslashes (\) in ordinary string literals ('...') literally. This means that backslashes are treated as string characters and not as escape characters.
OFF
- Treat backslashes as escape characters.
Privileges
None
Requirements
Examples
Turn off conforming strings for the session:
=> SET STANDARD_CONFORMING_STRINGS TO OFF;
Verify the current setting:
=> SHOW STANDARD_CONFORMING_STRINGS;
name | setting
-----------------------------+---------
standard_conforming_strings | off
(1 row)
Turn on conforming strings for the session:
=> SET STANDARD_CONFORMING_STRINGS TO ON;
See also
41.19 - SET TIME ZONE
Changes the TIME ZONE run-time parameter for the current session.
Changes the TIME ZONE run-time parameter for the current session. Use
SHOW TIMEZONE
to show the session's current time zone.
If you set the timezone using POSIX format, the timezone abbreviation you use overrides the default timezone abbreviation. If the date style is set to POSTGRES, the timezone abbreviation you use is also used when converting a timestamp to a string.
Syntax
SET TIME ZONE [TO] { value | 'value' }
Note
Vertica treats literals TIME ZONE
and TIMEZONE
as synonyms.
Parameters
value
- One of the following:
-
A time zone literal supported by Vertica. To view the default list of valid literals, see the files in the following directory:
/opt/vertica/share/timezonesets
-
A signed integer representing an offset from UTC in hours
-
A time zone literal with a signed integer offset. For example:
=> SET TIME ZONE TO 'America/New York -3'; -- equivalent to Pacific time
Note
Only valid timezone+offset combinations are meaningful as arguments to SET TIME ZONE. However, Vertica does not return an error for meaningless combinations—for example, America/NewYork + 150.
-
An interval value
-
Constants LOCAL and DEFAULT, which respectively set the time zone to the one specified in environment variable TZ, or if TZ is undefined, to the operating system time zone.
Only valid timezone+offset combinations are acceptable as parameters for this function.
Privileges
None
Examples
=> SET TIME ZONE TO DEFAULT;
=> SET TIME ZONE TO 'PST8PDT'; -- Berkeley, California
=> SET TIME ZONE TO 'Europe/Rome'; -- Italy
=> SET TIME ZONE TO '-7'; -- UTC offset equivalent to PDT
=> SET TIME ZONE TO INTERVAL '-08:00 HOURS';
See also
Using time zones with Vertica
41.19.1 - Time zone names for setting TIME ZONE
The time zone names listed below are recognized by Vertica as valid settings for the SQL time zone (the TIME ZONE run-time parameter).
The time zone names listed below are valid settings for the SQL time zone (the TIME ZONE run-time parameter).
These names are not the same as the names shown in
/opt/vertica/share/timezonesets
, which are recognized by Vertica in date/time input values. The TIME ZONE names listed below imply a local Daylight Saving Time rule, where date/time input names represent a fixed offset from UTC.
In many cases, the same zone has several names. These are grouped together. The list is sorted primarily by commonly used zone names.
In addition to the names in the list, Vertica accepts time zone names in the form STDoffset or STDoffsetDST, where STD is a zone abbreviation, offset is a numeric offset in hours west from UTC, and DST is an optional Daylight Saving Time zone abbreviation, assumed to stand for one hour ahead of the given offset.
For example, if EST5EDT were not already a recognized zone name, Vertica would accept it as functionally equivalent to USA East Coast time. When a Daylight Saving Time zone name is present, Vertica assumes it uses USA time zone rules, so this feature is of limited use outside North America.
Caution
Be aware that this provision can lead to silently accepting invalid input, as there is no check on the reasonableness of the zone abbreviations. For example, SET TIME ZONE TO FOOBANKO
works, leaving the system effectively using a rather peculiar abbreviation for GMT.
Time zones
- Africa:
- Africa/Abidjan
- Africa/Accra
- Africa/Addis_Ababa
- Africa/Algiers
- Africa/Asmera
- Africa/Bamako
- Africa/Bangui
- Africa/Banjul
- Africa/Bissau
- Africa/Blantyre
- Africa/Brazzaville
- Africa/Bujumbura
- Africa/Cairo Egypt
- Africa/Casablanca
- Africa/Ceuta
- Africa/Conakry
- Africa/Dakar
- Africa/Dar_es_Salaam
- Africa/Djibouti
- Africa/Douala
- Africa/El_Aaiun
- Africa/Freetown
- Africa/Gaborone
- Africa/Harare
- Africa/Johannesburg
- Africa/Kampala
- Africa/Khartoum
- Africa/Kigali
- Africa/Kinshasa
- Africa/Lagos
- Africa/Libreville
- Africa/Lome
- Africa/Luanda
- Africa/Lubumbashi
- Africa/Lusaka
- Africa/Malabo
- Africa/Maputo
- Africa/Maseru
- Africa/Mbabane
- Africa/Mogadishu
- Africa/Monrovia
- Africa/Nairobi
- Africa/Ndjamena
- Africa/Niamey
- Africa/Nouakchott
- Africa/Ouagadougou
- Africa/Porto-Novo
- Africa/Sao_Tome
- Africa/Timbuktu
- Africa/Tripoli Libya
- Africa/Tunis
- Africa/Windhoek
- America
- America/Adak America/Atka US/Aleutian
- America/Anchorage SystemV/YST9YDT US/Alaska
- America/Anguilla
- America/Antigua
- America/Araguaina
- America/Aruba
- America/Asuncion
- America/Bahia
- America/Barbados
- America/Belem
- America/Belize
- America/Boa_Vista
- America/Bogota
- America/Boise
- America/Buenos_Aires
- America/Cambridge_Bay
- America/Campo_Grande
- America/Cancun
- America/Caracas
- America/Catamarca
- America/Cayenne
- America/Cayman
- America/Chicago CST6CDT SystemV/CST6CDT US/Central
- America/Chihuahua
- America/Cordoba America/Rosario
- America/Costa_Rica
- America/Cuiaba
- America/Curacao
- America/Danmarkshavn
- America/Dawson
- America/Dawson_Creek
- America/Denver MST7MDT SystemV/MST7MDT US/Mountain America/Shiprock Navajo
- America/Detroit US/Michigan
- America/Dominica
- America/Edmonton Canada/Mountain
- America/Eirunepe
- America/El_Salvador
- America/Ensenada America/Tijuana Mexico/BajaNorte
- America/Fortaleza
- America/Glace_Bay
- America/Godthab
- America/Goose_Bay
- America/Grand_Turk
- America/Grenada
- America/Guadeloupe
- America/Guatemala
- America/Guayaquil
- America/Guyana
- America/Halifax Canada/Atlantic SystemV/AST4ADT
- America/Havana Cuba
- America/Hermosillo
- America/Indiana/Indianapolis
- America/Indianapolis
- America/Fort_Wayne EST SystemV/EST5 US/East-Indiana
- America/Indiana/Knox America/Knox_IN US/Indiana-Starke
- America/Indiana/Marengo
- America/Indiana/Vevay
- America/Inuvik
- America/Iqaluit
- America/Jamaica Jamaica
- America/Jujuy
- America/Juneau
- America/Kentucky/Louisville America/Louisville
- America/Kentucky/Monticello
- America/La_Paz
- America/Lima
- America/Los_Angeles PST8PDT SystemV/PST8PDT US/Pacific US/Pacific- New
- America/Maceio
- America/Managua
- America/Manaus Brazil/West
- America/Martinique
- America/Mazatlan Mexico/BajaSur
- America/Mendoza
- America/Menominee
- America/Merida
- America/Mexico_City Mexico/General
- America/Miquelon
- America/Monterrey
- America/Montevideo
- America/Montreal
- America/Montserrat
- America/Nassau
- America/New_York EST5EDT SystemV/EST5EDT US/Eastern
- America/Nipigon
- America/Nome
- America/Noronha Brazil/DeNoronha
- America/North_Dakota/Center
- America/Panama
- America/Pangnirtung
- America/Paramaribo
- America/Phoenix MST SystemV/MST7 US/Arizona
- America/Port-au-Prince
- America/Port_of_Spain
- America/Porto_Acre America/Rio_Branco Brazil/Acre
- America/Porto_Velho
- America/Puerto_Rico SystemV/AST4
- America/Rainy_River
- America/Rankin_Inlet
- America/Recife
- America/Regina Canada/East-Saskatchewan Canada/Saskatchewan SystemV/CST6
- America/Santiago Chile/Continental
- America/Santo_Domingo
- America/Sao_Paulo Brazil/East
- America/Scoresbysund
- America/St_Johns Canada/Newfoundland
- America/St_Kitts
- America/St_Lucia
- America/St_Thomas America/Virgin
- America/St_Vincent
- America/Swift_Current
- America/Tegucigalpa
- America/Thule
- America/Thunder_Bay
- America/Toronto Canada/Eastern
- America/Tortola
- America/Vancouver Canada/Pacific
- America/Whitehorse Canada/Yukon
- America/Winnipeg Canada/Central
- America/Yakutat
- America/Yellowknife
- Antarctica
- Antarctica/Casey
- Antarctica/Davis
- Antarctica/DumontDUrville
- Antarctica/Mawson
- Antarctica/McMurdo
- Antarctica/South_Pole
- Antarctica/Palmer
- Antarctica/Rothera
- Antarctica/Syowa
- Antarctica/Vostok
- Asia
- Asia/Aden
- Asia/Almaty
- Asia/Amman
- Asia/Anadyr
- Asia/Aqtau
- Asia/Aqtobe
- Asia/Ashgabat Asia/Ashkhabad
- Asia/Baghdad
- Asia/Bahrain
- Asia/Baku
- Asia/Bangkok
- Asia/Beirut
- Asia/Bishkek
- Asia/Brunei
- Asia/Calcutta
- Asia/Choibalsan
- Asia/Chongqing Asia/Chungking
- Asia/Colombo
- Asia/Dacca Asia/Dhaka
- Asia/Damascus
- Asia/Dili
- Asia/Dubai
- Asia/Dushanbe
- Asia/Gaza
- Asia/Harbin
- Asia/Hong_Kong Hongkong
- Asia/Hovd
- Asia/Irkutsk
- Asia/Jakarta
- Asia/Jayapura
- Asia/Jerusalem Asia/Tel_Aviv Israel
- Asia/Kabul
- Asia/Kamchatka
- Asia/Karachi
- Asia/Kashgar
- Asia/Katmandu
- Asia/Krasnoyarsk
- Asia/Kuala_Lumpur
- Asia/Kuching
- Asia/Kuwait
- Asia/Macao Asia/Macau
- Asia/Magadan
- Asia/Makassar Asia/Ujung_Pandang
- Asia/Manila
- Asia/Muscat
- Asia/Nicosia Europe/Nicosia
- Asia/Novosibirsk
- Asia/Omsk
- Asia/Oral
- Asia/Phnom_Penh
- Asia/Pontianak
- Asia/Pyongyang
- Asia/Qatar
- Asia/Qyzylorda
- Asia/Rangoon
- Asia/Riyadh
- Asia/Riyadh87 Mideast/Riyadh87
- Asia/Riyadh88 Mideast/Riyadh88
- Asia/Riyadh89 Mideast/Riyadh89
- Asia/Saigon
- Asia/Sakhalin
- Asia/Samarkand
- Asia/Seoul ROK
- Asia/Shanghai PRC
- Asia/Singapore Singapore
- Asia/Taipei ROC
- Asia/Tashkent
- Asia/Tbilisi
- Asia/Tehran Iran
- Asia/Thimbu Asia/Thimphu
- Asia/Tokyo Japan
- Asia/Ulaanbaatar Asia/Ulan_Bator
- Asia/Urumqi
- Asia/Vientiane
- Asia/Vladivostok
- Asia/Yakutsk
- Asia/Yekaterinburg
- Asia/Yerevan
- Atlantic
- Atlantic/Azores
- Atlantic/Bermuda
- Atlantic/Canary
- Atlantic/Cape_Verde
- Atlantic/Faeroe
- Atlantic/Madeira
- Atlantic/Reykjavik Iceland
- Atlantic/South_Georgia
- Atlantic/St_Helena
- Atlantic/Stanley
- Australia
- Australia/ACT
- Australia/Canberra
- Australia/NSW
- Australia/Sydney
- Australia/Adelaide
- Australia/South
- Australia/Brisbane
- Australia/Queensland
- Australia/Broken_Hill
- Australia/Yancowinna
- Australia/Darwin
- Australia/North
- Australia/Hobart
- Australia/Tasmania
- Australia/LHI
- Australia/Lord_Howe
- Australia/Lindeman
- Australia/Melbourne
- Australia/Victoria
- Australia/Perth Australia/West
- CET
- EET
- Etc/GMT
- GMT
- GMT+0
- GMT-0
- GMT0
- Greenwich
- Etc/Greenwich
- Etc/GMT+0...Etc/GMT+12
- Etc/GMT-0...Etc/GMT-14
- Europe
- Europe/Amsterdam
- Europe/Andorra
- Europe/Athens
- Europe/Belfast
- Europe/Belgrade
- Europe/Ljubljana
- Europe/Sarajevo
- Europe/Skopje
- Europe/Zagreb
- Europe/Berlin
- Europe/Brussels
- Europe/Bucharest
- Europe/Budapest
- Europe/Chisinau Europe/Tiraspol
- Europe/Copenhagen
- Europe/Dublin Eire
- Europe/Gibraltar
- Europe/Helsinki
- Europe/Istanbul Asia/Istanbul Turkey
- Europe/Kaliningrad
- Europe/Kiev
- Europe/Lisbon Portugal
- Europe/London GB GB-Eire
- Europe/Luxembourg
- Europe/Madrid
- Europe/Malta
- Europe/Minsk
- Europe/Monaco
- Europe/Moscow W-SU
- Europe/Oslo
- Arctic/Longyearbyen
- Atlantic/Jan_Mayen
- Europe/Paris
- Europe/Prague Europe/Bratislava
- Europe/Riga
- Europe/Rome Europe/San_Marino Europe/Vatican
- Europe/Samara
- Europe/Simferopol
- Europe/Sofia
- Europe/Stockholm
- Europe/Tallinn
- Europe/Tirane
- Europe/Uzhgorod
- Europe/Vaduz
- Europe/Vienna
- Europe/Vilnius
- Europe/Warsaw Poland
- Europe/Zaporozhye
- Europe/Zurich
- Factory
- Indian
- Indian/Antananarivo
- Indian/Chagos
- Indian/Christmas
- Indian/Cocos
- Indian/Comoro
- Indian/Kerguelen
- Indian/Mahe
- Indian/Maldives
- Indian/Mauritius
- Indian/Mayotte
- Indian/Reunion
- MET
- Pacific
- Pacific/Apia
- Pacific/Auckland NZ
- Pacific/Chatham NZ-CHAT
- Pacific/Easter
- Chile/EasterIsland
- Pacific/Efate
- Pacific/Enderbury
- Pacific/Fakaofo
- Pacific/Fiji
- Pacific/Funafuti
- Pacific/Galapagos
- Pacific/Gambier SystemV/YST9
- Pacific/Guadalcanal
- Pacific/Guam
- Pacific/Honolulu HST SystemV/HST10 US/Hawaii
- Pacific/Johnston
- Pacific/Kiritimati
- Pacific/Kosrae
- Pacific/Kwajalein Kwajalein
- Pacific/Majuro
- Pacific/Marquesas
- Pacific/Midway
- Pacific/Nauru
- Pacific/Niue
- Pacific/Norfolk
- Pacific/Noumea
- Pacific/Pago_Pago
- Pacific/Samoa US/Samoa
- Pacific/Palau
- Pacific/Pitcairn SystemV/PST8
- Pacific/Ponape
- Pacific/Port_Moresby
- Pacific/Rarotonga
- Pacific/Saipan
- Pacific/Tahiti
- Pacific/Tarawa
- Pacific/Tongatapu
- Pacific/Truk
- Pacific/Wake
- Pacific/Wallis
- Pacific/Yap
- UCT Etc
- UCT
- UTC
- Universal Zulu
- Etc/UTC
- Etc/Universal
- Etc/Zulu
- WET
42 - SHOW
Shows run-time parameters for the current session.
Shows run-time parameters for the current session.
Syntax
SHOW { parameter | ALL }
Parameters
ALL
- Shows all run-time settings.
AUTOCOMMIT
- Returns on/off to indicate whether statements automatically commit their transactions when they complete.
-
AVAILABLE ROLES
- Lists all roles available to the user.
AVAILABLE WORKLOADS
- Returns the workloads available to the user and their enabled roles.
DATESTYLE
- Shows the current style of date values. See SET DATESTYLE.
-
ENABLED ROLES
- Shows the roles enabled for the current session. See SET ROLE.
ESCAPE_STRING_WARNING
- Returns on/off to indicate whether warnings are issued when backslash escapes are found in strings. See SET ESCAPE_STRING_WARNING.
GRACEPERIOD
- Shows the session GRACEPERIOD set by SET SESSION GRACEPERIOD.
IDLESESSIONTIMEOUT
- Shows how long the session can remain idle before it times out.
INTERVALSTYLE
- Shows whether units are output when printing intervals. See SET INTERVALSTYLE.
LOCALE
- Shows the current locale. See SET LOCALE.
MEMORYCAP
- Shows the maximum amount of memory that any request can use. See SET MEMORYCAP.
MULTIPLEACTIVERESULTSETS
- Returns on/off to indicate whether multiple active result sets on one connection are allowed. See SET SESSION MULTIPLEACTIVERESULTSETS.
RESOURCE POOL
- Shows the resource pool that the session is using. See SET RESOURCE POOL.
RUNTIMECAP
- Shows the maximum amount of time that queries can run in the session. See SET RUNTIMECAP.
SEARCH_PATH
- Shows the order in which Vertica searches schemas. See SET SEARCH_PATH. For example:
=> SHOW SEARCH_PATH;
name | setting
-------------+---------------------------------------------------
search_path | "$user", public, v_catalog, v_monitor, v_internal
(1 row)
STANDARD_CONFORMING_STRINGS
- Shows whether backslash escapes are enabled for the session. See SET STANDARD_CONFORMING_STRINGS.
TEMPSPACECAP
- Shows the maximum amount of temporary file space that queries can use in the session. See SET TEMPSPACECAP.
TIMEZONE
- Shows the timezone set in the current session. See SET TIMEZONE.
-
TRANSACTION_ISOLATION
- Shows the current transaction isolation setting, as described in SET SESSION CHARACTERISTICS AS TRANSACTION. For example:
=> SHOW TRANSACTION_ISOLATION;
name | setting
-----------------------+----------------
transaction_isolation | READ COMMITTED
(1 row)
-
TRANSACTION_READ_ONLY
- Returns true/false to indicate the current read-only setting, as described in SET SESSION CHARACTERISTICS AS TRANSACTION. For example:
=> SHOW TRANSACTION_READ_ONLY;
name | setting
-----------------------+---------
transaction_read_only | false
(1 row)
WORKLOAD
- Shows the workloads associated with the current session.
Privileges
None
Examples
Display all current runtime parameter settings:
=> SHOW ALL;
name | setting
-----------------------------+-----------------------------------------------------------
locale | en_US@collation=binary (LEN_KBINARY)
autocommit | off
standard_conforming_strings | on
escape_string_warning | on
multipleactiveresultsets | off
datestyle | ISO, MDY
intervalstyle | plain
timezone | America/New_York
search_path | "$user", public, v_catalog, v_monitor, v_internal, v_func
transaction_isolation | READ COMMITTED
transaction_read_only | false
resource_pool | general
memorycap | UNLIMITED
tempspacecap | UNLIMITED
runtimecap | UNLIMITED
idlesessiontimeout | UNLIMITED
graceperiod | UNLIMITED
enabled roles | dbduser*, dbadmin*, pseudosuperuser*
available roles | dbduser*, dbadmin*, pseudosuperuser*
tcp_keepalive | idle_time: [7200], probe_interval: [75], probe_count: [9]
workload | analytics
(21 rows)
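Display a single parameter, for example DATESTYLE (the value shown follows the SHOW ALL output above):
=> SHOW DATESTYLE;
   name    | setting
-----------+----------
 datestyle | ISO, MDY
(1 row)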
43 - SHOW CURRENT
Displays active configuration parameter values that are set at all levels.
Displays active configuration parameter values that are set at all levels. Vertica first checks values set at the session level. If a value is not set for a configuration parameter at the session level, Vertica next checks if the value is set for the node where you are logged in, and then checks the database level. If no values are set, SHOW CURRENT
shows the default value for the configuration parameter. If the configuration parameter requires a restart to take effect, the active values shown might differ from the set values.
Syntax
SHOW CURRENT { parameter-name[,...] | ALL }
Parameters
parameter-name
- Names of configuration parameters to show.
ALL
- Shows all configuration parameters set at all levels.
Privileges
Non-superuser: SHOW CURRENT ALL
returns masked parameter settings. Attempts to view specific parameter settings return an error.
Examples
Show configuration parameters and their settings at all levels.
=> SHOW CURRENT ALL;
level | name |setting
---------+---------------------------+---------
DEFAULT | ActivePartitionCount | 1
DEFAULT | AdvanceAHMInterval | 180
DEFAULT | AHMBackupManagement | 0
DATABASE | AnalyzeRowCountInterval | 3600
SESSION | ForceUDxFencedMode | 1
NODE | MaxClientSessions | 0
...
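Show specific parameters. The levels and values shown here follow the previous example:
=> SHOW CURRENT AnalyzeRowCountInterval, MaxClientSessions;
  level   |          name           | setting
----------+-------------------------+---------
 DATABASE | AnalyzeRowCountInterval | 3600
 NODE     | MaxClientSessions       | 0
(2 rows)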
44 - SHOW DATABASE
Displays configuration parameter values that are set for the database.
Displays configuration parameter values that are set for the database.
Important
You can also get detailed information on configuration parameters, including their current and default values, by querying system table
CONFIGURATION_PARAMETERS.
Note
If the configuration parameter is set but requires a database restart to take effect, the value shown might differ from the active value.
Syntax
SHOW DATABASE db-spec { parameter-name[,...] | ALL }
Parameters
db-spec
- Specifies the current database, set to the database name or DEFAULT.
parameter-name
- Names of one or more configuration parameters to show. Non-superusers can only specify parameters whose settings are not masked by SHOW DATABASE...ALL; otherwise, Vertica returns an error.
If you specify a single parameter that is not set, SHOW DATABASE returns an empty row for that parameter.
To obtain the names of database-level parameters, query system table CONFIGURATION_PARAMETERS.
ALL
- Shows all configuration parameters set at the database level. For non-superusers, Vertica masks settings of security parameters, which only superusers can access.
Privileges
-
Superuser: Shows all database parameter settings.
-
Non-superuser: Masks all security parameter settings, which only superusers can access. To determine which parameters require superuser privileges, query system table CONFIGURATION_PARAMETERS.
Examples
Show all configuration parameters that are set on the database, as seen by a non-superuser:
=> SHOW DATABASE DEFAULT ALL;
name | setting
--------------------------------+----------
AllowNumericOverflow | 1
CopyFaultTolerantExpressions | 1
GlobalHeirUsername | ********
MaxClientSessions | 50
NumericSumExtraPrecisionDigits | 0
(5 rows)
Show settings for two configuration parameters:
=> SHOW DATABASE DEFAULT AllowNumericOverflow, NumericSumExtraPrecisionDigits;
name | setting
--------------------------------+---------
AllowNumericOverflow | 1
NumericSumExtraPrecisionDigits | 0
(2 rows)
45 - SHOW NODE
Displays configuration parameter values that are set for a node.
Displays configuration parameter values that are set for a node. If you specify a parameter that is not set, SHOW NODE
returns an empty row for that parameter.
Note
If the configuration parameter is set but requires a database restart to take effect, the value shown might differ from the active value.
Syntax
SHOW NODE node-name { parameter-name [,...] | ALL }
Parameters
node-name
- Name of the target node.
parameter-name
- Names of one or more node-level configuration parameters. To obtain the names of node-level parameters, query system table CONFIGURATION_PARAMETERS.
ALL
- Shows all configuration parameters set at the node level.
Privileges
None
Examples
View all configuration parameters and their settings for node v_vmart_node0001:
=> SHOW NODE v_vmart_node0001 ALL;
name | setting
---------------------------+---------
DefaultIdleSessionTimeout | 5 hour
MaxClientSessions | 20
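Show a single parameter for the same node. The value follows the previous example:
=> SHOW NODE v_vmart_node0001 MaxClientSessions;
       name        | setting
-------------------+---------
 MaxClientSessions | 20
(1 row)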
46 - SHOW SESSION
Displays configuration parameter values that are set for the current session.
Displays configuration parameter values that are set for the current session. If you specify a parameter that is not set, SHOW SESSION
returns an empty row for that parameter.
Note
If the configuration parameter is set but requires a database restart to take effect, the value shown might differ from the active value.
Syntax
SHOW SESSION { ALL | UDPARAMETER ALL }
Parameters
ALL
- Shows all Vertica configuration parameters set at the session level.
UDPARAMETER ALL
- Shows all parameters defined by user-defined extensions. These parameters are not shown in the CONFIGURATION_PARAMETERS table.
Privileges
None
Examples
View all Vertica configuration parameters and their settings for the current session. User-defined parameters are not included:
=> SHOW SESSION ALL;
name | setting
----------------------------+---------------------------------------------------
locale | en_US@collation=binary (LEN_KBINARY)
autocommit | off
standard_conforming_strings | on
escape_string_warning | on
datestyle | ISO, MDY
intervalstyle | plain
timezone | America/New_York
search_path | "$user", public, v_catalog, v_monitor, v_internal
transaction_isolation | READ COMMITTED
transaction_read_only | false
resource_pool | general
memorycap | UNLIMITED
tempspacecap | UNLIMITED
runtimecap | UNLIMITED
enabled roles | dbduser*, dbadmin*, pseudosuperuser*
available roles | dbduser*, dbadmin*, pseudosuperuser*
ForceUDxFencedMode | 1
(17 rows)
47 - SHOW USER
Displays configuration parameter settings for database users.
Displays configuration parameter settings for database users. To get the names of user-level parameters, query system table CONFIGURATION_PARAMETERS:
=> SELECT parameter_name, allowed_levels FROM configuration_parameters
   WHERE allowed_levels ilike '%USER%' AND parameter_name ilike '%depot%';
parameter_name | allowed_levels
-------------------------+-------------------------
UseDepotForWrites | SESSION, USER, DATABASE
DepotOperationsForQuery | SESSION, USER, DATABASE
UseDepotForReads | SESSION, USER, DATABASE
(3 rows)
Syntax
SHOW USER { user-name | ALL } [PARAMETER] { cfg-parameter [,...] | ALL }
Parameters
user-name | ALL
- Show parameter settings for the specified user, or for all users.
[PARAMETER]
parameter-list
- A comma-delimited list of user-level configuration parameters.
PARAMETER ALL
- Show all configuration parameters that are set for the specified users.
Privileges
Non-superusers: Can view only their own configuration parameter settings.
Examples
The following example shows configuration parameter settings for two users, Yvonne and Ahmed:
=> SELECT user_name FROM v_catalog.users WHERE user_name != 'dbadmin';
user_name
-----------
Ahmed
Yvonne
(2 rows)
=> SHOW USER Yvonne PARAMETER ALL;
user | parameter | setting
--------+-------------------------+---------
Yvonne | DepotOperationsForQuery | Fetches
(1 row)
=> ALTER USER Yvonne SET PARAMETER UseDepotForWrites = 0;
ALTER USER
=> SHOW USER Yvonne PARAMETER ALL;
user | parameter | setting
--------+-------------------------+---------
Yvonne | DepotOperationsForQuery | Fetches
Yvonne | UseDepotForWrites | 0
(2 rows)
=> ALTER USER Ahmed SET PARAMETER DepotOperationsForQuery = 'Fetches';
ALTER USER
=> SHOW USER ALL PARAMETER ALL;
user | parameter | setting
--------+-------------------------+---------
Ahmed | DepotOperationsForQuery | Fetches
Yvonne | DepotOperationsForQuery | Fetches
Yvonne | UseDepotForWrites | 0
(3 rows)
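Show a single parameter for one user. The value follows the previous examples:
=> SHOW USER Yvonne PARAMETER UseDepotForWrites;
  user  |     parameter     | setting
--------+-------------------+---------
 Yvonne | UseDepotForWrites | 0
(1 row)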
See also
ALTER USER
48 - START TRANSACTION
Starts a transaction block.
Starts a transaction block.
Syntax
START TRANSACTION [ isolation_level ]
where isolation_level is one of:
ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
READ { ONLY | WRITE }
Parameters
Isolation level, described below, determines what data the transaction can access when other transactions are running concurrently. The isolation level cannot be changed after the first query (SELECT) or DML statement (INSERT, DELETE, UPDATE) has run. A transaction retains its isolation level until it completes, even if the session's isolation level changes during the transaction. Vertica internal processes (such as the Tuple Mover and refresh operations) and DDL operations always run at the SERIALIZABLE isolation level to ensure consistency.
WORK | TRANSACTION
- Have no effect; they are optional keywords for readability.
ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
- SERIALIZABLE: Sets the strictest level of SQL transaction isolation. This level emulates transactions serially, rather than concurrently. It holds locks and blocks write operations until the transaction completes. Not recommended for normal query operations.
- REPEATABLE READ: Automatically converted to SERIALIZABLE by Vertica.
- READ COMMITTED (Default): Allows concurrent transactions. Use READ COMMITTED isolation for normal query operations, but be aware that there are subtle behavioral differences between these isolation levels. See Transactions for more information.
- READ UNCOMMITTED: Automatically converted to READ COMMITTED by Vertica.
READ {WRITE | ONLY}
- Determines whether the transaction is read/write or read-only. Read/write is the default.
Setting the transaction session mode to read-only disallows the following SQL commands, but does not prevent all disk write operations:
- INSERT, UPDATE, DELETE, and COPY if the table they would write to is not a temporary table
- All CREATE, ALTER, and DROP commands
- GRANT, REVOKE, and EXPLAIN if the command it would run is among those listed.
Privileges
None
Notes
BEGIN performs the same function as START TRANSACTION.
Examples
This example shows how to start a transaction.
=> START TRANSACTION ISOLATION LEVEL READ COMMITTED READ WRITE;
START TRANSACTION
=> CREATE TABLE sample_table (a INT);
CREATE TABLE
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
(1 row)
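To make these changes permanent, end the transaction with COMMIT; otherwise, Vertica discards them:
=> COMMIT;
COMMIT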
49 - TRUNCATE TABLE
Removes all storage associated with a table, while leaving the table definition intact.
Removes all storage associated with a table, while leaving the table definition intact. TRUNCATE TABLE auto-commits the current transaction after statement execution and cannot be rolled back.
TRUNCATE TABLE also performs the following actions:
-
Removes all table history preceding the current epoch. After TRUNCATE TABLE returns, AT EPOCH queries on the truncated table return nothing.
-
Drops all table- and partition-level statistics.
Syntax
TRUNCATE TABLE [[{database | namespace }.]schema.]table-name
Arguments
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table-name
- The name of the anchor table or temporary table to truncate. You cannot truncate an external table.
Privileges
Non-superuser:
Examples
See Truncating tables.
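A minimal sketch that reuses the sample_table created in the START TRANSACTION example above; after truncation, historical queries on the table return no rows:
=> TRUNCATE TABLE sample_table;
TRUNCATE TABLE
=> AT EPOCH LATEST SELECT * FROM sample_table;
 a
---
(0 rows)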
50 - UPDATE
Replaces the values of the specified columns in all rows for which a specified condition is true.
Replaces the values of the specified columns in all rows for which a specified condition is true. All other columns and rows in the table are unchanged. If successful, UPDATE returns the number of rows updated. A count of 0 indicates no rows matched the condition.
Important
The Vertica implementation of UPDATE differs from traditional databases. It does not delete data from disk storage; it writes two rows, one with new data and one marked for deletion. Rows marked for deletion remain available for historical queries.
Syntax
UPDATE [[ { database | namespace }.]schema.]table-reference [AS] alias
SET set-expression [,...]
[ FROM from-list ]
[ where-clause ]
Note
UPDATE statements can also embed the following hints:
Arguments
{ database | namespace }
- Name of the database or namespace that contains table:
-
Database name: If specified, it must be the current database.
-
Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.
schema
- Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.
If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
table-reference
- A table, one of the following:
You cannot update a projection.
alias
- A temporary name used to reference the table.
SET
set-expression
- The columns to update from one or more set expressions. Each SET clause expression specifies a target column and its new value as follows:
column-name = { expression | DEFAULT }
where:
-
column-name is any column that does not have primary key or foreign key referential integrity constraints and is not of a complex type. Native arrays are permitted.
-
expression specifies a value to assign to the column. The expression can use the current values of this and other table columns. For example:
=> UPDATE T1 SET C1 = C1+1
-
DEFAULT sets column-name to its default value, or is ignored if no default value is defined for this column.
UPDATE only modifies the columns specified by the SET clause. Unspecified columns remain unchanged.
FROM
from-list
- A list of table expressions, allowing columns from other tables to appear in the WHERE condition and the UPDATE expressions. This is similar to the list of tables that can be specified in the FROM clause of a SELECT command.
The FROM clause can reference the target table as follows:
FROM DEFAULT [join-type] JOIN table-reference [ ON join-predicate ]
DEFAULT specifies the table to update. This keyword can be used only once in the FROM clause, and it cannot be used elsewhere in the UPDATE statement.
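For example, the following sketch reuses the result_table and new_addresses tables from the examples below, and is equivalent to the aliased join shown there:
=> UPDATE result_table
   SET address = n.new_address
   FROM DEFAULT JOIN new_addresses n ON result_table.cust_id = n.new_cust_id;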
Privileges
Table owner or user with GRANT OPTION is grantor.
-
UPDATE privilege on table
-
USAGE privilege on schema that contains the table
-
SELECT privilege on the table when executing an UPDATE statement that references table column values in a WHERE or SET clause
Subqueries and joins
UPDATE supports subqueries and joins, which is useful for updating values in a table based on values that are stored in other tables. For details, see Subqueries in UPDATE and DELETE statements.
Committing successive table changes
Vertica follows the SQL-92 transaction model, so successive INSERT, UPDATE, and DELETE statements are included in the same transaction. You do not need to explicitly start this transaction; however, you must explicitly end it with COMMIT, or implicitly end it with COPY. Otherwise, Vertica discards all changes that were made within the transaction.
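For example, a sketch that reuses the fact table from the examples below (UPDATE output omitted); the change is not durable until the COMMIT:
=> UPDATE fact SET price = price * 2 WHERE cost > 100;
...
=> COMMIT;
COMMIT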
Restrictions
-
You cannot update an immutable table.
-
You cannot update columns of complex types except for native arrays.
-
If the joins specified in the FROM clause or WHERE predicate produce more than one copy of the row in the target table, the new value of the row in the table is chosen arbitrarily.
-
If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.
-
If an update would violate a table or schema disk quota, the operation fails. For more information, see Disk quotas.
Examples
In the fact
table, modify the price
column value for all rows where the cost
column value is greater than 100:
=> UPDATE fact SET price = price - cost * 80 WHERE cost > 100;
In the retail.customer
table, set the state
column to NH
when the CID
column value is greater than 100:
=> UPDATE retail.customer SET state = 'NH' WHERE CID > 100;
To use table aliases in UPDATE queries, consider the following two tables:
=> SELECT * FROM result_table;
cust_id | address
---------+--------------------
20 | Lincoln Street
30 | Beach Avenue
30 | Booth Hill Road
40 | Mt. Vernon Street
50 | Hillside Avenue
(5 rows)
=> SELECT * FROM new_addresses;
new_cust_id | new_address
-------------+---------------
20 | Infinite Loop
30 | Loop Infinite
60 | New Addresses
(3 rows)
The following query and subquery use table aliases to update the address
column in result_table
(alias r
) with the new address from the corresponding column in the new_addresses
table (alias n
):
=> UPDATE result_table r
SET address=n.new_address
FROM new_addresses n
WHERE r.cust_id = n.new_cust_id;
result_table
shows the address
field updates made for customer IDs 20 and 30:
=> SELECT * FROM result_table ORDER BY cust_id;
cust_id | address
---------+------------------
20 | Infinite Loop
30 | Loop Infinite
30 | Loop Infinite
40 | Mt. Vernon Street
50 | Hillside Avenue
(5 rows)
You cannot use UPDATE to update individual elements of native arrays. Instead, replace the entire array value. The following example uses ARRAY_CAT to add an element to an array column:
=> SELECT * FROM singers;
lname | fname | bands
--------+-------+---------------------------------------------
Cher | | ["Sonny and Cher"]
Jagger | Mick | ["Rolling Stones"]
Slick | Grace | ["Jefferson Airplane","Jefferson Starship"]
(3 rows)
=> UPDATE singers SET bands=ARRAY_CAT(bands,ARRAY['something new'])
WHERE lname='Cher';
OUTPUT
--------
1
(1 row)
=> SELECT * FROM singers;
lname | fname | bands
--------+-------+---------------------------------------------
Jagger | Mick | ["Rolling Stones"]
Slick | Grace | ["Jefferson Airplane","Jefferson Starship"]
Cher | | ["Sonny and Cher","something new"]
(3 rows)