Statements

The primary structure of a SQL query is its statement. Whether a statement stands on its own, or is part of a multi-statement query, each statement must end with a semicolon. The following example contains four common SQL statements—CREATE TABLE, INSERT, SELECT, and COMMIT:

=> CREATE TABLE comments (id INT, comment VARCHAR);
CREATE TABLE
=> INSERT INTO comments VALUES (1, 'Hello World');
OUTPUT
--------
1
(1 row)

=> SELECT * FROM comments;
 id |   comment
----+-------------
  1 | Hello World
(1 row)

=> COMMIT;
COMMIT
=>

1 - ACTIVATE DIRECTED QUERY

Activates a directed query and makes it available to the query optimizer across all sessions.

Syntax

ACTIVATE DIRECTED QUERY { query-name | where-clause }

Arguments

query-name
Name of the directed query to activate, as stored in the query_name column of the DIRECTED_QUERIES system table. You can also use GET DIRECTED QUERY to obtain the names of all directed queries that map to an input query.
where-clause
Resolves to one or more directed queries that are filtered from system table DIRECTED_QUERIES. For example, the following statement activates all directed queries with the same save_plans_version identifier:
=> ACTIVATE DIRECTED QUERY WHERE save_plans_version = 21;

Privileges

Superuser

Activation life cycle

After you activate a directed query, it remains active until it is explicitly deactivated by DEACTIVATE DIRECTED QUERY or removed from storage by DROP DIRECTED QUERY. If a directed query is active at the time of database shutdown, Vertica automatically reactivates it when you restart the database.

Examples

See Managing directed queries.
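
For illustration, a minimal sketch of activating a single directed query by name; findEmployeesBoston is a hypothetical directed query name, not one defined elsewhere in this documentation:

-- findEmployeesBoston is a hypothetical saved directed query
=> ACTIVATE DIRECTED QUERY findEmployeesBoston;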

2 - ALTER statements

ALTER statements let you change existing database objects.

2.1 - ALTER ACCESS POLICY

Performs one of the following actions on existing access policies:

  • Modify an access policy by changing its expression, and by enabling/disabling the policy.

  • Copy an access policy from one table to another.

Syntax

Modify policy:

ALTER ACCESS POLICY ON [[database.]schema.]table
   { FOR COLUMN column [ expression ] | FOR ROWS [ WHERE expression ] } [ GRANT TRUSTED ] { ENABLE | DISABLE }

Copy policy:

ALTER ACCESS POLICY ON [[database.]schema.]table
   { FOR COLUMN column | FOR ROWS } COPY TO TABLE table;

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

table
The name of the table that contains the access policy you want to enable, disable, or copy.
FOR COLUMN column [expression]
Replaces the access policy expression that was previously set for this column. Omit expression from the FOR COLUMN clause in order to enable or disable this policy only, or copy it to another table.
FOR ROWS [WHERE expression]
Replaces the row access policy expression that was previously set for this table. Omit WHERE expression from the FOR ROWS clause in order to enable or disable this policy only, or copy it to another table.
GRANT TRUSTED

Specifies that GRANT statements take precedence over the access policy in determining whether users can perform DML operations on the target table. If omitted, users can only modify table data if the access policy allows them to see the stored data in its original, unaltered state. For more information, see Access policies and DML operations.

ENABLE | DISABLE
Indicates whether to enable or disable the access policy at the table level.
COPY TO TABLE tablename
Copies the existing access policy to the specified table. The copied access policy includes its enabled/disabled and GRANT TRUSTED statuses.

The following requirements apply:

  • Copying a column access policy:

    • The target table must have a column of the same name and compatible data type.

    • The target column must not have an access policy.

  • Copying a row access policy: The target table must not have an access policy.

Privileges

Modify access policy

Non-superuser: Ownership of the table

Copy access policy

Non-superuser: Ownership of the source and destination tables

Examples

See Managing access policies
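
As an additional sketch, disable a row access policy and copy a column access policy to another table; the table and column names here are hypothetical:

-- hypothetical table and column names
=> ALTER ACCESS POLICY ON public.customers FOR ROWS DISABLE;
=> ALTER ACCESS POLICY ON public.customers FOR COLUMN ssn COPY TO TABLE public.customers_archive;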

See also

CREATE ACCESS POLICY

2.2 - ALTER AUTHENTICATION

Modifies the settings for a specified authentication method.

Syntax

ALTER AUTHENTICATION auth_record {
   | { ENABLE | DISABLE }
   | { LOCAL | HOST [ { TLS | NO TLS } ] host_ip_address }
   | RENAME TO new_auth_record_name
   | METHOD value
   | SET param=value[,...]
   | PRIORITY value
   | [ [ NO ] FALLTHROUGH ]
}

Parameters

Parameter Name Description
auth_record

Name of the authentication method to alter.

Type: VARCHAR

ENABLE | DISABLE

Enable or disable the specified authentication method.

Default: Enabled

When you perform an upgrade and use Kerberos authentication, you must manually set the authentication to ENABLE as it is disabled by default.

LOCAL | HOST [ { TLS | NO TLS } ] host_ip_address

Specify that the authentication method applies to local or remote (HOST) connections.

For authentication methods that use LDAP, specify whether or not LDAP uses Transport Layer Security (TLS).

For remote (HOST) connections, you must specify the IP address of the host from which the user or application is connecting, VARCHAR.

Vertica supports IPv4 and IPv6 addresses.

RENAME TO new_auth_record_name

Rename the authentication record.

Type: VARCHAR

METHOD value The authentication method you are altering.
SET param=value Set a parameter name and value for the authentication method that you are creating. This is required for LDAP, Ident, and OAuth authentication methods.
PRIORITY value

If the user is associated with multiple authentication methods, the priority value specifies which authentication method Vertica tries first.

Default: 0

Type: INTEGER

Greater values indicate higher priorities. For example, a priority of 10 is higher than a priority of 5; priority 0 is the lowest possible value.

For details, see Authentication record priority.

[ [ NO ] FALLTHROUGH ] Specifies whether to enable authentication fallthrough. For details, see Client authentication.

Privileges

Superuser

Examples

Enabling and Disabling Authentication Methods

This example uses ALTER AUTHENTICATION to disable the v_ldap authentication method and then enable it again:

=> ALTER AUTHENTICATION v_ldap DISABLE;
=> ALTER AUTHENTICATION v_ldap ENABLE;

Renaming Authentication Methods

This example renames the v_kerberos authentication method to K5. All users who have been granted the v_kerberos authentication method now have the K5 method granted instead.

=> ALTER AUTHENTICATION v_kerberos RENAME TO K5;

Modifying Authentication Parameters

This example sets the system user for ident1 authentication to user1:

=> CREATE AUTHENTICATION ident1 METHOD 'ident' LOCAL;
=> ALTER AUTHENTICATION ident1 SET system_users='user1';

When you set or modify LDAP or Ident parameters using ALTER AUTHENTICATION, Vertica validates them.

This example changes the IP address and specifies the parameters for an LDAP authentication method named Ldap1. Specify the bind parameters for the LDAP server. Vertica connects to the LDAP server, which authenticates the database client. If authentication succeeds, Vertica authenticates any users who have been associated with (granted) the Ldap1 authentication method on the designated LDAP server:

=> CREATE AUTHENTICATION Ldap1 METHOD 'ldap' HOST '172.16.65.196';

=> ALTER AUTHENTICATION Ldap1 SET host='ldap://172.16.65.177',
   binddn_prefix='cn=', binddn_suffix=',dc=qa_domain,dc=com';

The next example specifies the parameters for an LDAP authentication method named Ldap2. Specify the LDAP search and bind parameters. Sometimes, Vertica does not have enough information to create the distinguished name (DN) for a user attempting to authenticate. In such cases, you must specify to use LDAP search and bind:

=> CREATE AUTHENTICATION Ldap2 METHOD 'ldap' HOST '172.16.65.196';
=> ALTER AUTHENTICATION Ldap2 SET basedn='dc=qa_domain,dc=com',
   binddn='cn=Manager,dc=qa_domain,
   dc=com',search_attribute='cn',bind_password='secret';

Changing the Authentication Method

This example changes the localpwd authentication from hash to trust:

=> CREATE AUTHENTICATION localpwd METHOD 'hash' LOCAL;
=> ALTER AUTHENTICATION localpwd METHOD 'trust';

Set Multiple Realms

This example sets another realm for the authentication method krb_local:


=> ALTER AUTHENTICATION krb_local set realm = 'COMPANY.COM';

2.3 - ALTER CA BUNDLE

Adds and removes certificates from or changes the owner of a certificate authority (CA) bundle.

Syntax

ALTER CA BUNDLE name
        [ADD CERTIFICATES ca_cert[, ca_cert[, ...]]
        [REMOVE CERTIFICATES ca_cert[, ca_cert[, ...]]
        [OWNER TO user]

Parameters

name
The name of the CA bundle.
ca_cert
The name of the CA certificate to add or remove from the bundle.
user
The name of a database user.

Privileges

Ownership of the CA bundle.

Examples

See Managing CA bundles.
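
A minimal sketch, assuming a CA bundle named ca_bundle, a CA certificate named ca_cert2, and a user named operator_user already exist (all names are illustrative):

-- hypothetical bundle, certificate, and user names
=> ALTER CA BUNDLE ca_bundle ADD CERTIFICATES ca_cert2;
=> ALTER CA BUNDLE ca_bundle OWNER TO operator_user;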

2.4 - ALTER DATA LOADER

Changes the properties of a data loader created with CREATE DATA LOADER.

Syntax

ALTER DATA LOADER [schema.]name {
   SET TO copy-statement 
 | RETRY LIMIT { NONE | DEFAULT | limit }
 | RETENTION INTERVAL monitoring-retention
 | RENAME TO new-name 
 | TRIGGERED BY integration-json
 | DROP TRIGGER [IF EXISTS]
 | OWNER TO user
 | ENABLE | DISABLE
}

Arguments

schema
Schema containing the data loader. The default schema is public.
name
Name of the data loader to alter.
SET TO copy-statement
The new COPY statement that the loader executes. The FROM clause typically uses a glob.
RETRY LIMIT { NONE | DEFAULT | limit }
Maximum number of times to retry a failing file. Each time the data loader is executed, it attempts to load all files that have not yet been successfully loaded, up to this per-file limit. If set to DEFAULT, at load time the loader uses the value of the DataLoaderDefaultRetryLimit configuration parameter.

Default: DEFAULT

RETENTION INTERVAL monitoring-retention
How long to keep records in the events table. DATA_LOADER_EVENTS records events for all data loaders, but each data loader has its own retention interval.

Default: 14 days

RENAME TO new-name
New name for the data loader.
TRIGGERED BY integration-json
New external trigger for the data loader. For details on the JSON fields, see CREATE DATA LOADER. Before altering the trigger you must disable the data loader using DISABLE. After making the change, enable it again using ENABLE.
DROP TRIGGER [IF EXISTS]
Remove the trigger from this loader. You must first disable the data loader using DISABLE. After making the change, enable it again using ENABLE.

If the data loader does not have a trigger and you do not include IF EXISTS, the statement returns an error.

OWNER TO user
New owner for the data loader.
ENABLE | DISABLE
Whether the data loader can be executed. Calling EXECUTE DATA LOADER on a disabled data loader returns an error. When created, data loaders with triggers are disabled by default; all others are enabled by default. If a data loader has a trigger, you must disable it before dropping it or altering the trigger.

If a data loader is disabled while it is executing, Vertica finishes the current execution.

Privileges

Non-superuser:

  • USAGE on the schema.
  • Owner or ALTER privilege on the data loader.
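
Examples

A minimal sketch, assuming a data loader named orders_loader was previously created with CREATE DATA LOADER; the loader name and settings are illustrative:

-- orders_loader is a hypothetical data loader
=> ALTER DATA LOADER orders_loader RETRY LIMIT 3;
=> ALTER DATA LOADER orders_loader RENAME TO sales_orders_loader;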

2.5 - ALTER DATABASE

Use ALTER DATABASE to perform the following tasks:

  • Drop all fault groups defined on the database.

  • Set the network used for importing and exporting data.

  • Restore down nodes and revert their replacements to standby status (Enterprise Mode).

  • Set or clear database-level configuration parameters.

To see the current value of a parameter, query system table CONFIGURATION_PARAMETERS or use SHOW DATABASE.

Syntax

ALTER DATABASE db-spec {
      DROP ALL FAULT GROUP
      | EXPORT ON { subnet-name | DEFAULT }
      | RESET STANDBY
      | SET [PARAMETER] parameter=value [,...]
      | CLEAR [PARAMETER] parameter[,...]
}

Parameters

db-spec
Specifies the database to alter, one of the following:
  • The database name

  • DEFAULT: The current database

DROP ALL FAULT GROUP
Drops all fault groups defined on the specified database.
EXPORT ON
Specifies the network to use for importing and exporting data, one of the following:
  • subnet-name: A subnet of the public network.

  • DEFAULT: Specifies to use a private network.

For details, see Identify the database or nodes used for import/export, and Changing node export addresses.

RESET STANDBY
Enterprise Mode only, restores all down nodes and reverts their replacement nodes to standby status. If any replaced nodes cannot resume activity, Vertica leaves their standby nodes in place.
SET [PARAMETER]
Sets the specified parameters.
CLEAR [PARAMETER]
Resets the specified parameters to their default values.

Privileges

Superuser
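
Examples

A minimal sketch of setting and then clearing a database-level configuration parameter; MaxClientSessions is used here only as an illustrative parameter:

-- set a parameter on the current database, then restore its default
=> ALTER DATABASE DEFAULT SET MaxClientSessions = 100;
=> ALTER DATABASE DEFAULT CLEAR MaxClientSessions;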

2.6 - ALTER FAULT GROUP

Modifies an existing fault group. ALTER FAULT GROUP can perform the following tasks:

  • Add a node to or drop a node from an existing fault group.

  • Add a child fault group to or drop a child fault group from a parent fault group.

  • Rename a fault group.

Syntax

ALTER FAULT GROUP fault-group-name {
    | ADD NODE node-name
    | DROP NODE node-name
    | ADD FAULT GROUP child-fault-group-name
    | DROP FAULT GROUP child-fault-group-name
    | RENAME TO new-fault-group-name }

Parameters

fault-group-name
The existing fault group name you want to modify.
node-name
The node name you want to add to or drop from the existing (parent) fault group.
child-fault-group-name
The name of the child fault group you want to add to or remove from an existing parent fault group.
new-fault-group-name
The new name for the fault group you want to rename.

Privileges

Superuser

Examples

This example shows how to rename the parent0 fault group to parent100:

=> ALTER FAULT GROUP parent0 RENAME TO parent100;
ALTER FAULT GROUP

Verify the change by querying the FAULT_GROUPS system table:

=> SELECT member_name FROM fault_groups;
   member_name
----------------------
v_exampledb_node0003
parent100
mygroup
(3 rows)
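
As a further sketch, add a node to the renamed fault group; the node name v_exampledb_node0004 is hypothetical:

-- hypothetical node name
=> ALTER FAULT GROUP parent100 ADD NODE v_exampledb_node0004;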

2.7 - ALTER FUNCTION statements

Vertica provides ALTER statements for each type of user-defined extension. Each ALTER statement modifies the metadata of a user-defined function in the Vertica catalog:

ALTER statement Extension
ALTER FUNCTION (scalar) User-defined scalar functions (UDSFs)
ALTER AGGREGATE FUNCTION User-defined aggregate functions (UDAFs)
ALTER ANALYTIC FUNCTION User-defined analytic functions (UDAnF)
ALTER TRANSFORM FUNCTION User-defined transform functions (UDTFs)
ALTER statements for user-defined load:
ALTER SOURCE Load source functions
ALTER FILTER Load filter functions
ALTER PARSER Load parser functions

Vertica also provides ALTER FUNCTION (SQL), which modifies the metadata of a user-defined SQL function.

2.7.1 - ALTER AGGREGATE FUNCTION

Alters a user-defined aggregate function.

Syntax

ALTER AGGREGATE FUNCTION [[db-name.]schema.]function-name( [ parameter-list ] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema
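
Examples

A minimal sketch, assuming a user-defined aggregate function ag_avg(numeric) exists; the name and signature are illustrative:

-- hypothetical UDAF name and signature
=> ALTER AGGREGATE FUNCTION ag_avg(numeric) RENAME TO agg_average;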

See also

CREATE AGGREGATE FUNCTION

2.7.2 - ALTER ANALYTIC FUNCTION

Alters a user-defined analytic function.

Syntax

ALTER ANALYTIC FUNCTION [[db-name.]schema.]function-name( [ parameter-list ] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET FENCED boolean-expr
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema
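
Examples

A minimal sketch, assuming a user-defined analytic function an_rank() and a user analytics_user exist; both names are illustrative:

-- hypothetical UDAnF and user names
=> ALTER ANALYTIC FUNCTION an_rank() OWNER TO analytics_user;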

See also

CREATE ANALYTIC FUNCTION

2.7.3 - ALTER FILTER

Alters a user-defined filter.

Syntax

ALTER FILTER [[db-name.]schema.]function-name( [ parameter-list ] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET FENCED boolean-expr
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema
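
Examples

A minimal sketch, assuming a user-defined load filter named myfilter and a schema named load_filters exist; both names are illustrative:

-- hypothetical filter and schema names
=> ALTER FILTER myfilter() SET SCHEMA load_filters;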

See also

CREATE FILTER

2.7.4 - ALTER FUNCTION (scalar)

Alters a user-defined scalar function.

Syntax

ALTER FUNCTION [[db-name.]schema.]function-name( [ parameter-list] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET FENCED boolean-expr
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema

Examples

Rename function UDF_one to UDF_two:

=> ALTER FUNCTION UDF_one (int, int) RENAME TO UDF_two;

Move function UDF_two to schema macros:

=> ALTER FUNCTION UDF_two (int, int) SET SCHEMA macros;

Disable fenced mode for function UDF_two:

=> ALTER FUNCTION UDF_two (int, int) SET FENCED false;

See also

CREATE FUNCTION (scalar)

2.7.5 - ALTER FUNCTION (SQL)

Alters a user-defined SQL function.

Syntax

ALTER FUNCTION [[db-name.]schema.]function-name( [arg-list] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function-name
The name of the SQL function to alter.
arg-list
A comma-delimited list of function argument names. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema

Examples

Rename function SQL_one to SQL_two:

=> ALTER FUNCTION SQL_one (int, int) RENAME TO SQL_two;

Move function SQL_two to schema macros:

=> ALTER FUNCTION SQL_two (int, int) SET SCHEMA macros;

Reassign ownership of SQL_two:

=> ALTER FUNCTION SQL_two (int, int) OWNER TO user1;

2.7.6 - ALTER PARSER

Alters a user-defined parser.

Syntax

ALTER PARSER [[db-name.]schema.]function-name( [ parameter-list ] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET FENCED boolean-expr
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema
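
Examples

A minimal sketch, assuming a user-defined parser named myparser exists; the name is illustrative:

-- hypothetical parser name
=> ALTER PARSER myparser() SET FENCED true;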

See also

CREATE PARSER

2.7.7 - ALTER SOURCE

Alters a user-defined load source function.

Syntax

ALTER SOURCE [[db-name.]schema.]function-name( [ parameter-list ] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET FENCED boolean-expr
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema
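
Examples

A minimal sketch, assuming a user-defined load source named mysource and a user etl_user exist; both names are illustrative:

-- hypothetical source and user names
=> ALTER SOURCE mysource() OWNER TO etl_user;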

See also

CREATE SOURCE

2.7.8 - ALTER TRANSFORM FUNCTION

Alters a user-defined transform function.

Syntax

ALTER TRANSFORM FUNCTION [[db-name.]schema.]function-name( [ parameter-list ] ) {
    OWNER TO new-owner
    | RENAME TO new-name
    | SET FENCED { true | false }
    | SET SCHEMA new-schema
}

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename function) CREATE, USAGE
SET SCHEMA (move function to another schema)
  • CREATE: destination schema

  • USAGE: current schema
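
Examples

A minimal sketch, assuming a user-defined transform function mytransform(varchar) exists; the name and signature are illustrative:

-- hypothetical UDTF name and signature
=> ALTER TRANSFORM FUNCTION mytransform(varchar) SET FENCED false;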

See also

CREATE TRANSFORM FUNCTION

2.8 - ALTER HCATALOG SCHEMA

Alters parameter values on a schema that was created with CREATE HCATALOG SCHEMA. HCatalog schemas are used by the HCatalog Connector to access data stored in a Hive data warehouse. For more information, see Using the HCatalog Connector.

Some parameters cannot be altered after creation. If you need to change one of those values, delete and recreate the schema instead. You can use ALTER HCATALOG SCHEMA to change the following parameters:

  • HOSTNAME

  • PORT

  • HIVESERVER2_HOSTNAME

  • WEBSERVICE_HOSTNAME

  • WEBSERVICE_PORT

  • WEBHDFS_ADDRESS

  • HCATALOG_CONNECTION_TIMEOUT

  • HCATALOG_SLOW_TRANSFER_LIMIT

  • HCATALOG_SLOW_TRANSFER_TIME

  • SSL_CONFIG

  • CUSTOM_PARTITIONS

Syntax

ALTER HCATALOG SCHEMA schema-name SET [param=value]+;

Parameters

Parameter Description
schema-name The name of the schema in the Vertica catalog to alter. The tables in the Hive database are available through this schema.
param The name of the parameter to alter.
value The new value for the parameter. You must specify a value; this statement does not read default values from configuration files like CREATE HCATALOG SCHEMA.

Privileges

One of the following:

  • Superuser

  • Schema owner

Examples

The following example shows how to change the Hive metastore hostname and port for the "hcat" schema. In this example, Hive uses a High Availability metastore.

=> ALTER HCATALOG SCHEMA hcat SET HOSTNAME='thrift://ms1.example.com:9083,thrift://ms2.example.com:9083';

The following example shows the error you receive if you try to set an unalterable parameter.

=> ALTER HCATALOG SCHEMA hcat SET HCATALOG_USER='admin';
   ERROR 4856: Syntax error at or near "HCATALOG_USER" at character 39

2.9 - ALTER LIBRARY

Replaces the library file that is currently associated with a UDx library in the Vertica catalog. Vertica automatically distributes copies of the updated file to all cluster nodes. UDxs defined in the catalog that reference the updated library automatically start using the updated library file. A UDx is considered to be the same if its name and signature match.

The current and replacement libraries must be written in the same language.

Syntax

ALTER LIBRARY [[database.]schema.]name [DEPENDS 'depends-path'] AS 'path';

Arguments

schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

name
The name of an existing library created with CREATE LIBRARY.
DEPENDS 'depends-path'

Files or libraries on which this library depends, one or more files or directories on the initiator node file system or other supported file systems or object stores. For a directory, end the path entry with a slash (/), optionally followed by a wildcard (*). To specify more than one file, separate entries with colons (:).

If any path entry contains colons, such as a URI, place brackets around the entire DEPENDS path and use double quotes for the individual path elements, as in the following example:

DEPENDS '["s3://mybucket/gson-2.3.1.jar"]'

To specify libraries with multiple directory levels, see Multi-level Library Dependencies.

DEPENDS has no effect for libraries written in R. R packages must be installed locally on each node, including external dependencies.

AS path
The absolute path on the initiator node file system of the replacement library file.

Privileges

Superuser, or UDXDEVELOPER and CREATE on the schema. Non-superusers must explicitly enable the UDXDEVELOPER role. See CREATE LIBRARY for examples.

Multi-level library dependencies

If a DEPENDS clause specifies a library with multiple directory levels, Vertica follows the library path to include all subdirectories of that library. For example, the following CREATE LIBRARY statement enables the UDx library mylib to import all Python packages and modules that it finds in subdirectories of site-packages:

=> CREATE LIBRARY mylib AS '/path/to/python_udx' DEPENDS '/path/to/python/site-packages' LANGUAGE 'Python';

Examples

This example shows how to update an already-defined library named myFunctions with a new file.

=> ALTER LIBRARY myFunctions AS '/home/dbadmin/my_new_functions.so';
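
As a further sketch, the same statement with a DEPENDS clause; the dependency directory is illustrative:

-- hypothetical dependency directory
=> ALTER LIBRARY myFunctions DEPENDS '/home/dbadmin/my_deps/*' AS '/home/dbadmin/my_new_functions.so';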

See also

Developing user-defined extensions (UDxs)

2.10 - ALTER LOAD BALANCE GROUP

Changes the configuration of a load balance group.

Syntax

ALTER LOAD BALANCE GROUP group-name {
    RENAME TO new-name |
    SET FILTER TO 'ip-cidr-addr' |
    SET POLICY TO 'policy' |
    ADD {ADDRESS | FAULT GROUP | SUBCLUSTER} add-list |
    DROP  {ADDRESS | FAULT GROUP | SUBCLUSTER} drop-list
}

Parameters

group-name
Name of an existing load balance group to change.
RENAME TO new-name
Renames the group to new-name.
SET FILTER TO 'ip-cidr-addr'
An IPv4 or IPv6 CIDR to replace the existing IP address filter that selects which members of a fault group or subcluster to include in the load balance group. This setting is only valid if the load balance group contains fault groups or subclusters.
SET POLICY TO 'policy'
Changes the policy the load balance group uses to select the target node for the incoming connection. One of:
  • ROUNDROBIN

  • RANDOM

  • NONE

See CREATE LOAD BALANCE GROUP for details.

ADD {ADDRESS | FAULT GROUP | SUBCLUSTER }
Adds objects of the specified type to the load balance group. Load balance groups can only contain one type of object. For example, if you created the load balance group using a list of addresses, you can only add additional addresses, not fault groups or subclusters.
add-list
A comma-delimited list of objects (addresses, fault groups, or subclusters) to add to the load balance group.
DROP {ADDRESS | FAULT GROUP | SUBCLUSTER}
Removes objects of the specified type from the load balance group (addresses, fault groups, or subclusters). The object type must match the type of the objects already in the load balance group.
drop-list
The list of objects to remove from the load balance group.

Privileges

Superuser

Examples

Remove an address from the load balance group named group_2.

=> SELECT * FROM LOAD_BALANCE_GROUPS;
  name   |   policy   | filter |         type          | object_name
---------+------------+--------+-----------------------+-------------
 group_1 | ROUNDROBIN |        | Network Address Group | node01
 group_1 | ROUNDROBIN |        | Network Address Group | node02
 group_2 | ROUNDROBIN |        | Network Address Group | node03
(3 rows)

=> ALTER LOAD BALANCE GROUP group_2 DROP ADDRESS node03;
ALTER LOAD BALANCE GROUP

=> SELECT * FROM LOAD_BALANCE_GROUPS;
  name   |   policy   | filter |         type          | object_name
---------+------------+--------+-----------------------+-------------
 group_1 | ROUNDROBIN |        | Network Address Group | node01
 group_1 | ROUNDROBIN |        | Network Address Group | node02
 group_2 | ROUNDROBIN |        | Empty Group           |
(3 rows)

The following example adds three network addresses to the group named group_2:

=> ALTER LOAD BALANCE GROUP group_2 ADD ADDRESS node01,node02,node03;
ALTER LOAD BALANCE GROUP
=> SELECT * FROM load_balance_groups WHERE name = 'group_2';
-[ RECORD 1 ]----------------------
name        | group_2
policy      | ROUNDROBIN
filter      |
type        | Network Address Group
object_name | node01
-[ RECORD 2 ]----------------------
name        | group_2
policy      | ROUNDROBIN
filter      |
type        | Network Address Group
object_name | node02
-[ RECORD 3 ]----------------------
name        | group_2
policy      | ROUNDROBIN
filter      |
type        | Network Address Group
object_name | node03

2.11 - ALTER MODEL

Allows users to rename an existing model, change ownership, or move it to another schema.

Syntax

ALTER MODEL [[database.]schema.]model
    { OWNER TO owner
    | RENAME TO new-name
    | SET SCHEMA schema
    | { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

model
Identifies the model to alter.
OWNER TO owner
Reassigns ownership of this model to owner. If a non-superuser, you must be the current owner.
RENAME TO
Renames the model, where new-name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
SET SCHEMA schema
Moves the model from one schema to another. If privilege inheritance is enabled with INCLUDE SCHEMA PRIVILEGES, the model inherits the privileges of its new parent schema.
[ { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES ]
The INCLUDE and EXCLUDE parameters determine whether the model inherits the privileges of its parent schema, overriding the schema-level setting:
  • INCLUDE SCHEMA PRIVILEGES: The model inherits the privileges of its parent schema. This parameter has no effect while privilege inheritance is disabled at the database level with DisableInheritedPrivileges.

  • EXCLUDE SCHEMA PRIVILEGES: The model does not inherit the privileges of its parent schema.

The MATERIALIZE parameter converts inherited privileges into explicit grants on the model. If privilege inheritance is disabled at the database level with DisableInheritedPrivileges, MATERIALIZE converts the set of inherited privileges that would be on the model if privilege inheritance was enabled.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Model owner

  • ALTER privilege on the model

For certain operations, non-superusers must have the following schema privileges:

Operation Schema privileges required
RENAME TO (rename model) CREATE, USAGE
SET SCHEMA (move model to another schema)
  • CREATE: destination schema

  • USAGE: current schema

Examples

To move a model to another schema:

=> ALTER MODEL my_kmeans_model SET SCHEMA clustering_models;
ALTER MODEL

To rename a model:

=> ALTER MODEL my_kmeans_model RENAME TO kmeans_model;

To change the owner of the model:

=> ALTER MODEL kmeans_model OWNER TO analytics_user;

2.12 - ALTER NETWORK ADDRESS

Changes the configuration of an existing network address.

Syntax

ALTER NETWORK ADDRESS name {
    RENAME TO new-name
    | SET TO 'ip-addr' [PORT port-number]
    | { ENABLE | DISABLE }
    }

Parameters

name
Name of an existing network address to change.
RENAME TO new-name
Renames the network address to new-name. This name change has no effect on the network address's membership in load balance groups.
SET TO 'ip-addr'
Changes the IP address assigned to the network address.
PORT port-number
Sets the port number for the network address. You must supply a network address when altering the port number.
ENABLE | DISABLE
Enables or disables the network address.

Examples

Rename the network address from test_addr to alt_node1, then change its IP address to 192.168.1.200 with port number 4000:

=> ALTER NETWORK ADDRESS test_addr RENAME TO alt_node1;
ALTER NETWORK ADDRESS
=> ALTER NETWORK ADDRESS alt_node1 SET TO '192.168.1.200' PORT 4000;
ALTER NETWORK ADDRESS

2.13 - ALTER NETWORK INTERFACE

This statement has been deprecated.

Renames a network interface.

Syntax

ALTER NETWORK INTERFACE network-interface-name RENAME TO new-network-interface-name

Parameters

network-interface-name
The name of the existing network interface.
new-network-interface-name
The new name for the network interface.

Privileges

Superuser

Examples

Rename a network interface:

=> ALTER NETWORK INTERFACE myNetwork RENAME TO myNewNetwork;

2.14 - ALTER NODE

Sets and clears node-level configuration parameters on the specified node. ALTER NODE also performs the following management tasks:

  • Changes the node type.

  • Specifies the network interface of the public network on individual nodes that are used for import and export.

  • Replaces a down node.

For information about removing a node, see

Syntax

ALTER NODE node-name {
    EXPORT ON { network-interface | DEFAULT }
    | [IS] node-type
    | REPLACE [ WITH standby-node ]
    | RESET
    | SET [PARAMETER] parameter=value[,...]
    | CLEAR [PARAMETER] parameter[,...]
}

Parameters

node-name
The name of the node to alter.
[IS] node-type
Changes the node type, where node-type is one of the following:
  • PERMANENT (default): A node that stores data.

  • EPHEMERAL: A node that is in transition from one type to another—typically, from PERMANENT to either STANDBY or EXECUTE.

  • STANDBY: A node that is reserved to replace any node when it goes down. A standby node stores no segments or data until it is called to replace a down node. When used as a replacement node, Vertica changes its type to PERMANENT. For more information, see Active standby nodes.

  • EXECUTE: A node that is reserved for computation purposes only. An execute node contains no segments or data.

EXPORT ON
Specifies the network to use for importing and exporting data, one of the following:
  • network-interface: The name of a network interface of the public network.

  • DEFAULT: Use the default network interface of the public network, as specified by ALTER DATABASE.

REPLACE [WITH standby-node]
Enterprise Mode only, replaces the specified node with an available active standby node. If you omit the WITH clause, Vertica tries to find a replacement node from the same fault group as the down node.

If you specify a node that is not down, Vertica ignores this statement.

RESET
Enterprise Mode only, restores the specified down node and returns its replacement to standby status. If the down node cannot resume activity, Vertica ignores this statement and leaves the standby node in place.
SET [PARAMETER]
Sets one or more configuration parameters to the specified value at the node level.
CLEAR [PARAMETER]
Clears one or more specified configuration parameters.

Privileges

Superuser

Examples

Specify to use the default network interface of public network on v_vmart_node0001 for import/export operations:

=> ALTER NODE v_vmart_node0001 EXPORT ON DEFAULT;

Replace down node v_vmart_node0001 with an active standby node, then restore it:

=> ALTER NODE v_vmart_node0001 REPLACE WITH standby1;
...
=> ALTER NODE v_vmart_node0001 RESET;

Set and clear configuration parameter MaxClientSessions:

=> ALTER NODE v_vmart_node0001 SET MaxClientSessions = 0;
...
=> ALTER NODE v_vmart_node0001 CLEAR MaxClientSessions;

Set the node type as EPHEMERAL:

=> ALTER NODE v_vmart_node0001 IS EPHEMERAL;

2.15 - ALTER NOTIFIER

Updates an existing notifier.

Syntax

ALTER NOTIFIER notifier-name
    [ ENABLE | DISABLE ]
    [ MAXPAYLOAD 'max-payload-size' ]
    [ MAXMEMORYSIZE 'max-memory-size' ]
    [ TLS CONFIGURATION tls-configuration ]
    [ TLSMODE 'tls-mode' ]
    [ CA BUNDLE bundle-name [ CERTIFICATE certificate-name ] ]
    [ IDENTIFIED BY 'uuid' ]
    [ [NO] CHECK COMMITTED ]
    [ PARAMETERS 'adapter-params' ]

Parameters

notifier-name
Specifies the notifier to update.
[NO] CHECK COMMITTED
Specifies to wait for delivery confirmation before sending the next message in the queue. Not all messaging systems support delivery confirmation.
ENABLE | DISABLE
Specifies whether to enable or disable the notifier.
MAXPAYLOAD
The maximum size of the message, up to 2 TB, specified in kilobytes, megabytes, gigabytes, or terabytes as follows:
MAXPAYLOAD integer{K|M|G|T}

The default setting is adapter-specific—for example, 1 M for Kafka.

Changes to this parameter take effect either after the notifier is disabled and reenabled or after the database restarts.

MAXMEMORYSIZE
The maximum size of the internal notifier, up to 2 TB, specified in kilobytes, megabytes, gigabytes, or terabytes as follows:
MAXMEMORYSIZE integer{K|M|G|T}

If the queue exceeds this size, the notifier drops excess messages.

TLS CONFIGURATION tls-configuration

The TLS CONFIGURATION to use for TLS.

Notifiers support the following TLS modes:

  • DISABLE

  • TRY_VERIFY (behaves like VERIFY_CA)

  • VERIFY_CA

  • VERIFY_FULL

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

TLSMODE 'tls-mode'

Specifies the type of connection between the notifier and an endpoint, one of the following:

  • disable (default): Plaintext connection.

  • verify-ca: Encrypted connection, and the server's certificate is verified as being signed by a trusted CA.

If you set this parameter to verify-ca, the generated TLS Configuration will be set to TRY_VERIFY, which has the same behavior as VERIFY_CA.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

CA BUNDLE bundle-name

Specifies a CA bundle. The certificates inside the bundle are used to validate the Kafka server's certificate if the TLSMODE requires it.

If a CA bundle is specified for a notifier that currently uses disable, which doesn't validate the Kafka server's certificate, the bundle will go unused when connecting to the Kafka server. This behavior persists unless the TLSMODE is changed to one that validates server certificates.

Changes to contents of the CA bundle take effect either after the notifier is disabled and re-enabled or after the database restarts. However, changes to which CA bundle the notifier uses takes effect immediately.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

CERTIFICATE certificate-name

Specifies a client certificate for validation by the endpoint.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

IDENTIFIED BY 'uuid'
Specifies the notifier's unique identifier. If set, all the messages published by this notifier have this attribute.
PARAMETERS 'adapter-params'
Specifies one or more optional adapter parameters that are passed as a string to the adapter. Adapter parameters apply only to the adapter associated with the notifier.

Changes to this parameter take effect either after the notifier is disabled and reenabled or after the database restarts.

For Kafka notifiers, refer to Kafka and Vertica configuration settings.

Privileges

Superuser

Encrypted notifiers for SASL_SSL Kafka configurations

Follow this procedure to create or alter notifiers for Kafka endpoints that use SASL_SSL. Note that you must repeat this procedure whenever you change the TLSMODE, certificates, or CA bundle for a given notifier.

  1. Create a TLS Configuration with the desired TLS mode, certificate, and CA certificates.

  2. Use CREATE or ALTER to disable the notifier and set the TLS Configuration:

    => ALTER NOTIFIER encrypted_notifier
        DISABLE
        TLS CONFIGURATION kafka_tls_config;
    
  3. ALTER the notifier and set the proper rdkafka adapter parameters for SASL_SSL:

    => ALTER NOTIFIER encrypted_notifier PARAMETERS
        'sasl.username=user;sasl.password=password;sasl.mechanism=PLAIN;security.protocol=SASL_SSL';
    
  4. Enable the notifier:

    => ALTER NOTIFIER encrypted_notifier ENABLE;
    

Examples

Update the settings on an existing notifier:

=> ALTER NOTIFIER my_dc_notifier
    ENABLE
    MAXMEMORYSIZE '2G'
    IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
    CHECK COMMITTED;

Add a TLS Configuration to a notifier. To create a custom TLS Configuration, see TLS configurations:

=> ALTER NOTIFIER my_notifier TLS CONFIGURATION notifier_tls_config;

2.16 - ALTER PROCEDURE (stored)

Alters a stored procedure, retaining any existing grants.

Syntax

ALTER PROCEDURE procedure ( [ [ parameter_mode ] [ parameter ] parameter_type [, ...] ] )
    [ SECURITY { INVOKER | DEFINER }
      | RENAME TO new_procedure_name
      | OWNER TO new_owner
      | SET SCHEMA new_schema
      | SOURCE TO new_source
    ]

Parameters

procedure
The procedure to alter.
parameter_mode
The IN and INOUT parameters of the stored procedure.
parameter
The name of the parameter.
parameter_type
The type of the parameter.
SECURITY { INVOKER | DEFINER }
Specifies whether to execute the procedure with the privileges of the invoker or its definer (owner).

For details, see Executing stored procedures.

RENAME TO new_procedure_name
The new name for the procedure.
OWNER TO new_owner
The new owner (definer) of the procedure.
SET SCHEMA new_schema
The new schema of the procedure.
SOURCE TO new_source
The new procedure source code. For details, see Scope and structure.

Privileges

OWNER TO

Superuser

RENAME TO and SET SCHEMA

Non-superuser:

  • CREATE on the procedure's schema

  • Ownership of the procedure

Other operations

Non-superuser: Ownership of the procedure

Examples

See Altering stored procedures.
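
A minimal sketch, assuming a stored procedure raiseXY(IN x INT, IN y VARCHAR) exists; the procedure name and signature are illustrative:

-- hypothetical procedure name and signature
=> ALTER PROCEDURE raiseXY(INT, VARCHAR) SECURITY DEFINER;
=> ALTER PROCEDURE raiseXY(INT, VARCHAR) RENAME TO raise_xy;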

2.17 - ALTER PROFILE

Changes a profile. All parameters that are not set in a profile inherit their setting from the default profile. You can use ALTER PROFILE to change the default profile.

Syntax

ALTER PROFILE name LIMIT [
    PASSWORD_LIFE_TIME setting
    PASSWORD_MIN_LIFE_TIME setting
    PASSWORD_GRACE_TIME setting
    FAILED_LOGIN_ATTEMPTS setting
    PASSWORD_LOCK_TIME setting
    PASSWORD_REUSE_MAX setting
    PASSWORD_REUSE_TIME setting
    PASSWORD_MAX_LENGTH setting
    PASSWORD_MIN_LENGTH setting
    PASSWORD_MIN_LETTERS setting
    PASSWORD_MIN_UPPERCASE_LETTERS setting
    PASSWORD_MIN_LOWERCASE_LETTERS setting
    PASSWORD_MIN_DIGITS setting
    PASSWORD_MIN_SYMBOLS setting
    PASSWORD_MIN_CHAR_CHANGE setting ]

Parameters

Name Description
name

The name of the profile to alter, where name conforms to conventions described in Identifiers.

To modify the default profile, set name to default. For example:

ALTER PROFILE DEFAULT LIMIT PASSWORD_MIN_SYMBOLS 1;

PASSWORD_LIFE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days a password remains valid.

  • UNLIMITED: Password remains valid indefinitely.

After your password's lifetime and grace period expire, you must change your password on your next login, if you have not done so already.

PASSWORD_MIN_LIFE_TIME

Set to an integer value, one of the following:

  • Default: 0

  • ≥ 1: The number of days a password must be set before it can be changed

  • UNLIMITED: Password can be reset at any time.

PASSWORD_GRACE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days a password can be used after it expires.

  • UNLIMITED: No grace period.

FAILED_LOGIN_ATTEMPTS

Set to an integer value, one of the following:

  • ≥ 1: The number of consecutive failed login attempts Vertica allows before locking your account.

  • UNLIMITED: Vertica allows an unlimited number of failed login attempts.

PASSWORD_LOCK_TIME
  • ≥ 1: The number of days (units configurable with PasswordLockTimeUnit) a user's account is locked after FAILED_LOGIN_ATTEMPTS number of login attempts. The account is automatically unlocked when the lock time elapses.

  • UNLIMITED: Account remains indefinitely inaccessible until a superuser manually unlocks it.

PASSWORD_REUSE_MAX

Set to an integer value, one of the following:

  • ≥ 1: The number of times you must change your password before you can reuse an earlier password.

  • UNLIMITED: You can reuse an earlier password without any intervening changes.

PASSWORD_REUSE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days that must pass after a password is set before you can reuse it.

  • UNLIMITED: You can reuse an earlier password immediately.

PASSWORD_MAX_LENGTH

The maximum number of characters allowed in a password, one of the following:

  • Integer between 8 and 512, inclusive

PASSWORD_MIN_LENGTH

The minimum number of characters required in a password, one of the following:

  • 0 to PASSWORD_MAX_LENGTH

  • UNLIMITED: Minimum of PASSWORD_MAX_LENGTH

PASSWORD_MIN_LETTERS

Minimum number of letters (a-z and A-Z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_UPPERCASE_LETTERS

Minimum number of uppercase letters (A-Z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_LOWERCASE_LETTERS

Minimum number of lowercase letters (a-z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_DIGITS

Minimum number of digits (0-9) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_SYMBOLS

Minimum number of symbols—printable non-letter and non-digit characters such as $, #, @—that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_CHAR_CHANGE

Minimum number of characters that must be different from the previous password:

  • Default: 0

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

Privileges

Superuser

Profile settings and client authentication

The following profile settings affect client authentication methods, such as LDAP or GSS:

  • FAILED_LOGIN_ATTEMPTS

  • PASSWORD_LOCK_TIME

All other profile settings are used only by Vertica to manage its passwords.

Examples

ALTER PROFILE sample_profile LIMIT FAILED_LOGIN_ATTEMPTS 3;

2.18 - ALTER PROFILE RENAME

Renames an existing profile.

Syntax

ALTER PROFILE name RENAME TO new-name;

Parameters

name
The current name of the profile.
new-name
The new name for the profile.

Privileges

Superuser

Examples

This example shows how to rename an existing profile.

ALTER PROFILE sample_profile RENAME TO new_sample_profile;

2.19 - ALTER PROJECTION

Changes the DDL of the specified projection.

Syntax

ALTER PROJECTION [[database.]schema.]projection
   { RENAME TO new-name | ON PARTITION RANGE BETWEEN min-val AND max-val | { ENABLE | DISABLE } }
 

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

projection
The projection to change, where projection can be one of the following:
  • Projection base name: Rename all projections that share this base name.

  • Projection name: Rename the specified projection and its base name. If the projection is segmented, its buddies are unaffected by this change.

See Projection naming for projection name conventions.

RENAME TO new-name
The new projection name.
ON PARTITION RANGE

Limits projection data to a range of partition keys. The minimum value must be less than or equal to the maximum value.

Values can be NULL. A null minimum value indicates no lower bound and a null maximum value indicates no upper bound. If both are NULL, this statement drops the projection endpoints, producing a regular projection instead of a range projection.

If the new range of keys is outside the previous range, Vertica throws a warning that the projection is out of date and must be refreshed before it can be used.

For other requirements and usage details, see Partition range projections.

ENABLE | DISABLE
Specifies whether to mark this projection as unavailable for queries on its anchor table. If a projection is the queried table's only superprojection, attempts to disable it return with a rollback message. ENABLE restores the projection's availability to query planning. You can also mark a projection as unavailable for individual queries using the hint SKIP_PROJS.

Default: ENABLE

Privileges

Non-superuser, CREATE and USAGE on the schema and one of the following anchor table privileges:

Syntactic sugar

The statement

=> ALTER PROJECTION foo REMOVE PARTITION RANGE;

has the same effect as

=> ALTER PROJECTION foo ON PARTITION RANGE BETWEEN NULL AND NULL;

Examples

=> SELECT export_tables('','public.store_orders');

                export_tables
---------------------------------------------

CREATE TABLE public.store_orders
(
    order_no int,
    order_date timestamp NOT NULL,
    shipper varchar(20),
    ship_date date NOT NULL
);
(1 row)

=> CREATE PROJECTION store_orders_p AS SELECT * from store_orders;
CREATE PROJECTION
=> ALTER PROJECTION store_orders_p RENAME to store_orders_new;
ALTER PROJECTION
=> ALTER PROJECTION store_orders_new DISABLE;
=> SELECT * FROM store_orders_new;
ERROR 3586:  Insufficient projections to answer query
DETAIL:  No projections eligible to answer query
HINT:  Projection store_orders_new not used in the plan because the projection is disabled.
=> ALTER PROJECTION store_orders_new ENABLE;
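
A further sketch of ON PARTITION RANGE, assuming a projection proj_2024 whose anchor table is partitioned by a date column; both the projection and the partition keys are hypothetical:

-- hypothetical projection on a table partitioned by a date column
=> ALTER PROJECTION proj_2024 ON PARTITION RANGE BETWEEN '2024-01-01' AND NULL;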

See also

CREATE PROJECTION

2.20 - ALTER RESOURCE POOL

Modifies an existing resource pool by setting one or more parameters.

You can use ALTER RESOURCE POOL to modify some parameters in Vertica built-in resource pools. For details on default settings and restrictions, see Built-in resource pools configuration.

Syntax

ALTER RESOURCE POOL pool-name [ FOR subcluster ] parameter-name setting[...]

Arguments

pool-name
Name of the resource pool to modify.
FOR subcluster

Eon Mode only, the subcluster to associate with this resource pool, where subcluster is one of the following:

  • SUBCLUSTER subcluster-name: Resource pool for an existing subcluster. You cannot be connected to this subcluster; otherwise, Vertica returns an error.
  • CURRENT SUBCLUSTER: Resource pool for the subcluster that you are connected to.
parameter-name setting
A resource pool parameter and its new setting. To reset this parameter to its default value, specify DEFAULT.

If you specify a subcluster, you can alter only the MAXMEMORYSIZE, MAXQUERYMEMORYSIZE, and MEMORYSIZE parameters for built-in pools.

Parameters

CASCADE TO

Secondary resource pool for executing queries that exceed the RUNTIMECAP setting of their assigned resource pool:

CASCADE TO secondary-pool
CPUAFFINITYMODE

Specifies whether the resource pool has exclusive or shared use of the CPUs specified in CPUAFFINITYSET:

CPUAFFINITYMODE { SHARED | EXCLUSIVE | ANY }
  • SHARED: Queries that run in this resource pool share its CPUAFFINITYSET CPUs with other Vertica resource pools.
  • EXCLUSIVE: Dedicates CPUAFFINITYSET CPUs to this resource pool only, and excludes other Vertica resource pools. If CPUAFFINITYSET is set as a percentage, then that percentage of CPU resources available to Vertica is assigned solely for this resource pool.
  • ANY: Queries in this resource pool can run on any CPU. This setting is invalid if CPUAFFINITYSET designates CPU resources.

Default: ANY

CPUAFFINITYSET

CPUs available to this resource pool. All cluster nodes must have the same number of CPUs. The CPU resources assigned to this set are unavailable to general resource pools.

CPUAFFINITYSET {
  'cpu-index[,...]'
| 'cpu-indexi-cpu-indexn'
| 'integer%'
| NONE
}
  • cpu-index[,...]: Dedicates one or more comma-delimited CPUs to this resource pool.
  • cpu-indexi-cpu-indexn: Dedicates a range of contiguous CPU indexes i through n to this resource pool.
  • integer%: Percentage of all available CPUs to use for this resource pool. Vertica rounds this percentage down to include whole CPU units.
  • NONE (empty string): No affinity set is assigned to this resource pool. Queries associated with this pool are executed on any CPU.

Default: NONE

EXECUTIONPARALLELISM

Number of threads used to process any single query issued in this resource pool.

EXECUTIONPARALLELISM { limit | AUTO }
  • limit: An integer value between 1 and the number of cores. Setting this parameter to a reduced value increases throughput of short queries issued in the resource pool, especially if queries are executed concurrently.
  • AUTO or 0: Vertica calculates the setting from the number of cores, available memory, and amount of data in the system. Unless memory is limited, or the amount of data is very small, Vertica sets this parameter to the number of cores on the node.

Default: AUTO

MAXCONCURRENCY

Maximum number of concurrent execution slots available to the resource pool across the cluster:

MAXCONCURRENCY { integer | NONE }

NONE (empty string): Unlimited number of concurrent execution slots.

Default: NONE

MAXMEMORYSIZE

Maximum size per node the resource pool can grow by borrowing memory from the GENERAL pool:

MAXMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
  • NONE (empty string): Unlimited, resource pool can borrow any amount of available memory from the GENERAL pool.

Default: NONE

MAXQUERYMEMORYSIZE

Maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, Vertica stops execution and returns an error.

Set this parameter as follows:

MAXQUERYMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of MAXMEMORYSIZE for this resource pool.
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of MAXMEMORYSIZE.
  • NONE (empty string): Unlimited; resource pool can borrow any amount of available memory from the GENERAL pool, within the limits set by MAXMEMORYSIZE.

Default: NONE

MEMORYSIZE

Total per-node memory available to the Vertica resource manager that is allocated to this resource pool:

MEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes

Default: 0%. No memory allocated, the resource pool borrows memory from the GENERAL pool.

PLANNEDCONCURRENCY

Preferred number of queries to execute concurrently in the resource pool. This setting applies to the entire cluster:

PLANNEDCONCURRENCY { num-queries | AUTO }
  • num-queries: Integer value ≥ 1, the preferred number of queries to execute concurrently in the resource pool. When possible, query resource budgets are limited to allow this level of concurrent execution.

  • AUTO: Value is calculated automatically at query runtime. Vertica sets this parameter to the lower of these two calculations, but never less than 4:

    • Number of logical cores

    • Memory divided by 2GB

    If the number of logical cores differs across nodes, AUTO is calculated differently for each node. Distributed queries run with the minimum effective planned concurrency across the participating nodes; single-node queries run with the planned concurrency of the initiator.

Default: AUTO

PRIORITY

Priority of queries in this resource pool when they compete for resources in the GENERAL pool:

PRIORITY { integer | HOLD }
  • integer: Negative or positive integer value, where higher numbers denote higher priority:

    • User-defined resource pool: -100 to 100

    • Built-in resource pools SYSQUERY, RECOVERY, and TM: -110 to 110

  • HOLD: Sets priority to -999. Queries in this resource pool are queued until QUEUETIMEOUT is reached.

Default: 0

QUEUETIMEOUT

Maximum time a request can wait for pool resources before it is rejected, not more than one year:

QUEUETIMEOUT { integer | 'interval' | 'NONE' }
  • integer: Maximum wait time in seconds

  • interval: Maximum wait time expressed in the following format:

    num year num months num [days] HH:MM:SS.ms
    
  • NONE (empty string): No maximum wait time, request can be queued indefinitely, up to one year.

If the value that you specify resolves to more than one year, Vertica returns with a warning and sets the parameter to 365 days:

=> ALTER RESOURCE POOL user_0 QUEUETIMEOUT '11 months 50 days 08:32';
WARNING 5693:  Using 1 year for QUEUETIMEOUT
ALTER RESOURCE POOL
=> SELECT QUEUETIMEOUT FROM resource_pools WHERE name = 'user_0';
 QUEUETIMEOUT
--------------
 365
(1 row)

Default: 00:05 (5 minutes)

RUNTIMECAP

Maximum execution time allowed to queries in this resource pool, not more than one year; otherwise, Vertica returns an error. If a query exceeds this setting, it tries to cascade to a secondary pool:

RUNTIMECAP { 'interval' | NONE }
  • interval: Maximum wait time expressed in the following format:

    num year num month num [day] HH:MM:SS.ms
    
  • NONE (empty string): No time limit on queries running in this resource pool.

If the user or session also has a RUNTIMECAP, the shorter limit applies.

RUNTIMEPRIORITY

Determines how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool:

RUNTIMEPRIORITY { HIGH | MEDIUM | LOW }

Default: MEDIUM

RUNTIMEPRIORITYTHRESHOLD

Maximum time (in seconds) in which query processing must complete before the resource manager assigns the resource pool's RUNTIMEPRIORITY to it. All queries begin execution with a priority of HIGH.

RUNTIMEPRIORITYTHRESHOLD seconds

Default: 2

SINGLEINITIATOR

Set to false for backward compatibility. Do not change this setting.

Privileges

Superuser

Examples

Set resource pool PRIORITY to 5:

=> ALTER RESOURCE POOL ceo_pool PRIORITY 5;

Designate a secondary resource pool:

=> CREATE RESOURCE POOL second_pool;
=> ALTER RESOURCE POOL ceo_pool CASCADE TO second_pool;

Decrease the MEMORYSIZE and MAXMEMORYSIZE settings of the built-in TM resource pool for the dashboard subcluster to 0%. Setting these parameters to 0 prevents the subcluster from running mergeout operations:

=> ALTER RESOURCE POOL TM FOR SUBCLUSTER dashboard MEMORYSIZE '0%'
   MAXMEMORYSIZE '0%';

See Tuning tuple mover pool settings for more information.
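
The following sketch caps query runtime on ceo_pool at two hours and then resets its PRIORITY to the default value (the interval shown is illustrative):

=> ALTER RESOURCE POOL ceo_pool RUNTIMECAP '2 hours';
=> ALTER RESOURCE POOL ceo_pool PRIORITY DEFAULT;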

2.21 - ALTER ROLE

Renames an existing role.

Renames an existing role.

Syntax

ALTER ROLE name RENAME TO new-name

Parameters

name
The role to rename.
new-name
The role's new name.

Privileges

Superuser

Examples

=> ALTER ROLE applicationadministrator RENAME TO appadmin;
ALTER ROLE

2.22 - ALTER ROUTING RULE

Changes an existing load balancing policy routing rule.

Changes an existing load balancing policy routing rule.

Syntax

ALTER ROUTING RULE { rule_name | FOR WORKLOAD workload_name }  {
    RENAME TO new_name |
    SET ROUTE TO 'cidr_range'|
    SET GROUP TO group_name |
    SET WORKLOAD TO workload_name |
    SET SUBCLUSTER TO subcluster_name [,...] |
    SET PRIORITY TO priority |
    { ADD | REMOVE } SUBCLUSTER subcluster_name [,...] 
}

Parameters

rule_name
The name of the existing routing rule to change.
FOR WORKLOAD workload_name
The name of a workload.
RENAME TO new_name
The new name of the routing rule.
SET ROUTE TO 'cidr_range'
An IPv4 or IPv6 address range in CIDR format. Changes the address range of client connections this rule applies to.
SET GROUP TO group_name
The load balancing group that handles the connections that match this rule.
SET WORKLOAD TO workload_name
The name of the workload.
SET SUBCLUSTER TO subcluster_name
One or more subclusters to route clients to.
SET PRIORITY TO priority
Sets the priority for the routing rule, where priority is a non-negative INTEGER. The greater the value, the greater the priority. If a user or their role is granted multiple routing rules, the one with the greatest priority applies. You can set the priority for a routing rule only when applying a routing rule to a workload.
{ADD | REMOVE} SUBCLUSTER subcluster_name
Add or remove one or more subclusters from the routing rule.

Examples

This example changes the routing rule named etl_rule so it uses the load balancing group named etl_group to handle incoming connections in the IP address range of 10.20.100.0 to 10.20.100.255.

=> ALTER ROUTING RULE etl_rule SET GROUP TO etl_group;
ALTER ROUTING RULE
=> ALTER ROUTING RULE etl_rule SET ROUTE TO '10.20.100.0/24';
ALTER ROUTING RULE
=> \x
Expanded display is on.
=> SELECT * FROM routing_rules WHERE NAME = 'etl_rule';
-[ RECORD 1 ]----+---------------
name             | etl_rule
source_address   | 10.20.100.0/24
destination_name | etl_group

This example routes analytics workloads to the sc_analytics_2 subcluster:

=> ALTER ROUTING RULE FOR WORKLOAD analytics SET SUBCLUSTER TO sc_analytics_2;

This example changes the workload rule to handle reporting instead of analytics workloads:

=> ALTER ROUTING RULE FOR WORKLOAD analytics SET WORKLOAD TO reporting;

This example changes the priority of the workload rule for reporting workloads:

=> ALTER ROUTING RULE FOR WORKLOAD reporting SET PRIORITY TO 5;

This example adds a subcluster to the routing rule:

=> ALTER ROUTING RULE FOR WORKLOAD analytics ADD SUBCLUSTER sc_01, sc_02;

This example removes a subcluster from the routing rule:

=> ALTER ROUTING RULE FOR WORKLOAD analytics REMOVE SUBCLUSTER sc_01, sc_02;
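
To rename a routing rule, use RENAME TO. For example (the new name is illustrative):

=> ALTER ROUTING RULE etl_rule RENAME TO etl_rule_chicago;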

2.23 - ALTER SCHEDULE

Modifies a schedule.

Modifies a schedule.

Syntax

ALTER SCHEDULE [[database.]schema.]schedule {
          OWNER TO new_owner
        | SET SCHEMA new_schema
        | RENAME TO new_schedule
        | USING CRON new_cron_expression
        | USING DATETIMES new_timestamp_list
    }

Arguments

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

schedule
The schedule to modify.
new_owner
The new owner of the schedule.
new_schema
The new schema of the schedule.
new_schedule
The new name for the schedule.
new_cron_expression
A cron expression. You should use this for recurring tasks. Value separators are not currently supported.
new_timestamp_list
A comma-separated list of timestamps. You should use this to schedule non-recurring events at arbitrary times.

Privileges

Superuser

Examples

To change the cron expression for a schedule:

=> ALTER SCHEDULE daily_schedule USING CRON '0 8 * * *';

To change a schedule that uses a cron expression to use a timestamp list instead:

=> ALTER SCHEDULE my_schedule USING DATETIMES('2023-10-01 12:30:00', '2022-11-01 12:30:00');

To rename a schedule:

=> ALTER SCHEDULE daily_schedule RENAME TO daily_8am_gmt;
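
A schedule's owner or schema can be changed the same way. The following is a sketch; the owner and schema names are illustrative:

=> ALTER SCHEDULE daily_8am_gmt OWNER TO dbadmin;
=> ALTER SCHEDULE daily_8am_gmt SET SCHEMA store;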

2.24 - ALTER SCHEMA

Changes one or more schemas in one of the following ways:.

Changes one or more schemas in one of the following ways:

  • Enable or disable inheritance of schema privileges by tables created in the schemas.

  • Reassign schema ownership to another user.

  • Change schema disk quota.

  • Rename one or more schemas.

Syntax

ALTER SCHEMA [{ namespace. | database. }]schema
    DEFAULT {INCLUDE | EXCLUDE} SCHEMA PRIVILEGES
    | OWNER TO user-name [CASCADE]
    | DISK_QUOTA { value | SET NULL }

You can rename more than one schema in a single operation:

ALTER SCHEMA [{ namespace. | database. }]schema[,...] RENAME TO [namespace.]new-schema-name[,...]

Parameters

namespace
For Eon Mode databases, namespace of the schema to alter. If no namespace is specified, the namespace of the schema is assumed to be default_namespace.
database
For Enterprise Mode databases, name of the database containing the schema. If specified, it must be the current database.
schema
Name of the schema to modify.
DEFAULT {INCLUDE | EXCLUDE} SCHEMA PRIVILEGES

Specifies whether to enable or disable default inheritance of privileges for new tables in the specified schema:

  • EXCLUDE SCHEMA PRIVILEGES (default): Disables inheritance of schema privileges.

  • INCLUDE SCHEMA PRIVILEGES: Specifies to grant tables in the specified schema the same privileges granted to that schema. This option has no effect on existing tables in the schema.

See also Enabling schema inheritance.

OWNER TO
Reassigns schema ownership to the specified user:
OWNER TO user-name [CASCADE]

By default, ownership of objects in the reassigned schema remain unchanged. To reassign ownership of schema objects to the new schema owner, qualify the OWNER TO clause with CASCADE. For details, see Cascading Schema Ownership below.

DISK_QUOTA
One of the following:
  • A string, an integer followed by a supported unit: K, M, G, or T. If the new value is smaller than the current usage, the operation succeeds but no further disk space can be used until usage is reduced below the new quota.

  • SET NULL to remove a quota.

For more information, see Disk quotas.

RENAME TO
Renames one or more schemas:
RENAME TO [namespace.]new-schema-name[,...]

The following requirements apply:

  • The new schema name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, models, and schemas in the namespace.

  • If you specify multiple schemas to rename, the source and target lists must have the same number of names.

  • If you specified the namespace of the schema to alter, you must provide the same namespace for the new-schema-name. You cannot rename a schema to a different namespace.

Privileges

One of the following:

  • Superuser

  • Schema owner

Cascading schema ownership

By default, ALTER SCHEMA...OWNER TO does not affect ownership of objects in the target schema or the privileges granted on them. If you qualify the OWNER TO clause with CASCADE, Vertica acts as follows on objects in the target schema:

  • Transfers ownership of objects owned by the previous schema owner to the new owner.

  • Revokes all object privileges granted by the previous schema owner.

If issued by non-superusers, ALTER SCHEMA...OWNER TO CASCADE ignores all objects that belong to other users, and returns with notices on the objects that it cannot change. For example:

  1. Schema ms is owned by user mayday, and contains two tables: ms.t1 owned by mayday, and ms.t2 owned by user joe:

    => \dt
                               List of tables
         Schema     |         Name          | Kind  |  Owner  | Comment
    ----------------+-----------------------+-------+---------+---------
     ms             | t1                    | table | mayday  |
     ms             | t2                    | table | joe     |
    
  2. User mayday transfers ownership of schema ms to user dbadmin, using CASCADE. On return, ALTER SCHEMA reports that it cannot transfer ownership of table ms.t2 and its projections, which are owned by user joe:

    
    => \c - mayday
    You are now connected as user "mayday".
    => ALTER SCHEMA ms OWNER TO dbadmin CASCADE;
    NOTICE 3583:  Insufficient privileges on ms.t2
    NOTICE 3583:  Insufficient privileges on ms.t2_b0
    NOTICE 3583:  Insufficient privileges on ms.t2_b1
    ALTER SCHEMA
    => \c
    You are now connected as user "dbadmin".
    => \dt
                               List of tables
         Schema     |         Name          | Kind  |  Owner  | Comment
    ----------------+-----------------------+-------+---------+---------
     ms             | t1                    | table | dbadmin |
     ms             | t2                    | table | joe     |
    
  3. User dbadmin transfers ownership of schema ms to user pat, again using CASCADE. This time, because dbadmin is a superuser, ALTER SCHEMA transfers ownership of all ms tables to user pat:

    => ALTER SCHEMA ms OWNER TO pat CASCADE;
    ALTER SCHEMA
    => \dt
                               List of tables
         Schema     |         Name          | Kind  |  Owner  | Comment
    ----------------+-----------------------+-------+---------+---------
     ms             | t1                    | table | pat     |
     ms             | t2                    | table | pat     |
    

Swapping schemas

Renaming schemas is useful for swapping schemas without actually moving data. To facilitate the swap, include a non-existent, temporary placeholder schema in the rename list. For example, the following ALTER SCHEMA statement uses the temporary schema temps to facilitate swapping schema S1 with schema S2. In this example, S1 is renamed to temps. Then S2 is renamed to S1. Finally, temps is renamed to S2.

=> ALTER SCHEMA S1, S2, temps RENAME TO temps, S1, S2;

Examples

The following example renames schemas S1 and S2 to S3 and S4, respectively:

=> ALTER SCHEMA S1, S2 RENAME TO S3, S4;

This example sets the default behavior for new table t2 to automatically inherit the schema's privileges:

=> ALTER SCHEMA s1 DEFAULT INCLUDE SCHEMA PRIVILEGES;

=> CREATE TABLE s1.t2 (i int);

This example sets the default for new tables to not automatically inherit privileges from the schema:

=> ALTER SCHEMA s1 DEFAULT EXCLUDE SCHEMA PRIVILEGES;

In an Eon Mode database, the following statement renames the train schema to ferry in the transit namespace:

=> ALTER SCHEMA transit.train RENAME TO transit.ferry;
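
As a sketch, the following statements set a 10 GB disk quota on schema s1 and then remove it (the quota value is illustrative):

=> ALTER SCHEMA s1 DISK_QUOTA '10G';
=> ALTER SCHEMA s1 DISK_QUOTA SET NULL;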

2.25 - ALTER SEQUENCE

Changes a sequence in two ways:.

Changes a sequence in two ways:

  • Changes values that control sequence behavior—for example, its start value and range of minimum and maximum values. These changes take effect only when you start a new database session.
  • Changes sequence name, schema, or ownership. These changes take effect immediately.

Syntax

Change sequence behavior:

ALTER SEQUENCE [[database.]schema.]sequence
    [ INCREMENT [ BY ] integer ]
    [ MINVALUE integer | NO MINVALUE ]
    [ MAXVALUE integer | NO MAXVALUE ]
    [ RESTART [ WITH ] integer ]
    [ CACHE integer | NO CACHE ]
    [ CYCLE | NO CYCLE ]

Change sequence name, schema, or ownership:

ALTER SEQUENCE [schema.]sequence-name {
    RENAME TO name
    | SET SCHEMA schema
    | OWNER TO owner
}

Arguments

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

sequence
Name of the sequence to alter.

In the case of IDENTITY table columns, Vertica generates the sequence name using the following convention:

table-name_col-name_seq

To obtain this name, query the SEQUENCES system table.

INCREMENT integer

Positive or negative minimum value change on each call to NEXTVAL. The default is 1.

This value is a minimum increment size. The sequence can increment by more than this value unless you also specify NO CACHE.

MINVALUE integer | NO MINVALUE
Minimum value the sequence can generate. If this change would invalidate the current sequence position, the operation fails.
MAXVALUE integer | NO MAXVALUE
Maximum value the sequence can generate. If this change would invalidate the current sequence position, the operation fails.
RESTART integer
New start value of the sequence. The next call to NEXTVAL returns the new start value.
CACHE | NO CACHE

How many unique sequence numbers to pre-allocate and store in memory on each node for faster access. Vertica sets up caching for each session and distributes it across all nodes. A value of 0 or 1 is equivalent to NO CACHE.

By default, the sequence cache is set to 250,000.

For details, see Distributing sequences.

CYCLE | NO CYCLE
How to handle reaching the end of the sequence. The default is NO CYCLE, meaning that a call to NEXTVAL returns an error after the sequence reaches its upper or lower limit. CYCLE instead wraps, as follows:
  • Ascending sequence: wraps to the minimum value.
  • Descending sequence: wraps to the maximum value.
RENAME TO name
New name in the current schema for the sequence. Supported only for named sequences.
SET SCHEMA schema
Moves the sequence to a new schema. Supported only for named sequences.
OWNER TO owner
Reassigns sequence ownership to another user.

Privileges

For named sequences, USAGE on the schema and one of the following:

  • Sequence owner

  • ALTER privilege on the sequence

  • For certain operations, non-superusers must have the following schema privileges:

    • To rename a sequence: CREATE, USAGE
    • To move a sequence to another schema: CREATE on the destination, USAGE on the source

For IDENTITY column sequences, USAGE on the table schema and one of the following:

  • Table owner

  • ALTER privileges

Non-superusers must also have SELECT privileges to enable or disable constraint enforcement, or remove partitioning.

Examples

See Altering sequences.
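
As a minimal sketch (the sequence name my_seq is illustrative), the following statements change a sequence's increment and restart value, then rename it:

=> ALTER SEQUENCE my_seq INCREMENT BY 10 RESTART WITH 101;
=> ALTER SEQUENCE my_seq RENAME TO my_seq_v2;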

See also

CREATE SEQUENCE

2.26 - ALTER SESSION

ALTER SESSION sets and clears session-level configuration parameter values for the current session.

ALTER SESSION sets and clears session-level configuration parameter values for the current session. To identify session-level parameters, query the CONFIGURATION_PARAMETERS system table.

Syntax

ALTER SESSION {
    SET [PARAMETER] parameter=value[,...]
    | CLEAR { [PARAMETER] parameter[,...] | PARAMETER ALL }
    | SET UDPARAMETER [ FOR libname ] ud-parameter=value[,...]
    | CLEAR UDPARAMETER { [ FOR libname ] ud-parameter[,...] | ALL }
}

Arguments

SET [PARAMETER] parameter=value
Sets one or more configuration parameters to the specified value.
CLEAR { [PARAMETER] parameter | PARAMETER ALL }
Clears the specified configuration parameters, or all parameters, of changes that were set in the current session.
SET UDPARAMETER [ FOR libname ] ud-parameter=value[,...]
Sets one or more user-defined session parameters to be used with UDxs. The maximum length of a parameter name when set with ALTER SESSION is 128 characters.

If you specify a library, then only that library can access the parameter's value. Use this restriction to protect parameters that hold sensitive data, such as credentials.

CLEAR UDPARAMETER { [ FOR libname ] ud-parameter[,...] | ALL }
Clears the specified user-defined parameters, or all of them, of changes that were set in the current session.

Privileges

None

Examples

Set a configuration parameter:

=> ALTER SESSION SET ForceUDxFencedMode = 1;
ALTER SESSION
    
=> SELECT parameter_name, current_value, default_value 
   FROM CONFIGURATION_PARAMETERS 
   WHERE parameter_name = 'ForceUDxFencedMode';
   parameter_name   | current_value | default_value
--------------------+---------------+---------------
 ForceUDxFencedMode | 1             | 0
(1 row)

Clear all session-level settings for configuration parameters:

=> ALTER SESSION CLEAR PARAMETER ALL;
ALTER SESSION

Set a UDx parameter in a single library:

=> ALTER SESSION SET UDPARAMETER FOR MyLibrary RowCount = 25;
ALTER SESSION
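
To clear that user-defined parameter later in the same session, following the syntax above:

=> ALTER SESSION CLEAR UDPARAMETER FOR MyLibrary RowCount;
ALTER SESSION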

2.27 - ALTER SUBCLUSTER

Changes the configuration of a subcluster.

Changes the configuration of a subcluster. You can use this statement to rename a subcluster or make it the default subcluster.

Syntax

ALTER SUBCLUSTER subcluster-name {
    RENAME TO new-name |
    SET DEFAULT
}

Parameters

subcluster-name
The name of the subcluster to alter.
RENAME TO new-name
Changes the name of the subcluster to new-name.
SET DEFAULT
Makes the subcluster the default subcluster. When you add new nodes to the database and do not specify a subcluster to contain them, Vertica adds them to the default subcluster. There can be only one default subcluster at a time. The subcluster that was previously the default subcluster becomes a non-default subcluster.

Privileges

Superuser

Examples

This example makes the analytics_cluster the default subcluster:

=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
--------------------
 default_subcluster
(1 row)

=> ALTER SUBCLUSTER analytics_cluster SET DEFAULT;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
-------------------
 analytics_cluster
(1 row)

This example renames default_subcluster to load_subcluster:

=> ALTER SUBCLUSTER default_subcluster RENAME TO load_subcluster;
ALTER SUBCLUSTER

=> SELECT DISTINCT subcluster_name FROM subclusters;
  subcluster_name
-------------------
 load_subcluster
 analytics_cluster
(2 rows)

2.28 - ALTER SUBNET

Renames an existing subnet.

Renames an existing subnet.

Syntax

ALTER SUBNET subnet-name RENAME TO new-subnet-name

Parameters

subnet-name
The name of the existing subnet.
new-subnet-name
The new name for the subnet.

Privileges

Superuser

Examples

=> ALTER SUBNET mysubnet RENAME TO myNewSubnet;

2.29 - ALTER TABLE

Modifies the metadata of an existing table.

Modifies the metadata of an existing table. All changes are auto-committed.

Syntax

ALTER TABLE [[{ namespace. | database. } ]schema.]table {
    ADD COLUMN [ IF NOT EXISTS ] column datatype
       [ column‑constraint ]
       [ ENCODING encoding-type ]
       [ PROJECTIONS (projections-list) | ALL PROJECTIONS ]
       [, ...]
    | ADD table-constraint
    | ALTER COLUMN column {
        ENCODING encoding-type PROJECTIONS (projection-list)
        | { SET | DROP } expression }
    | ALTER CONSTRAINT constraint-name { ENABLED | DISABLED }
    | DISK_QUOTA { value | SET NULL }
    | DROP CONSTRAINT constraint-name [ CASCADE | RESTRICT ]
    | DROP [ COLUMN ] [ IF EXISTS ] column [ CASCADE | RESTRICT ]
    | FORCE OUTER integer
    | { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES
    | OWNER TO owner
    | partition-clause [ REORGANIZE ]
    | REMOVE PARTITIONING
    | RENAME [ COLUMN ] name TO new‑name
    | RENAME TO new-table-name[,...]
    | REORGANIZE
    | SET {
        ActivePartitionCount { count | DEFAULT }
        | IMMUTABLE ROWS
        | MERGEOUT { 1 | 0 }
        | SCHEMA [namespace.]schema }
}

Arguments

namespace
For Eon Mode databases, namespace that contains the table to alter. If no namespace is specified, the namespace of the table is assumed to be default_namespace.
database
For Enterprise Mode databases, name of the database containing the table. If specified, it must be the current database.
schema
Name of the schema that contains the table to alter.
table
The table to alter.
ADD COLUMN
Adds one or more columns to the table and to the specified projections (or all projections, if not specified).

Adding a column with the same name as one that already exists is an error unless you use IF NOT EXISTS.

Restrictions on columns of complex types also apply to columns that you add using ADD COLUMN.

You can qualify the new column definition with one or more of these options:

  • A column constraint:

      [ CONSTRAINT constraint-name ]
      { [NOT] NULL 
        | DEFAULT expression 
        | SET USING expression 
        | DEFAULT USING expression }
    

    Columns that use DEFAULT with static values can be added in a single ALTER TABLE statement. Columns that use non-static DEFAULT values must be added in separate ALTER TABLE statements. For more information, see Column-constraint.

  • ENCODING: the column encoding type, by default AUTO.

  • PROJECTIONS: a comma-separated list of base names of existing projections to add the column to. Vertica adds the column to all buddies of each projection. The projection list cannot include projections with pre-aggregated data such as live aggregate projections. Instead of specifying projections, you can use ALL PROJECTIONS to add the column to all projections, excluding those with pre-aggregated data.

ADD table‑constraint
Adds a constraint to a table that does not have any associated projections.
ALTER COLUMN
You can alter an existing column in one of two ways:
  • Set encoding on a column for one or more projections of this table:

    ENCODING encoding-type PROJECTIONS (projections-list)
    

    where projections-list is a comma-delimited list of projections to update with the new encoding. You can specify each projection in two ways:

    • Projection base name: Update all projections that share this base name.

    • Projection name: Update the specified projection. If the projection is segmented, the change is propagated to all buddies.

    If one of the projections does not contain the target column, Vertica returns with a rollback error.

    For details, see Projection Column Encoding.

  • Set or drop a setting for a column of scalar data, including primitive arrays:

    SET { DEFAULT expression
              | USING expression
        | DEFAULT USING expression
        | NOT NULL
        | DATA TYPE datatype
    }
    DROP { DEFAULT
         | SET USING
         | DEFAULT USING
         | NOT NULL
    }
    

    You cannot change the data type of a column of any complex type that is neither a scalar type nor an array of scalar types. One exception applies: in external tables, you can change a primitive column type to a complex type.

    Setting a DEFAULT or SET USING expression has no effect on existing column values. To refresh the column with its DEFAULT or SET USING expression, update it as follows:

    • SET USING column: Call REFRESH_COLUMNS on the table.

    • DEFAULT column: update the column as follows:

      UPDATE table-name SET column-name=DEFAULT;
      

Altering a column with DEFAULT or SET USING can increase disk usage, which can cause the operation to fail if it would violate the table or schema disk quota.

ALTER CONSTRAINT
Specifies whether to enforce primary key, unique key, and check constraints:
ALTER CONSTRAINT constraint-name {ENABLED | DISABLED}
DISK_QUOTA
One of the following:
  • A string, an integer followed by a supported unit: K, M, G, or T. If the new value is smaller than the current usage, the operation succeeds but no further disk space can be used until usage is reduced below the new quota.

  • SET NULL to remove a quota.

For more information, see Disk quotas.

DROP CONSTRAINT
Drops the specified table constraint from the table:
DROP CONSTRAINT constraint-name [CASCADE | RESTRICT]

You can qualify DROP CONSTRAINT with one of these options:

  • CASCADE: Drops a constraint and all dependencies in other tables.

  • RESTRICT: Does not drop a constraint if there are dependent objects. Same as the default behavior.

Dropping a table constraint has no effect on views that reference the table.

DROP [COLUMN]
Drops the specified column from the table and that column's ROS containers:
DROP [COLUMN] [IF EXISTS] column [CASCADE | RESTRICT]

You can qualify DROP COLUMN with one of these options:

  • IF EXISTS generates an informational message if the column does not exist. If you omit this option and the column does not exist, Vertica generates a ROLLBACK error message.

  • CASCADE is required if the column has dependencies.

  • RESTRICT drops the column only from the given table.

The column's table cannot be immutable.

See Dropping table columns.

FORCE OUTER integer
Specifies whether a table is joined to another as an inner or outer input. For details, see Controlling join inputs.
{INCLUDE | EXCLUDE | MATERIALIZE} [SCHEMA] PRIVILEGES
Specifies default inheritance of schema privileges for this table:
  • EXCLUDE PRIVILEGES (default) disables inheritance of privileges from the schema.

  • INCLUDE PRIVILEGES grants the table the same privileges granted to its schema.

  • MATERIALIZE PRIVILEGES copies grants to the table and creates a GRANT object on the table. This disables the inherited privileges flag on the table, so you can:

    • Grant more specific privileges at the table level.

    • Use schema-level privileges as a template.

    • Move the table to a different schema.

    • Change schema privileges without affecting the table.

See also Setting privilege inheritance on tables and views.

OWNER TO owner
Changes the table owner.
partition-clause [REORGANIZE]
Invalid for external tables, logically divides table data storage through a PARTITION BY clause:
PARTITION BY expression
  [ GROUP BY expression ]
  [ SET ACTIVEPARTITIONCOUNT integer ]

For details, see Partition clause.

If you qualify the partition clause with REORGANIZE and the table previously specified no partitioning, the Vertica Tuple Mover immediately implements the partition clause. If the table previously specified partitioning, the Tuple Mover evaluates ROS storage containers and reorganizes them as needed to conform with the new partition clause.

REMOVE PARTITIONING
Specifies to remove partitioning from a table definition. The Tuple Mover subsequently removes existing partitions from ROS containers.
RENAME [COLUMN]
Renames the specified column within the table. The column's table cannot be immutable.
RENAME TO
Renames one or more tables:
RENAME TO new-table-name[,...]

The following requirements apply:

  • The renamed table must be in the same schema as the original table.

  • The new table name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

  • If you specify multiple tables to rename, the source and target lists must have the same number of names.

REORGANIZE
Valid only for partitioned tables, invokes the Tuple Mover to reorganize ROS storage containers as needed to conform with the table's current partition clause. ALTER TABLE...REORGANIZE and Vertica meta-function PARTITION_TABLE operate identically.

REORGANIZE can also qualify a new partition clause.

SET
Changes a table setting, one of the following:
  • ActivePartitionCount { count | DEFAULT }, valid only for partitioned tables, specifies how many partitions are active for this table, one of the following:

    • count: Unsigned integer, supersedes configuration parameter ActivePartitionCount.

    • DEFAULT: Removes the table-level active partition count. The table obtains its active partition count from the configuration parameter ActivePartitionCount.

    For details on usage, see Active and inactive partitions.

  • IMMUTABLE ROWS prevents changes to table row values by blocking DML operations such as UPDATE and DELETE. Once set, table immutability cannot be reverted.

    You cannot set a flattened table to be immutable. For details on all immutable table restrictions, see Immutable tables.

  • MERGEOUT { 1 | 0 } specifies whether to enable or disable mergeout to ROS containers that consolidate projection data of this table. By default, mergeout is enabled (1) on all tables.

  • SCHEMA [namespace.]schema-name moves the table from its current schema to [namespace.]schema-name. For Eon Mode databases, the new schema must be in the same namespace as the old schema. When no namespace is provided, the namespace is set to default_namespace.

Vertica automatically moves all projections that are anchored to the source table to the destination schema. It also moves all IDENTITY columns to the destination schema. For details, see Moving tables to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Table owner

  • ALTER privileges

Non-superusers must also have SELECT privileges to enable or disable constraint enforcement, or remove partitioning.

For certain operations, non-superusers must have the following schema privileges:

  • To rename a table: CREATE, USAGE

  • To move a table to another schema: USAGE on the source schema, CREATE on the destination schema

Restrictions for complex types

Complex types used in native tables have some restrictions, in addition to the restrictions for individual types listed on their reference pages:

  • A native table must have at least one column that is a primitive type or a native array (one-dimensional array of a primitive type). If a flex table has real columns, it must also have at least one column satisfying this restriction.

  • Complex type columns cannot be used in ORDER BY or PARTITION BY clauses nor as FILLER columns.

  • Complex type columns cannot have constraints.

  • Complex type columns cannot use DEFAULT or SET USING.

  • Expressions returning complex types cannot be used as projection columns, and projections cannot be segmented or ordered by columns of complex types.

Exclusive ALTER TABLE clauses

The following ALTER TABLE clauses cannot be combined with another ALTER TABLE clause:

  • ADD COLUMN

  • DROP COLUMN

  • RENAME COLUMN

  • SET SCHEMA

  • RENAME [TO]

Node down limitations

Enterprise Mode only

The following ALTER TABLE operations are not supported when one or more database cluster nodes are down:

  • ALTER COLUMN ... ADD table-constraint

  • ALTER COLUMN ... SET DATA TYPE

  • ALTER COLUMN ... { SET DEFAULT | DROP DEFAULT }

  • ALTER COLUMN ... { SET USING | DROP SET USING }

  • ALTER CONSTRAINT

  • DROP COLUMN

  • DROP CONSTRAINT

Pre-aggregated projection restrictions

You cannot modify the metadata of anchor table columns that are included in live aggregate or Top-K projections, nor can you drop these columns. To make these changes, you must first drop all live aggregate and Top-K projections that are associated with the table.

External table restrictions

Not all ALTER TABLE options pertain to external tables. For instance, you cannot add a column to an external table, but you can rename the table:

=> ALTER TABLE mytable RENAME TO mytable2;
ALTER TABLE

Locked tables

If the operation cannot obtain an O lock on the target table, Vertica tries to close any internal Tuple Mover sessions that are running on that table. If successful, the operation can proceed. Explicit Tuple Mover operations that are running in user sessions do not close. If an explicit Tuple Mover operation is running on the table, the ALTER TABLE operation proceeds only after the Tuple Mover operation completes.

Examples

In an Eon Mode database, the following statement alters the schema of the flights table from schema airline1 in namespace airport to schema airline2 in namespace airport:

=> ALTER TABLE airport.airline1.flights SET SCHEMA airport.airline2;
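
The following sketches show other common operations on the store_orders table from earlier examples. Because ADD COLUMN and RENAME COLUMN are exclusive clauses, each is issued in its own statement; the new column and new name are illustrative:

=> ALTER TABLE public.store_orders ADD COLUMN expected_ship_date date;
=> ALTER TABLE public.store_orders RENAME COLUMN shipper TO shipping_company;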

2.29.1 - Projection column encoding

After you create a table and its projections, you can call ALTER TABLE...ALTER COLUMN to set or change the encoding type of an existing column in one or more projections.

After you create a table and its projections, you can call ALTER TABLE...ALTER COLUMN to set or change the encoding type of an existing column in one or more projections. For example:

ALTER TABLE store.store_dimension ALTER COLUMN store_region
  ENCODING rle PROJECTIONS (store.store_dimension_p1_b0, store.store_dimension_p2);

In this example, the ALTER TABLE statement specifies to set RLE encoding on column store_region for two projections: store_dimension_p1_b0 and store_dimension_p2. The PROJECTIONS list references the two projections by their projection name and base name, respectively. You can reference a projection either way; in both cases, the change is propagated to all buddies of the projection and stored in its DDL accordingly:

=> select export_objects('','store.store_dimension');

                          export_objects
------------------------------------------------------------------
CREATE TABLE store.store_dimension
(
    store_key int NOT NULL,
    store_name varchar(64),
    store_number int,
    store_address varchar(256),
    store_city varchar(64),
    store_state char(2),
    store_region varchar(64)
);

CREATE PROJECTION store.store_dimension_p1
(
 store_key,
 store_name,
 store_number,
 store_address,
 store_city,
 store_state,
 store_region ENCODING RLE
)
AS
 SELECT store_dimension.store_key,
        store_dimension.store_name,
        store_dimension.store_number,
        store_dimension.store_address,
        store_dimension.store_city,
        store_dimension.store_state,
        store_dimension.store_region
 FROM store.store_dimension
 ORDER BY store_dimension.store_key
SEGMENTED BY hash(store_dimension.store_key) ALL NODES KSAFE 1;

CREATE PROJECTION store.store_dimension_p2
(
 store_key,
 store_name,
 store_number,
 store_address,
 store_city,
 store_state,
 store_region ENCODING RLE
)
AS
 SELECT ...

2.30 - ALTER TLS CONFIGURATION

Alters a specified TLS Configuration object.

Alters a specified TLS Configuration object. For information on existing TLS Configuration objects, query TLS_CONFIGURATIONS.

Syntax

ALTER TLS CONFIGURATION tls_config_name {
    [ CERTIFICATE { NULL | cert_name } ]
    [ ADD CA CERTIFICATES ca_cert_name [,...] ]
    [ REMOVE CA CERTIFICATES ca_cert_name [,...] ]
    [ CIPHER SUITES { '' | 'openssl_cipher [,...]' } ]
    [ TLSMODE 'tlsmode' ]
    [ OWNER TO user_name ]
}

Parameters

tls_config_name
The TLS Configuration object to alter.
NULL
Removes the non-CA certificate from the TLS Configuration.
cert_name
A certificate created with CREATE CERTIFICATE.

You must have USAGE privileges on the certificate (either from ownership of the certificate or USAGE on its key, if any) to add it to a TLS Configuration.

ca_cert_name
A CA certificate created with CREATE CERTIFICATE.

You must have USAGE privileges on the certificate (either from ownership of the certificate or USAGE on its key, if any) to add it to a TLS Configuration.

openssl_cipher
A comma-separated list of cipher suites to use instead of the default set of cipher suites. Providing an empty string for this parameter clears the alternate cipher suite list and instructs the specified TLS Configuration to use the default set of cipher suites.

To view enabled cipher suites, use LIST_ENABLED_CIPHERS.

tlsmode
How Vertica establishes TLS connections and handles certificates, one of the following, in order of ascending security:
  • DISABLE: Disables TLS. All other options for this parameter enable TLS.

  • ENABLE: Enables TLS. Vertica does not check client certificates.

  • TRY_VERIFY: Establishes a TLS connection if one of the following is true:

    • the other host presents a valid certificate

    • the other host doesn't present a certificate

    If the other host presents an invalid certificate, the connection will use plaintext.

  • VERIFY_CA: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA. If the other host does not present a certificate, the connection uses plaintext.

  • VERIFY_FULL: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA and the certificate's cn (Common Name) or subjectAltName attribute matches the hostname or IP address of the other host.

    Note that for client certificates, cn is used for the username, so subjectAltName must match the hostname or IP address of the other host.

VERIFY_FULL is unsupported for client-server TLS (the connection type handled by ServerTLSConfig) and behaves like VERIFY_CA.

Privileges

Non-superuser: ALTER privileges on the TLS Configuration.

Examples

To remove all certificates and CA certificates from the LDAPLink TLS Configuration:

=>  SELECT * FROM tls_configurations WHERE name='LDAPLink';
   name   |  owner  | certificate | ca_certificate | cipher_suites |  mode
----------+---------+-------------+----------------+---------------+---------
 LDAPLink | dbadmin | server_cert | ca             |               | DISABLE
 LDAPLink | dbadmin | server_cert | ica            |               | DISABLE
(2 rows)

=> ALTER TLS CONFIGURATION LDAPLink CERTIFICATE NULL REMOVE CA CERTIFICATES ca, ica;
ALTER TLS CONFIGURATION

=> SELECT * FROM tls_configurations WHERE name='LDAPLink';
   name   |  owner  | certificate | ca_certificate | cipher_suites |  mode
----------+---------+-------------+----------------+---------------+---------
 LDAPLink | dbadmin |             |                |               | DISABLE
(1 row)

To use an alternate set of cipher suites for client-server TLS:

 => ALTER TLS CONFIGURATION server CIPHER SUITES
    'DHE-PSK-AES256-CBC-SHA384,
     DHE-PSK-AES128-GCM-SHA256,
     PSK-AES128-CBC-SHA256';
ALTER TLS CONFIGURATION

 => SELECT name, cipher_suites FROM tls_configurations WHERE name='server';
   name   |                               cipher_suites
 server   | DHE-PSK-AES256-CBC-SHA384,DHE-PSK-AES128-GCM-SHA256,PSK-AES128-CBC-SHA256
(1 row)
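
As a sketch, the following statement requires the server TLS Configuration to verify that peer certificates are signed by a trusted CA:

=> ALTER TLS CONFIGURATION server TLSMODE 'VERIFY_CA';
ALTER TLS CONFIGURATION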

2.31 - ALTER TRIGGER

Modifies a trigger.

Modifies a trigger.

Syntax

ALTER TRIGGER [[database.]schema.]trigger {
          OWNER TO new_owner
        | SET SCHEMA new_schema
        | RENAME TO new_trigger
        | PROCEDURE TO new_procedure
    }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

trigger
The trigger to modify.
new_owner
The new owner and definer of the trigger. This affects the behavior of AS DEFINER.
new_schema
The new schema of the trigger.
new_trigger
The new name for the trigger.
new_procedure
The function signature of the stored procedure.

Privileges

Superuser

Examples

To attach a different procedure to the trigger:

=> ALTER TRIGGER daily_1am PROCEDURE TO log_user_actions(10, 20);

To rename a trigger:

=> ALTER TRIGGER daily_1am RENAME TO daily_1am_gmt;

To move the trigger to a different schema (the target schema name is illustrative):

=> ALTER TRIGGER daily_1am_gmt SET SCHEMA other_schema;

2.32 - ALTER USER

Changes user account parameters and user-level configuration parameters.

Changes user account parameters and user-level configuration parameters.

Syntax

ALTER USER user-name {
   account-parameter value[,...]
   | SET [PARAMETER] cfg-parameter=value[,...]
   | CLEAR [PARAMETER] cfg-parameter[,...]
}

Parameters

user-name
Name of the user. Names that contain special characters must be double-quoted. To enforce case-sensitivity, use double-quotes.

For details on name requirements, see Creating a database name and password.

account-parameter value
Specifies user account settings (see below).
SET [PARAMETER]
Sets the specified configuration parameters. The new setting applies only to the current session, and to all later sessions launched by this user. Concurrent user sessions are unaffected by new settings unless they call meta-function RESET_SESSION.
CLEAR [PARAMETER]
Resets the specified configuration parameters to their default values.

User account parameters

Specify one or more user-account parameters and their settings as a comma-delimited list:

account-parameter value[,...]

Parameter Setting
ACCOUNT

Locks or unlocks user access to the database, one of the following:

  • UNLOCK (default)

  • LOCK prevents a new user from logging in. This can be useful when creating an account for a user who does not need immediate access.

DEFAULT ROLE

Specifies what roles are the default roles for this user, set to one of the following:

  • NONE (default): Removes all default roles.

  • role[,...]: Comma-delimited list of roles.

  • ALL: Sets as default all user roles.

  • ALL EXCEPT role[,...]: Comma-delimited list of roles to exclude as default roles.

Default roles are automatically activated when a user logs in. The roles specified by this parameter supersede any roles assigned earlier.

GRACEPERIOD

Specifies how long a user query can block on any session socket, one of the following:

  • NONE (default): Removes any grace period previously set on session queries.

  • 'interval': Specifies as an interval the maximum grace period for current session queries, up to 20 days.

For details, see Handling session socket blocking.

IDENTIFIED BY

Changes the user's password:

IDENTIFIED BY '[new-password]'
   | ['hashed-password' SALT 'hash-salt'] 
   [REPLACE 'current-password']
  • new-password: ASCII password that Vertica then hashes for internal storage. An empty string enables this user to access the database with no password.

  • hashed-password: A pre-hashed password and its associated hex string hash-salt. Setting a password this way bypasses all password complexity requirements.

  • REPLACE: Required for non-superusers, who must supply their current password. Non-superusers can only change their own passwords.

For details, see Password guidelines and Creating a database name and password.

IDLESESSIONTIMEOUT

The length of time the system waits before disconnecting an idle session, one of the following:

  • NONE (default): No limit set for this user. If you omit this parameter, no limit is set for this user.

  • 'interval': An interval value, up to one year.

For details, see Managing client connections.

MAXCONNECTIONS

Sets the maximum number of connections the user can have to the server, one of the following:

  • NONE (default): No limit set. If you omit this parameter, the user can have an unlimited number of connections across the database cluster.

  • integer ON DATABASE: Sets to integer the maximum number of connections across the database cluster.

  • integer ON NODE: Sets to integer the maximum number of connections to each node.

For details, see Managing client connections.

MEMORYCAP

Sets how much memory can be allocated to user requests, one of the following:

  • NONE (default): No limit

  • A string value that specifies the memory limit, one of the following:

    • 'int%' expresses the maximum as a percentage of total memory available to the Resource Manager, where int is an integer value between 0 and 100. For example:

      MEMORYCAP '40%'

    • 'int{K|M|G|T}' expresses memory allocation in kilobytes, megabytes, gigabytes, or terabytes. For example:

      MEMORYCAP '10G'

PASSWORD EXPIRE

Forces immediate expiration of the user's password. The user must change the password on the next login.

PROFILE

Assigns a profile that controls password requirements for this user, one of the following:

  • DEFAULT (default): Assigns the default database profile to this user.

  • profile-name: A profile that is defined by CREATE PROFILE.

RENAME TO

Assigns the user a new user name. All privileges assigned to the user remain unchanged.

RESOURCE POOL pool-name [FOR SUBCLUSTER sc-name]

Assigns a resource pool to this user. The user must also be granted privileges to this pool, unless privileges to the pool are set to PUBLIC.

The FOR SUBCLUSTER clause assigns a subcluster-specific resource pool to the user. You can assign only one subcluster-specific resource pool to each user.

RUNTIMECAP

Sets how long this user's queries can execute, one of the following:

  • NONE (default): No limit set for this user. If you omit this parameter, no limit is set for this user.

  • 'interval': An interval value, up to one year.

A query's runtime limit can be set at three levels: the user's runtime limit, the user's resource pool, and the session setting. For more information, see Setting a runtime limit for queries.

SEARCH_PATH

Specifies the user's default search path, that tells Vertica which schemas to search for unqualified references to tables and UDFs, one of the following:

  • DEFAULT (default): Sets the search path as follows:

    "$user", public, v_catalog, v_monitor, v_internal
    
  • Comma-delimited list of schemas.

For details, see Setting Search Paths.

SECURITY_ALGORITHM 'algorithm'

Sets the user-level security algorithm for hash authentication, where algorithm is one of the following:

  • NONE (default): Uses the system-level parameter, SecurityAlgorithm

  • SHA512

  • MD5

The user's password expires when you change the SECURITY_ALGORITHM value and must be reset.

TEMPSPACECAP

Sets how much temporary file storage is available for user requests, one of the following:

  • NONE (default): No limit

  • String value that specifies the storage limit, one of the following:

    • int% expresses the maximum as a percentage of total temporary storage available to the Resource Manager, where int is an integer value between 0 and 100. For example:

      TEMPSPACECAP '40%'

    • int{K|M|G|T} expresses storage allocation in kilobytes, megabytes, gigabytes, or terabytes. For example:

      TEMPSPACECAP '10G'

Privileges

Non-superusers can change the following options on their own user accounts:

  • IDENTIFIED BY

  • RESOURCE POOL

  • SEARCH_PATH

  • SECURITY_ALGORITHM

When changing another user's resource pool to one outside of the PUBLIC schema, the user must have USAGE privileges on that resource pool.

Setting user-level configuration parameters

SET | CLEAR PARAMETER can specify only user-level configuration parameters, otherwise Vertica returns an error. Only superusers can set and clear user-level parameters, unless they are also supported at the session level.

To get the names of user-level parameters, query system table CONFIGURATION_PARAMETERS. For example:

=> SELECT parameter_name, allowed_levels FROM configuration_parameters
      WHERE allowed_levels ilike '%USER%' AND parameter_name ilike '%depot%' ORDER BY parameter_name;
       parameter_name        |     allowed_levels
-----------------------------+-------------------------
 BackgroundDepotWarming      | SESSION, USER, DATABASE
 DepotOperationsForQuery     | SESSION, USER, DATABASE
 EnableDepotWarmingFromPeers | SESSION, USER, DATABASE
 UseDepotForReads            | SESSION, USER, DATABASE
 UseDepotForWrites           | SESSION, USER, DATABASE
(5 rows)

The following example sets the user-level configuration parameter UseDepotForWrites for two users, Yvonne and Ahmed:

=> SHOW USER Yvonne PARAMETER ALL;
  user  |        parameter        | setting
--------+-------------------------+---------
 Yvonne | DepotOperationsForQuery | Fetches
(1 row)

=> ALTER USER Yvonne SET PARAMETER UseDepotForWrites = 0;
ALTER USER
=> SHOW USER Yvonne PARAMETER ALL;
  user  |        parameter        | setting
--------+-------------------------+---------
 Yvonne | DepotOperationsForQuery | Fetches
 Yvonne | UseDepotForWrites       | 0
(2 rows)

=> ALTER USER Ahmed SET PARAMETER DepotOperationsForQuery = 'Fetches';
ALTER USER
=> SHOW USER ALL PARAMETER ALL;
  user  |        parameter        | setting
--------+-------------------------+---------
 Ahmed  | DepotOperationsForQuery | Fetches
 Yvonne | DepotOperationsForQuery | Fetches
 Yvonne | UseDepotForWrites       | 0
(3 rows)

Examples

Set a user's password

=> CREATE USER user1;
=> ALTER USER user1 IDENTIFIED BY 'newpassword';

Set user's security algorithm and password

This example sets a user's security algorithm and password to SHA-512 and newpassword, respectively. When you execute the ALTER USER statement, Vertica hashes the password with the SHA-512 algorithm and saves the hash:

=> CREATE USER user1;
=> ALTER USER user1 SECURITY_ALGORITHM 'SHA512' IDENTIFIED BY 'newpassword';

Assign default roles to a user

This example makes a user's assigned roles their default roles. Default roles are automatically set (enabled) when a user logs in:

=> CREATE USER user1;
CREATE USER
=> GRANT role1, role2, role3 to user1;
=> ALTER USER user1 DEFAULT ROLE ALL;

You can pair ALL with EXCEPT to exclude certain roles:

=> CREATE USER user2;
CREATE USER
=> GRANT role1, role2, role3 to user2;
=> ALTER USER user2 DEFAULT ROLE ALL EXCEPT role1;
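
Set query and idle-session limits for a user

The following is a sketch; the interval values are illustrative:

=> ALTER USER user1 RUNTIMECAP '30 minutes';
=> ALTER USER user1 IDLESESSIONTIMEOUT '8 hours';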

2.33 - ALTER VIEW

Modifies the metadata of an existing view.

Modifies the metadata of an existing view. The changes are auto-committed.

Syntax

General usage:

ALTER VIEW [[database.]schema.]view {
    OWNER TO owner
    | SET SCHEMA schema
    | { INCLUDE | EXCLUDE | MATERIALIZE } [ SCHEMA ] PRIVILEGES
}

Rename view:

ALTER VIEW [[database.]schema.]view[,...] RENAME TO new-view-name[,...]

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

view
The view to alter.
SET SCHEMA schema
Moves the view from one schema to another.
OWNER TO owner
Changes the view owner.
{ INCLUDE | EXCLUDE | MATERIALIZE } [SCHEMA] PRIVILEGES
Specifies default inheritance of schema privileges for this view:
  • EXCLUDE [SCHEMA] PRIVILEGES (default) disables inheritance of privileges from the schema.

  • INCLUDE [SCHEMA] PRIVILEGES grants the view the same privileges granted to its schema.

  • MATERIALIZE: Copies grants to the view and creates a GRANT object on the view. This disables the inherited privileges flag on the view, so you can:

    • Grant more specific privileges at the view level

    • Use schema-level privileges as a template

    • Move the view to a different schema

    • Change schema privileges without affecting the view

See also Setting privilege inheritance on tables and views.
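
For example, the following statement (assuming a view named sampleview) switches the view to inherit its schema's privileges:

=> ALTER VIEW sampleview INCLUDE SCHEMA PRIVILEGES;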

RENAME TO
Renames one or more views:
RENAME TO new-view-name[,...]

The following requirements apply:

  • The new view name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

  • If you specify multiple views to rename, the source and target lists must have the same number of names.

  • Renaming a view requires USAGE and CREATE privileges on the schema that contains the view.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • View owner

  • ALTER privilege on the view

For certain operations, non-superusers must have the following schema privileges:

Schema privileges required...              For these operations...
CREATE, USAGE                              Rename view
CREATE: destination schema                 Move view to another schema
USAGE: current schema

Examples

Rename view view1 to view2:

=> CREATE VIEW view1 AS SELECT * FROM t;
CREATE VIEW
=> ALTER VIEW view1 RENAME TO view2;
ALTER VIEW
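
Move view view2 to schema archive (a hypothetical schema, assumed to exist):

=> ALTER VIEW view2 SET SCHEMA archive;
ALTER VIEW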

3 - BEGIN

Starts a transaction block.

Starts a transaction block.

Syntax

BEGIN [ WORK | TRANSACTION ] [ isolation-level ] [ READ [ONLY] | WRITE ]

Parameters

WORK | TRANSACTION
Optional keywords for readability only.
isolation-level
Specifies the transaction's isolation level, which determines what data the transaction can access when other transactions are running concurrently, one of the following:
  • READ COMMITTED (default)

  • SERIALIZABLE

  • REPEATABLE READ (automatically converted to SERIALIZABLE)

  • READ UNCOMMITTED (automatically converted to READ COMMITTED)

For details, see Transactions.

READ [ONLY] | WRITE
Specifies the transaction mode, one of the following:
  • READ WRITE (default): Transaction is read/write.

  • READ ONLY: Transaction is read-only.

Setting the transaction session mode to read-only disallows the following SQL statements, but does not prevent all disk write operations:

  • INSERT, UPDATE, DELETE, and COPY if the target table is not a temporary table

  • All CREATE, ALTER, and DROP commands

  • GRANT, REVOKE, and EXPLAIN if the SQL to run is one of the statements cited above.
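
For example, the following sketch starts a read-only transaction; within it, DML on permanent tables is disallowed until the transaction ends:

=> BEGIN TRANSACTION READ ONLY;
BEGIN
=> COMMIT;
COMMIT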

Privileges

None

Examples

Create a transaction with the isolation level set to READ COMMITTED and the transaction mode to READ WRITE:

=> BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED READ WRITE;
BEGIN
=> CREATE TABLE sample_table (a INT);
CREATE TABLE
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
(1 row)

=> END;
COMMIT

See also

4 - CALL

Invokes a stored procedure created with CREATE PROCEDURE (Stored).

Invokes a stored procedure created with CREATE PROCEDURE (stored).

Syntax

CALL [[database.]schema.]procedure( [ argument-list] );

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

procedure
The name of the stored procedure, where procedure conforms to conventions described in Identifiers.
argument-list
A comma-delimited list of arguments to pass to the stored procedure, whose types correspond to the types of the procedure's IN parameters.

Privileges

Non-superuser: EXECUTE on the procedure

Examples

See Executing stored procedures and Stored procedures: use cases and examples.
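
The following minimal sketch assumes a hypothetical PLvSQL stored procedure raiseXY; it creates the procedure and then invokes it with CALL:

=> CREATE PROCEDURE raiseXY(IN x INT, y VARCHAR) LANGUAGE PLvSQL AS $$
BEGIN
    -- Print both IN arguments as notices
    RAISE NOTICE 'x = %', x;
    RAISE NOTICE 'y = %', y;
END
$$;
CREATE PROCEDURE
=> CALL raiseXY(3, 'some string');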

See also

5 - COMMENT ON statements

COMMENT ON statements let you create comments on database objects, such as schemas, tables, and libraries.

COMMENT ON statements let you create comments on database objects, such as schemas, tables, and libraries. Each object can have one comment. Comments are stored in the system table COMMENTS.
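
To review existing comments, query the COMMENTS system table directly. The following sketch assumes the columns object_type, object_name, and comment:

=> SELECT object_type, object_name, comment FROM comments;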

5.1 - COMMENT ON AGGREGATE FUNCTION

Adds, revises, or removes a comment on an aggregate function.

Adds, revises, or removes a comment on an aggregate function. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON AGGREGATE FUNCTION [[database.]schema.]function (function-args) IS { 'comment' | NULL };

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function
The name of the aggregate function with which to associate the comment.
function-args
The function arguments.
comment
Specifies the comment text to add. If a comment already exists for this function, this overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the APPROXIMATE_MEDIAN(x FLOAT) function:

=> COMMENT ON AGGREGATE FUNCTION APPROXIMATE_MEDIAN(x FLOAT) IS 'alias of APPROXIMATE_PERCENTILE with 0.5 as its parameter';

The following example removes a comment from the APPROXIMATE_MEDIAN(x FLOAT) function:

=> COMMENT ON AGGREGATE FUNCTION APPROXIMATE_MEDIAN(x FLOAT) IS NULL;

5.2 - COMMENT ON ANALYTIC FUNCTION

Adds, revises, or removes a comment on an analytic function.

Adds, revises, or removes a comment on an analytic function. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON ANALYTIC FUNCTION [[database.]schema.]function (function-args) IS { 'comment' | NULL };

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function
The name of the analytic function with which to associate the comment.
function-args
The function arguments.
comment
Specifies the comment text to add. If a comment already exists for this function, this overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the user-defined an_rank() function:

=> COMMENT ON ANALYTIC FUNCTION an_rank() IS 'built from the AnalyticFunctions library';

The following example removes a comment from the user-defined an_rank() function:

=> COMMENT ON ANALYTIC FUNCTION an_rank() IS NULL;

5.3 - COMMENT ON COLUMN

Adds, revises, or removes a comment on a column in a table or projection.

Adds, revises, or removes a comment on a column in a table or projection. Each object can have one comment. Comments are stored in the COMMENTS system table.

Syntax

COMMENT ON COLUMN [[database.]schema.]object.column IS {'comment' | NULL}

Arguments

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

object
Name of the table or projection containing the column.
column
Name of the column to which the comment applies.
comment | NULL
Comment text to add, or NULL to remove an existing comment. The new value replaces any existing comment for this column.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment for a table column:

=> COMMENT ON COLUMN store.store_sales_fact.transaction_time IS 'GMT';

You can also add comments to columns in projections:

=> COMMENT ON COLUMN customer_dimension_vmart_node01.customer_name IS 'Last name only';

To remove a comment, specify NULL:

=> COMMENT ON COLUMN store.store_sales_fact.transaction_time IS NULL;

5.4 - COMMENT ON CONSTRAINT

Adds, revises, or removes a comment on a constraint.

Adds, revises, or removes a comment on a constraint. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON CONSTRAINT constraint ON [[database.]schema.]table IS ... {'comment' | NULL };

Parameters

constraint
The name of the constraint associated with the comment.
[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

table
The name of the table constraint with which to associate a comment.
comment
Specifies the comment text to add. If a comment already exists for this constraint, this comment overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the constraint_x constraint on the promotion_dimension table:

=> COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS 'Primary key';

The following example removes a comment from the constraint_x constraint on the promotion_dimension table:

=> COMMENT ON CONSTRAINT constraint_x ON promotion_dimension IS NULL;

5.5 - COMMENT ON FUNCTION

Adds, revises, or removes a comment on a function.

Adds, revises, or removes a comment on a function. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON FUNCTION [[database.]schema.]function (function-args) IS { 'comment' | NULL };

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function
The name of the function with which to associate the comment.
function-args
The function arguments.
comment
Specifies the comment text to add. If a comment already exists for this function, this overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the macros.zerowhennull (x INT) function:

=> COMMENT ON FUNCTION macros.zerowhennull(x INT) IS 'Returns a 0 if not NULL';

The following example removes a comment from the macros.zerowhennull (x INT) function:

=> COMMENT ON FUNCTION macros.zerowhennull(x INT) IS NULL;

5.6 - COMMENT ON LIBRARY

Adds, revises, or removes a comment on a library.

Adds, revises, or removes a comment on a library. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON LIBRARY [[database.]schema.]library IS {'comment' | NULL}

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

library
The name of the library associated with the comment.
comment
Specifies the comment text to add. If a comment already exists for this library, this comment overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the library MyFunctions:

=> COMMENT ON LIBRARY MyFunctions IS 'In development';

The following example removes a comment from the library MyFunctions:

=> COMMENT ON LIBRARY MyFunctions IS NULL;

See also

5.7 - COMMENT ON NODE

Adds, revises, or removes a comment on a node.

Adds, revises, or removes a comment on a node. Each object can have one comment. Comments are stored in the system table COMMENTS.

Dropping an object drops all comments associated with the object.

Syntax

COMMENT ON NODE node-name IS  { 'comment' | NULL }

Parameters

node-name
The name of the node associated with the comment.
comment
Specifies the comment text to add. If a comment already exists for this node, this comment overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment for the initiator node:

=> COMMENT ON NODE initiator IS 'Initiator node';

The following example removes a comment from the initiator node:

=> COMMENT ON NODE initiator IS NULL;

See also

COMMENTS

5.8 - COMMENT ON PROJECTION

Adds, revises, or removes a comment on a projection.

Adds, revises, or removes a comment on a projection. Each object can have one comment. Comments are stored in the system table COMMENTS.

Dropping an object drops all comments associated with the object.

Syntax

COMMENT ON PROJECTION [[database.]schema.]projection IS { 'comment' | NULL }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

projection
The name of the projection associated with the comment.
comment
Specifies the text of the comment to add. If a comment already exists for this projection, the comment you enter here overwrites the previous comment.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the customer_dimension_vmart_node01 projection:

=> COMMENT ON PROJECTION customer_dimension_vmart_node01 IS 'Test data';

The following example removes a comment from the customer_dimension_vmart_node01 projection:

=> COMMENT ON PROJECTION customer_dimension_vmart_node01 IS NULL;

See also

COMMENTS

5.9 - COMMENT ON SCHEMA

Adds, revises, or removes a comment on a schema.

Adds, revises, or removes a comment on a schema. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON SCHEMA schema-name IS {'comment' | NULL}

Parameters

schema-name
The schema associated with the comment.
comment
Text of the comment to add. If a comment already exists for this schema, the comment you enter here overwrites the previous comment.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the public schema:

=> COMMENT ON SCHEMA public  IS 'All users can access this schema';

The following example removes a comment from the public schema.

=> COMMENT ON SCHEMA public IS NULL;

5.10 - COMMENT ON SEQUENCE

Adds, revises, or removes a comment on a sequence.

Adds, revises, or removes a comment on a sequence. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON SEQUENCE [[database.]schema.]sequence IS { 'comment' | NULL }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

sequence
The name of the sequence associated with the comment.
comment
Specifies the text of the comment to add. If a comment already exists for this sequence, this comment overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the sequence called prom_seq.

=> COMMENT ON SEQUENCE prom_seq IS 'Promotion codes';

The following example removes a comment from the prom_seq sequence.

=> COMMENT ON SEQUENCE prom_seq IS NULL;

5.11 - COMMENT ON TABLE

Adds, revises, or removes a comment on a table.

Adds, revises, or removes a comment on a table. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON TABLE [[database.]schema.]table IS { 'comment' | NULL }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

table
The name of the table with which to associate the comment.
comment
Specifies the text of the comment to add. Enclose the text of the comment within single-quotes. If a comment already exists for this table, the comment you enter here overwrites the previous comment.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes a previously added comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the promotion_dimension table:

=> COMMENT ON TABLE promotion_dimension IS '2011 Promotions';

The following example removes a comment from the promotion_dimension table:

=> COMMENT ON TABLE promotion_dimension IS NULL;

5.12 - COMMENT ON TRANSFORM FUNCTION

Adds, revises, or removes a comment on a user-defined transform function.

Adds, revises, or removes a comment on a user-defined transform function. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON TRANSFORM FUNCTION [[database.]schema.]tfunction
...( [ tfunction-arg-name tfunction-arg-type ][,...] ) IS {'comment' | NULL}

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

tfunction
The name of the transform function with which to associate the comment.
tfunction-arg-name tfunction-arg-type
The names and data types of one or more transform function arguments. If you supply argument names and types, each type must match the type specified in the library used to create the original transform function.
comment
Specifies the comment text to add. If a comment already exists for this transform function, this comment overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the macros.zerowhennull(x INT) transform function:

=> COMMENT ON TRANSFORM FUNCTION macros.zerowhennull(x INT) IS 'Returns a 0 if not NULL';

The following example removes a comment from the macros.zerowhennull(x INT) function by using the NULL option:

=> COMMENT ON TRANSFORM FUNCTION macros.zerowhennull(x INT) IS NULL;

5.13 - COMMENT ON VIEW

Adds, revises, or removes a comment on a view.

Adds, revises, or removes a comment on a view. Each object can have one comment. Comments are stored in the system table COMMENTS.

Syntax

COMMENT ON VIEW [[database.]schema.]view IS { 'comment' | NULL }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

view
The name of the view with which to associate the comment.
comment
Specifies the text of the comment to add. If a comment already exists for this view, this comment overwrites the previous one.

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.

NULL
Removes an existing comment.

Privileges

Non-superuser: object owner

Examples

The following example adds a comment to the curr_month_ship view:

=> COMMENT ON VIEW curr_month_ship IS 'Shipping data for the current month';

The following example removes a comment from the curr_month_ship view:

=> COMMENT ON VIEW curr_month_ship IS NULL;

6 - COMMIT

Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.

Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.

COMMIT is a synonym for END.

Syntax

COMMIT [ WORK | TRANSACTION ]

Parameters

WORK | TRANSACTION
Optional keywords for readability only.

Privileges

None

Examples

This example shows how to commit an insert.

=> CREATE TABLE sample_table (a INT);
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
=> COMMIT;

See also

7 - CONNECT TO VERTICA

Connects to another Vertica database to enable importing and exporting data across Vertica databases, with COPY FROM VERTICA and EXPORT TO VERTICA, respectively.

Connects to another Vertica database to enable importing and exporting data across Vertica databases, with COPY FROM VERTICA and EXPORT TO VERTICA, respectively.

After you establish a connection to another database, the connection remains open in the current session until you explicitly close it with DISCONNECT. You can have only one connection to another database at a time. However, you can establish successive connections to different databases in the same session.

By default, invoking CONNECT TO VERTICA occurs over the Vertica private network. For information about creating a connection over a public network, see Using public and private IP networks.

Syntax

CONNECT TO VERTICA db-spec
    [ USER username ]
    [ PASSWORD 'password' ] ON 'host', port
    [ TLS CONFIGURATION tls_configuration ]
    [ TLSMODE PREFER ]

Parameters

db-spec
The target database, either the database name or DEFAULT.
username
The username to use when connecting to the other database. If omitted, the current user's username is used.
password
The password of the user on the target database. If omitted, you can use credential forwarding or mutual TLS to authenticate to the target database. For details, see Passwordless authentication.
host
The host name of one of the nodes in the other database.
port
The port number of the other database.
tls_configuration
The TLS Configuration to use for TLS. The TLS Configuration is ignored if ImportExportTLSMode is set to any of the following:
  • REQUIRE_FORCE

  • VERIFY_CA_FORCE

  • VERIFY_FULL_FORCE

The effective TLS mode of CONNECT TO VERTICA changes depending on the TLSMODE of the TLS Configuration and the value of ImportExportTLSMode (for non-FORCE values). For details, see Effective TLSMode.

If the target database is configured for mutual TLS and the user on the target database can authenticate with TLS, you can use tls_configuration to authenticate instead of password.

TLSMODE PREFER

Overrides the value of configuration parameter ImportExportTLSMode for this connection to PREFER. If TLS CONFIGURATION is set or ImportExportTLSMode is set to REQUIRE_FORCE, VERIFY_CA_FORCE, or VERIFY_FULL_FORCE, then TLSMODE PREFER has no effect.

If TLSMODE PREFER and ImportExportTLSMode are both not set, CONNECT TO VERTICA uses ENABLE.

Effective TLS mode

The effective TLS mode of CONNECT TO VERTICA is determined by the TLSMODE of the TLS Configuration and the value of ImportExportTLSMode. The following table summarizes this interaction for non-FORCE values of ImportExportTLSMode:

TLS Configuration        ImportExportTLSMode       Effective TLS mode
ENABLE                   PREFER                    PREFER
ENABLE                   Anything except PREFER    REQUIRE
TRY_VERIFY, VERIFY_CA    Anything                  VERIFY_CA
VERIFY_FULL              Anything                  VERIFY_FULL

Privileges

None

Security requirements

When importing from or exporting to a Vertica database, you can connect only to a database that uses trusted (username only) or password-based authentication, as described in Security and authentication. OAuth and Kerberos authentication methods are not supported.

If configured with a certificate, Vertica encrypts data during transmission using TLS and attempts to encrypt plan metadata. You can set configuration parameter ImportExportTLSMode to require encryption for plan metadata.

Passwordless authentication

If you omit the password in the call to CONNECT TO VERTICA, you can authenticate to the target database in the following ways:

  • Credential forwarding
  • TLS authentication

For brevity, in this and later sections, "caller" refers to the user that runs CONNECT TO VERTICA from the source database to connect to the target database.

Credential forwarding

Credential forwarding lets you authenticate as the current user on the target database by forwarding your hash or password, depending on the caller's authentication method on the target database. To do this, the following requirements must be met:

For an example, see Examples.

TLS authentication

TLS authentication lets you use the certificates and keys in the specified tls_configuration parameter to authenticate instead of a password. To do this, the following requirements must be met:

  • The caller exists in both databases.
  • In the source database:
    • A custom TLS configuration contains a CA certificate, a client certificate, and a client key that can be used to authenticate to the target database with TLS authentication.
    • The caller has USAGE privileges on the TLS Configuration.
  • In the target database:

For an example, see Examples.

Examples

Password authentication

The following example authenticates as the dbadmin to ExampleDB on the host VerticaHost01 on port 5433:

=> CONNECT TO VERTICA ExampleDB USER dbadmin PASSWORD 'Password123' ON 'VerticaHost01',5433;
CONNECT

Credential forwarding

In the following example, Penny wants to run CONNECT TO VERTICA from db_1 to connect to db_2:

  1. On db_1, create the user penny:

    => CREATE USER penny IDENTIFIED BY 'my_password';
    
  2. On db_2, create the user penny with the same hash, salt, and security (hashing) algorithm:

    1. On db_1, retrieve the hash (from the password column) and salt from PASSWORDS:
      => SELECT user_name, password, salt FROM passwords WHERE user_name='penny';
      -[ RECORD 1 ]-------------------------------------------------------------------------------------------------------------------------------------
      user_name | penny
      password  | sha512b2e0911954a79d9d419b7b42774d36d17dd8a663c966bf0dd8f4cd6aad00d3c7ee7ba74b9bf0e56071cd995ae04aeaae537b35903e97255ed5fe286b6ff0d00a
      salt      | eac4ad840264e8c590120b31b815318f
      
    2. On db_1, retrieve the effective security algorithm from PASSWORD_AUDITOR:
      => SELECT user_name, effective_security_algorithm FROM password_auditor WHERE user_name='penny';
       user_name | effective_security_algorithm
      -----------+------------------------------
       penny     | SHA512
      (1 row)
      
    3. On db_2, create the user penny:
      => CREATE USER penny;
      
    4. On db_2, use ALTER USER to set the security algorithm to the value in db_1. For details, see Password hashing algorithm:
      => ALTER USER penny SECURITY_ALGORITHM 'SHA512';
      
    5. On db_2, set penny's password using the hash and salt from db_1:
      => ALTER USER penny IDENTIFIED BY 
      'sha512b2e0911954a79d9d419b7b42774d36d17dd8a663c966bf0dd8f4cd6aad00d3c7ee7ba74b9bf0e56071cd995ae04aeaae537b35903e97255ed5fe286b6ff0d00a'
      SALT 'eac4ad840264e8c590120b31b815318f';
      
  3. On db_1, enable credential forwarding for penny by setting EnableConnectCredentialForwarding (disabled by default):

    => ALTER USER penny SET EnableConnectCredentialForwarding=1;
    
  4. On db_2, create an authentication record with the hash method:

    => CREATE AUTHENTICATION v_hash_auth METHOD 'hash' HOST '0.0.0.0/0';
    
  5. Grant the authentication record to penny:

    => GRANT AUTHENTICATION v_hash_auth TO penny;
    
  6. On db_1, run CONNECT TO VERTICA to connect to db_2, omitting the password. This example uses the default port:

    => CONNECT TO VERTICA db_2 ON 'example.com', 5433;
    

TLS authentication

In the following example, Penny wants to run CONNECT TO VERTICA from db_1 to connect to db_2 without specifying her password:

  1. Create the user penny on both databases.

    => CREATE USER penny IDENTIFIED BY 'my_password';
    
  2. On db_1, create or import a CA certificate, server certificate, and client certificate.

    -- Create a CA certificate
    => CREATE KEY mtls_root_key TYPE 'RSA' LENGTH 2048;
    
    =>  CREATE CA CERTIFICATE mtls_root_cert
    SUBJECT '/C=US/ST=Massachusetts/L=Burlington/O=OpenText/OU=Vertica/CN=Vertica Root CA'
    VALID FOR 3650
    EXTENSIONS 'authorityKeyIdentifier' = 'keyid:always,issuer', 'nsComment' = 'Vertica generated root CA cert'
    KEY mtls_root_key;
    
    -- Create a client certificate, signing it with the CA certificate
    => CREATE KEY mtls_client_key TYPE 'RSA' LENGTH 2048;
    
    => CREATE CERTIFICATE mtls_client_cert
    SUBJECT '/C=US/ST=Massachusetts/L=Burlington/O=OpenText/OU=Vertica/CN=penny'
    SIGNED BY mtls_root_cert
    EXTENSIONS 'nsComment' = 'Vertica client cert', 'extendedKeyUsage' = 'clientAuth'
    KEY mtls_client_key;
    
    -- Create a server certificate, signing it with the CA certificate
    => CREATE KEY mtls_server_key TYPE 'RSA' LENGTH 2048;
    
    => CREATE CERTIFICATE mtls_server_cert
    SUBJECT '/C=US/ST=Massachusetts/L=Burlington/O=OpenText/OU=Vertica/CN=*.example.com'
    SIGNED BY mtls_root_cert
    EXTENSIONS 'extendedKeyUsage' = 'serverAuth'
    KEY mtls_server_key;
    
  3. On db_1, create a custom TLS configuration to use with CONNECT TO VERTICA, adding the mtls_client_cert and mtls_root_cert as the certificates and setting TLSMODE to ENABLE or higher:

    => CREATE TLS CONFIGURATION mtls;
    => ALTER TLS CONFIGURATION mtls CERTIFICATE mtls_client_cert ADD CA CERTIFICATES mtls_root_cert TLSMODE 'ENABLE';
    
  4. On db_2, import the CA certificate, server certificate, and server key from db_1. You can retrieve the contents of a certificate and key from the CERTIFICATES and CRYPTOGRAPHIC_KEYS system tables:

          
    => CREATE CA CERTIFICATE mtls_root_cert AS '-----BEGIN CERTIFICATE-----certificate_text-----END CERTIFICATE-----';
    => CREATE KEY mtls_server_key TYPE 'RSA' AS '-----BEGIN PRIVATE KEY-----key-----END PRIVATE KEY-----';
    => CREATE CERTIFICATE mtls_server_cert AS '-----BEGIN CERTIFICATE-----certificate_text-----END CERTIFICATE-----';
    
    

  5. On db_2, configure mutual mode client-server TLS using mtls_root_cert and mtls_server_cert and setting the TLSMODE to TRY_VERIFY or higher:

    => ALTER TLS CONFIGURATION server CERTIFICATE mtls_server_cert ADD CA CERTIFICATES mtls_root_cert TLSMODE 'TRY_VERIFY';
    
  6. On db_2, create an authentication record for TLS authentication.

    => CREATE AUTHENTICATION mtls_auth METHOD 'tls' HOST TLS '0.0.0.0/0';
    
  7. On db_2, grant mtls_auth to penny:

    => GRANT AUTHENTICATION mtls_auth TO penny;
    
  8. On db_1, run CONNECT TO VERTICA to connect to db_2, specifying the mtls TLS Configuration. This example uses the default port:

    => CONNECT TO VERTICA vmart USER penny ON 'example.com', 5433 TLS CONFIGURATION mtls;
    

8 - COPY

COPY bulk-loads data into a Vertica database.

COPY bulk-loads data into a Vertica database. By default, COPY automatically commits itself and any current transaction except when loading temporary tables. If COPY is terminated or interrupted, Vertica rolls it back.

COPY reads data as UTF-8 encoding.

For information on loading one or more files or pipes on a cluster host or on a client system, see COPY LOCAL.

To define a data pipeline to automatically load new files, see Automatic load.

Syntax

COPY [ /*+ LABEL (label-string)*/ ] [[{namespace. | database. }]schema.]target-table
   [ ( { column-as-expression | column }
       [ DELIMITER [ AS ] 'char' ]
       [ ENCLOSED [ BY ] 'char' ]
       [ ENFORCELENGTH ]
       [ ESCAPE [ AS ] 'char' | NO ESCAPE ]
       [ FILLER datatype]
       [ FORMAT 'format' ]
       [ NULL [ AS ] 'string' ]
       [ TRIM 'byte' ]
       [,...] ) ]
   [ COLUMN OPTION (column
       [ DELIMITER [ AS ] 'char' ]
       [ ENCLOSED [ BY ] 'char' ]
       [ ENFORCELENGTH ]
       [ ESCAPE [ AS ] 'char' | NO ESCAPE ]
       [ FORMAT 'format' ]
       [ NULL [ AS ] 'string' ]
       [ TRIM 'byte' ]
     [,...] ) ]
FROM {
   [ LOCAL ] STDIN [ compression ]
   | { 'path-to-data'
       [ ON { nodename | (nodeset) | ANY NODE | EACH NODE } ] [ compression ] }[,...]
     [ PARTITION COLUMNS column[,...] ]
   | LOCAL 'path-to-data' [ compression ] [,...]
   | VERTICA {source-namespace. | source-database. }[source-schema.]source-table[( source-column[,...] ) ]
  }
  [ NATIVE
    | FIXEDWIDTH COLSIZES {( integer )[,...]}
    | NATIVE VARCHAR
    | ORC
    | PARQUET
  ]
  | [ WITH ] UDL-clause[...]
}
   [ ABORT ON ERROR ]
   [ DELIMITER [ AS ] 'char' ]
   [ ENCLOSED [ BY ] 'char' ]
   [ ENFORCELENGTH ]
   [ ERROR TOLERANCE ]
   [ ESCAPE [ AS ] 'char' | NO ESCAPE ]
   [ EXCEPTIONS 'path' [ ON nodename ] [,...] ]
   [ NULL [ AS ] 'string' ]
   [ RECORD TERMINATOR 'string' ]
   [ REJECTED DATA {'path' [ ON nodename] [,...] | AS TABLE reject-table} ]
   [ REJECTMAX integer ]
   [ SKIP integer ]
   [ SKIP BYTES integer ]
   [ STREAM NAME  'streamName']
   [ TRAILING NULLCOLS ]
   [ TRIM 'byte' ]
   [ [ WITH ] PARSER parser ([ arg=value[,...] ]) ] ]
   [ NO COMMIT ]

Parameters

See Parameters.

Restrictions

See Restrictions.

Privileges

Superusers have full COPY privileges. The following requirements apply to non-superusers:

  • INSERT privilege on table

  • USAGE privilege on schema

  • USER-accessible storage location

  • Applicable READ or WRITE privileges granted to the storage location where files are read or written

COPY can specify a path to store rejected data and exceptions. If the path resolves to a storage location, the following privileges apply to non-superusers:

8.1 - Parameters

COPY parameters and their descriptions are divided into the following sections.

COPY parameters and their descriptions are divided into the following sections:

Target options

The following options apply to the target tables and their columns:

LABEL

Assigns a label to a statement to identify it for profiling and debugging.

{ database | namespace }
Name of the database or namespace that contains table:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

schema
Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.

If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.

target-table
The target columnar or flexible table for loading new data. Vertica loads the data into all projections that include columns from the schema table.
column-as-expression
An expression used to compute values for the target column, which must not be of a complex type. For example:
=> COPY t(year AS TO_CHAR(k, 'YYYY')) FROM 'myfile.dat'

Use this option to transform data when it is loaded into the target database.

For details, see Transforming data during loads.

column
Restricts the load to one or more specified columns in the table. If you omit specifying columns, COPY loads all columns by default.

Table columns that you omit from the column list are assigned their DEFAULT or SET USING values, if any; otherwise, COPY inserts NULL.

If you leave the column parameter blank to load all columns in the table, you can use the optional parameter COLUMN OPTION to specify parsing options for specific columns.

The data file must contain the same number of columns as the COPY command's column list.

COLUMN OPTION
Specifies load metadata for one or more columns declared in the table column list. For example, you can specify that a column has its own DELIMITER, ENCLOSED BY, or NULL AS expression, and so on. You do not have to specify every column name explicitly in the COLUMN OPTION list, but each column you specify must correspond to a column in the table column list.

Column options

Depending on how they are specified, the following COPY options can qualify specific columns or all columns. Some parser-specific options can also apply to either specific columns or all columns. For details about these two modes, see Global and column-specific options.

ENFORCELENGTH
If specified, COPY rejects data rows of type CHAR, VARCHAR, BINARY, and VARBINARY, or elements of those types in collections, if they are larger than the declared size.

By default, COPY truncates offending rows of these data types and elements of these types in collections, but does not reject the rows. For more details, see Handling Messy Data.

If a collection does not fit with all of its elements, COPY rejects the row without truncating. It does not reduce the number of elements. This can happen if each element is individually within limits but the number of elements causes the collection to exceed the maximum size for the column.

FILLER datatype
Reads but does not copy the data of an input column. Use filler columns to ignore input columns that do not have columns in the table. You can also use filler columns to transform data (see Examples and Transforming data during loads). Filler columns cannot be of complex types.

If a filler column has the same name as a column in the table, you must explicitly scope other uses of the name in the COPY statement. To refer to an ambiguous filler column, prefix the name with "*FILLER*":

=> COPY names(first FILLER VARCHAR, last, 
   full AS "*FILLER*".first||' '||last) FROM ...;
FORMAT 'format'
Input format, one of the following:
  • octal

  • hex

  • bitstream

See Binary (native) data to learn more about these formats.

When loading date/time columns, using FORMAT significantly improves load performance. COPY supports the same formats as the TO_DATE function. See Template patterns for date/time formatting.

If you specify invalid format strings, the COPY operation returns an error.
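
For example, the following sketch (assuming a hypothetical table orders with a DATE column order_date) applies a date format to a single column:

=> COPY orders(order_date FORMAT 'YYYY-MM-DD') FROM STDIN;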

NULL [AS]
The string representing a null value. The default is an empty string (''). You can specify a null value as any ASCII value in the range E'\000' to E'\177' inclusive. You cannot use the same character for both the DELIMITER and NULL options. For details, see Delimited data.

Input options

The following options are available for specifying source data:

LOCAL
Loads data files (up to 4,294,967,295 files) on a client system, rather than on a cluster host. LOCAL can qualify the STDIN and path-to-data arguments. For details, see COPY LOCAL.

Restrictions: Invalid for CREATE EXTERNAL TABLE AS COPY

STDIN
Reads data from the client's standard input instead of from a file. STDIN takes one input source only. To load multiple input sources, use path-to-data.

User must have INSERT privileges on the table and USAGE privileges on its schema.

Restrictions: Invalid for CREATE EXTERNAL TABLE AS COPY

path-to-data
Specifies the absolute path of the file (or files) containing the data, which can be from multiple input sources.
  • If the file is stored in HDFS, path-to-data is a URI in the webhdfs scheme, typically [[s]web]hdfs://[nameservice]/path. See HDFS file system.

  • If the file is stored in an S3 bucket, path-to-data is a URI in the format s3://bucket/path. See S3 object store.

  • If the file is stored in Google Cloud Storage, path-to-data is a URI in the format gs://bucket/path. See Google Cloud Storage (GCS) object store.

  • If the file is stored in Azure Blob Storage, path-to-data is a URI in the format azb://account/container/path. See Azure Blob Storage object store.

  • If the file is on the local Linux file system or an NFS mount, path-to-data is a local absolute file path.

path-to-data can optionally contain wildcards to match more than one file. The file or files must be accessible to the local client or the host on which the COPY statement runs. COPY skips empty files in the file list. A file list that includes directories causes the query to fail. See Specifying where to load data from. The supported patterns for wildcards are specified in the Linux Manual Page for Glob (7), and for ADO.net platforms, through the .NET Directory.getFiles method.

You can use variables to construct the pathname as described in Using load scripts.

If path-to-data resolves to a storage location on a local file system, and the user invoking COPY is not a superuser, the following requirements apply:

Further, if a user has privileges but is not a superuser, and invokes COPY from that storage location, Vertica ensures that symbolic links do not result in unauthorized access.
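
For example, the following sketch (assuming a hypothetical table sales and S3 bucket mybucket) loads all matching files from the bucket:

=> COPY sales FROM 's3://mybucket/data/*.dat' ON ANY NODE;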

PARTITION COLUMNS column[,...]
Columns whose values are specified in the directory structure and not in the data itself.
  • For Hive-style partitioning, paths contain directory names of the form colname=value, and COPY parses values for the specified partition columns. If a value is missing or cannot be coerced to the column data type, the source path is rejected.
  • For other partition schemes, column expressions specify how to extract values using the CURRENT_LOAD_SOURCE function. If the expression cannot be evaluated, the source path is rejected.

For more information, see Partitioned data.
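
For example, the following sketch assumes a hypothetical table records whose created and region column values are encoded in Hive-style directory names (created=.../region=...):

=> COPY records FROM 'webhdfs:///data/records/*/*/*' PARTITION COLUMNS created, region;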

ON nodename
Specifies the node on which the data to copy resides and the node that should parse the load file. If you omit nodename, the location of the input file defaults to the initiator node. Use nodename to copy and parse a load file from a node other than the COPY initiator node.
ON (nodeset)
Specifies a set of nodes on which to perform the load. The same data must be available for load on all named nodes. nodeset is a comma-separated list of node names in parentheses. For example:
=> COPY t FROM 'file1.txt' ON (v_vmart_node0001, v_vmart_node0002);

Vertica apportions the load among all of the specified nodes. If you also specify ERROR TOLERANCE or REJECTMAX, Vertica instead chooses a single node on which to perform the load.

If the data is available on all nodes, you usually use ON ANY NODE, which is the default for loads from HDFS and cloud object stores. However, you can use ON nodeset to do manual load-balancing among concurrent loads.

ON ANY NODE
Specifies that the data to load is available on all nodes, so COPY opens the path and parses it from any node in the cluster. For an Eon Mode database, COPY uses nodes within the same subcluster as the initiator.

Vertica attempts to apportion the load among several nodes if a file is large enough to benefit from apportioning. It chooses a single node if ERROR TOLERANCE or REJECTMAX is specified.

You can use a wildcard or glob (such as *.dat) to load multiple input files, combined with the ON ANY NODE clause. If you use a glob, COPY distributes the list of files to all cluster nodes and spreads the workload.

ON ANY NODE is invalid with STDIN and LOCAL. STDIN can only use the client host, and LOCAL indicates a client node.

ON ANY NODE is the default for loads from all paths other than the local Linux file system (that is, for loads from HDFS and cloud object stores).

ON EACH NODE
Loads data from the specified path on each node. Use this option when the path exists on all nodes but the data files it contains are different on each node. If the path is not valid on all nodes, COPY loads the valid paths and produces a warning. If the path is a shared location, COPY loads it only once as for ON ANY NODE.
compression
The input compression type, one of the following:
  • UNCOMPRESSED (default)

  • BZIP

  • GZIP

  • LZO

  • ZSTD

Input files can be of any format. If you use wildcards, all qualifying input files must be in the same format. To load different file formats, specify the format types specifically.

The following requirements and restrictions apply:

  • When using concatenated BZIP or GZIP files, verify that all source files terminate with a record terminator before concatenating them.

  • Concatenated BZIP and GZIP files are not supported for NATIVE (binary) and NATIVE VARCHAR formats.

  • LZO files are assumed to be compressed with lzop. Vertica supports the following lzop arguments:

    • --no-checksum / -F

    • --crc32

    • --adler32

    • --no-name / -n

    • --name / -N

    • --no-mode

    • --no-time

    • --fast

    • --best

    • Numbered compression levels

  • BZIP, GZIP, ZSTD, and LZO compression cannot be used with ORC format.
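
For example, the following sketch loads a hypothetical GZIP-compressed file from the initiator node:

=> COPY t FROM '/home/dbadmin/data/file1.dat.gz' GZIP;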

VERTICA
See COPY FROM VERTICA.
[WITH] UDL-clause[...]
Specifies one or more user-defined load functions—one source, and optionally one or more filters and one parser, as follows:
SOURCE source( [arg=value[,...] ]
[ FILTER filter( [arg=value[,...] ] ) ]...
[ PARSER parser( [arg=value[,...] ] ) ]

To use a flex table parser for column tables, use the PARSER parameter followed by a flex table parser argument. For supported flex table parsers, see Bulk loading data into flex tables.
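
For example, the following sketch uses the flex table parser fjsonparser to load a hypothetical JSON file into a columnar table:

=> COPY t FROM '/data/records.json' PARSER fjsonparser();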

Handling options

The following options control how COPY handles different contingencies:

ABORT ON ERROR
Specifies that COPY stops if any row is rejected. The statement is rolled back and no data is loaded.
COLSIZES (integer[,...])
Specifies column widths when loading fixed-width data. COPY requires that you specify COLSIZES when using the FIXEDWIDTH parser. COLSIZES and the list of integers must correspond to the columns listed in the table column list. For details, see Fixed-width format data.
ERROR TOLERANCE
Specifies that COPY treats each source during execution independently when loading data. The statement is not rolled back if a single source is invalid. The invalid source is skipped and the load continues.

Using this parameter disables apportioned load.

Restrictions: Invalid for ORC or Parquet data

EXCEPTIONS 'path' [ ON nodename ] [,...]
Exceptions, used in combination with REJECTED DATA, describe why each rejected row was rejected. Each exception row corresponds to a rejection row.

This option specifies the file name or absolute path of the output file for exceptions. If the file already exists, it is overwritten.

Files are written on the node or nodes executing the load or, if ON is used, on the designated node. You cannot specify a path on a shared file system or qualify the path with ON ANY NODE. For more information, see Saving load exceptions (EXCEPTIONS).

To collect exceptions together with the rejected rows, use REJECTED DATA AS TABLE. You cannot use EXCEPTIONS with REJECTED DATA AS TABLE.

If you use this option with COPY...ON ANY NODE, you must still specify the individual nodes for the exception files, as in the following example:

EXCEPTIONS '/home/ex01.txt' on v_db_node0001,'/home/ex02.txt'
on v_db_node0002,'/home/ex03.txt' on v_db_node0003

If path resolves to a storage location, non-superusers have the following requirements:

  • The storage location must be created with the USER option (see CREATE LOCATION).

  • The user must have READ access to the storage location where the files exist, as described in GRANT (storage location).

REJECTED DATA { 'path' [ ON nodename ] [,...] | AS TABLE reject-table }
Specifies where to write each row that failed to load. If this option is specified, records that failed due to parsing errors are always written. Records that failed due to an error during a transformation are written only if the CopyFaultTolerantExpressions configuration parameter is set.

Vertica can write rejected data to the local file system or to a table:

  • Local file system: Vertica writes rejected row data to the specified path on the node or nodes executing the load or, if ON is used, on the designated node. You cannot specify a path on a shared file system or qualify the path with ON ANY NODE. To make rejected data available on all nodes, write rejections to a table.

    The value of path can be a directory or a file prefix. If there are multiple load sources, path is always treated as a directory. If there are not multiple load sources but path ends with '/', or if a directory of that name already exists, it is also treated as a directory. Otherwise, path is treated as a file prefix.

    If the output file already exists, it is overwritten.

    When REJECTED DATA with a path is used with LOCAL, the output is written to the client.

  • Table: Vertica writes rejected row data and exceptions to a table. You cannot use this option with EXCEPTIONS.

    Writing rejections to a table is generally the better option, because they can then be queried from any node. If rejected data needs to be available from outside of Vertica, you can write it to a table and then export that table to a shared file system.

For more information about both options, see Handling messy data.
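
For example, the following sketch (assuming a hypothetical input file) writes rejected rows and their exceptions to the table t_rejects:

=> COPY t FROM '/data/input.txt' REJECTED DATA AS TABLE t_rejects;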

REJECTMAX integer
The maximum number of logical records that can be rejected before a load fails. For details, see Handling messy data.

REJECTMAX disables apportioned load.

SKIP integer
The number of records to skip in a load file. For example, you can use the SKIP option to omit table header information.

Restrictions: Invalid for ORC or Parquet data
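
For example, the following sketch skips a single header row in a hypothetical CSV file:

=> COPY t FROM '/data/input.csv' DELIMITER ',' SKIP 1;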

STREAM NAME
Supplies a COPY load stream identifier. Using a stream name helps to quickly identify a particular load. The STREAM NAME value that you supply in the load statement appears in the STREAM_NAME column of system tables LOAD_STREAMS and LOAD_SOURCES.

A valid stream name can contain any combination of alphanumeric or special characters up to 128 bytes in length.

For example:

=> COPY mytable FROM myfile 
   DELIMITER '|' STREAM NAME 'My stream name';
WITH parser
Specifies the parser to use when bulk loading columnar tables, one of the following:

By default, COPY uses the DELIMITER parser for UTF-8 format, delimited text input data. You do not specify the DELIMITER parser directly; absence of a specific parser indicates the default.

To use a flex table parser for column tables, use the PARSER parameter followed by a flex table parser argument. For supported flex table parsers, see Bulk loading data into flex tables.

When loading into flex tables, you must use a compatible parser. For supported flex table parsers, see Bulk loading data into flex tables.

COPY LOCAL does not support the NATIVE, NATIVE VARCHAR, ORC, and PARQUET parsers.

For parser support for complex data types, see the documentation of the specific parser.

For parser details, see Data formats in Data load.

NO COMMIT
Prevents the COPY statement from committing its transaction automatically when it finishes copying data. This option must be the last COPY statement parameter. For details, see Using transactions to stage a load.

This option is ignored by CREATE EXTERNAL TABLE AS COPY.

Parser-specific options

The following options apply only when using specific parsers.

DELIMITED parser

DELIMITER

Indicates the single ASCII character used to separate columns within each record of a file. You can use any ASCII value in the range E'\000' to E'\177', inclusive. You cannot use the same character for both the DELIMITER and NULL parameters. For more information, see Delimited data.

Default: Vertical bar ('|').

ENCLOSED [BY]

Sets the quote character within which to enclose data, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). By default, ENCLOSED BY has no value, meaning data is not enclosed by any sort of quote character.
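
For example, the following sketch loads comma-delimited data in which string values are enclosed in double quotes:

=> COPY t FROM STDIN DELIMITER ',' ENCLOSED BY '"';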

ESCAPE [AS]

Sets the escape character. Once set, the character following the escape character is interpreted literally, rather than as a special character. You can define an escape character using any ASCII value in the range E'\001' to E'\177', inclusive (any ASCII character except NULL: E'\000').

The COPY statement does not interpret the data it reads in as string literals. It also does not follow the same escape rules as other SQL statements (including the COPY parameters). When reading data, COPY interprets only the characters defined by these options as special values:

  • ESCAPE [AS]

  • DELIMITER

  • ENCLOSED [BY]

  • RECORD TERMINATOR

  • All COLLECTION options

Default: Backslash ('\').

NO ESCAPE

Eliminates escape-character handling. Use this option if you do not need any escape character and you want to prevent characters in your data from being interpreted as escape sequences.

RECORD TERMINATOR
Specifies the literal character string indicating the end of a data file record. For more information about using this parameter, see Delimited data.
TRAILING NULLCOLS
Specifies that if Vertica encounters a record with insufficient data to match the columns in the table column list, COPY inserts the missing columns with NULL values. For other information and examples, see Fixed-width format data.
COLLECTIONDELIMITER

For columns of collection types, indicates the single ASCII character used to separate elements within each collection. You can use any ASCII value in the range E'\000' to E'\177', inclusive. No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.

Default: Comma (',').

COLLECTIONOPEN, COLLECTIONCLOSE

For columns of collection types, these options indicate the characters that mark the beginning and end of the collection. It is an error to use these characters elsewhere within the list of elements without escaping them. No COLLECTION option may have the same value as any other COLLECTION option.

Default: Square brackets ('[' and ']').

COLLECTIONNULLELEMENT

The string representing a null element value in a collection. You can specify a null value as any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII value except NULL: E'\000'). No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.

Default: 'null'

COLLECTIONENCLOSE

For columns of collection types, sets the quote character within which to enclose individual elements, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000').

No COLLECTION option may have the same value as any other COLLECTION option.

Default: double quote ('"')
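
For example, the following sketch (assuming a table with an ARRAY column) loads collections whose elements are separated by semicolons:

=> COPY t FROM STDIN COLLECTIONDELIMITER ';';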

FIXEDWIDTH parser

SKIP BYTES integer
The total number of bytes in a record to skip.
TRIM
Trims the number of bytes you specify from a column. This option is only available when loading fixed-width data. You can set TRIM at the table level for a column, or as part of the COLUMN OPTION parameter.
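
For example, the following sketch loads fixed-width data into a hypothetical three-column table whose columns are 4, 10, and 2 bytes wide:

=> COPY fw FROM STDIN FIXEDWIDTH COLSIZES(4,10,2);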

8.2 - Restrictions

COPY has the following restrictions.

COPY has the following restrictions:

Invalid data

COPY considers the following data invalid:

  • Missing columns (an input line has fewer columns than the recipient table).

  • Extra columns (an input line has more columns than the recipient table).

  • Empty columns for an INTEGER or DATE/TIME data type. If a column is empty for either of these types, COPY does not use the default value that was defined by CREATE TABLE. However, if you do not supply a column option as part of the COPY statement, the default value is used.

  • Incorrect representation of a data type. For example, trying to load a non-numeric value into an INTEGER column is invalid.

Constraint violations

If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.

Empty line handling

When COPY encounters an empty line while loading data, the line is neither inserted nor rejected, but COPY increments the line record number. Consider this behavior when evaluating rejected records. If you return a list of rejected records and COPY encountered an empty row while loading data, the position of rejected records is not incremented by one, as demonstrated in the following example.

The example first loads values into a table that defines the first column as INT. Note the errors on rows 3, 4, and 8:

=> \! cat -n /home/dbadmin/test.txt
     1 1|A|2
     2 2|B|4
     3 A|D|7
     4 A|E|7
     5
     6
     7 6|A|3
     8 B|A|3

The empty rows (5 and 6) shift the reporting of the error on row 8:

=> SELECT row_number, rejected_data, rejected_reason FROM test_bad;
 row_number | rejected_data | rejected_reason
------------+---------------+----------------------------------------------
          3 | A|D|7 | Invalid integer format 'A' for column 1 (c1)
          4 | A|E|7 | Invalid integer format 'A' for column 1 (c1)
          6 | B|A|3 | Invalid integer format 'B' for column 1 (c1)
(3 rows)

Compressed file errors

When loading compressed files, COPY might abort and report an error if the file appears to be corrupted. For example, this behavior can occur if reading the header block fails.

Disk quota

Tables and schemas can have disk quotas. If a load would violate either quota, the operation fails. For more information, see Disk quotas.

8.3 - Parsers

Vertica supports several parsers to load different types of data.

Vertica supports several parsers to load different types of data. Some parsers are for use only with flex tables, as noted.

8.3.1 - DELIMITED

Use the DELIMITED parser, which is the default, to load delimited text data using COPY.

Use the DELIMITED parser, which is the default, to load delimited text data using COPY. You can specify the delimiter, escape characters, how to handle null values, and other parameters.

The DELIMITED parser supports both apportioned load and cooperative parse.

COPY options

The following options are specific to this parser. See Parameters for other applicable options.

DELIMITER

Indicates the single ASCII character used to separate columns within each record of a file. You can use any ASCII value in the range E'\000' to E'\177', inclusive. You cannot use the same character for both the DELIMITER and NULL parameters. For more information, see Delimited data.

Default: Vertical bar ('|').

ENCLOSED [BY]

Sets the quote character within which to enclose data, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). By default, ENCLOSED BY has no value, meaning data is not enclosed by any sort of quote character.

ESCAPE [AS]

Sets the escape character. Once set, the character following the escape character is interpreted literally, rather than as a special character. You can define an escape character using any ASCII value in the range E'\001' to E'\177', inclusive (any ASCII character except NULL: E'\000').

The COPY statement does not interpret the data it reads in as string literals. It also does not follow the same escape rules as other SQL statements (including the COPY parameters). When reading data, COPY interprets only the characters defined by these options as special values:

  • ESCAPE [AS]

  • DELIMITER

  • ENCLOSED [BY]

  • RECORD TERMINATOR

  • All COLLECTION options

Default: Backslash ('\').

NO ESCAPE

Eliminates escape-character handling. Use this option if you do not need any escape character and you want to prevent characters in your data from being interpreted as escape sequences.

RECORD TERMINATOR
Specifies the literal character string indicating the end of a data file record. For more information about using this parameter, see Delimited data.
TRAILING NULLCOLS
Specifies that if Vertica encounters a record with insufficient data to match the columns in the table column list, COPY inserts the missing columns with NULL values. For other information and examples, see Fixed-width format data.
COLLECTIONDELIMITER

For columns of collection types, indicates the single ASCII character used to separate elements within each collection. You can use any ASCII value in the range E'\000' to E'\177', inclusive. No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.

Default: Comma (',').

COLLECTIONOPEN, COLLECTIONCLOSE

For columns of collection types, these options indicate the characters that mark the beginning and end of the collection. It is an error to use these characters elsewhere within the list of elements without escaping them. No COLLECTION option may have the same value as any other COLLECTION option.

Default: Square brackets ('[' and ']').

COLLECTIONNULLELEMENT

The string representing a null element value in a collection. You can specify a null value as any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII value except NULL: E'\000'). No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.

Default: 'null'

COLLECTIONENCLOSE

For columns of collection types, sets the quote character within which to enclose individual elements, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000').

No COLLECTION option may have the same value as any other COLLECTION option.

Default: double quote ('"')

Data types

The DELIMITED parser supports reading one-dimensional collections (arrays or sets) of scalar types.

If the total size of an array exceeds the size defined by the target table, the parser rejects the row.

Examples

The following example shows the default behavior, in which the delimiter character is '|':

=> CREATE TABLE employees (id INT, name VARCHAR(50), department VARCHAR(50));
CREATE TABLE

=> COPY employees FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 42|Sheldon Cooper|Physics
>> 17|Howard Wolowitz|Astronomy
>> \.

=> SELECT * FROM employees;
 id |      name       | department
----+-----------------+------------
 17 | Howard Wolowitz | Astronomy
 42 | Sheldon Cooper  | Physics
(2 rows)

The following example shows loading array values with the default options.

=> CREATE TABLE researchers (id INT, name VARCHAR, grants ARRAY[VARCHAR], values ARRAY[INT]);
CREATE TABLE

=> COPY researchers FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 42|Sheldon Cooper|[US-7376,DARPA-1567]|[65000,135000]
>> 17|Howard Wolowitz|[NASA-1683,NASA-7867,SPX-76]|[16700,85000,45000]
>> \.

=> SELECT * FROM researchers;
 id |      name       |               grants               |       values
----+-----------------+------------------------------------+---------------------
 17 | Howard Wolowitz | ["NASA-1683","NASA-7867","SPX-76"] | [16700,85000,45000]
 42 | Sheldon Cooper  | ["US-7376","DARPA-1567"]           | [65000,135000]
(2 rows)

In the following example, collections are enclosed in braces and delimited by periods, and the arrays contain null values.

=> COPY researchers FROM STDIN COLLECTIONOPEN '{' COLLECTIONCLOSE '}' COLLECTIONDELIMITER '.';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 19|Leonard|{"us-1672".null."darpa-1963"}|{16200.null.16700}
>> \.

=> SELECT * FROM researchers;
 id |      name       |               grants               |       values
----+-----------------+------------------------------------+---------------------
 17 | Howard Wolowitz | ["NASA-1683","NASA-7867","SPX-76"] | [16700,85000,45000]
 42 | Sheldon Cooper  | ["US-7376","DARPA-1567"]           | [65000,135000]
 19 | Leonard         | ["us-1672",null,"darpa-1963"]      | [16200,null,16700]
(3 rows)
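
The DELIMITER and ENCLOSED BY options can be combined so that a comma inside a quoted string value loads literally instead of being treated as a column separator. A minimal sketch, reusing the employees table with an illustrative input row:

=> COPY employees FROM STDIN DELIMITER ',' ENCLOSED BY '"';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 7,"Cooper, Jr.",Physics
>> \.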

8.3.2 - FAVROPARSER

Parses data from an Avro file.

Parses data from an Avro file. The following requirements apply:

  • Avro files must be encoded in the Avro binary serialization encoding format, described in the Apache Avro standard. The parser also supports Snappy and deflate compression.

  • FAVROPARSER does not support Avro files with separate schema files. The Avro file must include the schema.

You can load complex types in the Avro source (arrays, structs, or combinations) with strong typing or as flexible complex types. A flexible complex type is loaded into a VMap column, as in flex tables. To load complex types as VMap columns, specify a column type of LONG VARBINARY. To preserve the indexing in complex types, set flatten_maps to false.

This parser can notify you if it finds keys in the data that are not part of the table definition. See Unmatched Keys.

When loading into a flex table, Vertica loads all data into the __raw__ (VMap) column, including complex types found in the data.

This parser does not support apportioned load or cooperative parse.

Syntax

FAVROPARSER ( [parameter=value[,...]] )

Parameters

flatten_maps
Boolean, whether to flatten all Avro maps. Key names are concatenated with nested levels. This value is recursive and affects all data in the load.

This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.

Default: true

flatten_arrays
Boolean, whether to flatten all Avro arrays. Key names are concatenated with nested levels. This value is recursive and affects all data in the load.

This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.

Default: false

flatten_records
Boolean, whether to flatten all Avro records. Key names are concatenated with nested levels. This value is recursive and affects all data in the load.

This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.

Default: true

reject_on_materialized_type_error

Boolean, whether to reject a data row that contains a materialized column value that cannot be coerced into a compatible data type. If the value is false and the type cannot be coerced, the parser sets the value in that column to NULL.

If the column is an array and the data to be loaded is too large, then false sets the column value to NULL and true rejects the row.

If the column is a strongly-typed complex type, as opposed to a flexible complex type, then a type mismatch anywhere in the complex type causes the entire column to be treated as a mismatch. The parser does not partially load complex types.

Default: false

suppress_warnings
String, which warnings to suppress:
  • unmatched_key (see Unmatched Keys)

  • true or t (suppress all warnings)

  • false or f (do not suppress warnings)

Default: false

Primitive data types

FAVROPARSER supports the following primitive data types, including as element types and field values in complex types.

AVRO Data Type Vertica Data Type Value
NULL NULL value No value
boolean Boolean data type A binary value
int INTEGER 32-bit signed integer
long INTEGER 64-bit signed integer
float DOUBLE PRECISION (FLOAT), synonymous with 64-bit IEEE FLOAT Single precision (32-bit) IEEE 754 floating-point number
double DOUBLE PRECISION (FLOAT) Double precision (64-bit) IEEE 754 floating-point number
bytes VARBINARY Sequence of 8-bit unsigned bytes
string VARCHAR Unicode character sequence

Avro logical types

FAVROPARSER supports the following Avro logical types. The target column must use a Vertica data type that supports the logical type. When you attempt to load data using an invalid logical type, the logical type is ignored and the underlying Avro type is used.

AVRO Logical Type Base Avro Type Supported Vertica Data Types

decimal

0 < precision ≤ 1024

0 ≤ scale ≤ precision

bytes or fixed

NUMERIC, Character

Vertica rejects the value if:

  • The Avro precision setting is greater than the precision setting for the target column.

  • For fixed types, the precision value is greater than what is allowed by the size attribute.

If the data type for the target column uses the default precision setting, the precision setting in the Avro schema overrides the default.

date integer DATE, Character

time-micros long TIME/TIMETZ, Character
time-millis int TIME/TIMETZ, Character

The time logical type does not provide a time zone value. For target columns that use the TIMETZ data type, Vertica uses UTC as the default.

timestamp-micros long TIMESTAMP/TIMESTAMPTZ, TIME/TIMETZ
timestamp-millis long TIMESTAMP/TIMESTAMPTZ, TIME/TIMETZ

For timestamp-millis only, the timezone is included and is represented as an offset to UTC. Additionally, the millisecond values are right-extended with padded zeros.

duration fixed INTERVAL, Character

Avro complex data types

The Avro format supports several complex data types. When loading into strongly-typed columns, you can use the ROW and ARRAY types to represent them. For example, Avro Record and Enums are structs (ROWs); see the Avro specification.

You can use ARRAY[ROW] to match an Avro map. You must name the ROW fields key and value. These are the names that the Avro format uses for those fields in the data, and the parser relies on field names to match data to table columns.

If the total size of an array exceeds the size defined by the target table, the parser sets the value to null.
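
For example, a sketch of a table definition that receives an Avro map field as an array of rows; the table name and file path are illustrative:

=> CREATE TABLE avro_orders (
      order_id INT,
      properties ARRAY[ROW(key VARCHAR(50), value VARCHAR(50))]  -- ROW field names must be key and value
   );
CREATE TABLE

=> COPY avro_orders FROM '/data/orders.avro' PARSER FAVROPARSER();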

When loading into flex tables or using flexible complex types, this parser handles Avro complex types as follows:

Record

The name of each field is used as a virtual column name. If flatten_records is true and several nesting levels are present, Vertica concatenates the record names to create the key name.

Map

The value of each map key is used as a virtual column name. If flatten_maps is true and several nesting levels are present, Vertica concatenates the key names to create the key name.

Enum

Vertica treats Avro Enums like records, with the name of the Enum as the key and the value as the value.

Array

Vertica treats Avro Arrays as key/value pairs. By default, the index of each element is the key. In the following example, product_detail is a Record with a field, product_category, that is an Array:

=> CREATE FLEX TABLE products;
CREATE TABLE

=> COPY products FROM :datafile WITH PARSER FAVROPARSER();
 Rows Loaded
-------------
           2
(1 row)

=> SELECT MAPTOSTRING(__raw__) FROM products ORDER BY __identity__;
                    maptostring
--------------------------------------------------------------------------------
 {
    "__name__": "Order",
    "customer_id": "111222",
    "order_details": {
        "0.__name__": "OrderDetail",
        "0.product_detail.__name__": "Product",
        "0.product_detail.price": "46.21",
        "0.product_detail.product_category": {
            "0": "electronics",
            "1": "printers",
            "2": "computers"
        },
        "0.product_detail.product_description": "hp printer X11ew description :\
P",
        "0.product_detail.product_hash": "\u0000\u0001\u0002\u0003\u0004",
        "0.product_detail.product_id": "999012",
        "0.product_detail.product_map.one": "1.1",
        "0.product_detail.product_map.two": "1.1",
        "0.product_detail.product_name": "hp printer X11ew",
        "0.product_detail.product_status": "ONLY_FEW_LEFT",
        "0.quantity": "3",
        "0.total": "354.34"
    },
    "order_id": "2389646",
    "total": "132.43"
}
...

If flatten_arrays is true and several nesting levels are present, Vertica concatenates the indices to create the key name.

=> COPY products FROM :datafile WITH PARSER FAVROPARSER(flatten_arrays=true);
 Rows Loaded
-------------
           2
(1 row)

=> SELECT MAPTOSTRING(__raw__) FROM products ORDER BY __identity__;
                    maptostring
--------------------------------------------------------------------------------

 {
    "__name__": "Order",
    "customer_id": "111222",
    "order_details.0.__name__": "OrderDetail",
    "order_details.0.product_detail.__name__": "Product",
    "order_details.0.product_detail.price": "46.21",
    "order_details.0.product_detail.product_category.0": "electronics",
    "order_details.0.product_detail.product_category.1": "printers",
    "order_details.0.product_detail.product_category.2": "computers",
    "order_details.0.product_detail.product_description": "hp printer X11ew des\
cription :P",
    "order_details.0.product_detail.product_hash": "\u0000\u0001\u0002\u0003\u0\
004",
    "order_details.0.product_detail.product_id": "999012",
    "order_details.0.product_detail.product_map.one": "1.1",
    "order_details.0.product_detail.product_map.two": "1.1",
    "order_details.0.product_detail.product_name": "hp printer X11ew",
    "order_details.0.product_detail.product_status": "ONLY_FEW_LEFT",
    "order_details.0.quantity": "3",
    "order_details.0.total": "354.34",
    "order_id": "2389646",
    "total": "132.43"
}
...

Union

Vertica treats Avro Unions as arrays.

Unmatched keys

Data being loaded can contain keys that are not part of the table definition. If you are loading into a flex table (or a flexible complex type column), no data is lost. For a table with strongly-defined columns, however, new keys cannot be loaded because the table does not have a place to put them.

This parser emits warnings if it finds new keys and if both of the following are true:

  • The target table is not a flex table.

  • The new key is not nested within a flexible complex type column.

New keys are logged in the UDX_EVENTS system table. If a new key is a complex type with nested keys, only the top-level key is logged. When you see a warning about unmatched keys, you can query this table and then use ALTER TABLE to modify your table definition for future loads.
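
A sketch of that workflow, assuming the load into a hypothetical table named weather reported an unmatched top-level key named region:

=> SELECT key, SUM(num_instances) FROM v_monitor.UDX_EVENTS
   WHERE event_type = 'UNMATCHED_KEY' GROUP BY key;

=> ALTER TABLE weather ADD COLUMN region VARCHAR(50);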

Querying an external table loads data and thus can trigger these warnings. To prevent them, set the suppress_warnings parameter to 'unmatched_key' or 'true':

=> CREATE EXTERNAL TABLE restaurants(
            name VARCHAR(50),
            menu ARRAY[ROW(item VARCHAR(50), price NUMERIC(8,2)),100])
   AS COPY FROM '/data/rest.avro'
   PARSER FAVROPARSER(suppress_warnings='unmatched_key');

Examples

This example shows how to create and load a flex table with Avro data using favroparser. After loading the data, you can query virtual columns:

=> CREATE FLEX TABLE avro_basic();
CREATE TABLE

=> COPY avro_basic FROM '/home/dbadmin/data/weather.avro' PARSER FAVROPARSER();
Rows Loaded
-------------
5
(1 row)

=> SELECT station, temp, time FROM avro_basic;
station | temp |     time
---------+------+---------------
mohali  | 0    | -619524000000
lucknow | 22   | -619506000000
norwich | -11  | -619484400000
ams     | 111  | -655531200000
baddi   | 78   | -655509600000
(5 rows)

For more information, see Avro data.

8.3.3 - FCEFPARSER

Parses ArcSight Common Event Format (CEF) log files.

Parses ArcSight Common Event Format (CEF) log files. This parser loads values directly into any table column with a column name that matches a source data key. The parser stores the data loaded into a flex table in a single VMap.

This parser is for use in flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.

Syntax

FCEFPARSER ( [parameter-name='value'[,...]] )

Parameters

delimiter
Single-character delimiter.

Default: ' '

record_terminator
Single-character record terminator.

Default: newline

trim
Boolean, specifies whether to trim white space from header names and key values.

Default: true

reject_on_unescaped_delimiter
Boolean, specifies whether to reject rows containing unescaped delimiters. The CEF standard does not permit them.

Default: false

Examples

The following example illustrates creating a sample flex table for CEF data and loading it with the fcefparser.

  1. Create a flex table cefdata:

    => create flex table cefdata();
    CREATE TABLE
    
  2. Load some basic CEF data, using the flex parser fcefparser:

    => copy cefdata from stdin parser fcefparser();
    Enter data to be copied followed by a newline.
    End with a backslash and a period on a line by itself.
    >> CEF:0|ArcSight|ArcSight|2.4.1|machine:20|New alert|High|
    >> \.
    
  3. Use the maptostring() function to view the contents of your cefdata flex table:

    => select maptostring(__raw__) from cefdata;
                      maptostring
    -------------------------------------------------------------
     {
       "deviceproduct" : "ArcSight",
       "devicevendor" : "ArcSight",
       "deviceversion" : "2.4.1",
       "name" : "New alert",
       "severity" : "High",
       "signatureid" : "machine:20",
       "version" : "0"
    }
    
    
    (1 row)
    
  4. Select some virtual columns from the cefdata flex table:

    
    => select deviceproduct, severity, deviceversion from cefdata;
     deviceproduct | severity | deviceversion
    ---------------+----------+---------------
     ArcSight      | High     | 2.4.1
    (1 row)
    

    For more information, see Common event format (CEF) data.

    See also

8.3.4 - FCSVPARSER

Parses CSV format (comma-separated values) data.

Parses CSV format (comma-separated values) data. Use this parser to load CSV data into columnar, flex, and hybrid tables. All data must be encoded in Unicode UTF-8 format. The fcsvparser parser supports the RFC 4180 standard for CSV data, and other options, to accommodate variations in CSV file format definitions. Invalid records are rejected. For more information about data formats, see Handling Non-UTF-8 input.

This parser is for use in flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.

Syntax

FCSVPARSER ( [parameter='value'[,...]] )

Parameters

type
The default parameter values for the parser, one of the following strings:
  • rfc4180

  • traditional

You do not have to use the type parameter when loading data that conforms to the RFC 4180 standard (such as MS Excel files). See Loading CSV data for the RFC4180 default parameters, and other options you can specify for traditional CSV files.

Default: RFC4180

delimiter
A single-character value used to separate fields in the CSV data.

Default: , (for rfc4180 and traditional)

escape
A single-character value used as an escape character to interpret the next character in the data literally.

Default:

  • rfc4180: "

  • traditional: \

enclosed_by
A single-character value. Use enclosed_by to include a value that is identical to the delimiter but should be interpreted literally. For example, if the data delimiter is a comma (,) and a value contains a comma ("my name is jane, and his is jim"), enclose that value in the enclosed_by character.

Default: "

record_terminator
A single-character value used to specify the end of a record.

Default:

  • rfc4180: \n

  • traditional: \r\n

header
Boolean, specifies whether to use the first row of data as a header column. When header=true (default), and no header exists, fcsvparser uses a default column heading. The default header consists of ucoln, where n is the column offset number, starting with 0 for the first column. You can specify custom column heading names using the header_names parameter, described next.

If you specify header=false, the fcsvparser parses the first row of input as data, rather than as column headers.

Default: true

header_names
A list of column header names, delimited by the character defined by the parser's delimiter parameter. Use this parameter to specify header names in a CSV file without a header row, or to override the column names present in the CSV source. To override one or more existing column names, specify the header names to use. This parameter overrides any header row in the data.
trim
Boolean, specifies whether to trim white space from header names and key values.

Default: true

omit_empty_keys
Boolean, specifies how the parser handles header keys without values. If true, keys with an empty value in the header row are not loaded.

Default: false

reject_on_duplicate
Boolean, specifies whether to ignore duplicate records (false), or to reject duplicates (true). In either case, the load continues.

Default: false

reject_on_empty_key
Boolean, specifies whether to reject any row containing a key without a value.

Default: false

reject_on_materialized_type_error
Boolean, specifies whether to reject any materialized column value that the parser cannot coerce into a compatible data type. See Loading CSV data.

Default: false

Examples

The following example loads data that complies with RFC 4180:

=> CREATE TABLE sample(a INT, b VARCHAR, c INT);
CREATE TABLE

=> COPY sample FROM stdin PARSER FCSVPARSER(header='false');
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 10,10,20
>> 10,"10",30
>> 10,"20""5",90
>> \.

=> SELECT * FROM sample;
 a  |  b   | c
----+------+----
 10 | 10   | 20
 10 | 10   | 30
 10 | 20"5 | 90
(3 rows)

Note the value 20"5, which came from the input "20""5". Per RFC 4180, to include a double-quote character in a string, you must double-quote the entire string and then escape the double-quote with another double-quote.

For more information and examples using other parameters of this parser, see Loading CSV data.
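
The following sketch loads a traditional-format CSV file, overriding the default delimiter; the flex table and file path are illustrative:

=> CREATE FLEX TABLE csv_trad();
CREATE TABLE

=> COPY csv_trad FROM '/home/dbadmin/data/legacy.csv' PARSER FCSVPARSER(type='traditional', delimiter=';');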

See also

8.3.5 - FDELIMITEDPAIRPARSER

Parses delimited data files.

Parses delimited data files. This parser provides a subset of the functionality in the parser fdelimitedparser. Use the fdelimitedpairparser when the data you are loading specifies pairs of column names with data in each row.

This parser is for use in flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.

Syntax

FDELIMITEDPAIRPARSER ( [parameter-name='value'[,...]] )

Parameters

delimiter
Specifies a single-character delimiter.

Default: ' '

record_terminator
Specifies a single-character record terminator.

Default: newline

trim
Boolean specifies whether to trim white space from header names and key values.

Default: true

Examples

The following example illustrates creating a sample flex table for simple delimited data, with two real columns, eventId and priority.

  1. Create a table:

    => create flex table CEFData(eventId int default(eventId::int), priority int default(priority::int) );
    CREATE TABLE
    
  2. Load a sample delimited OpenText ArcSight log file into the CEFData table, using the fdelimitedpairparser:

    => copy CEFData from '/home/release/kmm/flextables/sampleArcSight.txt' parser fdelimitedpairparser();
    Rows Loaded | 200
    
  3. After loading the sample data file, use maptostring() to display the virtual columns in the __raw__ column of CEFData:

    => select maptostring(__raw__) from CEFData limit 1;
                          maptostring
    -----------------------------------------------------------
    {
       "agentassetid" : "4-WwHuD0BABCCQDVAeX21vg==",
       "agentzone" : "3083",
       "agt" : "265723237",
       "ahost" : "svsvm0176",
       "aid" : "3tGoHuD0BABCCMDVAeX21vg==",
       "art" : "1099267576901",
       "assetcriticality" : "0",
       "at" : "snort_db",
       "atz" : "America/Los_Angeles",
       "av" : "5.3.0.19524.0",
       "cat" : "attempted-recon",
       "categorybehavior" : "/Communicate/Query",
       "categorydevicegroup" : "/IDS/Network",
       "categoryobject" : "/Host",
       "categoryoutcome" : "/Attempt",
       "categorysignificance" : "/Recon",
       "categorytechnique" : "/Scan",
       "categorytupledescription" : "An IDS observed a scan of a host.",
       "cnt" : "1",
       "cs2" : "3",
       "destinationgeocountrycode" : "US",
       "destinationgeolocationinfo" : "Richardson",
       "destinationgeopostalcode" : "75082",
       "destinationgeoregioncode" : "TX",
       "destinationzone" : "3133",
       "device product" : "Snort",
       "device vendor" : "Snort",
       "device version" : "1.8",
       "deviceseverity" : "2",
       "dhost" : "198.198.121.200",
       "dlat" : "329913940429",
       "dlong" : "-966644973754",
       "dst" : "3334896072",
       "dtz" : "America/Los_Angeles",
       "dvchost" : "unknown:eth1",
       "end" : "1364676323451",
       "eventid" : "1219383333",
       "fdevice product" : "Snort",
       "fdevice vendor" : "Snort",
       "fdevice version" : "1.8",
       "fdtz" : "America/Los_Angeles",
       "fdvchost" : "unknown:eth1",
       "lblstring2label" : "sig_rev",
       "locality" : "0",
       "modelconfidence" : "0",
       "mrt" : "1364675789222",
       "name" : "ICMP PING NMAP",
       "oagentassetid" : "4-WwHuD0BABCCQDVAeX21vg==",
       "oagentzone" : "3083",
       "oagt" : "265723237",
       "oahost" : "svsvm0176",
       "oaid" : "3tGoHuD0BABCCMDVAeX21vg==",
       "oat" : "snort_db",
       "oatz" : "America/Los_Angeles",
       "oav" : "5.3.0.19524.0",
       "originator" : "0",
       "priority" : "8",
       "proto" : "ICMP",
       "relevance" : "10",
       "rt" : "1099267573000",
       "severity" : "8",
       "shost" : "198.198.104.10",
       "signature id" : "[1:469]",
       "slat" : "329913940429",
       "slong" : "-966644973754",
       "sourcegeocountrycode" : "US",
       "sourcegeolocationinfo" : "Richardson",
       "sourcegeopostalcode" : "75082",
       "sourcegeoregioncode" : "TX",
       "sourcezone" : "3133",
       "src" : "3334891530",
       "start" : "1364676323451",
       "type" : "0"
    }
    
    (1 row)
    
  4. Select the eventID and priority real columns, along with two virtual columns, atz and destinationgeoregioncode:

    
    =>  select eventID, priority, atz, destinationgeoregioncode from CEFData limit 10;
      eventID   | priority |         atz         | destinationgeoregioncode
    ------------+----------+---------------------+--------------------------
     1218325417 |        5 | America/Los_Angeles |
     1219383333 |        8 | America/Los_Angeles | TX
     1219533691 |        9 | America/Los_Angeles | TX
     1220034458 |        5 | America/Los_Angeles | TX
     1220034578 |        9 | America/Los_Angeles |
     1220067119 |        5 | America/Los_Angeles | TX
     1220106960 |        5 | America/Los_Angeles | TX
     1220142122 |        5 | America/Los_Angeles | TX
     1220312009 |        5 | America/Los_Angeles | TX
     1220321355 |        5 | America/Los_Angeles | CA
    (10 rows)
    

See also

8.3.6 - FDELIMITEDPARSER

Parses data using a delimiter character to separate values.

Parses data using a delimiter character to separate values. The fdelimitedparser loads delimited data, storing it in a single-value VMap.

This parser is for use in flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.

Syntax

FDELIMITEDPARSER ( [parameter-name='value'[,...]] )

Parameters

delimiter
Single character delimiter.

Default: |

record_terminator
Single-character record terminator.

Default: \n

trim
Boolean, specifies whether to trim white space from header names and key values.

Default: true

header
Boolean, specifies that a header column exists. The parser uses col### for the column names if you use this parameter but no header exists.

Default: true

omit_empty_keys
Boolean, specifies how the parser handles header keys without values. If omit_empty_keys=true, keys with an empty value in the header row are not loaded.

Default: false

reject_on_duplicate
Boolean, specifies whether to ignore duplicate records (false), or to reject duplicates (true). In either case, the load continues.

Default: false

reject_on_empty_key
Boolean, specifies whether to reject any row containing a key without a value.

Default: false

reject_on_materialized_type_error
Boolean, specifies whether to reject any row value for a materialized column that the parser cannot coerce into a compatible data type. See Using flex table parsers.

Default: false

treat_empty_val_as_null
Boolean, specifies that empty fields become NULLs, rather than empty strings ('').

Default: true

Examples

  1. Create a flex table for delimited data:

    => CREATE FLEX TABLE delim_flex ();
    CREATE TABLE
    
  2. Use the fdelimitedparser to load some delimited data from STDIN, specifying a comma (,) column delimiter:

    => COPY delim_flex FROM STDIN parser fdelimitedparser (delimiter=',');
    Enter data to be copied followed by a newline.
    End with a backslash and a period on a line by itself.
    >> deviceproduct, severity, deviceversion
    >> ArcSight, High, 2.4.1
    >> \.
    

You can now query virtual columns in the delim_flex flex table:

=> SELECT deviceproduct, severity, deviceversion from delim_flex;
 deviceproduct | severity | deviceversion
---------------+----------+---------------
 ArcSight      | High     | 2.4.1
(1 row)
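
If the input has no header row, you can set header='false'; in the following sketch, the first row is then parsed as data and, per the header parameter described above, the generated col### column names are assumed to apply:

=> COPY delim_flex FROM STDIN parser fdelimitedparser (delimiter=',', header='false');
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> ArcSight,High,2.4.1
>> \.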

See also

8.3.7 - FJSONPARSER

Parses and loads a JSON file.

Parses and loads a JSON file. This file can contain either repeated JSON data objects (including nested maps), or an outer list of JSON elements.

When loading into a flex or hybrid table, the parser stores the JSON data in a single-value VMap. When loading into a hybrid or columnar table, the parser loads data directly into any table column with a column name that matches a key in the JSON source data.

You can load complex types in the JSON source (arrays, structs, or combinations) with strong typing or as flexible complex types. A flexible complex type is loaded into a VMap column, as in flex tables. To load complex types as VMap columns, specify a column type of LONG VARBINARY. To preserve the indexing in complex types, set flatten_maps to false.

This parser can notify you if it finds keys in the data that are not part of the table definition. See Unmatched Keys.

FJSONPARSER supports cooperative parse only if record_terminator is specified. It does not support apportioned load.

Syntax

FJSONPARSER ( [parameter=value[,...]] )

Parameters

flatten_maps
Boolean, whether to flatten sub-maps within the JSON data, separating map levels with a period (.). This value affects all data in the load, including nested maps.

This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.

Default: true

flatten_arrays
Boolean, whether to convert lists to sub-maps with integer keys. When lists are flattened, key names are concatenated as for maps. Lists are not flattened by default. This value affects all data in the load, including nested lists.

This parameter applies only to flex tables or VMap columns and is ignored when loading strongly-typed complex types.

Default: false

reject_on_duplicate
Boolean, whether to ignore duplicate records (false), or to reject duplicates (true). In either case, the load continues.

Default: false

reject_on_empty_key
Boolean, whether to reject any row containing a field key without a value.

Default: false

omit_empty_keys
Boolean, whether to omit any field key from the data that does not have a value. Other fields in the same record are loaded.

Default: false

record_terminator
When set, any invalid JSON records are skipped and parsing continues with the next record. Records must be terminated uniformly. For example, if your input file has JSON records terminated by newline characters, set this parameter to E'\n'. If any invalid JSON records exist, parsing continues after the next record_terminator.

Even if the data does not contain invalid records, specifying an explicit record terminator can improve load performance by allowing cooperative parse and apportioned load to operate more efficiently.

When you omit this parameter, parsing ends at the first invalid JSON record.
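
For example, a sketch of loading newline-terminated JSON records; the table and file path are illustrative:

=> COPY events FROM '/data/events.json' PARSER FJSONPARSER(record_terminator=E'\n');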

reject_on_materialized_type_error

Boolean, whether to reject a data row that contains a materialized column value that cannot be coerced into a compatible data type. If the value is false and the type cannot be coerced, the parser sets the value in that column to NULL.

If the column is an array and the data to be loaded is too large, then false sets the column value to NULL and true rejects the row.

If the column is a strongly-typed complex type, as opposed to a flexible complex type, then a type mismatch anywhere in the complex type causes the entire column to be treated as a mismatch. The parser does not partially load complex types.

Default: false

start_point
String, the name of a key in the JSON load data at which to begin parsing. The parser ignores all data before the start_point value. The value is loaded for each object in the file. The parser processes data after the first instance, and up to the second, ignoring any remaining data.
start_point_occurrence
Integer, the nth occurrence of the value you specify with start_point. Use in conjunction with start_point when the data has multiple start values and you know the occurrence at which to begin parsing.

Default: 1

suppress_nonalphanumeric_key_chars
Boolean, whether to suppress non-alphanumeric characters in JSON key values. The parser replaces these characters with an underscore (_) when this parameter is true.

Default: false

key_separator
Character for the parser to use when concatenating key names.

Default: period (.)

suppress_warnings
String, which warnings to suppress:
  • unmatched_key (see Unmatched Keys)

  • true or t (suppress all warnings)

  • false or f (do not suppress warnings)

Default: false

Data types

If the total size of an array exceeds the size defined by the target table, the parser sets the value to null.

Unmatched keys

Data being loaded can contain keys that are not part of the table definition. If you are loading into a flex table (or a flexible complex type column), no data is lost. For a table with strongly-defined columns, however, new keys cannot be loaded because the table does not have a place to put them.

This parser emits warnings if it finds new keys and if both of the following are true:

  • The target table is not a flex table.

  • The new key is not nested within a flexible complex type column.

New keys are logged in the UDX_EVENTS system table. If a new key is a complex type with nested keys, only the top-level key is logged. When you see a warning about unmatched keys, you can query this table and then use ALTER TABLE to modify your table definition for future loads.

Querying an external table loads data and thus can trigger these warnings. To prevent them, set the suppress_warnings parameter to 'unmatched_key' or 'true':

=> CREATE EXTERNAL TABLE restaurants(
            name VARCHAR(50),
            menu ARRAY[ROW(item VARCHAR(50), price NUMERIC(8,2)),100])
   AS COPY FROM '/data/rest.json'
   PARSER FJSONPARSER(suppress_warnings='unmatched_key');

Examples

The following example loads JSON data from STDIN using the default parameters:

=> CREATE TABLE people(age INT, name VARCHAR);
CREATE TABLE

=> COPY people FROM STDIN PARSER FJSONPARSER();
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> {"age": 5, "name": "Tim"}
>>  {"age": 3}
>>  {"name": "Fred"}
>>  {"name": "Bob", "age": 10}
>> \.
=> SELECT * FROM people;
 age | name
-----+------
     | Fred
  10 | Bob
   5 | Tim
   3 |
(4 rows)

The following example uses the reject_on_duplicate parameter to reject duplicate values:

=> CREATE FLEX TABLE json_dupes();
CREATE TABLE
=> COPY json_dupes FROM stdin PARSER fjsonparser(reject_on_duplicate=true)
exceptions '/home/dbadmin/load_errors/json_e.out'
rejected data '/home/dbadmin/load_errors/json_r.out';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> {"a":"1","a":"2","b":"3"}
>> \.
=>  \!cat /home/dbadmin/load_errors/json_e.out
COPY: Input record 1 has been rejected (Rejected by user-defined parser).
Please see /home/dbadmin/load_errors/json_r.out, record 1 for the rejected record.
COPY: Loaded 0 rows, rejected 1 rows.

The following example loads array data:

$ cat addrs.json
 {"number": 301, "street": "Grant", "attributes": [1, 2, 3, 4]}

=> CREATE EXTERNAL TABLE customers(number INT, street VARCHAR, attributes ARRAY[INT])
    AS COPY FROM 'addrs.json' PARSER fjsonparser();

=> SELECT number, street, attributes FROM customers;
 number | street | attributes
--------+--------+------------
    301 | Grant  | [1,2,3,4]
(1 row)

The following example loads a flexible complex type, rejecting rows that have empty keys within the nested records. Notice that while the data has two restaurants, one has a key name that is an empty string. This one is rejected:

$ cat rest1.json
{
    "name" : "Bob's pizzeria",
    "cuisine" : "Italian",
    "location_city" : ["Cambridge", "Pittsburgh"],
    "menu" : [{"item" : "cheese pizza", "" : "$8.25"},
              {"item" : "spinach pizza", "price" : "$10.50"}]
}
{
    "name" : "Bakersfield Tacos",
    "cuisine" : "Mexican",
    "location_city" : ["Pittsburgh"],
    "menu" : [{"item" : "veggie taco", "price" : "$9.95"},
              {"item" : "steak taco", "price" : "$10.95"}]
}

=> CREATE TABLE rest (name VARCHAR, cuisine VARCHAR, location_city LONG VARBINARY, menu LONG VARBINARY);

=> COPY rest FROM '/data/rest1.json'
    PARSER fjsonparser(flatten_maps=false, reject_on_empty_key=true);
Rows Loaded
------------
       1
(1 row)

=> SELECT maptostring(location_city), maptostring(menu) FROM rest;
        maptostring        |                          maptostring
---------------------------+-------------------------------------------------------
 {
    "0": "Pittsburgh"
} | {
    "0": {
        "item": "veggie taco",
        "price": "$9.95"
    },
    "1": {
        "item": "steak taco",
        "price": "$10.95"
    }
}
(1 row)

To instead load partial data, use omit_empty_keys to bypass the missing keys while loading everything else:



=> COPY rest FROM '/data/rest1.json'
    PARSER fjsonparser(flatten_maps=false, omit_empty_keys=true);
 Rows Loaded
-------------
           2
(1 row)

=> SELECT maptostring(location_city), maptostring(menu) from rest;
                   maptostring                   |               maptostring
-------------------------------------------------+---------------------------------
 {
    "0": "Pittsburgh"
}                       | {
    "0": {
        "item": "veggie taco",
        "price": "$9.95"
    },
    "1": {
        "item": "steak taco",
        "price": "$10.95"
    }
}
 {
    "0": "Cambridge",
    "1": "Pittsburgh"
} | {
    "0": {
        "item": "cheese pizza"
    },
    "1": {
        "item": "spinach pizza",
        "price": "$10.50"
    }
}
(2 rows)

To instead load this data with strong typing, define the complex types in the table:

=> CREATE EXTERNAL TABLE restaurants
  (name VARCHAR, cuisine VARCHAR,
   location_city ARRAY[VARCHAR(80)],
   menu ARRAY[ ROW(item VARCHAR(80), price FLOAT) ]
  )
 AS COPY FROM '/data/rest.json' PARSER fjsonparser();

=> SELECT * FROM restaurants;
       name        | cuisine |       location_city        |                    \
                menu
-------------------+---------+----------------------------+--------------------\
--------------------------------------------------------
 Bob's pizzeria    | Italian | ["Cambridge","Pittsburgh"] | [{"item":"cheese pi\
zza","price":0.0},{"item":"spinach pizza","price":0.0}]
 Bakersfield Tacos | Mexican | ["Pittsburgh"]             | [{"item":"veggie ta\
co","price":0.0},{"item":"steak taco","price":0.0}]
(2 rows)

In the following example, the data contains two new fields. One is a top-level field (a new column), and the other is a new field on an existing struct. The new fields are recorded in the UDX_EVENTS system table:

=> COPY rest FROM '/data/rest2.json' PARSER FJSONPARSER();
WARNING 10596:  Warning in UDx call in user-defined object [FJSONParser], code: 0, message:
Data source contained keys which did not match table schema
HINT:  SELECT key, sum(num_instances) FROM v_monitor.udx_events WHERE event_type = 'UNMATCHED_KEY' GROUP BY key
 Rows Loaded
-------------
           2
(1 row)

=> SELECT key, SUM(num_instances) FROM v_monitor.UDX_EVENTS
   WHERE event_type = 'UNMATCHED_KEY' GROUP BY key;
          key           | SUM
------------------------+-----
 chain                  |   1
 menu.elements.calories |   7
(2 rows)

For other examples, see JSON data.

8.3.8 - FREGEXPARSER

Parses a regular expression, matching columns to the contents of the named regular expression groups.

Parses a regular expression, matching columns to the contents of the named regular expression groups.

This parser is for use in flex tables only. All flex parsers store the data as a single VMap in the LONG VARBINARY __raw__ column. If a data row is too large to fit in the column, it is rejected. Vertica supports null values for loading data with NULL-specified columns.

Syntax

FREGEXPARSER ( pattern=[parameter-name='value'[,...]] )

Parameters

pattern
Specifies the regular expression of data to match.

Default: Empty string ("")

use_jit
Boolean, specifies whether to use just-in-time compiling when parsing the regular expression.

Default: false

record_terminator
Specifies the character used to separate input records.

Default: \n

logline_column
A string that captures the destination column containing the full string that the regular expression matched.

Default: Empty string ("")

Example

These examples use the following regular expression, which searches for information that includes the timestamp, date, thread_name, and thread_id strings.

This example expression loads any thread_id hex value, regardless of whether it has a 0x prefix: (?<thread_id>(?:0x)?[0-9a-f]+).

'^(?<time>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d+)
 (?<thread_name>[A-Za-z ]+):(?<thread_id>(?:0x)?[0-9a-f]+)
-?(?<transaction_id>[0-9a-f])?(?:[(?<component>\w+)]
\<(?<level>\w+)\> )?(?:<(?<elevel>\w+)> @[?(?<enode>\w+)]?: )
?(?<text>.*)'
  1. Create a flex table (vlog) to contain the results of a Vertica log file. For this example, we made a copy of a log file in the directory /home/dbadmin/data/vertica.log:

    => CREATE FLEX TABLE vlog1();
    CREATE TABLE
    
  2. Use the fregexparser with the sample regular expression to load data from the log file. Be sure to remove the line breaks from the expression shown here before using it:

    =>  COPY vlog1 FROM '/home/dbadmin/tempdat/KMvertica.log'
    PARSER FREGEXPARSER(pattern=
    '^(?<time>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d+)
     (?<thread_name>[A-Za-z ]+):(?<thread_id>(?:0x)?[0-9a-f]+)
    -?(?<transaction_id>[0-9a-f])?(?:[(?<component>\w+)]
    \<(?<level>\w+)\> )?(?:<(?<elevel>\w+)> @[?(?<enode>\w+)]?: )
    ?(?<text>.*)'
    
    );
     Rows Loaded
    -------------
           31049
    (1 row)

  3. After successfully loading data, use the MAPTOSTRING() function with the table's __raw__ column. The four rows (LIMIT 4) that the query returns are regular expression results of the KMvertica.log file, parsed with fregexparser. The output shows thread_id values with a preceding 0x or without:

    => SELECT maptostring(__raw__) FROM vlog1 LIMIT 4;
                            maptostring
    -------------------------------------------------------------------------------------
    {
      "text" : " [Init] <INFO> Log /home/dbadmin/VMart/v_vmart_node0001_catalog/vertica.log 
    opened; #2",
      "thread_id" : "0x7f2157e287c0",
      "thread_name" : "Main",
      "time" : "2017-03-21 23:30:01.704"
    }
    
    {
      "text" : " [Init] <INFO> Processing command line: /opt/vertica/bin/vertica -D 
    /home/dbadmin/VMart/v_vmart_node0001_catalog -C VMart -n v_vmart_node0001 -h 
    10.20.100.247 -p 5433 -P 4803 -Y ipv4",
      "thread_id" : "0x7f2157e287c0",
      "thread_name" : "Main",
      "time" : "2017-03-21 23:30:01.704"
    }
    
    {
      "text" : " [Init] <INFO> Starting up Vertica Analytic Database v8.1.1-20170321",
      "thread_id" : "7f2157e287c0",
      "thread_name" : "Main",
      "time" : "2017-03-21 23:30:01.704"
    }
    
    {
      "text" : " [Init] <INFO> Compiler Version: 4.8.2 20140120 (Red Hat 4.8.2-15)",
      "thread_id" : "7f2157e287c0",
      "thread_name" : "Main",
      "time" : "2017-03-21 23:30:01.704"
    }
    (4 rows)
    

See also

8.3.9 - ORC

Use the ORC clause with the COPY FROM statement to load data in the ORC format.

Use the ORC clause with the COPY statement to load data in the ORC format. When loading data into Vertica, you can read all primitive types, UUIDs, and complex types.

By default, the ORC parser uses strong schema matching, meaning that columns in the data must exactly match the columns in the table using the data. You can optionally use Loose (soft) Schema Matching.

If the table definition includes columns of primitive types and those columns are not in the data, the parser fills those columns with NULL. If the table definition includes columns of complex types, those columns must be present in the data.

This parser does not support apportioned load or cooperative parse.

Syntax

ORC ( [ parameter=value[,...] ] )

Parameters

All parameters are optional.

hive_partition_cols
Comma-separated list of columns that are partition columns in the data.
allow_no_match
Whether to accept a path containing a glob with no matching files and report zero rows in query results. If this parameter is not set, Vertica returns an error if the path in the FROM clause does not match at least one file.
do_soft_schema_match_by_name
Whether to enable loose schema matching (true) instead of the strict matching based on column order in the table definition and ORC file (false, default). See Loose Schema Matching for more information.

Default: false.

reject_on_materialized_type_error
Boolean, applies only if do_soft_schema_match_by_name is true. Specifies what to do when loose schema matching is being used and a value cannot be coerced from the data to the target column type. A value of true means to reject the row; a value of false means to use NULL for the value or, for strings that are too long, truncate. See the table of type coercions for coercible type mappings.

Default: true.

Loose schema matching

By default, the ORC parser uses strong schema matching. This means that all columns in the ORC data must be loaded, and they must be loaded in the same order as in the data. However, there are times when you only want to pull certain columns, or you want to be able to accommodate future changes in the ORC schema. This is called loose (or soft) schema matching.

Use the do_soft_schema_match_by_name parameter to enable loose schema matching. This setting has the following effects:

  • Columns in the data are matched to columns in the table by their names. Names must exactly match but are case-insensitive.

  • Columns that exist in the ORC data but are not part of the table definition are ignored.

  • Columns that exist in the table definition but not the ORC data are filled with NULL.

  • If the same case-insensitive column name occurs more than once in the ORC data, the parser uses the last one. (This situation can arise when using data written by tools that are case-sensitive.)

  • Column types do not need to exactly match, so long as the data type in the ORC file can be coerced to the type used by the table.

While the Parquet parser applies loose schema matching to both column and field names, the ORC parser applies it only to column names.

Data types

This parser can read all primitive types, UUIDs, and complex types.

If the total size of an array exceeds the size defined by the target table, the parser rejects the row.

The following mappings are supported with type coercion and loose schema matching:

ORC Physical Type Coercible to Vertica Data Type
BOOLEAN BOOLEAN
BYTE, SHORT, INT, LONG INT
FLOAT, DOUBLE FLOAT
DECIMAL NUMERIC
DATE DATE
TIMESTAMP TIMESTAMP, TIMESTAMPTZ
STRING, CHAR, VARCHAR CHAR, VARCHAR, LONG VARCHAR
BINARY BINARY, VARBINARY, LONG VARBINARY

Examples

The ORC clause does not use the PARSER option:

=> CREATE EXTERNAL TABLE orders (...)
   AS COPY FROM 's3://DataLake/orders.orc' ORC;

You can read a map column as an array of rows, as in the following example:

=> CREATE EXTERNAL TABLE orders
 (orderkey INT,
  custkey INT,
  prods ARRAY[ROW(key VARCHAR(10), value DECIMAL(12,2))],
  orderdate DATE
 ) AS COPY FROM '...' ORC;
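
To read only some columns from ORC data by matching on column names, enable loose schema matching. A sketch, assuming the ORC file contains more columns than the table definition:

=> CREATE EXTERNAL TABLE orders_summary (orderkey INT, orderdate DATE)
   AS COPY FROM 's3://DataLake/orders.orc'
   ORC(do_soft_schema_match_by_name='true');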

8.3.10 - PARQUET

Use the PARQUET parser with the COPY FROM statement to load data in the Parquet format.

Use the PARQUET parser with the COPY statement to load data in the Parquet format. When loading data into Vertica you can read all primitive types, UUIDs, and complex types.

By default, the Parquet parser uses strong schema matching, meaning that columns in the data must exactly match the columns in the table using the data. You can optionally use Loose Schema Matching.

When loading Parquet data, Vertica caches the Parquet metadata to improve efficiency. This cache uses local TEMP storage and is not used if TEMP is remote. See the ParquetMetadataCacheSizeMB configuration parameter to change the size of the cache.

This parser does not support apportioned load or cooperative parse.

Syntax

PARQUET ( [ parameter=value[,...] ] )

Parameters

All parameters are optional.

hive_partition_cols
Comma-separated list of columns that are partition columns in the data.
allow_no_match
Boolean. Whether to accept a path containing a glob with no matching files and report zero rows in query results. If this parameter is not set, Vertica returns an error if the path in the FROM clause does not match at least one file.
allow_long_varbinary_match_complex_type
Boolean. Whether to enable flexible column types (see Flexible complex types). If true, the Parquet parser allows a complex type in the data to match a table column defined as LONG VARBINARY. If false, the Parquet parser requires strong typing of complex types. With the parameter set you can still use strong typing. Set this parameter to false if you want use of flexible columns to be treated as an error.
do_soft_schema_match_by_name
Whether to enable loose schema matching (true) instead of the strict matching based on column order in the table definition and parquet file (false, default). See Loose Schema Matching for more information.

Default: false.

reject_on_materialized_type_error
Boolean, applies only if do_soft_schema_match_by_name is true. Specifies what to do when loose schema matching is being used and a value cannot be coerced from the data to the target column type. A value of true means to reject the row; a value of false means to use NULL for the value or, for strings that are too long, truncate. See the table of type coercions for coercible type mappings.

Default: true.

Loose schema matching

By default, the Parquet parser uses strong schema matching. This means that all columns and struct fields in the Parquet data must be loaded, and they must be loaded in the same order as in the data. However, there are times when you only want to pull certain columns or fields, or you want to be able to accommodate future changes in the Parquet schema. This is called loose (or soft) schema matching.

Use the do_soft_schema_match_by_name parameter to enable loose schema matching. This setting has the following effects:

  • Columns or struct fields in the data are matched to columns or fields in the table by their names. Names must exactly match but are case-insensitive.

  • Columns or fields that exist in the Parquet data but are not part of the table definition are ignored.

  • Columns or fields that exist in the table definition but not the Parquet data are filled with NULL. The parser logs an UNMATCHED_TABLE_COLUMNS_PARQUETPARSER event in QUERY_EVENTS.

  • If the same case-insensitive column name occurs more than once in the Parquet data, the parser uses the last one. (This situation can arise when using data written by tools that are case-sensitive.)

  • Column and field types do not need to exactly match, so long as the data type in the Parquet file can be coerced to the type used by the table. If a type cannot be coerced, the parser logs a TYPE_MISMATCH_COLUMNS_PARQUETPARSER event in QUERY_EVENTS. If reject_on_materialized_type_error is true then the parser rejects the row. If it is false, the parser uses NULL or, for string values that are too long, truncates the value.

Data types

The Parquet parser maps Parquet data types to Vertica data types as follows.

Parquet Logical Type Vertica Data Type
StringLogicalType VARCHAR
MapLogicalType ARRAY[ROW]
ListLogicalType ARRAY/SET
IntLogicalType INT/NUMERIC
DecimalLogicalType NUMERIC
DateLogicalType DATE
TimeLogicalType TIME
TimestampLogicalType TIMESTAMP
UUIDLogicalType UUID

If the total size of an array exceeds the size defined by the target table, the parser rejects the row.

The following logical types are not supported:

  • EnumLogicalType
  • IntervalLogicalType (Intervals exported using EXPORT TO PARQUET do not use this logical type)
  • JSONLogicalType
  • BSONLogicalType
  • UnknownLogicalType

The Parquet parser supports the following mappings of physical types:

Parquet Physical Type Vertica Data Type
BOOLEAN BOOLEAN
INT32/INT64 INT
INT64 (a number of microseconds) INTERVAL
INT96 Supported only for TIMESTAMP
FLOAT DOUBLE
DOUBLE DOUBLE
BYTE_ARRAY VARBINARY
FIXED_LEN_BYTE_ARRAY BINARY

The following mappings are supported with type coercion and loose schema matching.

Parquet Physical Type Coercible to Vertica Data Type
BOOLEAN BOOLEAN
INT32, INT64, BOOLEAN INT
FLOAT, DOUBLE DOUBLE
INT32, INT96 DATE
INT64, INT96 TIMESTAMP, TIMESTAMPTZ

INT64 (if precision > 0: also INT32, BYTE_ARRAY, FIXED_LEN_BYTE_ARRAY) NUMERIC
BYTE_ARRAY CHAR, VARCHAR, LONG VARCHAR, BINARY, VARBINARY, LONG VARBINARY
FIXED_LEN_BYTE_ARRAY UUID

Vertica supports only 3-level-encoded arrays, not 2-level-encoded.

Examples

Use the PARQUET clause without the PARSER option:

=> COPY sales FROM 's3://DataLake/sales.parquet' PARQUET;

In the following example, the data directory contains no files:

=> CREATE EXTERNAL TABLE customers (...)
    AS COPY FROM 'webhdfs:///data/*.parquet' PARQUET;
=> SELECT COUNT(*) FROM customers;
ERROR 7869: No files match when expanding glob: [webhdfs:///data/*.parquet]

To read zero rows instead of producing an error, use the allow_no_match parameter:

=> CREATE EXTERNAL TABLE customers (...)
    AS COPY FROM 'webhdfs:///data/*.parquet'
       PARQUET(allow_no_match='true');
=> SELECT COUNT(*) FROM customers;
 count
-------
     0
(1 row)

To allow reading a complex type (menu, in this example) as a flexible column type, use the allow_long_varbinary_match_complex_type parameter:

=> CREATE EXTERNAL TABLE restaurants
    (name VARCHAR, cuisine VARCHAR, location_city ARRAY[VARCHAR], menu LONG VARBINARY)
    AS COPY FROM '/data/rest*.parquet'
    PARQUET(allow_long_varbinary_match_complex_type='True');

To read only some columns from the restaurant data, use loose schema matching:

=> CREATE EXTERNAL TABLE restaurants(name VARCHAR, cuisine VARCHAR)
    AS COPY FROM '/data/rest*.parquet'
    PARQUET(allow_long_varbinary_match_complex_type='True',
            do_soft_schema_match_by_name='True');

=> SELECT * FROM restaurants;
       name        | cuisine
-------------------+----------
 Bob's pizzeria    | Italian
 Bakersfield Tacos | Mexican
(2 rows)

8.4 - Examples

For additional COPY examples, see the reference pages for specific parsers, including: DELIMITED (Parser), ORC (Parser), PARQUET (Parser), FJSONPARSER (Parser), and FAVROPARSER (Parser).

For additional COPY examples, see the reference pages for specific parsers, including: DELIMITED, ORC, PARQUET, FJSONPARSER, and FAVROPARSER.

Specifying string options

Use COPY with FORMAT, DELIMITER, NULL, and ENCLOSED BY options:

=> COPY public.customer_dimension (customer_since FORMAT 'YYYY')
   FROM STDIN
   DELIMITER ','
   NULL AS 'null'
   ENCLOSED BY '"';

This example sets and references a vsql variable for the input file:

=> \set input_file ../myCopyFromLocal/large_table.gzip
=> COPY store.store_dimension
   FROM :input_file
   DELIMITER '|'
   NULL ''
   RECORD TERMINATOR E'\f';

Including multiple source files

Create a table and then copy multiple source files to it:

=> CREATE TABLE sampletab (a int);
CREATE TABLE

=> COPY sampletab FROM '/home/dbadmin/one.dat', '/home/dbadmin/two.dat';
 Rows Loaded
-------------
           2
(1 row)

Use wildcards to indicate a group of files:

=> COPY myTable FROM 'webhdfs:///mydirectory/ofmanyfiles/*.dat';

Wildcards can include regular expressions:

=> COPY myTable FROM 'webhdfs:///mydirectory/*_[0-9]';

Specify multiple paths in a single COPY statement:

=> COPY myTable FROM 'webhdfs:///data/sales/01/*.dat', 'webhdfs:///data/sales/02/*.dat',
    'webhdfs:///data/sales/historical.dat';

Distributing a load

Load data that is shared across all nodes. Vertica distributes the load across all nodes, if possible:

=> COPY sampletab FROM '/data/file.dat' ON ANY NODE;

Load data from two files. Because the first load file does not specify nodes (or ON ANY NODE), the initiator performs the load. Loading the second file is distributed across all nodes:

=> COPY sampletab FROM '/data/file1.dat', '/data/file2.dat' ON ANY NODE;

Specify different nodes for each load file:

=> COPY sampletab FROM '/data/file1.dat' ON (v_vmart_node0001, v_vmart_node0002),
    '/data/file2.dat' ON (v_vmart_node0003, v_vmart_node0004);

Loading data from shared storage

To load data from shared storage, use URLs in the corresponding schemes:

  • HDFS: [[s]web]hdfs://[nameservice]/path

  • S3: s3://bucket/path

  • Google Cloud: gs://bucket/path

  • Azure: azb://account/container/path

Load a file stored in HDFS using the default name node or name service:

=> COPY t FROM 'webhdfs:///opt/data/file1.dat';

Load data from a particular HDFS name service (testNS). You specify a name service if your database is configured to read from more than one HDFS cluster:

=> COPY t FROM 'webhdfs://testNS/opt/data/file2.csv';

Load data from an S3 bucket:

=> COPY t FROM 's3://AWS_DataLake/*' ORC;

Partitioned data

Data files can be partitioned using the directory structure, such as:

path/created=2016-11-01/region=northeast/*
path/created=2016-11-01/region=central/*
path/created=2016-11-01/region=southeast/*
path/created=2016-11-01/...
path/created=2016-11-02/region=northeast/*
path/created=2016-11-02/region=central/*
path/created=2016-11-02/region=southeast/*
path/created=2016-11-02/...
path/created=2016-11-03/...
path/...

Load partition columns using the PARTITION COLUMNS option:

=> CREATE EXTERNAL TABLE records (id int, name varchar(50), created date, region varchar(50))
   AS COPY FROM 'webhdfs:///path/*/*/*'
   PARTITION COLUMNS created, region;

Using filler columns

In the following example, the table has columns for first name, last name, and full name, but the data being loaded contains columns for first, middle, and last names. The COPY statement reads all of the source data but only loads the source columns for first and last names. It constructs the data for the full name by concatenating each of the source data columns, including the middle name. The middle name is read as a FILLER column so it can be used in the concatenation, but is ignored otherwise. (There is no table column for middle name.)

=> CREATE TABLE names(first VARCHAR(20), last VARCHAR(20), full VARCHAR(60));
CREATE TABLE
=> COPY names(first,
              middle FILLER VARCHAR(20),
              last,
              full AS first||' '||middle||' '||last)
      FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> Marc|Gregory|Smith
>> Sue|Lucia|Temp
>> Jon|Pete|Hamilton
>> \.
=> SELECT * from names;
 first |   last   |        full
-------+----------+--------------------
 Jon   | Hamilton | Jon Pete Hamilton
 Marc  | Smith    | Marc Gregory Smith
 Sue   | Temp     | Sue Lucia Temp
(3 rows)

In the following example, JSON data contains an unencrypted password. The JSON data and the table definition use the same name for the field. Instead of loading the value directly, the COPY statement uses a filler column and a (hypothetical) function to encrypt the value. Both the table column (users.password) and the filler column ("*FILLER*".password) are fully qualified:

=> CREATE TABLE users(user VARCHAR(32), password VARCHAR(32));

=> COPY users(user, password FILLER VARCHAR(32),
        users.password AS encrypt("*FILLER*".password))
   FROM ... PARSER FJSONPARSER();

Setting defaults for null values

When creating a table, you can specify a default value for a column. When loading data, Vertica uses the default if the column value is not present. However, defaults do not override null values.

To intercept null values and apply defaults, you can first stage the load with COPY NO COMMIT and then update the values:

=> CREATE TABLE sales (id INT, shipped TIMESTAMP(6) DEFAULT now());

=> COPY sales FROM ... NO COMMIT;

=> UPDATE sales SET shipped = DEFAULT WHERE epoch IS NULL;

=> COMMIT;

All Vertica tables have an implicit epoch column, which records when the data was committed. For data that is not yet committed, this value is null. You can use this column to identify the just-loaded data and update it before committing.

Loading data into a flex table

Create a Flex table and copy JSON data into it:

=> CREATE FLEX TABLE darkdata();
CREATE TABLE
=> COPY darkdata FROM '/myTest/Flexible/DATA/tweets_12.json' PARSER FJSONPARSER();
 Rows Loaded
-------------
          12
(1 row)

Using named pipes

COPY supports named pipes that follow the same naming conventions as file names on the given file system. Permissions are open, write, and close.

Create a named pipe and set two vsql variables:

=> \! mkfifo  pipe1
=> \set dir `pwd`/
=> \set file '''':dir'pipe1'''

Copy an uncompressed file from the named pipe:

=> \! cat pf1.dat > pipe1 &
=> COPY large_tbl FROM :file delimiter '|';
=> SELECT * FROM large_tbl;
=> COMMIT;

Loading compressed data

Copy a GZIP file from a named pipe and uncompress it:

=> \! gzip pf1.dat
=> \! cat pf1.dat.gz > pipe1 &
=> COPY large_tbl FROM :file ON site01 GZIP delimiter '|';
=> SELECT * FROM large_tbl;
=> COMMIT;
=> \! gunzip pf1.dat.gz

9 - COPY FROM VERTICA

Imports data from another Vertica database.

Imports data from another Vertica database. COPY FROM VERTICA is similar to COPY, but supports only a subset of its parameters.

Syntax

COPY [[{database | namespace }.]target-schema.]target-table
    [( target-columns )]
    FROM VERTICA {source-database | source-namespace }.[source-schema.]source-table
    [( source-columns )]
    [STREAM NAME 'stream name']
    [NO COMMIT]

Arguments

{ database | namespace }
Name of the database or namespace that contains table:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

schema
Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.

If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.

target-table
The target table for the imported data. Vertica loads the data into all projections that include columns from the schema table.
target-columns
A comma-delimited list of columns in target-table to store the copied data. See Mapping Between Target and Source Columns below.

You cannot use FILLER columns or columns of complex types, except native arrays, as part of the column definition.

{ source-database | source-namespace }
The source of the data to import. Namespaces are available only in Eon Mode.

A connection to the source database must already exist in the current session before starting the copy operation; otherwise Vertica returns an error. For details, see CONNECT TO VERTICA.

[source-schema.]source-table
The table that is the source of the imported data. If schema is any schema other than public or if you specified a source-namespace, you must supply the schema name.
source-columns
A comma-delimited list of the columns in the source table to import. If omitted, all columns are exported. Columns cannot be of complex types. See Mapping Between Target and Source Columns below.
STREAM NAME
A COPY load stream identifier. Using a stream name helps to quickly identify a particular load. The STREAM NAME value that you specify in the load statement appears in the stream column of the LOAD_STREAMS system table.
NO COMMIT
Prevents COPY from committing its transaction automatically when it finishes copying data. For details, see Using transactions to stage a load.

Privileges

  • Source table: SELECT

  • Source table schema: USAGE

  • Target table: INSERT

  • Target table schema: USAGE

Mapping between target and source columns

If you copy all table data from one database to another, COPY FROM VERTICA can omit specifying column lists if column definitions in both tables comply with the following conditions:

  • Same number of columns

  • Identical column names

  • Same sequence of columns

  • Matching or compatible column data types

  • No complex data types (ARRAY, SET, or ROW), except for native arrays

If any of these conditions is not true, the COPY FROM VERTICA statement must include column lists that explicitly map target and source columns to each other, as follows:

  • Contain the same number of columns.

  • List source and target columns in the same order.

  • Pair columns with the same (or compatible) data types.

Node failure during COPY

See Handling node failure during copy/export.

Examples

The following example copies the contents of an entire table from the vmart database to an identically-defined table in the current database:

=> CONNECT TO VERTICA vmart USER dbadmin PASSWORD 'myPassword' ON 'VertTest01',5433;
CONNECT
=> COPY customer_dimension FROM  VERTICA vmart.customer_dimension;
 Rows Loaded
-------------
      500000
(1 row)
=> DISCONNECT vmart;
DISCONNECT
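
If the source and target tables do not have identical column definitions, include explicit column lists. The following sketch assumes the same vmart connection; the table and column names are illustrative:

=> COPY public.employees (emp_id, emp_name)
   FROM VERTICA vmart.staff (id, name);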

For more examples, see Copying data from another Vertica database.

See also

EXPORT TO VERTICA

10 - COPY LOCAL

Using the COPY statement with its LOCAL option lets you load a data file on a client system, rather than on a cluster host.

Using the COPY statement with its LOCAL option lets you load data files (up to 4,294,967,295 files) on a client system, rather than on a cluster host. COPY LOCAL supports the STDIN and 'pathToData' parameters, but not the [ON nodename] clause. COPY LOCAL does not support multiple file batches in NATIVE or NATIVE VARCHAR formats. COPY LOCAL does not support reading ORC or Parquet files; use ON NODE instead. COPY LOCAL does not support CURRENT_LOAD_SOURCE().

The COPY LOCAL option is platform-independent. The statement works in the same way across all supported Vertica platforms and drivers. For more details about supported drivers, see Client drivers.

COPY LOCAL does not automatically create files detailing exceptions and rejections, even if exceptions occur.

COPY LOCAL is subject to the following restrictions when used with vsql and the client libraries:

  • COPY LOCAL must be the first statement in any multi-statement query.
  • COPY LOCAL cannot be used multiple times in the same query.

Privileges

User must have INSERT privilege on the table and USAGE privilege on the schema.

How COPY LOCAL works

COPY LOCAL loads data in a platform-neutral way. The COPY LOCAL statement loads all files from a local client system to the Vertica host, where the server processes the files. You can copy files in various formats: uncompressed, compressed, fixed-width format, in bzip or gzip format, or specified as a bash glob. Files of a single format (such as all bzip, or gzip) can be comma-separated in the list of input files. You can also use any of the applicable COPY statement options (as long as the data format supports the option). For instance, you can define a specific delimiter character, or how to handle NULLs, and so forth.

For more information about using the COPY LOCAL option to load data, see COPY for syntactical descriptions, and Specifying where to load data from for detailed examples.

The Vertica host uncompresses and processes the files as necessary, regardless of file format or the client platform from which you load the files. Once the server has the copied files, Vertica maintains performance by distributing file parsing tasks, such as encoding, compressing, and uncompressing, across nodes.

Viewing copy local operations in a query plan

When you use the COPY LOCAL option, the GraphViz query plan includes a label for Load-Client-File, rather than Load-File. Following is a section from a sample query plan:

-----------------------------------------------
  PLAN:  BASE BULKLOAD PLAN  (GraphViz Format)
-----------------------------------------------
 digraph G {
 graph [rankdir=BT, label = " BASE BULKLOAD PLAN \nAll Nodes Vector:
 \n\n  node[0]=initiator (initiator) Up\n", labelloc=t, labeljust=l ordering=out]
.
.
.
10[label = "Load-Client-File(/tmp/diff) \nOutBlk=[UncTuple]",
color = "green", shape = "ellipse"];

Examples

The following example shows a load from a local file.

$ cat > t.dat
12
17
9
^C

=> CREATE TABLE numbers (value INT);
CREATE TABLE

=> COPY numbers FROM LOCAL 't.dat';
 Rows Loaded
-------------
           3
(1 row)

=> SELECT * FROM numbers;
 value
-------
    12
    17
     9
(3 rows)

11 - CREATE statements

CREATE statements let you create new database objects such as tables and users.

CREATE statements let you create new database objects such as tables and users.

11.1 - CREATE ACCESS POLICY

Creates an access policy that filters access to table data to users and roles.

Creates an access policy that filters access to table data to users and roles. You can create access policies for table rows and columns. Vertica applies the access policy filters with each query and returns only the data that is permissible for the current user or role.

You cannot set access policies on columns of complex data types other than native arrays. If the table contains complex-type columns, you can still set row access policies and column access policies on other columns.

Syntax


CREATE ACCESS POLICY ON [[database.]schema.]table
    { FOR COLUMN column | FOR ROWS WHERE } expression [GRANT TRUSTED] { ENABLE | DISABLE }

Parameters

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

table
The table with the target column or rows.
FOR COLUMN column
The column on which to apply this access policy. The column can be a native array, but other complex types are not supported. (See Complex types.)
FOR ROWS WHERE
The rows on which to apply this access policy.
expression
A SQL expression that specifies conditions for accessing row or column data:
  • Row access policies limit access to specific rows in a table, as specified by the policy's WHERE expression. Only rows that satisfy this expression are fetched from the table. For details and sample usage, see Creating row access policies.

  • Column access policies limit access to specific table columns. The access policy expression can also specify how to render column data to specific users and roles. For details and sample usage, see Creating column access policies.

GRANT TRUSTED

Specifies that GRANT statements take precedence over the access policy in determining whether users can perform DML operations on the target table. If omitted, users can only modify table data if the access policy allows them to see the stored data in its original, unaltered state. For more information, see Access policies and DML operations.

ENABLE | DISABLE
Whether to enable the access policy. You can enable and disable existing access policies with ALTER ACCESS POLICY.

Privileges

Non-superuser: Ownership of the table

Restrictions

The following limitations apply to access policies:

  • A column can have only one access policy.

  • Column access policies cannot be set on columns of complex types other than native arrays.

  • Column access policies cannot be set for materialized columns on flex tables. While it is possible to set an access policy for the __raw__ column, doing so restricts access to the whole table.

  • Row access policies are invalid on temporary tables and tables with aggregate projections.

  • Access policy expressions cannot contain:

    • Subqueries

    • Aggregate functions

    • Analytic functions

    • User-defined transform functions (UDTF)

  • If the query optimizer cannot replace a deterministic expression that involves only constants with their computed values, it blocks all DML operations such as INSERT.
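
Examples

The following sketch creates a column access policy; the table, column, and role names are illustrative. Users with the hr role see the stored value, while all other users see a masked string:

=> CREATE ACCESS POLICY ON public.customers FOR COLUMN ssn
   CASE WHEN ENABLED_ROLE('hr') THEN ssn ELSE '***-**-****' END
   ENABLE;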

See also

11.2 - CREATE AUTHENTICATION

Creates and enables an authentication record associated with users or roles.

Creates and enables an authentication record associated with users or roles. Authentication records are automatically enabled after creation.

Syntax

CREATE AUTHENTICATION auth-record-name
            METHOD 'auth-method'
            access-method
            [ FALLTHROUGH ]

Parameters

Name Description
auth-record-name Name of the authentication record, where auth-record-name conforms to conventions described in Identifiers.
auth-method

The authentication method, one of the following:

  • trust: Users can authenticate with a valid username (that is, without a password).

  • reject: Rejects the connection attempt.

  • hash: Users must provide a valid username and password. For details, see Hash authentication.

  • gss: Authorizes clients that connect to Vertica with an MIT Kerberos implementation. The Key Distribution Center (KDC) must support Kerberos 5 using the GSS-API. Non-MIT Kerberos implementations must use the GSS-API. For details, see Kerberos authentication.

  • ident: Authenticates the client against a username on an Ident server. For details, see Ident authentication.

  • ldap: Authenticates a client and their username and password with an LDAP or Active Directory server. For details, see LDAP authentication.

  • tls: Authenticates clients that provide a certificate with a Common Name (CN) that specifies a valid database username. Vertica must be configured for mutual mode TLS to use this method. For details, see TLS authentication.

  • oauth: Authenticates a client with an access token. For details, see OAuth 2.0 authentication.

For details, see Supported Client Authentication Methods.

access-method

The access method the client uses to connect, specified in one of the following ways:

  • LOCAL: Matches connection attempts made using local domain sockets.

  • HOST [ TLS | NO TLS ] 'host-ip-address': Matches connection attempts made using TCP/IP, where host-ip-address can be an IPv4 or IPv6 address. You can qualify HOST with one of the following options:

    • TLS (default): Match an SSL/TLS-wrapped TCP socket.

    • NO TLS: Match a plain (non-SSL/TLS) socket only.

[ FALLTHROUGH ]

Whether to enable fallthrough authentication for this record. To disable fallthrough, see ALTER AUTHENTICATION.

Fallthrough cannot be enabled for authentication records that use the following authentication methods:

  • gss

  • oauth

  • reject

  • trust

Privileges

DBADMIN

Examples

See Creating authentication records.
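
For instance, a minimal sketch (the record name is illustrative) that requires hash (password) authentication for connections over local domain sockets:

=> CREATE AUTHENTICATION local_pwd METHOD 'hash' LOCAL;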

See also

11.3 - CREATE CA BUNDLE

Creates a certificate authority (CA) bundle.

Creates a certificate authority (CA) bundle. These contain root CA certificates.

Syntax

CREATE CA BUNDLE name [CERTIFICATES ca_cert[, ca_cert[, ...]]]

Parameters

name
The name of the CA bundle.
ca_cert
The name of the CA certificate. If no certificates are specified, the bundle will be empty.

Privileges

Ownership of the CA certificates in the CA bundle.

Examples

See Managing CA bundles.
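
For instance, a minimal sketch that bundles two existing CA certificates (the certificate names are illustrative):

=> CREATE CA BUNDLE ca_bundle CERTIFICATES root_ca, intermediate_ca;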

See also

11.4 - CREATE CERTIFICATE

Creates or imports a certificate, Certificate Authority (CA), or intermediate CA.

Creates or imports a certificate, Certificate Authority (CA), or intermediate CA. These certificates can be used with ALTER TLS CONFIGURATION to set up client-server TLS, LDAPLink TLS, LDAPAuth TLS, and internode TLS.

CREATE CERTIFICATE generates x509v3 certificates.

Syntax

CREATE [TEMP[ORARY]] [CA] CERTIFICATE certificate_name
    {AS cert [KEY key_name]
    | SUBJECT subject
      [ SIGNED BY ca_cert ]
      [ VALID FOR days ]
      [ EXTENSIONS ext = val[,...] ]
      [ KEY private_key ]}

Parameters

TEMPORARY
Create with session scope. The key is stored in memory and is valid only for the current session.
CA
Designates the certificate as a CA or intermediate certificate. If omitted, the operation creates a normal certificate.
certificate_name
The name of the certificate.
AS cert
The imported certificate (string).

This parameter should include the entire chain of certificates, excluding the CA certificate.

KEY key_name
The name of the key.

This parameter only needs to be set for client/server certificates and CA certificates that you intend to sign other certificates with in Vertica. If your imported CA certificate will only be used for validating other certificates, you do not need to specify a key.

SUBJECT subject
The entity to issue the certificate to (string).
SIGNED BY ca_cert
The name of the CA that signed the certificate.

When adding a CA certificate, this parameter is optional. Specifying it will create an intermediate CA that cannot be used to sign other CA certificates.

When creating a certificate, this parameter is required.

VALID FOR days
The number of days that the certificate is valid.
EXTENSIONS ext=val
Strings specifying certificate extensions. For a full list of extensions, see the OpenSSL documentation.
KEY private_key
The name of the certificate's private key.

When importing a certificate, this parameter is required.

Privileges

Superuser

Default extensions

CREATE CERTIFICATE generates x509v3 certificates and includes several extensions by default. These differ based on the type of certificate you create:

CA Certificate:

  • 'basicConstraints' = 'critical, CA:true'

  • 'keyUsage' = 'critical, digitalSignature, keyCertSign'

  • 'nsComment' = 'Vertica generated [CA] certificate'

  • 'subjectKeyIdentifier' = 'hash'

Certificate:

  • 'basicConstraints' = 'CA:false'

  • 'keyUsage' = 'critical, digitalSignature, keyEncipherment'

Examples

See Generating TLS certificates and keys.
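
For instance, a minimal sketch (the key name, certificate name, and subject are illustrative) that uses CREATE KEY to generate a private key and then creates a self-signed CA certificate from it:

=> CREATE KEY ca_key TYPE 'RSA' LENGTH 2048;
=> CREATE CA CERTIFICATE ca_cert
   SUBJECT '/C=US/ST=MA/L=Cambridge/O=Example/OU=Docs/CN=Example Root CA'
   VALID FOR 3650
   EXTENSIONS 'nsComment' = 'Example self-signed root CA'
   KEY ca_key;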

See also

11.5 - CREATE DATA LOADER

CREATE DATA LOADER creates an automatic data loader that executes a COPY statement when new data files appear in a specified location. The loader records which files have already been successfully loaded and skips them.

CREATE DATA LOADER creates an automatic data loader that executes a COPY statement when new data files appear in a specified path. The loader records which files have already been successfully loaded and skips them. These events are recorded in the DATA_LOADER_EVENTS system table.

You can execute a data loader in the following ways:

  • Manually, with EXECUTE DATA LOADER.

  • On a schedule, by calling EXECUTE DATA LOADER from a scheduled stored procedure.

  • Automatically in response to new files, by defining a trigger (AWS SQS only). See Triggers below.

Executing the loader automatically commits the transaction.

The DATA_LOADERS system table shows all defined loaders. The DATA_LOADER_EVENTS system table records paths that were attempted and their outcomes. To prevent unbounded growth, records in DATA_LOADER_EVENTS are purged after a specified retention interval. If previously-loaded files are still in the source path after this purge, the data loader sees them as new files and loads them again.

Syntax

CREATE [ OR REPLACE ] DATA LOADER [ IF NOT EXISTS ]
   [schema.]name
   [ RETRY LIMIT { NONE | DEFAULT | limit } ]
   [ RETENTION INTERVAL monitoring-retention ]
   [ TRIGGERED BY integration-json ]
   AS copy-statement

Arguments

OR REPLACE
If a data loader with the same name in the same schema exists, replace it. The original data loader's monitoring table is dropped.

This option cannot be used with IF NOT EXISTS.

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

This option cannot be used with OR REPLACE.

schema
Schema containing the data loader. The default schema is public.
name
Name of the data loader.
RETRY LIMIT { NONE | DEFAULT | limit }
Maximum number of times to retry a failing file. Each time the data loader is executed, it attempts to load all files that have not yet been successfully loaded, up to this per-file limit. If set to DEFAULT, at load time the loader uses the value of the DataLoaderDefaultRetryLimit configuration parameter.

Default: DEFAULT

RETENTION INTERVAL monitoring-retention
How long to keep records in the events table. DATA_LOADER_EVENTS records events for all data loaders, but each data loader has its own retention interval.

Default: 14 days

TRIGGERED BY integration-json
Information about an external input for the data loader, a JSON object with the following keys:
  • provider (required): name of the provider; the only supported value is "AWS-SQS".
  • resourceUrl (required): endpoint for the integration.
  • timeout: number of seconds to wait between loads. The default is 10.
  • minFileNumPerLoad: if the number of files waiting to be loaded reaches this threshold, the load begins immediately instead of waiting for the next scheduled load. The default is 10 files.
  • minTotalSizePerLoad: if the total size of files waiting to be loaded reaches this threshold, the load begins immediately. The value is an integer with a unit of measure, one of K, M, G, or T. The default is 1G.

If a data loader has a trigger, the loader is initially disabled. When you are ready to use it, use ALTER DATA LOADER with the ENABLE option. When the loader is enabled, Vertica executes it automatically when SQS produces notifications for new files. For details, see Triggers.

AS copy-statement
The COPY statement that the loader executes. The FROM clause typically uses a glob.

Privileges

Non-superuser: CREATE privileges on the schema.

Restrictions

  • COPY NO COMMIT is not supported.
  • Data loaders are executed with COPY ABORT ON ERROR.
  • The COPY statement must specify file paths. You cannot use COPY FROM VERTICA.

Triggers (AWS SQS only)

If the data to be loaded is stored on AWS and you have configured an SQS queue, then instead of executing a data loader explicitly using EXECUTE DATA LOADER, you can define the loader to respond to S3 events. When files are added to the AWS bucket, a service running on AWS adds a message to the SQS queue. Vertica reads from this queue, executes the data loader, and, if the load was successful, removes the message from the queue. After the initial setup, this process is automatic.

To configure the data loader to read from SQS, use the TRIGGERED BY option with a JSON object following this template:

{
  "provider" : "AWS-SQS",
  "resourceUrl" : "https://sqs.region.amazonaws.com/account_number/queue_name"
}

You can set additional values to control timing. See the description of the TRIGGERED BY syntax.

Use the following SQS parameters to authenticate to SQS:

  • SQSAuth, to set an ID and key for all queue access.
  • SQSQueueCredentials, to set per-queue access. Queue-specific values take precedence over SQSAuth.

A data loader with an SQS trigger is initially disabled. Use ALTER DATA LOADER with the ENABLE option when you are ready to begin processing events. To pause execution, call ALTER DATA LOADER with the DISABLE option.

You can alter the trigger JSON using ALTER DATA LOADER. You must disable the loader before changing the trigger and then enable it again.

If a data loader has a trigger, you must disable it before dropping it.

File systems

The source path can be any shared file system or object store that all database nodes can access. To use an HDFS or Linux file system safely, you must prevent the loader from reading a partially-written file. One way is to execute the loader only when no files are being written to the loader's path. Another is to write new files in a temporary location and move them to the loader's path only when they are complete.

Examples

The following data loader can be executed manually or by using a scheduled stored procedure:

=> CREATE DATA LOADER s.dl1 RETRY LIMIT NONE 
   AS COPY s.local FROM 's3://b/data/*.dat';

=> SELECT name, schemaname, copystmt, retrylimit FROM data_loaders;
 name | schemaname |              copystmt                 | retrylimit 
------+------------+---------------------------------------+------------
 dl1  | s          | COPY s.local FROM 's3://b/data/*.dat' |         -1
(1 row)
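
To run this loader on demand (a sketch; output not shown), execute it directly:

=> EXECUTE DATA LOADER s.dl1;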

The following data loader reads from an SQS queue on AWS and loads new files automatically at least every 10 seconds, or immediately when five files or 100MB of data accumulate:

=> CREATE DATA LOADER s.dl2
   TRIGGERED BY '{ "provider" : "AWS-SQS", 
                   "resourceUrl" : "https://sqs.us-east-1.amazonaws.com/accountID/queueName",
                   "minFileNumPerLoad" : "5",
                   "minTotalSizePerLoad" : "100M" }'
   AS COPY s.local FROM 's3://b/data/*.dat';

See Automatic load for an extended example.

See also

11.6 - CREATE DIRECTED QUERY

Saves an association between an input query and a query that is annotated with optimizer hints.

Saves an association between an input query and a query that is annotated with optimizer hints.

CREATE DIRECTED QUERY has two variants:

  • CREATE DIRECTED QUERY OPTIMIZER directs the query optimizer to generate annotated SQL from the specified input query. The annotated query contains hints that the optimizer can use to recreate its current query plan for that input query.
  • CREATE DIRECTED QUERY CUSTOM specifies an annotated query supplied by the user. Vertica associates the annotated query with the input query specified by the last SAVE QUERY statement.

In both cases, Vertica associates the annotated query and input query, and registers their association in the system table DIRECTED_QUERIES under query_name.

Syntax

Optimizer-generated

CREATE DIRECTED QUERY OPT[IMIZER] query-name [COMMENT 'comments'] input-query

User-defined (custom)

CREATE DIRECTED QUERY CUSTOM query-name [COMMENT 'comments'] annotated-query

Parameters

OPT[IMIZER]
Directs the query optimizer to generate an annotated query from input-query, and associate both in the new directed query.
CUSTOM
Specifies to associate annotated-query with the query previously specified by SAVE QUERY.
query-name
A unique identifier for the directed query, a string that conforms to conventions described in Identifiers.
COMMENT 'comments'
Comment about the directed query, up to 128 characters. Comments can be useful for future reference—for example, to explain why a given directed query was created.

If you omit this argument, Vertica inserts one of the following comments:

  • Optimizer-generated directed query

  • Custom directed query

input-query
The input query to associate with an optimizer-generated directed query. The input query supports only one optimizer hint, :v (alias IGNORECONST).
annotated-query
A query with embedded optimizer hints to associate with the input query most recently saved with SAVE QUERY.

Privileges

Superuser
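
Examples

A minimal sketch (the query name, table, and columns are illustrative) that saves an optimizer-generated directed query for a simple input query:

=> CREATE DIRECTED QUERY OPTIMIZER findBostonEmployees
   SELECT employee_first_name, employee_last_name
   FROM employee_dimension
   WHERE employee_city = 'Boston'
   ORDER BY employee_last_name;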

See also

Creating directed queries

11.7 - CREATE EXTERNAL TABLE AS COPY

CREATE EXTERNAL TABLE AS COPY creates a table definition for data external to your Vertica database.

CREATE EXTERNAL TABLE AS COPY creates a table definition for data external to your Vertica database. This statement is a combination of the CREATE TABLE and COPY statements, supporting a subset of each statement's parameters.

Canceling a CREATE EXTERNAL TABLE AS COPY statement can cause unpredictable results. If you need to make a change, allow the statement to complete, drop the table, and then retry.

You can use ALTER TABLE to change the data types of columns instead of dropping and recreating the table.

You can use CREATE EXTERNAL TABLE AS COPY with any types except types from the Place package.

Vertica also supports external tables backed by Iceberg data. See CREATE EXTERNAL TABLE ICEBERG.

Syntax

CREATE EXTERNAL TABLE [ IF NOT EXISTS ] [[{namespace. | database. }]schema.]table-name 
    ( column-definition[,...] )
[{INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES]
AS COPY
    [ ( { column-as-expression | column }
       [ DELIMITER [ AS ] 'char' ]
       [ ENCLOSED [ BY ] 'char' ]
       [ ENFORCELENGTH ]
       [ ESCAPE [ AS ] 'char' | NO ESCAPE ]
       [ FILLER datatype ]
       [ FORMAT 'format' ]
       [ NULL [ AS ] 'string' ]
       [ TRIM 'byte' ]
    [,...] ) ]
    [ COLUMN OPTION ( column 
       [ DELIMITER [ AS ] 'char' ]
       [ ENCLOSED [ BY ] 'char' ]
       [ ENFORCELENGTH ]
       [ ESCAPE [ AS ] 'char' | NO ESCAPE ]
       [ FORMAT 'format' ]
       [ NULL [ AS ] 'string' ]
       [ TRIM 'byte' ]
    [,...] ) ]


FROM {
   { 'path-to-data'
       [ ON { nodename | (nodeset) | ANY NODE | EACH NODE } ] [ compression ] }[,...]
     [ PARTITION COLUMNS column[,...] ]
   | LOCAL 'path-to-data' [ compression ] [,...]
   | VERTICA {source-namespace. | source-database. }[source-schema.]source-table[( source-column[,...] ) ]
  }
      [ NATIVE
        | FIXEDWIDTH COLSIZES {( integer )[,...]}
        | NATIVE VARCHAR
        | ORC
        | PARQUET
      ]
   [ ABORT ON ERROR ]
   [ DELIMITER [ AS ] 'char' ]
   [ ENCLOSED [ BY ] 'char' ]
   [ ENFORCELENGTH ]
   [ ERROR TOLERANCE ]
   [ ESCAPE AS 'char' | NO ESCAPE ]
   [ EXCEPTIONS 'path' [ ON nodename ] [,...] ]
   [ [ WITH ] FILTER filter( [ arg=value[,...] ] ) ]
   [ NULL [ AS ] 'string' ]
   [ [ WITH ] PARSER parser([arg=value [,...] ]) ]
   [ RECORD TERMINATOR 'string' ]
   [ REJECTED DATA 'path' [ ON nodename ] [,...] ]
   [ REJECTMAX integer ]
   [ SKIP integer ]
   [ SKIP BYTES integer ]
   [ TRAILING NULLCOLS ]
   [ TRIM 'byte' ]

Parameters

For all supported parameters, see the CREATE TABLE and COPY statements. For information on using this statement with UDLs, see User-defined load (UDL).

For additional guidance on using COPY parameters, see Specifying where to load data from.

Privileges

Superuser, or non-superuser with the following privileges:

  • READ privileges on the USER-accessible storage location. See GRANT (storage location)

  • Full access (including SELECT) to an external table that the user has privileges to create

Partitioned data

Data can be partitioned using its directory structure and Vertica can take advantage of that partitioning to improve query performance for external tables. For details, see Partitioned data.

If you see unexpected results when reading data, verify that globs in your file paths correctly align with the partition structure. See Troubleshooting external tables.

ORC and Parquet data

When using the ORC and Parquet formats, Vertica supports some additional options in the COPY statement and data structures for columns. See ORC and PARQUET.

Examples

The following example defines an external table for delimited data stored in HDFS:

=> CREATE EXTERNAL TABLE sales (itemID INT, date DATE, price FLOAT)
    AS COPY FROM 'hdfs:///data/ext1.csv' DELIMITER ',';

The following example uses data in the ORC format that is stored in S3. The data has two partition columns. For more information about partitions, see Partitioned data.

=> CREATE EXTERNAL TABLE records (id int, name varchar(50), created date, region varchar(50))
   AS COPY FROM 's3://datalake/sales/*/*/*'
   PARTITION COLUMNS created, region;

The following example shows how you can read from all Parquet files in a local directory, with no partitions and no globs:

=> CREATE EXTERNAL TABLE sales (itemID INT, date DATE, price FLOAT)
    AS COPY FROM '/data/sales/*.parquet' PARQUET;

The following example creates an external table for data containing arrays:

=> CREATE EXTERNAL TABLE cust (cust_custkey int,
        cust_custname varchar(50),
        cust_custstaddress ARRAY[varchar(100)],
        cust_custaddressln2 ARRAY[varchar(100)],
        cust_custcity ARRAY[varchar(50)],
        cust_custstate ARRAY[char(2)],
        cust_custzip ARRAY[int],
        cust_email varchar(50), cust_phone varchar(30))
   AS COPY FROM 'webhdfs://data/*.parquet' PARQUET;

To allow users without superuser access to use external tables with data on the local file system, S3, or GCS, create a location for 'user' usage and grant access to it. This example shows granting access to a user named Bob to any external table whose data is located under /tmp (including in subdirectories to any depth):

=> CREATE LOCATION '/tmp' ALL NODES USAGE 'user';
=> GRANT ALL ON LOCATION '/tmp' to Bob;

The following example shows CREATE EXTERNAL TABLE using a user-defined source:

=> CREATE SOURCE curl AS LANGUAGE 'C++' NAME 'CurlSourceFactory' LIBRARY curllib;
=> CREATE EXTERNAL TABLE curl_table1 as COPY SOURCE CurlSourceFactory;

See also

Creating external tables

11.8 - CREATE EXTERNAL TABLE ICEBERG

Creates an external table for data stored by Apache Iceberg.

Creates an external table for data stored by Apache Iceberg. An Iceberg table consists of data files and metadata describing the schema. Unlike other external tables, an Iceberg external table need not specify column definitions (DDL). The information is read from Iceberg metadata at query time. For certain data types you can adjust column definitions, for example to specify VARCHAR sizes.

A single Iceberg table can have more than one metadata file, each describing a different version of the table. You can create an external table using either the base location of the table or a specific metadata file.

If a metadata file specifies columns or struct fields that are not present in the data, Vertica treats the missing values as NULL.

All Iceberg files, both data and metadata, must be accessible to all database nodes.

Iceberg can store data in several file formats. Vertica can read Iceberg data in the Parquet format only and with either version 1 or version 2 metadata. If the Parquet files encode field IDs, Vertica uses them directly. Otherwise, Vertica uses Iceberg fallback name mapping to read field IDs from the metadata (the schema.name-mapping.default property). If the metadata does not contain a mapping for a field, Vertica sets the column values to NULL.

Syntax

CREATE EXTERNAL TABLE [[{namespace. | database. }]schema.]table
   STORED BY ICEBERG LOCATION { path | metadata-file }
   [COLUMN TYPES (column-name type[,...])] ;

Arguments

{ database | namespace }
Name of the database or namespace that contains table:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

schema
Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.

If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.

table
Name of the table to create, which must be unique among names of all sequences, tables, projections, views, and models within the schema.
STORED BY ICEBERG LOCATION { path|metadata-file }
Location of Iceberg data, one of:
  • Base location of an Iceberg File System table. Vertica uses the latest metadata file.

  • Path to a metadata file with a name ending in .metadata.json. Vertica uses this metadata file even if it is not the latest.

On S3, an Iceberg table is not a File System table but a metastore. This means you cannot specify a base location on S3. You must specify the full path to a metadata file.

If the paths embedded in the Iceberg data are not accessible to Vertica, use the IcebergPathMapping configuration parameter to provide mappings. See Path Prefixes.

COLUMN TYPES (column-name type[,...])
Column names and types for VARCHAR, VARBINARY, or ARRAY columns or ROW fields only. You can specify types only to set lengths or array bounds, not type coercion. See Data Types. If you do not specify a type, the table uses the Vertica defaults.

You cannot specify any other column properties, such as defaults or constraints.

Columns that are specified but not found in the Iceberg schema are ignored.

Privileges

Superuser, or non-superuser with the following privileges:

  • READ privileges on the USER-accessible storage location. See GRANT (storage location)

  • Full access (including SELECT) to an external table that the user has privileges to create

Path prefixes

Iceberg tables store file paths in the metadata as absolute URIs (host and port). Sometimes this URI differs from the URI that Vertica can use to access the data. This can particularly be an issue for files stored on HDFS, where the metadata can use a different URI scheme and port number than what Vertica expects.

To change the URIs, set the IcebergPathMapping configuration parameter. The value is a list of one or more pairs of Iceberg URI prefixes and corresponding Vertica prefixes:

=> ALTER SESSION SET IcebergPathMapping=
   '{"hdfs://node-196.example.com:9000":"webhdfs://node-196.example.com:9870"}';

Include only the URI prefix (up through the port), not complete paths. If IcebergPathMapping contains more than one mapping that could apply, Vertica uses the longest entry that matches.

You can set IcebergPathMapping at the database, session, or user level.

Data types

The following table shows the mappings of Iceberg data types to Vertica data types. For types that allow it, you can use the COLUMN TYPES clause to override these defaults.

Iceberg Type Vertica Type Allows Override?
boolean BOOLEAN No
int (32-bit) INT No
long (64-bit) INT No
float (32-bit) FLOAT No
double (64-bit) FLOAT No
decimal(precision, scale) NUMERIC with same precision and scale No
date DATE No
time TIME No
timestamp TIMESTAMP No
timestamptz TIMESTAMPTZ No
string VARCHAR(80) VARCHAR or LONG VARCHAR with custom length
uuid UUID No
fixed(length) BINARY(length) if length <= 65000; otherwise LONG VARBINARY(length) No
binary (variable length) VARBINARY(80) VARBINARY or LONG VARBINARY with custom length
struct ROW No, but you can override individual fields if their types permit
list ARRAY (default bound) ARRAY with custom bound
map<key, value> ARRAY[ROW(key, value)] (default bound) ARRAY[ROW(key, value)] with custom bound

Restrictions

The following restrictions apply to external tables backed by Iceberg:

  • Data files must be in Parquet format.

  • Metadata files must use version 1 or version 2 format.

  • Iceberg column defaults are not supported.

  • Iceberg delete files are not supported.

  • Malformed data is an error that aborts the load. You cannot reject bad data and continue.

  • VARCHAR values are not truncated. If a string is too long, it is treated as an error.

The following restrictions apply to queries of Iceberg tables:

  • You cannot use a column in an Iceberg table as a DEFAULT or SET USING option in another table. The following example is an error:

    => CREATE TABLE t(
        id INT DEFAULT (SELECT COUNT(*) FROM iceberg_table));
    ERROR 0:  Default and set using expressions cannot refer to external iceberg tables
    
  • Errors in Iceberg data or metadata, such as missing files or type mismatches, can manifest as query errors such as the following:

    => SELECT * FROM iceberg_table;
    ERROR 0:  Problem reading metadata for table iceberg_table. Detail: Could not determine type of column a. User specified type: int. Iceberg type: boolean
    

Examples

The following example creates a table based on the Parquet data files with no overrides:

=> CREATE EXTERNAL TABLE sales
   STORED BY ICEBERG LOCATION 's3://sales/*';

In the following example, the data uses a struct for the shipping address, with fields for street address (string), city (string), and zip code (integer). The following table definition overrides the default VARCHAR lengths. Note that the zip code is not included in COLUMN TYPES overrides. The ROW column contains only the fields being changed, but all fields including the zip code are part of the table definition and are included in query results:

=> CREATE EXTERNAL TABLE sales
   STORED BY ICEBERG LOCATION 's3://sales/*'
   COLUMN TYPES (address ROW(street VARCHAR(50), city VARCHAR(50)));

See also

EXTERNAL_DATA_FILES_PRUNED event in QUERY_EVENTS.

11.9 - CREATE FAULT GROUP

Creates a fault group, which can contain nodes, child fault groups, or both.

Enterprise Mode only

Creates a fault group, which can contain the following:

  • One or more nodes

  • One or more child fault groups

  • One or more nodes and one or more child fault groups

CREATE FAULT GROUP creates an empty fault group. Use ALTER FAULT GROUP to add nodes or other fault groups to an existing fault group.

Syntax

CREATE FAULT GROUP name

Parameters

name
The name of the fault group to create, unique among all fault groups, where name conforms to conventions described in Identifiers.

Privileges

Superuser

Examples

The following command creates a fault group called parent0:

=> CREATE FAULT GROUP parent0;
CREATE FAULT GROUP
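
To populate the new fault group, add nodes with ALTER FAULT GROUP. A sketch, assuming a node named v_vmart_node0001:

=> ALTER FAULT GROUP parent0 ADD NODE v_vmart_node0001;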

See also

11.10 - CREATE FLEXIBLE EXTERNAL TABLE AS COPY

CREATE FLEXIBLE EXTERNAL TABLE AS COPY creates a flexible external table.

CREATE FLEXIBLE EXTERNAL TABLE AS COPY creates a flexible external table. This statement is a combination of the CREATE FLEXIBLE TABLE and COPY statements, supporting a subset of each statement's parameters.

You can also use user-defined load functions (UDLs) to create external flex tables. For details about creating and using flex tables, see Creating flex tables in Using Flex Tables.

Syntax

CREATE FLEX[IBLE] EXTERNAL TABLE [ IF NOT EXISTS ] [[database.]schema.]table-name
   ( [ column-definition[,...] ] )
   [ INCLUDE | EXCLUDE [SCHEMA] PRIVILEGES ]
AS COPY [ ( { column-as-expression | column } [ FILLER datatype ] [,...] ) ]
   FROM {
      'path-to-data' [ ON nodename | ON ANY NODE | ON (nodeset) ] input-format [,...]
      | [ WITH ] UDL-clause[...]
   }
   [ ABORT ON ERROR ]
   [ DELIMITER [ AS ] 'char' ]
   [ ENCLOSED [ BY ] 'char' ]
   [ ENFORCELENGTH ]
   [ ESCAPE [ AS ] 'char' | NO ESCAPE ]
   [ EXCEPTIONS 'path' [ ON nodename ] [,...] ]
   [ NULL [ AS ] 'string' ]
   [ RECORD TERMINATOR 'string' ]
   [ REJECTED DATA 'path' [ ON nodename ][,...] ]
   [ REJECTMAX integer ]
   [ SKIP integer ]
   [ SKIP BYTES integer ]
   [ TRAILING NULLCOLS ]
   [ TRIM 'byte' ]

Parameters

For parameter descriptions, see CREATE TABLE and COPY Parameters.

Privileges

Superuser, or non-superuser with the following privileges:

  • READ privileges on the USER-accessible storage location. See GRANT (storage location)

  • Full access (including SELECT) to an external table that the user has privileges to create

Examples

To create an external flex table:

=> CREATE flex external table mountains() AS COPY FROM '/home/release/KData/kmm_mountains.json' PARSER fjsonparser();
CREATE TABLE

As with other flex tables, creating an external flex table produces two regular tables: the named table and its associated _keys table. The keys table is not an external table:

=> \dt mountains
                 List of tables
 Schema |   Name    | Kind  |  Owner  | Comment
--------+-----------+-------+---------+---------
 public | mountains | table | release |
(1 row)

You can use the helper function, COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW, to compute keys and create a view for the external table:

=> SELECT compute_flextable_keys_and_build_view ('appLog');

                     compute_flextable_keys_and_build_view
--------------------------------------------------------------------------------------------------
Please see public.appLog_keys for updated keys
The view public.appLog_view is ready for querying
(1 row)

Check the keys from the _keys table for the results of running the helper application:

=> SELECT * FROM appLog_keys;
                          key_name                       | frequency |   data_type_guess
----------------------------------------------------------+-----------+------------------
contributors                                             |         8 | varchar(20)
coordinates                                              |         8 | varchar(20)
created_at                                               |         8 | varchar(60)
entities.hashtags                                        |         8 | long varbinary(186)
.
.
.
retweeted_status.user.time_zone                          |         1 | varchar(20)
retweeted_status.user.url                                |         1 | varchar(68)
retweeted_status.user.utc_offset                         |         1 | varchar(20)
retweeted_status.user.verified                           |         1 | varchar(20)
(125 rows)

You can query the view:

=> SELECT "user.lang" FROM appLog_view;
 user.lang
-----------
it
en
es
en
en
es
tr
en
(12 rows)

See also

11.11 - CREATE FLEXIBLE TABLE

Creates a flexible (flex) table in the logical schema.

Creates a flexible (flex) table in the logical schema.

When you create a flex table, Vertica automatically creates two dependent objects:

  • Keys table that is named flex-table-name_keys

  • View that is named flex-table-name_view

The flex table requires the keys table and view. Neither of these objects can exist independently of the flex table.

Syntax

Create with column definitions:

CREATE [[ scope ] TEMP[ORARY]] FLEX[IBLE] TABLE [ IF NOT EXISTS ]
   [[database.]schema.]table-name
   ( [ column-definition[,...] [, table-constraint ][,...] ] )
   [ ORDER BY column[,...] ]
   [ segmentation-spec ]
   [ KSAFE [k-num] ]
   [ partition-clause]
   [ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
   [ DISK_QUOTA quota ]

Create from another table:

CREATE FLEX[IBLE] TABLE [[database.]schema.] table-name
   [ ( column-name-list ) ]
   [ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
   AS  query [ ENCODED BY column-ref-list ]
   [ DISK_QUOTA quota ]

Parameters

For general parameter descriptions, see CREATE TABLE; for parameters specific to temporary flex tables, see CREATE TEMPORARY TABLE and Creating flex tables.

You cannot partition a flex table on any virtual column (key).

Privileges

Non-superuser: CREATE privilege on table schema

Default columns

The CREATE statement can omit specifying any column definitions. CREATE FLEXIBLE TABLE always creates two columns automatically:

__raw__
LONG VARBINARY type column to store unstructured data that you load. By default, this column has a NOT NULL constraint.
__identity__
IDENTITY column that is used for segmentation and sorting when no other column is defined.

Default projections

Vertica automatically creates superprojections for both the flex table and keys tables when you create them.

If you create a flex table with one or more of the ORDER BY, ENCODED BY, SEGMENTED BY, or KSAFE clauses, the clause information is used to create projections. If no clauses are in use, Vertica uses the following defaults:

  • Flexible table: sort order ORDER BY *.__identity__; no encoding; SEGMENTED BY hash *.__identity__ ALL NODES OFFSET 0; K-safety 1

  • Keys table: sort order ORDER BY *._keys_frequency; no encoding; UNSEGMENTED ALL NODES; K-safety 1

Examples

The following example creates a flex table named darkdata1 without specifying any column information. Vertica creates a default superprojection and buddy projection as part of creating the table:

=> CREATE FLEXIBLE TABLE darkdata1();
CREATE TABLE
=> \dj darkdata1*
                         List of projections
 Schema |         Name         |  Owner  |       Node       | Comment
--------+----------------------+---------+------------------+---------
 public | darkdata1_b0         | dbadmin |                  |
 public | darkdata1_b1         | dbadmin |                  |
 public | darkdata1_keys_super | dbadmin | v_vmart_node0001 |
 public | darkdata1_keys_super | dbadmin | v_vmart_node0002 |
 public | darkdata1_keys_super | dbadmin | v_vmart_node0003 |
(5 rows)

=> SELECT export_objects('','darkdata1_b0');
CREATE PROJECTION public.darkdata1_b0 /*+basename(darkdata1),createtype(P)*/
(
 __identity__,
 __raw__
)
AS
 SELECT darkdata1.__identity__,
        darkdata1.__raw__
 FROM public.darkdata1
 ORDER BY darkdata1.__identity__
SEGMENTED BY hash(darkdata1.__identity__) ALL NODES OFFSET 0;

SELECT MARK_DESIGN_KSAFE(1);
(1 row)

=> select export_objects('','darkdata1_keys_super');
CREATE PROJECTION public.darkdata1_keys_super /*+basename(darkdata1_keys),createtype(P)*/
(
 key_name,
 frequency,
 data_type_guess
)
AS
 SELECT darkdata1_keys.key_name,
        darkdata1_keys.frequency,
        darkdata1_keys.data_type_guess
 FROM public.darkdata1_keys
 ORDER BY darkdata1_keys.frequency
UNSEGMENTED ALL NODES;

SELECT MARK_DESIGN_KSAFE(1);
(1 row)

The following example creates a table called darkdata2 with one column definition (date_col). The statement specifies a PARTITION BY clause to partition the data by year. Vertica creates a default superprojection and buddy projections as part of creating the table:

=> CREATE FLEX TABLE darkdata2 (date_col date NOT NULL) partition by
  extract('year' from date_col);
CREATE TABLE

See also

11.12 - CREATE FUNCTION statements

Vertica provides CREATE statements for each type of user-defined extension.

Vertica provides CREATE statements for each type of user-defined extension. Each CREATE statement adds a user-defined function to the Vertica catalog:

CREATE statement Extension
CREATE FUNCTION (scalar) User-defined scalar functions (UDSFs)
CREATE AGGREGATE FUNCTION User-defined aggregate functions (UDAFs)
CREATE ANALYTIC FUNCTION User-defined analytic functions (UDAnF)
CREATE TRANSFORM FUNCTION User-defined transform functions (UDTFs)
CREATE statements for user-defined load:
CREATE SOURCE Load source functions
CREATE FILTER Load filter functions
CREATE PARSER Load parser functions

Vertica also provides CREATE FUNCTION (SQL), which stores SQL expressions as functions that you can invoke in a query.

11.12.1 - CREATE AGGREGATE FUNCTION

Adds a user-defined aggregate function (UDAF) to the catalog.

Adds a user-defined aggregate function (UDAF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.

CREATE AGGREGATE FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading aggregate functions. When you call the SQL function, Vertica passes the input table to the function to process.

User-defined aggregate functions run in unfenced mode only.

Syntax

CREATE [ OR REPLACE ] AGGREGATE FUNCTION [ IF NOT EXISTS ]
  [[database.]schema.]function AS
  [ LANGUAGE 'language' ]
  NAME 'factory'
  LIBRARY library
  [ NOT FENCED ];

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
The language used to develop this function, currently C++ only (the default).
NAME 'factory'
Name of the factory class that generates the function instance.
LIBRARY library
Name of the shared library that contains the function. This library must have already been loaded by CREATE LIBRARY.
NOT FENCED
Indicates that the function runs in unfenced mode. Aggregate functions cannot be run in fenced mode.

Privileges

Non-superuser:

  • CREATE privilege on the function's schema

  • USAGE privilege on the function's library

Examples

The following example demonstrates loading a library named AggregateFunctions and then defining functions named ag_avg and ag_cat. The functions are mapped to the AverageFactory and ConcatenateFactory classes in the library:

=> CREATE LIBRARY AggregateFunctions AS '/opt/vertica/sdk/examples/build/AggregateFunctions.so';
CREATE LIBRARY
=> CREATE AGGREGATE FUNCTION ag_avg AS LANGUAGE 'C++' NAME 'AverageFactory'
   library AggregateFunctions;
CREATE AGGREGATE FUNCTION
=> CREATE AGGREGATE FUNCTION ag_cat AS LANGUAGE 'C++' NAME 'ConcatenateFactory'
   library AggregateFunctions;
CREATE AGGREGATE FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+------------------------------------------------------------------
schema_name            | public
function_name          | ag_avg
procedure_type         | User Defined Aggregate
function_return_type   | Numeric
function_argument_type | Numeric
function_definition    | Class 'AverageFactory' in Library 'public.AggregateFunctions'
volatility             |
is_strict              | f
is_fenced              | f
comment                |
-[ RECORD 2 ]----------+------------------------------------------------------------------
schema_name            | public
function_name          | ag_cat
procedure_type         | User Defined Aggregate
function_return_type   | Varchar
function_argument_type | Varchar
function_definition    | Class 'ConcatenateFactory' in Library 'public.AggregateFunctions'
volatility             |
is_strict              | f
is_fenced              | f
comment                |

See also

11.12.2 - CREATE ANALYTIC FUNCTION

Adds a user-defined analytic function (UDAnF) to the catalog.

Adds a user-defined analytic function (UDAnF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.

CREATE ANALYTIC FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading analytic functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.

Syntax

CREATE [ OR REPLACE ] ANALYTIC FUNCTION [ IF NOT EXISTS ]
    [[database.]schema.]function AS
    [ LANGUAGE 'language' ]
    NAME 'factory'
    LIBRARY library
    [ FENCED | NOT FENCED ]

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
Language used to develop this function, one of the following:
  • C++ (default)

  • Java

NAME 'factory'
Name of the factory class that generates the function instance.
LIBRARY library
Name of the library that contains the function. This library must already be loaded by CREATE LIBRARY.
FENCED | NOT FENCED
Enables or disables fenced mode for this function.

Default: FENCED

Privileges

Non-superuser:

  • CREATE privilege on the function's schema

  • USAGE privilege on the function's library

Examples

This example creates an analytic function named an_rank based on the factory class named RankFactory in the AnalyticFunctions library:

=> CREATE ANALYTIC FUNCTION an_rank AS LANGUAGE 'C++'
   NAME 'RankFactory' LIBRARY AnalyticFunctions;
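
Like built-in analytic functions, a UDAnF is invoked with an OVER clause. The following call is a sketch only; the scores table and its columns are assumptions used for illustration:

=> SELECT player, score, an_rank() OVER (ORDER BY score DESC) AS player_rank
   FROM scores;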

See also

Analytic functions (UDAnFs)

11.12.3 - CREATE FILTER

Adds a user-defined load filter function to the catalog.

Adds a user-defined load filter function to the catalog. The library containing the filter function must have been previously added using CREATE LIBRARY.

CREATE FILTER automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading load filter functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.

Syntax

CREATE [ OR REPLACE ] FILTER [ IF NOT EXISTS ]
   [[database.]schema.]function AS
   [ LANGUAGE 'language' ]
   NAME 'factory' LIBRARY library
   [ FENCED | NOT FENCED ]

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
The language used to develop this function, one of the following:
  • C++ (default)

  • Java

  • Python

NAME 'factory'
Name of the factory class that generates the function instance. This is the same name used by the RegisterFactory class.
LIBRARY library
Name of the C++ library shared object file, Python file, or Java Jar file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
Enables or disables fenced mode for this function.

Default: FENCED

Privileges

Superuser

Examples

The following example demonstrates loading a library named IconverterLib, then defining a filter function named Iconverter that is mapped to the IconverterFactory factory class in the library:

=> CREATE LIBRARY IconverterLib as '/opt/vertica/sdk/examples/build/IconverterLib.so';
CREATE LIBRARY
=> CREATE FILTER Iconverter AS LANGUAGE 'C++' NAME 'IconverterFactory' LIBRARY IconverterLib;
CREATE FILTER FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name            | public
function_name          | Iconverter
procedure_type         | User Defined Filter
function_return_type   |
function_argument_type |
function_definition    |
volatility             |
is_strict              | f
is_fenced              | f
comment                |
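
A filter function is used in a COPY statement with the WITH FILTER clause. The following sketch assumes the example Iconverter filter accepts a from_encoding parameter and that the target table and file path are illustrative:

=> CREATE TABLE t (text VARCHAR(200));
CREATE TABLE
=> COPY t FROM '/home/dbadmin/data_utf16.txt' WITH FILTER Iconverter(from_encoding='UTF-16');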

See also

11.12.4 - CREATE FUNCTION (scalar)

Adds a user-defined scalar function (UDSF) to the catalog.

Adds a user-defined scalar function (UDSF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.

A UDSF takes in a single row of data and returns a single value. These functions can be used anywhere a native Vertica function can be used, except in a CREATE TABLE statement's PARTITION BY clause or in any segmentation clause.

CREATE FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading UDxs. When you call the function, Vertica passes the parameters to the function in the library to process.

Syntax

CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
   [[database.]schema.]function AS
   [ LANGUAGE 'language' ]
   NAME 'factory'
   LIBRARY library
   [ FENCED | NOT FENCED ]

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
Language used to develop this function, one of the following:
  • C++ (default)

  • Python

  • Java

  • R

NAME 'factory'
Name of the factory class that generates the function instance.
LIBRARY library
Name of the C++ shared object file, Python file, Java Jar file, or R functions file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
Enables or disables fenced mode for this function. Functions written in Java and R always run in fenced mode.

Default: FENCED

Privileges

  • CREATE privilege on the function's schema

  • USAGE privilege on the function's library

Examples

The following example loads a library named ScalarFunctions and then defines a function named Add2Ints that is mapped to the Add2IntsFactory factory class in the library:

=> CREATE LIBRARY ScalarFunctions AS '/opt/vertica/sdk/examples/build/ScalarFunctions.so';
CREATE LIBRARY
=> CREATE FUNCTION Add2Ints AS LANGUAGE 'C++' NAME 'Add2IntsFactory' LIBRARY ScalarFunctions;
CREATE FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM USER_FUNCTIONS;

-[ RECORD 1 ]----------+----------------------------------------------------
schema_name            | public
function_name          | Add2Ints
procedure_type         | User Defined Function
function_return_type   | Integer
function_argument_type | Integer, Integer
function_definition    | Class 'Add2IntsFactory' in Library 'public.ScalarFunctions'
volatility             | volatile
is_strict              | f
is_fenced              | t
comment                |

=> \x
Expanded display is off.
=> -- Try a simple call to the function
=> SELECT Add2Ints(23,19);
 Add2Ints
----------
       42
(1 row)

The following example uses a scalar function that returns a ROW:

=> CREATE FUNCTION div_with_rem AS LANGUAGE 'C++' NAME 'DivFactory' LIBRARY ScalarFunctions;

=> SELECT div_with_rem(18,5);
        div_with_rem
------------------------------
 {"quotient":3,"remainder":3}
(1 row)

See also

Developing user-defined extensions (UDxs)

11.12.5 - CREATE FUNCTION (SQL)

Stores SQL expressions as functions for use in queries.

Stores SQL expressions as functions for use in queries. User-defined SQL functions are useful for executing complex queries and combining Vertica built-in functions. You can call the function in a given query. If multiple SQL functions with the same name and argument types are in the search path, Vertica calls the first match that it finds.

SQL functions do not support complex types for arguments or return values.

SQL functions are flattened in all cases, including DDL.

Syntax

CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
    [[database.]schema.]function( [ argname argtype[,...] ] )
    RETURN return_type
    AS
    BEGIN
       RETURN expression;
    END;

Arguments

OR REPLACE
If a function of the same name and arguments exists, replace it. If you only change the function arguments, Vertica ignores this option and maintains both functions under the same name.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function
Name of the function to create, which must conform to the conventions described in Identifiers.
argname argtype[,...]
A comma-delimited list of argument names and their data types. Complex types are not supported.
return_type
The data type that this function returns. Complex types are not supported.
RETURN expression
The SQL function body, which can contain built-in functions, operators, and argument names specified in the CREATE FUNCTION statement. The expression must end with a semicolon.

Privileges

Non-superuser: CREATE privilege on the function's schema

Strictness and volatility

Vertica infers the strictness and volatility (stable, immutable, or volatile) of a SQL function from its definition. Vertica then determines the correctness of usage, such as where an immutable function is expected but a volatile function is provided.

SQL functions and views

You can create views on queries that use SQL functions and then query the views. When you create a view, Vertica replaces each call to the SQL function with the function body in the view definition. Therefore, if you later change the body of the SQL function, you must also re-create the view to pick up the change.

Examples

See Creating user-defined SQL functions.
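
The following minimal sketch illustrates the syntax shown above; it stores a simple CASE expression as a function and then calls it in a query:

=> CREATE FUNCTION myzeroifnull(x INT) RETURN INT
   AS BEGIN
      RETURN (CASE WHEN (x IS NOT NULL) THEN x ELSE 0 END);
   END;
CREATE FUNCTION
=> SELECT myzeroifnull(NULL);
 myzeroifnull
--------------
            0
(1 row)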

See also

11.12.6 - CREATE PARSER

Adds a user-defined load parser function to the catalog.

Adds a user-defined load parser function to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.

CREATE PARSER automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading load parser functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.

Syntax

CREATE [ OR REPLACE ] PARSER [ IF NOT EXISTS ]
   [[database.]schema.]function AS
   [ LANGUAGE 'language' ]
   NAME 'factory'
   LIBRARY library
   [ FENCED | NOT FENCED ]

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
The language used to develop this function, one of the following:
  • C++ (default)

  • Java

  • Python

NAME 'factory'
Name of the factory class that generates the function instance. This is the same name used by the RegisterFactory class.
LIBRARY library
Name of the C++ library shared object file, Python file, or Java Jar file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
Enables or disables fenced mode for this function.

Default: FENCED

Privileges

Superuser

Examples

The following example demonstrates loading a library named BasicIntegerParserLib, then defining a parser function named BasicIntegerParser that is mapped to the BasicIntegerParserFactory factory class in the library:

=> CREATE LIBRARY BasicIntegerParserLib as '/opt/vertica/sdk/examples/build/BasicIntegerParser.so';
CREATE LIBRARY
=> CREATE PARSER BasicIntegerParser AS LANGUAGE 'C++' NAME 'BasicIntegerParserFactory' LIBRARY BasicIntegerParserLib;
CREATE PARSER FUNCTION
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name            | public
function_name          | BasicIntegerParser
procedure_type         | User Defined Parser
function_return_type   |
function_argument_type |
function_definition    |
volatility             |
is_strict              | f
is_fenced              | f
comment                |
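
A parser function is used in a COPY statement with the WITH PARSER clause. The following sketch assumes a target table named samples and that the BasicIntegerParser example parses integer values from the input:

=> CREATE TABLE samples (i INT);
CREATE TABLE
=> COPY samples FROM STDIN WITH PARSER BasicIntegerParser();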

See also

11.12.7 - CREATE SOURCE

Adds a user-defined load source function to the catalog.

Adds a user-defined load source function to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.

CREATE SOURCE automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading load source functions. When you call the SQL function, Vertica passes the input table to the function in the library to process.

Syntax

CREATE [ OR REPLACE ] SOURCE [ IF NOT EXISTS ]
    [[database.]schema.]function AS
    [ LANGUAGE 'language' ]
    NAME 'factory'
    LIBRARY library
    [ FENCED | NOT FENCED ]

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
Language used to develop this function, one of the following:
  • C++ (default)

  • Java

NAME 'factory'
Name of the factory class that generates the function instance. This is the same name used by the RegisterFactory class.
LIBRARY library
Name of the C++ library shared object file or Java Jar file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
Enables or disables fenced mode for this function.

Default: FENCED

Privileges

Superuser

Examples

The following example demonstrates loading a library named curllib, then defining a source function named curl that is mapped to the CurlSourceFactory factory class in the library:

=> CREATE LIBRARY curllib as '/opt/vertica/sdk/examples/build/cURLLib.so';
CREATE LIBRARY
=> CREATE SOURCE curl AS LANGUAGE 'C++' NAME 'CurlSourceFactory' LIBRARY curllib;
CREATE SOURCE
=> \x
Expanded display is on.
=> SELECT * FROM user_functions;
-[ RECORD 1 ]----------+--------------------
schema_name            | public
function_name          | curl
procedure_type         | User Defined Source
function_return_type   |
function_argument_type |
function_definition    |
volatility             |
is_strict              | f
is_fenced              | f
comment                |
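
A source function is used in place of the FROM clause in a COPY statement. The following sketch assumes the example curl source accepts a url parameter; the target table and URL are illustrative:

=> CREATE TABLE remote_data (line VARCHAR(500));
CREATE TABLE
=> COPY remote_data WITH SOURCE curl(url='http://example.com/data.txt');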

See also

11.12.8 - CREATE TRANSFORM FUNCTION

Adds a user-defined transform function (UDTF) to the catalog.

Adds a user-defined transform function (UDTF) to the catalog. The library containing the function must have been previously added using CREATE LIBRARY.

CREATE TRANSFORM FUNCTION automatically determines the function parameters and return value from data supplied by the factory class. Vertica supports overloading transform functions. When you call the SQL function, Vertica passes the input table to the transform function in the library to process.

Syntax

CREATE [ OR REPLACE ] TRANSFORM FUNCTION [ IF NOT EXISTS ]
    [[database.]schema.]function AS
    [ LANGUAGE 'language' ]
    NAME 'factory'
    LIBRARY library
    [ FENCED | NOT FENCED ]

Arguments

OR REPLACE

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

IF NOT EXISTS

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

function

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

LANGUAGE 'language'
The language used to develop this function, one of the following:
  • C++ (default)

  • Java

  • R

  • Python

NAME 'factory'
Name of the factory class that generates the function instance.
LIBRARY library
Name of the C++ shared object file, Python file, Java Jar file, or R functions file. This library must already have been loaded by CREATE LIBRARY.
FENCED | NOT FENCED
Enables or disables fenced mode for this function. Functions written in Java and R always run in fenced mode.

Default: FENCED

Privileges

Non-superuser:

  • CREATE privilege on the function's schema

  • USAGE privilege on the function's library

Restrictions

A query that includes a UDTF cannot:

Examples

The following example loads a library named TransformFunctions and then defines a function named tokenize that is mapped to the TokenFactory factory class in the library:

=> CREATE LIBRARY TransformFunctions AS
   '/home/dbadmin/TransformFunctions.so';
CREATE LIBRARY
=> CREATE TRANSFORM FUNCTION tokenize
   AS LANGUAGE 'C++' NAME 'TokenFactory' LIBRARY TransformFunctions;
CREATE TRANSFORM FUNCTION
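
A transform function is invoked with an OVER clause. The following sketch assumes a documents table with doc_id and body columns, and that the tokenize example splits text into rows of tokens:

=> SELECT tokenize(body) OVER (PARTITION BY doc_id) FROM documents;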

See also

11.13 - CREATE HCATALOG SCHEMA

Define a schema for data stored in a Hive data warehouse using the HCatalog Connector.

Define a schema for data stored in a Hive data warehouse using the HCatalog Connector. For more information, see Using the HCatalog Connector.

Most of the optional parameters are read out of Hadoop configuration files if available. If you copied the Hadoop configuration files as described in Configuring Vertica for HCatalog, you can omit most parameters. By default this statement uses the values specified in those configuration files. If the configuration files are complete, the following is a valid statement:

=> CREATE HCATALOG SCHEMA hcat;

If a value is not specified in the configuration files and a default is shown in the parameter list, then that default value is used.

Some parameters apply only if you are using HiveServer2 (the default). Others apply only if you are using WebHCat, a legacy Hadoop service. When using HiveServer2, use HIVESERVER2_HOSTNAME to specify the server host. When using WebHCat, use WEBSERVICE_HOSTNAME to specify the server host.

If you need to use WebHCat you must also set the HCatalogConnectorUseHiveServer2 configuration parameter to 0. See Hadoop parameters.
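
For example:

=> ALTER DATABASE DEFAULT SET HCatalogConnectorUseHiveServer2 = 0;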

After creating the schema, you can change many (but not all) parameters using ALTER HCATALOG SCHEMA.

Syntax

CREATE HCATALOG SCHEMA [IF NOT EXISTS] schemaName
    [AUTHORIZATION user-id]
    [WITH [param=value [,...] ] ]

Arguments

Argument Description
[IF NOT EXISTS] If given, the statement exits without an error when the schema named in schemaName already exists.
schemaName The name of the schema to create in the Vertica catalog. The tables in the Hive database will be available through this schema.
AUTHORIZATION user-id The name of a Vertica account to own the schema being created. This parameter is ignored if Kerberos authentication is being used; in that case the current vsql user is used.

Parameters

Parameter Description
HOSTNAME

The hostname, IP address, or URI of the database server that stores the Hive data warehouse's metastore information.

If you specify this parameter and do not also specify PORT, then this value must be in the URI format used for hive.metastore.uris in hive-site.xml.

If the Hive metastore supports High Availability, you can specify a comma-separated list of URIs for this value.

If this value is not specified, hive-site.xml must be available.

PORT The port number on which the metastore database is running. If you specify this parameter, you must also specify HOSTNAME and it must be a name or IP address (not a URI).
HIVESERVER2_HOSTNAME

The hostname or IP address of the HiveServer2 service. This parameter is optional if in hive-site.xml you set one of the following properties:

  • hive.server2.thrift.bind.host to a valid host

  • hive.server2.support.dynamic.service.discovery to true

This parameter is ignored if you are using WebHCat.

WEBSERVICE_HOSTNAME The hostname or IP address of the WebHCat service, if using WebHCat instead of HiveServer2. If this value is not specified, webhcat-site.xml must be available.
WEBSERVICE_PORT The port number on which the WebHCat service is running, if using WebHCat instead of HiveServer2. If this value is not specified, webhcat-site.xml must be available.
WEBHDFS_ADDRESS The host and port ("host:port") for the WebHDFS service. This parameter is used only for reading ORC and Parquet files. If this value is not set, hdfs-site.xml must be available to read these file types through the HCatalog Connector.
HCATALOG_SCHEMA The name of the Hive schema or database that the Vertica schema is being mapped to. The default is schemaName.
CUSTOM_PARTITIONS Whether the Hive schema uses custom partition locations ('YES' or 'NO'). If the schema uses custom partition locations, then Vertica queries Hive to get those locations when executing queries. These additional Hive queries can be expensive, so use this parameter only if you need to. The default is 'NO' (disabled). For more information, see Using Partitioned Data.
HCATALOG_USER The username of the HCatalog user to use when making calls to the HiveServer2 or WebHCat server. The default is the current database user.
HCATALOG_CONNECTION_TIMEOUT The number of seconds the HCatalog Connector waits for a successful connection to the HiveServer or WebHCat server. A value of 0 means wait indefinitely.
HCATALOG_SLOW_TRANSFER_LIMIT The lowest data transfer rate (in bytes per second) from the HiveServer2 or WebHCat server that the HCatalog Connector accepts. See HCATALOG_SLOW_TRANSFER_TIME for details.
HCATALOG_SLOW_TRANSFER_TIME The number of seconds the HCatalog Connector waits before enforcing the data transfer rate lower limit. After this time has passed, the HCatalog Connector tests whether the data transfer rate is at least as fast as the value set in HCATALOG_SLOW_TRANSFER_LIMIT. If it is not, then the HCatalog Connector breaks the connection and terminates the query.
SSL_CONFIG The path of the Hadoop ssl-client.xml configuration file. This parameter is required if you are using HiveServer2 and it uses SSL wire encryption. This parameter is ignored if you are using WebHCat.

The default values for HCATALOG_CONNECTION_TIMEOUT, HCATALOG_SLOW_TRANSFER_LIMIT, and HCATALOG_SLOW_TRANSFER_TIME are set by the database configuration parameters HCatConnectionTimeout, HCatSlowTransferLimit, and HCatSlowTransferTime. See Hadoop parameters for more information.

Configuration files

The HCatalog Connector uses the following values from the Hadoop configuration files if you do not override them when creating the schema.

hive-site.xml:
  • hive.server2.thrift.bind.host (used for HIVESERVER2_HOSTNAME)
  • hive.server2.thrift.port
  • hive.server2.transport.mode
  • hive.server2.authentication
  • hive.server2.authentication.kerberos.principal
  • hive.server2.support.dynamic.service.discovery
  • hive.zookeeper.quorum (used as HIVESERVER2_HOSTNAME if dynamic service discovery is enabled)
  • hive.zookeeper.client.port
  • hive.server2.zookeeper.namespace
  • hive.metastore.uris (used for HOSTNAME and PORT)

ssl-client.xml:
  • ssl.client.truststore.location
  • ssl.client.truststore.password

Privileges

The user must be a superuser or be granted all permissions on the database to use this statement.

The user also requires access to Hive data in one of the following ways:

  • Have USAGE permissions on hcatalog_schema, if Hive does not use an authorization service (Sentry or Ranger) to manage access.

  • Have permission through an authorization service, if Hive uses it to manage access. In this case you must either set EnableHCatImpersonation to 0, to access data as the Vertica principal, or grant users access to the HDFS data. For Sentry, you can use ACL synchronization to manage HDFS access.

  • Be the dbadmin user, with or without an authorization service.

Examples

The following example shows how to use CREATE HCATALOG SCHEMA to define a new schema for tables stored in a Hive database and then query the system tables that contain information about those tables:

=> CREATE HCATALOG SCHEMA hcat WITH HOSTNAME='hcathost' PORT=9083
   HCATALOG_SCHEMA='default' HIVESERVER2_HOSTNAME='hs.example.com'
   SSL_CONFIG='/etc/hadoop/conf/ssl-client.xml' HCATALOG_USER='admin';
CREATE SCHEMA
=> \x
Expanded display is on.

=> SELECT * FROM v_catalog.hcatalog_schemata;
-[ RECORD 1 ]----------------+-------------------------------------------
schema_id                    | 45035996273748224
schema_name                  | hcat
schema_owner_id              | 45035996273704962
schema_owner                 | admin
create_time                  | 2017-12-05 14:43:03.353404-05
hostname                     | hcathost
port                         | -1
hiveserver2_hostname         | hs.example.com
webservice_hostname          |
webservice_port              | 50111
webhdfs_address              | hs.example.com:50070
hcatalog_schema_name         | default
ssl_config                   | /etc/hadoop/conf/ssl-client.xml
hcatalog_user_name           | admin
hcatalog_connection_timeout  | -1
hcatalog_slow_transfer_limit | -1
hcatalog_slow_transfer_time  | -1
custom_partitions            | f

=> SELECT * FROM v_catalog.hcatalog_table_list;
-[ RECORD 1 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | nation
hcatalog_user_name | admin
-[ RECORD 2 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | raw
hcatalog_user_name | admin
-[ RECORD 3 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | raw_rcfile
hcatalog_user_name | admin
-[ RECORD 4 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | raw_sequence
hcatalog_user_name | admin

The following example shows how to specify more than one metastore host.

=> CREATE HCATALOG SCHEMA hcat
   WITH HOSTNAME='thrift://node1.example.com:9083,thrift://node2.example.com:9083';

The following example shows how to include custom partition locations:

=> CREATE HCATALOG SCHEMA hcat WITH HCATALOG_SCHEMA='default'
    HIVESERVER2_HOSTNAME='hs.example.com'
    CUSTOM_PARTITIONS='yes';

11.14 - CREATE KEY

Creates a private key.

Creates a private key.

Syntax

CREATE [TEMP[ORARY]] KEY name
       { 'AES' [ PASSWORD 'password' ] | 'RSA' }
       {LENGTH length | AS key_text}

Parameters

TEMPORARY
Create with session scope. The key is stored in memory and is valid only for the current session.
name
The name of the key.
password
Password for the key.
length
Size of the key in bits.

Example: 2048

key_text
The contents of the key to import.

Example:

-----BEGIN RSA PRIVATE KEY-----...ABCD1234...-----END RSA PRIVATE KEY-----

Privileges

Superuser

Examples

See Generating TLS certificates and keys.

See also

11.15 - CREATE LIBRARY

Loads a library containing user-defined extensions (UDxs) into the Vertica catalog.

Loads a library containing user-defined extensions (UDxs) into the Vertica catalog. Vertica automatically distributes copies of the library file and supporting libraries to all cluster nodes.

Because libraries are added to the database catalog, they persist across database restarts.

After loading a library in the catalog, you can use statements such as CREATE FUNCTION to define the extensions contained in the library. See Developing user-defined extensions (UDxs) for details.

Syntax

CREATE [OR REPLACE] LIBRARY
    [[database.]schema.]name
    AS 'path'
    [ DEPENDS 'depends-path' ]
    [ LANGUAGE 'language' ]

Arguments

OR REPLACE
If a library with the same name exists, replace it. UDxs defined in the catalog that reference the updated library automatically start using the new library file.

If you do not use this directive and the library already exists, the CREATE statement returns with an error.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

name
Name of the library to create. This is the name used when creating functions in the library (see Creating UDx Functions). While not required, it is good practice to match the file name.
AS path
Path of the library to load, either an absolute path on the initiator node file system or a URI for another supported file system or object store.
DEPENDS 'depends-path'

Files or libraries on which this library depends, one or more files or directories on the initiator node file system or other supported file systems or object stores. For a directory, end the path entry with a slash (/), optionally followed by a wildcard (*). To specify more than one file, separate entries with colons (:).

If any path entry contains colons, such as a URI, place brackets around the entire DEPENDS path and use double quotes for the individual path elements, as in the following example:

DEPENDS '["s3://mybucket/gson-2.3.1.jar"]'

To specify libraries with multiple directory levels, see Multi-level Library Dependencies.

DEPENDS has no effect for libraries written in R. R packages must be installed locally on each node, including external dependencies.

If a Java library depends on native libraries (SO files), use DEPENDS to specify the path and call System.loadLibrary() in your UDx to load the native libraries from that path.

LANGUAGE 'language'
The programming language of the functions in the library, one of:
  • C++ (default)

  • Python

  • Java

  • R

Privileges

Superuser, or UDXDEVELOPER and CREATE on the schema. Non-superusers must explicitly enable the UDXDEVELOPER role, as in the following example:

=> SET ROLE UDXDEVELOPER;
SET

-- Not required, but you can confirm the role as follows:
=> SHOW ENABLED ROLES;
     name      |   setting
---------------+--------------
 enabled roles | udxdeveloper
(1 row)

=> CREATE LIBRARY MyLib AS '/home/dbadmin/my_lib.so';
CREATE LIBRARY

-- Create functions...

-- UDXDEVELOPER also grants DROP (replace):
=> CREATE OR REPLACE LIBRARY MyLib AS '/home/dbadmin/my_lib.so';

Requirements

  • Vertica makes its own copies of the library files. Later modification or deletion of the original files specified in the statement does not affect the library defined in the catalog. To update the library, use ALTER LIBRARY.

  • Loading a library does not guarantee that it functions correctly. CREATE LIBRARY performs some basic checks on the library file to verify it is compatible with Vertica. The statement fails if it detects that the library was not correctly compiled or it finds other basic incompatibilities. However, CREATE LIBRARY cannot detect many other issues in shared libraries.

Multi-level library dependencies

If a DEPENDS clause specifies a library with multiple directory levels, Vertica follows the library path to include all subdirectories of that library. For example, the following CREATE LIBRARY statement enables the UDx library mylib to import all Python packages and modules that it finds in subdirectories of site-packages:

=> CREATE LIBRARY mylib AS '/path/to/python_udx' DEPENDS '/path/to/python/site-packages' LANGUAGE 'Python';

Examples

Load a library in the home directory of the dbadmin account:

=> CREATE LIBRARY MyFunctions AS '/home/dbadmin/my_functions.so';

Load a library located in the directory where you started vsql:

=> \set libfile '\''`pwd`'/MyOtherFunctions.so\'';
=> CREATE LIBRARY MyOtherFunctions AS :libfile;

Load a library from the cloud:

=> CREATE LIBRARY SomeFunctions AS 'S3://mybucket/extensions.so';

Load a library that depends on multiple JAR files in the same directory:

=> CREATE LIBRARY DeleteVowelsLib AS '/home/dbadmin/JavaLib.jar'
   DEPENDS '/home/dbadmin/mylibs/*' LANGUAGE 'Java';

Load a library with multiple explicit dependencies:

=> CREATE LIBRARY mylib AS '/path/to/java_udx'
   DEPENDS '/path/to/jars/this.jar:/path/to/jars/that.jar' LANGUAGE 'Java';

Load a library with dependencies in the cloud:

=> CREATE LIBRARY s3lib AS 's3://mybucket/UdlLib.jar'
   DEPENDS '["s3://mybucket/gson-2.3.1.jar"]' LANGUAGE 'Java';

11.16 - CREATE LOAD BALANCE GROUP

Creates a group of network addresses that can be targeted by a load balancing routing rule.

Creates a group of network addresses that can be targeted by a load balancing routing rule. You create a group either using a list of network addresses, or basing it on one or more fault groups or subclusters.

Syntax

CREATE LOAD BALANCE GROUP group_name WITH {
      ADDRESS address[,...]
    | FAULT GROUP  fault_group[,...] FILTER 'IP_range'
    | SUBCLUSTER subcluster[,...] FILTER 'IP_range'
    }
    [ POLICY 'policy_setting' ]

Parameters

group_name
Name of the group to create. You use this name later when defining load balancing rules.
address[,...]
Comma-delimited list of network addresses you created earlier.
fault_group[,...]
Comma-delimited list of fault groups to use as the basis of the load balance group.
subcluster[,...]
Comma-delimited list of subclusters to use as the basis of the load balance group.
IP_range
Range of IP addresses in CIDR notation to include in the load balance group from the fault groups or subclusters. This range can be either IPv4 or IPv6. Only nodes that have a network address with an IP address that falls within this range are added to the load balancing group.
policy_setting
Determines how the initially-contacted node chooses a target from the group, one of the following:
  • ROUNDROBIN (default) rotates among the available members of the load balancing group. The initially-contacted node keeps track of which node it chose last time, and chooses the next one in the cluster.

  • RANDOM chooses an available node from the group randomly.

  • NONE disables load balancing.

Privileges

Superuser

Examples

The following statement demonstrates creating a load balance group that contains several network addresses:

=> CREATE NETWORK ADDRESS addr01 ON v_vmart_node0001 WITH '10.20.110.21';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr02 ON v_vmart_node0002 WITH '10.20.110.22';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr03 on v_vmart_node0003 WITH '10.20.110.23';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr04 on v_vmart_node0004 WITH '10.20.110.24';
CREATE NETWORK ADDRESS
=> CREATE LOAD BALANCE GROUP group_1 WITH ADDRESS addr01, addr02;
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP group_2 WITH ADDRESS addr03, addr04;
CREATE LOAD BALANCE GROUP

=> SELECT * FROM LOAD_BALANCE_GROUPS;
    name    |   policy   |     filter      |         type          | object_name
------------+------------+-----------------+-----------------------+-------------
 group_1    | ROUNDROBIN |                 | Network Address Group | addr01
 group_1    | ROUNDROBIN |                 | Network Address Group | addr02
 group_2    | ROUNDROBIN |                 | Network Address Group | addr03
 group_2    | ROUNDROBIN |                 | Network Address Group | addr04
(4 rows)
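
The groups above use the default ROUNDROBIN policy. To choose targets randomly instead, you could include the POLICY clause when creating a group; this sketch reuses the addresses created above:

=> CREATE LOAD BALANCE GROUP group_random WITH ADDRESS addr01, addr02 POLICY 'RANDOM';
CREATE LOAD BALANCE GROUP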

This example demonstrates creating a load balancing group using a fault group:

=> CREATE FAULT GROUP fault_1;
CREATE FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0001;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0002;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0003;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0004;
ALTER FAULT GROUP
=> SELECT node_name,node_address,node_address_family,export_address
   FROM v_catalog.nodes;
    node_name     | node_address | node_address_family | export_address
------------------+--------------+---------------------+----------------
 v_vmart_node0001 | 10.20.110.21 | ipv4                | 10.20.110.21
 v_vmart_node0002 | 10.20.110.22 | ipv4                | 10.20.110.22
 v_vmart_node0003 | 10.20.110.23 | ipv4                | 10.20.110.23
 v_vmart_node0004 | 10.20.110.24 | ipv4                | 10.20.110.24
(4 rows)

=> CREATE LOAD BALANCE GROUP group_all WITH FAULT GROUP fault_1 FILTER
   '0.0.0.0/0';
CREATE LOAD BALANCE GROUP

=> CREATE LOAD BALANCE GROUP group_some WITH FAULT GROUP fault_1 FILTER
   '10.20.110.21/30';
CREATE LOAD BALANCE GROUP

=> SELECT * FROM LOAD_BALANCE_GROUPS;
      name      |   policy   |     filter      |         type          | object_name
----------------+------------+-----------------+-----------------------+-------------
 group_all      | ROUNDROBIN | 0.0.0.0/0       | Fault Group           | fault_1
 group_some     | ROUNDROBIN | 10.20.110.21/30 | Fault Group           | fault_1
(2 rows)

See also

11.17 - CREATE LOCAL TEMPORARY VIEW

Creates or replaces a local temporary view.

Creates or replaces a local temporary view. Views are read only, so they do not support insert, update, delete, or copy operations. Local temporary views are session-scoped, so they are visible only to their creator in the current session. Vertica drops the view when the session ends.

Syntax

CREATE [OR REPLACE] LOCAL TEMP[ORARY] VIEW view [ (column[,...] ) ] AS query

Parameters

OR REPLACE
Specifies to overwrite the existing view view. If you omit this option and view already exists, the statement returns an error.
view
Identifies the view to create, where view conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
column[,...]
List of up to 9800 names to use as view column names. Vertica maps view column names to query columns according to the order of their respective lists. By default, the view uses column names as they are specified in the query.
AS query
A SELECT statement that the temporary view executes. The SELECT statement can reference tables, temporary tables, and other views.

Privileges

See Creating views.

Examples

The following CREATE LOCAL TEMPORARY VIEW statement creates the temporary view myview. This view sums all individual incomes of customers listed in the store.store_sales_fact table, and groups results by state:

=> CREATE LOCAL TEMP VIEW myview AS
   SELECT SUM(annual_income), customer_state FROM public.customer_dimension
     WHERE customer_key IN (SELECT customer_key FROM store.store_sales_fact)
     GROUP BY customer_state
     ORDER BY customer_state ASC;

The following example uses the temporary view myview with a WHERE clause that limits the results to combined salaries greater than $2 billion:

=> SELECT * FROM myview WHERE SUM > 2000000000;


     SUM     | customer_state
-------------+----------------
  2723441590 | AZ
 29253817091 | CA
  4907216137 | CO
  3769455689 | CT
  3330524215 | FL
  4581840709 | IL
  3310667307 | IN
  2793284639 | MA
  5225333668 | MI
  2128169759 | NV
  2806150503 | PA
  2832710696 | TN
 14215397659 | TX
  2642551509 | UT
(14 rows)

See also

11.18 - CREATE LOCATION

Creates a storage location where Vertica can store data.

Creates a storage location where Vertica can store data. After you create the location, you create storage policies that assign the storage location to the database objects that will store data in the location.

Syntax

CREATE LOCATION 'path'
    [NODE 'node' | ALL NODES]
    [ SHARED ]
    [ COMMUNAL ]
    [ USAGE 'usage']
    [LABEL 'label']
    [LIMIT 'size']

Arguments

path
Where to store this location's data. The type of file system on which the location is based determines the path format:

HDFS storage locations have additional requirements.

ALL NODES | NODE 'node'
The node or nodes on which the storage location is defined, one of the following:
  • ALL NODES (default): Create the storage location on each node that is currently part of the cluster. If you later add nodes, you must also create the location on those nodes. If SHARED is also specified, create the storage location once for use by all nodes.
  • NODE 'node': Create the storage location on a single node, where node is the name of the node in the NODES system table. You cannot use this option with SHARED.
SHARED
Indicates the location set by path is shared (used by all nodes) rather than local to each node. You cannot specify individual nodes with SHARED; you must use ALL NODES.

Most remote file systems such as HDFS and S3 are shared. For these file systems, the path argument represents a single location in the remote file system where all nodes store data. If using a remote file system, you must specify SHARED, even for one-node clusters.

If path is set to S3 communal storage, SHARED is always implied and can be omitted.

COMMUNAL
The location set by path is to be a communal storage location.

If you specify COMMUNAL, you do not need to provide the SHARED, ALL NODES, or NODE options because the storage location is necessarily shared by all nodes in the database. For the USAGE option, you must specify a value of DATA.

After the communal storage location is created, you can assign objects to it using the SET_OBJECT_STORAGE_POLICY function.

USAGE 'usage'
The type of data the storage location can hold, where usage is one of the following:
  • DATA,TEMP (default): The storage location can store persistent and temporary DML-generated data, and data for temporary tables.

  • TEMP: A path-specified location to store DML-generated temporary data. If path is set to S3, then this location is used only when the RemoteStorageForTemp configuration parameter is set to 1, and TEMP must be qualified with ALL NODES SHARED. For details, see S3 Storage of Temporary Data.

  • DATA: The storage location can only store persistent data.

  • USER: Users with READ and WRITE privileges can access data and external tables of this storage location.

  • DEPOT: The storage location is used in Eon Mode to store the depot. Only create DEPOT storage locations on local Linux file systems.

    Vertica allows a single DEPOT storage location per node. If you want to move your depot to different location (on a different file system, for example) you must first drop the old depot storage location, then create the new location.

LABEL 'label'
A label for the storage location. You use this label later when assigning the storage location to data objects.
LIMIT 'size'

Valid only if the storage location usage type is set to DEPOT, specifies the maximum amount of disk space that the depot can allocate from the storage location's file system.

You can specify size in two ways:

  • integer%: Percentage of storage location disk size.

  • integer{K|M|G|T}: Amount of storage location disk size in kilobytes, megabytes, gigabytes, or terabytes.

If you do not specify a limit, it is set to 60 percent.

Privileges

Superuser

File system access

The Vertica process must have read and write permissions to the location where data is to be stored. Each file system has its own requirements:

File system Requirements
Linux Database superuser account (usually named dbadmin) must have full read and write access to the directory in the path argument.
HDFS without Kerberos A Hadoop user whose username matches the Vertica database administrator username (usually dbadmin) must have read and write access to the HDFS directory specified in the path argument. The UseServerIdentityOverUserIdentity configuration parameter must be set to true in the user session; otherwise Vertica tries to use the identity associated with the logged-in user.
HDFS with Kerberos A Hadoop user whose username matches the principal in the keytab file on each Vertica node must have read and write access to the HDFS directory stored in the path argument. This is not the same as the database administrator username. The UseServerIdentityOverUserIdentity configuration parameter must be set to true in the user session; otherwise Vertica tries to use the Kerberos principal associated with the logged-in user.
Object stores (S3, GCS, Azure) Database-level credentials must be specified and provide full read and write access to the location in the path argument. If session-level credentials are specified they are used, directly overriding the use of the storage location.

Examples

Create a storage location in the local Linux file system for temporary data storage:

=> CREATE LOCATION '/home/dbadmin/testloc' USAGE 'TEMP' LABEL 'tempfiles';

Create a storage location on HDFS. The HDFS cluster does not use Kerberos:

=> CREATE LOCATION 'hdfs://hadoopNS/vertica/colddata' ALL NODES SHARED
   USAGE 'data' LABEL 'coldstorage';

Create the same storage location, but on a Hadoop cluster that uses Kerberos. Note the output that reports the principal being used:

=> CREATE LOCATION 'hdfs://hadoopNS/vertica/colddata' ALL NODES SHARED
   USAGE 'data' LABEL 'coldstorage';
NOTICE 0: Performing HDFS operations using kerberos principal [vertica/hadoop.example.com]
CREATE LOCATION

Create a location for user data, grant access to it, and use it to create an external table:

=> CREATE LOCATION '/tmp' ALL NODES USAGE 'user';
CREATE LOCATION
=> GRANT ALL ON LOCATION '/tmp' to Bob;
GRANT PRIVILEGE
=> CREATE EXTERNAL TABLE ext1 (x integer) AS COPY FROM '/tmp/data/ext1.dat' DELIMITER ',';
CREATE TABLE

Create a user storage location on S3 and a role, so that users without their own S3 credentials can read data from S3 using the server credential:

   --- set database-level credential (once):
=> ALTER DATABASE DEFAULT SET AWSAuth = 'myaccesskeyid123456:mysecretaccesskey123456789012345678901234';

=> CREATE LOCATION 's3://datalake' SHARED USAGE 'USER' LABEL 's3user';

=> CREATE ROLE ExtUsers;
   --- Assign users to this role using GRANT (Role).

=> GRANT READ ON LOCATION 's3://datalake' TO ExtUsers;

Create a communal storage location on S3 with the label cold-storage:

=> CREATE LOCATION 's3://bucket/s3' COMMUNAL USAGE 'DATA' LABEL 'cold-storage';
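
As noted for the LIMIT parameter, a DEPOT storage location (Eon Mode) can be capped at creation time. This is a sketch only; the path is illustrative, and each node can have only one depot location:

=> CREATE LOCATION '/home/dbadmin/depot' ALL NODES USAGE 'DEPOT' LIMIT '80%';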

See also

11.19 - CREATE NAMESPACE

Eon Mode only

Creates a namespace for an Eon Mode database.

When you create a database, a namespace named default_namespace is created with the shard count defined during database creation. This statement lets you create additional namespaces. Each namespace has a distinct shard count that determines the segmentation of its member objects. For more information, see Managing namespaces and Namespaces and shards.

Every schema and table in an Eon Mode database belongs to a namespace. When you create a table or schema, you optionally specify the namespace under which to create the object. If no namespace is specified, the table or schema is created in default_namespace.

Syntax

CREATE NAMESPACE namespace [ SHARD COUNT shards ]

Arguments

namespace
Name of the namespace. Cannot be the same as the initial substring of any schema name containing a period. For example, if a schema named prefix.s exists, you cannot create a namespace named prefix.
SHARD COUNT shards
Shard count of the namespace. If unspecified, the shard count is set to that of default_namespace.

Privileges

Superuser

Examples

Create the analytics namespace with a shard count of 12:

=> CREATE NAMESPACE analytics SHARD COUNT 12;
CREATE NAMESPACE

You can view all namespaces in your database by querying the NAMESPACES system table:

=> SELECT namespace_name, is_default, default_shard_count FROM NAMESPACES;
  namespace_name   | is_default | default_shard_count
-------------------+------------+---------------------
 default_namespace | t          |                   6
 analytics         | f          |                  12
(2 rows)
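
Because every schema and table belongs to a namespace, you can qualify object names with the namespace when you create them. The following sketch assumes the namespace-qualified (three-part) naming described in Managing namespaces; the schema and table names are illustrative:

=> CREATE SCHEMA analytics.web;
CREATE SCHEMA
=> CREATE TABLE analytics.web.page_views (page_id INT, views INT);
CREATE TABLE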

For a more in-depth example, see Managing namespaces.

See also

11.20 - CREATE NETWORK ADDRESS

Creates a network address that can be used as part of a connection load balancing policy.

Creates a network address that can be used as part of a connection load balancing policy. A network address creates a name in the Vertica catalog for an IP address and port number associated with a node. Nodes can have multiple network addresses, up to one for each IP address they have on the network.

Syntax

CREATE NETWORK ADDRESS name ON node WITH 'ip-address' [PORT port-number] [ENABLED | DISABLED]

Parameters

name
The name of the new network address. Use this name when creating connection load balancing groups.
node
The name of the node on which to create the network address. This should be name of the node as it appears in the node_name column of system table NODES.
ip-address
The IPv4 or IPv6 address on the node to associate with the network address.
PORT port-number
Sets the port number for the network address. You must supply a network address when altering the port number.
ENABLED | DISABLED
Enables or disables the network address.

Privileges

Superuser

Examples

Create three network addresses, one for each node in a three-node cluster:

=> SELECT node_name,export_address from v_catalog.nodes;
      node_name      | export_address
---------------------+----------------
 v_vmart_br_node0001 | 10.20.100.62
 v_vmart_br_node0002 | 10.20.100.63
 v_vmart_br_node0003 | 10.20.100.64
(3 rows)

=> CREATE NETWORK ADDRESS node01 ON v_vmart_br_node0001 WITH '10.20.100.62';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node02 ON v_vmart_br_node0002 WITH '10.20.100.63';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node03 ON v_vmart_br_node0003 WITH '10.20.100.64';
CREATE NETWORK ADDRESS

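A network address can also specify a port other than the default. The following sketch (the address name and port number are illustrative) associates an additional address on the first node with port 5434:

=> CREATE NETWORK ADDRESS node01_5434 ON v_vmart_br_node0001 WITH '10.20.100.62' PORT 5434;
CREATE NETWORK ADDRESS
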

11.21 - CREATE NETWORK INTERFACE

Identifies a network interface to which a node belongs.

Identifies a network interface to which a node belongs.

Use this statement when you want to configure import/export operations from individual nodes to other Vertica clusters. By default, when you install Vertica, it creates interfaces for all connected networks. You would only need CREATE NETWORK INTERFACE in situations where the network topology has changed since you installed Vertica.

Syntax

CREATE NETWORK INTERFACE network-interface-name ON node-name [WITH] 'node-IP-address' [PORT port-number] [ENABLED | DISABLED]

Parameters

network-interface-name
The name you assign to the network interface, where network-interface-name conforms to conventions described in Identifiers.
node-name
The name of the node.
node-IP-address
The node's IP address, either a public or private IP address. For more information, see Using Public and Private IP Networks.
PORT port-number
Sets the port number for the network interface. You must supply a network interface when altering the port number.
[ENABLED | DISABLED]
Enables or disables the network interface.

Privileges

Superuser

Examples

Create a network interface:

=> CREATE NETWORK INTERFACE mynetwork ON v_vmart_node0001 WITH '123.4.5.6' PORT 456 ENABLED;

11.22 - CREATE NOTIFIER

Creates a push-based notifier to send event notifications and messages out of Vertica.

Creates a push-based notifier to send event notifications and messages out of Vertica.

Syntax

CREATE NOTIFIER [ IF NOT EXISTS ] notifier-name ACTION 'notifier-type'
    [ ENABLE | DISABLE ]
    [ MAXPAYLOAD 'integer{K|M}' ]
    MAXMEMORYSIZE 'integer{K|M|G|T}'
    [ TLS CONFIGURATION tls-configuration ]
    [ TLSMODE 'tls-mode' ]
    [ CA BUNDLE bundle-name [ CERTIFICATE certificate-name ] ]
    [ IDENTIFIED BY 'uuid' ]
    [ [NO] CHECK COMMITTED ]
    [ PARAMETERS 'adapter-params' ]

Parameters

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

notifier-name
This notifier's unique identifier.
ACTION 'notifier-type'
String, the type of notifier, one of the following:
  • URL, with the following format, that identifies one or more target Kafka servers:

    kafka://kafka-server-ip-address:port-number
    

    To enable failover when a Kafka server is unavailable, specify additional hosts in a comma-delimited list. For example:

    kafka://192.0.2.0:9092,192.0.2.1:9092,192.0.2.2:9092
    
  • syslog: Notifications are sent to syslog. To use notifiers of this type, you must set the SyslogEnabled parameter:

    => ALTER DATABASE DEFAULT SET SyslogEnabled = 1;
    

    Events monitored by this notifier type are not logged to MONITORING_EVENTS or vertica.log.

  • sns: Notifications are sent to a Simple Notification Service (SNS) endpoint.

ENABLE | DISABLE
Specifies whether to enable or disable the notifier.

Default: ENABLE.

MAXPAYLOAD 'integer{K|M}'
The maximum size of the message, up to 10^9 bytes, specified in kilobytes or megabytes.

The following restrictions apply:

  • MAXPAYLOAD cannot be greater than MAXMEMORYSIZE.

  • If you configure syslog to send messages to a remote destination, ensure that MaxMessageSize (in /etc/rsyslog for rsyslog) is greater than or equal to MAXPAYLOAD.

  • The MAXPAYLOAD for SNS notifiers cannot exceed 256KB.

Defaults:

  • Kafka: 1M

  • syslog: 1M

  • SNS: 256K

MAXMEMORYSIZE 'integer{K|M|G|T}'
The maximum size of the internal notifier, up to 2 TB, specified in kilobytes, megabytes, gigabytes, or terabytes.

MAXMEMORYSIZE must be greater than MAXPAYLOAD.

If the size of the message queue exceeds MAXMEMORYSIZE, the notifier drops excess messages.

TLS CONFIGURATION tls-configuration

The TLS CONFIGURATION to use for TLS.

Notifiers support the following TLS modes:

  • DISABLE

  • TRY_VERIFY (behaves like VERIFY_CA)

  • VERIFY_CA

  • VERIFY_FULL

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

TLSMODE 'tls-mode'

Specifies the type of connection between the notifier and an endpoint, one of the following:

  • disable (default): Plaintext connection.

  • verify-ca: Encrypted connection, and the server's certificate is verified as being signed by a trusted CA.

If you set this parameter to verify-ca, the generated TLS Configuration will be set to TRY_VERIFY, which has the same behavior as VERIFY_CA.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

CA BUNDLE bundle-name

Specifies a CA bundle. The certificates inside the bundle are used to validate the Kafka server's certificate if the TLSMODE requires it.

If a CA bundle is specified for a notifier that currently uses disable, which doesn't validate the Kafka server's certificate, the bundle will go unused when connecting to the Kafka server. This behavior persists unless the TLSMODE is changed to one that validates server certificates.

Changes to the contents of the CA bundle take effect either after the notifier is disabled and re-enabled or after the database restarts. However, changes to which CA bundle the notifier uses take effect immediately.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

CERTIFICATE certificate-name

Specifies a client certificate for validation by the endpoint.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

IDENTIFIED BY uuid
Specifies the notifier's unique identifier. If set, all the messages published by this notifier have this attribute.
[NO] CHECK COMMITTED
Specifies to wait for delivery confirmation before sending the next message in the queue.

Some messaging systems, like syslog, do not support delivery confirmation.

For SNS notifiers, CHECK COMMITTED must be specified, and NO CHECK COMMITTED behaves like CHECK COMMITTED.

PARAMETERS 'adapter-params'
Specifies one or more optional adapter parameters that are passed as a string to the adapter. Adapter parameters apply only to the adapter associated with the notifier.

For Kafka notifiers, refer to Kafka and Vertica configuration settings.

For syslog notifiers, specify the severity of the event with eventSeverity=severity, where severity is one of the following:

  • 0: Emergency

  • 1: Alert

  • 2: Critical

  • 3: Error

  • 4: Warning

  • 5: Notice

  • 6: Informational

  • 7: Debug

Most syslog implementations, by default, do not log events with a severity level of 7. You must configure syslog to record these types of events.

Parameters cannot be set for SNS notifiers.

Privileges

Superuser

Encrypted notifiers for SASL_SSL Kafka configurations

Follow this procedure to create or alter notifiers for Kafka endpoints that use SASL_SSL. Note that you must repeat this procedure whenever you change the TLSMODE, certificates, or CA bundle for a given notifier.

  1. Create a TLS Configuration with the desired TLS mode, certificate, and CA certificates.

  2. Use CREATE or ALTER to disable the notifier and set the TLS Configuration:

    => ALTER NOTIFIER encrypted_notifier
        DISABLE
        TLS CONFIGURATION kafka_tls_config;
    
  3. ALTER the notifier and set the proper rdkafka adapter parameters for SASL_SSL:

    => ALTER NOTIFIER encrypted_notifier PARAMETERS
        'sasl.username=user;sasl.password=password;sasl.mechanism=PLAIN;security.protocol=SASL_SSL';
    
  4. Enable the notifier:

    => ALTER NOTIFIER encrypted_notifier ENABLE;
    

Examples

Kafka notifiers

Create a Kafka notifier:

=> CREATE NOTIFIER my_dc_notifier
    ACTION 'kafka://172.16.20.10:9092'
    MAXMEMORYSIZE '1G'
    IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
    NO CHECK COMMITTED;

Create a notifier with an adapter-specific parameter:

=> CREATE NOTIFIER my_notifier
    ACTION 'kafka://127.0.0.1:9092'
    MAXMEMORYSIZE '10M'
    PARAMETERS 'queue.buffering.max.ms=1000';

Create a notifier that uses an encrypted connection and verifies the Kafka server's certificate with the CA certificates in the notifier_tls_config object:

=> CREATE NOTIFIER encrypted_notifier
    ACTION 'kafka://127.0.0.1:9092'
    MAXMEMORYSIZE '10M'
    TLS CONFIGURATION 'notifier_tls_config';

Syslog notifiers

The following example creates a notifier that writes a message to syslog when the Data collector (DC) component LoginFailures updates:

  1. Enable syslog notifiers for the current database:

    => ALTER DATABASE DEFAULT SET SyslogEnabled = 1;
    
  2. Create and enable a syslog notifier v_syslog_notifier:

    => CREATE NOTIFIER v_syslog_notifier ACTION 'syslog'
        ENABLE
        MAXMEMORYSIZE '10M'
        IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
        PARAMETERS 'eventSeverity = 5';
    
  3. Configure the syslog notifier v_syslog_notifier for updates to the LoginFailures DC component with SET_DATA_COLLECTOR_NOTIFY_POLICY:

    => SELECT SET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures','v_syslog_notifier', 'Login failed!', true);
    

    This notifier writes the following message to syslog (default location: /var/log/messages) when a user fails to authenticate as the user Bob:

    Apr 25 16:04:58
    vertica_host_01
    vertica:
        Event Posted:
            Event Code:21
            Event Id:0
            Event Severity: Notice [5]
            PostedTimestamp: 2022-04-25 16:04:58.083063
            ExpirationTimestamp: 2022-04-25 16:04:58.083063
            EventCodeDescription: Notifier
            ProblemDescription: (Login failed!)
        {
           "_db":"VMart",
           "_schema":"v_internal",
           "_table":"dc_login_failures",
           "_uuid":"f8b0278a-3282-4e1a-9c86-e0f3f042a971",
           "authentication_method":"Reject",
           "client_authentication_name":"default: Reject",
           "client_hostname":"::1",
           "client_label":"",
           "client_os_user_name":"dbadmin",
           "client_pid":523418,
           "client_version":"",
           "database_name":"dbadmin",
           "effective_protocol":"3.8",
           "node_name":"v_vmart_node0001",
           "reason":"REJECT",
           "requested_protocol":"3.8",
           "ssl_client_fingerprint":"",
           "ssl_client_subject":"",
           "time":"2022-04-25 16:04:58.082568-05",
           "user_name":"Bob"
        }#012
        DatabaseName: VMart
        Hostname: vertica_host_01
    

For details on syslog notifiers, see Configuring reporting for syslog.
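
SNS notifiers

The following sketch creates an SNS notifier. As noted in the parameter descriptions above, SNS notifiers must use CHECK COMMITTED and their MAXPAYLOAD cannot exceed 256K; the notifier name is illustrative:

=> CREATE NOTIFIER v_sns_notifier ACTION 'sns'
    MAXPAYLOAD '256K'
    MAXMEMORYSIZE '10M'
    CHECK COMMITTED;
CREATE NOTIFIER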


11.23 - CREATE PROCEDURE (external)

Adds an external procedure to Vertica.

Enterprise Mode only

Adds an external procedure to Vertica. See External procedures for more information.

Syntax

CREATE PROCEDURE [ IF NOT EXISTS ]
    [[database.]schema.]procedure( [ argument-list ] )
    AS executable
    LANGUAGE 'EXTERNAL'
    USER OS-user

Parameters

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

This option cannot be used with OR REPLACE.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

procedure
Specifies a name for the external procedure, where procedure-name conforms to conventions described in Identifiers.
argument-list
A comma-delimited list of procedure arguments, where each argument is specified as follows:
[ argname ] argtype
  • argname optionally provides a descriptive name for this argument.

  • argtype must be one of the following data types supported by Vertica:

    • BIGINT

    • BOOLEAN

    • DECIMAL

    • DOUBLE PRECISION

    • FLOAT

    • FLOAT8

    • INT

    • INT8

    • INTEGER

    • MONEY

    • NUMBER

    • NUMERIC

    • REAL

    • SMALLINT

    • TINYINT

    • VARCHAR

executable
The name of the executable program in the procedures directory, a string.
OS-user
The owner of the file, a string. The owner:
  • Cannot be root

  • Must have execute privileges on executable

Privileges

Superuser

System security

  • The procedure file must be owned by the database administrator (OS account) or by a user in the same group as the administrator. The procedure file must also have the set UID attribute enabled, and allow read and execute permission for the group.

  • External procedures that you create with CREATE PROCEDURE (external) are always run with Linux dbadmin privileges. If a dbadmin or pseudosuperuser grants a non-dbadmin permission to run a procedure using GRANT (procedure), be aware that the non-dbadmin user runs the procedure with full Linux dbadmin privileges.

Examples

The following example shows how to create a procedure named helloplanet for the procedure file helloplanet.sh. This file accepts one VARCHAR argument.

Create the file:

#!/bin/bash
echo "hello planet argument: $1" >> /tmp/myprocedure.log

Create the procedure with the following SQL:

=> CREATE PROCEDURE helloplanet(arg1 varchar) AS 'helloplanet.sh' LANGUAGE 'external' USER 'dbadmin';
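
Once the procedure file is installed (see External procedures) and the procedure is created, you can invoke it like a SQL function. The following sketch shows an illustrative call; the return status shown is also illustrative:

=> SELECT helloplanet('earthlings');
 helloplanet
-------------
           0
(1 row)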


11.24 - CREATE PROCEDURE (stored)

Creates a stored procedure.

Creates a stored procedure.

Syntax

CREATE [ OR REPLACE ] PROCEDURE [ IF NOT EXISTS ]
    [[{namespace. | database. }]schema.]procedure( [ parameter-list ] )
    [ LANGUAGE 'language-name' ]
    [ SECURITY { DEFINER | INVOKER } ]
    AS $$ source $$;

Parameters

OR REPLACE
If a procedure with the same name already exists, replace it. Users and roles with privileges on the original procedure retain these privileges on the new procedure.

This option cannot be used with IF NOT EXISTS.

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

This option cannot be used with OR REPLACE.

{ database | namespace }
Name of the database or namespace that contains the procedure:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

schema
Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.

If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.

procedure
The name of the stored procedure, where procedure conforms to conventions described in Identifiers.
parameter-list
A comma-delimited list of formal parameters, each specified as follows:
[ parameter-mode ] parameter-name parameter-type
  • parameter-name: the name of the parameter.

  • parameter-type: Any SQL data type, with the following exceptions:

    • DECIMAL

    • NUMERIC

    • NUMBER

    • MONEY

    • UUID

    • GEOGRAPHY

    • GEOMETRY

    • Complex types

language-name
Specifies the language of the procedure source, one of the following (both options refer to PLvSQL; PLpgSQL is included to maintain compatibility with existing scripts):
  • PLvSQL

  • PLpgSQL

Default: PLvSQL

SECURITY { DEFINER | INVOKER }
Determines whose privileges the procedure uses when it is called, executing it as if run by one of the following users:
  • DEFINER: User who defined the procedure

  • INVOKER: User who called the procedure

A procedure with SECURITY DEFINER effectively executes the procedure as that user, so changes to the database appear to be performed by the procedure's definer rather than its caller.

For more information, see Executing stored procedures.

source
The procedure source code. For details, see Scope and structure.

Privileges

Non-superuser: CREATE on the procedure's schema

Examples

For more complex examples, see Stored procedures: use cases and examples.

The following procedure echoes its inputs as a result set:

=> CREATE PROCEDURE echo_int_varchar(INOUT x INT, INOUT y VARCHAR) LANGUAGE PLvSQL AS $$
BEGIN
    RAISE NOTICE 'This procedure outputs a result set of its inputs:';
END
$$;

=> CALL echo_int_varchar(3, 'a string');
NOTICE 2005:  This procedure outputs a result set of its inputs:
 x |    y
---+----------
 3 | a string
(1 row)

The following example uses a set of procedures that convert a given Fahrenheit temperature to Celsius and Kelvin and return both values as a result set. The procedure f_to_c_and_k() calls the helper procedures f_to_c() and f_to_k() to convert to Celsius and Kelvin, respectively. The f_to_k() procedure uses the output of f_to_c() for part of the conversion:

=> CREATE PROCEDURE f_to_c_and_k(IN f_temp DOUBLE PRECISION, OUT c_temp DOUBLE PRECISION, OUT k_temp DOUBLE PRECISION) AS $$
BEGIN
    c_temp := CALL f_to_c(f_temp);
    k_temp := CALL f_to_k(f_temp);
END;
$$;

=> CREATE PROCEDURE f_to_c(INOUT temp DOUBLE PRECISION) AS $$
BEGIN
    temp := (temp - 32) * 5/9;
END;
$$;

=> CREATE PROCEDURE f_to_k(INOUT temp DOUBLE PRECISION) AS $$
BEGIN
    temp := CALL f_to_c(temp);
    temp := temp + 273.15;
END;
$$;

=> CALL f_to_c_and_k(80);
      c_temp      |      k_temp
------------------+------------------
 26.6666666666667 | 299.816666666667
(1 row)
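
The following sketch illustrates SECURITY DEFINER. It assumes a hypothetical employees table that the procedure's definer can update; a caller granted EXECUTE on the procedure can then apply the change through the procedure without holding UPDATE privileges on the table itself:

=> CREATE PROCEDURE update_salary(id INT, amount INT) LANGUAGE PLvSQL SECURITY DEFINER AS $$
BEGIN
    -- PERFORM runs an embedded SQL statement and discards its result
    PERFORM UPDATE employees SET salary = salary + amount WHERE employee_id = id;
END;
$$;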


11.25 - CREATE PROFILE

Creates a profile that controls password requirements for users.

Creates a profile that controls password requirements for users.

Syntax

CREATE PROFILE profile-name LIMIT [
    PASSWORD_LIFE_TIME setting
    PASSWORD_MIN_LIFE_TIME setting
    PASSWORD_GRACE_TIME setting
    FAILED_LOGIN_ATTEMPTS setting
    PASSWORD_LOCK_TIME setting
    PASSWORD_REUSE_MAX setting
    PASSWORD_REUSE_TIME setting
    PASSWORD_MAX_LENGTH setting
    PASSWORD_MIN_LENGTH setting
    PASSWORD_MIN_LETTERS setting
    PASSWORD_MIN_UPPERCASE_LETTERS setting
    PASSWORD_MIN_LOWERCASE_LETTERS setting
    PASSWORD_MIN_DIGITS setting
    PASSWORD_MIN_SYMBOLS setting
    PASSWORD_MIN_CHAR_CHANGE setting ]

Parameters

name

The name of the profile to create, where name conforms to conventions described in Identifiers.

To modify the default profile, set name to default. For example:

ALTER PROFILE DEFAULT LIMIT PASSWORD_MIN_SYMBOLS 1;

PASSWORD_LIFE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days a password remains valid.

  • UNLIMITED: Password remains valid indefinitely.

After your password's lifetime and grace period expire, you must change your password on your next login, if you have not done so already.

PASSWORD_MIN_LIFE_TIME

Set to an integer value, one of the following:

  • Default: 0

  • ≥ 1: The number of days a password must be set before it can be changed

  • UNLIMITED: Password can be reset at any time.

PASSWORD_GRACE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days a password can be used after it expires.

  • UNLIMITED: No grace period.

FAILED_LOGIN_ATTEMPTS

Set to an integer value, one of the following:

  • ≥ 1: The number of consecutive failed login attempts Vertica allows before locking your account.

  • UNLIMITED: Vertica allows an unlimited number of failed login attempts.

PASSWORD_LOCK_TIME
  • ≥ 1: The number of days (units configurable with PasswordLockTimeUnit) a user's account is locked after FAILED_LOGIN_ATTEMPTS number of login attempts. The account is automatically unlocked when the lock time elapses.

  • UNLIMITED: Account remains indefinitely inaccessible until a superuser manually unlocks it.

PASSWORD_REUSE_MAX

Set to an integer value, one of the following:

  • ≥ 1: The number of times you must change your password before you can reuse an earlier password.

  • UNLIMITED: You can reuse an earlier password without any intervening changes.

PASSWORD_REUSE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days that must pass after a password is set before you can reuse it.

  • UNLIMITED: You can reuse an earlier password immediately.

PASSWORD_MAX_LENGTH

The maximum number of characters allowed in a password, one of the following:

  • Integer between 8 and 512, inclusive
PASSWORD_MIN_LENGTH

The minimum number of characters required in a password, one of the following:

  • 0 to PASSWORD_MAX_LENGTH

  • UNLIMITED: Minimum of PASSWORD_MAX_LENGTH

PASSWORD_MIN_LETTERS

Minimum number of letters (a-z and A-Z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_UPPERCASE_LETTERS

Minimum number of uppercase letters (A-Z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_LOWERCASE_LETTERS

Minimum number of lowercase letters (a-z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_DIGITS

Minimum number of digits (0-9) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_SYMBOLS

Minimum number of symbols—printable non-letter and non-digit characters such as $, #, @—that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_CHAR_CHANGE

Minimum number of characters that must be different from the previous password:

  • Default: 0

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

Privileges

Superuser

Profile settings and client authentication

The following profile settings affect client authentication methods, such as LDAP or GSS:

  • FAILED_LOGIN_ATTEMPTS

  • PASSWORD_LOCK_TIME

All other profile settings are used only by Vertica to manage its passwords.

Examples

=> CREATE PROFILE sample_profile LIMIT PASSWORD_MAX_LENGTH 20;
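
A profile can combine several limits. The following sketch (the profile name and values are illustrative) requires longer, more complex passwords and locks an account after five consecutive failed logins (for one day, assuming the default PasswordLockTimeUnit):

=> CREATE PROFILE secure_profile LIMIT
    PASSWORD_MIN_LENGTH 12
    PASSWORD_MIN_SYMBOLS 1
    FAILED_LOGIN_ATTEMPTS 5
    PASSWORD_LOCK_TIME 1;
CREATE PROFILE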


11.26 - CREATE PROJECTION

Creates metadata for a projection in the Vertica catalog.

Creates metadata for a projection in the Vertica catalog. Vertica supports four types of projections:

  • Standard projection: Stores a collection of table data in a format that optimizes execution of certain queries on that table.

  • Live aggregate projection: Stores the grouped results of queries that invoke aggregate functions (such as SUM) on table columns.

  • Top-K projection: Stores the top k rows from partitions of selected rows.

  • UDTF projection: Stores newly-loaded data after it is transformed and/or aggregated by user-defined transformation functions (UDTFs).

Complex data types have additional restrictions when used within a projection:

  • Each projection must include at least one column that is a primitive type or native array.

  • An AS SELECT clause can use a complex-type column, but any other expression must be of a scalar type or native array.

  • The ORDER BY, PARTITION BY, and GROUP BY clauses cannot use complex types.

  • If a projection does not include an ORDER BY or segmentation clause, Vertica uses only the primitive columns from the select list to order or segment data.

  • Projection columns cannot be complex types returned from functions such as ARRAY_CAT.

  • Top-K and UDTF projections do not support complex types.

11.26.1 - Encoding types

Vertica supports various encoding and compression types, specified by the following ENCODING parameter arguments:.

Vertica supports various encoding and compression types, specified by the following ENCODING parameter arguments:

You can set encoding types on a projection column when you create the projection. You can also change the encoding of one or more projection columns for a given table with ALTER TABLE...ALTER COLUMN.
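
For example, the following sketch changes the encoding of one column on a specific projection with ALTER TABLE...ALTER COLUMN; the table, column, and projection names are illustrative:

=> ALTER TABLE trades ALTER COLUMN stock ENCODING RLE PROJECTIONS (trades_p);
ALTER TABLE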

AUTO (default)

AUTO encoding is ideal for sorted, many-valued columns such as primary keys. It is also suitable for general purpose applications for which no other encoding or compression scheme is applicable. Therefore, it serves as the default if no encoding/compression is specified.

Default encoding type by column data type:

  • BINARY/VARBINARY, BOOLEAN, CHAR/VARCHAR, FLOAT: Lempel-Ziv-Oberhumer-based (LZO) compression

  • DATE/TIME/TIMESTAMP, INTEGER, INTERVAL: Compression scheme based on the delta between consecutive column values.

The CPU requirements for this type are relatively small. In the worst case, data might expand by eight percent (8%) for LZO and twenty percent (20%) for integer data.

BLOCK_DICT

For each block of storage, Vertica compiles distinct column values into a dictionary and then stores the dictionary and a list of indexes to represent the data block.

BLOCK_DICT is ideal for few-valued, unsorted columns where saving space is more important than encoding speed. Certain kinds of data, such as stock prices, are typically few-valued within a localized area after the data is sorted, such as by stock symbol and timestamp, and are good candidates for BLOCK_DICT. By contrast, long CHAR/VARCHAR columns are not good candidates for BLOCK_DICT encoding.

CHAR and VARCHAR columns that contain 0x00 or 0xFF characters should not be encoded with BLOCK_DICT. Also, BINARY/VARBINARY columns do not support BLOCK_DICT encoding.

BLOCK_DICT encoding requires significantly higher CPU usage than default encoding schemes. The maximum data expansion is eight percent (8%).

BLOCKDICT_COMP

This encoding type is similar to BLOCK_DICT except dictionary indexes are entropy coded. This encoding type requires significantly more CPU time to encode and decode and has a poorer worst-case performance. However, if the distribution of values is extremely skewed, using BLOCKDICT_COMP encoding can lead to space savings.

BZIP_COMP

BZIP_COMP encoding uses the bzip2 compression algorithm on the block contents. See bzip web site for more information. This algorithm results in higher compression than the automatic LZO and gzip encoding; however, it requires more CPU time to compress. This algorithm is best used on large string columns such as VARCHAR, VARBINARY, CHAR, and BINARY. Choose this encoding type when you are willing to trade slower load speeds for higher data compression.

COMMONDELTA_COMP

This compression scheme builds a dictionary of all deltas in the block and then stores indexes into the delta dictionary using entropy coding.

This scheme is ideal for sorted FLOAT and INTEGER-based (DATE/TIME/TIMESTAMP/INTERVAL) data columns with predictable sequences and only occasional sequence breaks, such as timestamps recorded at periodic intervals or primary keys. For example, the following sequence compresses well: 300, 600, 900, 1200, 1500, 600, 1200, 1800, 2400. The following sequence does not compress well: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55.

If delta distribution is excellent, columns can be stored in less than one bit per row. However, this scheme is very CPU intensive. If you use this scheme on data with arbitrary deltas, it can cause significant data expansion.

DELTARANGE_COMP

This compression scheme is primarily used for floating-point data; it stores each value as a delta from the previous one.

This scheme is ideal for many-valued FLOAT columns that are sorted or confined to a range. Do not use this scheme for unsorted columns that contain NULL values, as the storage cost for representing a NULL value is high. This scheme has a high cost for both compression and decompression.

To determine if DELTARANGE_COMP is suitable for a particular set of data, compare it to other schemes. Be sure to use the same sort order as the projection, and select sample data that will be stored consecutively in the database.

DELTAVAL

For INTEGER and DATE/TIME/TIMESTAMP/INTERVAL columns, data is recorded as a difference from the smallest value in the data block. This encoding has no effect on other data types.

DELTAVAL is best used for many-valued, unsorted integer or integer-based columns. CPU requirements for this encoding type are minimal, and data never expands.

GCDDELTA

For INTEGER and DATE/TIME/TIMESTAMP/INTERVAL columns, and NUMERIC columns with 18 or fewer digits, data is recorded as the difference from the smallest value in the data block divided by the greatest common divisor (GCD) of all entries in the block. This encoding has no effect on other data types.

ENCODING GCDDELTA is best used for many-valued, unsorted, integer or integer-based columns, when the values are a multiple of a common factor. For example, timestamps are stored internally in microseconds, so values that are precise only to the millisecond are all multiples of 1000. The CPU requirements for decoding GCDDELTA encoding are minimal, and the data never expands, but GCDDELTA may take more encoding time than DELTAVAL.

GZIP_COMP

This encoding type uses the gzip compression algorithm. See gzip web site for more information. This algorithm results in better compression than the automatic LZO compression, but lower compression than BZIP_COMP. It requires more CPU time to compress than LZO but less CPU time than BZIP_COMP. This algorithm is best used on large string columns such as VARCHAR, VARBINARY, CHAR, and BINARY. Use this encoding when you want a better compression than LZO, but at less CPU time than bzip2.

RLE

RLE (run length encoding) replaces sequences (runs) of identical values with a single pair that contains the value and number of occurrences. Therefore, it is best used for low cardinality columns that are present in the ORDER BY clause of a projection.

The Vertica execution engine processes RLE encoding run-by-run and the Vertica optimizer gives it preference. Use it only when run length is large, such as when low-cardinality columns are sorted.

Zstandard compression

Vertica supports three ZSTD compression types:

  • ZSTD_COMP provides high compression ratios. This encoding type has a higher compression than gzip. Use this when you want a better compression than gzip. For general use cases, use this or the ZSTD_FAST_COMP encoding type.

  • ZSTD_FAST_COMP uses the fastest compression level that the zstd library provides. It is the fastest encoding type of the zstd library, but takes up more space than the other two encoding types. For general use cases, use this or the ZSTD_COMP encoding type.

  • ZSTD_HIGH_COMP offers the best compression in the zstd library. It is slower than the other two encoding types. Use this type when you need the best compression, with slower CPU time.

11.26.2 - GROUPED clause

Groups two or more columns into a single disk file.

Enterprise Mode only

Groups two or more columns into a single disk file. Doing so minimizes file I/O for the following tasks:

  • Read a large percentage of the columns in a table.

  • Perform single row look-ups.

  • Query against many small columns.

  • Frequently update data in these columns.

You can improve query performance by grouping columns that are always accessed together and are not used in predicates. Once columns are grouped, queries can no longer retrieve records from disk for one column independently of the others.

You can group columns in several ways:

  • Group some of the columns:

    (a, GROUPED(b, c), d)
    
  • Group all of the columns:

    (GROUPED(a, b, c, d))
    
  • Create multiple groupings in the same projection:

    (GROUPED(a, b), GROUPED(c, d))
    

Grouping columns

The following example shows how to group columns bid and ask. The stock column is stored separately.

=> CREATE TABLE trades (stock CHAR(5), bid INT, ask INT);
=> CREATE PROJECTION tradeproj (stock ENCODING RLE,
   GROUPED(bid ENCODING DELTAVAL, ask))
   AS (SELECT * FROM trades) KSAFE 1;

The following example shows how to create a projection that uses expressions in the column definition. The projection contains two integer columns a and b, and a third column product_value that stores the product of a and b:

=> CREATE TABLE values (a INT, b INT);
=> CREATE PROJECTION product (a, b, product_value) AS
   SELECT a, b, a*b FROM values ORDER BY a KSAFE;

11.26.3 - Hash segmentation clause

Specifies how to segment projection data for distribution across all cluster nodes.

Specifies how to segment projection data for distribution across all cluster nodes. You can specify segmentation for a table and a projection. If a table definition specifies segmentation, Vertica uses it for that table's auto-projections.

It is strongly recommended that you use Vertica's built-in HASH function, which distributes data evenly across the cluster and facilitates optimal query execution.

Syntax

SEGMENTED BY expression ALL NODES [ OFFSET offset ]

Parameters

SEGMENTED BY expression
A general SQL expression. Hash segmentation is the preferred method of segmentation. Vertica recommends using its built-in HASH function, whose arguments resolve to table columns. If you use an expression other than HASH, Vertica issues a warning.

The segmentation expression should specify columns with a large number of unique data values and acceptable skew in their data distribution. In general, primary key columns that meet these criteria are good candidates for hash segmentation.

For details, see Expression Requirements below.

ALL NODES
Automatically distributes data evenly across all nodes when the projection is created. Node ordering is fixed.
OFFSET offset
A zero-based offset that indicates on which node to start segmentation distribution.

This option is not valid for CREATE TABLE and CREATE TEMPORARY TABLE.

Expression requirements

A segmentation expression must specify table columns as they are defined in the source table. Projection column names are not supported.

The following restrictions apply to segmentation expressions:

  • All leaf expressions must be constants or references to columns in the CREATE PROJECTION statement's SELECT list.

  • The expression must return the same value over the life of the database.

  • Aggregate functions are not allowed.

  • The expression must return non-negative INTEGER values in the range 0 <= x < 2^63, and values are generally distributed uniformly over that range.

Examples

The following CREATE PROJECTION statement creates projection public.employee_dimension_super. It specifies to include all columns in table public.employee_dimension. The hash segmentation clause invokes the Vertica HASH function to segment projection data on the column employee_key; it also includes the ALL NODES clause, which specifies to distribute projection data evenly across all nodes in the cluster:

=> CREATE PROJECTION public.employee_dimension_super
    AS SELECT * FROM public.employee_dimension
    ORDER BY employee_key
    SEGMENTED BY hash(employee_key) ALL NODES;

11.26.4 - Live aggregate projection

Stores the grouped results of queries that invoke aggregate functions (such as SUM) on table columns.

Stores the grouped results of queries that invoke aggregate functions (such as SUM) on table columns. For details, see Live aggregate projections.

Vertica does not regard live aggregate projections as superprojections, even those that include all table columns. For other requirements and restrictions, see Creating live aggregate projections.

Syntax

CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
   { projection-column | grouped-clause
   [ ENCODING encoding-type ]
   [ ACCESSRANK integer ] }[,...]
) ]
AS SELECT { table-column | expr-with-table-columns }[,...] FROM [[database.]schema.]table [ [AS] alias]
   GROUP BY column-expr[,...]
   [ KSAFE [ k-num ] ]

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

[database.]schema

Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.

projection

Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

projection-column

The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.

If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.

grouped-clause
See GROUPED clause.
ENCODING encoding-type

The column encoding type, by default set to AUTO.

ACCESSRANK integer

Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.

AS SELECT

Specifies the table data to query:

{ table-column | expr-with-table-columns } [ [AS] alias ][,...]

You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.

If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.

GROUP BY column-expr[,...]
One or more column expressions from the SELECT list. The first expression must be the first column expression in the SELECT list, the second expression must be the second column expression in the SELECT list, and so on.

Privileges

Non-superusers:

  • Anchor table owner

  • CREATE privilege on the schema

Examples

See Live aggregate projection example.
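
As a brief sketch (table and projection names are illustrative), the following live aggregate projection pre-aggregates a count of clicks per user per day:

=> CREATE TABLE clicks (user_id INT NOT NULL, page_id INT NOT NULL, click_time TIMESTAMP NOT NULL);

=> CREATE PROJECTION clicks_agg AS
   SELECT user_id, click_time::DATE AS click_date, COUNT(*) AS num_clicks
   FROM clicks
   GROUP BY user_id, click_time::DATE
   KSAFE 1;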

11.26.5 - Standard projection

Stores a collection of table data in a format that optimizes execution of certain queries on that table.

Stores a collection of table data in a format that optimizes execution of certain queries on that table. For details, see Projections .

Syntax

CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
   { projection-column | grouped-clause
   [ ENCODING encoding-type ]
   [ ACCESSRANK integer ] }[,...]
) ]
AS SELECT { * | { MATCH_COLUMNS('pattern') | expression [ [AS] alias ] }[,...] }
   FROM [[database.]schema.]table [ [AS] alias]
   [ ORDER BY column[,...] ]
   [ SEGMENTED BY expression ALL NODES [ OFFSET offset ] | UNSEGMENTED ALL NODES ]
   [ KSAFE [ k-num ]
   [ ON PARTITION RANGE BETWEEN min-val AND max-val ] ]

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

[database.]schema

Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.

projection

Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

projection-column

The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.

If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.

grouped-clause
See GROUPED clause.
ENCODING encoding-type

The column encoding type, by default set to AUTO.

ACCESSRANK integer

Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.

AS SELECT
Columns or column expressions to select from the specified table:
  • * (asterisk): Returns all columns in the queried tables.

  • MATCH_COLUMNS('pattern'): Returns the names of all columns in the queried anchor table that match pattern.

  • expression [ [AS] alias ]: Resolves to column data from the queried anchor table. You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement—for example, in the ORDER BY or segmentation clause.

If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.

ORDER BY column
Columns from the SELECT list on which to sort the projection. The ORDER BY clause can only be set to ASC (the default). Vertica always stores projection data in ascending sort order.

If you order by a column with a collection data type (ARRAY or SET), queries that use that column in an ORDER BY clause perform the sort again. This is because projections and queries perform the ordering differently.

If you omit the ORDER BY clause, Vertica uses the SELECT list to sort the projection.

SEGMENTED BY expression ALL NODES [ OFFSET offset ] | UNSEGMENTED ALL NODES

How to distribute projection data:

  • SEGMENTED BY expression ALL NODES [ OFFSET offset ]: Segments projection data by hash across all cluster nodes. See Hash segmentation clause.

  • UNSEGMENTED ALL NODES: Stores identical copies of projection data on all nodes. See Unsegmented clause.

If neither the anchor table nor the projection specifies segmentation, Vertica uses hash segmentation using all columns:

SEGMENTED BY HASH(column[,...]) ALL NODES OFFSET 0;

KSAFE k-num

Specifies K-safety for the projection. The value must be equal to or greater than database K-safety. Vertica ignores this option if set for unsegmented projections.

If you omit this clause, Vertica uses database K-safety.

For general information, see K-safety in an Enterprise Mode database.

ON PARTITION RANGE BETWEEN min-val AND max-val

Limits projection data to a range of partition keys. The minimum value must be less than or equal to the maximum value.

Values can be NULL. A null minimum value indicates no lower bound and a null maximum value indicates no upper bound. If both are NULL, this statement drops the projection endpoints, producing a regular projection instead of a range projection.

For other requirements and usage details, see Partition range projections.

Privileges

Non-superusers:

  • Anchor table owner

  • CREATE privilege on the schema

Examples


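A minimal sketch of a standard projection (table, column, and projection names are illustrative) that sorts data by order date and segments it by hashing the order number:

=> CREATE PROJECTION public.store_orders_by_date
   AS SELECT order_no, order_date, shipper, ship_date
   FROM public.store_orders
   ORDER BY order_date
   SEGMENTED BY HASH(order_no) ALL NODES KSAFE 1;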
11.26.6 - Top-k projection

Stores the top k rows from partitions of selected rows.

Stores the top k rows from partitions of selected rows. For details, see Top-k projections.

Vertica does not regard Top-K projections as superprojections, even those that include all table columns. For other requirements and restrictions, see Creating top-k projections.

Syntax

CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
   { projection-column | grouped-clause
   [ ENCODING encoding-type ]
   [ ACCESSRANK integer ] }[,...]
) ]
AS SELECT { table-column | expr-with-table-columns }[,...] FROM [[database.]schema.]table [ [AS] alias]
   LIMIT num-rows OVER ( window-partition-clause window-order-clause )
   [ KSAFE [ k-num ] ]

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

[database.]schema

Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.

projection

Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

projection-column

The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.

If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.

grouped-clause
See GROUPED clause.
ENCODING encoding-type

The column encoding type, by default set to AUTO.

ACCESSRANK integer

Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.

AS SELECT

Specifies the table data to query:

{ table-column | expr-with-table-columns } [ [AS] alias ][,...]

You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.

If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.

LIMIT num-rows
The number of rows to return from the specified partition.
OVER (window-partition-clause window-order-clause)
Specifies window partitioning by one or more comma-delimited column expressions from the SELECT list. The first partition expression must be the first SELECT list item, the second partition expression the second SELECT list item, and so on.

The order clause is required, and specifies the order in which the top k rows are returned. The default is ascending (ASC) order. All column expressions must be from the SELECT list, where the first window order expression must be the first SELECT list item not specified in the window partition clause.

Top-K projections support ORDER BY NULLS FIRST/LAST.

Privileges

Non-superusers:

  • Anchor table owner

  • CREATE privilege on the schema

Examples

See Top-k projection examples.
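
As a brief sketch (table and projection names are illustrative), the following Top-K projection keeps the five most recent readings for each meter:

=> CREATE TABLE readings (meter_id INT NOT NULL, reading_date TIMESTAMP NOT NULL, reading_value FLOAT);

=> CREATE PROJECTION readings_topk (meter_id, recent_date, recent_value) AS
   SELECT meter_id, reading_date, reading_value
   FROM readings
   LIMIT 5 OVER (PARTITION BY meter_id ORDER BY reading_date DESC);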

11.26.7 - UDTF projection

Stores newly-loaded data after it is transformed and/or aggregated by user-defined transformation functions (UDTFs).

Stores newly-loaded data after it is transformed and/or aggregated by user-defined transformation functions (UDTFs). For details and examples, see Pre-aggregating UDTF results.

Syntax

CREATE PROJECTION [ IF NOT EXISTS ] [[database.]schema.]projection
[ (
   { projection-column | grouped-clause
   [ ENCODING encoding-type ]
   [ ACCESSRANK integer ]  }[,...]
) ]
AS { batch-query FROM { prepass-query sq-ref | table [[AS] alias] }
     | prepass-query }

batch-query:

SELECT { table-column | expr-with-table-columns }[,...], batch-udtf(batch-args)
   OVER (PARTITION BATCH BY partition-column-expr[,...])
   [ AS (output-columns) ]

prepass-query:

SELECT { table-column | expr-with-table-columns }[,...], prepass-udtf(prepass-args)
   OVER (PARTITION PREPASS BY partition-column-expr[,...])
   [ AS (output-columns) ] FROM table

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

[database.]schema

Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.

projection

Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

projection-column

The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.

If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.

grouped-clause
See GROUPED clause.
ENCODING encoding-type

The column encoding type, by default set to AUTO.

ACCESSRANK integer

Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.

SELECT

Specifies the table data to query:

{ table-column | expr-with-table-columns } [ [AS] alias ][,...]

You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.

If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.

batch-udtf(batch-args)
The batch UDTF to invoke each time the following events occur:
  • Tuple mover mergeout

  • Queries on the projection

  • Data load operations, if the batch UDTF is invoked singly (without a pre-pass UDTF)

prepass-udtf(prepass-args)
The pre-pass UDTF to invoke on each load operation such as COPY or INSERT.

If specified in a subquery, the pre-pass UDTF returns transformed data to the batch query for further processing. Otherwise, the pre-pass query results are added to projection data storage.

OVER (PARTITION { BATCH | PREPASS } BY partition-column-expr[,...])
Specifies the UDTF type and how to partition the data it returns:

  • BATCH: Identifies the UDTF as a batch UDTF.

  • PREPASS: Identifies the UDTF as a pre-pass UDTF.

In both cases, the OVER clause specifies partitioning with one or more column expressions from the SELECT list. The first expression is the first column expression in the SELECT list, the second expression is the second column expression in the SELECT list, and so on.

AS output-columns
Optionally names columns that are returned by the UDTF.

If a pre-pass subquery omits this clause, the outer batch query UDTF arguments (batch-args) must reference the column names as they are defined in the pre-pass UDTF.

table [ [AS] alias ]
The projection's anchor table, optionally qualified by an alias.
sq-results
Subquery result set that is returned to the outer batch UDTF.

Privileges

Non-superusers:

  • Anchor table owner

  • CREATE privilege on the schema

  • EXECUTE privileges on all UDTFs that are referenced by the projection

Examples

See Pre-aggregating UDTF results.

11.26.8 - Unsegmented clause

Specifies to distribute identical copies of table or projection data on all nodes across the cluster.

Specifies to distribute identical copies of table or projection data on all nodes across the cluster. Use this clause to facilitate distributed query execution on tables and projections that are too small to benefit from segmentation.

Vertica uses the same name to identify all instances of an unsegmented projection. For more information about projection name conventions, see Projection naming.

Syntax

UNSEGMENTED ALL NODES

Examples

This example creates an unsegmented projection for table store.store_dimension:


=> CREATE PROJECTION store.store_dimension_proj (storekey, name, city, state)
             AS SELECT store_key, store_name, store_city, store_state
             FROM store.store_dimension
             UNSEGMENTED ALL NODES;
CREATE PROJECTION

=>  SELECT anchor_table_name anchor_table, projection_name, node_name
      FROM PROJECTIONS WHERE projection_basename='store_dimension_proj';
  anchor_table   |   projection_name    |    node_name
-----------------+----------------------+------------------
 store_dimension | store_dimension_proj | v_vmart_node0001
 store_dimension | store_dimension_proj | v_vmart_node0002
 store_dimension | store_dimension_proj | v_vmart_node0003
(3 rows)

11.27 - CREATE RESOURCE POOL

Creates a user-defined resource pool.

Creates a user-defined resource pool.

Syntax

CREATE RESOURCE POOL pool-name [ FOR subcluster ] [ parameter-name setting ]...

Arguments

pool-name
Name of the resource pool. If you specify a resource pool name with uppercase letters, Vertica converts them to lowercase letters.

The following naming requirements apply:

  • Built-in pool names cannot be used for user-defined pools.

  • All user-defined global resource pools must have unique names.

  • Names of global and subcluster-assigned resource pools cannot be the same. However, the same name can be shared by resource pools of different subclusters.

FOR subcluster

Eon Mode only, the subcluster to associate with this resource pool, where subcluster is one of the following:

  • SUBCLUSTER subcluster-name: Resource pool for an existing subcluster. You cannot be connected to this subcluster; otherwise, Vertica returns an error.
  • CURRENT SUBCLUSTER: Resource pool for the subcluster that you are connected to.

If omitted, the resource pool is created globally.

parameter-name setting
A resource pool parameter and its initial value. If you omit this argument, Vertica sets this resource pool's parameters to their default values (see Parameters).

Parameters

Default values specified here pertain only to user-defined resource pools. For built-in pool default values, see Built-in resource pools configuration, or query system table RESOURCE_POOL_DEFAULTS.

CASCADE TO

Secondary resource pool for executing queries that exceed the RUNTIMECAP setting of their assigned resource pool:

CASCADE TO secondary-pool
CPUAFFINITYMODE

Specifies whether the resource pool has exclusive or shared use of the CPUs specified in CPUAFFINITYSET:

CPUAFFINITYMODE { SHARED | EXCLUSIVE | ANY }
  • SHARED: Queries that run in this resource pool share its CPUAFFINITYSET CPUs with other Vertica resource pools.
  • EXCLUSIVE: Dedicates CPUAFFINITYSET CPUs to this resource pool only, and excludes other Vertica resource pools. If CPUAFFINITYSET is set as a percentage, then that percentage of CPU resources available to Vertica is assigned solely for this resource pool.
  • ANY: Queries in this resource pool can run on any CPU. This setting is invalid if CPUAFFINITYSET designates CPU resources.

Default: ANY

CPUAFFINITYSET

CPUs available to this resource pool. All cluster nodes must have the same number of CPUs. The CPU resources assigned to this set are unavailable to general resource pools.

CPUAFFINITYSET {
  'cpu-index[,...]'
| 'cpu-indexi-cpu-indexn'
| 'integer%'
| NONE
}
  • cpu-index[,...]: Dedicates one or more comma-delimited CPUs to this resource pool.
  • cpu-indexi-cpu-indexn: Dedicates a range of contiguous CPU indexes i through n to this resource pool.
  • integer%: Percentage of all available CPUs to use for this resource pool. Vertica rounds this percentage down to include whole CPU units.
  • NONE (empty string): No affinity set is assigned to this resource pool. Queries associated with this pool are executed on any CPU.

Default: NONE

EXECUTIONPARALLELISM

Number of threads used to process any single query issued in this resource pool.

EXECUTIONPARALLELISM { limit | AUTO }
  • limit: An integer value between 1 and the number of cores. Setting this parameter to a reduced value increases throughput of short queries issued in the resource pool, especially if queries are executed concurrently.
  • AUTO or 0: Vertica calculates the setting from the number of cores, available memory, and amount of data in the system. Unless memory is limited, or the amount of data is very small, Vertica sets this parameter to the number of cores on the node.

Default: AUTO

MAXCONCURRENCY

Maximum number of concurrent execution slots available to the resource pool across the cluster:

MAXCONCURRENCY { integer | NONE }

NONE (empty string): Unlimited number of concurrent execution slots.

Default: NONE

MAXMEMORYSIZE

Maximum size per node the resource pool can grow by borrowing memory from the GENERAL pool:

MAXMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
  • NONE (empty string): Unlimited; the resource pool can borrow any amount of available memory from the GENERAL pool.

Default: NONE

MAXQUERYMEMORYSIZE

Maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, Vertica stops execution and returns an error.

Set this parameter as follows:

MAXQUERYMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of MAXMEMORYSIZE for this resource pool.
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of MAXMEMORYSIZE.
  • NONE (empty string): Unlimited; resource pool can borrow any amount of available memory from the GENERAL pool, within the limits set by MAXMEMORYSIZE.

Default: NONE

MEMORYSIZE

Total per-node memory available to the Vertica resource manager that is allocated to this resource pool:

MEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes

Default: 0%. No memory is allocated; the resource pool borrows memory from the GENERAL pool.
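
For example, the following sketch (the pool name etl_pool and the sizes are illustrative) reserves 4GB per node for the pool, lets it borrow from the GENERAL pool up to a total of 8GB per node, and caps any single query at 2GB:

=> CREATE RESOURCE POOL etl_pool MEMORYSIZE '4G' MAXMEMORYSIZE '8G' MAXQUERYMEMORYSIZE '2G';
CREATE RESOURCE POOL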

PLANNEDCONCURRENCY

Preferred number of queries to execute concurrently in the resource pool. This setting applies to the entire cluster:

PLANNEDCONCURRENCY { num-queries | AUTO }
  • num-queries: Integer value ≥ 1, the preferred number of queries to execute concurrently in the resource pool. When possible, query resource budgets are limited to allow this level of concurrent execution.

  • AUTO: Value is calculated automatically at query runtime. Vertica sets this parameter to the lower of these two calculations, but never less than 4:

    • Number of logical cores

    • Memory divided by 2GB

    If the number of logical cores differs from node to node, AUTO is calculated differently for each node. Distributed queries run with the minimal effective planned concurrency among participating nodes. Single-node queries run with the planned concurrency of the initiator node.

Default: AUTO
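
For example, for a pool with a fixed MEMORYSIZE, each query's resource budget is roughly MEMORYSIZE divided by PLANNEDCONCURRENCY (see Query budgeting). The following sketch (the pool name and values are illustrative) budgets about 512MB per query per node while allowing up to 32 concurrent execution slots:

=> CREATE RESOURCE POOL dashboard_pool MEMORYSIZE '8G' PLANNEDCONCURRENCY 16 MAXCONCURRENCY 32;
CREATE RESOURCE POOL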

PRIORITY

Priority of queries in this resource pool when they compete for resources in the GENERAL pool:

PRIORITY { integer | HOLD }
  • integer: Negative or positive integer value, where higher numbers denote higher priority:

    • User-defined resource pool: -100 to 100

    • Built-in resource pools SYSQUERY, RECOVERY, and TM: -110 to 110

  • HOLD: Sets priority to -999. Queries in this resource pool are queued until QUEUETIMEOUT is reached.

Default: 0

QUEUETIMEOUT

Maximum time a request can wait for pool resources before it is rejected, not more than one year:

QUEUETIMEOUT { integer | 'interval' | 'NONE' }
  • integer: Maximum wait time in seconds

  • interval: Maximum wait time expressed in the following format:

    num year num months num [days] HH:MM:SS.ms
    
  • NONE (empty string): No maximum wait time; a request can be queued indefinitely, up to one year.

If the value that you specify resolves to more than one year, Vertica returns with a warning and sets the parameter to 365 days:

=> ALTER RESOURCE POOL user_0 QUEUETIMEOUT '11 months 50 days 08:32';
WARNING 5693:  Using 1 year for QUEUETIMEOUT
ALTER RESOURCE POOL
=> SELECT QUEUETIMEOUT FROM resource_pools WHERE name = 'user_0';
 QUEUETIMEOUT
--------------
 365
(1 row)

Default: 00:05 (5 minutes)

RUNTIMECAP

Maximum execution time allowed for queries in this resource pool, not more than one year; otherwise, Vertica returns an error. If a query exceeds this setting, it tries to cascade to a secondary pool:

RUNTIMECAP { 'interval' | NONE }
  • interval: Maximum run time expressed in the following format:

    num year num month num [day] HH:MM:SS.ms
    
  • NONE (empty string): No time limit on queries running in this pool.

If the user or session also has a RUNTIMECAP, the shorter limit applies.

RUNTIMEPRIORITY

Determines how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool:

RUNTIMEPRIORITY { HIGH | MEDIUM | LOW }

Default: MEDIUM

RUNTIMEPRIORITYTHRESHOLD

Maximum time (in seconds) that a query can run before the resource manager assigns it the resource pool's RUNTIMEPRIORITY. All queries begin execution with a priority of HIGH.

RUNTIMEPRIORITYTHRESHOLD seconds

Default: 2
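
For example, a sketch (the pool name adhoc_pool is illustrative) in which queries run at HIGH priority for their first 10 seconds and then drop to LOW:

=> CREATE RESOURCE POOL adhoc_pool RUNTIMEPRIORITY LOW RUNTIMEPRIORITYTHRESHOLD 10;
CREATE RESOURCE POOL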

SINGLEINITIATOR

Set to false for backward compatibility. Do not change this setting.

Privileges

Superuser

Examples

This example shows how to create a resource pool with MEMORYSIZE of 1800 MB.

=> CREATE RESOURCE POOL ceo_pool MEMORYSIZE '1800M' PRIORITY 10;
CREATE RESOURCE POOL

Assuming the ceo_user account already exists, associate this user with the preceding resource pool using the ALTER USER statement.

=> GRANT USAGE ON RESOURCE POOL ceo_pool to ceo_user;
GRANT PRIVILEGE
=> ALTER USER ceo_user RESOURCE POOL ceo_pool;
ALTER USER

Issue the following command to confirm that the ceo_user is associated with the ceo_pool:

=> SELECT * FROM users WHERE user_name ='ceo_user';
-[ RECORD 1 ]-----+--------------------------------------------------
user_id           | 45035996273733402
user_name         | ceo_user
is_super_user     | f
profile_name      | default
is_locked         | f
lock_time         |
resource_pool     | ceo_pool
memory_cap_kb     | unlimited
temp_space_cap_kb | unlimited
run_time_cap      | unlimited
all_roles         |
default_roles     |
search_path       | "$user", public, v_catalog, v_monitor, v_internal

This example shows how to create and designate secondary resource pools.

=> CREATE RESOURCE POOL rp3 RUNTIMECAP '5 minutes';
=> CREATE RESOURCE POOL rp2 RUNTIMECAP '3 minutes' CASCADE TO rp3;
=> CREATE RESOURCE POOL rp1 RUNTIMECAP '1 minute' CASCADE TO rp2;
=> SET SESSION RESOURCE_POOL = rp1;

This Eon Mode example confirms the current subcluster name, then creates a resource pool for the current subcluster:

=> SELECT CURRENT_SUBCLUSTER_NAME();
 CURRENT_SUBCLUSTER_NAME
-------------------------
 analytics_1
(1 row)

=> CREATE RESOURCE POOL dashboard FOR SUBCLUSTER analytics_1;
CREATE RESOURCE POOL

11.27.1 - Built-in pools

Vertica is preconfigured with built-in pools for various system tasks:.

Vertica is preconfigured with built-in pools for various system tasks:

For details on resource pool settings, see ALTER RESOURCE POOL.

GENERAL

Catch-all pool used to answer requests that have no specific resource pool associated with them. Any memory left over after memory has been allocated to all other pools is automatically allocated to the GENERAL pool. The MEMORYSIZE parameter of the GENERAL pool is undefined (variable); however, the GENERAL pool must be at least 1GB in size and cannot be smaller than 25% of the memory in the system.

The MAXMEMORYSIZE parameter of the GENERAL pool has special meaning; when set as a % value it represents the percent of total physical RAM on the machine that the Resource manager can use for queries. By default, it is set to 95%. MAXMEMORYSIZE governs the total amount of RAM that the Resource Manager can use for queries, regardless of whether it is set to a percent or to a specific value (for example, '10GB').

User-defined pools can borrow memory from the GENERAL pool to satisfy requests that need extra memory until the MAXMEMORYSIZE parameter of that pool is reached. If the pool is configured to have MEMORYSIZE equal to MAXMEMORYSIZE, it cannot borrow any memory from the GENERAL pool. When multiple pools request memory from the GENERAL pool, they are granted access to general pool memory according to their priority setting. In this manner, the GENERAL pool provides some elasticity to account for point-in-time deviations from normal usage of individual resource pools.

Vertica recommends reducing the GENERAL pool MAXMEMORYSIZE if your catalog uses over 5 percent of overall memory. You can calculate what percentage of GENERAL pool memory the catalog uses as follows:

=> WITH memory_use_metadata AS (SELECT node_name, memory_size_kb FROM resource_pool_status WHERE pool_name='metadata'),
        memory_use_general  AS (SELECT node_name, memory_size_kb FROM resource_pool_status WHERE pool_name='general')
   SELECT m.node_name, ((m.memory_size_kb/g.memory_size_kb) * 100)::NUMERIC(4,2) pct_catalog_usage
   FROM memory_use_metadata m JOIN memory_use_general g ON m.node_name = g.node_name;
    node_name     | pct_catalog_usage
------------------+-------------------
 v_vmart_node0001 |              0.41
 v_vmart_node0002 |              0.37
 v_vmart_node0003 |              0.36
(3 rows)
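
If catalog usage is high, you can lower the GENERAL pool's MAXMEMORYSIZE with ALTER RESOURCE POOL. For example (the 90% value is illustrative; choose a value appropriate to your catalog size):

=> ALTER RESOURCE POOL general MAXMEMORYSIZE '90%';
ALTER RESOURCE POOL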

BLOBDATA

Controls resource usage for in-memory blobs. In-memory blobs are objects used by a number of the machine learning SQL functions. You should adjust this pool if you plan on processing large machine learning workloads. For information about tuning the pool, see Tuning for machine learning.

If a query using the BLOBDATA pool exceeds its query planning budget, then it spills to disk. For more information about tuning your query budget, see Query budgeting.

DBD

Controls resource usage for Database Designer processing. Use of this pool is enabled by configuration parameter DBDUseOnlyDesignerResourcePool, by default set to false.

By default, QUEUETIMEOUT is set to 0 for this pool. When resources are under pressure, this setting causes the DBD to time out immediately, and not be queued to run later. Database Designer then requests the user to run the designer later, when resources are more available.

JVM

Controls Java Virtual Machine resources used by Java User Defined Extensions. When a Java UDx starts the JVM, it draws resources from those specified in the JVM resource pool. Vertica does not reserve memory in advance for the JVM pool. When needed, the pool can expand to 10% of physical memory or 2 GB of memory, whichever is smaller. If you are buffering large amounts of data, you may need to increase the size of the JVM resource pool.

You can adjust the size of your JVM resource pool by changing its configuration settings. Unlike other resource pools, the JVM resource pool does not release resources until a session is closed.
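
For example, a sketch that raises the JVM pool's memory ceiling (the 4G value is illustrative; adjust it to your workload):

=> ALTER RESOURCE POOL jvm MAXMEMORYSIZE '4G';
ALTER RESOURCE POOL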

METADATA

Tracks memory allocated for catalog data and storage data structures. This pool increases in size as Vertica metadata consumes additional resources. Memory assigned to the METADATA pool is subtracted from the GENERAL pool, enabling the Vertica resource manager to make more effective use of available resources. If the METADATA resource pool reaches 75% of the GENERAL pool, Vertica stops updating METADATA memory size and displays a warning message in vertica.log. You can enable or disable the METADATA pool with configuration parameter EnableMetadataMemoryTracking.

If you created a "dummy" or "swap" resource pool to protect resources for use by your operating system, you can replace that pool with the METADATA pool.

Users cannot change the parameters of the METADATA resource pool.
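
For example, a sketch that disables metadata memory tracking at the database level (assuming the ALTER DATABASE...SET PARAMETER form for setting configuration parameters):

=> ALTER DATABASE DEFAULT SET PARAMETER EnableMetadataMemoryTracking = 0;
ALTER DATABASE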

RECOVERY

Used by queries issued when recovering another node of the database. The MAXCONCURRENCY parameter is used to determine how many concurrent recovery threads to use. You can use the PLANNEDCONCURRENCY parameter (by default, set to twice the MAXCONCURRENCY) to tune how to apportion memory to recovery queries.

See Tuning for recovery.
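
For example, a sketch that allows four concurrent recovery threads (the value is illustrative; see Tuning for recovery):

=> ALTER RESOURCE POOL recovery MAXCONCURRENCY 4;
ALTER RESOURCE POOL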

REFRESH

Used by queries issued by PROJECTION_REFRESHES operations. Refresh does not currently use multiple concurrent threads; thus, changes to the MAXCONCURRENCY values have no effect.

See Scenario: Tuning for Refresh.

SYSQUERY

Runs queries against all system monitoring and catalog tables. The SYSQUERY pool reserves resources for system table queries so that they are never blocked by contention for available resources.

TM

The Tuple Mover (TM) pool. You can set the MAXCONCURRENCY parameter for the TM pool to allow concurrent TM operations.

See Tuning tuple mover pool settings.
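
For example, a sketch that permits five concurrent Tuple Mover operations (the value is illustrative):

=> ALTER RESOURCE POOL tm MAXCONCURRENCY 5;
ALTER RESOURCE POOL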

11.27.2 - Built-in resource pools configuration

To view the current and default configuration for built-in resource pools, query the system tables RESOURCE_POOLS and RESOURCE_POOL_DEFAULTS, respectively.

To view the current and default configuration for built-in resource pools, query the system tables RESOURCE_POOLS and RESOURCE_POOL_DEFAULTS, respectively. The sections below provide this information, and also indicate which built-in pool parameters can be modified with ALTER RESOURCE POOL:

GENERAL

Parameter Settings
MEMORYSIZE Empty / cannot be set
MAXMEMORYSIZE

The maximum memory to use for all resource pools, one of the following:

MAXMEMORYSIZE {
  'integer%'
 | 'integer{K|M|G|T}'
}
  • integer%: Percentage of total system RAM, must be ≥ 25%
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, must be ≥ 1GB

For example, if your node has 64GB of memory, setting MAXMEMORYSIZE to 50% allocates half of available memory. Thus, the maximum amount of memory available to all resource pools is 32GB.

Default: 95%

MAXQUERYMEMORYSIZE

The maximum amount of memory allocated by this pool to process any query:

MAXQUERYMEMORYSIZE {
  'integer%'
 | 'integer{K|M|G|T}'
}
  • integer%: Percentage of MAXMEMORYSIZE for this pool.

  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes

EXECUTIONPARALLELISM Default: AUTO
PRIORITY Default: 0
RUNTIMEPRIORITY Default: Medium
RUNTIMEPRIORITYTHRESHOLD Default: 2
QUEUETIMEOUT Default: 00:05 (minutes)
RUNTIMECAP

Prevents runaway queries by setting the maximum time a query in the pool can execute. If a query exceeds this setting, it tries to cascade to a secondary pool:

RUNTIMECAP { 'interval' | NONE }

  • interval: An interval such as '1 minute' or '100 seconds'; cannot exceed one year.

  • NONE (default): No time limit on queries running in this pool.

PLANNEDCONCURRENCY

The number of concurrent queries you expect to run against the resource pool, an integer ≥ 4. If set to AUTO (default), Vertica automatically sets PLANNEDCONCURRENCY at query runtime, choosing the lower of these two values:

  • Number of cores

  • Memory/2GB

Default: AUTO

MAXCONCURRENCY Default: Empty

SINGLEINITIATOR Default: False

CPUAFFINITYSET Default: Empty
CPUAFFINITYMODE Default: ANY
CASCADETO Default: Empty

BLOBDATA

Parameter Default Setting
MEMORYSIZE 0%
MAXMEMORYSIZE 10
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM
PRIORITY
RUNTIMEPRIORITY
RUNTIMEPRIORITYTHRESHOLD
QUEUETIMEOUT
RUNTIMECAP NONE
PLANNEDCONCURRENCY AUTO
MAXCONCURRENCY Empty / cannot be set
SINGLEINITIATOR
CPUAFFINITYSET
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

DBD

Parameter Default Setting
MEMORYSIZE 0%
MAXMEMORYSIZE Unlimited
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY 0
RUNTIMEPRIORITY MEDIUM
RUNTIMEPRIORITYTHRESHOLD 0
QUEUETIMEOUT 0
RUNTIMECAP NONE
PLANNEDCONCURRENCY AUTO
MAXCONCURRENCY NONE
SINGLEINITIATOR True

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

JVM

Parameter Default Setting
MEMORYSIZE 0%
MAXMEMORYSIZE 10% of memory or 2 GB, whichever is smaller
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY 0
RUNTIMEPRIORITY MEDIUM
RUNTIMEPRIORITYTHRESHOLD 2
QUEUETIMEOUT 00:05 (minutes)
RUNTIMECAP NONE
PLANNEDCONCURRENCY AUTO
MAXCONCURRENCY Empty / cannot be set
SINGLEINITIATOR False

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

METADATA

Parameter Default Setting
MEMORYSIZE 0%
MAXMEMORYSIZE Unlimited
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY 108
RUNTIMEPRIORITY HIGH
RUNTIMEPRIORITYTHRESHOLD 0
QUEUETIMEOUT 0
RUNTIMECAP NONE
PLANNEDCONCURRENCY AUTO
MAXCONCURRENCY 0
SINGLEINITIATOR False

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

RECOVERY

Parameter Default Setting
MEMORYSIZE 0%
MAXMEMORYSIZE

Maximum size per node the resource pool can grow by borrowing memory from the GENERAL pool:

MAXMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
  • NONE (empty string): Unlimited, resource pool can borrow any amount of available memory from the GENERAL pool.

Default: NONE

MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY

One of the following:

  • Enterprise Mode: 107

  • Eon Mode: 110

RUNTIMEPRIORITY MEDIUM
RUNTIMEPRIORITYTHRESHOLD 60
QUEUETIMEOUT 00:05 (minutes)
RUNTIMECAP NONE
PLANNEDCONCURRENCY AUTO
MAXCONCURRENCY

By default, set as follows:

(numberCores / 2) + 1

Thus, given a system with four cores, MAXCONCURRENCY has a default setting of 3.

SINGLEINITIATOR True

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

REFRESH

Parameter Default Setting
MEMORYSIZE 0%
MAXMEMORYSIZE NONE (unlimited)
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY -10
RUNTIMEPRIORITY MEDIUM
RUNTIMEPRIORITYTHRESHOLD 60
QUEUETIMEOUT 00:05 (minutes)
RUNTIMECAP NONE (unlimited)
PLANNEDCONCURRENCY AUTO (4)
MAXCONCURRENCY 3 (this parameter must be set ≥ 1)

SINGLEINITIATOR True

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

SYSQUERY

Parameter Default Setting
MEMORYSIZE 1G

MAXMEMORYSIZE Empty (unlimited)
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY 110
RUNTIMEPRIORITY HIGH
RUNTIMEPRIORITYTHRESHOLD 0
QUEUETIMEOUT 00:05 (minutes)
RUNTIMECAP NONE
PLANNEDCONCURRENCY AUTO
MAXCONCURRENCY Empty

SINGLEINITIATOR False

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE
CASCADETO

TM

Parameter Default Setting
MEMORYSIZE 5% (of the GENERAL pool's MAXMEMORYSIZE setting) + 2GB

MAXMEMORYSIZE Unlimited
MAXQUERYMEMORYSIZE Empty / cannot be set
EXECUTIONPARALLELISM AUTO
PRIORITY 105
RUNTIMEPRIORITY MEDIUM
RUNTIMEPRIORITYTHRESHOLD 60
QUEUETIMEOUT 00:05 (minutes)
RUNTIMECAP NONE
PLANNEDCONCURRENCY 7
MAXCONCURRENCY

Sets across all nodes the maximum number of concurrent execution slots available to the TM pool. In databases created in Vertica releases ≥ 9.3, the default value is 7. In databases created in earlier versions, the default is 3. This setting specifies the maximum number of merges that can occur simultaneously on multiple threads.

SINGLEINITIATOR True

CPUAFFINITYSET Empty / cannot be set
CPUAFFINITYMODE ANY / cannot be set
CASCADETO Empty / cannot be set

11.28 - CREATE ROLE

Creates a role.

Creates a role. After creating a role, use GRANT statements to specify role permissions.

Syntax

CREATE ROLE role

Parameters

role
The name for the new role, where role conforms to conventions described in Identifiers.

Privileges

Superuser

Examples

This example shows how to create an empty role named roleA.

=> CREATE ROLE roleA;
CREATE ROLE

11.29 - CREATE ROUTING RULE

Creates a load balancing routing rule that directs incoming client connections from an IP address range to a group of Vertica nodes.

Creates one of the following rules:

  • A load balancing routing rule that directs incoming client connections from an IP address range to a group of Vertica nodes. This group of Vertica nodes is defined by a load balance group. Once you create a routing rule, any client connection originating from the rule's IP address range is redirected to one of the nodes in the load balance group if the client opts into load balancing.
  • A workload routing rule that routes incoming client connections to the specified subcluster. To view existing workload routing rules, see WORKLOAD_ROUTING_RULES.

Syntax

CREATE ROUTING RULE
{
    rule_name ROUTE 'address_range' TO group_name |
    ROUTE WORKLOAD workload_name TO SUBCLUSTER subcluster_name [,...] [ PRIORITY priority]
}

Arguments

rule_name
A name for the load balancing routing rule.
address_range
An IPv4 or IPv6 address range in CIDR format, the address range of client connections that this rule applies to.
group_name
The name of the load balance group to handle the client connections from the address range. You create this group with CREATE LOAD BALANCE GROUP.
workload_name
The name of the workload.
subcluster_name
A list of subclusters to route client connections to.
priority
A non-negative INTEGER, the priority of the workload (default: 0). The greater the value, the greater the priority. If a user or their role is granted multiple routing rules, the one with the greatest priority applies.

Privileges

Superuser

Examples

The following example creates a routing rule that routes all client connections from 192.168.1.0 to 192.168.1.255 to a load balance group named internal_clients:

=> CREATE ROUTING RULE internal_clients ROUTE '192.168.1.0/24' TO internal_clients;
CREATE ROUTING RULE

The following example creates a routing rule that routes analytics workloads to subclusters sc_analytics and sc_analytics_2 with a priority of 5. For details, see Workload routing:

=> CREATE ROUTING RULE ROUTE WORKLOAD analytics TO SUBCLUSTER sc_analytics, sc_analytics_2 PRIORITY 5;

11.30 - CREATE SCHEDULE

Creates a schedule.

Creates a schedule. For details, see Scheduled execution.

To view existing schedules, query USER_SCHEDULES.

Syntax

CREATE SCHEDULE [ IF NOT EXISTS ] [[database.]schema.]schedule
    { USING CRON 'cron_expression' | USING DATETIMES timestamp_list }

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

schedule
The name of the schedule.
cron_expression
A cron expression. You should use this for recurring tasks. Value separators are not currently supported.
timestamp_list
A comma-separated list of timestamps. You should use this to schedule non-recurring events at arbitrary times.

Privileges

Superuser

Examples

To create a schedule for October 2, 2022 and November 2, 2022:

=> CREATE SCHEDULE oct_nov_2 USING DATETIMES('2022-10-02 12:00:00', '2022-11-02 12:00:00');

To create a schedule for January 1 at 12:00 PM:

=> CREATE SCHEDULE annual_schedule USING CRON '0 12 1 1 *';

To create a schedule for the first of every month:

=> CREATE SCHEDULE monthly_schedule USING CRON '0 0 1 * *';

To create a schedule for Sunday at 12:00 AM:

=> CREATE SCHEDULE weekly_schedule USING CRON '0 0 * * 0';

To create a schedule for every day at 1:00 PM:

=> CREATE SCHEDULE daily_schedule USING CRON '0 13 * * *';

To create a schedule for every five minutes:

=> CREATE SCHEDULE every_5_minutes USING CRON '*/5 * * * *';

To create a schedule for every hour from 9:00 AM to 4:59 PM:

=> CREATE SCHEDULE 9_to_5 USING CRON '0 9-16 * * *';

11.31 - CREATE SCHEMA

Defines a schema.

Defines a schema.

Syntax

CREATE SCHEMA [ IF NOT EXISTS ] [{ database | namespace }.]schema
   [ AUTHORIZATION username]
   [ DEFAULT { INCLUDE | EXCLUDE } [ SCHEMA ] PRIVILEGES ]
   [ DISK_QUOTA quota ]

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

database
Name of the database. If specified, it must be the current database.
namespace
For Eon Mode databases, namespace under which to create the schema. If unspecified, the schema is created in the default namespace.
schema
Name of the schema to create, with the following requirements:
  • Must be unique among all other schema names in the namespace under which the schema is created.

  • Must comply with keyword restrictions and rules for Identifiers.

  • Cannot begin with v_; this prefix is reserved for Vertica system tables.

  • If schema contains a period, the part before the period cannot be the same as the name of an existing namespace.

AUTHORIZATION username
Valid only for superusers, assigns ownership of the schema to another user. By default, the user who creates a schema is also assigned ownership.

After you create a schema, you can reassign ownership to another user with ALTER SCHEMA.

DEFAULT {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES

Specifies whether to enable or disable default inheritance of privileges for new tables in the specified schema:

  • EXCLUDE SCHEMA PRIVILEGES (default): Disables inheritance of schema privileges.

  • INCLUDE SCHEMA PRIVILEGES: Specifies to grant tables in the specified schema the same privileges granted to that schema. This option has no effect on existing tables in the schema.

If you omit INCLUDE PRIVILEGES, you must explicitly grant schema privileges on the desired tables.

For more information see Enabling schema inheritance.

DISK_QUOTA quota
String, an integer followed by a supported unit: K, M, G, or T. Data-load, DML, and ILM operations that increase the schema's usage beyond the set quota fail. For details, see Disk quotas.

If not specified, the schema has no quota.

Privileges

Supported sub-statements

CREATE SCHEMA can include one or more sub-statements—for example, to create tables or projections within the new schema, or to grant privileges on the schema and its objects.

The CREATE SCHEMA statement and all sub-statements are treated as a single transaction. If any statement fails, Vertica rolls back the entire transaction. The owner of the new schema is assigned ownership of all objects that are created within this transaction.

For example, the following CREATE SCHEMA statement also grants privileges on the new schema, and creates a table and view of that table:

=> \c - Joan
You are now connected as user "Joan".
=> CREATE SCHEMA s1
     GRANT USAGE, CREATE ON SCHEMA s1 TO public
     CREATE TABLE s1.t1 (a varchar)
     CREATE VIEW s1.t1v AS SELECT * FROM s1.t1;
CREATE SCHEMA
=> \dtv s1.*
             List of tables
 Schema | Name | Kind  | Owner | Comment
--------+------+-------+-------+---------
 s1     | t1   | table | Joan  |
 s1     | t1v  | view  | Joan  |
(2 rows)

Examples

Create schema s1:

=> CREATE SCHEMA s1;

Create schema s2 if it does not already exist:

=> CREATE SCHEMA IF NOT EXISTS s2;

In an Eon Mode database, create schema s3 in namespace n1:

=> CREATE SCHEMA n1.s3;
CREATE SCHEMA

In an Eon Mode database, create schema s3 in the default_namespace:

=> CREATE SCHEMA s3;
CREATE SCHEMA

If the schema already exists, CREATE SCHEMA IF NOT EXISTS returns a notice and makes no changes:

=> CREATE SCHEMA IF NOT EXISTS s2;
NOTICE 4214:  Object "s2" already exists; nothing was done

Create table t1 in schema s1, then grant users Fred and Aniket access to all existing tables and all privileges on table t1:

=> CREATE TABLE s1.t1 (c INT);
CREATE TABLE
=> GRANT USAGE ON SCHEMA s1 TO Fred, Aniket;
GRANT PRIVILEGE
=> GRANT ALL PRIVILEGES ON TABLE s1.t1 TO Fred, Aniket;
GRANT PRIVILEGE

Enable inheritance on new schema s3 so all tables created in it automatically inherit its privileges. In this case, new table s3.t2 inherits USAGE, CREATE, and SELECT privileges, which are automatically granted to all database users:

=> CREATE SCHEMA s3 DEFAULT INCLUDE SCHEMA PRIVILEGES;
CREATE SCHEMA

=> GRANT USAGE, CREATE, SELECT, INSERT ON SCHEMA S3 TO PUBLIC;
GRANT PRIVILEGE

=> CREATE TABLE s3.t2(i int);
WARNING 6978:  Table "t2" will include privileges from schema "s3"
CREATE TABLE

11.32 - CREATE SEQUENCE

Defines a named sequence number generator object.

Defines a named sequence number generator. Named sequences let you set the default values of primary key columns. Sequences guarantee uniqueness and avoid constraint enforcement issues.

A sequence has minimum and maximum values, a starting position, and an increment. The increment can be positive (ascending) or negative (descending). By default, an ascending sequence starts at 1, its minimum value, and a descending sequence starts at -1, its maximum value. You can specify minimum, maximum, increment, and start values for a new sequence, so long as the values are mathematically consistent.

For more information about sequence types and usage, see Sequences.

Syntax

CREATE SEQUENCE [ IF NOT EXISTS ] [[database.]schema.]sequence
   [ INCREMENT [ BY ] integer ]
   [ MINVALUE integer | NO MINVALUE ]
   [ MAXVALUE integer | NO MAXVALUE ]
   [ START [ WITH ] integer ]
   [ CACHE integer | NO CACHE ]
   [ CYCLE | NO CYCLE ]

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

sequence
Name of the sequence to create, where the name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
INCREMENT integer

Positive or negative minimum value change on each call to NEXTVAL. The default is 1.

This value is a minimum increment size. The sequence can increment by more than this value unless you also specify NO CACHE.

MINVALUE integer | NO MINVALUE
Minimum value the sequence can generate. The default depends on the direction of the sequence:
  • Ascending sequence: 1

  • Descending sequence: -2^63

MAXVALUE integer | NO MAXVALUE
Maximum value the sequence can generate. The default depends on the direction of the sequence:
  • Ascending sequence: 2^63

  • Descending sequence: -1

START [WITH] integer
Initial value of the sequence. The next call to NEXTVAL returns this start value. By default, an ascending sequence begins with the minimum value and a descending sequence begins with the maximum value.
CACHE integer | NO CACHE

How many unique sequence numbers to pre-allocate and store in memory on each node for faster access. Vertica sets up caching for each session and distributes it across all nodes. A value of 0 or 1 is equivalent to NO CACHE.

By default, the sequence cache is set to 250,000.

For details, see Distributing sequences.

CYCLE | NO CYCLE
How to handle reaching the end of the sequence. The default is NO CYCLE, meaning that a call to NEXTVAL returns an error after the sequence reaches its upper or lower limit. CYCLE instead wraps, as follows:
  • Ascending sequence: wraps to the minimum value.
  • Descending sequence: wraps to the maximum value.

Privileges

Non-superusers: CREATE privilege on the schema

Examples

See Creating and using named sequences.
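
A minimal sketch (the sequence name my_seq and its values are illustrative), creating an ascending sequence and fetching its first value:

=> CREATE SEQUENCE my_seq START 100 INCREMENT BY 10;
CREATE SEQUENCE
=> SELECT NEXTVAL('my_seq');
 nextval
---------
     100
(1 row)

A second call to NEXTVAL('my_seq') returns 110, and so on.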

11.33 - CREATE SUBNET

Identifies the subnet to which the nodes of a Vertica database belong.

Identifies the subnet to which the nodes of a Vertica database belong. Use this statement to configure import/export from a database to other Vertica clusters.

Syntax

CREATE SUBNET subnet-name WITH 'subnet-prefix'

Parameters

subnet-name
A name you assign to the subnet, where subnet-name conforms to conventions described in Identifiers.
subnet-prefix
The subnet prefix in either a dotted-quad number format for IPv4 addresses, or four colon-delimited four-digit hexadecimal numbers for IPv6 addresses. Refer to system table NETWORK_INTERFACES to get the prefix of all available IP networks.

You can then configure the database to use the subnet for import/export. For details, see Identify the database or nodes used for import/export.

Privileges

Superuser

Examples

=> CREATE SUBNET mySubnet WITH '123.4.5.6';
=> CREATE SUBNET mysubnet WITH 'fd9b:1fcc:1dc4:78d3::';

11.34 - CREATE TABLE

Creates a table in the logical schema.

Creates a table in the logical schema. You can specify the column definitions directly, or you can base a table on an existing table using AS or LIKE.

Syntax

Create with column definitions

CREATE TABLE [ IF NOT EXISTS ] [[ { namespace. | database. } ]schema.]table
   ( column-definition[,...] [, table-constraint [,...]] )
   [ ORDER BY column[,...] ]
   [ segmentation-spec ]
   [ KSAFE [safety] ]
   [ partition-clause]
   [ { INCLUDE | EXCLUDE } [SCHEMA] PRIVILEGES ]
   [ DISK_QUOTA quota ]

Create from another table

CREATE TABLE [ IF NOT EXISTS ] [[{ namespace. | database. } ]schema.]table
   { AS-clause | LIKE-clause }
   [ DISK_QUOTA quota ]

AS-clause:

[ ( column-name-list ) ]
[ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
AS  [ /*+ LABEL */ ] [ AT epoch ] query [ ENCODED BY column-ref-list ] [ segmentation-spec ]

LIKE-clause:

LIKE [[{ namespace. | database. } ]schema.]existing-table
  [ { INCLUDING | EXCLUDING } PROJECTIONS ]
  [ { INCLUDE | EXCLUDE } [SCHEMA] PRIVILEGES ]

Arguments

IF NOT EXISTS

If an object with the same name exists, return without creating the object. If you do not use this directive and the object already exists, Vertica returns with an error message.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.

namespace
For Eon Mode databases, name of the namespace under which to create the table. If unspecified, the table is created under the default_namespace.
database
For Enterprise Mode databases, name of the database. If specified, it must be the current database.
schema
Name of the schema under which to create the table. If no schema is specified, the table is created in the public schema.
table
Name of the table to create, which must be unique among names of all sequences, tables, projections, views, and models within the schema.
column-definition
Column name, data type, and optional constraints. A table can have up to 9800 columns. At least one column in the table must be of a scalar type or native array.
table-constraint
Table-level constraint, as opposed to column constraints.
ORDER BY column[,...]

Invalid for external tables, specifies columns from the SELECT list on which to sort the superprojection that is automatically created for this table. The ORDER BY clause cannot include qualifiers ASC or DESC. Vertica always stores projection data in ascending sort order.

If you omit the ORDER BY clause, Vertica uses the SELECT list order as the projection sort order.

segmentation-spec

Invalid for external tables, specifies how to distribute data for auto-projections of this table, using either a hash-segmentation clause (SEGMENTED BY expression ALL NODES) or an unsegmented clause (UNSEGMENTED ALL NODES).

If this clause is omitted, Vertica generates auto-projections with default hash segmentation.

KSAFE [ safety ]

Invalid for external tables, specifies K-safety of auto-projections created for this table, where safety must be equal to or greater than system K-safety. If you omit this option, the projection uses the system K-safety level.

partition-clause
Invalid for external tables, logically divides table data storage through a PARTITION BY clause:
PARTITION BY partition-expression
  [ GROUP BY group-expression ] [ ACTIVEPARTITIONCOUNT integer ]
column-name-list

Valid only when creating a table from a query (AS query), defines column names that map to the query output. If you omit this list, Vertica uses the query output column names.

This clause and the ENCODED BY clause are mutually exclusive. Column name lists are invalid for external tables.

The column-name-list must contain the same number of names as there are columns in the query output.

For example:

CREATE TABLE customer_occupations (name, profession)
   AS SELECT customer_name, occupation FROM customer_dimension;
{INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES

Default inheritance of schema privileges for this table:

  • INCLUDE PRIVILEGES specifies that the table inherits privileges that are set on its schema. This is the default behavior if privileges inheritance is enabled for the schema.

  • EXCLUDE PRIVILEGES disables inheritance of privileges from the schema.

For details, see Inherited privileges.

AS query

Creates and loads a table from the results of a query, specified as follows:

AS  [ /*+ LABEL */ ] [ AT epoch ] query

The query cannot include complex type columns.

ENCODED BY column-ref-list

A comma-delimited list of columns from the source table, where each column is qualified by one or both of the following encoding options:

  • ACCESSRANK integer: Overrides the default access rank for a column, useful for prioritizing access to a column. See Prioritizing column access speed.

  • ENCODING encoding-type: Specifies the type of encoding to use on the column. The default encoding type is AUTO.

This option and column-name-list are mutually exclusive. This option is invalid for external tables.

LIKE existing-table
Creates the table by replicating an existing table. The user issuing the CREATE TABLE LIKE statement owns the new table, not the original owner.

You can qualify the LIKE clause with one of the following options:

  • EXCLUDING PROJECTIONS (default): Do not copy projections from the source table.

  • INCLUDING PROJECTIONS: Copy current projections from the source table for the new table.

  • {INCLUDE|EXCLUDE} [SCHEMA] PRIVILEGES: See the description above.

DISK_QUOTA quota
String, an integer followed by a supported unit: K, M, G, or T. Data-load, DML, and ILM operations that increase the table's usage beyond the set quota fail. For details, see Disk quotas.

If not specified, the table has no quota.

Privileges

Superuser to set disk quota.

Non-superuser:

  • CREATE privileges on the table schema

  • If creating a table that includes a named sequence:

    • SELECT privilege on sequence object

    • USAGE privilege on sequence schema

Restrictions for complex types

Complex types used in native tables have some restrictions, in addition to the restrictions for individual types listed on their reference pages:

  • A native table must have at least one column that is a primitive type or a native array (one-dimensional array of a primitive type). If a flex table has real columns, it must also have at least one column satisfying this restriction.

  • Complex type columns cannot be used in ORDER BY or PARTITION BY clauses nor as FILLER columns.

  • Complex type columns cannot have constraints.

  • Complex type columns cannot use DEFAULT or SET USING.

  • Expressions returning complex types cannot be used as projection columns, and projections cannot be segmented or ordered by columns of complex types.

Examples

The following example creates a table in the public schema:

CREATE TABLE public.Premium_Customer
(
    ID IDENTITY ,
    lname varchar(25),
    fname varchar(25),
    store_membership_card int
);

In an Eon Mode database, the following example creates the same table from the previous example, but in the store schema of the company namespace:

CREATE TABLE company.store.Premium_Customer
(
    ID IDENTITY ,
    lname varchar(25),
    fname varchar(25),
    store_membership_card int
);

The following example uses LIKE to create a new table from the Premium_Customer table:

=> CREATE TABLE All_Customers LIKE Premium_Customer;
CREATE TABLE

The following example selects columns from one table to use in a new table, using an AS clause:

=> CREATE TABLE cust_basic_profile AS SELECT
     customer_key, customer_gender, customer_age, marital_status, annual_income, occupation
   FROM customer_dimension WHERE customer_age>18 AND customer_gender !='';
CREATE TABLE

=> SELECT customer_age, annual_income, occupation 
   FROM cust_basic_profile
   WHERE customer_age > 23 ORDER BY customer_age;
 customer_age | annual_income |     occupation
--------------+---------------+--------------------
           24 |        469210 | Hairdresser
           24 |        140833 | Butler
           24 |        558867 | Lumberjack
           24 |        529117 | Mechanic
           24 |        322062 | Acrobat
           24 |        213734 | Writer
           ...

The following example creates a table using array columns:

=> CREATE TABLE orders(
    orderkey    INT,
    custkey     INT,
    prodkey     ARRAY[VARCHAR(10)],
    orderprices ARRAY[DECIMAL(12,2)],
    orderdate   DATE
);

The following example uses a ROW complex type:

=> CREATE TABLE inventory
    (store INT, products ROW(name VARCHAR, code VARCHAR));

The following example uses quotas:

=> CREATE SCHEMA internal DISK_QUOTA '10T';
CREATE SCHEMA

=> CREATE TABLE internal.sales (...) DISK_QUOTA '5T';
CREATE TABLE

=> CREATE TABLE internal.leads (...) DISK_QUOTA '12T';
WARNING 0: Table leads has disk quota greater than its schema internal

The following statement creates a flights table in the airline schema under the airport namespace:

=> CREATE TABLE airport.airline.flights(
    flight_number INT,
    leaving_from VARCHAR,
    arriving_at VARCHAR,
    expected_departure DATE,
    gate VARCHAR
);

11.34.1 - Column-constraint

Adds a constraint to a column's metadata.

Adds a constraint to a column's metadata.

Syntax

[ { AUTO_INCREMENT | IDENTITY } [ (args) ] ]
[ CONSTRAINT constraint-name ] {
   [ CHECK (expression) [ ENABLED | DISABLED ] ]
   [ [ DEFAULT expression ] [ SET USING expression ] | DEFAULT USING expression ]
   [ NULL | NOT NULL ]
   [ { PRIMARY KEY [ ENABLED | DISABLED ] REFERENCES table [( column )] } ]
   [ UNIQUE [ ENABLED | DISABLED ] ]
}

Arguments

You can specify enforcement of several constraints by qualifying them with the keywords ENABLED or DISABLED. See Enforcing Constraints below.

{ AUTO_INCREMENT | IDENTITY } [ (args) ]
Creates a table column whose values are automatically generated by and managed by the database. You cannot change or load values in this column. You can set this constraint on only one table column. You cannot set this constraint in a temporary table.

AUTO_INCREMENT and IDENTITY are synonyms.

The following arguments can be added:

( { cache-size | start }, increment, [ cache-size ] )
  • start: First value to set for this column (default 1).

  • increment: How much to change the value for each new row insertion, a positive or negative value (default 1).

  • cache-size: How many unique values each node caches per session, or 0 or 1 to disable (default 250,000).

For more information about these arguments, see IDENTITY sequences.
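
For example, a sketch (the table name and argument values are illustrative) that starts the IDENTITY column at 1, increments by 1, and caches 500 values per node and session:

=> CREATE TABLE orders_id (order_id IDENTITY(1,1,500), descr VARCHAR);
CREATE TABLE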

CONSTRAINT constraint-name
Assigns a name to the constraint, valid for the following constraints:
  • PRIMARY KEY

  • REFERENCES (foreign key)

  • CHECK

  • UNIQUE

If you omit assigning a name to these constraints, Vertica assigns its own name. For details, see Naming constraints.

Vertica recommends that you name all constraints.

CHECK (expression)
Sets a Boolean condition to check.
DEFAULT default-expr
Sets this column's default value. Vertica evaluates the expression and sets the column on load operations, if the operation omits a value for the column. For details about valid expressions, see Defining column values.
SET USING using-expr
Sets column values from an expression. Vertica evaluates the expression and refreshes column values only when REFRESH_COLUMNS is invoked. For details about valid expressions, see Defining column values.
DEFAULT USING expression
Defines the column with DEFAULT and SET USING constraints, specifying the same expression for both. DEFAULT USING columns support the same expressions as SET USING columns, and are subject to the same restrictions.
NULL | NOT NULL
Whether the column allows null values.
  • NULL: Allows null values in the column. If you set this constraint on a primary key column, Vertica ignores it and sets it to NOT NULL.

  • NOT NULL: Requires the column to be set to a value during insert and update operations. If the column has no default value and no value is provided, INSERT or UPDATE returns an error.

The default is NULL for all columns except primary key columns, which Vertica always sets to NOT NULL.

For an external table, data violating a NOT NULL constraint can produce unexpected results when the table is queried. Use NOT NULL for an external table column only if you are sure that the data does not contain nulls.

PRIMARY KEY
Identifies this column as the table's primary key.
REFERENCES table [ ( column ) ]
Identifies this column as a foreign key to a column in another table. If you omit the column, Vertica uses the table's primary key.
UNIQUE
Requires column data to be unique with respect to all table rows.

Privileges

Table owner or user WITH GRANT OPTION is grantor.

  • REFERENCES privilege on table to create foreign key constraints that reference this table

  • USAGE privilege on schema that contains the table

Enforcing constraints

The following constraints can be qualified with the keyword ENABLED or DISABLED:

  • PRIMARY KEY

  • UNIQUE

  • CHECK

If you omit ENABLED or DISABLED, Vertica determines whether to enable the constraint automatically by checking the appropriate constraints configuration parameter:

  • EnableNewPrimaryKeysByDefault

  • EnableNewUniqueKeysByDefault

  • EnableNewCheckConstraintsByDefault

11.34.2 - Column-definition

Specifies the name, data type, and constraints to be applied to a column.

Specifies the name, data type, and constraints to be applied to a column.

Syntax

column-name data-type
    [ column-constraint ][...]
    [ ENCODING encoding-type ]
    [ ACCESSRANK integer ]

Parameters

column-name
The name of a column to be created or added.
data-type
A Vertica-supported data type.
column-constraint
A constraint type that Vertica supports—for example, NOT NULL or UNIQUE. For general information, see Constraints.
ENCODING encoding-type

The column encoding type, by default set to AUTO.

ACCESSRANK integer

Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.

Examples

The following example creates a table named Employee_Dimension and its associated superprojection in the public schema. The Employee_key column is designated as a primary key, and RLE encoding is specified for the Employee_gender column definition:

=> CREATE TABLE public.Employee_Dimension (
    Employee_key                   integer PRIMARY KEY NOT NULL,
    Employee_gender                varchar(8) ENCODING RLE,
    Courtesy_title                 varchar(8),
    Employee_first_name            varchar(64),
    Employee_middle_initial        varchar(8),
    Employee_last_name             varchar(64)
);

11.34.3 - Column-name-list

Used to rename columns when creating a table or temporary table from a query; also used to specify the column's encoding type and access rank.

Used to rename columns when creating a table or temporary table from a query; also used to specify the column's encoding type and access rank.

Syntax

column-name-list
    [ ENCODING encoding-type ]
    [ ACCESSRANK integer ]
    [ GROUPED ( column-reference[,...] ) ]

Parameters

column-name
Specifies the new name for the column.
ENCODING encoding-type
Specifies the type of encoding to use on the column. The default encoding type is AUTO.
ACCESSRANK integer
Overrides the default access rank for a column, useful for prioritizing access to a column. See Prioritizing column access speed.
GROUPED
Groups two or more columns. For detailed information, see GROUPED clause.

Requirements

  • A column in the list cannot specify the column's data type or any constraint. These are derived from the queried table.

  • If the query output has expressions other than simple columns (for example, constants or functions), then an alias must be specified for that expression, or the column name list must include all queried columns.

  • CREATE TABLE can specify encoding types and access ranks in the column name list or the query's ENCODED BY clause, but not in both. For example, the following CREATE TABLE statement sets encoding and access rank on two columns in the column name list:

    => CREATE TABLE promo1 (state ENCODING RLE ACCESSRANK 1, zip ENCODING RLE,...)
         AS SELECT * FROM customer_dimension ORDER BY customer_state;
    

    The next statement specifies the same encoding and access rank in the query's ENCODED BY clause.

    
    => CREATE TABLE promo2
         AS SELECT * FROM customer_dimension ORDER BY customer_state
         ENCODED BY customer_state ENCODING RLE ACCESSRANK 1, customer_zip ENCODING RLE;
    

11.34.4 - Partition clause

Specifies partitioning of table data, through a PARTITION BY clause in the table definition:.

Specifies partitioning of table data, through a PARTITION BY clause in the table definition:

PARTITION BY partition-expression [ GROUP BY group-expression ] [ active-partition-count-expr ]
PARTITION BY partition-expression
For each table row, resolves to a partition key that is derived from one or more table columns.
GROUP BY group-expression
For each table row, resolves to a partition group key that is derived from the partition key. Vertica uses group keys to merge partitions into separate partition groups. GROUP BY must use the same expression as PARTITION BY. For example:
...PARTITION BY (i+j) GROUP BY (
     CASE WHEN (i+j) < 5 THEN 1
          WHEN (i+j) < 10 THEN 2
          ELSE 3 END);

For details on partitioning table data by groups, see Partition grouping and Hierarchical partitioning.

active-partition-count-expr
Specifies how many partitions are active for this table, specified as follows:
  • In partition clause of CREATE TABLE:

    ACTIVEPARTITIONCOUNT integer
    
  • In partition clause of ALTER TABLE:

    SET ACTIVEPARTITIONCOUNT integer
    

This setting supersedes configuration parameter ActivePartitionCount. For details on usage, see Active and inactive partitions.

Partitioning requirements and restrictions

PARTITION BY expressions can specify leaf expressions, functions, and operators. The following requirements and restrictions apply:

  • All table projections must include all columns referenced in the expression; otherwise, Vertica cannot resolve the expression.
  • The expression can reference multiple columns, but it must resolve to a single non-null value for each row.
  • All leaf expressions must be constants or table columns.
  • All other expressions must be functions and operators. The following restrictions apply to functions:
    • They must be immutable—that is, they return the same value regardless of time and locale and other session- or environment-specific conditions.
    • They cannot be aggregate functions.
    • They cannot be Vertica meta-functions.
  • The expression cannot include queries.
  • The expression cannot include user-defined data types such as Geometry.

GROUP BY expressions do not support modulo (%) operations.

Examples

The following statements create the store_orders table and load data into it. The CREATE TABLE statement includes a simple partition clause that specifies to partition data by year:

=> CREATE TABLE public.store_orders
(
    order_no int,
    order_date timestamp NOT NULL,
    shipper varchar(20),
    ship_date date
)
UNSEGMENTED ALL NODES
PARTITION BY YEAR(order_date);
CREATE TABLE
=> COPY store_orders FROM '/home/dbadmin/export_store_orders_data.txt';
41834

As COPY loads the new table data into ROS storage, the Tuple Mover executes the table's partition clause by dividing orders for each year into separate partitions, and consolidating these partitions in ROS containers.

In this case, the Tuple Mover creates four partition keys for the loaded data—2017, 2016, 2015, and 2014—and divides the data into separate ROS containers accordingly:

=> SELECT dump_table_partition_keys('store_orders');
... Partition keys on node v_vmart_node0001
  Projection 'store_orders_super'
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2017
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2016
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2015
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2014

 Partition keys on node v_vmart_node0002
  Projection 'store_orders_super'
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2017
...

(1 row)

As new data is loaded into store_orders, the Tuple Mover merges it into the appropriate partitions, creating partition keys as needed for new years.

See also

Partitioning tables

11.34.5 - Table-constraint

Adds a constraint to table metadata. You can specify table constraints with CREATE TABLE, or add a constraint to an existing table with ALTER TABLE. For details, see Setting constraints.

Adding a constraint to a table that is referenced in a view does not affect the view.

Syntax

[ CONSTRAINT constraint-name ]
{
... PRIMARY KEY (column[,... ]) [ ENABLED | DISABLED ]
... | FOREIGN KEY (column[,... ] ) REFERENCES table [ (column[,...]) ]
... | UNIQUE (column[,...]) [ ENABLED | DISABLED ]
... | CHECK (expression) [ ENABLED | DISABLED ]
}

Parameters

CONSTRAINT constraint‑name
Assigns a name to the constraint. Vertica recommends that you name all constraints.
PRIMARY KEY
Defines one or more NOT NULL columns as the primary key as follows:
PRIMARY KEY (column[,...]) [ ENABLED | DISABLED]

You can qualify this constraint with the keyword ENABLED or DISABLED. See Enforcing Constraints below.

If you do not name a primary key constraint, Vertica assigns the name C_PRIMARY.

FOREIGN KEY
Adds a referential integrity constraint defining one or more columns as foreign keys as follows:
FOREIGN KEY (column[,... ]) REFERENCES table [(column[,... ])]

If you omit column, Vertica references the primary key in table.

If you do not name a foreign key constraint, Vertica assigns the name C_FOREIGN.
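
For example, a sketch (table, column, and constraint names are illustrative) that defines a named foreign key referencing an existing dimension table:

=> CREATE TABLE dim_customers (cust_id INT PRIMARY KEY, name VARCHAR);
CREATE TABLE
=> CREATE TABLE fact_sales (
     sale_id INT,
     cust_id INT,
     amount NUMERIC(10,2),
     CONSTRAINT fk_cust FOREIGN KEY (cust_id) REFERENCES dim_customers (cust_id)
);
CREATE TABLE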

UNIQUE
Specifies that the data in a column or group of columns is unique with respect to all table rows, as follows:
UNIQUE (column[,...]) [ENABLED | DISABLED]

You can qualify this constraint with the keyword ENABLED or DISABLED. See Enforcing Constraints below.

If you do not name a unique constraint, Vertica assigns the name C_UNIQUE.

CHECK
Specifies a check condition as an expression that returns a Boolean value, as follows:
CHECK (expression) [ENABLED | DISABLED]

You can qualify this constraint with the keyword ENABLED or DISABLED. See Enforcing Constraints below.

If you do not name a check constraint, Vertica assigns the name C_CHECK.

Privileges

Non-superusers: table owner, or the following privileges:

Enforcing constraints

A table can specify whether Vertica automatically enforces a primary key, unique key or check constraint with the keyword ENABLED or DISABLED. If you omit ENABLED or DISABLED, Vertica determines whether to enable the constraint automatically by checking the appropriate configuration parameter:

  • EnableNewPrimaryKeysByDefault

  • EnableNewUniqueKeysByDefault

  • EnableNewCheckConstraintsByDefault

For details, see Constraint enforcement.

Examples

The following example creates a table (t01) with a primary key constraint.

CREATE TABLE t01 (id int CONSTRAINT sampleconstraint PRIMARY KEY);
CREATE TABLE

This example creates the same table without the constraint, and then adds the constraint with ALTER TABLE ADD CONSTRAINT.

CREATE TABLE t01 (id int);
CREATE TABLE

ALTER TABLE t01 ADD CONSTRAINT sampleconstraint PRIMARY KEY(id);
WARNING 2623:  Column "id" definition changed to NOT NULL
ALTER TABLE

The following example creates a table (addapk) with two columns, adds a third column to the table, and then adds a primary key constraint on the third column.

=> CREATE TABLE addapk (col1 INT, col2 INT);
CREATE TABLE

=> ALTER TABLE addapk ADD COLUMN col3 INT;
ALTER TABLE

=> ALTER TABLE addapk ADD CONSTRAINT col3constraint PRIMARY KEY (col3) ENABLED;
WARNING 2623:  Column "col3" definition changed to NOT NULL
ALTER TABLE

Using the sample table addapk, check that the primary key constraint is enabled (is_enabled is t).

=> SELECT constraint_name, column_name, constraint_type, is_enabled FROM PRIMARY_KEYS WHERE table_name IN ('addapk');

 constraint_name | column_name | constraint_type | is_enabled
-----------------+-------------+-----------------+------------
 col3constraint  | col3        | p               | t
(1 row)

This example disables the constraint using ALTER TABLE ALTER CONSTRAINT.

=> ALTER TABLE addapk ALTER CONSTRAINT col3constraint DISABLED;

Check that the primary key is now disabled (is_enabled is f).

=> SELECT constraint_name, column_name, constraint_type, is_enabled FROM PRIMARY_KEYS WHERE table_name IN ('addapk');

 constraint_name | column_name | constraint_type | is_enabled
-----------------+-------------+-----------------+------------
 col3constraint  | col3        | p               | f
(1 row)

For a general discussion of constraints, see Constraints. For additional examples of creating and naming constraints, see Naming constraints.

11.35 - CREATE TEMPORARY TABLE

Creates a table whose data persists only during the current session.

Creates a table whose data persists only during the current session. By default, temporary table data is not visible to other sessions.

Syntax

Create with column definitions:

CREATE [ scope ] TEMP[ORARY] TABLE [ IF NOT EXISTS ] [[{namespace. | database. }]schema.]table-name 
   ( column-definition[,...] )
   [ table-constraint ]
   [ ON COMMIT { DELETE | PRESERVE } ROWS ]
   [ NO PROJECTION ]
   [ ORDER BY table-column[,...] ]
   [ segmentation-spec ]
   [ KSAFE [safety-level] ]
   [ {INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES ]
   [ DISK_QUOTA quota ]

Create from another table:


CREATE TEMP[ORARY] TABLE [ IF NOT EXISTS ] [[{namespace. | database. }]schema.]table-name
   [ ( column-name-list ) ]
   [ ON COMMIT { DELETE | PRESERVE } ROWS ]
   AS  [ /*+ LABEL */ ] [ AT epoch ] query [ ENCODED BY column-ref-list ]
   [ DISK_QUOTA quota ]

Parameters

scope
Visibility of the table definition:
  • GLOBAL: The table definition is visible to all sessions, and persists until you explicitly drop the table.

  • LOCAL: The table definition is visible only to the session in which it is created, and is dropped when the session ends.

If no scope is specified, Vertica uses the default that is set by the DefaultTempTableLocal configuration parameter.

Regardless of this setting, retention of temporary table data is set by the keywords ON COMMIT DELETE and ON COMMIT PRESERVE (see below).

For more information, see Creating temporary tables.
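
For example, a minimal sketch contrasting the two scopes (table names are illustrative):

=> CREATE GLOBAL TEMPORARY TABLE gtemp (x INT) ON COMMIT PRESERVE ROWS;
=> CREATE LOCAL TEMPORARY TABLE ltemp (x INT);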

IF NOT EXISTS

If an object with the same name exists, the statement returns without creating the object. If you do not use this directive and the object already exists, Vertica returns an error.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
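
For example, a minimal sketch (table name illustrative) that succeeds silently if the table already exists:

=> CREATE TEMP TABLE IF NOT EXISTS scratch (x INT);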

{ namespace. | database. }
Qualifies the schema name with one of the following names, depending on the mode of the database:
  • Eon Mode: name of the namespace to which the schema belongs. If unspecified, the namespace is assumed to be default_namespace. If scope is set to LOCAL, the namespace must be default_namespace.
  • Enterprise Mode: name of the database. If specified, it must be the current database.
schema
Name of the schema, by default public. If you specify the namespace or database name, you must specify the schema name, even if the schema is public.
table-name
Name of the table to create, where table-name conforms to conventions described in Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.
column-definition
Column names and types. A table can have up to 9800 columns.
table-constraint
Adds a constraint to table metadata.
ON COMMIT
Whether data is transaction- or session-scoped:
ON COMMIT {PRESERVE | DELETE} ROWS
  • DELETE (default) marks the temporary table for transaction-scoped data. Vertica removes all table data after each commit.

  • PRESERVE marks the temporary table for session-scoped data, which is preserved beyond the lifetime of a single transaction. Vertica removes all table data when the session ends.
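
For example, a sketch (table name and data are illustrative) of a session-scoped temporary table whose rows survive a commit:

=> CREATE TEMP TABLE session_scratch (id INT, note VARCHAR) ON COMMIT PRESERVE ROWS;
=> INSERT INTO session_scratch VALUES (1, 'kept after commit');
=> COMMIT;
-- The inserted row remains queryable in this session. With ON COMMIT DELETE ROWS
-- (the default), the table would be empty after the COMMIT.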

NO PROJECTION
Prevents Vertica from creating auto-projections for this table. A superprojection is created only when data is explicitly loaded into this table.

NO PROJECTION is invalid with the following clauses:

{INCLUDE | EXCLUDE} [SCHEMA] PRIVILEGES

Default inheritance of schema privileges for this table:

  • INCLUDE PRIVILEGES specifies that the table inherits privileges that are set on its schema. This is the default behavior if privileges inheritance is enabled for the schema.

  • EXCLUDE PRIVILEGES disables inheritance of privileges from the schema.

For details, see Inherited privileges.
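
For example, a minimal sketch (table name illustrative) that prevents the table from inheriting privileges granted on its schema:

=> CREATE TEMP TABLE report_stage (x INT) EXCLUDE SCHEMA PRIVILEGES;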

ORDER BY table-column[,...]

Invalid for external tables, specifies columns from the SELECT list on which to sort the superprojection that is automatically created for this table. The ORDER BY clause cannot include qualifiers ASC or DESC. Vertica always stores projection data in ascending sort order.

If you omit the ORDER BY clause, Vertica uses the SELECT list order as the projection sort order.
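
For example, an illustrative sketch (table and column names are hypothetical) that sorts the auto-projection on event_time:

=> CREATE TEMP TABLE events (event_time TIMESTAMP, event_type VARCHAR) ORDER BY event_time;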

segmentation-spec

Invalid for external tables, specifies how to distribute data for auto-projections of this table. Supply one of the following clauses:

If this clause is omitted, Vertica generates auto-projections with default hash segmentation.
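
For example, an illustrative sketch (table and column names are hypothetical) that requests hash segmentation on one column:

=> CREATE TEMP TABLE metrics (node_id INT, val FLOAT) SEGMENTED BY HASH(node_id) ALL NODES;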

KSAFE [safety-level]

Invalid for external tables, specifies the K-safety of auto-projections created for this table, where safety-level must be equal to or greater than system K-safety. If you omit this option, the projections use the system K-safety level.

Eon Mode: K-safety of temporary tables is always set to 0, regardless of system K-safety. If a CREATE TEMPORARY TABLE statement sets safety-level greater than 0, Vertica returns a warning.
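
For example, a minimal sketch (Enterprise Mode, table name illustrative) that requests K-safety of 1 for the table's auto-projections:

=> CREATE GLOBAL TEMP TABLE kt (x INT) KSAFE 1;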

column-name-list

Valid only when creating a table from a query (AS query), defines column names that map to the query output. If you omit this list, Vertica uses the query output column names.

This clause and the ENCODED BY clause are mutually exclusive. Column name lists are invalid for external tables.

The number of names in column-name-list must match the number of columns that the query returns.

For example:

=> CREATE TEMP TABLE customer_occupations (name, profession)
   AS SELECT customer_name, occupation FROM customer_dimension;
AS query

Creates and loads a table from the results of a query, specified as follows:

AS [ /*+ LABEL */ ] [ AT epoch ] query

The query cannot include complex type columns.

ENCODED BY