Include Directory

1 -

This image includes the following packages for UDx development:

  • C++
  • Python

2 -

=> SELECT node_name, operation_name, status FROM REPLICATION_STATUS 
   ORDER BY node_name;
       node_name       |   operation_name   |  status         
-----------------------+--------------------+-----------
 v_target_db_node0001  | load_snapshot_prep | completed        
 v_target_db_node0001  | data_transfer      | completed         
 v_target_db_node0001  | load_snapshot      | completed        
 v_target_db_node0002  | load_snapshot_prep | completed        
 v_target_db_node0002  | data_transfer      | completed         
 v_target_db_node0002  | load_snapshot      | completed         
 v_target_db_node0003  | load_snapshot_prep | completed
 v_target_db_node0003  | data_transfer      | completed
 v_target_db_node0003  | load_snapshot      | completed
(9 rows)

3 -

--catalog-path string
The absolute path to the catalog directory.

4 -

--cert-file string
The absolute path to the certificate file. If you specify this option, you must also specify --key-file. You should only use --cert-file and --key-file if you have configured the Node Management Agent (NMA) to use custom certificates.

5 -

--communal-storage-location string
[Eon only] The absolute path of your communal storage location.

6 -

--config-param PARAMETER=VALUE[,...]
A comma-separated list of PARAMETER=VALUE pairs. Parameters specified with this option override the ones in configuration files, if any. This option accepts the following parameters:
  • AWSAuth
  • AWSEndpoint
  • AWSEnableHttps
  • AWSRegion

7 -

{ -c | --config } string
The path to write the configuration file.

Default: /opt/vertica/config/vertica_cluster.yaml

8 -

{ -c | --config } string
The path to the config file. If a configuration file is present in the default location (automatically generated by create_db), you do not need to specify this option.

Default: /opt/vertica/config/vertica_cluster.yaml

9 -

--conn string
Path to the connection file.

10 -

--data-path string
The absolute path to the data directory. This should be the same for all nodes in the database.

11 -

{ -d | --db-name } string
The name of the database. You should only use this option if you want to override the database name in your configuration file. This string must conform to the format used for database names.

12 -

--depot-path string
[Eon only] The absolute path to the depot directory.

13 -

--depot-size string
[Eon only] Size of depot in one of the following formats:
  • integer{K|M|G|T}, where K is kilobytes, M is megabytes, G is gigabytes, and T is terabytes.
  • integer%, which expresses the depot size as a percentage of the total disk size.

14 -

--drain-seconds int
[Eon only] The time to wait, in seconds, for user connections to close on their own. When the time expires, user connections are automatically closed and the database is shut down. If set to 0, VCluster closes all user connections immediately. If the value is negative, VCluster waits indefinitely until all user connections close.

Default: 60

15 -

--eon-mode
[Eon only] Indicates that the database is an Eon Mode database.

16 -

--force-removal
Whether to delete any existing database directories in the new hosts before attempting to add them.

17 -

{ -h | --help }
Prints help text.

18 -

--hosts strings
A comma-separated list of hosts in the database.

19 -

--ipv6
Whether the hosts use IPv6 addresses. Hostnames resolve to IPv4 by default.

20 -

--key-file string
Path to the key file. If you specify this option, you must also specify --cert-file. You should only use --cert-file and --key-file if you have configured the Node Management Agent (NMA) to use custom certificates.

21 -

{ -l | --log-path } string
The absolute path for debug logs.

Default: /opt/vertica/log/vcluster.log

22 -

This string must conform to the format used for database names.

23 -

{ -o | --output-file } string
Writes the output to the specified file instead of STDOUT.

Default: STDOUT

24 -

--password-file string
The absolute path to a file containing the database password.

If you pass a dash (-) (that is, `--password-file -`), the password is read from STDIN.

25 -

{ -p | --password } string
The database password.

26 -

--read-password-from-prompt
Prompts the user to enter the password.

27 -

--skip-rebalance-shards
[Eon only] Whether to skip shard rebalancing.

28 -

[Use only with support guidance]

29 -

--timeout int
The time (in seconds) to wait for nodes to start up.

Default: 300

30 -

--verbose
Shows the details of the VCluster run in the console.

31 - acccess-policy-table-example.md

=> \x
=> SELECT policy_type, is_policy_enabled, table_name, column_name, expression FROM access_policy WHERE table_name = 'public.customer_dimension';
-[ RECORD 1 ]-----+----------------------------------------------------------------------------------------
policy_type       | Column Policy
is_policy_enabled | Enabled
table_name        | public.customer_dimension
column_name       | customer_address
expression        | CASE WHEN enabled_role('administrator') THEN customer_address ELSE '**************' END

32 - access-policy-restrictions.md

The following limitations apply to access policies:

  • A column can have only one access policy.

  • Column access policies cannot be set on columns of complex types other than native arrays.

  • Column access policies cannot be set for materialized columns on flex tables. While it is possible to set an access policy for the __raw__ column, doing so restricts access to the whole table.

  • Row access policies are invalid on temporary tables and tables with aggregate projections.

  • Access policy expressions cannot contain:

    • Subqueries

    • Aggregate functions

    • Analytic functions

    • User-defined transform functions (UDTF)

  • If the query optimizer cannot replace a deterministic expression that involves only constants with their computed values, it blocks all DML operations such as INSERT.
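
For reference, a column access policy like the one queried in the earlier example can be created with CREATE ACCESS POLICY, provided it observes these restrictions. The following is a minimal sketch that assumes the public.customer_dimension table and an administrator role already exist:

=> CREATE ACCESS POLICY ON public.customer_dimension FOR COLUMN customer_address
   CASE WHEN enabled_role('administrator') THEN customer_address
        ELSE '**************'
   END ENABLE;
CREATE ACCESS POLICY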

33 - access-rank_snip.md

Overrides the default access rank for a column. Use this parameter to increase or decrease the speed at which Vertica accesses a column. For more information, see Overriding Default Column Ranking.

34 - active-data-partition-setting-levels.md

You can specify how many partitions are active for partitioned tables at two levels, in ascending order of precedence:

  • Configuration parameter ActivePartitionCount determines how many partitions are active for partitioned tables in the database. By default, ActivePartitionCount is set to 1. The Tuple Mover applies this setting to all tables that do not set their own active partition count.

  • Individual tables can supersede ActivePartitionCount by setting their own active partition count with CREATE TABLE and ALTER TABLE.
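
For example, the following sketch sets the database-wide default and then overrides it for a hypothetical partitioned table named store_orders (the table and values are illustrative only):

=> ALTER DATABASE DEFAULT SET ActivePartitionCount = 2;
=> ALTER TABLE store_orders SET ACTIVEPARTITIONCOUNT 3;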

35 - active-data-partition.md

The Tuple Mover assumes that all loads and updates to a partitioned table are targeted to one or more partitions that it identifies as active. In general, the partitions with the largest partition keys—typically, the most recently created partitions—are regarded as active. As the partition ages, its workload typically shrinks and becomes mostly read-only.

36 - add-cert-to-tls-config-privs.md

You must have USAGE privileges on the certificate (either from ownership of the certificate or USAGE on its key, if any) to add it to a TLS Configuration.
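
For example, assuming you own a CA certificate named my_ca_cert, adding it to the predefined server TLS Configuration might look like the following sketch:

=> ALTER TLS CONFIGURATION server ADD CA CERTIFICATES my_ca_cert;
ALTER TLS CONFIGURATION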

37 - after-db-create.md

After creating the database

After you create the database, click Get Started to view the Databases page. To view your database, select Manage and View Your Vertica Database to go to the database Overview.

You can also view your database from the Recent Databases section of the MC home page.

For additional information about managing your cluster, instances, and database using Management Console, see Managing database clusters.

38 - aggregate-projection-types.md

Vertica provides several types of projections for storing data that is returned from aggregate functions or expressions:

  • Live aggregate projection: Projection that contains columns with values that are aggregated from columns in its anchor table. You can also define live aggregate projections that include user-defined transform functions.

  • Top-K projection: Type of live aggregate projection that returns the top k rows from a partition of selected rows. Create a Top-K projection that satisfies the criteria for a Top-K query.

  • Projection that pre-aggregates UDTF results: Live aggregate projection that invokes user-defined transform functions (UDTFs). To minimize overhead when you query projections of this type, Vertica processes the UDTFs in the background and stores their results on disk.

  • Projection that contains expressions: Projection with columns whose values are calculated from anchor table columns.
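
As an illustration of the first type, the following sketch defines a live aggregate projection over a hypothetical clicks table, pre-aggregating the number of clicks per page per day:

=> CREATE PROJECTION clicks_agg AS
   SELECT page_id, click_time::DATE AS click_date, COUNT(*) AS num_clicks
   FROM clicks GROUP BY page_id, click_time::DATE;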

39 - alerts-run-query-step.md

Select Run Query. The Query Results section displays any rows returned from your query. Alternatively, you might encounter one of the following issues:

  • If you use invalid SQL syntax, you receive an error.

  • If the query returns more than 5 columns or 100 rows, you receive a warning. Because every query result set is saved to disk, it is important to be aware of the size of your result set. For more information, see Monitoring database messages and alerts with MC.

A long-running query runs until it succeeds or returns a Query Error that indicates that there are insufficient resources to continue. To stop a long-running query, select Cancel Query.

40 - all-privileges-extend.md

You can qualify ALL with two optional keywords:

  • PRIVILEGES conforms with the SQL standard.

  • EXTEND extends the semantics of ALL to include ALTER and DROP privileges. An unqualified ALL excludes these two privileges. This option enables backward compatibility with GRANT ALL usage in pre-9.2.1 Vertica releases.
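
For example, the following sketch grants the extended privilege set on a hypothetical table to a user named analyst:

=> GRANT ALL PRIVILEGES EXTEND ON TABLE public.store_orders TO analyst;
GRANT PRIVILEGE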

41 - allow-short-fpe-description.md

When set to True, SecureData ignores the lower length limit for encoding FPE values. By default, SecureData does not use FPE to encrypt data shorter than a lower limit (usually, 8 bits). See the SecureData Architecture Guide's section on Data Length Restrictions for more information.

42 - alter-function-params-privileges.md

Parameters

[db-name.]schema
Database and schema. The default schema is public. If you specify a database, it must be the current database.
function-name
Name of the function to alter.
parameter-list
Comma-delimited list of parameters that are defined for this function. If none, specify an empty list.
OWNER TO new-owner
Transfers function ownership to another user.
RENAME TO new-name
Renames this function.
SET FENCED { true | false }
Specifies whether to enable fenced mode for this function.
SET SCHEMA new-schema
Moves the function to another schema.

Privileges

Non-superuser: USAGE on the schema and one of the following:

  • Function owner

  • ALTER privilege on the function

For certain operations, non-superusers must also have the following schema privileges:

 Operation                                    | Schema privileges required
----------------------------------------------+------------------------------------------------------
 RENAME TO (rename function)                  | CREATE, USAGE
 SET SCHEMA (move function to another schema) | CREATE (destination schema), USAGE (current schema)
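
For example, the following sketch renames a hypothetical function and then moves it to a hypothetical macros schema, assuming the schema privileges listed above:

=> ALTER FUNCTION public.zerowhennull(x INT) RENAME TO zero_if_null;
ALTER FUNCTION
=> ALTER FUNCTION public.zero_if_null(x INT) SET SCHEMA macros;
ALTER FUNCTION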

43 - alter-subcluster-rename-subcluster.md

=> ALTER SUBCLUSTER default_subcluster RENAME TO load_subcluster;
ALTER SUBCLUSTER

=> SELECT DISTINCT subcluster_name FROM subclusters;
  subcluster_name
-------------------
 load_subcluster
 analytics_cluster
(2 rows)

44 - alter-subcluster-set-deafult-example.md

=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
--------------------
 default_subcluster
(1 row)

=> ALTER SUBCLUSTER analytics_cluster SET DEFAULT;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
-------------------
 analytics_cluster
(1 row)

45 - alter-user-cfg-parameter.md

=> SHOW USER Yvonne PARAMETER ALL;
  user  |        parameter        | setting
--------+-------------------------+---------
 Yvonne | DepotOperationsForQuery | Fetches
(1 row)

=> ALTER USER Yvonne SET PARAMETER UseDepotForWrites = 0;
ALTER USER
=> SHOW USER Yvonne PARAMETER ALL;
  user  |        parameter        | setting
--------+-------------------------+---------
 Yvonne | DepotOperationsForQuery | Fetches
 Yvonne | UseDepotForWrites       | 0
(2 rows)

=> ALTER USER Ahmed SET PARAMETER DepotOperationsForQuery = 'Fetches';
ALTER USER
=> SHOW USER ALL PARAMETER ALL;
  user  |        parameter        | setting
--------+-------------------------+---------
 Ahmed  | DepotOperationsForQuery | Fetches
 Yvonne | DepotOperationsForQuery | Fetches
 Yvonne | UseDepotForWrites       | 0
(3 rows)

46 - analytic-function-c-.md

The AnalyticFunction API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes);

virtual void processPartition (ServerInterface &srvInterface,
        AnalyticPartitionReader &input_reader,
        AnalyticPartitionWriter &output_writer)=0;

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface, const SizedColumnTypes &argTypes);

47 - analytic-function-factory-c-.md

The AnalyticFunctionFactory API provides the following methods for extension by subclasses:

virtual AnalyticFunction * createAnalyticFunction (ServerInterface &srvInterface)=0;

virtual void getPrototype(ServerInterface &srvInterface,
        ColumnTypes &argTypes, ColumnTypes &returnType)=0;

virtual void getReturnType(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes, SizedColumnTypes &returnType)=0;

virtual void getParameterType(ServerInterface &srvInterface,
        SizedColumnTypes &parameterTypes);

48 - analytic-function-factory-java.md

The AnalyticFunctionFactory API provides the following methods for extension by subclasses:

public abstract AnalyticFunction createAnalyticFunction (ServerInterface srvInterface);

public abstract void getPrototype(ServerInterface srvInterface, ColumnTypes argTypes, ColumnTypes returnType);

public abstract void getReturnType(ServerInterface srvInterface, SizedColumnTypes argTypes,
        SizedColumnTypes returnType) throws UdfException;

public void getParameterType(ServerInterface srvInterface, SizedColumnTypes parameterTypes);

49 - analytic-function-java.md

The AnalyticFunction API provides the following methods for extension by subclasses:

public void setup(ServerInterface srvInterface, SizedColumnTypes argTypes);

public abstract void processPartition (ServerInterface srvInterface,
        AnalyticPartitionReader input_reader, AnalyticPartitionWriter output_writer)
        throws UdfException, DestroyInvocation;

protected void cancel(ServerInterface srvInterface);

public void destroy(ServerInterface srvInterface, SizedColumnTypes argTypes);

50 - analytic-function-syntax.md

analytic-function(arguments) OVER(
  [ window-partition-clause ]
  [ window-order-clause [ window-frame-clause ] ]
)
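
For example, the following sketch ranks rows of a hypothetical employees table within each department, using a window partition clause and a window order clause:

=> SELECT employee_name, department, salary,
          RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank
   FROM employees;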

51 - analytics-intro.md

Vertica analytics are SQL functions based on the ANSI 99 standard. These functions handle complex analysis and reporting tasks—for example:

  • Rank the longest-standing customers in a particular state.

  • Calculate the moving average of retail volume over a specified time.

  • Find the highest score among all students in the same grade.

  • Compare the current sales bonus that salespersons received against their previous bonus.

Analytic functions return aggregate results but they do not group the result set. They return the group value multiple times, once per record. You can sort group values, or partitions, using a window ORDER BY clause, but the order affects only the function result set, not the entire query result set.

52 - analyze-partition-stats-requirements.md

The following requirements and restrictions apply to ANALYZE_STATISTICS_PARTITION:

  • The table must be partitioned and cannot contain unpartitioned data.

  • The table partition expression must specify a single column. The following expressions are supported:

    • Expressions that specify only the column—that is, partition on all column values. For example:

      PARTITION BY ship_date GROUP BY CALENDAR_HIERARCHY_DAY(ship_date, 2, 2)
      
    • If the column is a DATE or TIMESTAMP/TIMESTAMPTZ, the partition expression can specify a supported date/time function that returns that column or any portion of it, such as month or year. For example, the following partition expression specifies to partition on the year portion of column order_date:

      PARTITION BY YEAR(order_date)
      
    • Expressions that perform addition or subtraction on the column. For example:

      PARTITION BY YEAR(order_date) -1
      
  • The table partition expression cannot coerce the specified column to another data type.

  • Vertica collects no statistics from the following projections:

    • Live aggregate and Top-K projections

    • Projections that are defined to include an SQL function within an expression
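
For example, for a table that meets these requirements and is partitioned by YEAR(order_date), a sketch of collecting statistics for a range of partition keys might look like this (the table name and key range are illustrative):

=> SELECT ANALYZE_STATISTICS_PARTITION('public.store_orders', '2021', '2023');
 ANALYZE_STATISTICS_PARTITION
-------------------------------
                             0
(1 row)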

53 - annotations.md

Custom annotations added to all of the objects that the operator creates. Each annotation is encoded as an environment variable in the Vertica server container. Annotations accept the following characters:

  • Letters
  • Numbers
  • Underscores

Invalid character values are converted to underscore characters. For example:

vertica.com/git-ref: 1234abcd

Is converted to:

VERTICA_COM_GIT_REF=1234abcd

54 - ar-execution.md

The AUTOREGRESSOR function automatically executes the algorithm that fits your input data:

  • One value column: the function executes autoregression and returns a trained AR model.
  • Multiple value columns: the function executes vector autoregression and returns a trained VAR model.

55 - ar-overview.md

AR is a univariate autoregressive time series algorithm that predicts a variable's future values based on its preceding values. The user specifies the number of lagged timesteps taken into account during computation, and the model then predicts future values as a linear combination of the values at each lag.
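
For example, a sketch of training an AR model on a hypothetical temp_data table with columns time and temperature, using three lagged timesteps (the table, columns, and parameter value are assumptions):

=> SELECT AUTOREGRESSOR('ar_temperature', 'temp_data', 'temperature', 'time'
                        USING PARAMETERS p=3);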

56 - arguments-c-.md

The factory's getPrototype() method first creates ColumnTypes for the array and row elements and then calls addArrayType() and addRowType() using them:


void getPrototype(ServerInterface &srv,
            ColumnTypes &argTypes,
            ColumnTypes &retTypes)
    {
        // item ID (int), to be used in an array
        ColumnTypes itemIdProto;
        itemIdProto.addInt();

        // row: order = address (varchar) + array of previously-created item IDs
        ColumnTypes orderProto;
        orderProto.addVarchar();                  /* address */
        orderProto.addArrayType(itemIdProto);     /* array of item ID */

        /* argument (input) is array of orders */
        argTypes.addArrayType(orderProto);

        /* return values: index in the array, order */
        retTypes.addInt();                        /* index of element */
        retTypes.addRowType(orderProto);          /* element return type */
    }

The arguments include a sized type (the VARCHAR). The getReturnType() method uses a similar approach, using the Fields class to build the two fields in the order.


void getReturnType(ServerInterface &srv,
            const SizedColumnTypes &argTypes,
            SizedColumnTypes &retTypes)
    {
        Fields itemIdElementFields;
        itemIdElementFields.addInt("item_id");

        Fields orderFields;
        orderFields.addVarchar(32, "address");
        orderFields.addArrayType(itemIdElementFields[0], "item_id");
        // optional third arg: max length, default unbounded

        /* declare return type */
        retTypes.addInt("index");
        static_cast<Fields &>(retTypes).addRowType(orderFields, "element");

        /* NOTE: presumably we have verified that the arguments match the prototype, so really we could just do this: */
        retTypes.addInt("index");
        retTypes.addArg(argTypes.getColumnType(0).getElementType(), "element");
    }

To access complex types in the UDx processing method, use the ArrayReader, ArrayWriter, StructReader, and StructWriter classes.

See C++ example: using complex types for a polymorphic function that uses arrays.

57 - arguments-python.md

The factory's getPrototype() method first uses makeType() and addType() methods to create and construct ColumnTypes for the row and its elements. The method then calls addType() methods to add these constructed ColumnTypes to the arg_types and return_type objects:


def getPrototype(self, srv_interface, arg_types, return_type):
    # item ID (int), to be used in an array
    itemIdProto = vertica_sdk.ColumnTypes.makeInt()

    # row (order): address (varchar) + array of previously-created item IDs
    orderProtoFields = vertica_sdk.ColumnTypes.makeEmpty()
    orderProtoFields.addVarchar()  # address
    orderProtoFields.addArrayType(itemIdProto) # array of item ID
    orderProto = vertica_sdk.ColumnTypes.makeRowType(orderProtoFields)

    # argument (input): array of orders
    arg_types.addArrayType(orderProto)

    # return values: index in the array, order
    return_type.addInt()                        # index of element
    return_type.addRowType(orderProto)          # element return type

The factory's getReturnType() method creates SizedColumnTypes with the makeInt() and makeEmpty() methods and then builds two row fields with the addVarchar() and addArrayType() methods. Note that the addArrayType() method specifies the maximum number of array elements as 1024. getReturnType() then adds these constructed SizedColumnTypes to the object representing the return type.


def getReturnType(self, srv_interface, arg_types, return_type):
    itemIdElementField = vertica_sdk.SizedColumnTypes.makeInt("item_id")

    orderFields = vertica_sdk.SizedColumnTypes.makeEmpty()
    orderFields.addVarchar(32, "address")
    orderFields.addArrayType(itemIdElementField, 1024, "item_ids")

    # declare return type
    return_type.addInt("index")
    return_type.addRowType(orderFields, "element")

    '''
    NOTE: presumably we have verified that the arguments match the prototype, so really we could just do this:
    return_type.addInt("index")
    return_type.addArrayType(arg_types.getColumnType(0).getElementType(), "element")
    '''

To access complex types in the UDx processing method, use the ArrayReader, ArrayWriter, RowReader, and RowWriter classes. For details, see Python SDK.

See Python example: matrix multiplication for a scalar function that uses complex types.

58 - arima-ar.md

59 - arima-ma.md

60 - array-restrictions.md

  • Native arrays support only data of primitive types, for example, int, UUID, and so on.

  • Array dimensionality is enforced. A column cannot contain arrays of varying dimensions. For example, a column that contains a three-dimensional array can only contain other three-dimensional arrays; it cannot simultaneously include a one-dimensional array. However, the arrays in a column can vary in size, where one array can contain four elements while another contains ten.

  • Array bounds, if specified, are enforced for all operations that load or alter data. Unbounded arrays may have as many elements as will fit in the allotted binary size.

  • An array has a maximum binary size. If this size is not set when the array is defined, a default value is used.

  • Arrays do not support LONG types (like LONG VARBINARY or LONG VARCHAR) or user-defined types (like Geometry).

61 - asm-gsm-param-note.md

You can store this value as a secret in AWS Secrets Manager or Google Secret Manager. For implementation details, see Secrets management.

62 - at-epoch-settings.md

  • EPOCH LATEST: Return data up to but not including the current epoch. The result set includes data from the latest committed DML transaction.

  • EPOCH integer: Return data up to and including the specified epoch.

  • TIME 'timestamp': Return data from the epoch at the specified timestamp.

These options are ignored if used to query temporary or external tables.

See Epochs for additional information about how Vertica uses epochs.
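
For example, sketches of each option against a hypothetical public.sales table:

=> AT EPOCH LATEST SELECT COUNT(*) FROM public.sales;
=> AT EPOCH 100 SELECT COUNT(*) FROM public.sales;
=> AT TIME '2023-06-01 12:00:00' SELECT COUNT(*) FROM public.sales;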

63 - auth-record.md

Next, you must create an authentication record in the database. An authentication record defines a set of authentication and access methods for the database. You grant this record to a user or role to control how they authenticate to the database:

  1. Create the authentication record. The tls method requires that clients authenticate with a certificate whose Common Name (CN) specifies a database username:
          
       => CREATE AUTHENTICATION auth_record METHOD 'tls' HOST TLS '0.0.0.0/0';
       CREATE AUTHENTICATION
       
    
  2. Grant the authentication record to a user or to a role. The following example grants the authentication record to PUBLIC, the default role for all users:
          
       => GRANT AUTHENTICATION auth_record TO PUBLIC;
       GRANT AUTHENTICATION
       
    

64 - authentication-apikey.md

Authentication

Requires a VerticaAPIKey in the request header.

65 - auto-remove-partition-eviction-policy.md

If you alter or remove table partitioning, Vertica automatically clears all eviction policies previously set on partitions of that table. The table's eviction policy, if any, is unaffected.

66 - autoconfig.md

The installer automatically configures the correct setting if the default value does not meet system requirements. If an issue occurs when setting this value, or you use the --no-system-configuration argument to the installer and the current setting is incorrect, then the installer reports this as an issue.

67 - autoconstraintenforcement.md

If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.
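
For example, a minimal sketch with an automatically enforced primary key; the second insert violates the constraint, so Vertica rolls back the statement and returns a duplicate-key error:

=> CREATE TABLE dim (id INT PRIMARY KEY ENABLED);
CREATE TABLE
=> INSERT INTO dim VALUES (1);
=> INSERT INTO dim VALUES (1);  -- fails with a duplicate key error and is rolled back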

68 - automatic-topic-creation.md

69 - autoregressor-overview.md

Autoregressive models predict future values of a time series based on the preceding values. More specifically, the user-specified lag determines how many previous timesteps it takes into account during computation, and predicted values are linear combinations of the values at each lag.

70 - aws-enable-https.md

Boolean, specifies whether to use the HTTPS protocol when connecting to S3. This parameter can be set only at the database level with ALTER DATABASE. If you choose not to use TLS, you must set this parameter to 0.

Default: 1 (enabled)
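
For example, a sketch of disabling HTTPS at the database level, appropriate only if your S3-compatible endpoint does not use TLS:

=> ALTER DATABASE DEFAULT SET AWSEnableHttps = 0;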

71 - aws-support-i-pv6.md

Vertica supports using IPv6 to identify nodes in the database cluster. However, AWS DNS resolution does not support IPv6. To have a cluster in AWS that uses IPv6, use IPv6 addresses instead of host names when installing Vertica and forming the cluster.

72 - awsca-file.md

File name of the TLS server certificate bundle to use. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter.

73 - awsca-path.md

Path Vertica uses to look up TLS server certificate bundles. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter.

74 - azure-emulator-warning.md

75 - azure-params-table.md

AzureStorageCredentials
Collection of JSON objects, each of which specifies connection credentials for one endpoint. This parameter takes precedence over Azure managed identities.

The collection must contain at least one object and may contain more. Each object must specify at least one of accountName or blobEndpoint, and at least one of accountKey or sharedAccessSignature.

  • accountName: If not specified, uses the label of blobEndpoint.
  • blobEndpoint: Host name with optional port (host:port). If not specified, uses account.blob.core.windows.net.
  • accountKey: Access key for the account or endpoint.
  • sharedAccessSignature: Access token for finer-grained access control, if being used by the Azure endpoint.
AzureStorageEndpointConfig
Collection of JSON objects, each of which specifies configuration elements for one endpoint. Each object must specify at least one of accountName or blobEndpoint.
  • accountName: If not specified, uses the label of blobEndpoint.
  • blobEndpoint: Host name with optional port (host:port). If not specified, uses account.blob.core.windows.net.
  • protocol: HTTPS (default) or HTTP.
  • isMultiAccountEndpoint: true if the endpoint supports multiple accounts, false otherwise (default is false). To use multiple-account access, you must include the account name in the URI. If a URI path contains an account, this value is assumed to be true unless explicitly set to false.
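
For example, a sketch of setting credentials for a single account at the database level; the account name and key shown here are placeholders:

=> ALTER DATABASE DEFAULT SET AzureStorageCredentials =
   '[{"accountName": "myaccount", "accountKey": "<access-key>"}]';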

76 - background-depot-warming-example.md


=> SELECT BACKGROUND_DEPOT_WARMING('v_verticadb_node0006');
                          BACKGROUND_DEPOT_WARMING
----------------------------------------------------------------------------
 Depot warming running in background. Check monitoring tables for progress.
(1 row)

77 - backup-restore-api.md

GET backups
Returns all the backups that have been created for all vbr configuration files (*.ini) that are located in the /opt/vertica/config directory.

POST backups/:config_script_base
Creates a new backup as defined by the given vbr configuration script base (filename without the .ini extension).

GET backups/:config_script_base/:archive_id
Returns details for a specific backup archive.

POST restore/:archive_id
Restores a backup.

78 - backup-security.md

79 - bash-control-character-escaping.md

To set the field separator value to a control character, use your shell's control character escape notation. In Bash, you specify a control character in an argument using a dollar sign ($) followed by a string contained in single quotes. This string can contain C-string escapes (such as \t for tab), or a backslash (\) followed by an octal value for the character you want to use.

The following example demonstrates setting the separator character to tab (\t), vertical tab (\v) and the octal value of vertical tab (\013).

$ vsql -At -c "SELECT * FROM testtable;"
A|1|2|3
B|4|5|6

$ vsql -F $'\t' -At -c "SELECT * FROM testtable;"
A       1       2       3
B       4       5       6

$ vsql -F $'\v' -At -c "SELECT * FROM testtable;"
A
 1
  2
   3
B
 4
  5
   6
$ vsql -F $'\013' -At -c "SELECT * FROM testtable;"
A
 1
  2
   3
B
 4
  5
   6

80 - basic-stored-procedure.md

The following procedure echoes its inputs as a result set:

=> CREATE PROCEDURE echo_int_varchar(INOUT x INT, INOUT y VARCHAR) LANGUAGE PLvSQL AS $$
BEGIN
    RAISE NOTICE 'This procedure outputs a result set of its inputs:';
END
$$;

=> CALL echo_int_varchar(3, 'a string');
NOTICE 2005:  This procedure outputs a result set of its inputs:
 x |    y
---+----------
 3 | a string
(1 row)

81 - batch-expor-output-file-defaults.md

If a given input query omits DirQueryName and DirQueryComment fields, EXPORT_DIRECTED_QUERIES automatically generates the following output:

  • /* Query: Autoname:timestamp.n */, where n is a zero-based integer index that ensures uniqueness among auto-generated names with the same timestamp.

  • /* Comment: Optimizer-generated directed query */

82 - batch-export-error-file.md

If any errors or warnings occur during EXPORT_DIRECTED_QUERIES execution, it returns with a message like this one:

1 queries successfully exported.
1 warning message was generated.
Queries exported to /home/dbadmin/outputQueries.
See error report, /home/dbadmin/outputQueries.err for details.

EXPORT_DIRECTED_QUERIES writes all errors and warnings to a file that it creates on the same path as the output file, using the output file's base name with a .err extension (for example, outputQueries.err).

83 - batch-export-input-file.md

The input file that you supply to EXPORT_DIRECTED_QUERIES contains one or more input queries. For each input query, you can optionally specify two fields that are used in the generated directed query:

  • DirQueryName provides the directed query's unique identifier, a string that conforms to conventions described in Identifiers.

  • DirQueryComment specifies a quote-delimited string, up to 128 characters.

You format each input query as follows:

--DirQueryName=query-name
--DirQueryComment='comment'
input-query
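
For example, an input file containing one annotated input query might look like the following sketch (the query and table are hypothetical):

--DirQueryName=FindBostonEmployees
--DirQueryComment='Boston employees, hand-tuned by the optimizer team'
SELECT employee_last_name FROM employees WHERE city = 'Boston';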

84 - batch-export-output-file.md

EXPORT_DIRECTED_QUERIES generates SQL for creating directed queries, and writes the SQL to the specified file or to standard output. In both cases, output conforms to the following format:

/* Query: directed-query-name */
/* Comment: directed-query-comment */
SAVE QUERY input-query;
CREATE DIRECTED QUERY CUSTOM 'directed-query-name'
COMMENT 'directed-query-comment'
OPTVER 'vertica-release-num'
PSDATE 'timestamp'
annotated-query

85 - binary-transfer.md

The data output by both modes is identical with the following exceptions for certain data types:

  • FLOAT: Binary transfer has slightly better precision.

  • TIMESTAMPTZ: Binary transfer can fail to get the session time zone and default to the local time zone, while text transfer reliably uses the session time zone.

  • NUMERIC: Binary transfer is forcibly disabled for NUMERIC data by the server for Vertica 11.0.2+.

86 - bit-example.md

The example that follows uses table t with a single column of VARBINARY data type:

=> CREATE TABLE t ( c VARBINARY(2) );
=> INSERT INTO t values(HEX_TO_BINARY('0xFF00'));
=> INSERT INTO t values(HEX_TO_BINARY('0xFFFF'));
=> INSERT INTO t values(HEX_TO_BINARY('0xF00F'));

Query table t to see column c output:

=> SELECT TO_HEX(c) FROM t;
 TO_HEX
--------
 ff00
 ffff
 f00f
(3 rows)

87 - boo-lanalyticfunctions.md

The following example illustrates how you can use the BOOL_AND, BOOL_OR, and BOOL_XOR analytic functions. The sample table, employee, includes a column for type of employee and years paid.

=> CREATE TABLE employee(emptype VARCHAR, yearspaid VARCHAR);
CREATE TABLE

Insert sample data into the table to show years paid. In more than one case, an employee could be paid more than once within one year.

=> INSERT INTO employee
SELECT 'contractor1', '2014'
UNION ALL
SELECT 'contractor2', '2015'
UNION ALL
SELECT 'contractor3', '2014'
UNION ALL
SELECT 'contractor1', '2014'
UNION ALL
SELECT 'contractor2', '2014'
UNION ALL
SELECT 'contractor3', '2015'
UNION ALL
SELECT 'contractor4', '2014'
UNION ALL
SELECT 'contractor4', '2014'
UNION ALL
SELECT 'contractor5', '2015'
UNION ALL
SELECT 'contractor5', '2016';
 OUTPUT
--------
     10
(1 row)

Query the table. The result shows employees that were paid twice in 2014 (BOOL_AND), once or twice in 2014 (BOOL_OR), and specifically not more than once in 2014 (BOOL_XOR).

=> SELECT DISTINCT emptype,
BOOL_AND(yearspaid='2014') OVER (PARTITION BY emptype) AS paidtwicein2014,
BOOL_OR(yearspaid='2014') OVER (PARTITION BY emptype) AS paidonceortwicein2014,
BOOL_XOR(yearspaid='2014') OVER (PARTITION BY emptype) AS paidjustoncein2014
FROM employee;


   emptype   | paidtwicein2014 | paidonceortwicein2014 | paidjustoncein2014
-------------+-----------------+-----------------------+--------------------
 contractor1 | t               | t                     | f
 contractor2 | f               | t                     | t
 contractor3 | f               | t                     | t
 contractor4 | t               | t                     | f
 contractor5 | f               | f                     | f
(5 rows)

88 - boo-lfunctions.md

The following example shows how to use aggregate functions BOOL_AND, BOOL_OR, and BOOL_XOR. The sample table mixers includes columns for models and colors.

=> CREATE TABLE mixers(model VARCHAR(20), colors VARCHAR(20));
CREATE TABLE

Insert sample data into the table. The sample adds two color fields for each model.

=> INSERT INTO mixers
SELECT 'beginner', 'green'
UNION ALL
SELECT 'intermediate', 'blue'
UNION ALL
SELECT 'intermediate', 'blue'
UNION ALL
SELECT 'advanced', 'green'
UNION ALL
SELECT 'advanced', 'blue'
UNION ALL
SELECT 'professional', 'blue'
UNION ALL
SELECT 'professional', 'green'
UNION ALL
SELECT 'beginner', 'green';
 OUTPUT
--------
      8
(1 row)

Query the table. The result shows models that have two blue (BOOL_AND), one or two blue (BOOL_OR), and specifically not more than one blue (BOOL_XOR) mixer.

=> SELECT model,
BOOL_AND(colors = 'blue') AS two_blue,
BOOL_OR(colors = 'blue') AS one_or_two_blue,
BOOL_XOR(colors = 'blue') AS specifically_not_more_than_one_blue
FROM mixers
GROUP BY model;

    model     | two_blue | one_or_two_blue | specifically_not_more_than_one_blue
--------------+----------+-----------------+-------------------------------------
 advanced     | f        | t               | t
 beginner     | f        | f               | f
 intermediate | t        | t               | f
 professional | f        | t               | t
(4 rows)

89 - boo-lminmaxanalyticexample.md

The following example shows you the difference between the MIN and MAX analytic functions when you use them with a Boolean value. The sample creates a table with two columns, adds two rows of data, and shows sample output for MIN and MAX.

CREATE TABLE min_max_functions (emp VARCHAR, torf BOOL);

INSERT INTO min_max_functions VALUES ('emp1', 1);
INSERT INTO min_max_functions VALUES ('emp1', 0);

SELECT DISTINCT emp,
min(torf) OVER (PARTITION BY emp) AS worksasbooleanand,
Max(torf) OVER (PARTITION BY emp) AS worksasbooleanor
FROM min_max_functions;

 emp  | worksasbooleanand | worksasbooleanor
------+-------------------+------------------
 emp1 | f                 | t
(1 row)

90 - bool-min-maxaggregate-example.md

The following example shows you the difference between the MIN and MAX aggregate functions when you use them with a Boolean value. The sample creates a table, adds two rows of data, and shows sample output for MIN and MAX.

=> CREATE TABLE min_max_functions (torf BOOL);

=> INSERT INTO min_max_functions VALUES (1);
=> INSERT INTO min_max_functions VALUES (0);

=> SELECT * FROM min_max_functions;
  torf
------
 t
 f
(2 rows)

=> SELECT min(torf) FROM min_max_functions;
 min
-----
 f
(1 row)

=> SELECT max(torf) FROM min_max_functions;
 max
-----
 t
(1 row)

91 - boolmaxmi-naggregatenote.md

Aggregate functions MIN and MAX can operate with Boolean values. MAX can act upon a Boolean data type or a value that can be implicitly converted to a Boolean. If at least one input value is true, MAX returns t (true). Otherwise, it returns f (false). In the same scenario, MIN returns t (true) if all input values are true. Otherwise it returns f.

92 - boolminma-xanalyticnote.md

The analytic functions MIN() and MAX() can operate with Boolean values. The MAX() function acts upon a Boolean data type or a value that can be implicitly converted to a Boolean value. If at least one input value is true, MAX() returns t (true). Otherwise, it returns f (false). In the same scenario, the MIN() function returns t (true) if all input values are true. Otherwise, it returns f.

93 - bounded-collections.md

When declaring a collection type for a table column, you can limit either the number of elements or the total binary size of the collection. During query processing, Vertica always reserves the maximum memory needed for the column, based on either the element count or the binary size. If this size is much larger than your data actually requires, setting one of these limits can improve query performance by reducing the amount of memory that must be reserved for the column.

You can change the bounds of a collection, including changing between bounded and unbounded collections, by casting. See Casting.

A bounded collection specifies a maximum element count. A value in a bounded collection column may contain fewer elements, but it may not contain more. Any attempt to insert more elements into a bounded collection than the declared maximum is an error. A bounded collection has a binary size that is the product of the data-type size and the maximum number of elements, possibly rounded up.

An unbounded collection specifies a binary size in bytes, explicitly or implicitly. It may contain as many elements as can fit in that binary size.
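
For example, a sketch of a column bounded to 10 elements, and a cast that changes the bound (the table and values are illustrative):

=> CREATE TABLE orders (order_id INT, item_ids ARRAY[INT, 10]);
CREATE TABLE
=> SELECT ARRAY[1,2,3]::ARRAY[INT, 5];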

94 - bth-mc-auto-applies-bt-hto-clusters.md

The BTH license is automatically applied to all clusters you create using a BTH MC instance.

95 - bth-to-byol-steps.md

  1. Obtain a long-term BYOL license from the online licensing portal, described in Obtaining a license key file.

  2. Upload the license file to a node in your database. Note the absolute path in the node's filesystem, as you will need this later when installing the license.

  3. Connect to the node you uploaded the license file to in the previous step. Connect to your database using vsql and view the licenses table:

       => SELECT * FROM licenses;

  4. Note the name of the hourly license listed in the NAME column, so you can check if it is still present later.

  5. Install the license in the database using the INSTALL_LICENSE function with the absolute path to the license file you uploaded in step 2:

       => SELECT install_license('absolute path to BYOL license');

  6. View the licenses table again:

       => SELECT * FROM licenses;

     If only the new BYOL license appears in the table, skip to step 8. If the hourly license whose name you noted in step 4 is still in the table, copy the name and proceed to step 7.

  7. Call the DROP_LICENSE function to drop the hourly license:

       => SELECT drop_license('hourly license name');

96 - ca-bundle-deprecation.md

97 - ca-bundle-join.md

CA_BUNDLES only stores OIDs. Since operations on CA bundles require certificate and owner names, you can use the following query to map bundles to certificate and owner names:

=> SELECT user_name AS owner_name,
       owner     AS owner_oid,
       b.name    AS bundle_name,
       c.name    AS cert_name
FROM   (SELECT name,
               STRING_TO_ARRAY(certificates) :: array[INT] AS certs
        FROM   ca_bundles) b
       LEFT JOIN certificates c
              ON CONTAINS(b.certs, c.oid)
       LEFT JOIN users
              ON user_id = owner
ORDER  BY 1;

 owner_name |     owner_oid     | bundle_name  | cert_name
------------+-------------------+--------------+-----------
 dbadmin    | 45035996273704962 | ca_bundle    | root_ca
 dbadmin    | 45035996273704962 | ca_bundle    | ca_cert
(2 rows)

98 - cache-no-cache-parameter.md

How many unique sequence numbers to pre-allocate and store in memory on each node for faster access. Vertica sets up caching for each session and distributes it across all nodes. A value of 0 or 1 is equivalent to NO CACHE.

By default, the sequence cache is set to 250,000.

For details, see Distributing sequences.
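
For example, a sketch that creates one sequence with a smaller per-node cache and another with caching disabled (sequence names are illustrative):

=> CREATE SEQUENCE order_seq CACHE 50000;
CREATE SEQUENCE
=> CREATE SEQUENCE audit_seq NO CACHE;
CREATE SEQUENCE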

99 - cancel-depot-warming-example.md


=> SELECT CANCEL_DEPOT_WARMING('v_verticadb_node0007');
   CANCEL_DEPOT_WARMING
--------------------------
 Depot warming cancelled.
(1 row)

100 - cascade-in-revoke.md

Revoke privileges from users who received them from the grantee through WITH GRANT OPTION.
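
For example, a sketch that revokes a privilege both from a user and from anyone that user granted it to (the table and user are hypothetical):

=> REVOKE SELECT ON TABLE public.store_orders FROM alice CASCADE;
REVOKE PRIVILEGE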

101 - catalog-memory-usage.md

You can calculate catalog memory usage on all nodes by querying system table RESOURCE_POOL_STATUS:

=> SELECT node_name, pool_name, memory_size_kb FROM resource_pool_status WHERE pool_name = 'metadata';

102 - categorical-column-support.md

Vertica XGBoost and Random Forest algorithms offer native support for categorical columns (BOOL/VARCHAR). Simply pass the categorical columns as predictors to the models; the algorithm automatically treats those columns as categorical and does not attempt to split them into bins in the same manner as numerical columns. Vertica treats these columns as true categorical values and does not simply cast them to continuous values under the hood.
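
For example, a sketch of training a random forest classifier on a hypothetical product_train table in which color and size are VARCHAR categorical predictors (the table, columns, and parameter value are assumptions):

=> SELECT RF_CLASSIFIER('rf_products', 'product_train', 'purchased', 'color, size, price'
                        USING PARAMETERS ntree=50);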

103 - caution-regex-example.md

For display purposes, this sample regular expression adds new line characters to split long lines of text. To use this expression in a query, first copy and edit the example to remove any new line characters.

104 - ce-license-phone-home.md

As part of the CE license, you agree to the collection of some anonymous, non-identifying usage data. This data lets Vertica understand how customers use the product, and helps guide the development of new features. None of your personal data is collected. For details on what is collected, see the Community Edition End User License Agreement.

105 - client-auth-methods.md

  • trust: Users can authenticate with a valid username (that is, without a password).

  • reject: Rejects the connection attempt.

  • hash: Users must provide a valid username and password. For details, see Hash authentication.

  • gss: Authorizes clients that connect to Vertica with an MIT Kerberos implementation. The Key Distribution Center (KDC) must support Kerberos 5 using the GSS-API. Non-MIT Kerberos implementations must use the GSS-API. For details, see Kerberos authentication.

  • ident: Authenticates the client against a username on an Ident server. For details, see Ident authentication.

  • ldap: Authenticates a client and their username and password with an LDAP or Active Directory server. For details, see LDAP authentication.

  • tls: Authenticates clients that provide a certificate with a Common Name (CN) that specifies a valid database username. Vertica must be configured for mutual mode TLS to use this method. For details, see TLS authentication.

  • oauth: Authenticates a client with an access token. For details, see OAuth 2.0 authentication.

106 - client-connection-failover-and-load-balancing.md

When native connection load balancing is enabled, the additional servers specified in the BackupServerNode connection parameter are only used for the initial connection to a Vertica host. If the host redirects the client to another host in the database cluster to handle its connection request, the second connection does not use the backup node list. This is rarely an issue, since native connection load balancing is aware of which nodes are currently up in the database.

107 - client-load-balancing-references.md

See Load balancing in ADO.NET, Load balancing in JDBC, and Load balancing for information on enabling load balancing on the client. For vsql, use the -C command-line option to enable load balancing.

108 - client-log-etw-conversion.md

LogLevel in the windows event viewer

While LogLevel ranges from 0 through 6, this range is compressed for the Windows Event Viewer to a range of 0 through 3.

 Vertica LogLevel setting | Vertica LogLevel description | Log level sent to the Windows Event Viewer | Log level displayed by the Windows Event Viewer
--------------------------+------------------------------+--------------------------------------------+-------------------------------------------------
 0                        | No logging                   | 0                                          | No logging
 1                        | Fatal Errors                 | 1                                          | Critical
 2                        | Errors                       | 2                                          | Error
 3                        | Warnings                     | 3                                          | Warning
 4                        | Info                         | 4                                          | Information
 5                        | Debug                        | 4                                          | Information
 6                        | Trace (all messages)         | 4                                          | Information

The following examples show how LogLevel is converted when displayed in the Windows Event Viewer.

  • A LogLevel of 5 sends fatal errors, errors, warnings, info and debug log level entries to Event Viewer as Level 4 (Information).

  • A LogLevel of 6 sends fatal errors, errors, warnings, debug and trace log level entries to Event Viewer as Level 4.

109 - cluster_snip.md

In the Vertica Analytics Platform, a cluster is a collection of hosts with the Vertica software packages (RPM or DEB) that are in one admin tools domain. You can access and manage a cluster from one admintools initiator host.

110 - cluster-actions-mc.md

On the Cluster page, you can perform the following cluster actions:

  • Start Cluster: Starts the instances, then starts the database. For Eon Mode databases, MC repopulates the nodes with data from the storage account container.

  • Stop Cluster: Stops the nodes in the database, then stops their cloud instances.

  • Advanced > Terminate: Stops the database, then terminates the cloud instances.

111 - code-ado-net-using-geometry-data-in-c.md

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using Vertica.Data.VerticaClient;
namespace ConsoleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            VerticaConnectionStringBuilder builder =
                             new VerticaConnectionStringBuilder();
            builder.Host = "VerticaHost";
            builder.Database = "VMart";
            builder.User = "ExampleUser";
            builder.Password = "password123";
            VerticaConnection _conn = new
                              VerticaConnection(builder.ToString());
            _conn.Open();

            VerticaCommand command = _conn.CreateCommand();
            command.CommandText = "DROP TABLE IF EXISTS polygons";
            command.ExecuteNonQuery();
            command.CommandText =
              "CREATE TABLE polygons (id INTEGER PRIMARY KEY, poly GEOMETRY)";
            command.ExecuteNonQuery();
            // Prepare to insert a polygon using a prepared statement. Use the
            // ST_GeomFromText SQL function to convert from WKT to GEOMETRY.
            VerticaTransaction txn = _conn.BeginTransaction();
            command.CommandText =
             "INSERT into polygons VALUES(@id, ST_GeomFromText(@polygon))";
            command.Parameters.Add(new
                           VerticaParameter("id", VerticaType.BigInt));
            command.Parameters.Add(new
                           VerticaParameter("polygon", VerticaType.VarChar));
            command.Prepare();
            // Set the values for the parameters
            command.Parameters["id"].Value = 0;
            //
            command.Parameters["polygon"].Value =
                               "polygon((1 1, 1 2, 2 2, 2 1, 1 1))";
            // Execute the query to insert the value
            command.ExecuteNonQuery();

            // Now query the table
            VerticaCommand query = _conn.CreateCommand();
            query.CommandText =
               "SELECT id, ST_AsText(poly), ST_AsBinary(poly) FROM polygons;";
            VerticaDataReader dr = query.ExecuteReader();
            while (dr.Read())
            {
                Console.WriteLine("ID: " + dr[0]);
                Console.WriteLine("Polygon WKT format data type: "
                    + dr.GetDataTypeName(1) +
                    " Value: " + dr[1]);
                // Get the WKB format of the polygon and print it out as hex.
                Console.Write("Polygon WKB format data type: "
                               + dr.GetDataTypeName(2));
                Console.WriteLine(" Value: "
                               + BitConverter.ToString((byte[])dr[2]));
            }
            _conn.Close();
        }
    }
}

The example code prints the following on the system console:

ID: 0
Polygon WKT format data type: LONG VARCHAR Value: POLYGON ((1 1, 1 2,
2 2, 2 1,1 1))
Polygon WKB format data type: LONG VARBINARY Value: 01-03-00-00-00-01
-00-00-00-05-00-00-00-00-00-00-00-00-00-F0-3F-00-00-00-00-00-00-F0-3F
-00-00-00-00-00-00-F0-3F-00-00-00-00-00-00-00-40-00-00-00-00-00-00-00
-40-00-00-00-00-00-00-00-40-00-00-00-00-00-00-00-40-00-00-00-00-00-00
-F0-3F-00-00-00-00-00-00-F0-3F-00-00-00-00-00-00-F0-3F

112 - code-ado-transaction-isolation.md

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using Vertica.Data.VerticaClient;
namespace ConsoleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            VerticaConnectionStringBuilder builder = new VerticaConnectionStringBuilder();
            builder.Host = "192.168.1.10";
            builder.Database = "VMart";
            builder.User = "dbadmin";
            VerticaConnection _conn1 = new VerticaConnection(builder.ToString());
             _conn1.Open();
            VerticaTransaction txn1 = _conn1.BeginTransaction();
            Console.WriteLine("\n Transaction 1 Transaction Isolation Level: " +
             txn1.IsolationLevel.ToString());
            txn1.Rollback();
            VerticaTransaction txn2 = _conn1.BeginTransaction(IsolationLevel.Serializable);
            Console.WriteLine("\n Transaction 2 Transaction Isolation Level: " +
             txn2.IsolationLevel.ToString());
            txn2.Rollback();
            VerticaTransaction txn3 = _conn1.BeginTransaction(IsolationLevel.ReadCommitted);
            Console.WriteLine("\n Transaction 3 Transaction Isolation Level: " +
             txn3.IsolationLevel.ToString());
            _conn1.Close();
        }
    }
}

113 - code-jdbc-batch-insert-rollback-on-error.md

import java.sql.*;
import java.util.Arrays;
import java.util.Properties;

public class RollbackBatchOnError {
    public static void main(String[] args) {
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);
            // Disable auto-commit. This will allow you to roll back
            // a batch load if there is an error.
            conn.setAutoCommit(false);
            // establish connection and make a table for the data.
            Statement stmt = conn.createStatement();
            // Drop table and recreate.
            stmt.execute("DROP TABLE IF EXISTS customers CASCADE");
            stmt.execute("CREATE TABLE customers (CustID int, Last_Name"
                            + " char(50), First_Name char(50),Email char(50), "
                            + "Phone_Number char(12))");

            // Some dummy data to insert. The one row won't insert because
            // the phone number is too long for the phone column.
            String[] firstNames = new String[] { "Anna", "Bill", "Cindy",
                            "Don", "Eric" };
            String[] lastNames = new String[] { "Allen", "Brown", "Chu",
                            "Dodd", "Estavez" };
            String[] emails = new String[] { "aang@example.com",
                            "b.brown@example.com", "cindy@example.com",
                            "d.d@example.com", "e.estavez@example.com" };
            String[] phoneNumbers = new String[] { "123-456-789",
                            "555-444-3333", "555-867-53094535", "555-555-1212",
                            "781-555-0000" };
            // Create the prepared statement
            PreparedStatement pstmt = conn.prepareStatement(
                            "INSERT INTO customers (CustID, Last_Name, " +
                            "First_Name, Email, Phone_Number) "+
                            "VALUES(?,?,?,?,?)");
            // Add rows to a batch in a loop. Each iteration adds a
            // new row.
            for (int i = 0; i < firstNames.length; i++) {
                // Add each parameter to the row.
                pstmt.setInt(1, i + 1);
                pstmt.setString(2, lastNames[i]);
                pstmt.setString(3, firstNames[i]);
                pstmt.setString(4, emails[i]);
                pstmt.setString(5, phoneNumbers[i]);
                // Add row to the batch.
                pstmt.addBatch();
            }
            // Integer array to hold the results of inserting
            // the batch. Will contain an entry for each row,
            // indicating success or failure.
            int[] batchResults = null;
            try {
                // Batch is ready, execute it to insert the data
                batchResults = pstmt.executeBatch();
                // If we reach here, we inserted the batch without errors.
                // Commit it.
                System.out.println("Batch insert successful. Committing.");
                conn.commit();
            } catch (BatchUpdateException e) {
                    System.out.println("Error message: " + e.getMessage());
                    // Batch results isn't set due to exception, but you
                    // can get it from the exception object.
                    batchResults =  e.getUpdateCounts();
                    // Roll back the batch transaction.
                    System.out.println("Rolling back batch insertion");
                    conn.rollback();
            }
            catch  (SQLException e) {
                // General SQL errors, such as connection issues, throw
                // SQLExceptions. Your application should do something more
                // than just print a stack trace,
                e.printStackTrace();
            }
            System.out.println("Return value from inserting batch: "
                            + Arrays.toString(batchResults));
            System.out.println("Customers table contains:");


            // Print the resulting table.
            ResultSet rs = null;
            rs = stmt.executeQuery("SELECT CustID, First_Name, "
                            + "Last_Name FROM customers ORDER BY CustID");
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " - "
                                + rs.getString(2).trim() + " "
                                + rs.getString(3).trim());
            }

            // Cleanup
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

114 - code-jdbc-batch-load-accepted-and-rejected-rows.md

import java.sql.*;
import java.util.Arrays;
import java.util.Properties;

public class BatchInsertErrorHandlingExample {
    public static void main(String[] args) {
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        Connection conn;

        // establish connection and make a table for the data.
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);


            // Disable auto commit
            conn.setAutoCommit(false);

            // Create a statement
            Statement stmt = conn.createStatement();
            // Drop table and recreate.
            stmt.execute("DROP TABLE IF EXISTS customers CASCADE");
            stmt.execute("CREATE TABLE customers (CustID int, Last_Name"
                            + " char(50), First_Name char(50),Email char(50), "
                            + "Phone_Number char(12))");

            // Some dummy data to insert. One row won't insert because
            // its phone number is too long for the Phone_Number column.
            String[] firstNames = new String[] { "Anna", "Bill", "Cindy",
                            "Don", "Eric" };
            String[] lastNames = new String[] { "Allen", "Brown", "Chu",
                            "Dodd", "Estavez" };
            String[] emails = new String[] { "aang@example.com",
                            "b.brown@example.com", "cindy@example.com",
                            "d.d@example.com", "e.estavez@example.com" };
            String[] phoneNumbers = new String[] { "123-456-789",
                            "555-444-3333", "555-867-53093453453",
                            "555-555-1212", "781-555-0000" };

            // Create the prepared statement
            PreparedStatement pstmt = conn.prepareStatement(
                            "INSERT INTO customers (CustID, Last_Name, " +
                            "First_Name, Email, Phone_Number)" +
                            " VALUES(?,?,?,?,?)");

            // Add rows to a batch in a loop. Each iteration adds a
            // new row.
            for (int i = 0; i < firstNames.length; i++) {
                // Add each parameter to the row.
                pstmt.setInt(1, i + 1);
                pstmt.setString(2, lastNames[i]);
                pstmt.setString(3, firstNames[i]);
                pstmt.setString(4, emails[i]);
                pstmt.setString(5, phoneNumbers[i]);
                // Add row to the batch.
                pstmt.addBatch();
            }

            // Integer array to hold the results of inserting
            // the batch. Will contain an entry for each row,
            // indicating success or failure.
            int[] batchResults = null;

            try {
                // Batch is ready, execute it to insert the data
                batchResults = pstmt.executeBatch();
            } catch (BatchUpdateException e) {
                // We expect an exception here, since one of the
                // inserted phone numbers is too wide for its column. All of
                // the other rows will be inserted.
                System.out.println("Error message: " + e.getMessage());

                // The batchResults array isn't set due to the exception, but
                // you can get it from the exception object.
                //
                // In your own code, you shouldn't assume that a batch
                // exception occurred, since exceptions can be thrown
                // by the server for a variety of reasons.
                batchResults = e.getUpdateCounts();
            }
            // You should also be prepared to catch SQLExceptions in your own
            // application code, to handle dropped connections and other general
            // problems.

            // Commit the transaction
            conn.commit();


            // Print the array holding the results of the batch insertions.
            System.out.println("Return value from inserting batch: "
                            + Arrays.toString(batchResults));
            // Print the resulting table.
            ResultSet rs = null;
            rs = stmt.executeQuery("SELECT CustID, First_Name, "
                            + "Last_Name FROM customers ORDER BY CustID");
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " - "
                                + rs.getString(2).trim() + " "
                                + rs.getString(3).trim());
            }

            // Cleanup
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

115 - code-jdbc-batch-load-with-prepared-statements.md

import java.sql.*;
import java.util.Properties;

public class BatchInsertExample {
    public static void main(String[] args) {
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");

        // Set streamingBatchInsert to True to enable streaming mode for batch inserts.
        // myProp.put("streamingBatchInsert", "True");

        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);
            // establish connection and make a table for the data.
            Statement stmt = conn.createStatement();


            // Set AutoCommit to false to allow Vertica to reuse the same
            // COPY statement
            conn.setAutoCommit(false);


            // Drop table and recreate.
            stmt.execute("DROP TABLE IF EXISTS customers CASCADE");
            stmt.execute("CREATE TABLE customers (CustID int, Last_Name"
                            + " char(50), First_Name char(50),Email char(50), "
                            + "Phone_Number char(12))");
            // Some dummy data to insert.
            String[] firstNames = new String[] { "Anna", "Bill", "Cindy",
                            "Don", "Eric" };
            String[] lastNames = new String[] { "Allen", "Brown", "Chu",
                            "Dodd", "Estavez" };
            String[] emails = new String[] { "aang@example.com",
                            "b.brown@example.com", "cindy@example.com",
                            "d.d@example.com", "e.estavez@example.com" };
            String[] phoneNumbers = new String[] { "123-456-7890",
                            "555-444-3333", "555-867-5309",
                            "555-555-1212", "781-555-0000" };
            // Create the prepared statement
            PreparedStatement pstmt = conn.prepareStatement(
                            "INSERT INTO customers (CustID, Last_Name, " +
                            "First_Name, Email, Phone_Number)" +
                            " VALUES(?,?,?,?,?)");
            // Add rows to a batch in a loop. Each iteration adds a
            // new row.
            for (int i = 0; i < firstNames.length; i++) {
                // Add each parameter to the row.
                pstmt.setInt(1, i + 1);
                pstmt.setString(2, lastNames[i]);
                pstmt.setString(3, firstNames[i]);
                pstmt.setString(4, emails[i]);
                pstmt.setString(5, phoneNumbers[i]);
                // Add row to the batch.
                pstmt.addBatch();
            }

            try {
                // Batch is ready, execute it to insert the data
                pstmt.executeBatch();
            } catch (SQLException e) {
                System.out.println("Error message: " + e.getMessage());
                return; // Exit if there was an error
            }

            // Commit the transaction to close the COPY command
            conn.commit();


            // Print the resulting table.
            ResultSet rs = null;
            rs = stmt.executeQuery("SELECT CustID, First_Name, "
                            + "Last_Name FROM customers ORDER BY CustID");
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " - "
                                + rs.getString(2).trim() + " "
                                + rs.getString(3).trim());
            }
            // Cleanup
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

116 - code-jdbc-bulk-load-using-copy.md

import java.sql.*;
import java.util.Properties;
import com.vertica.jdbc.*;

public class COPYFromFile {
    public static void main(String[] args) {
        Properties myProp = new Properties();
        myProp.put("user", "ExampleAdmin"); // Must be superuser
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",myProp);
            // Disable AutoCommit
            conn.setAutoCommit(false);
            Statement stmt = conn.createStatement();
            // Create a table to hold data.
            stmt.execute("DROP TABLE IF EXISTS customers;");
            stmt.execute("CREATE TABLE IF NOT EXISTS customers (Last_Name char(50) "
                            + "NOT NULL, First_Name char(50),Email char(50), "
                            + "Phone_Number char(15))");

            // Use the COPY command to load data. Use ENFORCELENGTH to reject
            // strings too wide for their columns.
            boolean result = stmt.execute("COPY customers FROM "
                            + " '/data/customers.txt' ENFORCELENGTH");

            // Determine if execution returned a count value, or a full result
            // set.
            if (result) {
                System.out.println("Got result set");
            } else {
                // Count will usually return the count of rows inserted.
                System.out.println("Got count");
                int rowCount = stmt.getUpdateCount();
                System.out.println("Number of accepted rows = " + rowCount);
            }


            // Commit the data load
            conn.commit();
        } catch (SQLException e) {
            System.out.print("Error: ");
            System.out.println(e.toString());
        }
    }
}

117 - code-jdbc-connection-example.md

import java.sql.*;
import java.util.Properties;

public class VerySimpleVerticaJDBCExample {
    public static void main(String[] args) {
        /*
         * If your client needs to run under a Java 5 JVM, it will use the older
         * JDBC 3.0-compliant driver, which requires you to manually load the
         * driver using Class.forName.
         */
        /*
         * try { Class.forName("com.vertica.jdbc.Driver"); } catch
         * (ClassNotFoundException e) { // Could not find the driver class.
         * Likely an issue // with finding the .jar file.
         * System.err.println("Could not find the JDBC driver class.");
         * e.printStackTrace(); return; // Bail out. We cannot do anything
         * further. }
         */
        Properties myProp = new Properties();
        myProp.put("user", "dbadmin");
        myProp.put("password", "vertica");
        myProp.put("loginTimeout", "35");
        myProp.put("KeystorePath", "c:/keystore/keystore.jks");
     myProp.put("KeystorePassword", "keypwd");
     myProp.put("TrustStorePath", "c:/truststore/localstore.jks");
     myProp.put("TrustStorePassword", "trustpwd");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                    "jdbc:vertica://V_vmart_node0001.example.com:5433/vmart", myProp);
            System.out.println("Connected!");
            conn.close();
        } catch (SQLTransientConnectionException connException) {
            // There was a potentially temporary network error
            // Could automatically retry a number of times here, but
            // instead just report error and exit.
            System.out.print("Network connection issue: ");
            System.out.print(connException.getMessage());
            System.out.println(" Try again later!");
            return;
        } catch (SQLInvalidAuthorizationSpecException authException) {
            // Either the username or password was wrong
            System.out.print("Could not log into database: ");
            System.out.print(authException.getMessage());
            System.out.println(" Check the login credentials and try again.");
            return;
        } catch (SQLException e) {
            // Catch-all for other exceptions
            e.printStackTrace();
        }
    }
}

118 - code-jdbc-connection-properties.md

Properties myProp = new Properties();
myProp.put("user", "ExampleUser");
myProp.put("password", "password123");
myProp.put("LoginTimeout", "35");
Connection conn;
try {
    conn = DriverManager.getConnection(
        "jdbc:vertica://VerticaHost:/ExampleDB", myProp);
} catch (SQLException e) {
    e.printStackTrace();
}

119 - code-jdbc-data-types.md

import java.sql.*;
import java.util.Properties;

public class JDBCDataTypes {
    public static void main(String[] args) {
        // If running under a Java 5 JVM, you need to load the JDBC driver
        // using Class.forName here.

        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/VMart",
                             myProp);
            Statement statement = conn.createStatement();
            // Create a table that will hold a row of different types of
            // numeric data.
            statement.executeUpdate(
                    "DROP TABLE IF EXISTS test_all_types cascade");
            statement.executeUpdate("CREATE TABLE test_all_types ("
                            + "c0 INTEGER, c1 TINYINT, c2 DECIMAL, "
                            + "c3 MONEY, c4 DOUBLE PRECISION, c5 REAL)");
            // Add a row of values to it.
            statement.executeUpdate("INSERT INTO test_all_types VALUES("
                            + "111111111111, 444, 55555555555.5555, "
                            + "77777777.77,  88888888888888888.88, "
                            + "10101010.10101010101010)");
            // Query the new table to get the row back as a result set.
            ResultSet rs = statement
                            .executeQuery("SELECT * FROM test_all_types");
            // Get the metadata about the row, including its data type.
            ResultSetMetaData md = rs.getMetaData();
            // Loop should only run once...
            while (rs.next()) {
                // Print out the data type used to define the column, followed
                // by the values retrieved using several different retrieval
                // methods.

                String[] vertTypes = new String[] {"INTEGER", "TINYINT",
                         "DECIMAL", "MONEY", "DOUBLE PRECISION", "REAL"};

                for (int x=1; x<7; x++) {
                    System.out.println("\n\nColumn " + x + " (" + vertTypes[x-1]
                            + ")");
                    System.out.println("\tgetColumnType()\t\t"
                            + md.getColumnType(x));
                    System.out.println("\tgetColumnTypeName()\t"
                            + md.getColumnTypeName(x));
                    System.out.println("\tgetShort()\t\t"
                            + rs.getShort(x));
                    System.out.println("\tgetLong()\t\t" + rs.getLong(x));
                    System.out.println("\tgetInt()\t\t" + rs.getInt(x));
                    System.out.println("\tgetByte()\t\t" + rs.getByte(x));
                }
            }
            rs.close();
            statement.executeUpdate("drop table test_all_types cascade");
            statement.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

120 - code-jdbc-find-interval-columns.md

// Determine the interval data types by examining the database
// metadata.
DatabaseMetaData dbmd = conn.getMetaData();
ResultSet dbMeta = dbmd.getColumns(null, null, "interval_demo", null);
int colcount = 0;
while (dbMeta.next()) {

    // Get the metadata type for a column.
    int javaType = dbMeta.getInt("DATA_TYPE");

    System.out.println("Column " + ++colcount + " Type name is " +
                    dbMeta.getString("TYPE_NAME"));

    if(javaType == Types.OTHER) {
      // The SQL_DATETIME_SUB column in the metadata tells you
      // specifically which subtype of interval you have.
      // The VerticaDayTimeInterval.isDayTimeInterval()
      // method tells you if that value is a day/time interval.
      //
      int intervalType = dbMeta.getInt("SQL_DATETIME_SUB");
      if(VerticaDayTimeInterval.isDayTimeInterval(intervalType)) {
           // Now you know it is one of the 10 day/time interval types.
           // When you select this column you can cast to
           // VerticaDayTimeInterval.
           // You can get more specific by checking intervalType
           // against each of the 10 constants directly, but
           // they all are represented by the same object.
           System.out.println("column " + colcount + " is a " +
                           "VerticaDayTimeInterval intervalType = "
                          + intervalType);
      } else if(VerticaYearMonthInterval.isYearMonthInterval(
                      intervalType)) {
          // Now you know it is one of the 3 year/month intervals,
          // and you can select the column and cast to
          // VerticaYearMonthInterval.
          System.out.println("column " + colcount + " is a " +
                          "VerticaYearMonthInterval intervalType = "
                          + intervalType);
      } else {
          System.out.println("Not an interval type.");
      }
    }
}

121 - code-jdbc-get-interval-values.md

            // Retrieve the interval values inserted by previous demo.
            // Query the table to get the row back as a result set.
            ResultSet rs = stmt.executeQuery("SELECT * FROM interval_demo");
            // If you do not know the types of data contained in the result set,
            // you can read its metadata to determine the type, and use
            // additional information to determine the interval type.
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                for (int x = 1; x <= md.getColumnCount(); x++) {
                    // Get data type from metadata
                    int colDataType = md.getColumnType(x);
                    // You can get the type in a string:
                    System.out.println("Column " + x + " is a "
                            + md.getColumnTypeName(x));
                    // Normally, you'd have a switch statement here to
                    // handle all sorts of column types, but this example is
                    // simplified to just handle database-specific types
                    if (colDataType == Types.OTHER) {
                        // Column contains a database-specific type. Determine
                        // what type of interval it is. Assuming it is an
                        // interval...
                        Object columnVal = rs.getObject(x);
                        if (columnVal instanceof VerticaDayTimeInterval) {
                            // We know it is a date time interval
                            VerticaDayTimeInterval interval =
                                    (VerticaDayTimeInterval) columnVal;
                            // You can use the getters to access the interval's
                            // data
                            System.out.print("Column " + x + "'s value is ");
                            System.out.print(interval.getDay() + " Days ");
                            System.out.print(interval.getHour() + " Hours ");
                            System.out.println(interval.getMinute()
                                    + " Minutes");
                        } else if (columnVal instanceof VerticaYearMonthInterval) {
                            VerticaYearMonthInterval interval =
                                    (VerticaYearMonthInterval) columnVal;
                            System.out.print("Column " + x + "'s value is ");
                            System.out.print(interval.getYear() + " Years ");
                            System.out.println(interval.getMonth() + " Months");
                        } else {
                            System.out.println("Not an interval.");
                        }
                    }
                }
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

122 - code-jdbc-insert-interval-values.md

import java.sql.*;
import java.util.Properties;
// You need to import the Vertica JDBC classes to be able to instantiate
// the interval classes.
import com.vertica.jdbc.*;

public class IntervalDemo {
    public static void main(String[] args) {
        // If running under a Java 5 JVM, you need to load the JDBC driver
        // using Class.forName here.
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                    "jdbc:vertica://VerticaHost:5433/VMart", myProp);
            // Create table for interval values
            Statement stmt = conn.createStatement();
            stmt.execute("DROP TABLE IF EXISTS interval_demo");
            stmt.executeUpdate("CREATE TABLE interval_demo("
                    + "DayInt INTERVAL DAY TO SECOND, "
                    + "MonthInt INTERVAL YEAR TO MONTH)");
            // Insert data into interval columns using
            // VerticaDayTimeInterval and VerticaYearMonthInterval
            // classes.
            PreparedStatement pstmt = conn.prepareStatement(
                    "INSERT INTO interval_demo VALUES(?,?)");
            // Create instances of the Vertica classes that represent
            // intervals.
            VerticaDayTimeInterval dayInt = new VerticaDayTimeInterval(10, 0,
                    5, 40, 0, 0, false);
            VerticaYearMonthInterval monthInt = new VerticaYearMonthInterval(
                    10, 6, false);
            // These objects can also be manipulated using setters.
            dayInt.setHour(7);
            // Add the interval values to the batch
            ((VerticaPreparedStatement) pstmt).setObject(1, dayInt);
            ((VerticaPreparedStatement) pstmt).setObject(2, monthInt);
            pstmt.addBatch();
            // Set another row from strings.
            // Set day interval in "days HH:MM:SS" format
            pstmt.setString(1, "10 10:10:10");
            // Set year to month value in "MM-YY" format
            pstmt.setString(2, "12-09");
            pstmt.addBatch();
            // Execute the batch to insert the values.
            try {
                pstmt.executeBatch();
            } catch (SQLException e) {
                System.out.println("Error message: " + e.getMessage());
            }

123 - code-jdbc-longvarchar-example.md

import java.sql.*;
import java.util.Properties;

public class LongVarcharExample {
    public static void main(String[] args) {
        try {
            Class.forName("com.vertica.jdbc.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("Could not find the JDBC driver class.");
            e.printStackTrace();
            return;
        }
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);
            // establish connection and make a table for the data.
            Statement stmt = conn.createStatement();

            // How long we want the example string to be. This is
            // larger than can fit into a traditional VARCHAR (which is limited
            // to 65000).
            int length = 100000;

            // Create a table with a LONG VARCHAR column that can store
            // the string we want to insert.
            stmt.execute("DROP TABLE IF EXISTS longtable CASCADE");
            stmt.execute("CREATE TABLE longtable (text LONG VARCHAR(" + length
                            + "))");
            // Build a long string by appending an integer to a string builder
            // until we hit the size limit. Will result in a string
            // containing 01234567890123....
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++)
            {
                sb.append(i % 10);
            }
            String value = sb.toString();

            System.out.println("String value is " + value.length() +
                            " characters long.");

            // Create the prepared statement
            PreparedStatement pstmt = conn.prepareStatement(
                            "INSERT INTO longtable (text)" +
                            " VALUES(?)");
            try {
                // Insert LONG VARCHAR value
                System.out.println("Inserting LONG VARCHAR value");
                pstmt.setString(1, value);
                pstmt.addBatch();
                pstmt.executeBatch();

                // Query the table we created to get the value back.
                ResultSet rs = null;
                rs = stmt.executeQuery("SELECT * FROM longtable");

                // Get metadata about the result set.
                ResultSetMetaData rsmd = rs.getMetaData();
                // Print the type of the first column. Should be
                // LONG VARCHAR. Also check it against the Types class, to
                // recognize it programmatically.
                System.out.println("Column #1 data type is: " +
                                rsmd.getColumnTypeName(1));
                if (rsmd.getColumnType(1) == Types.LONGVARCHAR) {
                    System.out.println("It is a LONG VARCHAR");
                } else {
                    System.out.println("It is NOT a LONG VARCHAR");
                }

                // Print out the string length of the returned value.
                while (rs.next()) {
                    // Use the same getString method to get the value that you
                    // use to get the value of a VARCHAR.
                    System.out.println("Returned string length: " +
                                    rs.getString(1).length());
                }
            } catch (SQLException e) {
                System.out.println("Error message: " + e.getMessage());
                return; // Exit if there was an error
            }
            // Cleanup
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

124 - code-jdbc-set-and-get-locale.md

import java.sql.*;
import java.util.Properties;

public class GetAndSetLocale {
    public static void main(String[] args) {

        // If running under a Java 5 JVM, you need to load the JDBC driver
        // using Class.forName here.

        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");

        // Set the locale to en_GB on the connection. After the connection
        // is established, the JDBC driver runs the statements in the
        // ConnSettings property.
        myProp.put("ConnSettings", "SET LOCALE TO en_GB");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);

            // Execute a query to get the locale. The results should
            // show "en_GB" as the locale, since it was set by the
            // ConnSettings property.
            Statement stmt = conn.createStatement();
            ResultSet rs = null;
            rs = stmt.executeQuery("SHOW LOCALE");
            System.out.print("Query reports that Locale is set to: ");
            while (rs.next()) {
                System.out.println(rs.getString(2).trim());
            }

            // Now execute a query to set locale.
            stmt.execute("SET LOCALE TO en_US");

            // Run query again to get locale.
            rs = stmt.executeQuery("SHOW LOCALE");
            System.out.print("Query now reports that Locale is set to: ");
            while (rs.next()) {
                System.out.println(rs.getString(2).trim());
            }
            // Clean up
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

125 - code-jdbc-set-transaction-isolation-level.md

import java.sql.*;
import java.util.Properties;

public class SetTransactionIsolation {
    public static void main(String[] args) {
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);
            // Get default transaction isolation
            System.out.println("Transaction Isolation Level: "
                            + conn.getTransactionIsolation());
            // Set transaction isolation to SERIALIZABLE
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            // Get the transaction isolation again
            System.out.println("Transaction Isolation Level: "
                            + conn.getTransactionIsolation());
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

126 - code-jdbc-setting-connection-properties.md

import java.sql.*;
import java.util.Properties;
import com.vertica.jdbc.*;

public class SetConnectionProperties {
    public static void main(String[] args) {
        // Note: If your application needs to run under Java 5, you need to
        // load the JDBC driver using Class.forName() here.
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser");
        myProp.put("password", "password123");
        // Set ReadOnly to true initially
        myProp.put("ReadOnly", "true");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",
                            myProp);
            // Show state of the ReadOnly property. This was set at the
            // time the connection was created.
            System.out.println("ReadOnly state: "
                            + ((VerticaConnection) conn).getProperty(
                                            "ReadOnly"));

            // Change it and show it again
            ((VerticaConnection) conn).setProperty("ReadOnly", false);
            System.out.println("ReadOnly state is now: " +
                             ((VerticaConnection) conn).getProperty(
                                             "ReadOnly"));
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

127 - code-jdbccopylocal.md

import java.sql.*;
import java.util.Properties;

public class COPYLocal {
    public static void main(String[] args) {
        // Note: If using Java 5, you must call Class.forName to load the
        // JDBC driver.
        Properties myProp = new Properties();
        myProp.put("user", "ExampleUser"); // Do not need to superuser
        myProp.put("password", "password123");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                            "jdbc:vertica://VerticaHost:5433/ExampleDB",myProp);
            // Disable AutoCommit
            conn.setAutoCommit(false);
            Statement stmt = conn.createStatement();
            // Create a table to hold data.
            stmt.execute("DROP TABLE IF EXISTS customers;");
            stmt.execute("CREATE TABLE IF NOT EXISTS customers (Last_Name char(50) "
                            + "NOT NULL, First_Name char(50),Email char(50), "
                            + "Phone_Number char(15))");

            // Use the COPY command to load data. Load directly into ROS, since
            // this load could be over 100MB. Use ENFORCELENGTH to reject
            // strings too wide for their columns.
            boolean result = stmt.execute("COPY customers FROM LOCAL "
                            + " 'C:\\Data\\customers.txt' DIRECT ENFORCELENGTH");

            // Determine if execution returned a count value, or a full result
            // set.
            if (result) {
                System.out.println("Got result set");
            } else {
                // Count will usually return the count of rows inserted.
                System.out.println("Got count");
                int rowCount = stmt.getUpdateCount();
                System.out.println("Number of accepted rows = " + rowCount);
            }

            // Commit the load before closing, since autocommit is disabled.
            conn.commit();
            conn.close();
        } catch (SQLException e) {
            System.out.print("Error: ");
            System.out.println(e.toString());
        }
    }
}

128 - code-odbc-batch-load-with-tracking.md

// Some standard headers
#include <stdio.h>
#include <stdlib.h>
// Only needed for Windows clients
// #include <windows.h>
// Standard ODBC headers
#include <sql.h>
#include <sqltypes.h>
#include <sqlext.h>
// Helper function to print SQL error messages.
template <typename HandleT>
void reportError(int handleTypeEnum, HandleT hdl)
{
    // Get the status records.
    SQLSMALLINT   i, MsgLen;
    SQLRETURN ret2;
    SQLCHAR       SqlState[6], Msg[SQL_MAX_MESSAGE_LENGTH];
    SQLINTEGER    NativeError;
    i = 1;
    printf("\n");
    while ((ret2 = SQLGetDiagRec(handleTypeEnum, hdl, i, SqlState, &NativeError,
        Msg, sizeof(Msg), &MsgLen)) != SQL_NO_DATA) {
            printf("error record %d\n", i);
            printf("sqlstate: %s\n", SqlState);
            printf("detailed msg: %s\n", Msg);
            printf("native error code: %d\n\n", NativeError);
            i++;
    }
}
int main()
{
    // Number of data rows to insert
    const int NUM_ENTRIES = 4;


    SQLRETURN ret;
    SQLHENV hdlEnv;
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate a handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated an environment handle.\n");
    }
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not set application version to ODBC3.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Set application to ODBC 3.\n");
    }
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate database handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated Database handle.\n");
    }
    // Connect to the database
    printf("Connecting to database.\n");
    const char *dsnName = "ExampleDB";
    const char* userID = "dbadmin";
    const char* passwd = "password123";
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS,(SQLCHAR*)userID,SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        reportError<SQLHDBC>(SQL_HANDLE_DBC, hdlDbc);
        exit(EXIT_FAILURE);
    } else {
        printf("Connected to database.\n");
    }
    // Set up a statement handle
    SQLHSTMT hdlStmt;
    SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);
    SQLExecDirect(hdlStmt, (SQLCHAR*)"DROP TABLE IF EXISTS customers",
        SQL_NTS);
    // Create a table into which we can store data
    printf("Creating table.\n");
    ret = SQLExecDirect(hdlStmt, (SQLCHAR*)"CREATE TABLE customers "
        "(CustID int, CustName varchar(50), Phone_Number char(15));",
        SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
        exit(EXIT_FAILURE);
    } else {
        printf("Created table.\n");
    }
    // Create the prepared statement. This will insert data into the
    // table we created above.
    printf("Creating prepared statement\n");
    ret = SQLPrepare (hdlStmt, (SQLTCHAR*)"INSERT INTO customers (CustID, "
        "CustName,  Phone_Number) VALUES(?,?,?)", SQL_NTS) ;
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
        exit(EXIT_FAILURE);
    } else {
        printf("Created prepared statement.\n");
    }
    // This is the data to be inserted into the database.
    char custNames[][50] = { "Allen, Anna", "Brown, Bill", "Chu, Cindy",
        "Dodd, Don" };
    SQLINTEGER custIDs[] = { 100, 101, 102, 103};
    char phoneNums[][15] = {"1-617-555-1234", "1-781-555-1212",
        "1-508-555-4321", "1-617-555-4444"};
    // Bind the data arrays to the parameters in the prepared SQL
    // statement
    ret = SQLBindParameter(hdlStmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
        0, 0, (SQLPOINTER)custIDs, sizeof(SQLINTEGER) , NULL);
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
        exit(EXIT_FAILURE);
    } else {
        printf("Bound CustIDs array to prepared statement\n");
    }
    // Bind CustNames
    ret = SQLBindParameter(hdlStmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
        50, 0, (SQLPOINTER)custNames, 50, NULL);
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
        exit(EXIT_FAILURE);
    } else {
        printf("Bound CustNames array to prepared statement\n");
    }
    // Bind phoneNums
    ret = SQLBindParameter(hdlStmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
        15, 0, (SQLPOINTER)phoneNums, 15, NULL);
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
        exit(EXIT_FAILURE);
    } else {
        printf("Bound phoneNums array to prepared statement\n");
    }
    // Set up a variable to receive the number of parameters processed.
    SQLULEN rowsProcessed;
    // Set a statement attribute to point to the variable
    SQLSetStmtAttr(hdlStmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &rowsProcessed, 0);
    // Set up an array to hold the result of each row insert
    SQLUSMALLINT   rowResults[ NUM_ENTRIES ];
    // Set a statement attribute to point to the array
    SQLSetStmtAttr(hdlStmt, SQL_ATTR_PARAM_STATUS_PTR, rowResults, 0);
    // Tell the ODBC driver how many rows we have in the
    // array.
    SQLSetStmtAttr(hdlStmt, SQL_ATTR_PARAMSET_SIZE, (SQLPOINTER)NUM_ENTRIES, 0);
    // Add multiple batches to the database. This just adds the same
    // batch of data over and over again for simplicity's sake.
    for (int batchLoop=1; batchLoop<=5; batchLoop++) {
        // Execute the prepared statement, loading all of the data
        // in the arrays.
        printf("Adding Batch #%d...", batchLoop);
        ret = SQLExecute(hdlStmt);
        if(!SQL_SUCCEEDED(ret)) {
            reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
            exit(EXIT_FAILURE);
        }
        // Number of rows processed is in rowsProcessed
        printf("Params processed: %d\n", rowsProcessed);
        printf("Results of inserting each row:\n");
        int i;
        for (i = 0; i<NUM_ENTRIES; i++) {
            SQLUSMALLINT result = rowResults[i];
            switch(rowResults[i]) {
                case SQL_PARAM_SUCCESS:
                case SQL_PARAM_SUCCESS_WITH_INFO:
                    printf("  Row %d inserted successsfully\n", i+1);
                    break;
                case SQL_PARAM_ERROR:
                    printf("  Row %d was not inserted due to an error.", i+1);
                    break;
                default:
                    printf("  Row %d had some issue with it: %d\n", i+1, result);
            }
        }
    }
    // Done with batches, commit the transaction
    printf("Commit Transaction\n");
    ret = SQLEndTran(SQL_HANDLE_DBC, hdlDbc, SQL_COMMIT);
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>( SQL_HANDLE_STMT, hdlStmt );
    }


    // Clean up
    printf("Free handles.\n");
    ret = SQLDisconnect( hdlDbc );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error disconnecting. Transaction still open?\n");
        exit(EXIT_FAILURE);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    exit(EXIT_SUCCESS);
}

129 - code-odbc-batch-load.md

// Some standard headers
#include <stdio.h>
#include <stdlib.h>
// Only needed for Windows clients
// #include <windows.h>
// Standard ODBC headers
#include <sql.h>
#include <sqltypes.h>
#include <sqlext.h>
int main()
{
    // Number of data rows to insert
    const int NUM_ENTRIES = 4;

    // Set up the ODBC environment
    SQLRETURN ret;
    SQLHENV hdlEnv;
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate a handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated an environment handle.\n");
    }
    // Tell ODBC that the application uses ODBC 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not set application version to ODBC3.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Set application to ODBC 3.\n");
    }
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate database handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated Database handle.\n");
    }
    // Connect to the database
    printf("Connecting to database.\n");
    const char *dsnName = "ExampleDB";
    const char* userID = "dbadmin";
    const char* passwd = "password123";
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS,(SQLCHAR*)userID,SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Connected to database.\n");
    }


    // Disable AUTOCOMMIT
    printf("Disabling autocommit.\n");
    ret = SQLSetConnectAttr(hdlDbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF,
                            SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not disable autocommit.\n");
        exit(EXIT_FAILURE);
    }

    // Set up a statement handle
    SQLHSTMT hdlStmt;
    SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);

    // Create a table to hold the data
    SQLExecDirect(hdlStmt, (SQLCHAR*)"DROP TABLE IF EXISTS customers",
        SQL_NTS);
    SQLExecDirect(hdlStmt, (SQLCHAR*)"CREATE TABLE customers "
        "(CustID int, CustName varchar(100), Phone_Number char(15));",
        SQL_NTS);

    // Create the prepared statement. This will insert data into the
    // table we created above.
    printf("Creating prepared statement\n");
    ret = SQLPrepare (hdlStmt, (SQLTCHAR*)"INSERT INTO customers (CustID, "
        "CustName,  Phone_Number) VALUES(?,?,?)", SQL_NTS) ;
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not create prepared statement\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Created prepared statement.\n");
    }
    // This is the data to be inserted into the database.
    SQLCHAR custNames[][50] = { "Allen, Anna", "Brown, Bill", "Chu, Cindy",
        "Dodd, Don" };
    SQLINTEGER custIDs[] = { 100, 101, 102, 103};
    // year, month, day, hour, minute, second, fraction
    // struct accepts fraction in billionths of a second, but Vertica supports millionths
    SQL_TIMESTAMP_STRUCT accountCreationDates[] = { 
        {1997, 4, 1, 12, 35, 29, 0},          // 1997-04-01 12:35:29
        {2002, 6, 13, 1,  0, 12, 1000},       // 2002-06-13 01:00:12.000001
        {2000, 9, 2, 2,  59, 37, 999000000},  // 2000-09-02 02:59:37.999
        {2009, 1, 25, 3,  7, 59, 999999000},  // 2009-01-25 03:07:59.999999
    };
    SQLCHAR phoneNums[][15] = {"1-617-555-1234", "1-781-555-1212",
        "1-508-555-4321", "1-617-555-4444"};
    // Bind the data arrays to the parameters in the prepared SQL
    // statement. First is the custID.
    ret = SQLBindParameter(hdlStmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
        0, 0, (SQLPOINTER)custIDs, sizeof(SQLINTEGER) , NULL);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind custID array\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Bound CustIDs array to prepared statement\n");
    }
    // Bind the customer account creation date
    // The timestamp column size is safe at 23 plus the number of fractional-second digits.
    // The maximum fractional-second precision that Vertica supports for timestamps is 6 digits.
    ret = SQLBindParameter(hdlStmt, 2, SQL_PARAM_INPUT, SQL_C_TYPE_TIMESTAMP, SQL_TIMESTAMP, 29, 6,
                          (SQLPOINTER)accountCreationDates, sizeof(SQL_TIMESTAMP_STRUCT), NULL);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind account creation dates\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Bound account creation date array to prepared statement\n");
    }
    // Bind CustNames
    ret = SQLBindParameter(hdlStmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
        50, 0, (SQLPOINTER)custNames, 50, NULL);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind custNames\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Bound CustNames array to prepared statement\n");
    }
    // Bind phoneNums
    ret = SQLBindParameter(hdlStmt, 4, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
        15, 0, (SQLPOINTER)phoneNums, 15, NULL);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind phoneNums\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Bound phoneNums array to prepared statement\n");
    }
    // Tell the ODBC driver how many rows we have in the
    // array.
    ret = SQLSetStmtAttr( hdlStmt, SQL_ATTR_PARAMSET_SIZE,
        (SQLPOINTER)NUM_ENTRIES, 0 );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind set parameter size\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Bound phoneNums array to prepared statement\n");
    }

    // Add multiple batches to the database. This just adds the same
    // batch of data five times for simplicity's sake. Each call adds
    // the 4 rows into the database.
    for (int batchLoop=1; batchLoop<=5; batchLoop++) {
        // Execute the prepared statement, loading all of the data
        // in the arrays.
        printf("Adding Batch #%d...", batchLoop);
        ret = SQLExecute(hdlStmt);
        if(!SQL_SUCCEEDED(ret)) {
           printf("not successful!\n");
        }  else {
            printf("successful.\n");
        }
    }
    // Done with batches, commit the transaction
    printf("Committing transaction\n");
    ret = SQLEndTran(SQL_HANDLE_DBC, hdlDbc, SQL_COMMIT);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not commit transaction\n");
    }  else {
        printf("Committed transaction\n");
    }

    // Clean up
    printf("Free handles.\n");
    ret = SQLDisconnect( hdlDbc );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error disconnecting. Transaction still open?\n");
        exit(EXIT_FAILURE);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    exit(EXIT_SUCCESS);
}

130 - code-odbc-connect-to-database.md

// Demonstrate connecting to Vertica using ODBC.
// Standard i/o library
#include <stdio.h>
#include <stdlib.h>
// Only needed for Windows clients
// #include <windows.h>
// SQL include files that define data types and ODBC API
// functions
#include <sql.h>
#include <sqlext.h>
#include <sqltypes.h>
int main()
{
    SQLRETURN ret;   // Stores return value from ODBC API calls
    SQLHENV hdlEnv;  // Handle for the SQL environment object
    // Allocate a SQL environment object
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate a handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated an environment handle.\n");
    }

    // Set the ODBC version we are going to use to 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
            (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not set application version to ODBC 3.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Set application version to ODBC 3.\n");
    }
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate database handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated Database handle.\n");
    }
    // Connect to the database using
    // SQL Connect
    printf("Connecting to database.\n");
    const char *dsnName = "ExampleDB";
    const char* userID = "ExampleUser";
    const char* passwd = "password123";
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS,(SQLCHAR*)userID,SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Connected to database.\n");
    }
    // We're connected. You can do real
    // work here

    // When done, free all of the handles to close them
    // in an orderly fashion.
    printf("Disconnecting and freeing handles.\n");
    ret = SQLDisconnect( hdlDbc );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error disconnecting from database. Transaction still open?\n");
        exit(EXIT_FAILURE);
    }

    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    exit(EXIT_SUCCESS);
}

131 - code-odbc-disable-auto-commit.md

// Some standard headers
#include <stdio.h>
#include <stdlib.h>
// Only needed for Windows clients
// #include <windows.h>
// Standard ODBC headers
#include <sql.h>
#include <sqltypes.h>
#include <sqlext.h>
int main()
{
    // Set up the ODBC environment
    SQLRETURN ret;
    SQLHENV hdlEnv;
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate a handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated an environment handle.\n");
    }
    // Tell ODBC that the application uses ODBC 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not set application version to ODBC3.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Set application to ODBC 3.\n");
    }
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate database handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated Database handle.\n");
    }
    // Connect to the database
    printf("Connecting to database.\n");
    const char *dsnName = "ExampleDB";
    const char* userID = "dbadmin";
    const char* passwd = "password123";
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS,(SQLCHAR*)userID,SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Connected to database.\n");
    }
    // Get the AUTOCOMMIT state
    SQLINTEGER  autoCommitState;
    SQLGetConnectAttr(hdlDbc, SQL_ATTR_AUTOCOMMIT, &autoCommitState, 0, NULL);
    printf("Autocommit is set to: %d\n", autoCommitState);


    // Disable AUTOCOMMIT
    printf("Disabling autocommit.\n");
    ret = SQLSetConnectAttr(hdlDbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF,
        SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not disable autocommit.\n");
        exit(EXIT_FAILURE);
    }

    // Get the AUTOCOMMIT state again
    SQLGetConnectAttr(hdlDbc, SQL_ATTR_AUTOCOMMIT, &autoCommitState, 0, NULL);
    printf("Autocommit is set to: %d\n", autoCommitState);

    // Set up a statement handle
    SQLHSTMT hdlStmt;
    SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);


    // Create a table to hold the data
    SQLExecDirect(hdlStmt, (SQLCHAR*)"DROP TABLE IF EXISTS customers",
        SQL_NTS);
    SQLExecDirect(hdlStmt, (SQLCHAR*)"CREATE TABLE customers "
        "(CustID int, CustName varchar(100), Phone_Number char(15));",
        SQL_NTS);


    // Insert a single row.
    ret = SQLExecDirect(hdlStmt, (SQLCHAR*)"INSERT INTO customers VALUES(500,"
        "'Smith, Sam', '123-456-789');", SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not perform single insert.\n");
    } else {
        printf("Performed single insert.\n");
    }


    // Need to commit the transaction before closing, since autocommit is
    // disabled. Otherwise SQLDisconnect returns an error.
    printf("Committing transaction.\n");
    ret =  SQLEndTran(SQL_HANDLE_DBC, hdlDbc, SQL_COMMIT);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error committing transaction.\n");
        exit(EXIT_FAILURE);
    }

    // Clean up
    printf("Free handles.\n");
    ret = SQLDisconnect(hdlDbc);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error disconnecting from database. Transaction still open?\n");
        exit(EXIT_FAILURE);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    exit(EXIT_SUCCESS);
}

132 - code-odbc-prepared-statements.md

// Some standard headers
#include <stdio.h>
#include <stdlib.h>
// Only needed for Windows clients
// #include <windows.h>
// Standard ODBC headers
#include <sql.h>
#include <sqltypes.h>
#include <sqlext.h>
// Some constants for the size of the data to be inserted.
#define CUST_NAME_LEN 50
#define PHONE_NUM_LEN 15
#define NUM_ENTRIES 4
int main()
{
    // Set up the ODBC environment
    SQLRETURN ret;
    SQLHENV hdlEnv;
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate a handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated an environment handle.\n");
    }
    // Tell ODBC that the application uses ODBC 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not set application version to ODBC3.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Set application to ODBC 3.\n");
    }
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    // Connect to the database
    printf("Connecting to database.\n");
    const char *dsnName = "ExampleDB";
    const char* userID = "dbadmin";
    const char* passwd = "password123";
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS,(SQLCHAR*)userID,SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Connected to database.\n");
    }

    // Disable AUTOCOMMIT
    printf("Disabling autocommit.\n");
    ret = SQLSetConnectAttr(hdlDbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF,
        SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not disable autocommit.\n");
        exit(EXIT_FAILURE);
    }


    // Set up a statement handle
    SQLHSTMT hdlStmt;
    SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);
    SQLExecDirect(hdlStmt, (SQLCHAR*)"DROP TABLE IF EXISTS customers",
        SQL_NTS);
    SQLExecDirect(hdlStmt, (SQLCHAR*)"CREATE TABLE customers "
        "(CustID int, CustName varchar(100), Phone_Number char(15));",
        SQL_NTS);

    // Set up a bunch of variables to be bound to the statement
    // parameters.

    // Create the prepared statement. This will insert data into the
    // table we created above.
    printf("Creating prepared statement\n");
    ret = SQLPrepare (hdlStmt, (SQLTCHAR*)"INSERT INTO customers (CustID, "
        "CustName,  Phone_Number) VALUES(?,?,?)", SQL_NTS) ;
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not create prepared statement\n");
        SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
        SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
        SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
        exit(EXIT_FAILURE);
    } else {
        printf("Created prepared statement.\n");
    }
    SQLINTEGER custID = 1234;
    SQLCHAR custName[100] = "Fein, Fredrick";
    SQLVARCHAR phoneNum[15] = "555-123-6789";
    SQLLEN strFieldLen = SQL_NTS;
    SQLLEN custIDLen = 0;
    // Bind the data arrays to the parameters in the prepared SQL
    // statement
    ret = SQLBindParameter(hdlStmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
        0, 0, &custID, 0 , &custIDLen);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind custID array\n");
        SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
        SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
        SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
        exit(EXIT_FAILURE);
    } else {
        printf("Bound custID to prepared statement\n");
    }
    // Bind custName. Capture the return value so the check below tests
    // this call rather than the previous one.
    ret = SQLBindParameter(hdlStmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
        50, 0, (SQLPOINTER)custName, 0, &strFieldLen);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind custNames\n");
        SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
        SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
        SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
        exit(EXIT_FAILURE);
    } else {
        printf("Bound custName to prepared statement\n");
    }
    // Bind phoneNum. Again, capture the return value for the error check.
    ret = SQLBindParameter(hdlStmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
        15, 0, (SQLPOINTER)phoneNum, 0, &strFieldLen);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not bind phoneNums\n");
        SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
        SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
        SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
        exit(EXIT_FAILURE);
    } else {
        printf("Bound phoneNum to prepared statement\n");
    }
    // Execute the prepared statement.
    printf("Running prepared statement...");
    ret = SQLExecute(hdlStmt);
    if(!SQL_SUCCEEDED(ret)) {
        printf("not successful!\n");
    }  else {
        printf("successful.\n");
    }

    // Done with batches, commit the transaction
    printf("Committing transaction\n");
    ret = SQLEndTran(SQL_HANDLE_DBC, hdlDbc, SQL_COMMIT);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not commit transaction\n");
    }  else {
        printf("Committed transaction\n");
    }

    // Clean up
    printf("Free handles.\n");
    ret = SQLDisconnect( hdlDbc );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error disconnecting. Transaction still open?\n");
        exit(EXIT_FAILURE);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    exit(EXIT_SUCCESS);
}

133 - code-odbc-set-locale.md

// Standard i/o library
#include <stdio.h>
#include <stdlib.h>
// Only needed for Windows clients
// #include <windows.h>
// SQL include files that define data types and ODBC API
// functions
#include <sql.h>
#include <sqlext.h>
#include <sqltypes.h>
// Vertica-specific definitions. This include file is located as
// /opt/vertica/include on database hosts.
#include <verticaodbc.h>
int main()
{
    SQLRETURN ret;   // Stores return value from ODBC API calls
    SQLHENV hdlEnv;  // Handle for the SQL environment object
    // Allocate a SQL environment object
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate a handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated an environment handle.\n");
    }
    // Set the ODBC version we are going to use to 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not set application version to ODBC 3.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Set application version to ODBC 3.\n");
    }
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not allocate database handle.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Allocated Database handle.\n");
    }
    // Connect to the database using SQLDriverConnect
    printf("Connecting to database.\n");
    // Set the locale to English in Great Britain.
    const char *connStr = "DSN=ExampleDB;locale=en_GB;"
        "UID=dbadmin;PWD=password123";
    ret = SQLDriverConnect(hdlDbc, NULL, (SQLCHAR*)connStr, SQL_NTS,
               NULL, 0, NULL, SQL_DRIVER_NOPROMPT );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        exit(EXIT_FAILURE);
    } else {
        printf("Connected to database.\n");
    }
    // Get the Locale
    char locale[256];
    SQLGetConnectAttr(hdlDbc, SQL_ATTR_VERTICA_LOCALE, locale, sizeof(locale),
        0);
    printf("Locale is set to: %s\n", locale);
    // Set the locale to a new value
    const char* newLocale = "en_GB";
    SQLSetConnectAttr(hdlDbc, SQL_ATTR_VERTICA_LOCALE, (SQLCHAR*)newLocale,
        SQL_NTS);

    // Get the Locale again
    SQLGetConnectAttr(hdlDbc, SQL_ATTR_VERTICA_LOCALE, locale, sizeof(locale),
        0);
    printf("Locale is now set to: %s\n", locale);

    // Set the application's wide-character encoding to UTF-16
    SQLSetConnectAttr(hdlDbc, SQL_ATTR_APP_WCHAR_TYPE, (void *)SQL_DD_CP_UTF16,
        SQL_IS_INTEGER);

    // When done, free all of the handles to close them
    // in an orderly fashion.
    printf("Disconnecting and freeing handles.\n");
    ret = SQLDisconnect( hdlDbc );
    if(!SQL_SUCCEEDED(ret)) {
        printf("Error disconnecting from database. Transaction still open?\n");
        exit(EXIT_FAILURE);
    }
    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    exit(EXIT_SUCCESS);
}

134 - code-odbc-storing-and-retrieving-geometry-data-types.md

// Compile on Linux using:
//  g++ -g -I/opt/vertica/include -L/opt/vertica/lib64 -lodbc -o SpatialData SpatialData.cpp
// Some standard headers
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <sstream>
// Only needed for Windows clients
// #include <windows.h>
// Standard ODBC headers
#include <sql.h>
#include <sqltypes.h>
#include <sqlext.h>
// Helper function to print SQL error messages.
template <typename HandleT>
void reportError(int handleTypeEnum, HandleT hdl)
{
    // Get the status records.
    SQLSMALLINT   i, MsgLen;
    SQLRETURN     ret2;
    SQLCHAR       SqlState[6], Msg[SQL_MAX_MESSAGE_LENGTH];
    SQLINTEGER    NativeError;
    i = 1;
    printf("\n");
    while ((ret2 = SQLGetDiagRec(handleTypeEnum, hdl, i, SqlState, &NativeError,
                                Msg, sizeof(Msg), &MsgLen)) != SQL_NO_DATA) {
        printf("error record %d\n", i);
        printf("sqlstate: %s\n", SqlState);
        printf("detailed msg: %s\n", Msg);
        printf("native error code: %d\n\n", NativeError);
        i++;
    }
    exit(EXIT_FAILURE); // bad form... but Ok for this demo
}
int main()
{
    // Set up the ODBC environment
    SQLRETURN ret;
    SQLHENV hdlEnv;
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    assert(SQL_SUCCEEDED(ret));
    // Tell ODBC that the application uses ODBC 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    assert(SQL_SUCCEEDED(ret));
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    assert(SQL_SUCCEEDED(ret));
    // Connect to the database
    printf("Connecting to database.\n");
    const char *dsnName = "ExampleDB";
    const char* userID = "dbadmin";
    const char* passwd = "password123";
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS,(SQLCHAR*)userID,SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        printf("Could not connect to database.\n");
        reportError<SQLHDBC>(SQL_HANDLE_DBC, hdlDbc);

    } else {
        printf("Connected to database.\n");
    }

    // Disable AUTOCOMMIT
    ret = SQLSetConnectAttr(hdlDbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF,
        SQL_NTS);

    // Set up a statement handle
    SQLHSTMT hdlStmt;
    SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);

    // Drop any previously defined table.
    ret = SQLExecDirect(hdlStmt, (SQLCHAR*)"DROP TABLE IF EXISTS polygons",
        SQL_NTS);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}

    // Run query to create a table to hold a geometry.
    ret = SQLExecDirect(hdlStmt,
        (SQLCHAR*)"CREATE TABLE polygons(id INTEGER PRIMARY KEY, poly GEOMETRY);",
        SQL_NTS);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}

    // Create the prepared statement. This will insert data into the
    // table we created above. It uses the ST_GeomFromText function to convert the
    // string-formatted polygon definition to a GEOMETRY data type.
    printf("Creating prepared statement\n");
    ret = SQLPrepare (hdlStmt,
        (SQLTCHAR*)"INSERT INTO polygons(id, poly) VALUES(?, ST_GeomFromText(?))",
        SQL_NTS) ;
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}

    SQLINTEGER id = 0;

    // Polygon definition as a string.
    char polygon[] = "polygon((1 1, 1 2, 2 2, 2 1, 1 1))";
    // Bind variables to the parameters in the prepared SQL statement
    ret = SQLBindParameter(hdlStmt, 1, SQL_PARAM_INPUT, SQL_C_LONG, SQL_INTEGER,
        0, 0, &id, 0 , NULL);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT,hdlStmt);}
    // Bind the polygon string to the geometry column. Capture the return
    // value so the check below tests this call.
    ret = SQLBindParameter(hdlStmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_LONGVARCHAR,
        strlen(polygon), 0, (SQLPOINTER)polygon, strlen(polygon), NULL);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT,hdlStmt);}
    // Execute the insert
    ret = SQLExecute(hdlStmt);
    if(!SQL_SUCCEEDED(ret)) {
       reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);
    }  else {
        printf("Executed batch.\n");
    }

    // Commit the transaction
    printf("Committing transaction\n");
    ret = SQLEndTran(SQL_HANDLE_DBC, hdlDbc, SQL_COMMIT);
    if(!SQL_SUCCEEDED(ret)) {
        reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);
    }  else {
        printf("Committed transaction\n");
    }
    // Now, create a query to retrieve the geometry.
    ret = SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
    printf("Getting data from table.\n");
    // Execute a query to get the id, raw geometry data, and
    // the geometry data as a string. Uses the ST_AsText SQL function to
    // format raw data back into a string polygon definition
    ret = SQLExecDirect(hdlStmt,
        (SQLCHAR*)"select id,ST_AsBinary(poly),ST_AsText(poly) from polygons ORDER BY id;",
        SQL_NTS);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT,hdlStmt);}

    SQLINTEGER idval;
    // 10MB buffer to hold the raw data from the geometry (10Mb is the maximum
    // length of a GEOMETRY)
    SQLCHAR* polygonval = (SQLCHAR*)malloc(10485760);
    SQLLEN polygonlen, polygonstrlen;
    // Buffer to hold a LONGVARCHAR that can result from converting the
    // geometry to a string.
    SQLTCHAR* polygonstr = (SQLTCHAR*)malloc(33554432);

    // Get the results of the query and print each row.
    do {
        ret = SQLFetch(hdlStmt);
        if (SQL_SUCCEEDED(ret)) {
            // ID column
            ret = SQLGetData(hdlStmt, 1, SQL_C_LONG, &idval, 0, NULL);
            if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
            printf("id: %d\n",idval);
            // The WKB format geometry data
            ret = SQLGetData(hdlStmt, 2, SQL_C_BINARY, polygonval, 10485760,
                &polygonlen);
            if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
            printf("Polygon in WKB format: ");
            // Print each byte of polygonval buffer in hex format.
            for (int z = 0; z < polygonlen; z++)
                printf("%02x ",polygonval[z]);
            printf("\n");
            // Geometry data formatted as a string.
            ret = SQLGetData(hdlStmt, 3, SQL_C_TCHAR, polygonstr, 33554432, &polygonstrlen);
            if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
            printf("Polygon in WKT format: %s\n", polygonstr);
        }
    } while(SQL_SUCCEEDED(ret));


    free(polygonval);
    free(polygonstr);
    // Clean up
    printf("Free handles.\n");
    ret = SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
    ret = SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
    ret = SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    if (!SQL_SUCCEEDED(ret)) {reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);}
    exit(EXIT_SUCCESS);
}

The output of running the above example is:

Connecting to database.
Connected to database.
Creating prepared statement
Executed batch.
Committing transaction
Committed transaction
Getting data from table.
id: 0
Polygon in WKB format: 01 03 00 00 00 01 00 00 00 05 00 00 00 00 00 00 00 00 00 f0 3f 00 00 00 00 00 00 f0 3f 00 00 00 00 00 00 f0 3f 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00 40 00 00 00 00 00 00 f0 3f 00 00 00 00 00 00 f0 3f 00 00 00 00 00 00 f0 3f
Polygon in WKT format: POLYGON ((1 1, 1 2, 2 2, 2 1, 1 1))
Free handles.

135 - code-perl-connect-without-dsn.md

#!/usr/bin/perl
use strict;
use DBI;
my $server='VerticaHost';
my $port = '5433';
my $database = 'VMart';
my $user = 'ExampleUser';
my $password = 'password123';
# Connect without a DSN by supplying all of the information for the connection.
# The DRIVER value on UNIX platforms depends on the entry in the odbcinst.ini
# file.
my $dbh = DBI->connect("dbi:ODBC:DRIVER={Vertica};Server=$server;" .
        "Port=$port;Database=$database;UID=$user;PWD=$password")
        or die "Could not connect to database: " . DBI::errstr;
print "Connected!\n";
$dbh->disconnect();

136 - code-perl-copylocal.md

#!/usr/bin/perl
use strict;
use DBI;
# Filesystem path handling module
use File::Spec;
# Create a hash reference that holds a hash of parameters for the
# connection.
 my $attr = {AutoCommit => 0}; # Turn off AutoCommit
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
    $attr) or die "Failed to connect: $DBI::errstr";
print "Connected!\n";
# Drop any existing table.
$dbh->do("DROP TABLE IF EXISTS Customers CASCADE;");
# Create a table to hold data.
$dbh->do("CREATE TABLE Customers( \
               ID  INT, \
               FirstName  VARCHAR(100),\
               LastName   VARCHAR(100),\
               Email      VARCHAR(100),\
               Birthday   DATE)");
# Find the absolute path to the data file located in the current working
# directory and named data.txt
my $currDir = File::Spec->rel2abs(File::Spec->curdir());
my $dataFile = File::Spec->catfile($currDir, 'data.txt');
print "Loading file $dataFile\n";
# Load local file using copy local. Return value is the # of rows affected
# which equates to the number of rows inserted.
my $rows = $dbh->do("COPY Customers FROM LOCAL '$dataFile' DIRECT")
     or die $dbh->errstr;
print "Copied $rows rows into database.\n";
$dbh->commit();
# Prepare a query to get the first 15 rows of the results
my $sth = $dbh->prepare("SELECT * FROM Customers WHERE ID < 15 \
                                 ORDER BY ID");

$sth->execute() or die "Error querying table: " . $dbh->errstr;
my @row; # Pre-declare variable to hold result row used in format statement.
# Use Perl formats to pretty print the output. Declare the heading for the
# form.
format STDOUT_TOP =
ID  First           Last          EMail                         Birthday
==  =====           ====          =====                         ========
.
# The Perl write statement will output a formatted line with values from the
# @row array. See http://perldoc.perl.org/perlform.html for details.
format STDOUT =
@>  @<<<<<<<<<<<<<  @<<<<<<<<<<<  @<<<<<<<<<<<<<<<<<<<<<<<<<<<  @<<<<<<<<<
@row
.
# Loop through result rows while we have them
while (@row = $sth->fetchrow_array()) {
         write; # Format command does the work of extracting the columns from
                # the @row array and writing them out to STDOUT.
}
# Call commit to prevent Perl from complaining about uncommitted transactions
# when disconnecting
$dbh->commit();
$dbh->disconnect();

137 - code-perl-data-type-conversion.md

#!/usr/bin/perl
use strict;
use DBI;
# Create a hash reference that holds a hash of parameters for the
# connection.
 my $attr = {AutoCommit => 0, # Turn off autocommit
             PrintError => 0   # Turn off automatic error printing; errors are handled manually
             };
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
    $attr);
if (defined $DBI::err) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
print "Connection AutoCommit state is: " . $dbh->{AutoCommit} . "\n";
# Create table to hold inserted data
$dbh->do("DROP TABLE IF EXISTS TEST CASCADE;");
$dbh->do("CREATE TABLE TEST( \
               C_ID  INT, \
               C_FP  FLOAT,\
               C_VARCHAR VARCHAR(100),\
               C_DATE DATE, C_TIME TIME,\
               C_TS TIMESTAMP,\
               C_BOOL BOOL)");
# Populate an array of arrays with values.
my @data = (
                # Start with matching data types
                [1,1.111,'Matching datatypes','2001-01-01','01:01:01'
                    ,'2001-01-01 01:01:01','t'],
                # Force floats -> int and int -> float.
                [2.0,2,"Ints <-> floats",'2002-02-02','02:02:02'
                    ,'2002-02-02 02:02:02',1],
                # Float -> int *only* works when there is no loss of precision.
                # this row will fail to insert:
                [3.1,3,"float -> int with trunc?",'2003-03-03','03:03:03'
                    ,'2003-03-03 03:03:03',1],
                # String values are converted into numbers
                ["4","4.4","Strings -> numbers", '2004-04-04','04:04:04',
                    ,'2004-04-04 04:04:04',0],
                # String -> numbers only works if the entire string can be
                # converted into a number
                ["5 and a half","5.5","Strings -> numbers", '2005-05-05',
                    '05:05:05', ,'2005-05-05 05:05:05',0],
                # Number are converted into string values automatically,
                # assuming they fit into the column width.
                [6,6.6,3.14159, '2006-06-06','06:06:06',
                    ,'2006-06-06 06:06:06',0],
                # There are some variations in the accepted date strings
                [7,7.7,'Date/time formats', '07/07/2007','07:07:07',
                    ,'07-07-2007 07:07:07',1],
             );
# Create a prepared statement to use parameters for inserting values.
my $sth = $dbh->prepare_cached("INSERT into test values(?,?,?,?,?,?,?)");
my $rowcount = 0; # Count # of rows
# Loop through the arrays to insert values
foreach my $tuple (@data) {
    $rowcount++;
    # Insert the row
    my $retval = $sth->execute(@$tuple);

    # See if the row was successfully inserted.
    if ($retval == 1) {
        # Value of 1 means the row was inserted (1 row was affected by insert)
        print "Row $rowcount successfully inserted\n";
    } else {
        print "Inserting row $rowcount failed with error " .
                $sth->state . " " . $sth->errstr . "\n";
    }
}
# Commit the data
$dbh->commit();
# Prepare a query to get the content of the table
$sth = $dbh->prepare("SELECT * FROM TEST ORDER BY C_ID ASC");
$sth->execute() or die "Error: " . $dbh->errstr;
my @row; # Need to pre-declare to use in the format statement.
# Use Perl formats to pretty print the output.
format STDOUT_TOP =
Int  Float          VarChar        Date      Time      Timestamp     Bool
===  =====  ==================  ========== ======== ================ ====
.
format STDOUT =
@>>  @<<<<  @<<<<<<<<<<<<<<<<<  @<<<<<<<<< @<<<<<<< @<<<<<<<<<<<<<<< @<<<<
@row
.
# Loop through result rows while we have them
while (@row = $sth->fetchrow_array()) {
         write; # Format command does the work of extracting the columns from
                 # the array.
}
# Commit to stop Perl complaining about in-progress transactions.
$dbh->commit();
$dbh->disconnect();

138 - code-perl-execute-ddl-statement.md

#!/usr/bin/perl
use strict;
use DBI;
# Disable autocommit
 my $attr = {AutoCommit => 0};
# Open a connection using a DSN.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
    $attr);
unless (defined $dbh) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
# You can use the do function to perform DDL commands.
# Drop any existing table.
$dbh->do("DROP TABLE IF EXISTS TEST CASCADE;");
# Create a table to hold data.
$dbh->do("CREATE TABLE TEST( \
               C_ID  INT, \
               C_FP  FLOAT,\
               C_VARCHAR VARCHAR(100),\
               C_DATE DATE, C_TIME TIME,\
               C_TS TIMESTAMP,\
               C_BOOL BOOL)");
# Commit changes and exit.
$dbh->commit();
$dbh->disconnect();

139 - code-perl-list-data-sources.md

#!/usr/bin/perl
use strict;
# Attempt to load the DBI module in an eval using require. Prevents
# script from erroring out if DBI is not installed.
eval
{
    require DBI;
    DBI->import();
};
if ($@) {
    # The eval failed, so DBI must not be installed
    print "DBI module is not installed\n";
} else {
    # Eval was successful, so DBI is installed
    print "DBI Module is installed\n";
    # List the drivers that DBI knows about.
    my @drivers = DBI->available_drivers;
    print "Available Drivers: \n";
    foreach my $driver (@drivers) {
        print "\t$driver\n";
    }
    # See if DBD::ODBC is installed by searching driver array.
    if (grep {/ODBC/i} @drivers) {
        print "\nDBD::ODBC is installed.\n";
        # List the ODBC data sources (DSNs) defined on the system
        print "Defined ODBC Data Sources:\n";
        my @dsns = DBI->data_sources('ODBC');
        foreach my $dsn (@dsns) {
            print "\t$dsn\n";
        }
    } else {
        print "DBD::ODBC is not installed\n";
    }
}

140 - code-perl-query-with-bound-variables.md

#!/usr/bin/perl
use strict;
use DBI;
my $attr = {RaiseError => 1 }; # Make SQL errors fatal to the Perl script.
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN32","ExampleUser","password123",
                        $attr);
# Prepare a query to get the content of the table
my $sth = $dbh->prepare("SELECT * FROM TEST ORDER BY C_ID ASC");
$sth->execute();
# Create a set of variables to bind to the column values.
my ($C_ID, $C_FP, $C_VARCHAR, $C_DATE, $C_TIME, $C_TS, $C_BOOL);
# Bind the variable references to the columns in the result set.
$sth->bind_columns(\$C_ID, \$C_FP, \$C_VARCHAR, \$C_DATE, \$C_TIME,
                    \$C_TS, \$C_BOOL);

# Now, calling fetch() to get a row of data updates the values of the bound
# variables. Continue calling fetch until it returns undefined.
while ($sth->fetch()) {
    # Note, you should always check that values are defined before using them,
    # since NULL values are translated into Perl as undefined. For this
    # example, just check the VARCHAR column for undefined values.
    if (!defined $C_VARCHAR) {
        $C_VARCHAR = "NULL";
    }
    # Just print values separated by tabs.
    print "$C_ID\t$C_FP\t$C_VARCHAR\t$C_DATE\t$C_TIME\t$C_TS\t$C_BOOL\n";
}
$dbh->disconnect();

141 - code-perl-set-connection-attributes.md

#!/usr/bin/perl
use strict;
use DBI;
# Create a hash that holds attributes for the connection
 my $attr = {
                RaiseError => 1, # Make database errors fatal to script
                AutoCommit => 0, # Prevent statements from committing
                                 # their transactions.
            };
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
    $attr);

unless (defined $dbh) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
print "Connected!\n";
# The database handle lets you access the connection attributes directly:
print "The AutoCommit attribute is: " . $dbh->{AutoCommit} . "\n";
print "The RaiseError attribute is: " . $dbh->{RaiseError} . "\n";
# And you can change values, too...
$dbh->{AutoCommit} = 1;
print "The AutoCommit attribute is now: " . $dbh->{AutoCommit} . "\n";
$dbh->disconnect();

142 - code-perl-set-locale-with-selectrow-array.md

#!/usr/bin/perl
use strict;
use DBI;
my $attr = {RaiseError => 1 }; # Make SQL errors fatal to the Perl script.
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
                        $attr);
# Demonstrate setting/getting locale.
# Use selectrow_array to combine preparing a statement, executing it, and
# getting an array as a result.
my @localerv = $dbh->selectrow_array("SHOW LOCALE;");
# The locale name is the 2nd column (array index 1) in the result set.
print "Locale: $localerv[1]\n";
# Use do() to execute a SQL statement to set the locale.
$dbh->do("SET LOCALE TO en_GB");
# Get the locale again.
@localerv = $dbh->selectrow_array("SHOW LOCALE;");
print "Locale is now: $localerv[1]\n";
$dbh->disconnect();

143 - code-perl-set-odbc-connection-parameters.md

#!/usr/bin/perl -w
use strict;
use DBI;
# Instead of just using the DSN name, use name and value pairs.
my $dbh = DBI->connect("dbi:ODBC:DSN=VerticaDSN;Locale=en_GB@collation=binary","ExampleUser","password123");
unless (defined $dbh) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
print "Connected!\n";
$dbh->disconnect();

144 - code-perl-simple-batch-insert.md

#!/usr/bin/perl
use strict;
use DBI;
# Create a hash reference that holds a hash of parameters for the
# connection.
 my $attr = {AutoCommit => 0, # Turn off autocommit
             PrintError => 0   # Turn off automatic error printing.
                               # This is handled manually.
             };
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
    $attr);
if (defined $DBI::err) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
print "Connection AutoCommit state is: " . $dbh->{AutoCommit} . "\n";
# Create table to hold inserted data
$dbh->do("DROP TABLE IF EXISTS TEST CASCADE;") or die "Could not drop table";
$dbh->do("CREATE TABLE TEST( \
               C_ID  INT, \
               C_FP  FLOAT,\
               C_VARCHAR VARCHAR(100),\
               C_DATE DATE, C_TIME TIME,\
               C_TS TIMESTAMP,\
               C_BOOL BOOL)") or die "Could not create table";
# Populate an array of arrays with values. One of these rows contains
# data that will not be successfully inserted. Another contains an
# undef value, which gets inserted into the database as a NULL.
my @data = (
                [1,1.111,'Hello World!','2001-01-01','01:01:01'
                    ,'2001-01-01 01:01:01','t'],
                [2,2.22222,'How are you?','2002-02-02','02:02:02'
                    ,'2002-02-02 02:02:02','f'],
                ['bad value',2.22222,'How are you?','2002-02-02','02:02:02'
                    ,'2002-02-02 02:02:02','f'],
                [4,4.22222,undef,'2002-02-02','02:02:02'
                    ,'2002-02-02 02:02:02','f'],
             );
# Create a prepared statement to use parameters for inserting values.
my $sth = $dbh->prepare_cached("INSERT into test values(?,?,?,?,?,?,?)");
my $rowcount = 0; # Count # of rows
# Loop through the arrays to insert values
foreach my $tuple (@data) {
    $rowcount++;
    # Insert the row
    my $retval = $sth->execute(@$tuple);

    # See if the row was successfully inserted.
    if ($retval == 1) {
        # Value of 1 means the row was inserted (1 row was affected by insert)
        print "Row $rowcount successfully inserted\n";
    } else {
        print "Inserting row $rowcount failed";
        # Error message is not set on some platforms/versions of DBI. Check to
        # ensure a message exists to avoid getting an uninitialized var warning.
        if ($sth->err()) {
                print ": " . $sth->errstr();
        }
        print "\n";
    }
}
# Commit changes. With AutoCommit off, you need to use commit for batched
# data to actually be committed into the database. If your Perl script exits
# without committing its data, Vertica rolls back the transaction and the
# data is not committed.
$dbh->commit();
$dbh->disconnect();

145 - code-perl-simple-connect-with-dsn.md

#!/usr/bin/perl -w
use strict;
use DBI;
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123");
unless (defined $dbh) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
print "Connected!\n";
$dbh->disconnect();

146 - code-perl-unicode-example.md

#!/usr/bin/perl
use strict;
use DBI;
# Open a connection using a DSN.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123");
unless (defined $dbh) {
    # Connection failed.
    die "Failed to connect: $DBI::errstr";
}
# Output to a file. Displaying Unicode characters to a console or terminal
# window has many problems. This outputs a UTF-8 text file that can
# be handled by many Unicode-aware text editors:
open OUTFILE, '>:utf8', "unicodeout.txt";
# See if the DBD::ODBC driver was compiled with Unicode support. If this returns
# 1, your Perl script will get strings from the driver with the UTF-8
# flag set on them, ensuring that Perl handles them correctly.
print OUTFILE "Was DBD::ODBC compiled with Unicode support? " .
    $dbh->{odbc_has_unicode} . "\n";

# Create a table to hold VARCHARs
$dbh->do("DROP TABLE IF EXISTS TEST CASCADE;");

# Create a table to hold data. Remember that the width of the VARCHAR column
# is the number of bytes set aside to store strings, which often does not equal
# the number of characters it can hold when it comes to Unicode!
$dbh->do("CREATE TABLE test( C_VARCHAR VARCHAR(100) )");
print OUTFILE "Inserting data...\n";
# Use Do to perform simple inserts
$dbh->do("INSERT INTO test VALUES('Hello')");
# This string contains several non-latin accented characters and symbols, encoded
# with Unicode escape notation. They are converted by Perl into UTF-8 characters
$dbh->do("INSERT INTO test VALUES('My favorite band is " .
    "\N{U+00DC}ml\N{U+00E4}\N{U+00FC}t \N{U+00D6}v\N{U+00EB}rk\N{U+00EF}ll" .
    " \N{U+263A}')");
# Some Chinese (Simplified) characters. This again uses escape sequence
# that Perl translates into UTF-8 characters.
$dbh->do("INSERT INTO test VALUES('\x{4F60}\x{597D}')");
print OUTFILE "Getting data...\n";
# Prepare a query to get the content of the table
my $sth = $dbh->prepare_cached("SELECT * FROM test");
# Execute the query by calling execute on the statement handle
$sth->execute();
# Loop through result rows while we have them
while (my @row = $sth->fetchrow_array()) {
    # Loop through the column values
    foreach my $column (@row) {
        print OUTFILE "$column\t";
    }
    print OUTFILE "\n";
}
close OUTFILE;
$dbh->disconnect();

147 - code-retrieve-data-through-odbc.md

// Demonstrate running a query and getting results by querying the tables
// system table for a list of all tables in the current schema.
// Some standard headers
#include <stdlib.h>
#include <sstream>
#include <iostream>
#include <assert.h>
// Standard ODBC headers
#include <sql.h>
#include <sqltypes.h>
#include <sqlext.h>

// Use std namespace to make output easier
using namespace std;
// Helper function to print SQL error messages.
template <typename HandleT>
void reportError(int handleTypeEnum, HandleT hdl)
{
    // Get the status records.
    SQLSMALLINT   i, MsgLen;
    SQLRETURN ret2;
    SQLCHAR       SqlState[6], Msg[SQL_MAX_MESSAGE_LENGTH];
    SQLINTEGER    NativeError;
    i = 1;
    cout << endl;
    while ((ret2 = SQLGetDiagRec(handleTypeEnum, hdl, i, SqlState, &NativeError,
        Msg, sizeof(Msg), &MsgLen)) != SQL_NO_DATA) {
        cout << "error record #" << i++ << endl;
        cout << "sqlstate: " << SqlState << endl;
        cout << "detailed msg: " << Msg << endl;
        cout << "native error code: " << NativeError << endl;
    }
}

typedef struct {
    SQLHENV hdlEnv;
    SQLHDBC hdlDbc;
} DBConnection;

void connect(DBConnection *pConnInfo)
{
    // Set up the ODBC environment
    SQLRETURN ret;
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &pConnInfo->hdlEnv);
    assert(SQL_SUCCEEDED(ret));
    // Tell ODBC that the application uses ODBC 3.
    ret = SQLSetEnvAttr(pConnInfo->hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER)SQL_OV_ODBC3, SQL_IS_UINTEGER);
    assert(SQL_SUCCEEDED(ret));

    // Allocate a database handle.
    ret = SQLAllocHandle(SQL_HANDLE_DBC, pConnInfo->hdlEnv, &pConnInfo->hdlDbc);
    assert(SQL_SUCCEEDED(ret));
    // Connect to the database
    cout << "Connecting to database." << endl;
    const char* dsnName = "ExampleDB";
    const char* userID = "dbadmin";
    const char* passwd = "password123";
    ret = SQLConnect(pConnInfo->hdlDbc, (SQLCHAR*)dsnName,
        SQL_NTS, (SQLCHAR*)userID, SQL_NTS,
        (SQLCHAR*)passwd, SQL_NTS);
    if (!SQL_SUCCEEDED(ret)) {
        cout << "Could not connect to database" << endl;
        reportError<SQLHDBC>(SQL_HANDLE_DBC, pConnInfo->hdlDbc);
        exit(EXIT_FAILURE);
    }
    else {
        cout << "Connected to database." << endl;
    }
}

void disconnect(DBConnection *pConnInfo)
{
    SQLRETURN ret;
    // Clean up by shutting down the connection
    cout << "Free handles." << endl;
    ret = SQLDisconnect(pConnInfo->hdlDbc);
    if (!SQL_SUCCEEDED(ret)) {
        cout << "Error disconnecting. Transaction still open?" << endl;
        exit(EXIT_FAILURE);
    }
    SQLFreeHandle(SQL_HANDLE_DBC, pConnInfo->hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, pConnInfo->hdlEnv);
}

void executeQuery(SQLHDBC hdlDbc, SQLCHAR* pQuery)
{
    SQLRETURN ret;
    // Set up a statement handle
    SQLHSTMT hdlStmt;
    ret = SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);
    assert(SQL_SUCCEEDED(ret));

    // Execute a query to get the names and IDs of all tables in the schema
    // search path (usually public).
    ret = SQLExecDirect(hdlStmt, pQuery, SQL_NTS);

    if (!SQL_SUCCEEDED(ret)) {
        // Report the error and go no further if the statement failed.
        cout << "Error executing statement." << endl;
        reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);
        exit(EXIT_FAILURE);
    }
    else {
        // Query succeeded, so bind two variables to the two columns in the
        // result set.
        cout << "Fetching results..." << endl;
        SQLBIGINT table_id;       // Holds the ID of the table.
        SQLTCHAR table_name[256]; // buffer to hold name of table
        ret = SQLBindCol(hdlStmt, 1, SQL_C_SBIGINT, (SQLPOINTER)&table_id,
            sizeof(table_id), NULL);
        ret = SQLBindCol(hdlStmt, 2, SQL_C_TCHAR, (SQLPOINTER)table_name,
            sizeof(table_name), NULL);

        // Loop through the results,
        while (SQL_SUCCEEDED(ret = SQLFetchScroll(hdlStmt, SQL_FETCH_NEXT, 1))) {
            // Print the bound variables, which now contain the values from the
            // fetched row.
            cout << table_id << " | " << table_name << endl;
        }


        // See if loop exited for reasons other than running out of data
        if (ret != SQL_NO_DATA) {
            // Exited for a reason other than no more data... report the error.
            reportError<SQLHDBC>(SQL_HANDLE_STMT, hdlStmt);
        }
    }
    SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
}

int main()
{
    DBConnection conn;

    connect(&conn);
    executeQuery(conn.hdlDbc,
        (SQLCHAR*)"SELECT table_id, table_name FROM tables ORDER BY table_name");
    executeQuery(conn.hdlDbc,
        (SQLCHAR*)"SELECT table_id, table_name FROM tables ORDER BY table_id");
    disconnect(&conn);
    exit(EXIT_SUCCESS);
}

148 - code-simple-query-database.md

#!/usr/bin/perl
use strict;
use DBI;
my $attr = {RaiseError => 1 }; # Make errors fatal to the Perl script.
# Open a connection using a DSN. Supply the username and password.
my $dbh = DBI->connect("dbi:ODBC:VerticaDSN","ExampleUser","password123",
                        $attr);
# Prepare a query to get the content of the table
my $sth = $dbh->prepare("SELECT * FROM TEST ORDER BY C_ID ASC");
# Execute the query by calling execute on the statement handle
$sth->execute();
# Loop through result rows while we have them, getting each row as an array
while (my @row = $sth->fetchrow_array()) {
    # The @row array contains the column values for this row of data
    # Loop through the column values
    foreach my $column (@row) {
        if (!defined $column) {
            # NULLs are signaled by undefs. Set to NULL for clarity
            $column = "NULL";
        }
        print "$column\t"; # Output the column separated by a tab
    }
    print "\n";
}
$dbh->disconnect();

149 - collection-comparison.md

All collections support equality (=), inequality (<>), and null-safe equality (<=>). 1D collections also support comparison operators (<, <=, >, >=) between collections of the same type (arrays or sets). Comparisons follow these rules:

  • A null collection is ordered last.

  • Non-null collections are compared element by element, using the ordering rules of the element's data type. The relative order of the first pair of non-equal elements determines the order of the two collections.

  • If all elements in both collections are equal up to the length of the shorter collection, the shorter collection is ordered before the longer one.

  • If all elements in both collections are equal and the collections are of equal length, the collections are equal.
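
For illustration, a minimal sketch of these rules in a vsql session (the array literals are purely illustrative; output is omitted):

=> SELECT ARRAY[1,2,3] < ARRAY[1,3,0] AS first_unequal_pair_decides,
          ARRAY[1,2]   < ARRAY[1,2,3] AS shorter_prefix_sorts_first,
          ARRAY[1,2,3] = ARRAY[1,2,3] AS equal_length_and_elements;

Per the rules above, all three expressions return true: the second element pair (2 < 3) decides the first comparison, the shorter collection sorts before the longer one in the second, and equal length with equal elements makes the third an equality.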

150 - collection-definition.md

collection (array or set)

151 - collection-group-by.md

=> SELECT department, grants, SUM(apply_sum(grant_values))
   FROM employees
   GROUP BY grants, department;
 department |          grants          |  SUM
------------+--------------------------+--------
 Physics    | ["US-7376","DARPA-1567"] | 235000
 Astronomy  | ["US-7376","DARPA-1567"] |   9000
 Physics    | ["US-7376"]              |  30000
(3 rows)

152 - collection-operators.md

See Collection functions for a comprehensive list of functions that can be used to manipulate arrays and sets.

Collections can be used in the following ways:

Collections cannot be used in the following ways:

  • As part of an IN or NOT IN expression.

  • As partition columns when creating tables.

  • With ANALYZE_STATISTICS or TopK projections.

  • Non-native arrays only: ORDER BY, PARTITION BY, DEFAULT, SET USING, or constraints.

153 - collection-order-by.md

=> SELECT * FROM employees
   ORDER BY grant_values;
 id | department |          grants          |  grant_values
----+------------+--------------------------+----------------
 36 | Astronomy  | ["US-7376","DARPA-1567"] | [5000,4000]
 36 | Physics    | ["US-7376","DARPA-1567"] | [10000,25000]
 33 | Physics    | ["US-7376"]              | [30000]
 42 | Physics    | ["US-7376","DARPA-1567"] | [65000,135000]
(4 rows)

154 - comment-limit.md

Comments can be up to 8192 characters in length. If a comment exceeds that limitation, Vertica truncates the comment and emits a message.
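
For context, comments are attached with COMMENT ON statements; a minimal, hypothetical sketch (object name and comment text are illustrative):

=> COMMENT ON TABLE public.customer_dimension IS 'Customer master data, refreshed nightly';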

155 - comment-privileges.md

Non-superuser: object owner

156 - common-create-db-arguments.md

-l
The absolute path to the Vertica license file to apply to the new database.

-p
The password for the new database.

--depot-size

The maximum size for the depot. Defaults to 60% of the filesystem containing the depot path.

You can specify the size in two ways:

  • integer%: Percentage of filesystem's disk space to allocate.

  • integer{K|M|G|T}: Amount of disk space to allocate for the depot in kilobytes, megabytes, gigabytes, or terabytes.

However you specify this value, the depot size cannot be more than 80 percent of disk space of the file system where the depot is stored.

157 - complex-types-conversion-table.md

java.sql Type                 | Vertica Type  | Java Type
------------------------------+---------------+--------------------------------------------------
BIT                           | BOOL          | java.lang.Boolean
BIGINT                        | INT           | java.lang.Long
DOUBLE                        | FLOAT         | java.lang.Double
CHAR                          | CHAR          | java.lang.String
VARCHAR                       | VARCHAR       | java.lang.String
LONGVARCHAR                   | LONGVARCHAR   | java.lang.String
DATE                          | DATE          | java.sql.Date
TIME                          | TIME          | java.sql.Time
TIME                          | TIMETZ        | java.sql.Time
TIMESTAMP                     | TIMESTAMP     | java.sql.Timestamp
TIMESTAMP                     | TIMESTAMPTZ   | com.vertica.dsi.dataengine.utilities.TimestampTz
getIntervalRange(oid, typmod) | INTERVAL      | com.vertica.jdbc.VerticaDayTimeInterval
getIntervalRange(oid, typmod) | INTERVALYM    | com.vertica.jdbc.VerticaYearMonthInterval
BINARY                        | BINARY        | byte[]
VARBINARY                     | VARBINARY     | byte[]
LONGVARBINARY                 | LONGVARBINARY | byte[]
NUMERIC                       | NUMERIC       | java.math.BigDecimal
TYPE_SQL_GUID                 | UUID          | java.util.UUID
ARRAY                         | ARRAY         | java.lang.Object[]
ARRAY                         | SET           | java.lang.Object[]
STRUCT                        | ROW           | java.sql.Struct
ARRAY                         | MAP           | java.lang.Object[]

158 - complex-types-native-tables-restrictions.md

Complex types used in native tables have some restrictions, in addition to the restrictions for individual types listed on their reference pages:

  • A native table must have at least one column that is a primitive type or a native array (one-dimensional array of a primitive type). If a flex table has real columns, it must also have at least one column satisfying this restriction.

  • Complex type columns cannot be used in ORDER BY or PARTITION BY clauses nor as FILLER columns.

  • Complex type columns cannot have constraints.

  • Complex type columns cannot use DEFAULT or SET USING.

  • Expressions returning complex types cannot be used as projection columns, and projections cannot be segmented or ordered by columns of complex types.
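
As a minimal sketch, the following hypothetical native table is consistent with these restrictions: it has a primitive column, and its complex-type columns carry no constraints, defaults, or ORDER BY/PARTITION BY references:

=> CREATE TABLE orders(
       order_id INT,                                  -- primitive column satisfies the first rule
       line_items ARRAY[VARCHAR(50)],                 -- native one-dimensional array
       ship_address ROW(street VARCHAR, city VARCHAR) -- ROW column with no constraint or default
   );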

159 - complex-types-system-table-example.md

=> CREATE EXTERNAL TABLE warehouse(
    name VARCHAR, id_map MAP<INT,VARCHAR>,
    data row(record INT, total FLOAT, description VARCHAR(100)),
    prices ARRAY[INT], comment VARCHAR(200), sales_total FLOAT, storeID INT)
  AS COPY FROM ... PARQUET;

=> SELECT type_id,type_kind,type_name,field_id,field_name,field_type_name,field_position
    FROM COMPLEX_TYPES ORDER BY type_id,field_name;

      type_id      | type_kind |       type_name       | field_id | field_name  | field_type_name | field_position
-------------------+-----------+-----------------------+----------+-------------+-----------------+----------------
 45035996274278280 | Map       | _ct_45035996274278280 |        6 | key         | int             |              0
 45035996274278280 | Map       | _ct_45035996274278280 |        9 | value       | varchar(80)     |              1
 45035996274278282 | Row       | _ct_45035996274278282 |        9 | description | varchar(80)     |              2
 45035996274278282 | Row       | _ct_45035996274278282 |        6 | record      | int             |              0
 45035996274278282 | Row       | _ct_45035996274278282 |        7 | total       | float           |              1
 45035996274278284 | Array     | _ct_45035996274278284 |        6 |             | int             |              0
(6 rows)

160 - config-dfs-path-desc.md

String containing the file name of the configuration file to use when authenticating with the SecureData appliance. You must create this file using VoltageSecureConfigure. If you do not supply this parameter, you must set session parameters to configure access to SecureData. See Configuring access to SecureData. Any values set in session parameters override the values in this file.

161 - config-params.md

For best performance, set the following parameters with the specified minimum values:

Parameter          | Minimum Value
-------------------+--------------
HDFS block size    | 512MB
Namenode Java Heap | 1GB
Datanode Java Heap | 2GB

162 - configuration-parameters.md

Vertica supports a wide variety of configuration parameters that affect many facets of database behavior. These parameters can be set with the appropriate ALTER statements at one or more levels, listed here in descending order of precedence:

  1. User (ALTER USER)

  2. Session (ALTER SESSION)

  3. Node (ALTER NODE)

  4. Database (ALTER DATABASE)

Not all parameters can be set at all levels. Consult the documentation of individual parameters for restrictions.
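
For example, a parameter whose allowed levels include DATABASE, such as MaxPartitionCount in the query below, can be set database-wide. A minimal sketch, assuming the ALTER DATABASE ... SET PARAMETER form and reusing the parameter's default value:

=> ALTER DATABASE DEFAULT SET PARAMETER MaxPartitionCount = 1024;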

You can query the CONFIGURATION_PARAMETERS system table to obtain the current settings for all user-accessible parameters. For example, the following query returns settings for partitioning parameters: their current and default values, which levels they can be set at, and whether changes require a database restart to take effect:

=> SELECT parameter_name, current_value, default_value, allowed_levels, change_requires_restart
      FROM configuration_parameters  WHERE parameter_name ILIKE '%partitioncount%';
    parameter_name    | current_value | default_value | allowed_levels | change_requires_restart
----------------------+---------------+---------------+----------------+-------------------------
 MaxPartitionCount    | 1024          | 1024          | NODE, DATABASE | f
 ActivePartitionCount | 1             | 1             | NODE, DATABASE | f
(2 rows)

163 - connecting-from-a-non-cluster-host.md

You can use the Vertica vsql executable image on a non-cluster Linux host to connect to a Vertica database.

  • On Red Hat, CentOS, and SUSE systems, you can install the client driver RPM, which includes the vsql executable. See Installing the vsql client for details.

  • If the non-cluster host is running the same version of Linux as the cluster, copy the image file to the remote system. For example:

    $ scp host01:/opt/vertica/bin/vsql .
    $ ./vsql
    
  • If the non-cluster host is running a different distribution or version of Linux than your cluster hosts, you must install the Vertica server RPM in order to get vsql:

    1. Download the appropriate RPM package by browsing to the Vertica website. On the Support tab, select Customer Downloads.

    2. If the system you used to download the RPM is not the non-cluster host, transfer the file to the non-cluster host.

    3. Log into the non-cluster host as root and install the RPM package using the command:

      # rpm -Uvh filename
      

      Where filename is the package you downloaded. Note that you do not have to run the install_vertica script on the non-cluster host to use vsql.

Notes

  • Use the same Command-line options that you would on a cluster host.

  • You cannot run vsql on a Cygwin bash shell (Windows). Use ssh to connect to a cluster host, then run vsql.

164 - connection-failover-lead-in.md

If a client application attempts to connect to a host in the Vertica cluster that is down, the connection attempt fails when using the default connection configuration. This failure usually returns an error to the user. The user must either wait until the host recovers and retry the connection or manually edit the connection settings to choose another host.

Due to Vertica Analytic Database's distributed architecture, you usually do not care which database host handles a client application's connection. You can use the client driver's connection failover feature to prevent the user from getting connection errors when the host specified in the connection settings is unreachable. The client drivers give you several ways to let the driver automatically attempt to connect to a different host if the one specified in the connection parameters is unreachable:

  • Configure your DNS server to return multiple IP addresses for a host name. When you use this host name in the connection settings, the client attempts to connect to the first IP address from the DNS lookup. If the host at that IP address is unreachable, the client tries to connect to the second IP, and so on until it either manages to connect to a host or it runs out of IP addresses.

  • Supply a list of backup hosts for the client driver to try if the primary host you specify in the connection parameters is unreachable.

  • (JDBC only) Use driver-specific connection properties to manage timeouts before attempting to connect to the next node.

For all methods, the process of failover is transparent to the client application (other than specifying the list of backup hosts, if you choose to use the list method of failover). If the primary host is unreachable, the client driver automatically tries to connect to other hosts.

Failover only applies to the initial establishment of the client connection. If the connection breaks, the driver does not automatically try to reconnect to another host in the database.

Choosing a failover method

You usually choose to use one of the two failover methods. However, they do work together. If your DNS server returns multiple IP addresses and you supply a list of backup hosts, the client first tries all of the IPs returned by the DNS server, then the hosts in the backup list.

The DNS method of failover centralizes the configuration of client failover. As you add new nodes to your Vertica Analytic Database cluster, you can choose to add them to the failover list by editing the DNS server settings. All client systems that use the DNS server to connect to Vertica Analytic Database automatically use connection failover without having to change any settings. However, this method does require administrative access to the DNS server that all clients use to connect to the Vertica Analytic Database cluster. This may not be possible in your organization.

Using the backup server list is easier than editing the DNS server settings. However, it decentralizes the failover feature. You may need to update the application settings on each client system if you make changes to your Vertica Analytic Database cluster.

Using DNS failover

To use DNS failover, you need to change your DNS server's settings to map a single host name to multiple IP addresses of hosts in your Vertica Analytic Database cluster. You then have all client applications use this host name to connect to Vertica Analytic Database.

You can choose to have your DNS server return as many IP addresses for the host name as you want. In smaller clusters, you may choose to have it return the IP addresses of all of the hosts in your cluster. However, for larger clusters, you should consider choosing a subset of the hosts to return. Otherwise there can be a long delay as the client driver tries unsuccessfully to connect to each host in a database that is down.

Using the backup host list

To enable backup list-based connection failover, your client application has to specify at least one IP address or host name of a host in the BackupServerNode parameter. The host name or IP can optionally be followed by a colon and a port number. If not supplied, the driver defaults to the standard Vertica port number (5433). To list multiple hosts, separate them by a comma.

The following example demonstrates setting the BackupServerNode connection parameter to specify additional hosts for the connection attempt. The connection string intentionally has a non-existent node, so that the initial connection fails. The client driver has to resort to trying the backup hosts to establish a connection to Vertica.

165 - connectivity-pack-reboot-message.md

166 - consumer-group-id-description.md

The name of the Kafka consumer group to which Vertica reports its progress consuming messages. Set this value to disable progress reports to a Kafka consumer group. For details, see Monitoring Vertica message consumption with consumer groups.

Default: vertica_database-name

167 - continuous-ud-filter-c-.md

The ContinuousUDFilter class extends UDFilter and adds the following methods for extension by subclasses:

virtual void initialize(ServerInterface &srvInterface);

virtual void run();

virtual void deinitialize(ServerInterface &srvInterface);

168 - continuous-ud-parser-c-.md

The ContinuousUDParser class extends UDParser and adds the following methods for extension by subclasses:

virtual void initialize(ServerInterface &srvInterface);

virtual void run();

virtual void deinitialize(ServerInterface &srvInterface);

169 - continuous-ud-parser-java.md

The ContinuousUDParser class extends UDParser and adds the following methods for extension by subclasses:


public void initialize(ServerInterface srvInterface, SizedColumnTypes returnType);

public abstract void run() throws UdfException;

public void deinitialize(ServerInterface srvInterface, SizedColumnTypes returnType);

170 - continuous-ud-source-c-.md

The ContinuousUDSource class extends UDSource and adds the following methods for extension by subclasses:

virtual void initialize(ServerInterface &srvInterface);

virtual void run();

virtual void deinitialize(ServerInterface &srvInterface);

171 - copy-collection-bounds.md

For columns of collection types, these options indicate the characters that mark the beginning and end of the collection. It is an error to use these characters elsewhere within the list of elements without escaping them. No COLLECTION option may have the same value as any other COLLECTION option.

Default: Square brackets ('[' and ']').

172 - copy-collection-delimiter.md

For columns of collection types, indicates the single ASCII character used to separate elements within each collection. You can use any ASCII value in the range E'\000' to E'\177', inclusive. No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.

Default: Comma (',').

173 - copy-collection-enclose.md

For columns of collection types, sets the quote character within which to enclose individual elements, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000').

No COLLECTION option may have the same value as any other COLLECTION option.

Default: double quote ('"')

174 - copy-collection-null.md

The string representing a null element value in a collection. You can specify a null value as any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII value except NULL: E'\000'). No COLLECTION option may have the same value as any other COLLECTION option. For more information, see Delimited data.

Default: 'null'

175 - copy-delimiter.md

Indicates the single ASCII character used to separate columns within each record of a file. You can use any ASCII value in the range E'\000' to E'\177', inclusive. You cannot use the same character for both the DELIMITER and NULL parameters. For more information, see Delimited data.

Default: Vertical bar ('|').
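
For illustration, a hypothetical load of comma-separated data that also sets a NULL string distinct from the delimiter (table name and path are illustrative):

=> COPY sales FROM '/home/dbadmin/sales.txt' DELIMITER ',' NULL AS 'NA';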

176 - copy-enclosed.md

Sets the quote character within which to enclose data, allowing delimiter characters to be embedded in string values. You can choose any ASCII value in the range E'\001' to E'\177' inclusive (any ASCII character except NULL: E'\000'). By default, ENCLOSED BY has no value, meaning data is not enclosed by any sort of quote character.
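
For illustration, a hypothetical load in which quoted fields can embed the delimiter character (table name and path are illustrative):

=> COPY sales FROM '/home/dbadmin/sales.csv' DELIMITER ',' ENCLOSED BY '"';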

177 - copy-escape.md

Sets the escape character. Once set, the character following the escape character is interpreted literally, rather than as a special character. You can define an escape character using any ASCII value in the range E'\001' to E'\177', inclusive (any ASCII character except NULL: E'\000').

The COPY statement does not interpret the data it reads in as string literals. It also does not follow the same escape rules as other SQL statements (including the COPY parameters). When reading data, COPY interprets only the characters defined by these options as special values:

  • ESCAPE [AS]

  • DELIMITER

  • ENCLOSED [BY]

  • RECORD TERMINATOR

  • All COLLECTION options

Default: Backslash ('\').
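
For example, a minimal sketch that sets a tilde as the escape character so that a literal vertical bar can appear in the data; the employees table and sample row are illustrative:

=> COPY employees FROM STDIN ESCAPE AS '~';
>> 42|Cooper ~| Hofstadter|Physics
>> \.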

178 - copy-function-prerequisites.md

Additionally, if access policies exist on the source table, the following must be true:

  • Access policies on both tables must be identical.

  • One of the following must be true:

    • The executing user owns the source table.

    • AccessPolicyManagementSuperuserOnly is set to true. See Managing access policies for details.

179 - copy-local-must-be-first-note.md

180 - copy-no-escape.md

Eliminates escape-character handling. Use this option if you do not need any escape character and you want to prevent characters in your data from being interpreted as escape sequences.
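
For example, a minimal sketch that loads Windows-style paths containing backslashes literally; the file_list table and sample row are illustrative:

=> COPY file_list FROM STDIN NO ESCAPE;
>> 1|C:\Users\dbadmin\report.txt
>> \.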

181 - copy-parsers-arrays.md

If the total size of an array exceeds the size defined by the target table, the parser sets the value to null.

182 - copy-record-term.md

Specifies the literal character string indicating the end of a data file record. For more information about using this parameter, see Delimited data.
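
For example, a minimal sketch that loads a file whose records end with a carriage return and line feed; the table name and path are illustrative:

=> COPY employees FROM LOCAL '/tmp/employees.txt' RECORD TERMINATOR E'\r\n';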

183 - copy-trailing.md

Specifies that if Vertica encounters a record with insufficient data to match the columns in the table column list, COPY inserts the missing columns with NULL values. For other information and examples, see Fixed-width format data.

184 - count-distinct-examples.md

Examples

The following query returns the number of distinct values in a column:

=> SELECT COUNT (DISTINCT date_key) FROM date_dimension;

 COUNT
-------
  1826
(1 row)

This example returns the number of distinct return values from an expression:

=> SELECT COUNT (DISTINCT date_key + product_key) FROM inventory_fact;

 COUNT
-------
 21560
(1 row)

You can create an equivalent query using the LIMIT keyword to restrict the number of rows returned:

=> SELECT COUNT(date_key + product_key) FROM inventory_fact GROUP BY date_key LIMIT 10;

 COUNT
-------
   173
    31
   321
   113
   286
    84
   244
   238
   145
   202
(10 rows)

The following query uses GROUP BY to count distinct values within groups:

=> SELECT product_key, COUNT (DISTINCT date_key) FROM inventory_fact
   GROUP BY product_key LIMIT 10;

 product_key | count
-------------+-------
           1 |    12
           2 |    18
           3 |    13
           4 |    17
           5 |    11
           6 |    14
           7 |    13
           8 |    17
           9 |    15
          10 |    12
(10 rows)

The following query returns the number of distinct products and the total inventory within each date key:

=> SELECT date_key, COUNT (DISTINCT product_key), SUM(qty_in_stock) FROM inventory_fact
   GROUP BY date_key LIMIT 10;

 date_key | count |  sum
----------+-------+--------
        1 |   173 |  88953
        2 |    31 |  16315
        3 |   318 | 156003
        4 |   113 |  53341
        5 |   285 | 148380
        6 |    84 |  42421
        7 |   241 | 119315
        8 |   238 | 122380
        9 |   142 |  70151
       10 |   202 |  95274
(10 rows)

This query selects each distinct product_key value and then counts the number of distinct date_key values for all records with the specific product_key value. It also counts the number of distinct warehouse_key values in all records with the specific product_key value:

=> SELECT product_key, COUNT (DISTINCT date_key), COUNT (DISTINCT warehouse_key) FROM inventory_fact
   GROUP BY product_key LIMIT 15;

 product_key | count | count
-------------+-------+-------
           1 |    12 |    12
           2 |    18 |    18
           3 |    13 |    12
           4 |    17 |    18
           5 |    11 |     9
           6 |    14 |    13
           7 |    13 |    13
           8 |    17 |    15
           9 |    15 |    14
          10 |    12 |    12
          11 |    11 |    11
          12 |    13 |    12
          13 |     9 |     7
          14 |    13 |    13
          15 |    18 |    17
(15 rows)

This query selects each distinct product_key value, counts the number of distinct date_key and warehouse_key values for all records with the specific product_key value, and then sums all qty_in_stock values in records with the specific product_key value. It then returns the number of product_version values in records with the specific product_key value:

=> SELECT product_key, COUNT (DISTINCT date_key),
      COUNT (DISTINCT warehouse_key),
      SUM (qty_in_stock),
      COUNT (product_version)
      FROM inventory_fact GROUP BY product_key LIMIT 15;

 product_key | count | count |  sum  | count
-------------+-------+-------+-------+-------
           1 |    12 |    12 |  5530 |    12
           2 |    18 |    18 |  9605 |    18
           3 |    13 |    12 |  8404 |    13
           4 |    17 |    18 | 10006 |    18
           5 |    11 |     9 |  4794 |    11
           6 |    14 |    13 |  7359 |    14
           7 |    13 |    13 |  7828 |    13
           8 |    17 |    15 |  9074 |    17
           9 |    15 |    14 |  7032 |    15
          10 |    12 |    12 |  5359 |    12
          11 |    11 |    11 |  6049 |    11
          12 |    13 |    12 |  6075 |    13
          13 |     9 |     7 |  3470 |     9
          14 |    13 |    13 |  5125 |    13
          15 |    18 |    17 |  9277 |    18
(15 rows)

185 - create-address-example-i-pv4.md

=> SELECT node_name,node_address,node_address_family FROM v_catalog.nodes;
    node_name     | node_address | node_address_family
------------------+--------------+----------------------
 v_vmart_node0001 | 10.20.110.21 | ipv4
 v_vmart_node0002 | 10.20.110.22 | ipv4
 v_vmart_node0003 | 10.20.110.23 | ipv4
(3 rows)


=> CREATE NETWORK ADDRESS node01 ON v_vmart_node0001 WITH '10.20.110.21';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node02 ON v_vmart_node0002 WITH '10.20.110.22';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node03 on v_vmart_node0003 WITH '10.20.110.23';
CREATE NETWORK ADDRESS

186 - create-alter-user-account.md

Locks or unlocks user access to the database, one of the following:

  • UNLOCK (default)

  • LOCK prevents a new user from logging in. This can be useful when creating an account for a user who does not need immediate access.
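
For example, a minimal sketch that locks and later unlocks an account; the user name is illustrative:

=> ALTER USER Fred ACCOUNT LOCK;
ALTER USER
=> ALTER USER Fred ACCOUNT UNLOCK;
ALTER USER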

187 - create-alter-user-grace-period.md

Specifies how long a user query can block on any session socket, one of the following:

  • NONE (default): Removes any grace period previously set on session queries.

  • 'interval': Specifies as an interval the maximum grace period for current session queries, up to 20 days.

For details, see Handling session socket blocking.
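
For example, a minimal sketch that sets a five-minute grace period; the user name is illustrative:

=> ALTER USER Fred GRACEPERIOD '5 minutes';
ALTER USER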

188 - create-alter-user-idle-session-timeout.md

The length of time the system waits before disconnecting an idle session, one of the following:

  • NONE (default): No limit set for this user. If you omit this parameter, no limit is set for this user.

  • 'interval': An interval value, up to one year.

For details, see Managing client connections.

189 - create-alter-user-max-connections.md

Sets the maximum number of connections the user can have to the server, one of the following:

  • NONE (default): No limit set. If you omit this parameter, the user can have an unlimited number of connections across the database cluster.

  • integer ON DATABASE: Sets to integer the maximum number of connections across the database cluster.

  • integer ON NODE: Sets to integer the maximum number of connections to each node.

For details, see Managing client connections.
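
For example, a minimal sketch that limits a user to three connections across the database cluster; the user name is illustrative:

=> ALTER USER Fred MAXCONNECTIONS 3 ON DATABASE;
ALTER USER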

190 - create-alter-user-memory-cap.md

Sets how much memory can be allocated to user requests, one of the following:

  • NONE (default): No limit

  • A string value that specifies the memory limit, one of the following:

    • 'int%' expresses the maximum as a percentage of total memory available to the Resource Manager, where int is an integer value between 0 and 100. For example:

      MEMORYCAP '40%'

    • 'int{K|M|G|T}' expresses memory allocation in kilobytes, megabytes, gigabytes, or terabytes. For example:

      MEMORYCAP '10G'

191 - create-alter-user-password-expiration.md

Forces immediate expiration of the user's password. The user must change the password on the next login.

192 - create-alter-user-profile.md

Assigns a profile that controls password requirements for this user, one of the following:

  • DEFAULT (default): Assigns the default database profile to this user.

  • profile-name: A profile that is defined by CREATE PROFILE.

193 - create-alter-user-resource-pool.md

Assigns a resource pool to this user. The user must also be granted privileges to this pool, unless privileges to the pool are set to PUBLIC.

The FOR SUBCLUSTER clause assigns a subcluster-specific resource pool to the user. You can assign only one subcluster-specific resource pool to each user.

194 - create-alter-user-runtime-cap.md

Sets how long this user's queries can execute, one of the following:

  • NONE (default): No limit set for this user. If you omit this parameter, no limit is set for this user.

  • 'interval': An interval value, up to one year.

A query's runtime limit can be set at three levels: the user's runtime limit, the user's resource pool, and the session setting. For more information, see Setting a runtime limit for queries.
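
For example, a minimal sketch that caps this user's queries at ten minutes; the user name is illustrative:

=> ALTER USER Fred RUNTIMECAP '10 minutes';
ALTER USER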

195 - create-alter-user-search-path.md

Specifies the user's default search path, which tells Vertica which schemas to search for unqualified references to tables and UDFs, one of the following:

  • DEFAULT (default): Sets the search path as follows:

    "$user", public, v_catalog, v_monitor, v_internal
    
  • Comma-delimited list of schemas.

For details, see Setting Search Paths.
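
For example, a minimal sketch that puts the store schema ahead of public in the user's search path; the names are illustrative:

=> ALTER USER Fred SEARCH_PATH store, public;
ALTER USER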

196 - create-alter-user-temp-space-cap.md

Sets how much temporary file storage is available for user requests, one of the following:

  • NONE (default): No limit

  • String value that specifies the storage limit, one of the following:

    • int% expresses the maximum as a percentage of total temporary storage available to the Resource Manager, where int is an integer value between 0 and 100. For example:

      TEMPSPACECAP '40%'

    • int{K|M|G|T} expresses storage allocation in kilobytes, megabytes, gigabytes, or terabytes. For example:

      TEMPSPACECAP '10G'

197 - create-client-keys.md

The following steps generate a client key and certificate, and then make them available to the client:

  1. Generate the client key:

          
       => CREATE KEY client_private_key TYPE 'RSA' LENGTH 2048;
       CREATE KEY
       
    

  2. Generate the client certificate. Mutual TLS requires that the Common Name (CN) in the SUBJECT specifies a database username:

          
       => CREATE CERTIFICATE client_certificate
       SUBJECT '/C=US/ST=Massachusetts/L=Cambridge/O=OpenText/OU=Vertica/CN=dbadmin/emailAddress=example@example.com'
       SIGNED BY ca_certificate
       EXTENSIONS 'nsComment' = 'Vertica client cert', 'extendedKeyUsage' = 'clientAuth'
       KEY client_private_key;
       CREATE CERTIFICATE
       
    

  3. On the client machine, export the client key and client certificate to the client filesystem. The following commands use the vsql client:

    $ vsql -At -c "SELECT key FROM cryptographic_keys WHERE name = 'client_private_key';" -o client_private_key.key
    $ vsql -At -c "SELECT certificate_text FROM certificates WHERE name = 'client_certificate';" -o client_cert.pem
    

    In the preceding commands:

    • -A: enables unaligned output.
    • -t: prevents the command from outputting metadata, such as column names.
    • -c: instructs the shell to run one command and then exit.
    • -o: writes the query output to the specified filename.

    For details about all vsql command-line options, see Command-line options.

  4. Copy or move the client key and certificate to a location that your client recognizes.

    The following commands move the client key and certificate to the hidden directory ~/.client-creds, and then grant the file owner read and write permissions with chmod:

    $ mkdir ~/.client-creds
    $ mv client_private_key.key ~/.client-creds/client_key.key
    $ mv client_cert.pem ~/.client-creds/client_cert.pem
    $ chmod 600 ~/.client-creds/client_key.key ~/.client-creds/client_cert.pem
    

198 - create-cr-details.md

For implementation details, see VerticaDB custom resource definition.

199 - create-description.md

Cannot be used with --delete, --read, or --update.

200 - create-directed-query-intro.md

CREATE DIRECTED QUERY has two variants:

  • CREATE DIRECTED QUERY OPTIMIZER directs the query optimizer to generate annotated SQL from the specified input query. The annotated query contains hints that the optimizer can use to recreate its current query plan for that input query.
  • CREATE DIRECTED QUERY CUSTOM specifies an annotated query supplied by the user. Vertica associates the annotated query with the input query specified by the last SAVE QUERY statement.

In both cases, Vertica associates the annotated query and input query, and registers their association in the system table DIRECTED_QUERIES under query_name.
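
For example, a minimal sketch of the OPTIMIZER variant, which generates and saves the annotated query under the given name; the query name and input query are illustrative:

=> CREATE DIRECTED QUERY OPTIMIZER 'findEmployeesBoston'
   SELECT employee_first_name, employee_last_name
   FROM employee_dimension WHERE employee_city = 'Boston';
CREATE DIRECTED QUERY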

201 - create-examples.md

The following example defines an external table for delimited data stored in HDFS:

=> CREATE EXTERNAL TABLE sales (itemID INT, date DATE, price FLOAT)
    AS COPY FROM 'hdfs:///data/ext1.csv' DELIMITER ',';

The following example uses data in the ORC format that is stored in S3. The data has two partition columns. For more information about partitions, see Partitioned data.

=> CREATE EXTERNAL TABLE records (id int, name varchar(50), created date, region varchar(50))
   AS COPY FROM 's3://datalake/sales/*/*/*'
   PARTITION COLUMNS created, region;

The following example shows how you can read from all Parquet files in a local directory, with no partitions and no globs:

=> CREATE EXTERNAL TABLE sales (itemID INT, date DATE, price FLOAT)
    AS COPY FROM '/data/sales/*.parquet' PARQUET;

202 - create-external-table-privileges.md

Superuser, or non-superuser with the following privileges:

  • READ privileges on the USER-accessible storage location. See GRANT (storage location)

  • Full access (including SELECT) to an external table that the user has privileges to create

203 - create-function-factory-name.md

Name of the factory class that generates the function instance.

204 - create-function-name.md

Name of the function to create. This is the name used in SQL invocations of the function. It does not need to match the name of the factory, but it is less confusing if they are the same or similar.

The function name must conform to the restrictions on Identifiers.

205 - create-load-baklancing-group-with-fault-group.md

=> CREATE FAULT GROUP fault_1;
CREATE FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0001;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0002;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0003;
ALTER FAULT GROUP
=> ALTER FAULT GROUP fault_1 ADD NODE  v_vmart_node0004;
ALTER FAULT GROUP
=> SELECT node_name,node_address,node_address_family,export_address
   FROM v_catalog.nodes;
    node_name     | node_address | node_address_family | export_address
------------------+--------------+---------------------+----------------
 v_vmart_node0001 | 10.20.110.21 | ipv4                | 10.20.110.21
 v_vmart_node0002 | 10.20.110.22 | ipv4                | 10.20.110.22
 v_vmart_node0003 | 10.20.110.23 | ipv4                | 10.20.110.23
 v_vmart_node0004 | 10.20.110.24 | ipv4                | 10.20.110.24
(4 rows)

=> CREATE LOAD BALANCE GROUP group_all WITH FAULT GROUP fault_1 FILTER
   '0.0.0.0/0';
CREATE LOAD BALANCE GROUP

=> CREATE LOAD BALANCE GROUP group_some WITH FAULT GROUP fault_1 FILTER
   '10.20.110.21/30';
CREATE LOAD BALANCE GROUP

=> SELECT * FROM LOAD_BALANCE_GROUPS;
      name      |   policy   |     filter      |         type          | object_name
----------------+------------+-----------------+-----------------------+-------------
 group_all      | ROUNDROBIN | 0.0.0.0/0       | Fault Group           | fault_1
 group_some     | ROUNDROBIN | 10.20.110.21/30 | Fault Group           | fault_1
(2 rows)

206 - create-load-balance-group-from-network-addresses.md

=> CREATE NETWORK ADDRESS addr01 ON v_vmart_node0001 WITH '10.20.110.21';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr02 ON v_vmart_node0002 WITH '10.20.110.22';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr03 on v_vmart_node0003 WITH '10.20.110.23';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS addr04 on v_vmart_node0004 WITH '10.20.110.24';
CREATE NETWORK ADDRESS
=> CREATE LOAD BALANCE GROUP group_1 WITH ADDRESS addr01, addr02;
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP group_2 WITH ADDRESS addr03, addr04;
CREATE LOAD BALANCE GROUP

=> SELECT * FROM LOAD_BALANCE_GROUPS;
    name    |   policy   |     filter      |         type          | object_name
------------+------------+-----------------+-----------------------+-------------
 group_1    | ROUNDROBIN |                 | Network Address Group | addr01
 group_1    | ROUNDROBIN |                 | Network Address Group | addr02
 group_2    | ROUNDROBIN |                 | Network Address Group | addr03
 group_2    | ROUNDROBIN |                 | Network Address Group | addr04
(4 rows)

207 - create-segmented-projection-example.md

The following CREATE PROJECTION statement creates projection public.employee_dimension_super. It specifies to include all columns in table public.employee_dimension. The hash segmentation clause invokes the Vertica HASH function to segment projection data on the column employee_key; it also includes the ALL NODES clause, which specifies to distribute projection data evenly across all nodes in the cluster:

=> CREATE PROJECTION public.employee_dimension_super
    AS SELECT * FROM public.employee_dimension
    ORDER BY employee_key
    SEGMENTED BY hash(employee_key) ALL NODES;

208 - create-superprojection-options.md

Under certain conditions, Vertica automatically creates a table's superprojection immediately on table creation. Vertica also creates a superprojection when you first load data into that table, if none already exists. CREATE PROJECTION can create a superprojection if it specifies to include all table columns. A table can have multiple superprojections.

209 - create-table-as-query.md

Creates and loads a table from the results of a query, specified as follows:

AS [ /*+ LABEL */ ] [ AT epoch ] query

The query cannot include complex type columns.

210 - create-table-column-name-list.md

Valid only when creating a table from a query (AS query), defines column names that map to the query output. If you omit this list, Vertica uses the query output column names.

This clause and the ENCODED BY clause are mutually exclusive. Column name lists are invalid for external tables.

The number of names in column-name-list must match the number of queried columns.

211 - create-table-example.md

=> CREATE TABLE orders(
    orderkey    INT,
    custkey     INT,
    prodkey     ARRAY[VARCHAR(10)],
    orderprices ARRAY[DECIMAL(12,2)],
    orderdate   DATE
);

212 - create-table-order-by.md

Invalid for external tables, specifies columns from the SELECT list on which to sort the superprojection that is automatically created for this table. The ORDER BY clause cannot include qualifiers ASC or DESC. Vertica always stores projection data in ascending sort order.

If you omit the `ORDER BY` clause, Vertica uses the SELECT list order as the projection sort order.

213 - create-text-index-1.md

The text index contains two columns:

  • doc_id uses the unique identifier from the source table.

  • token is populated with text strings from the designated column in the source table. This column results from tokenizing and stemming the words found in the text column.

If your table is partitioned, then your text index also contains a third column named partition.
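
For example, a minimal sketch that creates a text index on a source table, where the first column is the table's unique identifier and the second is the text column to tokenize; the names are illustrative:

=> CREATE TEXT INDEX idx_log ON t_log (id, text);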

214 - create-text-index-2.md

You create a text index on a source table only once. In the future, you do not have to re-create the text index each time the source table is updated or changed.

Your text index stays synchronized to the contents of the source table through any operation that is run on the source table. These operations include, but are not limited to:

  • COPY

  • INSERT

  • UPDATE

  • DELETE

  • DROP PARTITION

  • MOVE_PARTITIONS_TO_TABLE

    When you move or swap partitions in a source table that is indexed, verify that the destination table already exists and is indexed in the same way.

215 - ctas-example.md

=> CREATE TABLE cust_basic_profile AS SELECT
     customer_key, customer_gender, customer_age, marital_status, annual_income, occupation
   FROM customer_dimension WHERE customer_age>18 AND customer_gender !='';
CREATE TABLE

=> SELECT customer_age, annual_income, occupation 
   FROM cust_basic_profile
   WHERE customer_age > 23 ORDER BY customer_age;
 customer_age | annual_income |     occupation
--------------+---------------+--------------------
           24 |        469210 | Hairdresser
           24 |        140833 | Butler
           24 |        558867 | Lumberjack
           24 |        529117 | Mechanic
           24 |        322062 | Acrobat
           24 |        213734 | Writer
           ...

216 - current-epoch_snip.md

The epoch into which data (COPY, INSERT, UPDATE, and DELETE operations) is currently being written.

217 - d-lock.md

This lock is only compatible with I-lock operations, so only table load operations such as INSERT and COPY are allowed during drop partition operations.

218 - data-buffer-c-.md

The DataBuffer class has a pointer to a buffer and size, and an offset indicating how much of the stream has been consumed.

/**
* A contiguous in-memory buffer of char *
*/
struct DataBuffer {
    /// Pointer to the start of the buffer
    char * buf;

    /// Size of the buffer in bytes
    size_t size;

    /// Number of bytes that have been processed by the UDL
    size_t offset;
};

219 - data-buffer-java.md

The DataBuffer class has an offset indicating how much of the stream has been consumed. Because Java is a language whose strings require attention to character encodings, the UDx must decode or encode buffers. A parser can interact with the stream by accessing the buffer directly.

/**
* DataBuffer is a contiguous in-memory buffer of data.
*/
public class DataBuffer {

/**
* The buffer of data.
*/
public byte[] buf;

/**
* An offset into the buffer that is typically used to track progress
* through the DataBuffer. For example, a UDParser advances the
* offset as it consumes data from the DataBuffer.
*/
public int offset;
}

220 - data-collector-components.md

Query DATA_COLLECTOR to get a list of components:

=> SELECT DISTINCT component, description FROM DATA_COLLECTOR 
   WHERE component ILIKE '%Depot%' ORDER BY component;
   component    |          description
----------------+-------------------------------
 DepotEvictions | Files evicted from the Depot
 DepotFetches   | Files fetched to the Depot
 DepotUploads   | Files Uploaded from the Depot
(3 rows)

221 - data-collector-interval-retention.md

How long to retain data in the component's Data Collector table, an INTERVAL. The INTERVAL_TIME retention policy property is set from this value. If the value is positive, it also sets the INTERVAL_SET policy property to true.

For example, if you specify the TupleMoverEvents component and set this value to two days ('2 days'::interval), the DC_TUPLE_MOVER_EVENTS Data Collector table retains records over the last 48 hours. Older Tuple Mover data is automatically dropped from this table.

Setting a component's policy's INTERVAL_TIME property has no effect on how much data storage the Data Collector retains on disk for that component. Maximum disk storage capacity is determined by the DISK_SIZE_KB property. Setting the INTERVAL_TIME property only affects how long data is retained by the component's Data Collector table. For details, see Configuring data retention policies.

To disable the INTERVAL_TIME policy property, set this value to a negative integer. Doing so reverts two retention policy properties to their default settings:

  • INTERVAL_SET: false

  • INTERVAL_TIME: 0

With these two properties thus set, the component's Data Collector table retains data on all component events until it reaches its maximum limit, as set by the DISK_SIZE_KB retention policy property.
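
For example, a minimal sketch that applies the two-day retention described above with SET_DATA_COLLECTOR_TIME_POLICY:

=> SELECT SET_DATA_COLLECTOR_TIME_POLICY('TupleMoverEvents', '2 days'::interval);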

222 - database-migration-status-example.md

=> SELECT node_name, phase, status, bytes_to_transfer, bytes_transferred, communal_storage_location FROM database_migration_status ORDER BY node_name, start_time;
     node_name    |       phase        |  status   | bytes_to_transfer | bytes_transferred | communal_storage_location
------------------+--------------------+-----------+-------------------+-------------------+---------------------------
 v_vmart_node0001 | Catalog Conversion | COMPLETED |                 0 |                 0 | s3://verticadbbucket/
 v_vmart_node0001 | Data Transfer      | COMPLETED |              1134 |              1134 | s3://verticadbbucket/
 v_vmart_node0001 | Catalog Transfer   | COMPLETED |              3765 |              3765 | s3://verticadbbucket/
 v_vmart_node0002 | Catalog Conversion | COMPLETED |                 0 |                 0 | s3://verticadbbucket/
 v_vmart_node0002 | Data Transfer      | COMPLETED |              1140 |              1140 | s3://verticadbbucket/
 v_vmart_node0002 | Catalog Transfer   | COMPLETED |              3766 |              3766 | s3://verticadbbucket/
 v_vmart_node0003 | Catalog Conversion | COMPLETED |                 0 |                 0 | s3://verticadbbucket/
 v_vmart_node0003 | Data Transfer      | RUNNING   |           5272616 |            183955 | s3://verticadbbucket/

223 - database-name-setting.md

Specifies the current database, set to the database name or DEFAULT.

224 - database-superuser.md

The automatically-created database user who has the same name as the Linux database administrator account and who can bypass all GRANT/REVOKE authorization, or any user that has been granted the PSEUDOSUPERUSER role. Do not confuse the concept of a database superuser with Linux superuser (root) privilege. A database superuser cannot have Linux superuser privilege.

225 - database.md

A cluster of nodes that, when active, can perform distributed data storage and SQL statement execution through administrative, interactive, and programmatic user interfaces.

226 - databases-api.md

GET databases Returns a list of databases, their properties, and current status.
POST databases Creates a new database by supplying a valid set of parameters.
GET databases/:database_name Returns details about a specific database.
PUT databases/:database_name Starts, stops, rebalances, or runs Workload Analyzer on a database.
DELETE databases/:database_name Deletes an existing database.
GET databases/:database_name/configuration Returns the current configuration parameters from the database.
PUT databases/:database_name/configuration Sets one or more configuration parameters in the database.
GET databases/:database_name/hosts Returns hosts details for a specific database.
POST databases/:database_name/hosts Adds a new host to the database.
DELETE databases/:database_name/hosts/:host_id Removes a host from the database.
POST databases/:database_name/hosts/:host_id/process Starts the database process on a specific host.
DELETE databases/:database_name/hosts/:host_id/process Stops the database on a specific host.
POST databases/:database_name/hosts/:host_id/replace_with/:host_id_new Replaces a host with a standby host in the database.
GET databases/:database_name/license Returns the Vertica license that the specified database is using.
GET databases/:database_name/licenses Returns all the feature licenses that the specified database is using.
GET databases/:database_name/nodes Returns a list of nodes for the specified database.
GET databases/:database_name/nodes/:node_id Returns details on a specific node for the specified database.
POST databases/:database_name/process Starts the specified database.
GET databases/:database_name/process Returns the state of the database as either UP or DOWN.
DELETE databases/:database_name/process Stops the specified database on all hosts.
POST databases/:database_name/rebalance/process Rebalances the specified database. This option can have a long run time.
GET databases/:database_name/status Retrieves the database properties structure.
POST databases/:database_name/Workload Analyzer/process Runs the analyze workload action against the specified database. This option can have a long run time.

227 - date-function-input-data-types.md

The date to process, one of the following data types:

228 - datetimebulletlist.md

229 - db-schema-name.md

Database and schema. The default schema is public. If you specify a database, it must be the current database.

230 - dbd-add-design-query-errors.md

Errors

Database Designer returns an error in the following cases:

  • The query contains illegal syntax.

  • The query references:

    • External or system tables only

    • Local temporary or other non-design tables

  • DELETE or UPDATE query has one or more subqueries.

  • INSERT query does not include a SELECT clause.

  • Database Designer cannot optimize the query.

231 - dbd-api-categories-of-functions.md

Database Designer functions perform the following operations, generally performed in the following order:

  1. Create a design.

  2. Set design properties.

  3. Populate a design.

  4. Create design and deployment scripts.

  5. Get design data.

  6. Clean up.

For detailed information, see Workflow for running Database Designer programmatically. For information on required privileges, see Privileges for running Database Designer functions.

Create a design

DESIGNER_CREATE_DESIGN directs Database Designer to create a design.

Set design properties

The following functions let you specify design properties:

Populate a design

The following functions let you add tables and queries to your Database Designer design:

Create design and deployment scripts

The following functions populate the Database Designer workspace and create design and deployment scripts. You can also analyze statistics, deploy the design automatically, and drop the workspace after the deployment:

Reset a design

DESIGNER_RESET_DESIGN discards all the run-specific information of the previous Database Designer build or deployment of the specified design but retains its configuration.

Get design data

The following functions display information about projections and scripts that the Database Designer created:

Cleanup

The following functions cancel any running Database Designer operation or drop a Database Designer design and all its contents:

232 - dbd-privileges.md

Non-superuser: design creator

233 - default-authentication-records_snip.md

=> SELECT auth_name,is_auth_enabled,auth_host_type,auth_method,auth_priority,is_fallthrough_enabled FROM client_auth;
         auth_name         | is_auth_enabled | auth_host_type | auth_method | auth_priority | is_fallthrough_enabled
---------------------------+-----------------+----------------+-------------+---------------+------------------------
 default_hash_network_ipv4 | True            | HOST           | PASSWORD    |            -1 | False
 default_hash_network_ipv6 | True            | HOST           | PASSWORD    |            -1 | False
 default_hash_local        | True            | LOCAL          | PASSWORD    |            -1 | False
(3 rows)

234 - definer-invoker-example.md

In this example, this table:

records(i INT, updated_date TIMESTAMP DEFAULT sysdate, updated_by VARCHAR(128) DEFAULT current_user())

Contains the following content:

=> SELECT * FROM records;
 i |        updated_date        | updated_by
---+----------------------------+------------
 1 | 2021-08-27 15:54:05.709044 | Bob
 2 | 2021-08-27 15:54:07.051154 | Bob
 3 | 2021-08-27 15:54:08.301704 | Bob
(3 rows)

Bob creates a procedure to update the table, defining it with the SECURITY DEFINER option, and grants EXECUTE on the procedure to Alice. Alice can then use the procedure to update the table without any additional privileges.
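
The procedure definition itself is not included in this snippet. A minimal PLvSQL sketch that is consistent with the output below (the parameter names are illustrative) might look like:

=> CREATE OR REPLACE PROCEDURE update_records(new_val INT, old_val INT)
   LANGUAGE PLvSQL
   SECURITY DEFINER AS $$
   BEGIN
       -- Runs as the definer (Bob), so callers need no privileges on records
       PERFORM UPDATE records SET i = new_val WHERE i = old_val;
   END;
   $$;

Bob then grants EXECUTE on the procedure to Alice: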

=> GRANT EXECUTE ON PROCEDURE update_records(int,int) to Alice;
GRANT PRIVILEGE

=> \c - Alice
You are now connected as user "Alice".

=> CALL update_records(99,1);
 update_records
---------------
             0
(1 row)

Because calls to update_records() effectively run the procedure as Bob, Bob is listed as the updater of the table rather than Alice:

=> SELECT * FROM records;
 i  |        updated_date        | updated_by
----+----------------------------+------------
 99 | 2021-08-27 15:55:42.936404 | Bob
  2 | 2021-08-27 15:54:07.051154 | Bob
  3 | 2021-08-27 15:54:08.301704 | Bob
(3 rows)

235 - delegation-token-h-cat.md

$ vsql -U dbuser1

-- set delegation token for user and HiveServer2
=> ALTER SESSION SET
   HadoopImpersonationConfig='[
     {"nameservice":"hadoopNS","token":"JQAHcmVsZWFzZQdyZWxlYXNlAIoBYVJKrYSKAWF2VzGEjgmzj_IUCIrI9b8Dqu6awFTHk5nC-fHB8xsSV0VCSERGUyBkZWxlZ2F0aW9uETEwLjIwLjQyLjEwOTo4MDIw"},
     {"schema":"access","token":"UwAHcmVsZWFzZQdyZWxlYXNlL2hpdmUvZW5nLWc5LTEwMC52ZXJ0aWNhY29ycC5jb21AVkVSVElDQUNPUlAuQ09NigFhUkmyTooBYXZWNk4BjgETFKN2xPURn19Yq9tf-0nekoD51TZvFUhJVkVfREVMRUdBVElPTl9UT0tFThZoaXZlc2VydmVyMkNsaWVudFRva2Vu"}]';

-- uses HiveServer2 token to get metadata
=> CREATE HCATALOG SCHEMA access WITH hcatalog_schema 'access';

-- uses both tokens
=> SELECT * FROM access.t1;

--uses only HiveServer2 token
=> SELECT * FROM hcatalog_tables;

236 - delete-option-description.md

Cannot be used with --create, --read, or --update.

237 - delimited-basic-collection-example.md

=> CREATE TABLE researchers (id INT, name VARCHAR, grants ARRAY[VARCHAR], values ARRAY[INT]);
CREATE TABLE

=> COPY researchers FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 42|Sheldon Cooper|[US-7376,DARPA-1567]|[65000,135000]
>> 17|Howard Wolowitz|[NASA-1683,NASA-7867,SPX-76]|[16700,85000,45000]
>> \.

=> SELECT * FROM researchers;
 id |      name       |               grants               |       values
----+-----------------+------------------------------------+---------------------
 17 | Howard Wolowitz | ["NASA-1683","NASA-7867","SPX-76"] | [16700,85000,45000]
 42 | Sheldon Cooper  | ["US-7376","DARPA-1567"]           | [65000,135000]
(2 rows)

238 - delimited-basic-example.md

=> CREATE TABLE employees (id INT, name VARCHAR(50), department VARCHAR(50));
CREATE TABLE

=> COPY employees FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 42|Sheldon Cooper|Physics
>> 17|Howard Wolowitz|Astronomy
>> \.

=> SELECT * FROM employees;
 id |      name       |  department
----+-----------------+--------------
 17 | Howard Wolowitz | Astronomy
 42 | Sheldon Cooper  | Physics
(2 rows)

239 - demote-subcluster-restrictions.md

  • The subcluster contains a critical node.

  • The subcluster is the only primary subcluster in the database. You must have at least one primary subcluster.

  • The initiator node is a member of the subcluster you are trying to demote. You must call DEMOTE_SUBCLUSTER_TO_SECONDARY from another subcluster.

240 - dependent-independent-expressions.md

expression1
The dependent DOUBLE PRECISION expression
expression2
The independent DOUBLE PRECISION expression

241 - depends-keyword.md

Files or libraries on which this library depends, one or more files or directories on the initiator node file system or other supported file systems or object stores. For a directory, end the path entry with a slash (/), optionally followed by a wildcard (*). To specify more than one file, separate entries with colons (:).

If any path entry contains colons, such as a URI, place brackets around the entire DEPENDS path and use double quotes for the individual path elements, as in the following example:

DEPENDS '["s3://mybucket/gson-2.3.1.jar"]'

To specify libraries with multiple directory levels, see Multi-level Library Dependencies.

DEPENDS has no effect for libraries written in R. R packages must be installed locally on each node, including external dependencies.

242 - depot-eviction-unpinned-object.md

After the object is unpinned, it can be evicted from the depot by any unpinned or pinned object.

243 - depot-fetch-options.md

  • ALL (default): Fetch file data from communal storage and, if necessary, displace existing files by evicting them from the depot.

  • FETCHES: Fetch file data from communal storage only if space is available; otherwise, read the queried data directly from communal storage.

  • NONE: Do not fetch file data to the depot, read the queried data directly from communal storage.

244 - depot-warming-intro.md

On startup, the depots of new nodes are empty, while the depots of restarted nodes often contain stale data that must be refreshed. When depot warming is enabled, a node that is undergoing startup preemptively loads its depot with frequently queried and pinned data. When the node completes startup and begins to execute queries, its depot already contains much of the data it needs to process those queries. This reduces the need to fetch data from communal storage, and expedites query performance accordingly.

245 - describe-load-balance-decision-example.md

=> SELECT describe_load_balance_decision('192.168.1.25');
                        describe_load_balance_decision
--------------------------------------------------------------------------------
 Describing load balance decision for address [192.168.1.25]
Load balance cache internal version id (node-local): [2]
Considered rule [etl_rule] source ip filter [10.20.100.0/24]... input address
does not match source ip filter for this rule.
Considered rule [internal_clients] source ip filter [192.168.1.0/24]... input
address matches this rule
Matched to load balance group [group_1] the group has policy [ROUNDROBIN]
number of addresses [2]
(0) LB Address: [10.20.100.247]:5433
(1) LB Address: [10.20.100.248]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.20.100.248] port [5433]

(1 row)

=> SELECT describe_load_balance_decision('192.168.2.25');
                        describe_load_balance_decision
--------------------------------------------------------------------------------
 Describing load balance decision for address [192.168.2.25]
Load balance cache internal version id (node-local): [2]
Considered rule [etl_rule] source ip filter [10.20.100.0/24]... input address
does not match source ip filter for this rule.
Considered rule [internal_clients] source ip filter [192.168.1.0/24]... input
address does not match source ip filter for this rule.
Considered rule [subnet_192] source ip filter [192.0.0.0/8]... input address
matches this rule
Matched to load balance group [group_all] the group has policy [ROUNDROBIN]
number of addresses [3]
(0) LB Address: [10.20.100.247]:5433
(1) LB Address: [10.20.100.248]:5433
(2) LB Address: [10.20.100.249]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.20.100.248] port [5433]

(1 row)

=> SELECT describe_load_balance_decision('1.2.3.4');
                         describe_load_balance_decision
--------------------------------------------------------------------------------
 Describing load balance decision for address [1.2.3.4]
Load balance cache internal version id (node-local): [2]
Considered rule [etl_rule] source ip filter [10.20.100.0/24]... input address
does not match source ip filter for this rule.
Considered rule [internal_clients] source ip filter [192.168.1.0/24]... input
address does not match source ip filter for this rule.
Considered rule [subnet_192] source ip filter [192.0.0.0/8]... input address
does not match source ip filter for this rule.
Routing table decision: No matching routing rules: input address does not match
any routing rule source filters. Details: [Tried some rules but no matching]
No rules matched. Falling back to classic load balancing.
Classic load balance decision: Classic load balancing considered, but either
the policy was NONE or no target was available. Details: [NONE or invalid]

(1 row)

246 - disconnect-database-best-practice-note.md

247 - disk-quota-example.md

=> CREATE SCHEMA internal DISK_QUOTA '10T';
CREATE SCHEMA

=> CREATE TABLE internal.sales (...) DISK_QUOTA '5T';
CREATE TABLE

=> CREATE TABLE internal.leads (...) DISK_QUOTA '12T';
ROLLBACK 0:  Table can not have a greater disk quota than its Schema

248 - disk-usage-example.md

=> SELECT * FROM DISK_QUOTA_USAGES;
    object_oid     | object_name | is_schema | total_disk_usage_in_bytes | disk_quota_in_bytes
-------------------+-------------+-----------+---------------------------+---------------------
 45035996273705100 | s           | t         |                       307 |               10240
 45035996273705104 | public.t    | f         |                       614 |                1024
 45035996273705108 | s.t         | f         |                       307 |                2048
(3 rows)

249 - do-you-need-to-put-your-query-on-a-budger.md

Do You Need to Put Your Query on a Budget?

250 - driver-download-link.md

Vertica driver downloads page

251 - drop-secondary-pool.md

If you try to drop a resource pool that is the secondary pool for another resource pool, Vertica returns an error. The error lists the resource pools that depend on the secondary pool you tried to drop. To drop a secondary resource pool, first set the CASCADE TO parameter to DEFAULT on the primary resource pool, and then drop the secondary pool.
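
For example, a minimal sketch of that sequence, assuming CASCADE TO accepts DEFAULT as described above; the pool names are illustrative:

=> ALTER RESOURCE POOL primary_pool CASCADE TO DEFAULT;
=> DROP RESOURCE POOL secondary_pool;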

252 - droppable-tls-configs.md

Predefined TLS Configurations and TLS Configurations that manage a connection type cannot be dropped, nor can the keys and certificates referenced by such TLS Configurations. For details, see TLS configurations.

253 - ecs-mode-hint-example.md

The following example shows the query plan for a simple single-table query that is forced to use the compute-optimized strategy:

=> EXPLAIN SELECT /*+ECSMode(COMPUTE_OPTIMIZED)*/ employee_last_name,
             employee_first_name,employee_age
             FROM employee_dimension
             ORDER BY employee_age DESC;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 ------------------------------
 QUERY PLAN DESCRIPTION:
 The execution of this query involves non-participating nodes.
 Crunch scaling strategy preserves data segmentation
 ------------------------------
. . .

254 - ecs-mode-hint-parameters.md

  • AUTO: The optimizer chooses the strategy to use, useful only if ECS mode is set at the session level (see Setting the ECS Strategy for the Session or Database).

  • IO_OPTIMIZED: Use I/O-optimized strategy.

  • COMPUTE_OPTIMIZED: Use compute-optimized strategy.

  • NONE: Disable use of ECS for this query. Only participating nodes are involved in query execution; collaborating nodes are not.

255 - ecs-session-subscriptions-example.md

=> SELECT node_name, shard_name, is_collaborating, is_participating
        FROM V_CATALOG.SESSION_SUBSCRIPTIONS
        WHERE is_participating = TRUE OR is_collaborating = TRUE
        ORDER BY shard_name, node_name;
      node_name       | shard_name  | is_collaborating | is_participating
----------------------+-------------+------------------+------------------
 v_verticadb_node0004 | replica     | f                | t
 v_verticadb_node0005 | replica     | f                | t
 v_verticadb_node0006 | replica     | t                | f
 v_verticadb_node0007 | replica     | f                | t
 v_verticadb_node0008 | replica     | t                | f
 v_verticadb_node0009 | replica     | t                | f
 v_verticadb_node0007 | segment0001 | f                | t
 v_verticadb_node0008 | segment0001 | t                | f
 v_verticadb_node0005 | segment0002 | f                | t
 v_verticadb_node0009 | segment0002 | t                | f
 v_verticadb_node0004 | segment0003 | f                | t
 v_verticadb_node0006 | segment0003 | t                | f
(12 rows)

256 - effective-privileges-example.md

In the following example, user Glenn has set the REPORTER role and wants to check his effective privileges on schema s1 and table s1.articles.

  • Table s1.articles inherits privileges from its schema (s1).

  • The REPORTER role has the following privileges:

    • SELECT on schema s1

    • INSERT WITH GRANT OPTION on table s1.articles

  • User Glenn has the following privileges:

    • UPDATE and USAGE on schema s1.

    • DELETE on table s1.articles.

GET_PRIVILEGES_DESCRIPTION returns the following effective privileges for Glenn on schema s1:

=> SELECT GET_PRIVILEGES_DESCRIPTION('schema', 's1');
   GET_PRIVILEGES_DESCRIPTION
--------------------------------
 SELECT, UPDATE, USAGE
(1 row)

GET_PRIVILEGES_DESCRIPTION returns the following effective privileges for Glenn on table s1.articles:


=> SELECT GET_PRIVILEGES_DESCRIPTION('table', 's1.articles');
   GET_PRIVILEGES_DESCRIPTION
--------------------------------
 INSERT*, SELECT, UPDATE, DELETE
(1 row)

257 - enable-file-cache-description.md

When set to True, Vertica caches the SecureData policy file and encryption keys to disk, rather than just to memory. Defaults to false.

258 - enable-with-clause-materialization_snip.md

Enables materialization of all queries in the current WITH clause. Otherwise, materialization is set by configuration parameter WithClauseMaterialization, by default set to 0 (disabled). If WithClauseMaterialization is disabled, materialization is automatically cleared when the primary query of the WITH clause returns. For details, see Materialization of WITH clause.

259 - encoded-by-column-def.md

A comma-delimited list of columns from the source table, where each column is qualified by one or both of the following encoding options:

  • ACCESSRANK integer: Overrides the default access rank for a column, useful for prioritizing access to a column. See Prioritizing column access speed.

  • ENCODING encoding-type: Specifies the type of encoding to use on the column. The default encoding type is AUTO.

This option and column-name-list are mutually exclusive. This option is invalid for external tables.

260 - encoding_snip.md

The process of converting data into a standard format. In Vertica, encoded data can be processed directly, while compressed data cannot. Vertica uses a number of different encoding strategies, depending on column data type, table cardinality, and sort order.

261 - encoding-supported-numeric-datatypes.md

Vertica supports the following encoding for numeric data types:

  • Precision ≤ 18: AUTO, BLOCK_DICT, BLOCKDICT_COMP, COMMONDELTA_COMP, DELTAVAL, GCDDELTA, and RLE

  • Precision > 18: AUTO, BLOCK_DICT, BLOCKDICT_COMP, RLE

262 - eon-on-prem-choose-depot-path.md

Step 4: choose a depot path on all nodes

Choose or create a directory on each node for the depot storage path. The directory you supply for the depot storage path parameter must:

  • Have the same path on all nodes in the cluster (for example, /home/dbadmin/depot).

  • Be readable and writable by the dbadmin user.

  • Have sufficient storage. By default, Vertica uses 60% of the filesystem space containing the directory for depot storage. You can limit the size of the depot by using the --depot-size argument in the create_db command. See Configuring your Vertica cluster for Eon Mode for guidelines on choosing a size for your depot.

The admintools create_db tool will attempt to create the depot path for you if it doesn't exist.

263 - eon-on-prem-depot-size-spec.md

You can specify the size in two ways:

  • integer%: Percentage of filesystem's disk space to allocate.

  • integer{K|M|G|T}: Amount of disk space to allocate for the depot in kilobytes, megabytes, gigabytes, or terabytes.

However you specify this value, the depot size cannot be more than 80 percent of disk space of the file system where the depot is stored.

264 - eon-on-prem-disable-streaming-limits.md

After creating the database, disable the AWSStreamingConnectionPercentage configuration parameter. This setting is unnecessary for an Eon Mode on-premises install with communal storage on FlashBlade or MinIO. This configuration parameter controls the number of connections to the object store that Vertica uses for streaming reads. In a cloud environment, this setting helps avoid having streaming data from the object store use up all of the available file handles. It leaves some file handles available for other object store operations. Due to the low latency of on-premises object stores, this option is unnecessary. Set it to 0 to disable it.

The following example shows how to disable this parameter using ALTER DATABASE...SET PARAMETER:

=> ALTER DATABASE DEFAULT SET PARAMETER AWSStreamingConnectionPercentage = 0;
ALTER DATABASE

265 - eon-on-prem-install-step1-s3-bucket-creds.md

Step 1: create a bucket and credentials on the Pure Storage FlashBlade

To use a Pure Storage FlashBlade appliance as a communal storage location for an Eon Mode database you must have:

  • The IP address of the FlashBlade appliance. You must also have the connection port number if your FlashBlade is not using the standard port 80 or 443 to access the bucket. All of the nodes in your Vertica cluster must be able to access this IP address. Make sure any firewalls between the FlashBlade appliance and the nodes are configured to allow access.

  • The name of the bucket on the FlashBlade to use for communal storage.

  • An access key and secret key for a user account that has read and write access to the bucket.

See the Pure Storage support site for instructions on how to create the bucket and the access keys needed for a communal storage location.

266 - eon-on-prem-install-vertica-steps.md

Step 2: install Vertica on your cluster

To install Vertica:

  1. Ensure your nodes are configured properly by reviewing all of the content in the Before you install Vertica section.

  2. Use the install_vertica script to verify that your nodes are correctly configured and to install the Vertica binaries on all of your nodes. Follow the steps under Install Vertica using the command line to install Vertica.

267 - epsilon.md

FLOAT in the range (0.0, 1.0), the error value at which to stop training. Training stops if either the difference between the actual and predicted values is less than or equal to epsilon or if the number of iterations exceeds max_iterations.

Default: 1e-6

268 - error-c-.md

The following function divides two integers. To prevent division by zero, it tests the second parameter and fails if it is zero:

class Div2ints : public ScalarFunction
{
public:
  virtual void processBlock(ServerInterface &srvInterface,
                            BlockReader &arg_reader,
                            BlockWriter &res_writer)
  {
    // While we have inputs to process
    do
      {
        const vint a = arg_reader.getIntRef(0);
        const vint b = arg_reader.getIntRef(1);
        if (b == 0)
          {
            vt_report_error(1,"Attempted divide by zero");
          }
        res_writer.setInt(a/b);
        res_writer.next();
      }
    while (arg_reader.next());
  }
};

Loading and invoking the function demonstrates how the error appears to the user. Fenced and unfenced modes use different error numbers.

=> CREATE LIBRARY Div2IntsLib AS '/home/dbadmin/Div2ints.so';
CREATE LIBRARY
=> CREATE FUNCTION div2ints AS LANGUAGE 'C++' NAME 'Div2intsInfo' LIBRARY Div2IntsLib;
CREATE FUNCTION
=> SELECT div2ints(25, 5);
 div2ints
----------
        5
(1 row)
=> SELECT * FROM MyTable;
 a  | b
----+---
 12 | 6
  7 | 0
 12 | 2
 18 | 9
(4 rows)
=> SELECT * FROM MyTable WHERE div2ints(a, b) > 2;
ERROR 3399:  Error in calling processBlock() for User Defined Scalar Function
div2ints at Div2ints.cpp:21, error code: 1, message: Attempted divide by zero

269 - error-java.md

In the following example, if either of the arguments is NULL, the processBlock() method throws an exception:

@Override
public void processBlock(ServerInterface srvInterface,
                         BlockReader argReader,
                         BlockWriter resWriter)
            throws UdfException, DestroyInvocation
{
  do {
      // Test for NULL value. Throw exception if one occurs.
      if (argReader.isLongNull(0) || argReader.isLongNull(1) ) {
          // No nulls allowed. Throw exception
          throw new UdfException(1234, "Cannot add a NULL value");
     }

When your UDx throws an exception, the side process running your UDx reports the error back to Vertica and exits. Vertica displays the error message contained in the exception and a stack trace to the user:

=> SELECT add2ints(2, NULL);
ERROR 3399:  Failure in UDx RPC call InvokeProcessBlock(): Error in User Defined Object [add2ints], error code: 1234
com.vertica.sdk.UdfException: Cannot add a NULL value
        at com.example.Add2intsFactory$Add2ints.processBlock(Add2intsFactory.java:37)
        at com.vertica.udxfence.UDxExecContext.processBlock(UDxExecContext.java:700)
        at com.vertica.udxfence.UDxExecContext.run(UDxExecContext.java:173)
        at java.lang.Thread.run(Thread.java:662)

270 - error-python.md

In this example, if one of the arguments is less than 100, then the Python UDx throws an error:

    while(True):
        # Example of error checking best practices.
        product_id = block_reader.getInt(2)
        if product_id < 100:
            raise ValueError("Invalid Product ID")

An error generates a message like the following:

=> SELECT add2ints(prod_cost, sale_price, product_id) FROM bunch_of_numbers;
ERROR 3399:  Failure in UDx RPC call InvokeProcessBlock(): Error calling processBlock() in User Defined Object [add2ints]
at [/udx/PythonInterface.cpp:168], error code: 0,
message: Error [/udx/PythonInterface.cpp:385] function ['call_method']
(Python error type [<class 'ValueError'>])
Traceback (most recent call last):
  File "/home/dbadmin/py_db/v_py_db_node0001_catalog/Libraries/02fc4af0ace6f91eefa74baecf3ef76000a0000000004fc4/pylib_02fc4af0ace6f91eefa74baecf3ef76000a0000000004fc4.py",
line 13, in processBlock
    raise ValueError("Invalid Product ID")
ValueError: Invalid Product ID

271 - error-r.md

In this example, if the third column of the data frame does not match the specified Product ID, then the R UDx throws an error:


Calculate_Cost_w_Tax <- function(input.data.frame) {
  # Must match the Product ID 11444
  if ( !all(input.data.frame[, 3] == 11444) ) {
    stop("Invalid Product ID!")
  } else {
    cost_w_tax <- data.frame(input.data.frame[, 1] * input.data.frame[, 2])
  }
  return(cost_w_tax)
}

Calculate_Cost_w_TaxFactory <- function() {
  list(name=Calculate_Cost_w_Tax,
       udxtype=c("scalar"),
       intype=c("float","float", "float"),
       outtype=c("float"))
}

An error generates a message like the following:

=> SELECT Calculate_Cost_w_Tax(item_price, tax_rate, prod_id) FROM Inventory_Sales_Data;
vsql:sql_test_multiply.sql:21: ERROR 3399:  Failure in UDx RPC call InvokeProcessBlock():
Error calling processBlock() in User Defined Object [mul] at
[/udx/RInterface.cpp:1308],
error code: 0, message: Exception in processBlock :Invalid Product ID!

272 - error-tolerance-approximate-count-distinct.md

Numeric value that represents the desired percentage of error tolerance, distributed around the value returned by this function. The smaller the error tolerance, the closer the approximation.

You can set error-tolerance to a minimum value of 0.88. Vertica imposes no maximum restriction, but any value greater than 5 is implemented with 5% error tolerance.
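
For example, a minimal sketch that passes a 5 percent error tolerance as the second argument of APPROXIMATE_COUNT_DISTINCT; the table is illustrative:

=> SELECT APPROXIMATE_COUNT_DISTINCT(product_key, 5) FROM inventory_fact;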

273 - escape-characters.md

To escape a ' (single quote) character, use ''. Similarly, to escape a " (double quote) character, use "".
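
For example, a doubled single quote inside a string literal is read as one literal quote; the same doubling applies to double quotes inside quoted identifiers:

=> SELECT 'Vertica''s escape rules';
        ?column?
------------------------
 Vertica's escape rules
(1 row)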

274 - et-analytics-allsales-sample-schema.md

CREATE TABLE allsales(state VARCHAR(20), name VARCHAR(20), sales INT);
INSERT INTO allsales VALUES('MA', 'A', 60);
INSERT INTO allsales VALUES('NY', 'B', 20);
INSERT INTO allsales VALUES('NY', 'C', 15);
INSERT INTO allsales VALUES('MA', 'D', 20);
INSERT INTO allsales VALUES('MA', 'E', 50);
INSERT INTO allsales VALUES('NY', 'F', 40);
INSERT INTO allsales VALUES('MA', 'G', 10);
COMMIT;

275 - et-connection-with-locale-returns-success-with-info.md

If you specify a locale either in the connection string or in the DSN, the call to the connection function returns SQL_SUCCESS_WITH_INFO on a successful connection, with messages about the state of the locale.

276 - et-copyright-notice-content.md

Open Text Corporation and its licensors. All rights reserved.

Open Text Corporation

55 Blue Sky Drive, Suite 102

Burlington, MA 01803

Phone: +1 617 386 4400

E-Mail: contactvertica@microfocus.com

Web site: http://www.vertica.com

The software described in this copyright notice is furnished under a license and may be used or copied only in accordance with the terms of such license. Open Text Corporation software contains proprietary information, as well as trade secrets of Open Text Corporation, and is protected under international copyright law. Reproduction, adaptation, or translation, in whole or in part, by any means — graphic, electronic or mechanical, including photocopying, recording, taping, or storage in an information retrieval system — of any part of this work covered by copyright is prohibited without prior written permission of the copyright owner, except as allowed under the copyright laws.

This product or products depicted herein may be protected by one or more U.S. or international patents or pending patents.

Trademarks

Vertica™, the Vertica Analytics Platform™, and FlexStore™ are trademarks of Open Text Corporation.

Adobe®, Acrobat®, and Acrobat® Reader® are registered trademarks of Adobe Systems Incorporated.

AMD™ is a trademark of Advanced Micro Devices, Inc., in the United States and other countries.

DataDirect® and DataDirect Connect® are registered trademarks of Progress Software Corporation in the U.S. and other countries.

Fedora™ is a trademark of Red Hat, Inc.

Intel® is a registered trademark of Intel.

Linux® is a registered trademark of Linus Torvalds.

Microsoft® is a registered trademark of Microsoft Corporation.

Novell® is a registered trademark and SUSE™ is a trademark of Novell, Inc., in the United States and other countries.

Oracle® is a registered trademark of Oracle Corporation.

Red Hat® is a registered trademark of Red Hat, Inc.

VMware® is a registered trademark or trademark of VMware, Inc., in the United States and/or other jurisdictions. Other products mentioned may be trademarks or registered trademarks of their respective companies.

Information on third-party software used in Vertica, including details on open-source software, is available in the LICENSES file in the installation directory (/opt/vertica by default).

277 - et-create-object-if-not-exists.md

If an object with the same name exists, the statement returns without creating the object. If you do not use this directive and the object already exists, Vertica returns an error.

The IF NOT EXISTS clause is useful for SQL scripts where you might not know if the object already exists. The ON ERROR STOP directive can be helpful in scripts.
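
For example, a minimal sketch with a hypothetical table name:

=> CREATE TABLE IF NOT EXISTS orders_staging (order_id INT, order_date DATE);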

278 - et-create-object-names.md

conforms to conventions described in Identifiers.

279 - et-encoding-type.md

The column encoding type, by default set to AUTO.

280 - et-external-unstructured-example.md

To create an external flex table:

=> CREATE flex external table mountains() AS COPY FROM 'home/release/KData/kmm_ountains.json' PARSER fjsonparser();
CREATE TABLE

As with other flex tables, creating an external flex table produces two regular tables: the named table and its associated _keys table. The keys table is not an external table:

=> \dt mountains
                 List of tables
 Schema |   Name    | Kind  |  Owner  | Comment
--------+-----------+-------+---------+---------
 public | mountains | table | release |
(1 row)

You can use the helper function, COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW, to compute keys and create a view for the external table:

=> SELECT compute_flextable_keys_and_build_view ('appLog');

                     compute_flextable_keys_and_build_view
--------------------------------------------------------------------------------------------------
Please see public.appLog_keys for updated keys
The view public.appLog_view is ready for querying
(1 row)

Check the keys from the _keys table for the results of running the helper application:

=> SELECT * FROM appLog_keys;
                          key_name                       | frequency |   data_type_guess
----------------------------------------------------------+-----------+------------------
contributors                                             |         8 | varchar(20)
coordinates                                              |         8 | varchar(20)
created_at                                               |         8 | varchar(60)
entities.hashtags                                        |         8 | long varbinary(186)
.
.
.
retweeted_status.user.time_zone                          |         1 | varchar(20)
retweeted_status.user.url                                |         1 | varchar(68)
retweeted_status.user.utc_offset                         |         1 | varchar(20)
retweeted_status.user.verified                           |         1 | varchar(20)
(125 rows)

You can query the view:

=> SELECT "user.lang" FROM appLog_view;
 user.lang
-----------
it
en
es
en
en
es
tr
en
(12 rows)

281 - et-internal-tm-shot-for-olock.md

If the operation cannot obtain an O lock on the target table, Vertica tries to close any internal Tuple Mover sessions that are running on that table. If successful, the operation can proceed. Explicit Tuple Mover operations that are running in user sessions do not close. If an explicit Tuple Mover operation is running on the table, the operation proceeds only after the Tuple Mover operation completes.

282 - et-locale.md

Locale specifies the user's language, country, and any special variant preferences, such as collation. Vertica uses locale to determine the behavior of certain string functions. Locale also determines the collation for various SQL commands that require ordering and comparison, such as aggregate GROUP BY and ORDER BY clauses, joins, and the analytic ORDER BY clause.

The default locale for a Vertica database is en_US@collation=binary (English US). You can define a new default locale that is used for all sessions on the database. You can also override the locale for individual sessions. However, projections are always collated using the default en_US@collation=binary collation, regardless of the session collation. Any locale-specific collation is applied at query time.

If you set the locale to null, Vertica sets the locale to en_US_POSIX. You can set the locale back to the default locale and collation with the vsql meta-command \locale.
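
For example, the following sketch issues \locale with the default locale (vsql's informational output is omitted):

=> \locale en_US@collation=binary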

You can set the locale through ODBC, JDBC, and ADO.NET.

283 - et-no-special-permissions-required.md

Non-superuser: No explicit privileges required. You only see records for tables that you have privileges to view.

284 - et-null-input-behavior-table.md

CALLED_ON_NULL_INPUT
The function must be called, even if one or more arguments are NULL.
RETURN_NULL_ON_NULL_INPUT
The function always returns a NULL value if any of its arguments are NULL.
STRICT
A synonym for RETURN_NULL_ON_NULL_INPUT.
DEFAULT_STRICTNESS
The default strictness setting. This is the same as CALLED_ON_NULL_INPUT.

285 - et-regular-expression-function-notes.md

This function operates on UTF-8 strings using the default locale, even if the locale is set otherwise.

286 - et-regular-expression-search-modifiers.md

One or more single-character flags that modify how the regular expression pattern is matched to string-expression:

  • b: Treat strings as binary octets, rather than UTF-8 characters.

  • c (default): Force the match to be case sensitive.

  • i: Force the match to be case insensitive.

  • m: Treat the string to match as multiple lines. Using this modifier, the start of line (^) and end of line ($) regular expression operators match line breaks (\n) within the string. Without the m modifier, the start and end of line operators match only the start and end of the string.

  • n: Match the regular expression operator (.) to a newline (\n). By default, the . operator matches any character except a newline.

  • x: Add comments to regular expressions. The x modifier causes the function to ignore all un-escaped space characters and comments in the regular expression. Comments start with hash (#) and end with a newline (\n). All spaces in the regular expression to be matched in strings must be escaped with a backslash (\).
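
For example, the following sketch (with hypothetical input values) uses the i modifier to make a REGEXP_LIKE match case insensitive:

=> SELECT REGEXP_LIKE('Vertica Analytics', 'vertica', 'i') AS is_match;
 is_match
----------
 t
(1 row)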

287 - et-setting-connection-properties.md

You can set a connection property in one of the following ways:

  • Include the property name and value as part of the connection string you pass to the method DriverManager.getConnection().

  • Set the properties in a Properties object, and then pass it to the method DriverManager.getConnection().

  • Use the method VerticaConnection.setProperty(). With this approach, you can change only those connection properties that remain changeable after the connection has been established.

Also, some standard JDBC connection properties have getters and setters on the Connection interface, such as Connection.setAutoCommit().

288 - et-sort-optimization-schema.md

CREATE TABLE sortopt (
    a INT NOT NULL,
    b INT NOT NULL,
    c INT,
    d INT
);
CREATE PROJECTION sortopt_p (
   a_proj,
   b_proj,
   c_proj,
   d_proj )
AS SELECT * FROM sortopt
ORDER BY a,b,c 
UNSEGMENTED ALL NODES;
INSERT INTO sortopt VALUES(5,2,13,84);
INSERT INTO sortopt VALUES(14,22,8,115);
INSERT INTO sortopt VALUES(79,9,401,33);

289 - et-string-to-numeric-conversion.md

String-to-numeric data type conversions accept formats of quoted constants for scientific notation, binary scaling, hexadecimal, and combinations of numeric-type literals:

  • Scientific notation:

    => SELECT FLOAT '1e10';
      ?column?
    -------------
     10000000000
    (1 row)
    
  • BINARY scaling:

    => SELECT NUMERIC '1p10';
     ?column?
    ----------
         1024
    (1 row)
    
  • hexadecimal:

    => SELECT NUMERIC '0x0abc';
     ?column?
    ----------
         2748
    (1 row)
    

290 - et-systab-column-id-column-definition-concat.md

Catalog-assigned VARCHAR value that uniquely identifies a table column.

291 - et-systab-oid-genericglobal.md

Catalog-assigned integer value that uniquely identifies the

292 - et-systab-projection-id-column-definition-global.md

Catalog-assigned numeric value that uniquely identifies the projection.

293 - et-systab-table-id-column-definition-global.md

Catalog-assigned numeric value that uniquely identifies the table.

294 - et-ticks-table-schema.md

DROP TABLE Ticks CASCADE;


CREATE TABLE Ticks (ts TIMESTAMP, Stock varchar(10), Bid float);
INSERT INTO Ticks VALUES('2011-07-12 10:23:54', 'abc', 10.12);
INSERT INTO Ticks VALUES('2011-07-12 10:23:58', 'abc', 10.34);
INSERT INTO Ticks VALUES('2011-07-12 10:23:59', 'abc', 10.75);
INSERT INTO Ticks VALUES('2011-07-12 10:25:15', 'abc', 11.98);
INSERT INTO Ticks VALUES('2011-07-12 10:25:16', 'abc');
INSERT INTO Ticks VALUES('2011-07-12 10:25:22', 'xyz', 45.16);
INSERT INTO Ticks VALUES('2011-07-12 10:25:27', 'xyz', 49.33);
INSERT INTO Ticks VALUES('2011-07-12 10:31:12', 'xyz', 65.25);
INSERT INTO Ticks VALUES('2011-07-12 10:31:15', 'xyz');

COMMIT;

295 - et-udx-grant-warning.md

Installing an untrusted UDL function can compromise the security of the server. UDxs can contain arbitrary code. In particular, user-defined parser functions can read data from any arbitrary location. It is up to the developer of the function to enforce proper security limitations. Superusers must not grant access to UDxs to untrusted users.

296 - et-volatility-settings-table.md

VOLATILE
Repeated calls to the function with the same arguments always return different values. Vertica always calls volatile functions for each invocation.
IMMUTABLE
Calls to the function with the same arguments always return the same value.
STABLE
Repeated calls to the function with the same arguments within the same statement return the same output. For example, a function that returns the current user name is stable because the user cannot change within a statement. The user name could change between statements.
DEFAULT_VOLATILITY
The default volatility. This is the same as VOLATILE.

297 - et-windows-copylocal-path-limitation-note.md

298 - etce-license-blurb.md

Your Vertica Community Edition download package includes the Community Edition license, which allows three nodes and 1TB of data. The Vertica Community Edition license does not expire.

299 - etconnect-authentication-note.md

When importing from or exporting to a Vertica database, you can connect only to a database that uses trusted (username only) or password-based authentication, as described in Security and authentication. OAuth and Kerberos authentication methods are not supported.

300 - etdate-part-and-extract-fields.md

CENTURY
The century number.

The first century starts at 0001-01-01 00:00:00 AD. This definition applies to all Gregorian calendar countries. There is no century number 0; the count goes from –1 to 1.

DAY
The day (of the month) field (1–31).
DECADE
The year field divided by 10.
DOQ
The day within the current quarter. DOQ recognizes leap year days.
DOW
Zero-based day of the week, where Sunday=0.
DOY
The day of the year (1–365/366)
EPOCH
Specifies to return one of the following:
  • For DATE and TIMESTAMP values: the number of seconds before or since 1970-01-01 00:00:00-00 (if before, a negative number).

  • For INTERVAL values, the total number of seconds in the interval.

HOUR
The hour field (0–23).
ISODOW
The ISO day of the week, an integer between 1 and 7 where Monday is 1.
ISOWEEK
The ISO week of the year, an integer between 1 and 53.
ISOYEAR
The ISO year.
MICROSECONDS
The seconds field, including fractional parts, multiplied by 1,000,000. This includes full seconds.
MILLENNIUM
The millennium number, where the first millennium is 1 and each millennium starts on 01-01-y001. For example, millennium 2 starts on 01-01-1001.
MILLISECONDS
The seconds field, including fractional parts, multiplied by 1000. This includes full seconds.
MINUTE
The minutes field (0–59).
MONTH
For TIMESTAMP values, the number of the month within the year (1–12); for INTERVAL values, the number of months, modulo 12 (0–11).
QUARTER
The calendar quarter of the specified date as an integer, where the January–March quarter is 1. Valid only for TIMESTAMP values.
SECOND
The seconds field, including fractional parts: 0–59, or 0–60 if the operating system implements leap seconds.
TIME ZONE
The time zone offset from UTC, in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC.
TIMEZONE_HOUR
The hour component of the time zone offset.
TIMEZONE_MINUTE
The minute component of the time zone offset.
WEEK
The number of the week of the calendar year that the day is in.
YEAR
The year field. There is no 0 AD, so subtract BC years from AD years accordingly.
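
For example, a minimal sketch that extracts two of these fields from a date literal (output shown for illustration):

=> SELECT EXTRACT(DOW FROM DATE '2023-06-15') AS dow, EXTRACT(YEAR FROM DATE '2023-06-15') AS yr;
 dow |  yr
-----+------
   4 | 2023
(1 row)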

301 - etdisplay-license-example.md

=> SELECT DISPLAY_LICENSE();
                  DISPLAY_LICENSE
---------------------------------------------------
 Vertica Systems, Inc.
2007-08-03
Perpetual
500GB

(1 row)

302 - etget-compliance-status-example.md


=> select GET_COMPLIANCE_STATUS();
                       GET_COMPLIANCE_STATUS
---------------------------------------------------------------------------------
 Raw Data Size: 2.00GB +/- 0.003GB
 License Size : 4.000GB
 Utilization  : 50%
 Audit Time   : 2011-03-09 09:54:09.538704+00
 Compliance Status : The database is in compliance with respect to raw data size.
 License End Date: 04/06/2011
 Days Remaining: 28.59
(1 row)

303 - etto-timestamp-and-to-date-notes.md

  • TO_TIMESTAMP, TO_TIMESTAMP_TZ, and TO_DATE skip multiple blank spaces in the input string if the FX option is not used. FX must be specified as the first item in the template. For example:

    • TO_TIMESTAMP('2000 JUN', 'YYYY MON') is correct.

    • TO_TIMESTAMP('2000 JUN', 'FXYYYY MON') returns an error, because TO_TIMESTAMP expects one space only.

  • The YYYY conversion from string to TIMESTAMP or DATE has a restriction if you use a year with more than four digits. You must use a non-digit character or template after YYYY, otherwise the year is always interpreted as four digits. For example, given the following arguments, TO_DATE interprets the five-digit year 20000 as a four-digit year:

    => SELECT TO_DATE('200001131','YYYYMMDD');
      TO_DATE
    ------------
     2000-01-13
    (1 row)
    

    Instead, use a non-digit separator after the year. For example:

    => SELECT TO_DATE('20000-1131', 'YYYY-MMDD');
       TO_DATE
    -------------
     20000-12-01
    (1 row)
    
  • In conversions from string to TIMESTAMP or DATE, the CC field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y, then the year is computed as (CC–1)*100+YY.

304 - etudtf-query-restrictions.md

A query that includes a UDTF cannot:

305 - eval-functions-main-class-param.md

Used only if targets is of type CHAR/VARCHAR, specifies the class to associate with the probabilities argument.

306 - eval-functions-num-classes.md

An integer > 1, specifies the number of classes to pass to the function.

You must set this parameter if the specified input columns are of type CHAR/VARCHAR. Otherwise, the function processes this parameter according to the column data types:

  • INTEGER: Defaults to 2. You must set this parameter if the number of classes is any other value.

  • BOOLEAN: Defaults to 2 and cannot be set to any other value.

307 - eval-functions-targets-predictions-args.md

targets
An input column that contains the true values of the response variable.
predictions
An input column that contains the predicted class labels.

Arguments targets and predictions must be set to input columns of the same data type, one of the following: INTEGER, BOOLEAN, or CHAR/VARCHAR. Depending on their data type, these columns identify classes as follows:

  • INTEGER: Zero-based consecutive integers between 0 and (num-classes-1) inclusive, where num-classes is the number of classes. For example, given the following input column values— {0, 1, 2, 3, 4}—Vertica assumes five classes.

  • BOOLEAN: Yes or No

  • CHAR/VARCHAR: Class names. If the input columns are of type CHAR/VARCHAR, you must also set parameter num_classes to the number of classes.
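
For example, a sketch of one evaluation function that takes these arguments, using a hypothetical table with INTEGER-typed columns:

=> SELECT CONFUSION_MATRIX(obs, pred USING PARAMETERS num_classes=2) OVER()
   FROM test_results;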

308 - eval-functions-targets-probabilities-args.md

targets
An input column that contains the true values of the response variable, one of the following data types: INTEGER, BOOLEAN, or CHAR/VARCHAR. Depending on the column data type, the function processes column data as follows:
  • INTEGER: Uses the input column as containing the true value of the response variable.

  • BOOLEAN: Resolves Yes to 1 and No to 0.

  • CHAR/VARCHAR: Resolves the value specified by parameter main_class to 1, all other values to 0.

probabilities
A FLOAT input column that contains the predicted probability of response being the main class, set to 1 if targets is of type INTEGER.

309 - example-begin-end.md

=> BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED READ WRITE;
BEGIN
=> CREATE TABLE sample_table (a INT);
CREATE TABLE
=> INSERT INTO sample_table (a) VALUES (1);
OUTPUT
--------
1
(1 row)

=> END;
COMMIT

310 - exclude-columns-param.md

Comma-separated list of column names from input-columns to exclude from processing.

311 - exclude-columns-predictor-param.md

Comma-separated list of columns from predictor-columns to exclude from processing.

312 - exists-versus-in-operators.md

EXISTS versus IN

Whether you use EXISTS or IN subqueries depends on which predicates you select in outer and inner query blocks. For example, the following query gets a list of all the orders placed by all stores on January 2, 2007 for vendors with records in the vendor table:

=> SELECT store_key, order_number, date_ordered
   FROM store.store_orders_fact WHERE EXISTS
     (SELECT 1 FROM public.vendor_dimension vd JOIN store.store_orders_fact ord ON vd.vendor_key = ord.vendor_key)
   AND date_ordered = '2007-01-02';
 store_key | order_number | date_ordered
-----------+--------------+--------------
       114 |       271071 | 2007-01-02
        19 |       290888 | 2007-01-02
       132 |        58942 | 2007-01-02
       232 |         9286 | 2007-01-02
       126 |       224474 | 2007-01-02
       196 |        63482 | 2007-01-02
  ...
       196 |        83327 | 2007-01-02
       138 |       278373 | 2007-01-02
       179 |       293586 | 2007-01-02
       155 |       213413 | 2007-01-02
(506 rows)

The above query checks for the existence of the vendor and the order date. To return a particular value, rather than simply test for existence, the following query looks for orders placed by the vendor who got the best deal on January 2, 2007:

=> SELECT store_key, order_number, date_ordered, vendor_name
   FROM store.store_orders_fact ord JOIN public.vendor_dimension vd ON ord.vendor_key = vd.vendor_key
   WHERE vd.deal_size IN (SELECT MAX(deal_size) FROM public.vendor_dimension) AND date_ordered = '2007-01-02';
 store_key | order_number | date_ordered |     vendor_name
-----------+--------------+--------------+----------------------
        50 |        99234 | 2007-01-02   | Everything Wholesale
        81 |       200802 | 2007-01-02   | Everything Wholesale
       115 |        13793 | 2007-01-02   | Everything Wholesale
       204 |        41842 | 2007-01-02   | Everything Wholesale
       133 |       169025 | 2007-01-02   | Everything Wholesale
       163 |       208580 | 2007-01-02   | Everything Wholesale
        29 |       154972 | 2007-01-02   | Everything Wholesale
       145 |       236790 | 2007-01-02   | Everything Wholesale
       249 |        54838 | 2007-01-02   | Everything Wholesale
         7 |       161536 | 2007-01-02   | Everything Wholesale
(10 rows)

313 - explain-dq-usage.md

=> EXPLAIN SELECT employee_first_name, employee_last_name FROM employee_dimension
      WHERE employee_city='San Francisco' AND job_title='Branch Manager' ORDER BY employee_last_name, employee_first_name;
 ...
 ------------------------------
 QUERY PLAN DESCRIPTION:
 ------------------------------
 EXPLAIN SELECT employee_first_name, employee_last_name FROM employee_dimension WHERE employee_city='San Francisco' AND job_title='Branch Manager' ORDER BY employee_last_name, employee_first_name;

 The following active directed query(query name: findEmployeesCityJobTitle_OPT) is being executed:
 SELECT /*+verbatim*/  employee_dimension.employee_first_name, employee_dimension.employee_last_name
 FROM public.employee_dimension employee_dimension/*+projs('public.employee_dimension')*/
 WHERE ((employee_dimension.employee_city = 'San Francisco'::varchar(13)) AND (employee_dimension.job_title = 'Branch Manager'::varchar(14)))
 ORDER BY employee_dimension.employee_last_name, employee_dimension.employee_first_name

 Access Path:
 +-SORT [Cost: 222, Rows: 10K (NO STATISTICS)] (PATH ID: 1)
 |  Order: employee_dimension.employee_last_name ASC, employee_dimension.employee_first_name ASC
 |  Execute on: All Nodes
 | +---> STORAGE ACCESS for employee_dimension [Cost: 60, Rows: 10K (NO STATISTICS)] (PATH ID: 2)
 | |      Projection: public.employee_dimension_super
 | |      Materialize: employee_dimension.employee_first_name, employee_dimension.employee_last_name
 | |      Filter: (employee_dimension.employee_city = 'San Francisco')
 | |      Filter: (employee_dimension.job_title = 'Branch Manager')
 | |      Execute on: All Nodes
 ...

314 - explode-array-multi.md

The following example uses a multi-dimensional array:

=> SELECT name, pingtimes FROM network_tests;
 name |                       pingtimes
------+-------------------------------------------------------
 eng1 | [[24.24,25.27,27.16,24.97],[23.97,25.01,28.12,29.5]]
 eng2 | [[27.12,27.91,28.11,26.95],[29.01,28.99,30.11,31.56]]
 qa1  | [[23.15,25.11,24.63,23.91],[22.85,22.86,23.91,31.52]]
(3 rows)

=> SELECT EXPLODE(name, pingtimes USING PARAMETERS explode_count=1) OVER()
   FROM network_tests;
 name | position |           value
------+----------+---------------------------
 eng1 |        0 | [24.24,25.27,27.16,24.97]
 eng1 |        1 | [23.97,25.01,28.12,29.5]
 eng2 |        0 | [27.12,27.91,28.11,26.95]
 eng2 |        1 | [29.01,28.99,30.11,31.56]
 qa1  |        0 | [23.15,25.11,24.63,23.91]
 qa1  |        1 | [22.85,22.86,23.91,31.52]
(6 rows)

You can rewrite the previous query as follows to produce the same results:

=> SELECT name, EXPLODE(pingtimes USING PARAMETERS skip_partitioning=true)
   FROM network_tests;

315 - explode-array.md

Consider an orders table with the following contents:

=> SELECT orderkey, custkey, prodkey, orderprices, email_addrs
   FROM orders LIMIT 5;
  orderkey  | custkey |                    prodkey                    |            orderprices            |                                                  email_addrs
------------+---------+-----------------------------------------------+-----------------------------------+----------------------------------------------------------------------------------------------------------------
 113-341987 |  342799 | ["MG-7190 ","VA-4028 ","EH-1247 ","MS-7018 "] | ["60.00","67.00","22.00","14.99"] | ["bob@example,com","robert.jones@example.com"]
 111-952000 |  342845 | ["ID-2586 ","IC-9010 ","MH-2401 ","JC-1905 "] | ["22.00","35.00",null,"12.00"]    | ["br92@cs.example.edu"]
 111-345634 |  342536 | ["RS-0731 ","SJ-2021 "]                       | ["50.00",null]                    | [null]
 113-965086 |  342176 | ["GW-1808 "]                                  | ["108.00"]                        | ["joe.smith@example.com"]
 111-335121 |  342321 | ["TF-3556 "]                                  | ["50.00"]                         | ["789123@example-isp.com","alexjohnson@example.com","monica@eng.example.com","sara@johnson.example.name",null]
(5 rows)

The following query explodes the order prices for a single customer. The other two columns are passed through and are repeated for each returned row:

=> SELECT EXPLODE(orderprices, custkey, email_addrs
                  USING PARAMETERS skip_partitioning=true)
            AS (position, orderprices, custkey, email_addrs)
   FROM orders WHERE custkey='342845' ORDER BY orderprices;
 position | orderprices | custkey |         email_addrs
----------+-------------+---------+------------------------------
        2 |             |  342845 | ["br92@cs.example.edu",null]
        3 |       12.00 |  342845 | ["br92@cs.example.edu",null]
        0 |       22.00 |  342845 | ["br92@cs.example.edu",null]
        1 |       35.00 |  342845 | ["br92@cs.example.edu",null]
(4 rows)

The previous example uses the skip_partitioning parameter. Instead of setting it for each call to EXPLODE, you can set it as a session parameter. EXPLODE is part of the ComplexTypesLib UDx library. The following example returns the same results:

=> ALTER SESSION SET UDPARAMETER FOR ComplexTypesLib skip_partitioning=true;

=> SELECT EXPLODE(orderprices, custkey, email_addrs)
            AS (position, orderprices, custkey, email_addrs)
   FROM orders WHERE custkey='342845' ORDER BY orderprices;

You can explode more than one column by specifying the explode_count parameter:

=> SELECT EXPLODE(orderkey, prodkey, orderprices
                  USING PARAMETERS explode_count=2, skip_partitioning=true)
          AS (orderkey,pk_idx,pk_val,ord_idx,ord_val)
   FROM orders
   WHERE orderkey='113-341987';
  orderkey  | pk_idx |  pk_val  | ord_idx | ord_val
------------+--------+----------+---------+---------
 113-341987 |      0 | MG-7190  |       0 |   60.00
 113-341987 |      0 | MG-7190  |       1 |   67.00
 113-341987 |      0 | MG-7190  |       2 |   22.00
 113-341987 |      0 | MG-7190  |       3 |   14.99
 113-341987 |      1 | VA-4028  |       0 |   60.00
 113-341987 |      1 | VA-4028  |       1 |   67.00
 113-341987 |      1 | VA-4028  |       2 |   22.00
 113-341987 |      1 | VA-4028  |       3 |   14.99
 113-341987 |      2 | EH-1247  |       0 |   60.00
 113-341987 |      2 | EH-1247  |       1 |   67.00
 113-341987 |      2 | EH-1247  |       2 |   22.00
 113-341987 |      2 | EH-1247  |       3 |   14.99
 113-341987 |      3 | MS-7018  |       0 |   60.00
 113-341987 |      3 | MS-7018  |       1 |   67.00
 113-341987 |      3 | MS-7018  |       2 |   22.00
 113-341987 |      3 | MS-7018  |       3 |   14.99
(16 rows)

316 - explode-unnest-join.md

You can use the output of this function as if it were a relation by using CROSS JOIN or LEFT JOIN LATERAL in a query. Other JOIN types are not supported.

Consider the following table of students and exam scores:

=> SELECT * FROM tests;
 student |    scores     |    questions
---------+---------------+-----------------
 Bob     | [92,78,79]    | [20,20,100]
 Lee     |               |
 Pat     | []            | []
 Sam     | [97,98,85]    | [20,20,100]
 Tom     | [68,75,82,91] | [20,20,100,100]
(5 rows)
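
For example, a minimal sketch (assuming the UNNEST form of the function is available) that pairs each student with the individual values in scores by using CROSS JOIN:

=> SELECT student, score
   FROM tests CROSS JOIN UNNEST(scores) AS u (score);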

317 - export-arguments.md

LABEL
Assigns a label to a statement to identify it for profiling and debugging.
over-clause
Specifies how to partition table data using PARTITION BY. Within partitions you can sort using ORDER BY. See SQL analytics. This clause may contain column references but not expressions.

If you partition data, Vertica creates a partition directory structure, transforming column names to lowercase. See Partitioned data for a description of the directory structure. If you use the filename parameter, you cannot use partitioning.

If you omit this clause, Vertica optimizes for maximum parallelism.

query-expression
Specifies the data to export. See Query Restrictions for important limitations.

318 - export-description.md

There are some limitations on the queries you can use in an export statement. See Query Restrictions.

You can export data stored in Vertica in ROS format and data from external tables.

File exporters are UDxs that cannot be run in fenced mode. If the ForceUDxFencedMode configuration parameter is set to true to support other UDxs, you can set it to false at the session level before performing the export.

This statement returns the number of rows written and logs information about exported files in a system table. See Monitoring exports.

During an export to HDFS or an NFS mount point, Vertica writes files to a temporary directory in the same location as the destination and renames the directory when the export is complete. Do not attempt to use the files in the temporary directory. During an export to S3, GCS, or Azure, Vertica writes files directly to the destination path, so you must wait for the export to finish before reading the files. For more information, see Exporting to object stores.

320 - export-output-files.md

The export operation always creates (or appends to) an output directory, even if all output is written to a single file or the query produces zero rows.

By default, output file names follow the pattern: prefix-nodename-threadId[-sequenceNumber].extension. prefix is typically an 8-character hash, but can be longer if the export appended to an existing directory. A sequence number is added if an export needs to be broken into pieces to satisfy fileSizeMB. You can change the prefix using the filePrefix parameter or, for appends, by setting ifDirExists=appendNoHash. You can write all output to a single named file using the filename parameter.

Column names in partition directories are lowercase.

Files exported to a local file system by any Vertica user are owned by the Vertica superuser. Files exported to HDFS or object stores are owned by the Vertica user who exported the data.

Making concurrent exports to the same output destination is an error and can produce incorrect results.

Exports to the local file system can be to an NFS mount (shared) or to the Linux file system on each node (non-shared). For details, see Exporting to the Linux file system. Exports to non-shared local file systems have the following restrictions:

  • The output directory must not exist on any node.

  • You must have a USER storage location or superuser privileges.

  • You cannot override the permissions mode of 700 for directories and 600 for files.

Exports to object-store file systems are not atomic. Be careful to wait for the export to finish before using the data. For details, see Exporting to object stores.

321 - export-parameter-dir-mode.md

For writes to HDFS only, permission to apply to all exported directories. Values follow the same rules as those for fileMode. Further, you must give the Vertica HDFS user full permission, at least rwx------ or 700.

When writing files to any destination other than HDFS, this parameter has no effect.

Default: 755, regardless of the value of fs.permissions.umask-mode in hdfs-site.xml.

322 - export-parameter-directory.md

The destination directory for the output files. The current user must have permission to write to it. The destination can be on any of the following file systems:

See also: ifDirExists.

323 - export-parameter-file-mode.md

For writes to HDFS only, permission to apply to all exported files. You can specify the value in Unix octal format (such as 665) or user-group-other format—for example, rwxr-xr-x. The value must be formatted as a string even if using the octal format.

Valid octal values range between 0 and 1777, inclusive. See HDFS Permissions in the Apache Hadoop documentation.

When writing files to any destination other than HDFS, this parameter has no effect.

Default: 660, regardless of the value of fs.permissions.umask-mode in hdfs-site.xml.

324 - export-parameter-file-prefix.md

Prefix to use for exported files instead of the default hash. If used with ifDirExists=appendNoHash, choose a unique prefix to avoid overwriting existing files.

325 - export-parameter-file-size-mb.md

The maximum file size of a single output file. This value is a hint, not a hard limit. A value of 0 specifies no limit. If filename is also specified, fileSizeMB is ignored.

This value affects the size of individual output files, not the total output size. For smaller values, Vertica divides the output into more files; all data is still exported.

Default: 10GB

326 - export-parameter-filename.md

If specified, all output is written to a single file of this name in the location specified by directory. While the query can be processed by multiple nodes, only a single node generates the output data. The fileSizeMB parameter is ignored, and the query cannot use partitioning in the OVER() clause.

327 - export-parameter-if-dir-exists.md

What to do if directory already exists, one of:

  • fail (default)

  • overwrite: replace the entire directory

  • append: export new files into the existing directory with a prepended hash that identifies all files in the exported batch

  • appendNoHash: export new files into the existing directory without a prepended hash; with this option, Vertica cannot clean up partial exports

If you specify overwrite for an export to an object store, the existing directory is deleted recursively at the beginning of the operation and is not restored if the operation fails. Be careful not to export to a directory containing data you want to keep. For an export to a Linux file system or HDFS, the directory is only overwritten if the export succeeds.

Do not do concurrent exports to the same directory. In particular, if you do so with a value of overwrite, all operations appear to succeed, but the results are incorrect.

When using append, be careful to use the same table schema. Otherwise, queries of external tables using this data path could fail.

328 - export-privileges.md

Non-superusers:

  • Source table: SELECT

  • Source table schema: USAGE

  • Destination directory: Write

329 - export-script-requirements.md

The SQL script:

  • Only includes objects to which the user has access.

  • Orders CREATE statements based on object dependencies so they can be recreated in the correct sequence. For example, if a table is in a non-public schema, the required CREATE SCHEMA statement precedes the CREATE TABLE statement. Similarly, a table's CREATE ACCESS POLICY statement follows the table's CREATE TABLE statement.

  • Creates projections with their KSAFE clause, if any, otherwise with their OFFSET clause.

330 - export-table-script-requirements.md

The SQL script conforms to the following requirements:

  • Only includes objects to which the user has access.

  • Orders CREATE statements according to object dependencies so they can be recreated in the correct sequence. For example, if a table references a named sequence, a CREATE SEQUENCE statement precedes the CREATE TABLE statement. Similarly, a table's CREATE ACCESS POLICY statement follows the table's CREATE TABLE statement.

331 - export-to-parquet-composite-queries.md

The query can contain only a single outer SELECT statement. For example, you cannot use UNION:

=> EXPORT TO PARQUET(directory = '/mnt/shared_nfs/accounts/rm')
   OVER(PARTITION BY hash)
   AS
   SELECT 1 as account_id, '{}' as json, 0 hash
   UNION ALL
   SELECT 2 as account_id, '{}' as json, 1 hash;
ERROR 8975:  Only a single outer SELECT statement is supported
HINT:  Please use a subquery for multiple outer SELECT statements

Instead, rewrite the query to use a subquery:

=> EXPORT TO PARQUET(directory = '/mnt/shared_nfs/accounts/rm')
   OVER(PARTITION BY hash)
   AS
   SELECT
    account_id,
    json
   FROM
   (
     SELECT 1 as account_id, '{}' as json, 0 hash
     UNION ALL
     SELECT 2 as account_id, '{}' as json, 1 hash
   ) a;
 Rows Exported
---------------
             2
(1 row)

To use composite statements such as UNION, INTERSECT, and EXCEPT, rewrite them as subqueries.

332 - exported-stored-procedures.md

To export a particular implementation of a stored procedure, specify either the types or both the names and types of the procedure's formal parameters. The following example specifies the types:

=> SELECT EXPORT_OBJECTS('','raiseXY(int, int)');
    EXPORT_OBJECTS
----------------------

CREATE PROCEDURE public.raiseXY(x int, y int)
LANGUAGE 'PL/vSQL'
SECURITY INVOKER
AS '
BEGIN
RAISE NOTICE ''x = %'', x;
RAISE NOTICE ''y = %'', y;
-- some processing statements
END
';

SELECT MARK_DESIGN_KSAFE(0);

(1 row)

To export all implementations of an overloaded stored procedure, export its parent schema:

=> SELECT EXPORT_OBJECTS('','public');
    EXPORT_OBJECTS
----------------------

...

CREATE PROCEDURE public.raiseXY(x int, y varchar)
LANGUAGE 'PL/vSQL'
SECURITY INVOKER
AS '
BEGIN
RAISE NOTICE ''x = %'', x;
RAISE NOTICE ''y = %'', y;
-- some processing statements
END
';

CREATE PROCEDURE public.raiseXY(x int, y int)
LANGUAGE 'PL/vSQL'
SECURITY INVOKER
AS '
BEGIN
RAISE NOTICE ''x = %'', x;
RAISE NOTICE ''y = %'', y;
-- some processing statements
END
';

SELECT MARK_DESIGN_KSAFE(0);

(1 row)

333 - fail-on-conf-parse-error.md

Determines whether the function fails when kafka_conf contains incorrectly formatted options and values, or invalid configuration properties.

Default Value: FALSE

For accepted option and value formats, see Directly setting Kafka library options.

For a list of valid configuration properties, see the rdkafka GitHub repository.

334 - fallthrough-incompatibility.md

Fallthrough cannot be enabled for authentication records that use the following authentication methods:

  • gss

  • oauth

  • reject

  • trust

335 - file-systems-supported.md

Choose the storage format type based on deployment requirements. Vertica recommends the following storage format types where applicable:

  • ext3

  • ext4

  • NFS for backup

  • XFS

  • Amazon S3 Standard, Azure Blob Storage, or Google Cloud Storage for communal storage and related backup tasks when running in Eon Mode

The storage format type at your backup and temporary directory locations must support fcntl lockf (POSIX) file locking.

336 - filler-columns.md

In the following example, the table has columns for first name, last name, and full name, but the data being loaded contains columns for first, middle, and last names. The COPY statement reads all of the source data but only loads the source columns for first and last names. It constructs the data for the full name by concatenating each of the source data columns, including the middle name. The middle name is read as a FILLER column so it can be used in the concatenation, but is ignored otherwise. (There is no table column for middle name.)

=> CREATE TABLE names(first VARCHAR(20), last VARCHAR(20), full VARCHAR(60));
CREATE TABLE
=> COPY names(first,
              middle FILLER VARCHAR(20),
              last,
              full AS first||' '||middle||' '||last)
      FROM STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> Marc|Gregory|Smith
>> Sue|Lucia|Temp
>> Jon|Pete|Hamilton
>> \.
=> SELECT * from names;
 first |   last   |        full
-------+----------+--------------------
 Jon   | Hamilton | Jon Pete Hamilton
 Marc  | Smith    | Marc Gregory Smith
 Sue   | Temp     | Sue Lucia Temp
(3 rows)

337 - filter-factory-c-.md

The FilterFactory API provides the following methods for extension by subclasses:

virtual void plan(ServerInterface &srvInterface, PlanContext &planCtxt);

virtual UDFilter * prepare(ServerInterface &srvInterface, PlanContext &planCtxt)=0;

virtual void getParameterType(ServerInterface &srvInterface, SizedColumnTypes &parameterTypes);

After creating your FilterFactory, you must register it with the RegisterFactory macro.

338 - filter-factory-java.md

The FilterFactory API provides the following methods for extension by subclasses:

public void plan(ServerInterface srvInterface, PlanContext planCtxt)
    throws UdfException;

public abstract UDFilter prepare(ServerInterface srvInterface, PlanContext planCtxt)
    throws UdfException;

public void getParameterType(ServerInterface srvInterface, SizedColumnTypes parameterTypes);

339 - filter-factory-python.md

The FilterFactory API provides the following methods for extension by subclasses:


class PyFilterFactory(vertica_sdk.FilterFactory):
    def __init__(self):
        pass
    def plan(self):
        pass
    def prepare(self, planContext):
        # The user implements this method to create and return UDFilter instances.
        pass

340 - filter-predicate.md

Resolves to one or more directed queries that are filtered from system table DIRECTED_QUERIES.

341 - flex-complex-flatten.md

The COPY statement shown in this example sets flatten_maps to false. Without that change, the keys for the complex columns would not work as expected, because record and array keys would be "flattened" at the top level. Querying menu['0']['item'] would produce no results. Instead, query flattened values as in the following example:

=> SELECT menu['0.item'] FROM restaurant;
     menu
--------------
 veggie taco
 cheese pizza
(2 rows)

Flattening directives apply to the entire COPY statement. You cannot flatten some columns and not others, or prevent flattening values in a complex column that is itself within a flattened flex table. Because flexible complex types and strongly-typed complex types require different values for flattening, you cannot combine strong and flexible complex types in the same load operation.

342 - flex-complex-input-file.md

{
    "name" : "Bob's pizzeria",
    "cuisine" : "Italian",
    "location_city" : ["Cambridge", "Pittsburgh"],
    "menu" : [{"item" : "cheese pizza", "price" : "$8.25"},
              {"item" : "spinach pizza", "price" : "$10.50"}]
}
{
    "name" : "Bakersfield Tacos",
    "cuisine" : "Mexican",
    "location_city" : ["Pittsburgh"],
    "menu" : [{"item" : "veggie taco", "price" : "$9.95"},
              {"item" : "steak taco", "price" : "$10.95"}]
}

343 - flex-complex-readable-output.md

=> SELECT MAPTOSTRING(location_city), MAPTOSTRING(menu) FROM restaurant;
                   maptostring                   |             maptostring
-------------------------------------------------+--------------------------------------------------------
 {
    "0": "Cambridge",
    "1": "Pittsburgh"
} | {
    "0": {
        "item": "cheese pizza",
        "price": "$8.25"
    },
    "1": {
        "item": "spinach pizza",
        "price": "$10.50"
    }
}
 {
    "0": "Pittsburgh"
}                       | {
    "0": {
        "item": "veggie taco",
        "price": "$9.95"
    },
    "1": {
        "item": "steak taco",
        "price": "$10.95"
    }
}
(2 rows)

=> SELECT menu['0']['item'] FROM restaurant;
     menu
--------------
 cheese pizza
 veggie taco
(2 rows)

344 - flex-key-data-types.md

Assigning flex key data types

When the EnableBetterFlexTypeGuessing configuration parameter is ON (the default), COMPUTE_FLEXTABLE_KEYS produces more specific results.

Use the sample CSV data in this section to compare the results of using or not using the parameter:

Year,Quarter,Region,Species,Grade,Pond Value,Number of Quotes,Available
2015,1,2 - Northwest Oregon & Willamette,Douglas-fir,1P,$615.12 ,12,No
2015,1,2 - Northwest Oregon & Willamette,Douglas-fir,SM,$610.78 ,12,Yes
2015,1,2 - Northwest Oregon & Willamette,Douglas-fir,2S,$596.00 ,20,Yes
2015,1,2 - Northwest Oregon & Willamette,Hemlock,P,$520.00 ,6,Yes
2015,1,2 - Northwest Oregon & Willamette,Hemlock,SM,$510.00 ,6,No
2015,1,2 - Northwest Oregon & Willamette,Hemlock,2S,$490.00 ,14,No

Create a flex table and load this data:

=> CREATE FLEX TABLE trees();
=> COPY trees FROM '/home/dbadmin/tempdat/trees.csv' PARSER fcsvparser();

Compute keys and check the results with the default setting:

=> SELECT COMPUTE_FLEXTABLE_KEYS('trees');
            COMPUTE_FLEXTABLE_KEYS
-----------------------------------------------
 Please see public.trees_keys for updated keys
(1 row)

=> SELECT * FROM trees_keys;
     key_name     | frequency | data_type_guess
------------------+-----------+-----------------
 Year             |         6 | Integer
 Quarter          |         6 | Integer
 Region           |         6 | Varchar(66)
 Available        |         6 | Boolean
 Number of Quotes |         6 | Integer
 Grade            |         6 | Varchar(20)
 Species          |         6 | Varchar(22)
 Pond Value       |         6 | Numeric(8,3)
(8 rows)

Set EnableBetterFlexTypeGuessing to 0 (OFF), compute keys again, and compare the results:


=> SELECT * FROM trees_keys;
    key_name     | frequency | data_type_guess
------------------+-----------+-----------------
 Year             |         6 | varchar(20)
 Quarter          |         6 | varchar(20)
 Region           |         6 | varchar(66)
 Available        |         6 | varchar(20)
 Grade            |         6 | varchar(20)
 Number of Quotes |         6 | varchar(20)
 Pond Value       |         6 | varchar(20)
 Species          |         6 | varchar(22)
(8 rows)

345 - flex-keys-columns.md

KEY_NAME
The name of the virtual column (key). Keys larger than 65,000 bytes are truncated.
FREQUENCY
The number of times the key occurs in the VMap.
DATA_TYPE_GUESS
Estimate of the data type for the key based on the non-null values found in the VMap. The function determines the type of each non-string value, depending on the length of the key, and whether the key includes nested maps. If the EnableBetterFlexTypeGuessing configuration parameter is 0 (OFF), this function instead treats all flex table keys as string types ([LONG] VARCHAR or [LONG] VARBINARY).

346 - force-split-parameter.md

Optional Boolean argument, specifies whether to split ROS containers if the range of partition keys spans multiple containers or part of a single container:

  • true: Split ROS containers as needed.

  • false (default): Return with an error if ROS containers must be split to implement this operation.

347 - fpe-column-format-warning.md

348 - function-arg-list.md

A comma-delimited list of arguments as defined for this function when it was created, specified as follows:

[arg-name] arg-type[,...]

where arg-name optionally qualifies arg-type:

  • arg-name is typically a column name.

  • arg-type is the name of an SQL data type supported by Vertica.

349 - g-validity.md

Geospatial object to test for validity, value of type GEOMETRY or GEOGRAPHY (WGS84).

350 - g1-def.md

Spatial object of type GEOMETRY or GEOGRAPHY

351 - g1-geom-only.md

Spatial object of type GEOMETRY.

352 - gcp-custom-service-account.md

Using a custom service account

In general, you should use the default service account created by the GCP deployment (project_number-compute@developer.gserviceaccount.com), but if you want to use a custom service account:

  • The custom service account must have the Editor role.

  • Individual user accounts must have the Service Account User role on the custom service account.

353 - gcp-default-service-account.md

Service Account: Service accounts allow automated processes to authenticate with GCP. Select the default service account, identified by project_number-compute@developer.gserviceaccount.com.

354 - geom-def.md

A geometry or geography (WGS84) column that contains points. The g column can contain only point geometries or geographies. If the column contains a different geometry or geography type, STV_Intersect terminates with an error.

355 - get-all-arguments-for-create-db.md

To view all arguments for the create_db tool, run the command:

admintools -t create_db --help

356 - get-client-info-set-client-info.md

The JDBC Client has a method to set and return the client connection label: getClientInfo() and setClientInfo(). You can use these methods with the SQL Functions GET_CLIENT_LABEL and SET_CLIENT_LABEL.

When you use these two methods, make sure you pass the string value APPLICATIONNAME to both the setter and getter methods.

Use setClientInfo() to create a client label, and use getClientInfo() to return the client label:

import java.sql.*;
import java.util.Properties;

public class ClientLabelJDBC {

    public static void main(String[] args) {
        Properties myProp = new Properties();
        myProp.put("user", "dbadmin");
        myProp.put("password", "");
        myProp.put("loginTimeout", "35");
        Connection conn;
        try {
            conn = DriverManager.getConnection(
                    "jdbc:vertica://example.com:5433/mydb", myProp);
            System.out.println("Connected!");
            conn.setClientInfo("APPLICATIONNAME", "JDBC Client - Data Load");
            System.out.println("New Conn label: " + conn.getClientInfo("APPLICATIONNAME"));
            conn.close();
        } catch (SQLTransientConnectionException connException) {
            // There was a potentially temporary network error
            // Could automatically retry a number of times here, but
            // instead just report error and exit.
            System.out.print("Network connection issue: ");
            System.out.print(connException.getMessage());
            System.out.println(" Try again later!");
            return;
        } catch (SQLInvalidAuthorizationSpecException authException) {
            // Either the username or password was wrong
            System.out.print("Could not log into database: ");
            System.out.print(authException.getMessage());
            System.out.println(" Check the login credentials and try again.");
            return;
        } catch (SQLException e) {
            // Catch-all for other exceptions
            e.printStackTrace();
        }
    }
}

When you run this method, it prints the following result to the standard output:

Connected!
New Conn label: JDBC Client - Data Load

357 - get-data-colector-notify-policy-example.md

=> SELECT GET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures');
                   GET_DATA_COLLECTOR_NOTIFY_POLICY
----------------------------------------------------------------------
 Notifiable;  Notifier: vertica_stats; Channel: vertica_notifications
(1 row)

358 - get-data-collector-notify-policy-no-policy-example.md


=> SELECT GET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures');
 GET_DATA_COLLECTOR_NOTIFY_POLICY
----------------------------------
 Not notifiable;
(1 row)

359 - gid.md

Name of an integer column that uniquely identifies the polygon. The gid cannot be NULL.

360 - github-examples.md

You can find more UDx examples in the Vertica GitHub repository, https://github.com/vertica/UDx-Examples.

362 - grant-authentication-examples.md

  • Associate v_ldap authentication with user jsmith:

    => GRANT AUTHENTICATION v_ldap TO jsmith;
    
  • Associate v_gss authentication to the role DBprogrammer:

    
    => CREATE ROLE DBprogrammer;
    => GRANT AUTHENTICATION v_gss TO DBprogrammer;
    
  • Associate client authentication method v_localpwd with role PUBLIC, which is assigned by default to all users:

    => GRANT AUTHENTICATION v_localpwd TO PUBLIC;
    

363 - grant-execute-linux-dbadmin-priv.md

External procedures that you create with CREATE PROCEDURE (external) are always run with Linux dbadmin privileges. If a dbadmin or pseudosuperuser grants a non-dbadmin user permission to run a procedure using GRANT (procedure), be aware that the non-dbadmin user runs the procedure with full Linux dbadmin privileges.

364 - grant-statement-privileges.md

Privileges

Non-superusers require USAGE on the schema and one of the following:

  • Owner

  • A grantee of the privileges who was given the option (WITH GRANT OPTION) to grant those privileges to other users or roles.

365 - grantee-parameter-description.md

Who is granted privileges, one of the following:

366 - granttrusted.md

Specifies that GRANT statements take precedence over the access policy in determining whether users can perform DML operations on the target table. If omitted, users can only modify table data if the access policy allows them to see the stored data in its original, unaltered state. For more information, see Access policies and DML operations.

367 - grouped-ros_snip.md

A grouped ROS is a highly-optimized, read-oriented physical storage structure organized by projection. A grouped ROS makes heavy use of compression and indexing. Unlike a ROS, a grouped ROS stores data for two or more grouped columns in one disk file.

368 - h-cat-properties.md

The HCatalog Connector uses the following values from the Hadoop configuration files if you do not override them when creating the schema.

hive-site.xml properties:
  • hive.server2.thrift.bind.host (used for HIVESERVER2_HOSTNAME)
  • hive.server2.thrift.port
  • hive.server2.transport.mode
  • hive.server2.authentication
  • hive.server2.authentication.kerberos.principal
  • hive.server2.support.dynamic.service.discovery
  • hive.zookeeper.quorum (used as HIVESERVER2_HOSTNAME if dynamic service discovery is enabled)
  • hive.zookeeper.client.port
  • hive.server2.zookeeper.namespace
  • hive.metastore.uris (used for HOSTNAME and PORT)

ssl-client.xml properties:
  • ssl.client.truststore.location
  • ssl.client.truststore.password

369 - h-catalog-definfe-schema.md

=> CREATE HCATALOG SCHEMA hcat WITH HOSTNAME='hcathost' PORT=9083
   HCATALOG_SCHEMA='default' HIVESERVER2_HOSTNAME='hs.example.com'
   SSL_CONFIG='/etc/hadoop/conf/ssl-client.xml' HCATALOG_USER='admin';
CREATE SCHEMA
=> \x
Expanded display is on.

=> SELECT * FROM v_catalog.hcatalog_schemata;
-[ RECORD 1 ]----------------+-------------------------------------------
schema_id                    | 45035996273748224
schema_name                  | hcat
schema_owner_id              | 45035996273704962
schema_owner                 | admin
create_time                  | 2017-12-05 14:43:03.353404-05
hostname                     | hcathost
port                         | -1
hiveserver2_hostname         | hs.example.com
webservice_hostname          |
webservice_port              | 50111
webhdfs_address              | hs.example.com:50070
hcatalog_schema_name         | default
ssl_config                   | /etc/hadoop/conf/ssl-client.xml
hcatalog_user_name           | admin
hcatalog_connection_timeout  | -1
hcatalog_slow_transfer_limit | -1
hcatalog_slow_transfer_time  | -1
custom_partitions            | f

=> SELECT * FROM v_catalog.hcatalog_table_list;
-[ RECORD 1 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | nation
hcatalog_user_name | admin
-[ RECORD 2 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | raw
hcatalog_user_name | admin
-[ RECORD 3 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | raw_rcfile
hcatalog_user_name | admin
-[ RECORD 4 ]------+------------------
table_schema_id    | 45035996273748224
table_schema       | hcat
hcatalog_schema    | default
table_name         | raw_sequence
hcatalog_user_name | admin

370 - has-role-example.md

In the following example, a dbadmin user checks whether user MikeL is assigned the administrator role:

=> \c
You are now connected as user "dbadmin".
=> SELECT HAS_ROLE('MikeL', 'administrator');
 HAS_ROLE
----------
 t
(1 row)

User MikeL checks whether he has the regional_manager role:

=> \c - MikeL
You are now connected as user "MikeL".
=> SELECT HAS_ROLE('regional_manager');
 HAS_ROLE
----------
 f
(1 row)

The dbadmin grants the regional_manager role to the administrator role. On checking again, MikeL verifies that he now has the regional_manager role:

dbadmin=> \c
You are now connected as user "dbadmin".
dbadmin=> GRANT regional_manager to administrator;
GRANT ROLE
dbadmin=> \c - MikeL
You are now connected as user "MikeL".
dbadmin=> SELECT HAS_ROLE('regional_manager');
 HAS_ROLE
----------
 t
(1 row)

371 - hash-segmentation_snip.md

Specifies how to segment projection data for distribution across all cluster nodes. You can specify segmentation for a table and a projection. If a table definition specifies segmentation, Vertica uses it for that table's auto-projections.

It is strongly recommended that you use Vertica's built-in HASH function, which distributes data evenly across the cluster and facilitates optimal query execution.
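
For illustration, the following is a minimal sketch of a table definition that uses hash segmentation; the table and column names are hypothetical:

=> CREATE TABLE store_sales (
       store_key INTEGER,
       sale_date DATE,
       amount NUMERIC(10,2))
   SEGMENTED BY HASH(store_key) ALL NODES;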

372 - hash-segmentation-clause_snip.md

Invalid for external tables, specifies how to distribute data for auto-projections of this table. Supply one of the following clauses:

If this clause is omitted, Vertica generates auto-projections with default hash segmentation.

373 - hierarchical-partitioning-best-practices.md

374 - hive-permissions.md

Users also require access to Hive data, one of the following:

  • USAGE permissions on hcat_schema, if Hive does not use an authorization service to manage access.

  • Permission through an authorization service (Sentry or Ranger), and access to the underlying files in HDFS. (Sentry can provide that access through ACL synchronization.)

  • dbadmin user privileges, with or without an authorization service.

375 - hive.md

If the input relation is defined in Hive, use SYNC_WITH_HCATALOG_SCHEMA to sync the hcatalog schema, and then run the machine learning function.
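
For example, assuming the HCatalog schema hcat from the earlier CREATE HCATALOG SCHEMA example, a minimal sketch of the sync call might look like this:

=> SELECT SYNC_WITH_HCATALOG_SCHEMA('hcat', 'hcat');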

376 - hostname-load-balancing.md

192.0.2.0 verticahost.example.com
192.0.2.1 verticahost.example.com
192.0.2.2 verticahost.example.com

377 - hosts-api.md

GET hosts Returns a list of hosts in this cluster.
GET hosts/:hostid Returns details for a specific host in this cluster.

378 - how-columns-are-ranked.md

Vertica stores columns included in the projection sort order on the fastest available storage locations. Columns not included in the projection sort order are stored on slower disks. Columns for each projection are ranked as follows:

  • Columns in the sort order are given the highest priority (numbers > 1000).

  • The last column in the sort order is given the rank number 1001.

  • The next-to-last column in the sort order is given the rank number 1002, and so on until the first column in the sort order is given 1000 + # of sort columns.

  • The remaining columns are numbered from 1000 down to 1, starting with 1000 and decrementing by one per column.

Vertica then stores columns on disk from the highest ranking to the lowest ranking. It places highest-ranking columns on the fastest disks and the lowest-ranking columns on the slowest disks.
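
For example, consider a hypothetical projection whose sort order is (a, b, c) and that also stores two unsorted columns, d and e. Applying the rules above gives:

  • a (first sort column): rank 1003
  • b: rank 1002
  • c (last sort column): rank 1001
  • d: rank 1000
  • e: rank 999

Vertica would place column a on the fastest available storage location and column e on the slowest.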

379 - how-mc-validates-new-users.md

How MC validates new users

After you click OK to close the Add permissions dialog box, MC tries to validate the database username and password entered against the selected MC-managed database or against your organization's LDAP directory. If the credentials are found to be invalid, you are asked to re-enter them.

If the database is not available at the time you create the new user, MC saves the username/password and prompts for validation when the user accesses the Database and Clusters page later.

380 - identical-attributes-swap-move-partition-tables.md

The following attributes of both tables must be identical:

  • Column definitions, including NULL/NOT NULL constraints

  • Segmentation

  • Partition clause

  • Number of projections

  • Shard count (Eon Mode only)

  • Projection sort order

  • Primary and unique key constraints. However, the key constraints do not have to be identically enabled. For more information on constraints, see Constraints.

  • Check constraints. For MOVE_PARTITIONS_TO_TABLE and COPY_PARTITIONS_TO_TABLE, Vertica enforces enabled check constraints on the target table only. For SWAP_PARTITIONS_BETWEEN_TABLES, Vertica enforces enabled check constraints on both tables. If there is a violation of an enabled check constraint, Vertica rolls back the operation.

  • Number and definitions of text indices.
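
For illustration, assuming two tables t1 and t2 that meet these requirements and are partitioned by an integer year key, a minimal sketch of swapping one partition might look like this (table names and key values are hypothetical):

=> SELECT SWAP_PARTITIONS_BETWEEN_TABLES('t1', 2020, 2020, 't2');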

381 - idl-north-south-pole.md

GEOGRAPHY Polygons with a vertex or border on the International Date Line (IDL) or the North or South pole are not supported.

382 - if-not-exists-user-defined-function.md

If a function with the same name and arguments exists, return without creating the function.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

383 - if-not-exists.md

Use this clause in SQL scripts to avoid errors on dropping non-existent objects before attempting to create them.
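
For example, a script can combine these clauses so it runs cleanly whether or not the object already exists; the table name t1 is hypothetical:

=> DROP TABLE IF EXISTS t1;
=> CREATE TABLE IF NOT EXISTS t1 (a INT);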

384 - ignore-constants-input-annotated-queries.md

=> SELECT input_query, annotated_query FROM V_CATALOG.DIRECTED_QUERIES
      WHERE query_name = 'findEmployeesCityJobTitle_OPT';
-[ RECORD 1 ]---+----------------------------------------------------------------------------
input_query     | SELECT employee_dimension.employee_first_name, employee_dimension.employee_last_name FROM public.employee_dimension
WHERE ((employee_dimension.employee_city = 'Boston'::varchar(6) /*+:v(1)*/) AND (employee_dimension.job_title = 'Cashier'::varchar(7) /*+:v(2)*/))
ORDER BY employee_dimension.employee_last_name, employee_dimension.employee_first_name
annotated_query | SELECT /*+verbatim*/ employee_dimension.employee_first_name AS employee_first_name, employee_dimension.employee_last_name AS employee_last_name FROM public.employee_dimension AS employee_dimension/*+projs('public.employee_dimension')*/
WHERE (employee_dimension.employee_city = 'Boston'::varchar(6) /*+:v(1)*/) AND (employee_dimension.job_title = 'Cashier'::varchar(7) /*+:v(2)*/)
ORDER BY 2 ASC, 1 ASC

385 - immutable-invariant-functions_snip.md

When run with a given set of arguments, immutable functions always produce the same result, regardless of environment or session settings such as locale.

386 - implicit-crypto-link.md

Superusers have limited access to cryptographic objects that they do not own. For details, see Database object privileges.

387 - implode-exampe-wgoby.md

=> SELECT itemkey AS key, IMPLODE(itemprice) WITHIN GROUP (ORDER BY itemprice) AS prices
    FROM filtered GROUP BY itemkey ORDER BY itemkey;
 key |      prices
-----+-------------------
 123 | ["14.99"]
 345 | ["14.99","35.99"]
 567 | ["18.99","27.99"]
(3 rows)

388 - implode-example-no-wgoby.md

=> SELECT itemkey AS key, IMPLODE(itemprice) AS prices
    FROM filtered GROUP BY itemkey ORDER BY itemkey;
 key |      prices
-----+-------------------
 123 | ["14.99"]
 345 | ["35.99","14.99"]
 567 | ["27.99","18.99"]
(3 rows)

389 - import-export-identity-columns.md

  • If both source and destination tables have an IDENTITY column and configuration parameter CopyFromVerticaWithIdentity is set to true (1), you do not need to list them.

  • If source table has an IDENTITY column, but target table does not, you must explicitly list the source and target columns.

390 - import-export-tls-values.md

  • PREFER: Try TLS but fall back to plaintext if TLS fails.

  • REQUIRE: Use TLS and fail if the server does not support TLS.

  • VERIFY_CA: Require TLS (as with REQUIRE), and also validate the other server's certificate using the CA specified by the "server" TLS Configuration's CA certificates (in this case, "ca_cert" and "ica_cert"):

    => SELECT name, certificate, ca_certificate, mode FROM tls_configurations WHERE name = 'server';
      name  |   certificate    |   ca_certificate   |   mode
    --------+------------------+--------------------+-----------
     server | server_cert      | ca_cert,ica_cert   | VERIFY_CA
    (1 row)
    
  • VERIFY_FULL: Require TLS and validate the certificate (as with VERIFY_CA), and also validate the server certificate's hostname.

  • REQUIRE_FORCE, VERIFY_CA_FORCE, and VERIFY_FULL_FORCE: Same behavior as REQUIRE, VERIFY_CA, and VERIFY_FULL, respectively, and cannot be overridden by CONNECT TO VERTICA.

391 - include-exclude-schema-privileges.md

Default inheritance of schema privileges for this table:

  • INCLUDE PRIVILEGES specifies that the table inherits privileges that are set on its schema. This is the default behavior if privilege inheritance is enabled for the schema.

  • EXCLUDE PRIVILEGES disables inheritance of privileges from the schema.

For details, see Inherited privileges.
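
For illustration, a minimal sketch using hypothetical schema and table names:

=> CREATE TABLE myschema.t1 (a INT) INCLUDE SCHEMA PRIVILEGES;
=> ALTER TABLE myschema.t1 EXCLUDE SCHEMA PRIVILEGES;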

392 - increment-sequence-param.md

A positive or negative value that specifies the minimum amount by which the sequence changes on each call to NEXTVAL. The default is 1.

This value is a minimum increment size. The sequence can increment by more than this value unless you also specify NO CACHE.
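
For example, a minimal sketch that creates a sequence with a custom increment; the sequence name my_seq is hypothetical:

=> CREATE SEQUENCE my_seq INCREMENT BY 5;
=> SELECT NEXTVAL('my_seq');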

393 - index-def-g.md

Name of a geometry or geography (WGS84) column or expression that contains polygons and multipolygons. Only polygon and multipolygon can be indexed. Other shape types are excluded from the index.

394 - index-name.md

Name of the index, type VARCHAR. Index names cannot exceed 110 characters. The slash, backslash, and tab characters are not allowed in index names.

395 - indexed.md

Number of polygons indexed during the operation.

396 - infer-json-with-differences.md

=> SELECT INFER_TABLE_DDL ('/data/*.json'
    USING PARAMETERS table_name='restaurants', format='json',
max_files=3, max_candidates=3);
WARNING 0:  This generated statement contains one or more float types which might lose precision
WARNING 0:  This generated statement contains one or more varchar/varbinary types which default to length 80

                INFER_TABLE_DDL
------------------------------------------------------------------------
 Candidate matched 1 out of 2 total files:
  create table "restaurants"(
    "cuisine" varchar,
    "location_city" Array[varchar],
    "menu" Array[Row(
      "item" varchar,
      "price" float
    )],
    "name" varchar
  );
Candidate matched 1 out of 2 total files:
  create table "restaurants"(
    "cuisine" varchar,
    "location_city" Array[varchar],
    "menu" Array[Row(
      "items" Array[Row(
        "item" varchar,
        "price" numeric
      )],
      "time" varchar
    )],
    "name" varchar
  );

(1 row)

397 - info.md

Lists the number of excluded spatial objects as well as their type that were excluded from the index.

398 - inherited-privileges_snip.md

399 - inherited-schema-privileges.md

  • SELECT: Query tables and views. SELECT privileges are granted by default to the PUBLIC role.

  • INSERT: Insert rows and load data into tables with COPY.

  • UPDATE: Update table rows.

  • DELETE: Delete table rows.

  • REFERENCES: Create foreign key constraints on this table. This privilege must be set on both referencing and referenced tables.

  • TRUNCATE: Truncate table contents. Non-owners of tables can also execute the following partition operations on them:

  • ALTER: Modify the DDL of tables and views with ALTER TABLE and ALTER VIEW, respectively.

  • DROP: Drop tables and views.

400 - input-columns-argument.md

Comma-separated list of columns to use from the input relation, or asterisk (*) to select all columns.

401 - insert-update-delete-single-transaction.md

Committing successive table changes

Vertica follows the SQL-92 transaction model, so successive INSERT, UPDATE, and DELETE statements are included in the same transaction. You do not need to explicitly start this transaction; however, you must explicitly end it with COMMIT, or implicitly end it with COPY. Otherwise, Vertica discards all changes that were made within the transaction.
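
For example, in the following sketch the changes from all three statements are saved only when COMMIT runs; the table t1 and its column a are hypothetical:

=> INSERT INTO t1 VALUES (1);
=> UPDATE t1 SET a = 2 WHERE a = 1;
=> DELETE FROM t1 WHERE a = 2;
=> COMMIT;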

402 - install-crds-cmd.md

Install all custom resource definitions (CRDs). Because the CRDs are too large for client-side operations, you must use the --server-side=true and --force-conflicts options to apply the manifests:

kubectl apply --server-side=true --force-conflicts -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/crds.yaml

For additional details about these commands, see Server-Side Apply documentation.

403 - install-management-console.md

  1. Download the MC package from the Vertica website:

    vertica-console-current-version.Linux-distro
    

    Save the package to a location on the target server, such as /tmp.

  2. On the target server, log in as root or a user with sudo privileges.

  3. Change to the directory where you saved the MC package.

  4. Install MC using your local Linux distribution package management system—rpm, yum, zypper, apt, dpkg. For example:

    Red Hat 8

    # rpm -Uvh vertica-console-current-version.x86_64.rpm
    

    Debian and Ubuntu

    # dpkg -i vertica-console-current-version.deb
    
  5. If you stopped the database before upgrading MC, restart the database.

    As the root user, use the following command:

    /etc/init.d/verticad start
    

    For versions of Red Hat 8/CentOS 8 and above, run:

    # systemctl start verticad
    
  6. Open a browser and enter the URL of the MC installation, one of the following:

    • IP address:

      https://ip-address:mc-port/
      
    • Server host name:

      https://hostname:mc-port/
      

    By default, mc-port is 5450.

If MC was not previously configured, the Configuration Wizard dialog box appears. Configuration steps are described in Configuring MC.

If MC was previously configured, Vertica prompts you to accept the end-user license agreement (EULA) when you first log in to MC after the upgrade.

Additionally, you can choose to provide Vertica with analytic information about your MC usage. For details, see Management Console settings.

404 - install-update-helm-repo.md

The add command adds the chart repository to your local Helm configuration, and the update command gets the latest charts from the remote repository. When you add the chart repository, provide a descriptive name for future reference.

The following add command names the charts vertica-charts:

$ helm repo add vertica-charts https://vertica.github.io/charts
  "vertica-charts" has been added to your repositories
$ helm repo update
  Hang tight while we grab the latest from your chart repositories...
  ...Successfully got an update from the "vertica-charts" chart repository
  Update Complete. ⎈Happy Helming!⎈

405 - installing-external-procedure-executable-files_snip.md

  1. Run the Administration tools.

    $ /opt/vertica/bin/adminTools
    
  2. On the AdminTools Main Menu, click Configuration Menu, and then click OK.

  3. On the Configuration Menu, click Install External Procedure and then click OK.

  4. Select the database on which you want to install the external procedure.

  5. Either select the file to install or manually type the complete file path, and then click OK.

  6. If you are not the superuser, you are prompted to enter your password and click OK.

    The Administration Tools automatically create the database-name/procedures directory on each node in the database and install the external procedure in these directories for you.

  7. Click OK in the dialog that indicates that the installation was successful.

406 - installing-java-on-vertica-hosts.md

Installing Java on your Vertica cluster is a two-step process:

  1. Install a Java runtime on all of the hosts in your cluster.

  2. Set the JavaBinaryForUDx configuration parameter to tell Vertica the location of the Java executable.

Installing a Java runtime

For Java-based features, Vertica requires a 64-bit Java 6 (Java version 1.6) or later Java runtime. Vertica supports runtimes from either Oracle or OpenJDK. You can choose to install either the Java Runtime Environment (JRE) or Java Development Kit (JDK), since the JDK also includes the JRE.

Many Linux distributions include a package for the OpenJDK runtime. See your Linux distribution's documentation for information about installing and configuring OpenJDK.

To install the Oracle Java runtime, see the Java Standard Edition (SE) Download Page. You usually run the installation package as root in order to install it. See the download page for instructions.

Once you have installed a JVM on each host, ensure that the java command is in the search path and calls the correct JVM by running the command:

$ java -version

This command should print something similar to:

java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)

Setting the JavaBinaryForUDx configuration parameter

The JavaBinaryForUDx configuration parameter tells Vertica where to look for the JRE to execute Java UDxs. After you have installed the JRE on all of the nodes in your cluster, set this parameter to the absolute path of the Java executable. You can use the symbolic link that some Java installers create (for example /usr/bin/java). If the Java executable is in your shell search path, you can get the path of the Java executable by running the following command from the Linux command line shell:

$ which java
/usr/bin/java

If the java command is not in the shell search path, use the path to the Java executable in the directory where you installed the JRE. Suppose you installed the JRE in /usr/java/default (which is where the installation package supplied by Oracle installs the Java 1.6 JRE). In this case the Java executable is /usr/java/default/bin/java.

You set the configuration parameter by executing the following statement as a database superuser:

=> ALTER DATABASE DEFAULT SET PARAMETER JavaBinaryForUDx = '/usr/bin/java';

See ALTER DATABASE for more information on setting configuration parameters.

To view the current setting of the configuration parameter, query the CONFIGURATION_PARAMETERS system table:

=> \x
Expanded display is on.
=> SELECT * FROM CONFIGURATION_PARAMETERS WHERE parameter_name = 'JavaBinaryForUDx';
-[ RECORD 1 ]-----------------+----------------------------------------------------------
node_name                     | ALL
parameter_name                | JavaBinaryForUDx
current_value                 | /usr/bin/java
default_value                 |
change_under_support_guidance | f
change_requires_restart       | f
description                   | Path to the java binary for executing UDx written in Java

Once you have set the configuration parameter, Vertica can find the Java executable on each node in your cluster.

407 - intermediate-ca-limitation.md

408 - internode-certificate.md

=> CREATE CERTIFICATE internode_cert
SUBJECT '/C=US/ST=Massachusetts/L=Cambridge/O=Micro Focus/OU=Vertica/CN=data channel'
SIGNED BY SSCA_cert
EXTENSIONS 'nsComment' = 'Vertica internode cert', 'extendedKeyUsage' = 'serverAuth, clientAuth'
KEY internode_key;

409 - internode-key.md

=> CREATE KEY internode_key TYPE 'RSA' LENGTH 2048;

410 - ipv4-ipv6connection-string.md

You can provide an IPv4 address, IPv6 address, or host name.

In mixed IPv4/IPv6 networks, the DNS server configuration determines which IP version address is sent first. Use the PreferredAddressFamily option to force the connection to use either IPv4 or IPv6.

411 - iso-week-year.md

According to the ISO-8601 standard, the week starts on Monday, and the first week of a year contains January 4. Thus, an early January date can sometimes fall in week 52 or 53 of the previous calendar year. For example:

=> SELECT YEAR_ISO('01-01-2016'::DATE), WEEK_ISO('01-01-2016'), DAYOFWEEK_ISO('01-01-2016');
 YEAR_ISO | WEEK_ISO | DAYOFWEEK_ISO
----------+----------+---------------
     2015 |       53 |             5
(1 row)

412 - java-udl-distribute-data-files-py.md

#!/usr/bin/python
# Save this file as UDLDataGen.py
import string
import random
import sys
import os

# Read in the dictionary file to provide random words. Assumes the words
# file is located in /usr/share/dict/words
wordFile = open("/usr/share/dict/words")
wordDict = []
for line in wordFile:
    if len(line) > 6:
        wordDict.append(line.strip())

MAXSTR = 4 # Maximum number of words to concatenate
NUMROWS = 1000 # Number of rows of data to generate
#FILEPATH = '/tmp/UDLdata.txt' # Final filename to use for UDL source
TMPFILE = '/tmp/UDLtemp.txt'  # Temporary filename.

# Generate a random string by concatenating several words together. Max
# number of words set by MAXSTR
def randomWords():
    words = [random.choice(wordDict) for n in xrange(random.randint(1, MAXSTR))]
    sentence = " ".join(words)
    return sentence

# Create a temporary data file that will be moved to a node. Number of
# rows for the file is set by NUMROWS. Adds the name of the node which will
# get the file, to show which node loaded the data.
def generateFile(node):
    outFile = open(TMPFILE, 'w')
    for line in xrange(NUMROWS):
        outFile.write('{0}|{1}|{2}\n'.format(line,randomWords(),node))
    outFile.close()

# Copy the temporary file to a node. Only works if passwordless SSH login
# is enabled, which it is for the database administrator account on
# Vertica hosts.
def copyFile(fileName,node):
    os.system('scp "%s" "%s:%s"' % (TMPFILE, node, fileName) )

# Loop through the comma-separated list of nodes given in the first
# parameter, creating and copying data files whose full comma-separated
# paths are passed in the second parameter
for node in [x.strip() for x in sys.argv[1].split(',')]:
    for fileName in [y.strip() for y in sys.argv[2].split(',')]:
        print "generating file", fileName, "for", node
        generateFile(node)
        print "Copying file to",node
        copyFile(fileName,node)

413 - jdbctls-mode-description.md

TLSmode identifies the security level that Vertica applies to the JDBC connection. Vertica must be configured to handle TLS connections before you can establish an encrypted connection to it. See TLS protocol for details. Valid values are:

  • disable: JDBC connects using plain text and implements no security measures.

  • prefer: JDBC connects using TLS. If the server does not support TLS, JDBC connects with a channel that is not encrypted. This is the default mode.

  • require: JDBC connects using TLS without verifying the CA certificate.

  • verify-ca: JDBC connects using TLS and confirms that the server certificate has been signed by the certificate authority. This setting is equivalent to the deprecated ssl=true property.

  • verify-full: JDBC connects using TLS, confirms that the server certificate has been signed by the certificate authority, and verifies that the host name matches the name provided in the server certificate.

If this property and the SSL property are set, this property takes precedence.

414 - jobs_snip.md

GET jobs Returns a list of jobs the agent is tracking, along with their current status and exit codes.
GET jobs/:id Returns the details (the saved output) for a specific job.

415 - k-safety_snip.md

K-safety sets fault tolerance for the database cluster, where K can be set to 0, 1, or 2. The value of K specifies how many copies Vertica creates of segmented projection data. If K-safety for a database is set to 1 or 2, Vertica creates K+1 instances, or buddies, of each projection segment. Vertica distributes these buddies across the database cluster, such that projection data is protected in the event of node failure. If any node fails, the database can continue to process queries so long as buddies of data on the failed node remain available elsewhere on the cluster.
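
For example, assuming a cluster with enough nodes and buddy projections to support the desired level, a minimal sketch of marking the design K-safe might look like this:

=> SELECT MARK_DESIGN_KSAFE(1);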

416 - k-safety-temp-table.md

K-safety of temporary tables is always set to 0, regardless of system K-safety. If a CREATE TEMPORARY TABLE statement sets k-num greater than 0, Vertica returns a warning.

417 - k8s-availability-zone-note.md

418 - k8s-service-types-desc.md

  • ClusterIP: The default service type. This service provides internal load balancing, and sets a stable IP and port that is accessible from within the subcluster only.

  • NodePort: Provides external client access. You can specify a port number for each host node in the subcluster to open for client connections.

  • LoadBalancer: Uses a cloud provider load balancer to create NodePort and ClusterIP services as needed. For details about implementation, see the Kubernetes documentation and your cloud provider documentation.

419 - kafka-conf-description.md

A JSON string of property/value pairs to pass directly to rdkafka, the library that Vertica uses to communicate with Kafka. This parameter directly sets global configuration properties that are not available through the Vertica integration with Kafka.

For details, see Directly setting Kafka library options.

420 - kafka-conf-params.md

--kafka_conf 'kafka_configuration_setting'

421 - kafka-conf-secret-desc.md

Conceals sensitive configuration data that you must pass directly to the rdkafka library, such as passwords. This parameter accepts settings in the same format as kafka_conf.

Values passed to this parameter are not logged or stored in system tables.

422 - kafka-conf-secret-params.md

--kafka_conf_secret 'kafka_configuration_setting'

423 - kafka-export-syntax.md

SELECT KafkaExport(partitionColumn, keyColumn, valueColumn
    USING PARAMETERS brokers='host[:port][,host...]',
    topic='topicname'
    [,kafka_conf='kafka_configuration_setting']
    [,kafka_topic_conf='kafka_configuration_setting']
    [,kafka_conf_secret='kafka_configuration_setting']
    [,fail_on_conf_parse_error=Boolean])
OVER (partition_clause) FROM table;

424 - kafka-source-kafka-conf-example.md

=> CREATE FLEX TABLE iot_data();
CREATE TABLE
=> COPY public.iot_data SOURCE KafkaSource(stream='iot_json|0|-2',
                                           brokers='kafka-01.example.com:9092',
                                           stop_on_eof=True,
                                           kafka_conf='{"api.version.request":false}')
        PARSER KafkaJSONParser();
 Rows Loaded
-------------
        5000
(1 row)

425 - kafka-topic-conf-description.md

A JSON string of property/value pairs to pass directly to rdkafka, the library that Vertica uses to communicate with Kafka. This parameter directly sets topic-level configuration properties that are not available through the Vertica integration with Kafka.

For details, see Directly setting Kafka library options.

426 - key-columns-output-table-param.md

Comma-separated list of column names from input-columns that identify its data rows. These columns are included in the output table.

427 - key-columns-param.md

Comma-separated list of predictor column names that identify the output rows. To exclude these and other predictor columns from being used for prediction, include them in the argument list for parameter exclude_columns.

428 - key-privileges.md

A privilege, one of the following:

  • USAGE: Allows a user to perform the following actions:

    USAGE on the key also gives implicit USAGE privileges on a certificate that uses it as its private key. Users can also get these privileges from ownership of the key or certificate. USAGE privileges on a certificate allow a user to perform the following actions:

    • View the contents of the certificate.

    • Add (with CREATE or ALTER) the certificate to a TLS Configuration.

    • Reuse the CA certificate when importing certificates signed by it. For example, if a user imports a chain of certificates A > B > C and has USAGE on B, the database reuses B (as opposed to creating a duplicate of B).

    • Specify that the CA certificate signed an imported certificate. For example, if certificate B signed certificate C, USAGE on B allows a user to import C and specify that it was SIGNED BY B.

  • DROP

  • ALTER: Allows a user to see the key and its associated certificates in their respective system tables, but not their contents.
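
For illustration, a minimal sketch of granting one of these privileges; the user name alice is hypothetical, and internode_key is the key from the earlier example:

=> GRANT USAGE ON KEY internode_key TO alice;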

429 - key-to-certificate-privs.md

430 - keystore-format-admonition.md

431 - kmeans-model-example.md

The following example creates k-means model myKmeansModel and applies it to input table iris1. The call to APPLY_KMEANS mixes column names and constants. When a constant is passed in place of a column name, the constant is substituted for the value of the column in all rows:

=> SELECT KMEANS('myKmeansModel', 'iris1', '*', 5
USING PARAMETERS max_iterations=20, output_view='myKmeansView', key_columns='id', exclude_columns='Species, id');
           KMEANS
----------------------------
 Finished in 12 iterations

(1 row)
=> SELECT id, APPLY_KMEANS(Sepal_Length, 2.2, 1.3, Petal_Width
USING PARAMETERS model_name='myKmeansModel', match_by_pos='true') FROM iris2;
 id  | APPLY_KMEANS
-----+--------------
   5 |            1
  10 |            1
  14 |            1
  15 |            1
  21 |            1
  22 |            1
  24 |            1
  25 |            1
  32 |            1
  33 |            1
  34 |            1
  35 |            1
  38 |            1
  39 |            1
  42 |            1
...
 (60 rows)

432 - kprototypes_snip.md

The following example creates k-prototypes model small_model and applies it to input table small_test_mixed:

=> SELECT KPROTOTYPES('small_model_initcenters', 'small_test_mixed', 'x0, country', 3 USING PARAMETERS initial_centers_table='small_test_mixed_centers', key_columns='pid');
      KPROTOTYPES
---------------------------
Finished in 2 iterations

(1 row)

=> SELECT country, x0, APPLY_KPROTOTYPES(country, x0
USING PARAMETERS model_name='small_model')
FROM small_test_mixed;
  country   | x0  | apply_kprototypes
------------+-----+-------------------
 'China'    |  20 |                 0
 'US'       |  85 |                 2
 'Russia'   |  80 |                 1
 'Brazil'   |  78 |                 1
 'US'       |  23 |                 0
 'US'       |  50 |                 0
 'Canada'   |  24 |                 0
 'Canada'   |  18 |                 0
 'Russia'   |  90 |                 2
 'Russia'   |  98 |                 2
 'Brazil'   |  89 |                 2
...
(45 rows)

433 - ksafety-levelof-projection2.md

Invalid for external tables, specifies K-safety of auto-projections created for this table, where k-num must be equal to or greater than system K-safety. If you omit this option, the projection uses the system K-safety level.

434 - kubernetes.md

Kubernetes only

435 - label-description.md

Assigns a label to a statement to identify it for profiling and debugging.
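
For example, a minimal sketch of labeling a query; the label and table name are hypothetical:

=> SELECT /*+LABEL(my_query)*/ COUNT(*) FROM t1;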

436 - label-hint.md

LABEL hints are valid in the following statements:

437 - lambda-contains-example.md

In the following example, a schedules table includes an array of events, where each event is a ROW with several fields:

=> CREATE TABLE schedules
       (guest VARCHAR,
       events ARRAY[ROW(e_date DATE, e_name VARCHAR, price NUMERIC(8,2))]);

You can use the CONTAINS function with a lambda expression to find people who have more than one event on the same day. The second argument, idx, is the index of the current element:

=> SELECT guest FROM schedules
WHERE CONTAINS(events, (e, idx) ->
                       (idx < ARRAY_LENGTH(events) - 1)
                       AND (e.e_date = events[idx + 1].e_date));
    guest
-------------
 Alice Adams
(1 row)

438 - ldap-anonymous-bind.md

By providing an empty string for the LDAPLinkBindPswd argument, you can also perform an anonymous bind if your LDAP server allows unauthenticated binds.

439 - ldap-anonymous-search.md

By providing an empty string for the LDAPLinkBindPswd argument, you can also perform an anonymous search if your LDAP server's Access Control List (ACL) is configured to allow unauthenticated searches. The settings for allowing anonymous binds are different from the ACL settings for allowing anonymous searches.

440 - ldap-dryrun-clerk.md

The dryrun and LDAP_LINK_SYNC_START functions must be run from the clerk node. To determine the clerk node, query NODE_RESOURCES:

=> SELECT node_name, dbclerk FROM node_resources WHERE dbclerk='t';
    node_name     | dbclerk
------------------+---------
 v_vmart_node0001 | t
(1 row)

441 - ldap-dryrun-connect-example.md

This tests the connection to an LDAP server at ldap://example.dc.com with the DN CN=amir,OU=QA,DC=dc,DC=com.

=> SELECT LDAP_LINK_DRYRUN_CONNECT('ldap://example.dc.com','CN=amir,OU=QA,DC=dc,DC=com','password');

                ldap_link_dryrun_connect
---------------------------------------------------------------------------------
Dry Run Connect Completed. Query v_monitor.ldap_link_dryrun_events for results.

To check the results of the bind, query the system table LDAP_LINK_DRYRUN_EVENTS.

=> SELECT event_timestamp, event_type, entry_name, link_scope, search_base from LDAP_LINK_DRYRUN_EVENTS;
        event_timestamp       |       event_type      |      entry_name      | link_scope | search_base
------------------------------+-----------------------+----------------------+------------+-------------
2019-12-09 15:41:43.589398-05 | BIND_STARTED          | -------------------- | ---------- | -----------
2019-12-09 15:41:43.590504-05 | BIND_FINISHED         | -------------------- | ---------- | -----------

442 - ldap-dryrun-search-example.md

This searches for users and groups in the LDAP server. In this case, the LDAPLinkSearchBase parameter specifies the dc.com domain and a sub scope, which replicates the entire subtree under the DN.

To further filter results, the function checks for users and groups with the person and group objectClass attributes. It then searches the group attribute cn, identifying members of that group with the member attribute, and then identifying those individual users with the attribute uid.

=> SELECT LDAP_LINK_DRYRUN_SEARCH('ldap://example.dc.com','CN=amir,OU=QA,DC=dc,DC=com','$vertica$','dc=DC,dc=com','sub',
'(objectClass=person)','(objectClass=group)','uid','cn','member',10,'dn');

                ldap_link_dryrun_search
--------------------------------------------------------------------------------
Dry Run Search Completed. Query v_monitor.ldap_link_dryrun_events for results.

To check the results of the search, query the system table LDAP_LINK_DRYRUN_EVENTS.

=> SELECT event_timestamp, event_type, entry_name, ldapurihash, link_scope, search_base from LDAP_LINK_DRYRUN_EVENTS;
        event_timestamp          |    event_type    |       entry_name       | ldapurihash | link_scope | search_base
---------------------------------+------------------+------------------------+-------------+------------+--------------
2020-01-03 21:03:26.411753+05:30 | BIND_STARTED     | ---------------------- |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:26.422188+05:30 | BIND_FINISHED    | ---------------------- |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:26.422223+05:30 | SYNC_STARTED     | ---------------------- |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:26.422229+05:30 | SEARCH_STARTED   | **********             |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:32.043107+05:30 | LDAP_GROUP_FOUND | Account Operators      |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:32.04312+05:30  | LDAP_GROUP_FOUND | Administrators         |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:32.043182+05:30 | LDAP_USER_FOUND  | user1                  |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:32.043186+05:30 | LDAP_USER_FOUND  | user2                  |           0 | sub        | dc=DC,dc=com
2020-01-03 21:03:32.04319+05:30  | SEARCH_FINISHED  | **********             |           0 | sub        | dc=DC,dc=com

443 - ldap-dryrun-sync-example.md

To perform a dry run to map the users and groups returned from LDAP_LINK_DRYRUN_SEARCH, pass the same parameters as arguments to LDAP_LINK_DRYRUN_SYNC.

=> SELECT LDAP_LINK_DRYRUN_SYNC('ldap://example.dc.com','CN=amir,OU=QA,DC=dc,DC=com','$vertica$','dc=DC,dc=com','sub',
'(objectClass=person)','(objectClass=group)','uid','cn','member',10,'dn');

                          LDAP_LINK_DRYRUN_SYNC
------------------------------------------------------------------------------------------
Dry Run Connect and Sync Completed. Query v_monitor.ldap_link_dryrun_events for results.

To check the results of the sync, query the system table LDAP_LINK_DRYRUN_EVENTS.

=> SELECT event_timestamp, event_type, entry_name, ldapurihash, link_scope, search_base from LDAP_LINK_DRYRUN_EVENTS;
        event_timestamp          |     event_type      |       entry_name       | ldapurihash | link_scope | search_base
---------------------------------+---------------------+------------------------+-------------+------------+--------------
2020-01-03 21:08:30.883783+05:30 | BIND_STARTED        | ---------------------- |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:30.890574+05:30 | BIND_FINISHED       | ---------------------- |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:30.890602+05:30 | SYNC_STARTED        | ---------------------- |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:30.890605+05:30 | SEARCH_STARTED      | **********             |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939369+05:30 | LDAP_GROUP_FOUND    | Account Operators      |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939395+05:30 | LDAP_GROUP_FOUND    | Administrators         |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939461+05:30 | LDAP_USER_FOUND     | user1                  |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939463+05:30 | LDAP_USER_FOUND     | user2                  |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939468+05:30 | SEARCH_FINISHED     | **********             |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939718+05:30 | PROCESSING_STARTED  | **********             |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939887+05:30 | USER_CREATED        | user1                  |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939895+05:30 | USER_CREATED        | user2                  |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939949+05:30 | ROLE_CREATED        | Account Operators      |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.939959+05:30 | ROLE_CREATED        | Administrators         |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.940603+05:30 | PROCESSING_FINISHED | **********             |           0 | sub        | dc=DC,dc=com
2020-01-03 21:08:31.940613+05:30 | SYNC_FINISHED       | ---------------------- |           0 | sub        | dc=DC,dc=com

444 - ldaptls-contexts.md

Vertica establishes a connection to an LDAP server in two contexts, and each context has a corresponding TLS Configuration that controls whether that connection uses TLS:

  1. LDAPLink: using the LDAPLink service or its dry run functions to synchronize users and groups between Vertica and the LDAP server.

  2. LDAPAuth: when a user with an ldap authentication method attempts to log into Vertica, Vertica attempts to bind the user to a matching user in the LDAP server. If the bind succeeds, Vertica allows the user to log in.

Query TLS_CONFIGURATIONS to view existing TLS Configurations:

=> SELECT * FROM tls_configurations WHERE name IN ('LDAPLink', 'LDAPAuth');
   name   |  owner  | certificate | ca_certificate | cipher_suites |  mode
----------+---------+-------------+----------------+---------------+----------
 LDAPLink | dbadmin | client_cert | ldap_ca        |               | VERIFY_CA
 LDAPAuth | dbadmin | client_cert | ldap_ca        |               | DISABLE
(2 rows)

445 - library-metadata-details.md

You declare the metadata for your library by calling the RegisterLibrary() function in one of the source files for your UDx. If the source files for your UDx contain more than one call to RegisterLibrary(), the call that is interpreted last as Vertica Analytic Database loads the library determines the library's metadata.

The RegisterLibrary() function takes eight string parameters:

RegisterLibrary(author,
                library_build_tag,
                library_version,
                library_sdk_version,
                source_url,
                description,
                licenses_required,
                signature);
  • author contains whatever name you want associated with the creation of the library (your own name or your company's name for example).

  • library_build_tag is a string you want to use to represent the specific build of the library (for example, the SVN revision number or a timestamp of when the library was compiled). This is useful for tracking instances of your library as you are developing them.

  • library_version is the version of your library. You can use whatever numbering or naming scheme you want.

  • library_sdk_version is the version of the Vertica Analytic Database SDK Library for which you've compiled the library.

  • source_url is a URL where users of your function can find more information about it. This can be your company's website, the GitHub page hosting your library's source code, or whatever site you like.

  • description is a concise description of your library.

  • licenses_required is a placeholder for licensing information. You must pass an empty string for this value.

  • signature is a placeholder for a signature that will authenticate your library. You must pass an empty string for this value.

446 - library-metadata-lead-in.md

You can add metadata, such as author name, the version of the library, a description of your library, and so on to your library. This metadata lets you track the version of your function that is deployed on a Vertica Analytic Database cluster and lets third-party users of your function know who created the function. Your library's metadata appears in the USER_LIBRARIES system table after your library has been loaded into the Vertica Analytic Database catalog.
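
For example, after the library is loaded, you could inspect its recorded metadata with a query along these lines (the column selection is illustrative):

=> SELECT lib_name, author, description FROM USER_LIBRARIES;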

447 - licenses-api.md

POST licenses Uploads and applies a new license to this cluster.
GET licenses Returns the license field that databases created on this cluster use.

448 - live-aggregate-projection_snip.md

A live aggregate projection contains columns with values that are aggregated from columns in its anchor table. When you load data into the table, Vertica aggregates the data before loading it into the live aggregate projection. On subsequent loads—for example, through INSERT or COPY—Vertica recalculates aggregations with the new data and updates the projection.

449 - load-balance-group-policies.md

  • ROUNDROBIN (default) rotates among the available members of the load balancing group. The initially-contacted node keeps track of which node it chose last time, and chooses the next one in the cluster.

  • RANDOM chooses an available node from the group randomly.

  • NONE disables load balancing.

450 - load-sample-data-ml.md

Before you begin the example, load the Machine Learning sample data.

451 - load-streams-managing.md

ACCEPTED_ROW_COUNT
Increases during parsing, up to the maximum number of rows in the input file.

PARSE_COMPLETE_PERCENT
Remains zero (0) until all named pipes return an EOF. While COPY awaits an EOF from multiple pipes, it can appear to be hung. However, before canceling the COPY statement, check your system CPU and disk accesses to determine if any activity is in progress.

In a typical load, the PARSE_COMPLETE_PERCENT value can either increase slowly or jump quickly to 100%, if you are loading from named pipes or STDIN.

SORT_COMPLETE_PERCENT
Remains at 0 when loading from named pipes or STDIN. After PARSE_COMPLETE_PERCENT reaches 100 percent, SORT_COMPLETE_PERCENT increases to 100 percent.

Depending on the data sizes, a significant lag can occur between the time PARSE_COMPLETE_PERCENT reaches 100 percent and the time SORT_COMPLETE_PERCENT begins to increase.
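
For example, while a load is running you could monitor these columns with a query along these lines (the column selection is illustrative):

=> SELECT table_name, accepted_row_count, parse_complete_percent, sort_complete_percent
   FROM load_streams;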

452 - log-c-.md

    virtual void processBlock(ServerInterface &srvInterface,
                              BlockReader &argReader,
                              BlockWriter &resWriter)
    {
        try {
            // While we have inputs to process
            do {
                if (argReader.isNull(0) || argReader.isNull(1)) {
                    resWriter.setNull();
                } else {
                    const vint a = argReader.getIntRef(0);
                    const vint b = argReader.getIntRef(1);
                    srvInterface.log("got a: %d and b: %d", (int) a, (int) b);
                    resWriter.setInt(a+b);
                }
                resWriter.next();
            } while (argReader.next());
        } catch(std::exception& e) {
            // Standard exception. Quit.
            vt_report_error(0, "Exception while processing block: [%s]", e.what());
        }
    }

453 - log-java.md

        @Override
        public void processBlock(ServerInterface srvInterface,
                                 BlockReader argReader,
                                 BlockWriter resWriter)
                    throws UdfException, DestroyInvocation
        {
            do {
                // Get the two integer arguments from the BlockReader
                long a = argReader.getLong(0);
                long b = argReader.getLong(1);

                // Log the input values
                srvInterface.log("Got values a=%d and b=%d", a, b);

                long result = a+b;
                resWriter.setLong(result);
                resWriter.next();
            } while (argReader.next());
        }
    }

454 - log-python.md

    def processBlock(self, server_interface, arg_reader, res_writer):
        server_interface.log("Python UDx - Adding 2 ints!")
        while(True):
            first_int = arg_reader.getInt(0)
            second_int = arg_reader.getInt(1)
            res_writer.setInt(first_int + second_int)
            server_interface.log("Values: first_int is {} second_int is {}".format(first_int, second_int))
            res_writer.next()
            if not arg_reader.next():
                break

455 - logrotate-affected-files

If the following files exceed the specified maximum size, they are rotated:

  • vertica.log
  • UDxFencedProcesses.log
  • MemoryReport.log
  • editor.log
  • dbLog

Rotated files are compressed and marked with a timestamp in the same location as the original log file: path/to/logfile.log.timestamp.gz. For example, /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log is rotated to /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-08-14-09-02-381909-05.gz.

If a log file was rotated, the previously rotated logs (.gz) in that directory are checked against the specified maximum age. Rotated logs older than the maximum age are deleted.

456 - logrotate-manual-example

To rotate logs that are larger than 1 kilobyte, and then remove rotated logs that are older than 1 day:

=> SELECT do_logrotate_local('max_size=1K;max_age=1 day');

                                             do_logrotate_local
-----------------------------------------------------------------------------------------------------------
 Doing Logrotate
Considering file /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log
File size: 35753 Bytes
Force rotate? no
Renaming to /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-08-13-55-51-651129-05
Opening new log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log
Compressing /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-08-13-55-51-651129-05 to /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-08-13-55-51-651129-05.gz
Done with /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log

Considering file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log
File size: 68 Bytes
Force rotate? no
Rotation not required for file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log
Done with /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log
(1 row)

To force rotation and then remove all logs older than 4 days (default value of LogRotateMaxAge):

=> SELECT do_logrotate_local('force;max_age=4 days');

                                             do_logrotate_local
-----------------------------------------------------------------------------------------------------------
 Doing Logrotate
Considering file /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log
File size: 4310245 Bytes
Force rotate? yes
Renaming to /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-10-13-45-15-53837-05
Opening new log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log
Compressing /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-10-13-45-15-53837-05 to /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log.2023-11-10-13-45-15-53837-05.gz
Done with /scratch_b/qa/VMart/v_vmart_node0001_catalog/vertica.log

Considering file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log
File size: 68 Bytes
Force rotate? yes
Remove old log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-06-13-18-27-23141-05.gz
Remove old log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-07-13-18-30-059008-05.gz
Remove old log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-08-13-47-11-707903-05.gz
Remove old log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-09-14-09-02-386402-05.gz
Renaming to /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-10-13-45-15-647762-05
Opening new log file /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log
Compressing /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-10-13-45-15-647762-05 to /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log.2023-11-10-13-45-15-647762-05.gz
Done with /scratch_b/qa/VMart/v_vmart_node0001_catalog/UDxLogs/UDxFencedProcesses.log


(1 row)

457 - long-data-types-rejected-due-to-encoding-note.md

458 - low-cache-setting.md

If sequence caching is set to a low number, nodes are liable to request a new set of cache values more frequently. While Vertica supplies a new cache, it must lock the catalog. Until Vertica releases the lock, other database activities such as table inserts are blocked, which can adversely affect overall performance.
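
For example, a minimal sketch that raises the cache size of a hypothetical sequence my_seq so nodes request new cache values less often:

=> ALTER SEQUENCE my_seq CACHE 500000;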

459 - m-ldailytemp-dataset-desc.md

This example uses the daily-min-temperatures dataset, which contains data on the daily minimum temperature in Melbourne, Australia from 1981 through 1990:

460 - manage-cluster-steps-mc.md

Managing your cluster in MC

  1. On the MC home page, click View Infrastructure. MC displays the Database and Cluster View. This view shows your infrastructure platform, cluster, and database.

  2. On the left side of the screen next to Clusters, click the square for the cluster you want to manage. MC displays a window with your cluster name, an information summary, and several buttons.

  3. Click Manage. The Cluster page displays.

  4. On the Cluster page, you can view the following information:

    • The instances in your cluster in visual format.

    • The status of each instance, such as whether it is running.

    • The private and public IP address for each cluster instance.

    • The Vertica version that is running, your region, and your instance type in the Cluster pane.

461 - managing-workloads_snip.md

Vertica's resource management scheme allows diverse, concurrent workloads to run efficiently on the database. For basic operations, Vertica pre-configures the built-in GENERAL pool based on RAM and machine cores. You can customize the GENERAL pool to handle specific concurrency requirements.

You can also define new resource pools that you configure to limit memory usage, concurrency, and query priority. You can then optionally assign each database user to use a specific resource pool, which controls memory resources used by their requests.

User-defined pools are useful if you have competing resource requirements across different classes of workloads. Example scenarios include:

  • A large batch job takes up all server resources, leaving small jobs that update a web page without enough resources. This can degrade user experience.

    In this scenario, create a resource pool to handle web page requests and ensure users get resources they need. Another option is to create a limited resource pool for the batch job, so the job cannot use up all system resources.

  • An application has lower priority than other applications and you want to limit the amount of memory and number of concurrent users for the low-priority application.

    In this scenario, create a resource pool with an upper limit on the query's memory and associate the pool with users of the low-priority application.
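
For illustration, a minimal sketch of the second scenario, using hypothetical names and settings:

=> CREATE RESOURCE POOL report_pool MEMORYSIZE '4G' MAXCONCURRENCY 5;
=> ALTER USER report_user RESOURCE POOL report_pool;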

462 - match-by-position-parameter.md

Boolean value that specifies how input columns are matched to model features:

  • true: Match by the position of columns in the input columns list.

  • false (default): Match by name.

463 - max-breadth.md

Integer in the range [1, 1e9], the maximum number of leaf nodes a tree can have.

Default: 32

464 - max-depth.md

Integer in the range [1, 100], the maximum depth for growing each tree. For example, a max_depth of 0 represents a tree with only a root node, and a max_depth of 2 represents a tree with four leaf nodes.

Default: 5

465 - max-iterations.md

INTEGER in the range (0, 1000000), the maximum number of training iterations. Training stops if either the number of iterations exceeds max_iterations or if the difference between the actual and predicted values is less than or equal to epsilon.

Default: 100

466 - max-message-bytes-description.md

Specifies the maximum size, in bytes, of a Kafka protocol batch message.

Default: 25165824

467 - max-min.md

Coordinates of the minimum bounding rectangle (MBR) of the indexed geometries. (min_x, min_y) are the south-west coordinates, and (max_x, max_y) are the north-east coordinates.

468 - max-model-size-kb-note.md

The machine learning configuration parameter MaxModelSizeKB sets the maximum size of a model that can be imported into Vertica.

469 - mc-certificates-wizard.md

The MC Certificates wizard lets you configure a CA certificate for the Vertica database server and client certificates for MC to allow secure TLS communication over the JDBC connections between MC and the Vertica database server. Each screen presents options. When you select an option, the wizard displays additional options and details.

  1. The first wizard screen provides helpful overview information. Read it, and click Configure TLS Certificates to continue.

  2. On the Configure CA Certificates screen, configure a CA certificate (public key) to add to MC. MC uses this trusted certificate to verify the server's identity during TLS communications over JDBC connections between MC and the Vertica database server.

    Complete one of these options:

    • Upload a new CA certificate: Browse and select the certificate file, and enter an alias for this certificate.

      • To add another CA certificate, click Add More CA Certificates.

      • Continue adding additional CA certificates until you are finished.

    • Choose a certificate alias from previously uploaded certificates: Select the alias for the previously uploaded CA certificate that you want to configure for the current database.

  3. When you are done adding CA certificates, click Next.

  4. The Configure Client Certificate screen displays the check box Add Client Certificate and Private Key for Mutual Mode TLS Connection.

  5. If the database is configured for server mode, you do not need a client certificate or key.

    • Leave the Add Client Certificate check box unchecked and click Review.

    • Skip to step 10.

  6. If the database is configured for mutual mode:

    • Click the Add Client Certificate check box.

    • Select one of the options below.

      • Upload Client Certificate and Private Key files on MC (shown above). MC uses its https connection from the browser to MC's host to upload the files.

        • To add an additional client certificate and create a certificate chain, click Add Certificate to Chain. MC reinitializes the Client Certificate file field so you can add another certificate. After you add the last certificate path, click Next.

        • To upload an existing certificate chain file, click Browse next to the Upload Client Certificate/Certificate chain file field, select the file, and click Open.

      • Manually upload client Certificate and Private Key on MC host and provide paths: Avoids sending the encrypted certificate and private key files over an https connection. To add an additional path for a client certificate and create a certificate chain, click Add More Certificate Paths. MC reinitializes the path field so you can add another path. After you add the last certificate path, click Next.

      • Choose Client Certificate and Private Key alias of a previously uploaded keypair to use for this database: Use this option to reuse existing certificate and key files.

  7. Complete the detail fields for the client certificate and private key option you have chosen above, then click Next.

  8. The Apply TLS configuration to MC users mapped to database window allows you to configure the client certificate-key pair you have just entered, for use by multiple MC users.

  9. Click Review. The wizard displays a review window with the TLS options you have configured.

  10. Select one of these options:

    • To modify your TLS choices, click Back.

    • To confirm your choices:

      • If you are importing a database, click Configure TLS and Import DB.

      • If you are configuring TLS for a database already imported to MC, click Configure TLS for DB.

      • Click Close to complete the wizard.

    • To close the wizard without importing the database and without setting up TLS configuration, click Cancel.

470 - mc-client-sessions.md

Each client session to MC uses a connection from MaxClientSessions, a database configuration parameter. This parameter determines the maximum number of sessions that can run on a single database cluster node. Sometimes multiple MC users, mapped to the same database account, are concurrently monitoring the Overview and Activity pages.
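
If MC monitoring sessions exhaust this limit, a superuser can raise it. The value below is illustrative only:

=> ALTER DATABASE DEFAULT SET PARAMETER MaxClientSessions = 62;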

471 - mc-eon-aws-steps.md

  1. Log in to the Vertica Management Console.

  2. On the Management Console home page, select Create new database.

  3. On Database Storage Mode, click Eon Mode.

  4. Click Next. On Vertica License, select one of the following license mode options:

    • Community Edition: A free Vertica license to preview Vertica functionality. This license provides limited features. If you use a Community Edition license for your deployment, you can upgrade the license later to expand your cluster load. See Managing licenses for more information.

    • Premium Edition: Use your Vertica license. After you select this option, click Browse to locate and upload your Vertica license key file, or manually enter it in the field.

472 - mc-general-features-platform-availability.md

Some of the features on this page are currently available in MC only on AWS and GCP.

473 - mc-intro-blurb.md

Management Console (MC) is the Vertica in-browser monitoring and management tool. Its graphical user interface provides a unified view of your Vertica database operations. Through user-friendly, step-by-step screens, you can create, configure, manage, and monitor your Vertica databases and their associated clusters. You can use MC to operate your Vertica database in Eon Mode or in Enterprise Mode. You can use MC to provision and deploy a Vertica Eon Mode database.

474 - mc-node-actions-platform-coverage-enterprise.md

For Enterprise Mode databases, MC supports these actions:

  • In the cloud on AWS: Add Node action, Add Instance action.

  • On-premises: Add Node action.

475 - mc-node-start-stop-on-prem.md

When you start or stop a node on-premises, MC starts or stops the node in the database, but does not start or stop the Vertica host machine. The Restart Node action is not available for on-premises Eon Mode databases.

476 - mc-scale-up-down-subcluster-summary.md

When you scale up a subcluster on-premises, the MC wizard displays a list of the available Vertica host machines that are not currently part of a database. You select the ones you want to add to the subcluster as nodes, then confirm that you want to scale up the subcluster.

When you scale down a subcluster on-premises, MC removes the nodes from the subcluster in the database, but does not terminate the Vertica host machines. The hosts are now available for scaling up other subclusters.

477 - mc-self-signed-certs-and-time-sync.md

When you connect to MC through a client browser, Vertica assigns each HTTPS request a self-signed certificate, which includes a timestamp. To increase security and protect against password replay attacks, the timestamp is valid for several seconds only, after which it expires.

To avoid being locked out of MC, synchronize the time on the hosts in your Vertica cluster, and on the MC host if it resides on a dedicated server. To recover from a loss or lack of synchronization, resynchronize the system time using the Network Time Protocol (NTP).

478 - mc-start-stop-subcluster-on-prem-summary.md

When you start or stop a subcluster in an Eon Mode database on-premises, MC starts or stops the subcluster nodes on the Vertica host machines, but not the machines themselves.

479 - mc-subcluster-actions-platform-coverage.md

For Eon Mode databases, MC supports actions for subcluster and node management for the following public and private cloud providers:

480 - mc-subclusters-note-if-adding-second-primary.md

You can add a second primary subcluster in order to stop the original primary. Be sure to make the replacement primary subcluster at least one node larger than the original. (If you make them the same size, they both count equally toward the quorum, and stopping either would violate the quorum, so you cannot stop either one. For more information, see Data integrity and high availability in an Eon Mode database.)

481 - mco-auth-sslhttps.md

Management Console (MC) uses a combination of OAuth (Open Authorization), Secure Socket Layer (SSL), and locally-encrypted passwords to secure HTTPS requests between a user's browser and MC, and between MC and the agents. Authentication occurs through MC and between agents within the cluster. Agents also authenticate and authorize jobs.

The MC configuration process sets up SSL automatically, but you must have the openssl package installed on your Linux environment first.

482 - memory-poller-malloc-bloat-threshold.md

The memory poller calls glibc function malloc_info() to obtain the amount of free memory in malloc. It then compares MemoryPollerMallocBloatThreshold—by default, set to 0.3—with the following expression:

free-memory-in-malloc / RSS

If this expression evaluates to a value higher than MemoryPollerMallocBloatThreshold, the memory poller calls glibc function malloc_trim(). This function reclaims free memory from malloc and returns it to the operating system. Details on calls to malloc_trim() are written to system table MEMORY_EVENTS.
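
A superuser can tune this threshold and then review trim activity in the MEMORY_EVENTS system table. A minimal sketch; the threshold value shown is illustrative:

=> ALTER DATABASE DEFAULT SET PARAMETER MemoryPollerMallocBloatThreshold = 0.5;
=> SELECT * FROM MEMORY_EVENTS;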

483 - memory-poller-report-threshold.md

The memory poller compares MemoryPollerReportThreshold with the following expression:

RSS / available-memory

If this expression evaluates to a value higher than MemoryPollerReportThreshold (by default, set to 0.93), the memory poller writes a report to MemoryReport.log, in the Vertica working directory. This report includes information about Vertica memory pools, how much memory is consumed by individual queries and sessions, and so on. The memory poller also logs the report as an event in system table MEMORY_EVENTS, where it sets EVENT_TYPE to MEMORY_REPORT.
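
For example, you can check for report events in MEMORY_EVENTS. The query shape below is a sketch; only the EVENT_TYPE value comes from the description above:

=> SELECT * FROM MEMORY_EVENTS WHERE event_type = 'MEMORY_REPORT';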

484 - memory-poller-trim-treshold.md

The memory poller compares MemoryPollerTrimThreshold (by default, set to 0.83) with the following expression:

RSS / available-memory

If this expression evaluates to a value higher than MemoryPollerTrimThreshold, then the memory poller starts checking the next threshold—set in MemoryPollerMallocBloatThreshold—for glibc memory bloat.

485 - merge-out-interval.md

  1. If the queue contains mergeout requests, the Tuple Mover does nothing and goes back to sleep.

  2. If the queue is empty, the Tuple Mover:

    • Processes pending storage location move requests.

    • Checks for new unqueued purge requests and adds them to the queue.

    It then goes back to sleep.

486 - merge-restrictions_snip.md

Constraint enforcement

If primary key, unique key, or check constraints are enabled for automatic enforcement in the target table, Vertica enforces those constraints when you load new data. If a violation occurs, Vertica rolls back the operation and returns an error.

Columns prohibited from merge

The following columns cannot be specified in a merge operation; attempts to do so return an error:

  • IDENTITY columns, or columns whose default value is set to a named sequence.

  • Vmap columns such as __raw__ in flex tables.

  • Columns of complex types ARRAY, SET, or ROW.

487 - mergeout_snip.md

Mergeout is a Tuple Mover process that consolidates ROS containers and purges deleted records.

488 - message-c-.md

Before calling a ServerInterface reporting method, construct and populate a message with the ClientMessage class.

The ServerInterface API provides the following methods for reporting messages:


// ClientMessage methods
template<typename... Argtypes>
static ClientMessage makeMessage(int errorcode, const char *fmt, Argtypes&&... args);

template <typename... Argtypes>
ClientMessage & setDetail(const char *fmt, Argtypes&&... args);

template <typename... Argtypes>
ClientMessage & setHint(const char *fmt, Argtypes&&... args);

// ServerInterface reporting methods
virtual void reportError(ClientMessage msg);

virtual void reportInfo(ClientMessage msg);

virtual void reportNotice(ClientMessage msg);

virtual void reportWarning(ClientMessage msg);

489 - message-python.md

The ServerInterface API provides the following methods for reporting messages:


def reportError(self, code, text, hint='', detail=''):

def reportInfo(self, code, text, hint='', detail=''):

def reportNotice(self, code, text, hint='', detail=''):

def reportWarning(self, code, text, hint='', detail=''):

490 - meta-function-restrictions.md

This is a meta-function. You must call meta-functions in a top-level SELECT statement.

491 - microbatch-description.md

A microbatch represents an individual segment of a data load from a Kafka stream. Microbatches combine the definitions for your cluster, source, target, and load spec that you create using the other vkconfig tools. The scheduler uses all of the information in the microbatch to execute COPY statements using the KafkaSource UDL function to transfer data from Kafka to Vertica. The statistics on each microbatch's load are stored in the stream_microbatch_history table.

492 - min-info-gain.md

Float in the range [0.0, 1.0), the minimum threshold for including a split. A split with information gain less than this threshold is discarded.

Default: 0.0

493 - min-leaf-size.md

Integer in the range [1, 1e6], the minimum number of samples each branch must have after splitting a node. A split that results in fewer remaining samples in its left or right branch is discarded, and the node is treated as a leaf node.

Default: 1

494 - missing-ser-de-error.md

=> SELECT * FROM hcat.jsontable;
ERROR 3399:  Failure in UDx RPC call InvokePlanUDL(): Error in User Defined
Object [VHCatSource], error code: 0
com.vertica.sdk.UdfException: Error message is [
org.apache.hcatalog.common.HCatException : 2004 : HCatOutputFormat not
initialized, setOutput has to be called. Cause : java.io.IOException:
java.lang.RuntimeException:
MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
SerDe com.cloudera.hive.serde.JSONSerDe does not exist) ] HINT If error
message is not descriptive or local, may be we cannot read metadata from hive
metastore service thrift://hcathost:9083 or HDFS namenode (check
UDxLogs/UDxFencedProcessesJava.log in the catalog directory for more information)
at com.vertica.hcatalogudl.HCatalogSplitsNoOpSourceFactory
.plan(HCatalogSplitsNoOpSourceFactory.java:98)
at com.vertica.udxfence.UDxExecContext.planUDSource(UDxExecContext.java:898)
. . .

495 - model-name-parameter.md

Name of the model (case-insensitive).

496 - model-privileges-super-owner-usage.md

Non-superusers: model owner, or USAGE privileges on the model

497 - modify-authentication-examples.md

Enabling and Disabling Authentication Methods

This example uses ALTER AUTHENTICATION to disable the v_ldap authentication method and then enable it again:

=> ALTER AUTHENTICATION v_ldap DISABLE;
=> ALTER AUTHENTICATION v_ldap ENABLE;

Renaming Authentication Methods

This example renames the v_kerberos authentication method to K5. All users who have been granted the v_kerberos authentication method now have the K5 method granted instead.

=> ALTER AUTHENTICATION v_kerberos RENAME TO K5;

Modifying Authentication Parameters

This example sets the system user for ident1 authentication to user1:

=> CREATE AUTHENTICATION ident1 METHOD 'ident' LOCAL;
=> ALTER AUTHENTICATION ident1 SET system_users='user1';

When you set or modify LDAP or Ident parameters using ALTER AUTHENTICATION, Vertica validates them.

This example changes the IP address and specifies the parameters for an LDAP authentication method named Ldap1. Specify the bind parameters for the LDAP server. Vertica connects to the LDAP server, which authenticates the database client. If authentication succeeds, Vertica authenticates any users who have been associated with (granted) the Ldap1 authentication method on the designated LDAP server:

=> CREATE AUTHENTICATION Ldap1 METHOD 'ldap' HOST '172.16.65.196';

=> ALTER AUTHENTICATION Ldap1 SET host='ldap://172.16.65.177',
   binddn_prefix='cn=', binddn_suffix=',dc=qa_domain,dc=com';

The next example specifies the parameters for an LDAP authentication method named Ldap2. Specify the LDAP search and bind parameters. Sometimes, Vertica does not have enough information to create the distinguished name (DN) for a user attempting to authenticate. In such cases, you must specify to use LDAP search and bind:

=> CREATE AUTHENTICATION Ldap2 METHOD 'ldap' HOST '172.16.65.196';
=> ALTER AUTHENTICATION Ldap2 SET basedn='dc=qa_domain,dc=com',
   binddn='cn=Manager,dc=qa_domain,
   dc=com',search_attribute='cn',bind_password='secret';

Changing the Authentication Method

This example changes the localpwd authentication from hash to trust:

=> CREATE AUTHENTICATION localpwd METHOD 'hash' LOCAL;
=> ALTER AUTHENTICATION localpwd METHOD 'trust';

498 - monitor-storage-locations.md

Monitoring storage locations

For information about the disk storage used on each node, query the DISK_STORAGE system table.
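
For example (the selected columns are typical of DISK_STORAGE but may vary by version):

=> SELECT node_name, storage_path, disk_space_used_mb, disk_space_free_mb
   FROM DISK_STORAGE ORDER BY node_name;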

499 - monitoring-disk-space.md

You can use these system tables to monitor disk space usage on your cluster:

| System table | Description |
|---|---|
| DISK_STORAGE | Monitors the amount of disk storage used by the database on each node. |
| COLUMN_STORAGE | Monitors the amount of disk storage used by each column of each projection on each node. |
| PROJECTION_STORAGE | Monitors the amount of disk storage used by each projection on each node. |

500 - move-partitions-create-table.md

If the table does not exist, Vertica creates a table from the source table's definition, by calling CREATE TABLE with the LIKE and INCLUDING PROJECTIONS clauses. The new table inherits ownership from the source table. For details, see Replicating a table.

501 - moving-average-overview.md

Moving average models use the errors of previous predictions to make future predictions. More specifically, the user-specified lag determines how many previous predictions and errors it takes into account during computation.

502 - multi-level-dependencies.md

Multi-level library dependencies

If a DEPENDS clause specifies a library with multiple directory levels, Vertica follows the library path to include all subdirectories of that library. For example, the following CREATE LIBRARY statement enables the UDx library mylib to import all Python packages and modules that it finds in subdirectories of site-packages:

=> CREATE LIBRARY mylib AS '/path/to/python_udx' DEPENDS '/path/to/python/site-packages' LANGUAGE 'Python';

503 - multi-phase-transform-function-factory-c-.md

The MultiPhaseTransformFunctionFactory class extends TransformFunctionFactory. The API provides the following additional methods for extension by subclasses:

virtual void getPhases(ServerInterface &srvInterface,
        std::vector< TransformFunctionPhase * > &phases)=0;

If using this factory you must also extend TransformFunctionPhase. See the SDK reference documentation.

504 - multi-phase-transform-function-factory-java.md

The MultiPhaseTransformFunctionFactory class extends TransformFunctionFactory. The API provides the following methods for extension by subclasses:

public abstract void getPhases(ServerInterface srvInterface,
        Vector< TransformFunctionPhase > phases);

If using this factory you must also extend TransformFunctionPhase. See the SDK reference documentation.

505 - multi-phase-transform-function-factory-python.md

The MultiPhaseTransformFunctionFactory class extends TransformFunctionFactory. For each phase, the factory must define a class that extends TransformFunctionPhase.

The factory adds the following method:


def getPhase(cls, srv)

TransformFunctionPhase has the following methods:


def createTransformFunction(cls, srv)

def getReturnType(self, srv_interface, input_types, output_types)

506 - multiple-communal-dropped.md

If a storage location has been dropped since the backup was taken, the restore operation attempts to reinstate the dropped location before restoring the data. If the dropped storage location cannot be reinstated, its associated data is copied to the main communal storage location.

507 - mutual-tweak-warning.md

508 - n-bins.md

An integer value that determines the number of decision boundaries. Decision boundaries are set at equally spaced intervals between 0 and 1, inclusive. The function computes the table at each of the num-bin + 1 points.

Default: 100

509 - n-definition.md

Point in the spatial object to be returned. The index is one-based and of type INTEGER.

510 - namespace-3-part-schema.md

{ database | namespace }
Name of the database or namespace that contains schema:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

schema
Name of the schema, by default public.

If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.

511 - namespace-3-part-table.md

{ database | namespace }
Name of the database or namespace that contains table:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

schema
Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.

If you specify a schema name that contains a period, the part before the period cannot be the same as the name of an existing namespace.
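
For example, assuming a hypothetical Eon Mode namespace analytics_ns that contains schema public and table sales, a fully qualified reference looks like this:

=> SELECT COUNT(*) FROM analytics_ns.public.sales;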

512 - namespace-3-part.md

  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

513 - namespace-ilm-source.md

{ namespace. | database. }
Name of the database or namespace that contains table:
  • Database name: If specified, it must be the current database.

  • Namespace name (Eon Mode only): You must specify the namespace of objects in non-default namespaces. If no namespace is provided, Vertica assumes the object is in the default namespace.

For Eon Mode databases, the namespaces of staging-table and target-table must have the same shard count.

schema
Name of the schema, by default public. If you specify the namespace or database name, you must provide the schema name, even if the schema is public.

514 - namespace-name-table.md

For Eon Mode databases, name of the namespace that contains the table.

515 - namespace-name.md

For Eon Mode databases, name of the namespace that contains the schema.

516 - namespace-naming.md

517 - namespace-schema-name.md

If the schema belongs to a non-default namespace in an Eon Mode database, the schema name is front-qualified with the name of the schema's namespace. For example, n1.s refers to schema s in namespace n1.

518 - native-c-onnection-load-balancing-intro.md

Native connection load balancing helps spread the overhead caused by client connections on the hosts in the Vertica database. Both the server and the client must enable native connection load balancing. If enabled by both, then when the client initially connects to a host in the database, the host picks a host to handle the client connection from a list of the currently up hosts in the database, and informs the client which host it has chosen.

If the initially-contacted host does not choose itself to handle the connection, the client disconnects, then opens a second connection to the host selected by the first host. The connection process to this second host proceeds as usual—if SSL is enabled, then SSL negotiations begin, otherwise the client begins the authentication process. See About native connection load balancing for details.

519 - native-connection-load-balancing-options.md

  • NONE (default): Disables native connection load balancing.

  • ROUNDROBIN: Chooses the next host from a circular list of hosts in the cluster that are up—for example, in a three-node cluster, iterates over node1, node2, and node3, then wraps back to node1. Each host in the cluster maintains its own pointer to the next host in the circular list, rather than there being a single cluster-wide state.

  • RANDOM: Randomly chooses a host from among all hosts in the cluster that are up.
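
To apply one of these policies on the server, a minimal sketch using the SET_LOAD_BALANCE_POLICY function:

=> SELECT SET_LOAD_BALANCE_POLICY('ROUNDROBIN');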

520 - nested-stored-procedure.md

The example uses a set of procedures that convert a given Fahrenheit temperature to Celsius and Kelvin and returns them all as a result set. The procedure f_to_c_and_k() calls the helper procedures f_to_c() and f_to_k() to convert to Celsius and Kelvin, respectively. The f_to_k() procedure uses the output of f_to_c() for part of the conversion:

=> CREATE PROCEDURE f_to_c_and_k(IN f_temp DOUBLE PRECISION, OUT c_temp DOUBLE PRECISION, OUT k_temp DOUBLE PRECISION) AS $$
BEGIN
    c_temp := CALL f_to_c(f_temp);
    k_temp := CALL f_to_k(f_temp);
END;
$$;

=> CREATE PROCEDURE f_to_c(INOUT temp DOUBLE PRECISION) AS $$
BEGIN
    temp := (temp - 32) * 5/9;
END;
$$;

=> CREATE PROCEDURE f_to_k(INOUT temp DOUBLE PRECISION) AS $$
BEGIN
    temp := CALL f_to_c(temp);
    temp := temp + 273.15;
END;
$$;

=> CALL f_to_c_and_k(80);
      c_temp      |      k_temp
------------------+------------------
 26.6666666666667 | 299.816666666667
(1 row)

521 - network-timeout-description.md

Configures the network timeout in seconds. Defaults to its maximum value of 300 seconds.

522 - new-snippet.md

'^(?<time>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d+)
 (?<thread_name>[A-Za-z ]+):(?<thread_id>(?:0x)?[0-9a-f]+)
-?(?<transaction_id>[0-9a-f])?(?:[(?<component>\w+)]
\<(?<level>\w+)\> )?(?:<(?<elevel>\w+)> @[?(?<enode>\w+)]?: )
?(?<text>.*)'

523 - nfs-storage-locations-note.md

While no technical issue prevents you from using CREATE LOCATION to add one or more Network File System (NFS) storage locations, Vertica does not support NFS data or catalog storage except for MapR mount points. You will be unable to run queries against any other NFS data. When creating locations on MapR file systems, you must specify ALL NODES SHARED.
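
For example, a sketch of creating a shared location on a hypothetical MapR mount point (path is illustrative):

=> CREATE LOCATION '/mapr/my.cluster.com/vertica/data' ALL NODES SHARED USAGE 'DATA';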

524 - no-params-export-all-objects.md

If you omit all arguments, this function exports to standard output all objects to which you have access.

525 - no-sandbox-revive.md

526 - node_snip.md

A host configured to run an instance of Vertica. It is a member of the database cluster. For a database to recover from the failure of a node, the database must have a K-safety value of at least 1 (3+ nodes).

527 - node-manage-overview-mc.md

Node management

You add or delete nodes by scaling subclusters up or down. You can also start, stop, and restart nodes. For details, see the following:

528 - node-states_snip.md

  • UP

  • DOWN

  • READY

  • UNSAFE

  • SHUTDOWN

  • SHUTDOWN_ERROR

  • RECOVERING

  • RECOVER_ERROR

  • RECOVERED

  • INITIALIZING

  • STANDBY

  • NEEDS_CATCHUP

529 - node-type.md

  • PERMANENT (default): A node that stores data.

  • EPHEMERAL: A node that is in transition from one type to another—typically, from PERMANENT to either STANDBY or EXECUTE.

  • STANDBY: A node that is reserved to replace any node when it goes down. A standby node stores no segments or data until it is called to replace a down node. When used as a replacement node, Vertica changes its type to PERMANENT. For more information, see Active standby nodes.

  • EXECUTE: A node that is reserved for computation purposes only. An execute node contains no segments or data.

530 - nodes-api.md

| Endpoint | Description |
|---|---|
| GET nodes | Returns a list of nodes in this cluster. |
| GET nodes/:nodeid | Returns details for a specific node in this cluster. |

531 - nodeselector.md

List of label key/value pairs that restrict Vertica pod scheduling to nodes with matching labels. For details, see the Kubernetes documentation.

532 - non-super-user-grant-option.md

Non-superuser, one of the following:

533 - normalize-fit-model-example.md

The following example creates a model with NORMALIZE_FIT using the wt and hp columns in table mtcars, and then uses this model in successive calls to APPLY_NORMALIZE and REVERSE_NORMALIZE.

=> SELECT NORMALIZE_FIT('mtcars_normfit', 'mtcars', 'wt,hp', 'minmax');
NORMALIZE_FIT
---------------
Success
(1 row)

The following call to APPLY_NORMALIZE specifies the hp and cyl columns in table mtcars, where hp is in the normalization model and cyl is not in the normalization model:

=> CREATE TABLE mtcars_normalized AS SELECT APPLY_NORMALIZE (hp, cyl USING PARAMETERS model_name = 'mtcars_normfit') FROM mtcars;
CREATE TABLE
=> SELECT * FROM mtcars_normalized;
          hp        | cyl
--------------------+-----
  0.434628975265018 | 8
  0.681978798586572 | 8
  0.434628975265018 | 6
                  1 | 8
  0.540636042402827 | 8
                  0 | 4
  0.681978798586572 | 8
 0.0459363957597173 | 4
  0.434628975265018 | 8
  0.204946996466431 | 6
  0.250883392226148 | 6
  0.049469964664311 | 4
  0.204946996466431 | 6
  0.201413427561837 | 4
  0.204946996466431 | 6
  0.250883392226148 | 6
  0.049469964664311 | 4
  0.215547703180212 | 4
 0.0353356890459364 | 4
  0.187279151943463 | 6
  0.452296819787986 | 8
  0.628975265017668 | 8
  0.346289752650177 | 8
  0.137809187279152 | 4
  0.749116607773852 | 8
  0.144876325088339 | 4
  0.151943462897526 | 4
  0.452296819787986 | 8
  0.452296819787986 | 8
  0.575971731448763 | 8
  0.159010600706714 | 4
  0.346289752650177 | 8
(32 rows)

=> SELECT REVERSE_NORMALIZE (hp, cyl USING PARAMETERS model_name='mtcars_normfit') FROM mtcars_normalized;
  hp | cyl
-----+-----
 175 | 8
 245 | 8
 175 | 6
 335 | 8
 205 | 8
  52 | 4
 245 | 8
  65 | 4
 175 | 8
 110 | 6
 123 | 6
  66 | 4
 110 | 6
 109 | 4
 110 | 6
 123 | 6
  66 | 4
 113 | 4
  62 | 4
 105 | 6
 180 | 8
 230 | 8
 150 | 8
  91 | 4
 264 | 8
  93 | 4
  95 | 4
 180 | 8
 180 | 8
 215 | 8
  97 | 4
 150 | 8
(32 rows)

The following call to REVERSE_NORMALIZE also specifies the hp and cyl columns in table mtcars, where hp is in normalization model mtcars_normfit, and cyl is not in the normalization model.

=> SELECT REVERSE_NORMALIZE (hp, cyl USING PARAMETERS model_name='mtcars_normfit') FROM mtcars_normalized;
       hp        | cyl
-----------------+-----
205.000005722046 |   8
150.000000357628 |   8
150.000000357628 |   8
93.0000016987324 |   4
 174.99999666214 |   8
94.9999992102385 |   4
214.999997496605 |   8
97.0000009387732 |   4
245.000006556511 |   8
 174.99999666214 |   6
             335 |   8
245.000006556511 |   8
62.0000002086163 |   4
 174.99999666214 |   8
230.000002026558 |   8
              52 |   4
263.999997675419 |   8
109.999999523163 |   6
123.000002324581 |   6
64.9999996386468 |   4
66.0000005029142 |   4
112.999997898936 |   4
109.999999523163 |   6
180.000000983477 |   8
180.000000983477 |   8
108.999998658895 |   4
109.999999523163 |   6
104.999999418855 |   6
123.000002324581 |   6
180.000000983477 |   8
66.0000005029142 |   4
90.9999999701977 |   4
(32 rows)

534 - normalized-time-series-required.md

This algorithm expects a stationary time series as input; using a time series with a mean that shifts over time may lead to weaker results.

535 - notifier-tls-config.md

The TLS CONFIGURATION to use for TLS.

Notifiers support the following TLS modes:

  • DISABLE

  • TRY_VERIFY (behaves like VERIFY_CA)

  • VERIFY_CA

  • VERIFY_FULL

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

536 - notifiercabundle.md

Specifies a CA bundle. The certificates inside the bundle are used to validate the Kafka server's certificate if the TLSMODE requires it.

If a CA bundle is specified for a notifier that currently uses disable, which doesn't validate the Kafka server's certificate, the bundle will go unused when connecting to the Kafka server. This behavior persists unless the TLSMODE is changed to one that validates server certificates.

Changes to contents of the CA bundle take effect either after the notifier is disabled and re-enabled or after the database restarts. However, changes to which CA bundle the notifier uses take effect immediately.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

537 - notifiercertificate.md

Specifies a client certificate for validation by the endpoint.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

538 - notifiertlsmode.md

Specifies the type of connection between the notifier and an endpoint, one of the following:

  • disable (default): Plaintext connection.

  • verify-ca: Encrypted connection, and the server's certificate is verified as being signed by a trusted CA.

If you set this parameter to verify-ca, the generated TLS Configuration will be set to TRY_VERIFY, which has the same behavior as VERIFY_CA.

If the notifier ACTION is 'syslog' or 'sns', this parameter has no effect.

To encrypt messages sent to syslog, you must configure syslog for TLS.

To encrypt messages sent to an SNS endpoint, you must set the following configuration parameters:

  • SNSCAFile or AWSCAFile

  • SNSCAPath or AWSCAPath

  • SNSEnableHttps

539 - ntree.md

Integer in the range [1, 1000], the number of trees in the forest.

Default: 20

540 - null-handling-arrays.md

Null semantics for collections are consistent with normal columns in most regards. See NULL sort order for more information on null-handling.

The null-safe equality operator (<=>) behaves differently from equality (=) when the collection is null rather than empty. Comparing a collection to NULL with = returns null, while <=> returns a Boolean result:

=> SELECT ARRAY[1,3] = NULL;
?column?
----------

(1 row)

=> SELECT ARRAY[1,3] <=> NULL;
 ?column?
----------
 f
(1 row)

In the following example, the grants column in the table is null for employee 99:

=> SELECT grants = NULL FROM employees WHERE id=99;
 ?column?
----------

(1 row)

=> SELECT grants <=> NULL FROM employees WHERE id=99;
 ?column?
----------
 t
(1 row)

Empty collections are not null and behave as expected:

=> SELECT ARRAY[]::ARRAY[INT] = ARRAY[]::ARRAY[INT];
 ?column?
----------
 t
(1 row)

Collections are compared element by element. If a comparison depends on a null element, the result is unknown (null), not false. For example, ARRAY[1,2,null]=ARRAY[1,2,null] and ARRAY[1,2,null]=ARRAY[1,2,3] both return null, but ARRAY[1,2,null]=ARRAY[1,4,null] returns false because the second elements do not match.
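
The following queries illustrate these element-by-element rules:

=> SELECT ARRAY[1,2,null] = ARRAY[1,2,3];
 ?column?
----------

(1 row)

=> SELECT ARRAY[1,2,null] = ARRAY[1,4,null];
 ?column?
----------
 f
(1 row)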

541 - null-handlingfor-array-aggregation.md

The following cases return NULL:

  • if the input collection is NULL

  • if the input collection contains only null values

  • if the input collection is empty

542 - numeric-response-column-cast2-varchar.md

543 - o-auth-authentication-records.md

The following authentication record v_oauth authenticates users from any IP address by contacting the identity provider to validate the OAuth token (rather than a username and password) and uses the following parameters:

  • validate_type: The method used to validate the OAuth token. This should be set to IDP (default) to validate the OAuth token for confidential clients.

  • client_id: The client in the identity provider.

  • client_secret: The client secret generated by the identity provider. This is required if validate_type is IDP.

  • discovery_url: Also known as the OpenID Provider Configuration Document, Vertica uses this endpoint to retrieve information about the identity provider's configuration and other endpoints (Keycloak only).

  • introspect_url: Used by Vertica to introspect (validate) access tokens. You must specify the introspect_url if you do not specify the discovery_url and are not using JWT validation.

If discovery_url and introspect_url are both set, discovery_url takes precedence. The following example sets both for demonstration purposes; in general, you should prefer to set the discovery_url:

=> CREATE AUTHENTICATION v_oauth METHOD 'oauth' HOST '0.0.0.0/0';
=> ALTER AUTHENTICATION v_oauth SET validate_type = 'IDP';
=> ALTER AUTHENTICATION v_oauth SET client_id = 'vertica';
=> ALTER AUTHENTICATION v_oauth SET client_secret = 'client_secret';
=> ALTER AUTHENTICATION v_oauth SET discovery_url = 'https://203.0.113.1:8443/realms/myrealm/.well-known/openid-configuration';
=> ALTER AUTHENTICATION v_oauth SET introspect_url = 'https://203.0.113.1:8443/realms/myrealm/protocol/openid-connect/token/introspect';

Alternatively, if your identity provider supports the OpenID Connect protocol and your client is public, Vertica can use JWT validation, where Vertica validates OAuth tokens by verifying that they were signed by the identity provider's private key.

Vertica does not contact the identity provider for JWT validation.

JWT validation requires the following parameters:

  • validate_type: The method used to validate the OAuth token. This should be set to JWT to validate the OAuth token for public clients.

  • jwt_rsa_public_key: In PEM format, the public key used to sign the client's OAuth token. Vertica uses this to validate the OAuth token. If your identity provider does not natively provide PEM-formatted public keys, you must convert them to PEM format. For example, keys retrieved from an Okta endpoint are in JWK format and must be converted.

  • jwt_issuer: The issuer of the OAuth token. For Keycloak, this is the token endpoint.

  • jwt_user_mapping: The name of the Vertica user.

You can also specify the following parameters to define a whitelist based on fields of the OAuth token:

  • jwt_accepted_audience_list: Optional, a comma-delimited list of values to accept from the client JWT's aud field. If set, tokens must include in aud one of the accepted audiences to authenticate.

  • jwt_accepted_scope_list: Optional, a comma-delimited list of values to accept from the client JWT's scope field. If set, tokens must include in scope at least one of the accepted scopes to authenticate.

The following authentication record v_oauth_jwt authenticates users from any IP address by verifying that the client's OAuth token was signed by the identity provider's private key. It also requires the user to provide the proper values in the token's aud and scope fields:

=> CREATE AUTHENTICATION v_oauth_jwt METHOD 'oauth' HOST '0.0.0.0/0';
=> ALTER AUTHENTICATION v_oauth_jwt SET validate_type = 'JWT';
=> ALTER AUTHENTICATION v_oauth_jwt SET jwt_rsa_public_key = '-----BEGIN PUBLIC KEY-----public-key-value-----END PUBLIC KEY-----';
=> ALTER AUTHENTICATION v_oauth_jwt SET jwt_issuer = 'http://10.20.30.40:8080/realms/realm_name';
=> ALTER AUTHENTICATION v_oauth_jwt SET jwt_user_mapping = 'preferred_username';
=> ALTER AUTHENTICATION v_oauth_jwt SET jwt_accepted_audience_list = 'vertica,local';
=> ALTER AUTHENTICATION v_oauth_jwt SET jwt_accepted_scope_list = 'email,profile,user';

544 - o-auth-introspection-failure.md

In cases where introspection fails (for example, when the access token expires), Vertica responds to the request with an error. If introspection fails and OAuthRefreshToken is specified, the driver attempts to refresh and silently retrieve a new access token. Otherwise, the driver passes the error to the client application.

545 - o-auth-jit-description.md

  1. Creates the user if they do not exist in the database. The username must conform to conventions described in Identifiers.

  2. If you configure automatic role assignment, Vertica identifies roles that exist in both the user or group identity and the database and grants each role to the Vertica database user as a default role.

  3. Grants to the user the authentication record that is associated with the IdP if the user or role does not already have a grant on that authentication record.

546 - o-auth-login-time-persistence.md

547 - oauth-accesstoken-requirement.md

Either OAuthAccessToken or OAuthRefreshToken must be set (programmatically or manually) to authenticate to Vertica with OAuth authentication.

You can omit both OAuthAccessToken and OAuthRefreshToken only if you authenticate to your identity provider directly with single sign-on through the client driver, which requires the machine running the ODBC driver to have access to a web browser.

For details on the different methods for retrieving access tokens, see Retrieving access tokens.

548 - object-k-safety-precedence-settings.md

Specifies K-safety for the projection. The value must be equal to or greater than database K-safety. Vertica ignores this option if set for unsegmented projections.

If you omit this clause, Vertica uses database K-safety.

For general information, see K-safety in an Enterprise Mode database.
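
A minimal sketch of setting the clause on a projection, assuming a hypothetical table sales segmented by column sales_id:

=> CREATE PROJECTION sales_ksafe AS
     SELECT * FROM sales
     SEGMENTED BY HASH(sales_id) ALL NODES KSAFE 1;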

549 - on-partition-range.md

Limits projection data to a range of partition keys. The minimum value must be less than or equal to the maximum value.

Values can be NULL. A null minimum value indicates no lower bound and a null maximum value indicates no upper bound. If both are NULL, this statement drops the projection endpoints, producing a regular projection instead of a range projection.
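
A sketch of a range projection on a hypothetical table sales partitioned on a date column; NULL as the maximum leaves the range unbounded above:

=> CREATE PROJECTION sales_recent AS
     SELECT * FROM sales
     ON PARTITION RANGE BETWEEN '2024-01-01' AND NULL;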

550 - op-version-upgrade.md

VerticaDB operator versions 1.x are namespace-scoped, and versions 2.x are cluster-scoped. To upgrade from version 1.x to 2.x, you must uninstall the operator in each namespace before you upgrade.

551 - operator-automated-tasks.md

  • Installs Vertica

  • Creates an Eon Mode database

  • Upgrades Vertica

  • Revives an existing Eon Mode database

  • Restarts and reschedules DOWN pods

  • Scales subclusters

  • Manages services for pods

  • Monitors pod health

  • Handles load balancing for internal and external traffic

552 - operator-limitation-AT.md

553 - optimizer-generated-dq-example.md


=> CREATE DIRECTED QUERY OPTIMIZER 'findEmployeesCityJobTitle_OPT'
     SELECT employee_first_name, employee_last_name FROM public.employee_dimension
     WHERE employee_city='Boston' and job_title='Cashier' ORDER BY employee_last_name, employee_first_name;
CREATE DIRECTED QUERY

=> ACTIVATE DIRECTED QUERY findEmployeesCityJobTitle_OPT;
ACTIVATE DIRECTED QUERY

554 - or-replace-user-defined-function.md

If a function with the same name and arguments exists, replace it. You can use this to change between fenced and unfenced modes, for example. If you do not use this directive and the function already exists, the CREATE statement returns with a rollback error.

OR REPLACE and IF NOT EXISTS are mutually exclusive.

555 - orc-and-parquet-license-description.md

Vertica licenses can include a raw data allowance. Since 2016, Vertica licenses have allowed you to use ORC and Parquet data in external tables. This data has always counted against any raw data allowance in your license. Previously, the audit of data in ORC and Parquet format was handled manually. Because this audit was not automated, the total amount of data in your native tables and external tables could exceed your licensed allowance for some time before being spotted.

Starting in version 9.1.0, Vertica automatically audits ORC and Parquet data in external tables. This auditing begins soon after you install or upgrade to version 9.1.0. If your Vertica license includes a raw data allowance and you have data in external tables based on Parquet or ORC files, review your license compliance before upgrading to Vertica 9.1.x. Verifying your database is compliant with your license terms avoids having your database become non-compliant soon after you upgrade.

556 - orc-parquet-restriction.md

Invalid for ORC or Parquet data

557 - orc-parquet-supported-types.md

Vertica can natively read columns of all Hive primitive data types. For a complete list, see HIVE Data Types (specifically the Numeric, Date/Time, String, and Misc lists). Vertica can also load UUIDs and complex types (arrays and structs in any combination).

The data types you specify for COPY or CREATE EXTERNAL TABLE AS COPY must exactly match the types in the data, though Vertica permits implicit casting among compatible numeric types.
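
For example, a sketch of an external table over hypothetical Parquet files, with column types matching the data exactly:

=> CREATE EXTERNAL TABLE ext_sales (id INT, amount FLOAT)
   AS COPY FROM 's3://examplebucket/sales/*.parquet' PARQUET;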

558 - over-clause.md

559 - overwrite.md

Boolean, whether to overwrite the index, if an index exists. This parameter cannot be NULL.

Default: False

560 - parameter-enable.md

Changes to this parameter take effect either after the notifier is disabled and reenabled or after the database restarts.

561 - parser-factory-c-.md

The ParserFactory API provides the following methods for extension by subclasses:

virtual void plan(ServerInterface &srvInterface, PerColumnParamReader &perColumnParamReader, PlanContext &planCtxt);

virtual UDParser * prepare(ServerInterface &srvInterface, PerColumnParamReader &perColumnParamReader,
            PlanContext &planCtxt, const SizedColumnTypes &returnType)=0;

virtual void getParameterType(ServerInterface &srvInterface, SizedColumnTypes &parameterTypes);

virtual void getParserReturnType(ServerInterface &srvInterface, PerColumnParamReader &perColumnParamReader,
            PlanContext &planCtxt, const SizedColumnTypes &argTypes,
            SizedColumnTypes &returnType);

virtual bool isParserApportionable();

// C++ API only:
virtual bool isChunkerApportionable(ServerInterface &srvInterface);

virtual UDChunker * prepareChunker(ServerInterface &srvInterface, PerColumnParamReader &perColumnParamReader,
            PlanContext &planCtxt, const SizedColumnTypes &returnType);

If you are using Apportioned load to divide a single input into multiple load streams, implement isParserApportionable() and/or isChunkerApportionable() and return true. Returning true from these methods does not guarantee that Vertica will apportion the load. However, returning false from both indicates that it will not try to do so.

If you are using Cooperative parse, implement prepareChunker() and return an instance of your UDChunker subclass. Cooperative parse is supported only for the C++ API.

Vertica calls the prepareChunker() method only for unfenced functions. This method is not available when you use the function in fenced mode.

If you want your chunker to be available for apportioned load, implement isChunkerApportionable() and return true.

After creating your ParserFactory, you must register it with the RegisterFactory macro.

562 - parser-factory-java.md

The ParserFactory API provides the following methods for extension by subclasses:

public void plan(ServerInterface srvInterface, PerColumnParamReader perColumnParamReader, PlanContext planCtxt)
    throws UdfException;

public abstract UDParser prepare(ServerInterface srvInterface, PerColumnParamReader perColumnParamReader,
                PlanContext planCtxt, SizedColumnTypes returnType)
    throws UdfException;

public void getParameterType(ServerInterface srvInterface, SizedColumnTypes parameterTypes);

public void getParserReturnType(ServerInterface srvInterface, PerColumnParamReader perColumnParamReader,
                PlanContext planCtxt, SizedColumnTypes argTypes, SizedColumnTypes returnType)
    throws UdfException;

563 - parser-factory-python.md

The ParserFactory API provides the following methods for extension by subclasses:

class PyParserFactory(vertica_sdk.SourceFactory):
    def __init__(self):
        pass
    def plan(self):
        pass
    def prepareUDSources(self, srvInterface):
        # The user implements this function to create a PyUDParser.
        pass

564 - parser-parameter-reject-on-materialized-type-error.md

Boolean, whether to reject a data row that contains a materialized column value that cannot be coerced into a compatible data type. If the value is false and the type cannot be coerced, the parser sets the value in that column to NULL.

If the column is an array and the data to be loaded is too large, then false sets the column value to NULL and true rejects the row.

If the column is a strongly-typed complex type, as opposed to a flexible complex type, then a type mismatch anywhere in the complex type causes the entire column to be treated as a mismatch. The parser does not partially load complex types.

565 - partitioned-object-stores.md

For external tables using highly partitioned data in an object store, see the ObjectStoreGlobStrategy configuration parameter and Partitions on Object Stores.

566 - password-authentication.md

When you create a new user with the CREATE USER command, you can configure the password or leave it empty. You cannot bypass the password if the user was created with a password configured. You can change a user's password using the ALTER USER command.
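
For example, with a hypothetical user and passwords (values are illustrative only):

=> CREATE USER analyst IDENTIFIED BY 'P@ssw0rd1';
=> ALTER USER analyst IDENTIFIED BY 'N3wP@ssw0rd';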

567 - pca-model-example.md


=> SELECT PCA ('pcamodel', 'world','country,HDI,em1970,em1971,em1972,em1973,em1974,em1975,em1976,em1977,
em1978,em1979,em1980,em1981,em1982,em1983,em1984 ,em1985,em1986,em1987,em1988,em1989,em1990,em1991,em1992,
em1993,em1994,em1995,em1996,em1997,em1998,em1999,em2000,em2001,em2002,em2003,em2004,em2005,em2006,em2007,
em2008,em2009,em2010,gdp1970,gdp1971,gdp1972,gdp1973,gdp1974,gdp1975,gdp1976,gdp1977,gdp1978,gdp1979,gdp1980,
gdp1981,gdp1982,gdp1983,gdp1984,gdp1985,gdp1986,gdp1987,gdp1988,gdp1989,gdp1990,gdp1991,gdp1992,gdp1993,
gdp1994,gdp1995,gdp1996,gdp1997,gdp1998,gdp1999,gdp2000,gdp2001,gdp2002,gdp2003,gdp2004,gdp2005,gdp2006,
gdp2007,gdp2008,gdp2009,gdp2010' USING PARAMETERS exclude_columns='HDI,country');
PCA
---------------------------------------------------------------
Finished in 1 iterations.
Accepted Rows: 96  Rejected Rows: 0
(1 row)
=> CREATE TABLE worldPCA AS SELECT
APPLY_PCA (HDI,country,em1970,em1971,em1972,em1973,em1974,em1975,em1976,em1977,em1978,em1979,
em1980,em1981,em1982,em1983,em1984 ,em1985,em1986,em1987,em1988,em1989,em1990,em1991,em1992,em1993,em1994,
em1995,em1996,em1997,em1998,em1999,em2000,em2001,em2002,em2003,em2004,em2005,em2006,em2007,em2008,em2009,
em2010,gdp1970,gdp1971,gdp1972,gdp1973,gdp1974,gdp1975,gdp1976,gdp1977,gdp1978,gdp1979,gdp1980,gdp1981,gdp1982,
gdp1983,gdp1984,gdp1985,gdp1986,gdp1987,gdp1988,gdp1989,gdp1990,gdp1991,gdp1992,gdp1993,gdp1994,gdp1995,
gdp1996,gdp1997,gdp1998,gdp1999,gdp2000,gdp2001,gdp2002,gdp2003,gdp2004,gdp2005,gdp2006,gdp2007,gdp2008,
gdp2009,gdp2010 USING PARAMETERS model_name='pcamodel', exclude_columns='HDI, country', key_columns='HDI,
country',cutoff=.3)OVER () FROM world;
CREATE TABLE

=> SELECT * FROM worldPCA;
HDI   |       country       |       col1
------+---------------------+-------------------
0.886 | Belgium             |  79002.2946705704
0.699 | Belize              | -25631.6670012556
0.427 | Benin               | -40373.4104598122
0.805 | Chile               | -16805.7940082156
0.687 | China               | -37279.2893141103
0.744 | Costa Rica          | -19505.5631231635
0.4   | Cote d'Ivoire       | -38058.2060339272
0.776 | Cuba                | -23724.5779612041
0.895 | Denmark             |  117325.594028813
0.644 | Egypt               | -34609.9941604549
...
(96 rows)

=> SELECT APPLY_INVERSE_PCA (HDI, country, col1
    USING PARAMETERS model_name = 'pcamodel', exclude_columns='HDI,country',
    key_columns = 'HDI, country') OVER () FROM worldPCA;
HDI  |       country       |      em1970       |      em1971       |      em1972      |      em1973      |
      em1974      |      em1975       |      em1976|      em1977      |      em1978       |      em1979
   |      em1980       |      em1981      |      em1982       |      em1983       |      em1984       |em1985
|      em1986       |      em1987       |      em1988       |      em1989      |      em1990      |      em1991
|      em1992       |      em1993|      em1994      |      em1995       |      em1996       |      em1997
    |      em1998       |      em1999       |      em2000       |      em2001       |em2002       |
em2003      |      em2004       |      em2005      |      em2006       |      em2007       |      em2008
|      em2009      |      em2010       |     gdp1970      |     gdp1971      |     gdp1972      |     gdp1973
|     gdp1974      |     gdp1975      |     gdp1976      |     gdp1977      |gdp1978      |     gdp1979
 |     gdp1980      |     gdp1981      |     gdp1982      |     gdp1983      |     gdp1984      |     gdp1985
      |     gdp1986|    gdp1987      |     gdp1988      |     gdp1989      |     gdp1990      |     gdp1991
     |     gdp1992      |     gdp1993      |     gdp1994      |     gdp1995      |     gdp1996      |
gdp1997      |     gdp1998      |     gdp1999      |     gdp2000      |     gdp2001      |     gdp2002
|     gdp2003      |gdp2004      |     gdp2005      |     gdp2006      |     gdp2007      |     gdp2008
  |     gdp2009      |     gdp2010
-------+---------------------+-------------------+-------------------+------------------+------------------
+------------------+-------------------+------------------+------------------+-------------------+---------
----------+-------------------+------------------+-------------------+-------------------+-----------------
--+------------------+-------------------+-------------------+-------------------+------------------+-------
-----------+------------------+-------------------+-------------------+------------------+------------------
-+-------------------+------------------+-------------------+-------------------+-------------------+-------
------------+--------------------+------------------+-------------------+------------------+----------------
---+-------------------+-------------------+------------------+-------------------+------------------+------
------------+------------------+------------------+------------------+------------------+------------------+
------------------+------------------+------------------+------------------+------------------+-------------
-----+------------------+------------------+------------------+------------------+------------------+-------
-----------+------------------+------------------+------------------+------------------+------------------+-
-----------------+------------------+------------------+------------------+------------------+--------------
----+------------------+------------------+------------------+------------------+------------------+--------
----------+------------------+------------------+------------------+------------------+------------------
0.886 | Belgium             |  18585.6613572407 | -16145.6374560074 |  26938.956253415 | 8094.30475779595 |
 12073.5461203817 | -11069.0567600181 | 19133.8584911727|   5500.312894949 | -4227.94863799987 |  6265.77925410752
|  -10884.749295608 | 30929.4669575201 | -7831.49439429977 |  3235.81760508742 | -22765.9285442662 | 27200
.6767714485 | -10554.9550160917 |   1169.4144482273 | -16783.7961289161 | 27932.2660829329 | 17227.9083196848
| 13956.0524012749 | -40175.6286481088 | -10889.4785920499 | 22703.6576872859 | -14635.5832197402 |
2857.12270512168 | 20473.5044214494 | -52199.4895696423 | -11038.7346460738 |  18466.7298633088 | -17410.4225137703 |
-3475.63826305462 | 29305.6753822341 |   1242.5724942049 | 17491.0096310849 | -12609.9984515902 | -17909.3603476248
|  6276.58431412381 | 21851.9475485178 | -2614.33738160397 | 3777.74134131349 | 4522.08854282736 | 4251.90446379366
| 4512.15101396876 | 4265.49424538129 | 5190.06845330997 | 4543.80444817989 | 5639.81122679089 | 4420.44705213467
|  5658.8820279283 | 5172.69025294376 | 5019.63640408663 | 5938.84979495903 | 4976.57073629812 | 4710.49525137591
| 6523.65700286465 | 5067.82520773578 | 6789.13070219317 | 5525.94643553563 | 6894.68336419297 | 5961.58442474331
| 5661.21093840818 | 7721.56088518218 |  5959.7301109143 | 6453.43604137202 | 6739.39384033096 | 7517.97645468455
| 6907.49136910647 | 7049.03921764209 | 7726.49091035527 | 8552.65909911844 | 7963.94487647115 | 7187.45827585515
| 7994.02955410523 | 9532.89844418041 | 7962.25713582666 | 7846.68238907624 | 10230.9878908643 | 8642.76044946519
| 8886.79860331866 |  8718.3731386891
...
(96 rows)

568 - permissions-for-createudfunction-statements.md

  • CREATE privilege on the function's schema

  • USAGE privilege on the function's library

569 - permissions-superuser-only.md

Privileges

Superuser

570 - pinning-policy-on-subcluster.md

Name of a depot subcluster; use default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.

571 - place-skip-nonindexable-polygons.md

(Optional) BOOLEAN

In rare cases, intricate polygons (for instance, with too high resolution or anomalous spikes) cannot be indexed. These polygons are considered non-indexable. When set to False, non-indexable polygons cause the index creation to fail. When set to True, index creation can succeed by excluding non-indexable polygons from the index.

To review the polygons that could not be indexed, use STV_Describe_Index with the parameter list_polygon.

Default: False

572 - pls-description.md

Combining aspects of PCA (principal component analysis) and linear regression, the PLS regression algorithm extracts a set of latent components that explain as much covariance as possible between the predictor and response variables, and then performs a regression that predicts response values using the extracted components.

This technique is particularly useful when the number of predictor variables is greater than the number of observations or the predictor variables are highly collinear. If either of these conditions is true of the input relation, ordinary linear regression fails to converge to an accurate model.

573 - polygons.md

Number of polygons indexed.

574 - ports-table.md

| Port | Protocol | Service | Notes |
|---|---|---|---|
| 22 | TCP | sshd | Required by Administration tools and the Management Console Cluster Installation wizard. |
| 4803 | TCP | Spread | Client connections |
| 4803 | UDP | Spread | Daemon-to-daemon connections |
| 4804 | UDP | Spread | Daemon-to-daemon connections |
| 5433 | TCP | Vertica | Vertica clients (vsql, ODBC, JDBC, etc.) |
| 5433 | UDP | Vertica | Vertica Spread monitoring and MC cluster import |
| 5434 | TCP | Vertica | Intra- and inter-cluster communication. Vertica opens the Vertica client port +1 (5434 by default) for intra-cluster communication, such as during a plan. If the port +1 from the default client port is not available, then Vertica opens a random port for intra-cluster communication. |
| 5444 | TCP | Vertica Management Console | MC-to-node and node-to-node (agent) communications port. See Changing MC or agent ports. |
| 5450 | TCP | Vertica Management Console | Port used to connect to MC from a web browser and allows communication from nodes to the MC application/web server. See Connecting to Management Console. |
| 5554 | TCP | Node Management Agent | Node Management Agent |
| 6543 | UDP | Spread | Monitor-to-daemon connection |
| 8443 | TCP | HTTPS | HTTPS service. To change the port, use HTTPServerPortOffset. |

575 - predictor-collumn-boolean-float-values.md

Comma-separated list of columns in the input relation that represent independent variables for the model, or asterisk (*) to select all columns. If you select all columns, the argument list for parameter exclude_columns must include response-column, and any columns that are invalid as predictor columns.

All predictor columns must be of type numeric or BOOLEAN; otherwise the model is invalid.

576 - predictor-columns-param.md

Comma-separated list of columns in the input relation that represent independent variables for the model, or asterisk (*) to select all columns. If you select all columns, the argument list for parameter exclude_columns must include response-column, and any columns that are invalid as predictor columns.

577 - preferred-address-family-ado.md

The IP version to use if the client and server have both IPv4 and IPv6 addresses and you have provided a host name. Valid values are:

  • Ipv4—Connect to the server using IPv4.

  • Ipv6—Connect to the server using IPv6.

  • None—Use the IP address provided by the DNS server.

578 - preferredf-address-family.md

The IP version to use if the client and server have both IPv4 and IPv6 addresses and you have provided a host name, one of the following:

  • ipv4: Connect to the server using IPv4.

  • ipv6: Connect to the server using IPv6.

  • none: Use the IP address provided by the DNS server.

579 - prevent-tm-running-on-subcluster-example.md

=> ALTER RESOURCE POOL TM FOR SUBCLUSTER dashboard MEMORYSIZE '0%'
   MAXMEMORYSIZE '0%';

580 - priorityclass-name.md

The PriorityClass name assigned to pods in the StatefulSet. This affects where the pod gets scheduled.

For details, see the Kubernetes documentation.

581 - privileges-input-relation-output.md

Non-superusers:

  • SELECT privileges on the input relation

  • CREATE privileges on the output view schema

582 - privileges-model-input-relation-output.md

Non-superusers:

  • CREATE privileges on the schema where the model is created

  • SELECT privileges on the input relation

  • CREATE privileges on the output view schema

583 - privileges-model-input-relation.md

Non-superusers:

  • CREATE privileges on the schema where the model is created

  • SELECT privileges on the input relation

584 - probability-normalization-parameter.md

The classifier's normalization method, either softmax (multi-class classifier) or logit (binary classifier). If unspecified, the default logit function is used for normalization.

585 - probe-settings.md

For a detailed list of the available probe settings, see the Kubernetes documentation.

586 - proc-oid.md

A unique identifier assigned to a tokenizer when it is created. Users must query the system table vs_procedures to get the proc_oid for a given tokenizer name. See Configuring a tokenizer for more information.
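
For example, a query along these lines returns the identifier, assuming a tokenizer registered as MyTokenizer (the name is illustrative):

=> SELECT proc_oid FROM vs_procedures WHERE procedure_name = 'MyTokenizer';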

587 - producer-overview-bullets.md

  • Vertica analytics results. Use KafkaExport to export Vertica tables and queries.

  • Health and performance data from Data Collector tables. Create push-based notifiers to send this data for consumption by third-party monitoring tools.

  • Ad hoc messages. Use NOTIFY to signal that tasks such as stored procedures are complete (see the sketch after this list).
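
A minimal sketch of such an ad hoc message, assuming a notifier named my_notifier and a target topic named vertica_notifications already exist:

=> SELECT NOTIFY('Nightly load complete', 'my_notifier', 'vertica_notifications');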

588 - profile-parameter-table.md

Name Description
name

The name of the profile to create, where name conforms to conventions described in Identifiers.

To modify the default profile, set name to default. For example:

ALTER PROFILE DEFAULT LIMIT PASSWORD_MIN_SYMBOLS 1;

PASSWORD_LIFE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days a password remains valid.

  • UNLIMITED: Password remains valid indefinitely.

After your password's lifetime and grace period expire, you must change your password on your next login, if you have not done so already.

PASSWORD_MIN_LIFE_TIME

Set to an integer value, one of the following:

  • Default: 0

  • ≥ 1: The number of days a password must be set before it can be changed

  • UNLIMITED: Password can be reset at any time.

PASSWORD_GRACE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days a password can be used after it expires.

  • UNLIMITED: No grace period.

FAILED_LOGIN_ATTEMPTS

Set to an integer value, one of the following:

  • ≥ 1: The number of consecutive failed login attempts Vertica allows before locking your account.

  • UNLIMITED: Vertica allows an unlimited number of failed login attempts.

PASSWORD_LOCK_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days (units configurable with PasswordLockTimeUnit) a user's account is locked after FAILED_LOGIN_ATTEMPTS number of login attempts. The account is automatically unlocked when the lock time elapses.

  • UNLIMITED: Account remains indefinitely inaccessible until a superuser manually unlocks it.

PASSWORD_REUSE_MAX

Set to an integer value, one of the following:

  • ≥ 1: The number of times you must change your password before you can reuse an earlier password.

  • UNLIMITED: You can reuse an earlier password without any intervening changes.

PASSWORD_REUSE_TIME

Set to an integer value, one of the following:

  • ≥ 1: The number of days that must pass after a password is set before you can reuse it.

  • UNLIMITED: You can reuse an earlier password immediately.

PASSWORD_MAX_LENGTH

The maximum number of characters allowed in a password, one of the following:

  • Integer between 8 and 512, inclusive

PASSWORD_MIN_LENGTH

The minimum number of characters required in a password, one of the following:

  • 0 to PASSWORD_MAX_LENGTH

  • UNLIMITED: Minimum of PASSWORD_MAX_LENGTH

PASSWORD_MIN_LETTERS

Minimum number of letters (a-z and A-Z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_UPPERCASE_LETTERS

Minimum number of uppercase letters (A-Z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_LOWERCASE_LETTERS

Minimum number of lowercase letters (a-z) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_DIGITS

Minimum number of digits (0-9) that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_SYMBOLS

Minimum number of symbols—printable non-letter and non-digit characters such as $, #, @—that must be in a password, one of the following:

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

PASSWORD_MIN_CHAR_CHANGE

Minimum number of characters that must be different from the previous password:

  • Default: 0

  • Integer between 0 and PASSWORD_MAX_LENGTH, inclusive

  • UNLIMITED: 0 (no minimum)

589 - profiles-external-password-authentication-methods.md

The following profile settings affect client authentication methods, such as LDAP or GSS:

  • FAILED_LOGIN_ATTEMPTS

  • PASSWORD_LOCK_TIME

All other profile settings are used only by Vertica to manage its passwords.

590 - projection-column-name.md

The name of a projection column. The list of projection columns must match the SELECT list columns and expressions in number, type, and sequence.

If projection column names are omitted, Vertica uses the anchor table column names specified in the SELECT list.

591 - projection-eviction-policy-supersession.md

If the projection has another eviction policy already set on it, the new policy supersedes it.

592 - projection-name.md

Name of the projection to create, which must conform to Identifiers. It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

593 - projection-schema.md

Schema for this projection and its anchor table. The value must be the same for both. If you specify a database, it must be the current database.

594 - projections-intro.md

Unlike traditional databases that store data in tables, Vertica physically stores table data in projections, which are collections of table columns.

Projections store data in a format that optimizes query execution. Similar to materialized views, they store result sets on disk rather than compute them each time they are used in a query. Vertica automatically refreshes these result sets with updated or new data.

595 - promote-demote-node-important-note.md

596 - provision-db-intro-aws.md

After you deploy Management Console on Amazon Web Services (AWS) with a CloudFormation Template, you can provision a cluster and database. Vertica clusters are provisioned in the same Virtual Private Cloud (VPC) as the Vertica Management Console. You can create an initial cluster of up to 60 hosts.

Prerequisites

Before you begin, complete or obtain the following:

597 - purge_snip.md

Permanently removes deleted data from physical storage so disk space can be reused. You can purge historical data up to and including the Ancient History Mark epoch.

598 - query-levels-config-params.md

Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
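
For example, using the LockTimeout parameter as an illustration:

=> SELECT parameter_name, allowed_levels FROM configuration_parameters
   WHERE parameter_name = 'LockTimeout';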

599 - query-table-with-voltage-secure-protect-all-keys.md

=> SELECT id, first_name, last_name FROM customers3 u
          JOIN (SELECT VoltageSecureProtectAllKeys('376765616314013' USING PARAMETERS
                                                   format='cc_num',
                                                   config_dfs_path='/voltagesecure/conf')
          OVER ()) pak
          ON u.cc_num = pak.protected;

  id  | first_name | last_name
------+------------+-----------
 5345 | Thane      | Ross

(1 row)

600 - query-using-voltage-secuer-protect.md

=> SELECT id, first_name, last_name FROM customers
       WHERE ssn = VoltageSecureProtect('559-32-0670' USING PARAMETERS
                                        format='ssn',
                                        config_dfs_path='voltage.conf');

  id  | first_name | last_name
------+------------+-----------
 5345 | Thane      | Ross
(1 row)

601 - quickstart-pointer.md

For a short tutorial on how to install Vertica, create a database, and load data, see the Quickstart guide.

602 - random-forest-model-attributes.md

data
Data for the function, including:
  • predictorNames: The names of the predictors, in the same order they were specified for training the model.

  • predictorTypes: The types of the predictors, in the same order as their names in predictorNames.

ntree
Number of trees in the model.
skippedRows
Number of rows in input_relation that were skipped because they contained an invalid value.
processedRows
Total number of rows in input_relation minus skippedRows.
callStr
Value of all input arguments that were specified at the time the function was called.

603 - random-seed.md

Typical pseudo-random generators accept a seed, which is set to generate a reproducible pseudo-random sequence. Vertica, however, distributes SQL processing over a cluster of nodes, where each node generates its own independent random sequence.

Results depending on RANDOM are not reproducible because the work might be divided differently across nodes. Therefore, Vertica automatically generates truly random seeds for each node each time a request is executed and does not provide a mechanism for forcing a specific seed.

604 - rbac-release-artifact.md

Vertica provides the required Roles and RoleBindings as GitHub release artifacts.

605 - read-filter-wildcards.md

You can use LIKE wildcards in these options. See LIKE for more information about using wildcards.

606 - read-option-description.md

When you use this option along with the --read option, vkconfig outputs the Vertica query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within Vertica without having to go through vkconfig. This option has no effect if not used with --read.

607 - rebalance-shards-example-pt1.md

=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

   subcluster_name    |      node_name       | shard_name  | subscription_state
----------------------+----------------------+-------------+--------------------
 analytics_subcluster | v_verticadb_node0004 |             |
 analytics_subcluster | v_verticadb_node0005 |             |
 analytics_subcluster | v_verticadb_node0006 |             |
 default_subcluster   | v_verticadb_node0001 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0003 | ACTIVE
(12 rows)

608 - rebalance-shards-example-pt2.md


=> SELECT REBALANCE_SHARDS('analytics_subcluster');
 REBALANCE_SHARDS
-------------------
 REBALANCED SHARDS
(1 row)

=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

   subcluster_name    |      node_name       | shard_name  | subscription_state
----------------------+----------------------+-------------+--------------------
 analytics_subcluster | v_verticadb_node0004 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0004 | segment0001 | ACTIVE
 analytics_subcluster | v_verticadb_node0004 | segment0003 | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | segment0001 | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | segment0002 | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | segment0002 | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)

609 - rebalancequerystoragecontainers.md

610 - redacting-parameters.md

If a parameter name meets any of the following criteria, its value is automatically redacted from logs and system tables like QUERY_REQUESTS:

  • Named "secret" or "password"

  • Ends with "_secret" or "_password"

611 - ref-enterprise-mode.md

Enterprise Mode only

612 - ref-eon-mode.md

Eon Mode only

613 - refresh-methods.md

Vertica can refresh a projection from one of its buddies, if one is available. In this case, the target projection gets the source buddy's historical data. Otherwise, the projection is refreshed from scratch with data of the latest epoch at the time of the refresh operation. In this case, the projection cannot participate in historical queries on any epoch that precedes the refresh operation.

Vertica can perform incremental refreshes when the following conditions are met:

  • The table being refreshed is partitioned.

  • The table does not contain any unpartitioned data.

  • The operation is a full projection refresh (not a partition range projection refresh).

In an incremental refresh, the refresh operation first loads data from the partition with the highest range of keys. After refreshing this partition, Vertica begins to refresh the partition with the next highest partition range. This process continues until all projection partitions are refreshed. While the refresh operation is in progress, projection partitions that have completed the refresh process become available to process query requests.

The method used to refresh a given projection is recorded in the REFRESH_METHOD column of the PROJECTION_REFRESHES system table.
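
For example, a query of the following form reports how recent refreshes were performed:

=> SELECT node_name, projection_name, refresh_method, refresh_status
   FROM PROJECTION_REFRESHES;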

614 - regex-function-table.md

string-expression
The VARCHAR or LONG VARCHAR expression to evaluate for matches with the regular expression specified in pattern. If string-expression is in the __raw__ column of a flex or columnar table, cast the string to a LONG VARCHAR before searching for pattern.
pattern
The regular expression to match against string-expression. The regular expression must conform with Perl regular expression syntax.

615 - regex-string-expression.md

The VARCHAR or LONG VARCHAR expression to evaluate for matches with the regular expression specified in pattern. If string-expression is in the __raw__ column of a flex or columnar table, cast the string to a LONG VARCHAR before searching for pattern.

616 - regex-string.md

The regular expression to match against string-expression. The regular expression must conform with Perl regular expression syntax.

617 - regression-model-attributes.md

data
The data for the function, including:
  • coeffNames: Name of the coefficients. This starts with intercept and then follows with the names of the predictors in the same order specified in the call.

  • coeff: Vector of estimated coefficients, with the same order as coeffNames

  • stdErr: Vector of the standard error of the coefficients, with the same order as coeffNames

  • zValue (for logistic regression): Vector of z-values of the coefficients, in the same order as coeffNames

  • tValue (for linear regression): Vector of t-values of the coefficients, in the same order as coeffNames

  • pValue: Vector of p-values of the coefficients, in the same order as coeffNames

regularization
Type of regularization to use when training the model.
lambda
Regularization parameter. Higher values enforce stronger regularization. This value must be nonnegative.
alpha
Elastic net mixture parameter.
iterations
Number of iterations that actually occur for the convergence before exceeding max_iterations.
skippedRows
Number of rows of the input relation that were skipped because they contained an invalid value.
processedRows
Total number of input relation rows minus skippedRows.
callStr
Value of all input arguments specified when the function was called.

618 - regular-expression.md

'^(?<time>\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d+)
 (?<thread_name>[A-Za-z ]+):(?<thread_id>(?:0x)?[0-9a-f]+)
-?(?<transaction_id>[0-9a-f])?(?:[(?<component>\w+)]
\<(?<level>\w+)\> )?(?:<(?<elevel>\w+)> @[?(?<enode>\w+)]?: )
?(?<text>.*)'

619 - remove-location-label-conditions.md

You can remove a location label only if the following conditions are both true:

  • No database object has a storage policy that specifies this label.

  • The labeled location is not the last available storage for the objects associated with it.
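
When both conditions are met, you can clear the label by passing an empty label string to ALTER_LOCATION_LABEL, as in this sketch (the path is a placeholder; the empty node argument applies the change to all nodes):

=> SELECT ALTER_LOCATION_LABEL('/home/dbadmin/ssd_storage', '', '');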

620 - replicate-example-exclude.md

=> REPLICATE INCLUDE "public.*" EXCLUDE '*.customer_*' FROM source_db;

621 - replicate-example-include.md

=> REPLICATE INCLUDE "public.t*" FROM source_db;

622 - replicate-to-example.md

=> CONNECT TO VERTICA target_db USER dbadmin 
           PASSWORD 'mypassword' ON 'vertica_node01', 5433;
 CONNECT

=> REPLICATE customers TO target_db;
REPLICATE

623 - replication_caution.md

624 - reshard-clarification.md

If the database has been resharded, this value will be inaccurate until the Tuple Mover realigns the storage containers to the new shard layout.

625 - resize-node-depot-caching.md

=> SELECT node_name, location_label, location_path, max_size, disk_percent FROM storage_locations WHERE location_usage = 'DEPOT' ORDER BY node_name;
    node_name     | location_label  |      location_path      |  max_size   | disk_percent
------------------+-----------------+-------------------------+-------------+--------------
 v_vmart_node0001 | auto-data-depot | /home/dbadmin/verticadb | 36060108800 | 70%
 v_vmart_node0002 | auto-data-depot | /home/dbadmin/verticadb | 36059377664 | 70%
 v_vmart_node0003 | auto-data-depot | /home/dbadmin/verticadb | 36060108800 | 70%
(3 rows)

=> SELECT alter_location_size('depot', '','80%');
 alter_location_size
---------------------
 depotSize changed.
(1 row)

=> SELECT node_name, location_label, location_path, max_size, disk_percent FROM storage_locations WHERE location_usage = 'DEPOT' ORDER BY node_name;
    node_name     | location_label  |      location_path      |  max_size   | disk_percent
------------------+-----------------+-------------------------+-------------+--------------
 v_vmart_node0001 | auto-data-depot | /home/dbadmin/verticadb | 41211552768 | 80%
 v_vmart_node0002 | auto-data-depot | /home/dbadmin/verticadb | 41210717184 | 80%
 v_vmart_node0003 | auto-data-depot | /home/dbadmin/verticadb | 41211552768 | 80%
(3 rows)

626 - resource-pool-backwards-compatibility-notice.md

627 - resource-pool-cascade-to.md

Secondary resource pool for executing queries that exceed the RUNTIMECAP setting of their assigned resource pool:

CASCADE TO secondary-pool

628 - resource-pool-cpu-affinity-set.md

CPUs available to this resource pool. All cluster nodes must have the same number of CPUs. The CPU resources assigned to this set are unavailable to general resource pools.

CPUAFFINITYSET {
  'cpu-index[,...]'
| 'cpu-index-cpu-index'
| 'integer%'
| NONE
}
  • cpu-index[,...]: Dedicates one or more comma-delimited CPUs to this resource pool.
  • cpu-index-cpu-index: Dedicates a range of contiguous CPU indexes to this resource pool. For example, '0-3' dedicates CPUs 0 through 3.
  • integer%: Percentage of all available CPUs to use for this resource pool. Vertica rounds this percentage down to include whole CPU units.
  • NONE (empty string): No affinity set is assigned to this resource pool. Queries associated with this pool are executed on any CPU.

Default: NONE

629 - resource-pool-cpuaffinitymode.md

Specifies whether the resource pool has exclusive or shared use of the CPUs specified in CPUAFFINITYSET:

CPUAFFINITYMODE { SHARED | EXCLUSIVE | ANY }
  • SHARED: Queries that run in this resource pool share its CPUAFFINITYSET CPUs with other Vertica resource pools.
  • EXCLUSIVE: Dedicates CPUAFFINITYSET CPUs to this resource pool only, and excludes other Vertica resource pools. If CPUAFFINITYSET is set as a percentage, then that percentage of CPU resources available to Vertica is assigned solely for this resource pool.
  • ANY: Queries in this resource pool can run on any CPU, invalid if CPUAFFINITYSET designates CPU resources.

Default: ANY

630 - resource-pool-executionparallelism.md

Number of threads used to process any single query issued in this resource pool.

EXECUTIONPARALLELISM { limit | AUTO }
  • limit: An integer value between 1 and the number of cores. Setting this parameter to a reduced value increases throughput of short queries issued in the resource pool, especially if queries are executed concurrently.
  • AUTO or 0: Vertica calculates the setting from the number of cores, available memory, and amount of data in the system. Unless memory is limited, or the amount of data is very small, Vertica sets this parameter to the number of cores on the node.

Default: AUTO

631 - resource-pool-maxconcurrency.md

Maximum number of concurrent execution slots available to the resource pool across the cluster:

MAXCONCURRENCY { integer | NONE }

NONE (empty string): Unlimited number of concurrent execution slots.

Default: NONE

632 - resource-pool-maxmemorysize.md

Maximum size per node the resource pool can grow by borrowing memory from the GENERAL pool:

MAXMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes
  • NONE (empty string): Unlimited, resource pool can borrow any amount of available memory from the GENERAL pool.

Default: NONE

633 - resource-pool-maxquerymemorysize.md

Maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, Vertica stops execution and returns an error.

Set this parameter as follows:

MAXQUERYMEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
| NONE
}
  • integer%: Percentage of MAXMEMORYSIZE for this resource pool.
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of MAXMEMORYSIZE.
  • NONE (empty string): Unlimited; resource pool can borrow any amount of available memory from the GENERAL pool, within the limits set by MAXMEMORYSIZE.

Default: NONE

634 - resource-pool-memorysize.md

Total per-node memory available to the Vertica resource manager that is allocated to this resource pool:

MEMORYSIZE {
  'integer%'
| 'integer{K|M|G|T}'
}
  • integer%: Percentage of total memory
  • integer{K|M|G|T}: Amount of memory in kilobytes, megabytes, gigabytes, or terabytes

Default: 0%. No memory allocated, the resource pool borrows memory from the GENERAL pool.

635 - resource-pool-planned-concurrency.md

Preferred number of queries to execute concurrently in the resource pool. This setting applies to the entire cluster:

PLANNEDCONCURRENCY { num-queries | AUTO }
  • num-queries: Integer value ≥ 1, the preferred number of queries to execute concurrently in the resource pool. When possible, query resource budgets are limited to allow this level of concurrent execution.

  • AUTO: Value is calculated automatically at query runtime. Vertica sets this parameter to the lower of these two calculations, but never less than 4:

    • Number of logical cores

    • Memory divided by 2GB

    If the number of logical cores on each node is different, AUTO is calculated differently for each node. Distributed queries run with the minimal effective planned concurrency across the participating nodes. Single-node queries run with the planned concurrency of the initiator.

Default: AUTO

636 - resource-pool-priority.md

Priority of queries in this resource pool when they compete for resources in the GENERAL pool:

PRIORITY { integer | HOLD }
  • integer: Negative or positive integer value, where higher numbers denote higher priority:

    • User-defined resource pool: -100 to 100

    • Built-in resource pools SYSQUERY, RECOVERY, and TM: -110 to 110

  • HOLD: Sets priority to -999. Queries in this resource pool are queued until QUEUETIMEOUT is reached.

Default: 0

637 - resource-pool-queue-timeout.md

Maximum time a request can wait for pool resources before it is rejected, not more than one year:

QUEUETIMEOUT { integer | 'interval' | 'NONE' }
  • integer: Maximum wait time in seconds

  • interval: Maximum wait time expressed in the following format:

    num year num months num [days] HH:MM:SS.ms
    
  • NONE (empty string): No maximum wait time, request can be queued indefinitely, up to one year.

If the value that you specify resolves to more than one year, Vertica returns with a warning and sets the parameter to 365 days:

=> ALTER RESOURCE POOL user_0 QUEUETIMEOUT '11 months 50 days 08:32';
WARNING 5693:  Using 1 year for QUEUETIMEOUT
ALTER RESOURCE POOL
=> SELECT QUEUETIMEOUT FROM resource_pools WHERE name = 'user_0';
 QUEUETIMEOUT
--------------
 365
(1 row)

Default: 00:05 (5 minutes)

638 - resource-pool-runtime-cap.md

Maximum execution time allowed to queries in this resource pool, not more than one year; otherwise, Vertica returns an error. If a query exceeds this setting, it tries to cascade to a secondary pool:

RUNTIMECAP { 'interval' | NONE }
  • interval: Maximum runtime expressed in the following format:

    num year num month num [day] HH:MM:SS.ms

  • NONE (empty string): No runtime limit on queries in this resource pool, subject to the one-year maximum.

If the user or session also has a RUNTIMECAP, the shorter limit applies.

639 - resource-pool-runtimepriority.md

Determines how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool:

RUNTIMEPRIORITY { HIGH | MEDIUM | LOW }

Default: MEDIUM

640 - resource-pool-runtimeprioritythreshold.md

Maximum time (in seconds) in which query processing must complete before the resource manager assigns to it the resource pool's RUNTIMEPRIORITY. All queries begin execution with a priority of HIGH.

RUNTIMEPRIORITYTHRESHOLD seconds

Default: 2

641 - resource-pool-single-initiator.md

Set to false for backward compatibility. Do not change this setting.

642 - restore-catalog-size-limitation.md

Catalog size limitations

Object-level restores can fail if your catalog size is greater than five percent of the total memory available in the node performing the restore. In this situation, Vertica recommends restoring individual objects from the backup. For more information, refer to Restoring individual objects.

643 - restore2-newer-db-version.md

Restoring objects to a higher Vertica version

Vertica supports restoration to a database that is no more than one minor version higher than the current database version. For example, you can restore objects from a 12.0.x database to a 12.1.x database.

If restored objects require a UDx library that is not present in the later-version database, Vertica displays the following error:

ERROR 2858:  Could not find function definition

You can resolve this issue by installing compatible libraries in the target database.

644 - restriction-ignored-eon-mode.md

This requirement applies only to Enterprise mode; in Eon mode, it is ignored.

645 - restrictions-swap-move-partition-tables.md

The following restrictions apply to the source and target tables:

  • If the source and target partitions are in different storage tiers, Vertica returns a warning but the operation proceeds. The partitions remain in their existing storage tier.

  • The target table cannot be immutable.

  • The following tables cannot be used as sources or targets:

    • Temporary tables

    • Virtual tables

    • System tables

    • External tables

646 - revive-eon-db-from-communal-storage.md

An Eon Mode database keeps an up-to-date version of its data and metadata in its communal storage location. After a cluster hosting an Eon Mode database is terminated, this data and metadata continue to reside in communal storage. When you revive the database later, Vertica uses the data in this location to restore the database in the same state on a newly provisioned cluster.

647 - revoke-authentication-examples.md

  • Revoke v_ldap authentication from user jsmith:

    => REVOKE AUTHENTICATION v_ldap FROM jsmith;
    
  • Revoke v_gss authentication from the role DBprogrammer:

    => REVOKE AUTHENTICATION v_gss FROM DBprogrammer;
    
  • Revoke localpwd as the default client authentication method:

    => REVOKE AUTHENTICATION localpwd FROM PUBLIC;
    

648 - revoke-from-grantee-parameter-description.md

Whose privileges are revoked, one of the following:

649 - revoke-grant-option.md

Revokes the grant option for the specified privileges. Current privileges for grantees remain unaffected. If you omit this clause, Vertica revokes both the grant option and current privileges.
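
For example, the following statement (table and user names are illustrative) revokes only the ability to grant INSERT; user1 retains the INSERT privilege itself:

=> REVOKE GRANT OPTION FOR INSERT ON TABLE public.orders FROM user1;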

650 - revoke-statement-privileges.md

Privileges

Non-superuser, one of the following:

651 - rewrite-set-using-query.md

When you call REFRESH_COLUMNS on a flattened table's SET USING (or DEFAULT USING) column, it executes the SET USING query by joining the target and source tables. By default, the source table is always the inner table of the join. In most cases, cardinality of the source table is less than the target table, so REFRESH_COLUMNS executes the join efficiently.

Occasionally—notably, when you call REFRESH_COLUMNS on a partitioned table—the source table can be larger than the target table. In this case, performance of the join operation can be suboptimal.

You can address this issue by enabling the configuration parameter RewriteQueryForLargeDim. When enabled (1), Vertica rewrites the query by reversing the inner and outer join between the target and source tables.
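
For example, a sketch of enabling the parameter at the database level:

=> ALTER DATABASE DEFAULT SET RewriteQueryForLargeDim = 1;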

652 - rf-nbins.md

Integer in the range [2, 1000], the number of bins to use for discretizing continuous features.

Default: 32

653 - rollback_snip.md

Transaction rollbacks restore a database to an earlier state by discarding changes made by that transaction. Statement-level rollbacks discard only the changes initiated by the reverted statements. Transaction-level rollbacks discard all changes made by the transaction.

With a ROLLBACK statement, you can explicitly roll back to a named savepoint within the transaction, or discard the entire transaction. Vertica can also initiate automatic rollbacks in two cases:

  • An individual statement returns an ERROR message. In this case, Vertica rolls back the statement.

  • DDL errors, systemic failures, deadlocks, and resource constraints return a ROLLBACK message. In this case, Vertica rolls back the entire transaction.

Explicit and automatic rollbacks always release any locks that the transaction holds.

654 - root-key.md

=> CREATE KEY SSCA_key TYPE 'RSA' LENGTH 2048;

655 - row-restrictions.md

ROW columns have several restrictions:

  • Maximum nesting depth is 100.
  • Vertica tables support up to 9800 columns and fields. The ROW itself is not counted, only its fields.
  • ROW columns cannot use any constraints (such as NOT NULL) or defaults.
  • ROW fields cannot be auto_increment or setof.
  • ROW definition must include at least one field.
  • Row is a reserved keyword within a ROW definition, but is permitted as the name of a table or column.
  • Tables containing ROW columns cannot also contain IDENTITY, default, SET USING, or named sequence columns.

656 - runlevel1.md

Best practice for restoring failed hardware

Following this procedure will prevent Vertica from misdiagnosing missing disk or bad mounts as data corruptions, which would result in a time-consuming, full-node recovery.

If a server fails due to hardware issues, for example a bad disk or a failed controller, upon repairing the hardware:

  1. Reboot the machine into runlevel 1, which is a root and console-only mode.

    Runlevel 1 prevents network connectivity and keeps Vertica from attempting to reconnect to the cluster.

  2. In runlevel 1, validate that the hardware has been repaired, the controllers are online, and any RAID recovery is able to proceed.

  3. Reboot to runlevel 3 or higher only after the hardware is confirmed consistent.

At this point, the network activates, and Vertica rejoins the cluster and automatically recovers any missing data. Note that, on a single-node database, if any files that were associated with a projection have been deleted or corrupted, Vertica will delete all files associated with that projection, which could result in data loss.

657 - s3-create-user-storage-location.md

   --- set database-level credential (once):
=> ALTER DATABASE DEFAULT SET AWSAuth = 'myaccesskeyid123456:mysecretaccesskey123456789012345678901234';

=> CREATE LOCATION 's3://datalake' SHARED USAGE 'USER' LABEL 's3user';

=> CREATE ROLE ExtUsers;
   --- Assign users to this role using GRANT (Role).

=> GRANT READ ON LOCATION 's3://datalake' TO ExtUsers;

658 - s3-endpoints.md

If you aren't using a CloudFormation Template (CFT) to install Vertica, you must create an S3 gateway endpoint in your VPC. For more information, see the AWS documentation.

For example, the Vertica CFT has the following VPC endpoint:

"S3Enpoint" : {
    "Type" : "AWS::EC2::VPCEndpoint",
    "Properties" : {
    "PolicyDocument" : {
        "Version":"2012-10-17",
        "Statement":[{
        "Effect":"Allow",
        "Principal": "*",
        "Action":["*"],
        "Resource":["*"]
        }]
    },
    "RouteTableIds" : [ {"Ref" : "RouteTable"} ],
    "ServiceName" : { "Fn::Join": [ "", [ "com.amazonaws.", { "Ref": "AWS::Region" }, ".s3" ] ] },
    "VpcId" : {"Ref" : "VPC"}
}

659 - s3-secret.md

The following command stores both your S3-compatible communal access and secret key credentials in a Secret named s3-creds:

$ kubectl create secret generic s3-creds --from-literal=accesskey=accesskey --from-literal=secretkey=secretkey

660 - sal-storage-id-description.md

Unique hexadecimal numeric ID assigned by the Vertica catalog, which identifies the storage.

661 - sampling-size.md

Float in the range (0.0, 1.0], the portion of the input data set that is randomly picked for training each tree.

Default: 0.632

662 - sandbox-metadata.md

While the nodes in the main cluster sync their metadata to /path-to-communal-storage/metadata/db_name, the nodes in the sandbox sync to /path-to-communal-storage/metadata/sandbox_name.

663 - saslssl-notifier.md

Follow this procedure to create or alter notifiers for Kafka endpoints that use SASL_SSL. Note that you must repeat this procedure whenever you change the TLSMODE, certificates, or CA bundle for a given notifier.

  1. Create a TLS Configuration with the desired TLS mode, certificate, and CA certificates.

  2. Use CREATE or ALTER to disable the notifier and set the TLS Configuration:

    => ALTER NOTIFIER encrypted_notifier
        DISABLE
        TLS CONFIGURATION kafka_tls_config;
    
  3. ALTER the notifier and set the proper rdkafka adapter parameters for SASL_SSL:

    => ALTER NOTIFIER encrypted_notifier PARAMETERS
        'sasl.username=user;sasl.password=password;sasl.mechanism=PLAIN;security.protocol=SASL_SSL';
    
  4. Enable the notifier:

    => ALTER NOTIFIER encrypted_notifier ENABLE;
    

664 - save-plans-query-name.md

save_plans_query-label_query-number_save-plans-version

where:

  • query-label is a LABEL hint embedded in the input query associated with this directed query. If the input query contains no label, then this string is set to nolabel.
  • query-number is an integer in a continuous sequence between 0 and budget-query, which uniquely identifies this directed query from others in the same SAVE_PLANS-generated set.
  • save-plans-version identifies the set of directed queries to which this directed query belongs.

665 - save-plans-version.md

Identifies a set of directed queries that were generated by the same call to SAVE_PLANS. All directed queries of the set share the same SAVE_PLANS_VERSION integer, which increments by 1 the previous highest SAVE_PLANS_VERSION setting. Use this identifier to activate, deactivate, and drop a set of directed queries.
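
For example, assuming a set of directed queries saved with SAVE_PLANS_VERSION 21, a statement along these lines activates the entire set:

=> ACTIVATE DIRECTED QUERY WHERE save_plans_version = 21;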

666 - savepoint_snip.md

A savepoint is a special marker inside a transaction that allows commands that execute after the savepoint to be rolled back. The transaction is restored to the state that preceded the savepoint.

Vertica supports two types of savepoints:

  • An implicit savepoint is automatically established after each successful command within a transaction. This savepoint is used to roll back the next statement if it returns an error. A transaction maintains one implicit savepoint, which it rolls forward with each successful command. Implicit savepoints are available to Vertica only and cannot be referenced directly.

  • Named savepoints are labeled markers within a transaction that you set through SAVEPOINT statements. A named savepoint can later be referenced in the same transaction through RELEASE SAVEPOINT, which destroys it, and ROLLBACK TO SAVEPOINT, which rolls back all operations that followed the savepoint. Named savepoints can be especially useful in nested transactions: a nested transaction that begins with a savepoint can be rolled back entirely, if necessary.
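
A minimal sketch of a named savepoint in use (the table name is hypothetical):

=> BEGIN;
=> INSERT INTO t VALUES (1);
=> SAVEPOINT s1;
=> INSERT INTO t VALUES (2);
=> ROLLBACK TO SAVEPOINT s1;  -- discards only the second insert
=> COMMIT;                    -- commits the first insert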

667 - saving-password-caution.md

Under normal circumstances, users are not able to directly read data from files stored in DFS. However, all users who have access to UDx functions that read from the DFS could access these files from within Vertica.

In addition, these files are stored as plain text in every node's file system. Anyone with the proper file system access on the nodes can read the file's contents.

You should take both of these facts into consideration when deciding whether to store sensitive information such as passwords or shared secrets in either the shared or per-user configuration files.

668 - sc-manage-overview-mc.md

Subcluster management

You can add, Scale Up, Scale Down, remove, and terminate subclusters with MC. For details, see the following:

669 - scalar-function-c-.md

The ScalarFunction API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes);

virtual void processBlock(ServerInterface &srvInterface,
        BlockReader &arg_reader, BlockWriter &res_writer)=0;

virtual void getOutputRange (ServerInterface &srvInterface,
        ValueRangeReader &inRange, ValueRangeWriter &outRange)

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface, const SizedColumnTypes &argTypes);

670 - scalar-function-factory-c-.md

The ScalarFunctionFactory API provides the following methods for extension by subclasses:

virtual ScalarFunction * createScalarFunction(ServerInterface &srvInterface)=0;

virtual void getPrototype(ServerInterface &srvInterface,
        ColumnTypes &argTypes, ColumnTypes &returnType)=0;

virtual void getReturnType(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes, SizedColumnTypes &returnType);

virtual void getParameterType(ServerInterface &srvInterface,
        SizedColumnTypes &parameterTypes);

671 - scalar-function-factory-java.md

The ScalarFunctionFactory API provides the following methods for extension by subclasses:

public abstract ScalarFunction createScalarFunction(ServerInterface srvInterface);

public abstract void getPrototype(ServerInterface srvInterface, ColumnTypes argTypes, ColumnTypes returnType);

public void getReturnType(ServerInterface srvInterface, SizedColumnTypes argTypes,
        SizedColumnTypes returnType) throws UdfException;

public void getParameterType(ServerInterface srvInterface, SizedColumnTypes parameterTypes);

672 - scalar-function-factory-python.md

The ScalarFunctionFactory API provides the following methods for extension by subclasses:

def createScalarFunction(self, srv)

def getPrototype(self, srv_interface, arg_types, return_type)

def getReturnType(self, srv_interface, arg_types, return_type)

673 - scalar-function-factory-r.md

Implement the Factory function API to define a scalar function factory:

FunctionNameFactory <- function() {
  list(name    = FunctionName,
       udxtype = c("scalar"),
       intype  = c("int"),
       outtype = c("int"))
}

674 - scalar-function-java.md

The ScalarFunction API provides the following methods for extension by subclasses:

public void setup(ServerInterface srvInterface, SizedColumnTypes argTypes);

public abstract void processBlock(ServerInterface srvInterface, BlockReader arg_reader,
        BlockWriter res_writer) throws UdfException, DestroyInvocation;

protected void cancel(ServerInterface srvInterface);

public void destroy(ServerInterface srvInterface, SizedColumnTypes argTypes);

675 - scalar-function-python.md

The ScalarFunction API provides the following methods for extension by subclasses:


def setup(self, server_interface, col_types)

def processBlock(self, server_interface, block_reader, block_writer)

def destroy(self, server_interface, col_types)

676 - scalar-function-r.md

Implement the Main function API to define a scalar function:

FunctionName <- function(input.data.frame, parameters.data.frame) {
  # Computations

  # The function must return a data frame.
  return(output.data.frame)
}

677 - scalar-setting-null-c-.md

The following C++ example demonstrates setting the null behavior of Add2ints so Vertica does not call the function with NULL values.

class Add2intsNullOnNullInputFactory : public Vertica::ScalarFunctionFactory
{
    virtual Vertica::ScalarFunction *createScalarFunction(Vertica::ServerInterface &srvInterface)
    { return vt_createFuncObj(srvInterface.allocator, Add2ints); }
    virtual void getPrototype(Vertica::ServerInterface &srvInterface,
                              Vertica::ColumnTypes &argTypes,
                              Vertica::ColumnTypes &returnType)
    {
        argTypes.addInt();
        argTypes.addInt();
        returnType.addInt();
    }

public:
    Add2intsNullOnNullInputFactory() {strict = RETURN_NULL_ON_NULL_INPUT;}
};
RegisterFactory(Add2intsNullOnNullInputFactory);

678 - scalar-setting-volatility-c-.md

The following example shows a version of the Add2ints example factory class that makes the function immutable.

class Add2intsImmutableFactory : public Vertica::ScalarFunctionFactory
{
    virtual Vertica::ScalarFunction *createScalarFunction(Vertica::ServerInterface &srvInterface)
    { return vt_createFuncObj(srvInterface.allocator, Add2ints); }
    virtual void getPrototype(Vertica::ServerInterface &srvInterface,
                              Vertica::ColumnTypes &argTypes,
                              Vertica::ColumnTypes &returnType)
    {
        argTypes.addInt();
        argTypes.addInt();
        returnType.addInt();
    }

public:
    Add2intsImmutableFactory() {vol = IMMUTABLE;}
};
RegisterFactory(Add2intsImmutableFactory);

679 - scalar-setting-volatility-java.md

The following example demonstrates setting the Add2IntsFactory's vol field to IMMUTABLE to tell Vertica it can cache the arguments and return value.

public class Add2IntsFactory extends ScalarFunctionFactory {

    @Override
    public void getPrototype(ServerInterface srvInterface, ColumnTypes argTypes, ColumnTypes returnType){
        argTypes.addInt();
        argTypes.addInt();
        returnType.addInt();
    }

    @Override
    public ScalarFunction createScalarFunction(ServerInterface srvInterface){
        return new Add2Ints();
    }

    // Class constructor
    public Add2IntsFactory() {
        // Tell Vertica that the same set of arguments will always result in the
        // same return value.
        vol = volatility.IMMUTABLE;
    }
}

680 - scaling-factor_snip.md

The scaling factor determines the number of storage containers that Vertica uses to store each projection across the database during rebalancing when local segmentation is enabled. When setting the scaling factor, follow these guidelines:

  • The number of storage containers should be greater than or equal to the number of partitions multiplied by the number of local segments:

    num-storage-containers >= (num-partitions * num-local-segments)

  • Set the scaling factor high enough so rebalance can transfer local segments to satisfy the skew threshold, but small enough so the number of storage containers does not result in too many ROS containers, and cause ROS pushback. The maximum number of ROS containers (by default 1024) is set by configuration parameter ContainersPerProjectionLimit.
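
For example, a sketch of changing the scaling factor with the SET_SCALING_FACTOR meta-function (the value 32 is only illustrative):

=> SELECT SET_SCALING_FACTOR(32);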

681 - scaling-prereqs.md

682 - scheduler-resource-pool1.md

Vertica recommends that you always create a resource pool specifically for each scheduler. Schedulers assume that they have exclusive use of their assigned resource pool. Using a separate pool for a scheduler lets you fine-tune its impact on your Vertica cluster's performance. You create resource pools with CREATE RESOURCE POOL.

683 - scheduler-resource-pool2.md

If you do not create and assign a resource pool for your scheduler, it uses a portion of the GENERAL resource pool. Vertica recommends that you do not use the GENERAL pool for schedulers in production environments. This fallback to the GENERAL pool is intended as a convenience when you test your scheduler configuration. When you are ready to deploy your scheduler, create a resource pool that you tuned to its specific needs. Each time that you start a scheduler that is using the GENERAL pool, the vkconfig utility displays a warning message.

Not allocating enough resources to your schedulers can result in errors. For example, you might get OVERSHOT DEADLINE FOR FRAME errors if the scheduler is unable to load data from all topics in a data frame.

See Resource pool architecture for more information about resource pools.

684 - schema-default-include-exclude-privileges.md

Specifies whether to enable or disable default inheritance of privileges for new tables in the specified schema:

  • EXCLUDE SCHEMA PRIVILEGES (default): Disables inheritance of schema privileges.

  • INCLUDE SCHEMA PRIVILEGES: Specifies to grant tables in the specified schema the same privileges granted to that schema. This option has no effect on existing tables in the schema.
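
For example, a sketch of creating a schema whose tables inherit its privileges by default (the schema name is illustrative):

=> CREATE SCHEMA finance DEFAULT INCLUDE SCHEMA PRIVILEGES;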

685 - sdk-version.md

Finding the current SDK version

You must develop your UDx using the same SDK version as the database in which you plan to use it. To display the SDK version currently installed on your system, run the following command in vsql:

=> SELECT sdk_version();

686 - search-lambda-arg.md

Lambda function to apply to each element. The function must return a Boolean value. The first argument to the function is the element, and the optional second argument is the index of the element.

687 - security-definer-invoker.md

A procedure with SECURITY DEFINER effectively executes the procedure as that user, so changes to the database appear to be performed by the procedure's definer rather than its caller.

688 - security-description.md

By default, stored procedures execute with the privileges of the caller (invoker), so callers must have the necessary privileges on the catalog objects accessed by the stored procedure. You can allow callers to execute the procedure with the privileges, default roles, user parameters, and user attributes (RESOURCE_POOL, MEMORY_CAP_KB, TEMP_SPACE_CAP_KB, RUNTIMECAP) of the definer by specifying DEFINER for the SECURITY option.
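
A minimal sketch of a definer-rights procedure; the table and logic are hypothetical, and the LANGUAGE clause is omitted because PLvSQL is the default:

=> CREATE PROCEDURE add_employee(fname VARCHAR, lname VARCHAR)
   SECURITY DEFINER AS $$
BEGIN
    -- Runs with the definer's privileges, so the caller
    -- does not need INSERT on the employees table.
    PERFORM INSERT INTO employees (first_name, last_name) VALUES (fname, lname);
END;
$$;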

689 - segmented-projection-intro.md

You typically create segmented projections for large fact tables. Vertica splits segmented projections into chunks (segments) of similar size and distributes these segments evenly across the cluster. System K-safety determines how many duplicates (buddies) of each segment are created and maintained on different nodes.
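
For example, a sketch of a segmented projection for a hypothetical store_sales fact table, hash-segmented on its key and with buddy projections for K-safety:

=> CREATE PROJECTION store_sales_seg AS
      SELECT * FROM store_sales
      SEGMENTED BY HASH(store_sales.sale_id) ALL NODES KSAFE 1;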

690 - select-list-non-standard-projections.md

Specifies the table data to query:

{ table-column | expr-with-table-columns } [ [AS] alias ] [,...]

You can optionally assign an alias to each column expression and reference that alias elsewhere in the SELECT statement.

If you specify projection column names, the two lists of projection columns and table columns/expressions must exactly match in number and order.

691 - self-signed-ca-warning.md

692 - self-signed-ca.md

A CA is a trusted entity that signs and validates other certificates with its own certificate. The following example generates a self-signed root CA certificate:

  1. Generate or import a private key. The following command generates a new private key:

          
    => CREATE KEY ca_private_key TYPE 'RSA' LENGTH 4096;
    CREATE KEY
    
    

  2. Generate the certificate with the following format. Sign the certificate the with the private key that you generated or imported in the previous step:

          
    => CREATE CA CERTIFICATE ca_certificate
    SUBJECT '/C=country_code/ST=state_or_province/L=locality/O=organization/OU=org_unit/CN=Vertica Root CA'
    VALID FOR days_valid
    EXTENSIONS 'authorityKeyIdentifier' = 'keyid:always,issuer', 'nsComment' = 'Vertica generated root CA cert'
    KEY ca_private_key;
    
    

    For example:

          
    => CREATE CA CERTIFICATE SSCA_cert
    SUBJECT '/C=US/ST=Massachusetts/L=Cambridge/O=OpenText/OU=Vertica/CN=Vertica Root CA'
    VALID FOR 3650
    EXTENSIONS 'nsComment' = 'Self-signed root CA cert'
    KEY SSCA_key;
    
    

693 - self-signed-root-certificate.md

      
=> CREATE CA CERTIFICATE SSCA_cert
SUBJECT '/C=US/ST=Massachusetts/L=Cambridge/O=OpenText/OU=Vertica/CN=Vertica Root CA'
VALID FOR 3650
EXTENSIONS 'nsComment' = 'Self-signed root CA cert'
KEY SSCA_key;

694 - sequence-parameter-types.md

  • Changes values that control sequence behavior—for example, its start value and range of minimum and maximum values. These changes take effect only when you start a new database session.
  • Changes sequence name, schema, or ownership. These changes take effect immediately.
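
For example (sequence names are illustrative), the first statement below takes effect only in new sessions, while the second takes effect immediately:

=> ALTER SEQUENCE my_seq RESTART WITH 101;
=> ALTER SEQUENCE my_seq RENAME TO my_seq_archive;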

695 - session-id-column-description.md

Unique numeric ID assigned by the Vertica catalog, which identifies the session for which profiling information is captured. This identifier is unique within the cluster at any point in time but can be reused when the session closes.

696 - set-config-parameter-with-clause-recursion-limit.md

697 - set-data-collector-notify-policy-disable-example.md

=> SELECT SET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures','vertica_stats', 'vertica_notifications', false);
 SET_DATA_COLLECTOR_NOTIFY_POLICY
----------------------------------
 SET
(1 row)

=> SELECT GET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures');
 GET_DATA_COLLECTOR_NOTIFY_POLICY
----------------------------------
 Not notifiable;
(1 row)

698 - set-data-collector-notify-policy-example.md

To be notified of failed login attempts, you can create a notifier that sends a notification when the DC component LoginFailures updates. The TLSMODE 'verify-ca' verifies that the server's certificate is signed by a trusted CA.

=> CREATE NOTIFIER vertica_stats ACTION 'kafka://kafka01.example.com:9092' MAXMEMORYSIZE '10M' TLSMODE 'verify-ca';
CREATE NOTIFIER
=> SELECT SET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures','vertica_stats', 'vertica_notifications', true);
SET_DATA_COLLECTOR_NOTIFY_POLICY
----------------------------------
 SET
(1 row)

699 - set-spread-option-caution.md

700 - set-spread-option-example.md

=> SELECT SET_SPREAD_OPTION( 'TokenTimeout', '35000');
NOTICE 9003:  Spread has been notified about the change
                   SET_SPREAD_OPTION
--------------------------------------------------------
 Spread option 'TokenTimeout' has been set to '35000'.

(1 row)

=> SELECT * FROM V_MONITOR.SPREAD_STATE;
    node_name     | token_timeout
------------------+---------------
 v_vmart_node0001 |         35000
 v_vmart_node0002 |         35000
 v_vmart_node0003 |         35000
(3 rows);

701 - setting-depot-max-size.md

Valid only if the storage location usage type is set to DEPOT, specifies the maximum amount of disk space that the depot can allocate from the storage location's file system.

You can specify size in two ways:

  • integer%: Percentage of storage location disk size.

  • integer{K|M|G|T}: Amount of storage location disk size in kilobytes, megabytes, gigabytes, or terabytes.
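
For example, a sketch of creating a depot storage location capped at 80% of its file system (the path is a placeholder):

=> CREATE LOCATION '/home/dbadmin/depot' ALL NODES USAGE 'DEPOT' LIMIT '80%';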

702 - setusing-setdefault-expressions.md

Supported expressions

DEFAULT and SET USING generally support the same expressions. These include:

703 - silent-numeric-overflow.md

By default, Vertica allows silent numeric overflow when you call this function on numeric data types. For more information on this behavior and how to change it, see Numeric data type overflow with SUM, SUM_FLOAT, and AVG.

704 - simple-call-to-voltage-secure-protect-all-keys.md

=> SELECT VoltageSecureProtectAllKeys('376765616314013' USING PARAMETERS
                                      format='cc_num',
                                      config_dfs_path='/voltagesecure/conf')
          OVER ();

      data       |    protected
-----------------+-----------------
 376765616314013 | XMVMRU9RJVU4013
 376765616314013 | X5FD4KO1UEE4013
 376765616314013 | M7ZXTIQVCPB4013
 376765616314013 | UBOSC9K3EXZ4013
 376765616314013 | ZJ1C50C9L9R4013
(5 rows)

705 - simple-partition-expression-example.md

The following statements create the store_orders table and load data into it. The CREATE TABLE statement includes a simple partition clause that specifies to partition data by year:

=> CREATE TABLE public.store_orders
(
    order_no int,
    order_date timestamp NOT NULL,
    shipper varchar(20),
    ship_date date
)
UNSEGMENTED ALL NODES
PARTITION BY YEAR(order_date);
CREATE TABLE
=> COPY store_orders FROM '/home/dbadmin/export_store_orders_data.txt';
41834

As COPY loads the new table data into ROS storage, the Tuple Mover executes the table's partition clause by dividing orders for each year into separate partitions, and consolidating these partitions in ROS containers.

In this case, the Tuple Mover creates four partition keys for the loaded data—2017, 2016, 2015, and 2014—and divides the data into separate ROS containers accordingly:

=> SELECT dump_table_partition_keys('store_orders');
... Partition keys on node v_vmart_node0001
  Projection 'store_orders_super'
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2017
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2016
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2015
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2014

 Partition keys on node v_vmart_node0002
  Projection 'store_orders_super'
   Storage [ROS container]
     No of partition keys: 1
     Partition keys: 2017
...

(1 row)

As new data is loaded into store_orders, the Tuple Mover merges it into the appropriate partitions, creating partition keys as needed for new years.

706 - single-partition-key.md

where min‑range‑value must be ≤ max‑range‑value. To specify a single partition key, min‑range‑value and max‑range‑value must be equal.
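
For example, passing the same value for both range arguments targets a single partition key. A sketch using DROP_PARTITIONS and the store_orders table from the partitioning example elsewhere in this document; DROP_PARTITIONS is only one of the functions that accepts this range:

=> SELECT DROP_PARTITIONS('store_orders', '2014', '2014');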

707 - single-table-replication-example.md

=> CONNECT TO VERTICA source_db USER dbadmin 
           PASSWORD 'mypassword' ON 'vertica_node01', 5433;
 CONNECT

=> REPLICATE customers FROM source_db;
REPLICATE

708 - single-threaded-algorithm.md

Since its input data must be sorted by timestamp, this algorithm is single-threaded.

709 - skip-non-orientable-polygons.md

(Optional) Boolean

When set to False, non-orientable polygons generate an error. For example, if you use STV_ForceLHR or STV_Reverse with skip_nonorientable_polygons set to False, a geography polygon containing a hole generates an error. When set to True, the result returned is the polygon, as passed to the API, without alteration.

This argument can help you when you are creating an index from a table containing polygons that cannot be re-oriented.

Vertica Place considers these polygons non-orientable:

  • Polygons with a hole

  • Multipolygons

  • Multipolygons with a hole

Default value: False
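
A minimal sketch of passing this argument to STV_ForceLHR; the polygon literal is hypothetical and orientable, and is used here only to show the parameter syntax:

=> SELECT ST_AsText(STV_ForceLHR(ST_GeographyFromText('POLYGON((1 1, 3 1, 2 2, 1 1))')
          USING PARAMETERS skip_nonorientable_polygons=true));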

710 - sns-notifier-example.md

The following example creates an SNS topic, subscribes to it with an SQS queue, and then configures an SNS notifier for the DC component LoginFailures:

  1. Create an SNS topic.

  2. Create an SQS queue.

  3. Subscribe the SQS queue to the SNS topic.

  4. Set SNSAuth with your AWS credentials:

    => ALTER DATABASE DEFAULT SET SNSAuth='VNDDNVOPIUQF917O5PDB:+mcnVONVIbjOnf1ekNis7nm3mE83u9fjdwmlq36Z';
    
  5. Set SNSRegion:

    => ALTER DATABASE DEFAULT SET SNSRegion='us-east-1'
    
  6. Enable HTTPS:

    => ALTER DATABASE DEFAULT SET SNSEnableHttps=1;
    
  7. Create an SNS notifier:

    => CREATE NOTIFIER v_sns_notifier ACTION 'sns' MAXPAYLOAD '256K' MAXMEMORYSIZE '10M' CHECK COMMITTED;
    
  8. Verify that the SNS notifier, SNS topic, and SQS queue are properly configured:

    1. Manually send a message from the notifier to the SNS topic with NOTIFY:

      => SELECT NOTIFY('test message', 'v_sns_notifier', 'arn:aws:sns:us-east-1:123456789012:MyTopic')
      
    2. Poll the SQS queue for your message.

  9. Attach the SNS notifier to the LoginFailures component with SET_DATA_COLLECTOR_NOTIFY_POLICY:

    => SELECT SET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures', 'v_sns_notifier', 'Login failed!', true)
    

711 - source-factory-c-.md

The SourceFactory API provides the following methods for extension by subclasses:

virtual void plan(ServerInterface &srvInterface, NodeSpecifyingPlanContext &planCtxt);

// must implement exactly one of prepareUDSources() or prepareUDSourcesExecutor()
virtual std::vector< UDSource * > prepareUDSources(ServerInterface &srvInterface,
            NodeSpecifyingPlanContext &planCtxt);

virtual std::vector< UDSource * > prepareUDSourcesExecutor(ServerInterface &srvInterface,
            ExecutorPlanContext &planCtxt);

virtual void getParameterType(ServerInterface &srvInterface,
            SizedColumnTypes &parameterTypes);

virtual bool isSourceApportionable();

ssize_t getDesiredThreads(ServerInterface &srvInterface,
            ExecutorPlanContext &planContext);

After creating your SourceFactory, you must register it with the RegisterFactory macro.

712 - source-factory-java.md

The SourceFactory API provides the following methods for extension by subclasses:

public void plan(ServerInterface srvInterface, NodeSpecifyingPlanContext planCtxt)
    throws UdfException;

// must implement one overload of prepareUDSources()
public ArrayList< UDSource > prepareUDSources(ServerInterface srvInterface,
                NodeSpecifyingPlanContext planCtxt)
    throws UdfException;

public ArrayList< UDSource > prepareUDSources(ServerInterface srvInterface,
                ExecutorPlanContext planCtxt)
    throws UdfException;

public void getParameterType(ServerInterface srvInterface, SizedColumnTypes parameterTypes);

public boolean isSourceApportionable();

public int getDesiredThreads(ServerInterface srvInterface,
                ExecutorPlanContext planCtxt)
    throws UdfException;

713 - spatial-index-def.md

Name of the spatial index, of type VARCHAR.

714 - special-variables.md

Special variables exist in a scope between a procedure's arguments and the outermost block of its definition. This means that:

  • Special variables shadow procedure arguments

  • Variables declared in the body of the stored procedure shadow special variables

715 - spheroid.md

(Optional) BOOLEAN that specifies whether to use a perfect sphere or WGS84.

Default: False

716 - spread-state-example.md

=> SELECT * FROM V_MONITOR.SPREAD_STATE;
    node_name     | token_timeout
------------------+---------------
 v_vmart_node0003 |          8000
 v_vmart_node0001 |          8000
 v_vmart_node0002 |          8000
(3 rows)

717 - srid-defintion.md

Spatial reference system identifier (SRID) of the GEOMETRY object, type INTEGER.

The SRID is stored in the GEOMETRY object, but does not influence the results of spatial computations.

718 - srid.md

Spatial reference system identifier.

719 - ssl-client-auth.md

This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.

720 - stable-functions_snip.md

When run with a given set of arguments, stable functions produce the same result within a single query or scan operation. However, a stable function can produce different results when issued in different environments or at different times, such as after a change of locale or time zone. SYSDATE is one example.
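
For illustration, reusing the store_orders table from an earlier example:

=> SELECT SYSDATE, order_no FROM store_orders; -- SYSDATE returns the same value for every row of this query
=> SELECT SYSDATE;                             -- a later query can return a different value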

721 - start-scheduler-example.md

$ nohup vkconfig launch --conf weblog.conf >/dev/null 2>&1 &

722 - storage-location-label-caution.md

723 - storage-location-startup.md

If you use HDFS storage locations, the HDFS data must be available when you start Vertica. Your HDFS cluster must be operational, and the ROS files must be present. If you moved data files, or they are corrupted, or your HDFS cluster is not responsive, Vertica cannot start.

724 - storage-location-usage-types.md

  • DATA,TEMP (default): The storage location can store persistent and temporary DML-generated data, and data for temporary tables.

  • TEMP: A path-specified location to store DML-generated temporary data. If path is set to S3, then this location is used only when the RemoteStorageForTemp configuration parameter is set to 1, and TEMP must be qualified with ALL NODES SHARED. For details, see S3 Storage of Temporary Data.

  • DATA: The storage location can only store persistent data.

  • USER: Users with READ and WRITE privileges can access data and external tables of this storage location.

  • DEPOT: The storage location is used in Eon Mode to store the depot. Only create DEPOT storage locations on local Linux file systems.

    Vertica allows a single DEPOT storage location per node. If you want to move your depot to a different location (on a different file system, for example) you must first drop the old depot storage location, then create the new location.

725 - storage-oid-description.md

Numeric ID assigned by the Vertica catalog, which identifies the storage.

726 - string-tokenizer-usage.md


=> CREATE TABLE T (url varchar(30), description varchar(2000));
CREATE TABLE
=> INSERT INTO T VALUES ('www.amazon.com','Online retail merchant and provider of cloud services');
 OUTPUT
--------
      1
(1 row)
=> INSERT INTO T VALUES ('www.vertica.com','World''s fastest analytic database');
 OUTPUT
--------
      1
(1 row)
=> COMMIT;
COMMIT

=> -- Invoke the UDTF
=> SELECT url, tokenize(description) OVER (partition by url) FROM T;
       url       |   words
-----------------+-----------
 www.amazon.com  | Online
 www.amazon.com  | retail
 www.amazon.com  | merchant
 www.amazon.com  | and
 www.amazon.com  | provider
 www.amazon.com  | of
 www.amazon.com  | cloud
 www.amazon.com  | services
 www.vertica.com | World's
 www.vertica.com | fastest
 www.vertica.com | analytic
 www.vertica.com | database
(12 rows)

727 - stv-paths.md

The path can be on any shared file system or object store.

728 - stv-polygon-point-and-stv-line-string-point.md

The values returned are points of either GEOMETRY or GEOGRAPHY type depending on the input object's type. GEOMETRY points inherit the SRID of the input object.

729 - subcluster-example-create-load-balance-group.md

=> SELECT node_name,node_address,node_address_family,subcluster_name
   FROM v_catalog.nodes;
      node_name       | node_address | node_address_family |  subcluster_name
----------------------+--------------+---------------------+--------------------
 v_verticadb_node0001 | 10.11.12.10  | ipv4                | load_subcluster
 v_verticadb_node0002 | 10.11.12.20  | ipv4                | load_subcluster
 v_verticadb_node0003 | 10.11.12.30  | ipv4                | load_subcluster
 v_verticadb_node0004 | 10.11.12.40  | ipv4                | analytics_subcluster
 v_verticadb_node0005 | 10.11.12.50  | ipv4                | analytics_subcluster
 v_verticadb_node0006 | 10.11.12.60  | ipv4                | analytics_subcluster
(6 rows)

=> CREATE NETWORK ADDRESS node01 ON v_verticadb_node0001 WITH '10.11.12.10';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node02 ON v_verticadb_node0002 WITH '10.11.12.20';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node03 ON v_verticadb_node0003 WITH '10.11.12.30';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node04 ON v_verticadb_node0004 WITH '10.11.12.40';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node05 ON v_verticadb_node0005 WITH '10.11.12.50';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node06 ON v_verticadb_node0006 WITH '10.11.12.60';
CREATE NETWORK ADDRESS

=> CREATE LOAD BALANCE GROUP load_subcluster WITH SUBCLUSTER load_subcluster
   FILTER '0.0.0.0/0';
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP analytics_subcluster WITH SUBCLUSTER
   analytics_subcluster FILTER '0.0.0.0/0';
CREATE LOAD BALANCE GROUP

730 - subcluster-resource-pool.md

[Eon only] The subcluster to associate with this resource pool, where subcluster is one of the following:

  • SUBCLUSTER subcluster-name: Resource pool for an existing subcluster. You cannot be connected to this subcluster; otherwise, Vertica returns an error.
  • CURRENT SUBCLUSTER: Resource pool for the subcluster that you are connected to.
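
A hedged sketch of how this clause might appear in CREATE RESOURCE POOL, reusing the analytics_subcluster name from an earlier example in this document:

=> CREATE RESOURCE POOL analytics_pool FOR SUBCLUSTER analytics_subcluster;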

731 - subcluster-shutdown-client-admonition.md

732 - subclusters-promote-example.md

=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | f
 load_subcluster   | t
(2 rows)


=> SELECT PROMOTE_SUBCLUSTER_TO_PRIMARY('analytics_cluster');
 PROMOTE_SUBCLUSTER_TO_PRIMARY
-------------------------------
 PROMOTE SUBCLUSTER TO PRIMARY
(1 row)


=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | t
 load_subcluster   | t
(2 rows)

733 - subexpression-numbering-note.md

734 - supported-ebs.md

  • c4.4xlarge

  • c4.8xlarge

  • c5.4xlarge

  • c5.9xlarge

  • c6i.4xlarge

  • c6i.8xlarge

  • c6i.12xlarge

  • c6i.16xlarge

  • c6i.24xlarge

  • c6i.32xlarge

  • m4.4xlarge

  • m4.10xlarge

  • m5.4xlarge

  • m5.8xlarge

  • m5.12xlarge

  • r4.4xlarge

  • r4.8xlarge

  • r4.16xlarge

  • r5.4xlarge

  • r5.8xlarge

  • r5.12xlarge

  • r6i.4xlarge

  • r6i.8xlarge

  • r6i.12xlarge

  • r6i.16xlarge

  • r6i.24xlarge

  • r6i.32xlarge

735 - supported-spatial-data-types.md

Vertica supports two spatial data types. These data types store two- and three-dimensional spatial objects in a table column:

  • GEOMETRY: Spatial object with coordinates expressed as (x,y) pairs, defined in the Cartesian plane. All calculations use Cartesian coordinates.
  • GEOGRAPHY: Spatial object defined as on the surface of a perfect sphere, or a spatial object in the WGS84 coordinate system. Coordinates are expressed in longitude/latitude angular values, measured in degrees. All calculations are in meters. For perfect sphere calculations, the sphere has a radius of 6371 kilometers, which approximates the shape of the earth.

The maximum size of a GEOMETRY or GEOGRAPHY data type is 10,000,000 bytes (10 MB). You cannot use either data type as a table's primary key.
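
For illustration, a minimal table definition that uses both types; the table name and column sizes are hypothetical, and the sizes are in bytes:

=> CREATE TABLE spatial_demo (id INT, boundary GEOMETRY(10000), region GEOGRAPHY(10000));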

736 - svd-model-example.md

=> SELECT SVD ('svdmodel', 'small_svd', 'x1,x2,x3,x4');
SVD
--------------------------------------------------------------
Finished in 1 iterations.
Accepted Rows: 8  Rejected Rows: 0
(1 row)

=> CREATE TABLE transform_svd AS SELECT
     APPLY_SVD (id, x1, x2, x3, x4 USING PARAMETERS model_name='svdmodel', exclude_columns='id', key_columns='id')
     OVER () FROM small_svd;
CREATE TABLE

=> SELECT * FROM transform_svd;
id  |       col1        |        col2         |        col3         |        col4
----+-------------------+---------------------+---------------------+--------------------
4   |  0.44849499240202 |  -0.347260956311326 |   0.186958376368345 |  0.378561270493651
6   |  0.17652411036246 | -0.0753183783382909 |  -0.678196192333598 | 0.0567124770173372
1   | 0.494871802886819 |   0.161721379259287 |  0.0712816417153664 | -0.473145877877408
2   |  0.17652411036246 | -0.0753183783382909 |  -0.678196192333598 | 0.0567124770173372
3   | 0.150974762654569 |   0.589561842046029 | 0.00392654610109522 |  0.360011163271921
5   | 0.494871802886819 |   0.161721379259287 |  0.0712816417153664 | -0.473145877877408
8   |  0.44849499240202 |  -0.347260956311326 |   0.186958376368345 |  0.378561270493651
7   | 0.150974762654569 |   0.589561842046029 | 0.00392654610109522 |  0.360011163271921
(8 rows)

=> SELECT APPLY_INVERSE_SVD (* USING PARAMETERS model_name='svdmodel', exclude_columns='id',
key_columns='id') OVER () FROM transform_svd;
id  |        x1        |        x2        |        x3        |        x4
----+------------------+------------------+------------------+------------------
4 | 91.4056627665577 | 44.7629617207482 | 83.1704961993117 | 38.9274292265543
6 | 20.6468626294368 | 9.30974906868751 | 8.71006863405534 |  6.5855928603967
7 | 31.2494347777156 | 20.6336519003026 | 27.5668287751507 | 5.84427645886865
1 |  107.93376580719 | 51.6980548011917 | 97.9665796560552 | 40.4918236881051
2 | 20.6468626294368 | 9.30974906868751 | 8.71006863405534 |  6.5855928603967
3 | 31.2494347777156 | 20.6336519003026 | 27.5668287751507 | 5.84427645886865
5 |  107.93376580719 | 51.6980548011917 | 97.9665796560552 | 40.4918236881051
8 | 91.4056627665577 | 44.7629617207482 | 83.1704961993117 | 38.9274292265543
(8 rows)

737 - svm-model-attributes.md

coeff
Coefficients in the model:
  • colNames: Intercept, or predictor column name

  • coefficients: Coefficient value

nAccepted
Number of samples accepted for training from the data set
nRejected
Number of samples rejected when training
nIteration
Number of iterations used in training
callStr
SQL statement used to replicate the training

738 - sync-hcat-schema-types.md

Hive STRING and BINARY data types are matched, in Vertica, to the VARCHAR(65000) and VARBINARY(65000) types. Adjust the data types with ALTER TABLE as needed after creating the schema. The maximum size of a VARCHAR or VARBINARY in Vertica is 65000, but you can use LONG VARCHAR and LONG VARBINARY to specify larger values.

Hive and Vertica define string length in different ways. In Hive the length is the number of characters; in Vertica it is the number of bytes. Thus, a character encoding that uses more than one byte, such as Unicode, can cause mismatches between the two. To avoid data truncation, set values in Vertica based on bytes, not characters.
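
For illustration, you can compare character and byte counts of a multibyte string with LENGTH and OCTET_LENGTH; the byte count is larger whenever multibyte characters are present:

=> SELECT LENGTH('déjà vu') AS characters, OCTET_LENGTH('déjà vu') AS bytes;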

If data size exceeds the column size, Vertica logs an event at read time in the QUERY_EVENTS system table.

739 - synch-with-hcatalog-schema-example.md

=> CREATE HCATALOG SCHEMA hcat WITH hostname='hcathost' HCATALOG_SCHEMA='default'
   HCATALOG_USER='hcatuser';
CREATE SCHEMA
=> SELECT sync_with_hcatalog_schema('hcat', 'hcat');
sync_with_hcatalog_schema
----------------------------------------
Schema hcat synchronized with hcat
tables in hcat = 56
tables altered in hcat = 0
tables created in hcat = 56
stale tables in hcat = 0
table changes erred in hcat = 0
(1 row)

=> -- Use vsql's \d command to describe a table in the synced schema

=> \d hcat.messages
List of Fields by Tables
  Schema   |   Table  | Column  |      Type      | Size  | Default | Not Null | Primary Key | Foreign Key
-----------+----------+---------+----------------+-------+---------+----------+-------------+-------------
hcat       | messages | id      | int            |     8 |         | f        | f           |
hcat       | messages | userid  | varchar(65000) | 65000 |         | f        | f           |
hcat       | messages | "time"  | varchar(65000) | 65000 |         | f        | f           |
hcat       | messages | message | varchar(65000) | 65000 |         | f        | f           |
(4 rows)

740 - syslog-no-logging.md

Events monitored by this notifier type are not logged to MONITORING_EVENTS or vertica.log.

741 - syslog-notifier-example.md

The following example creates a notifier that writes a message to syslog when the Data collector (DC) component LoginFailures updates:

  1. Enable syslog notifiers for the current database:

    => ALTER DATABASE DEFAULT SET SyslogEnabled = 1;
    
  2. Create and enable a syslog notifier v_syslog_notifier:

    => CREATE NOTIFIER v_syslog_notifier ACTION 'syslog'
        ENABLE
        MAXMEMORYSIZE '10M'
        IDENTIFIED BY 'f8b0278a-3282-4e1a-9c86-e0f3f042a971'
        PARAMETERS 'eventSeverity = 5';
    
  3. Configure the syslog notifier v_syslog_notifier for updates to the LoginFailures DC component with SET_DATA_COLLECTOR_NOTIFY_POLICY:

    => SELECT SET_DATA_COLLECTOR_NOTIFY_POLICY('LoginFailures','v_syslog_notifier', 'Login failed!', true);
    

    This notifier writes the following message to syslog (default location: /var/log/messages) when a user fails to authenticate as the user Bob:

    Apr 25 16:04:58
    vertica_host_01
    vertica:
        Event Posted:
            Event Code:21
            Event Id:0
            Event Severity: Notice [5]
            PostedTimestamp: 2022-04-25 16:04:58.083063
            ExpirationTimestamp: 2022-04-25 16:04:58.083063
            EventCodeDescription: Notifier
            ProblemDescription: (Login failed!)
        {
           "_db":"VMart",
           "_schema":"v_internal",
           "_table":"dc_login_failures",
           "_uuid":"f8b0278a-3282-4e1a-9c86-e0f3f042a971",
           "authentication_method":"Reject",
           "client_authentication_name":"default: Reject",
           "client_hostname":"::1",
           "client_label":"",
           "client_os_user_name":"dbadmin",
           "client_pid":523418,
           "client_version":"",
           "database_name":"dbadmin",
           "effective_protocol":"3.8",
           "node_name":"v_vmart_node0001",
           "reason":"REJECT",
           "requested_protocol":"3.8",
           "ssl_client_fingerprint":"",
           "ssl_client_subject":"",
           "time":"2022-04-25 16:04:58.082568-05",
           "user_name":"Bob"
        }#012
        DatabaseName: VMart
        Hostname: vertica_host_01
    

742 - system-table-save-info.md

Vertica maintains system table metrics until they reach a designated size quota (in kilobytes). This quota is set by internal processes; you cannot set or view it directly.

743 - table-constraintfor-alter-create-table-sections.md

Table-constraint

Adds a constraint to table metadata. You can specify table constraints with CREATE TABLE, or add a constraint to an existing table with ALTER TABLE. For details, see Setting constraints.

Syntax

[ CONSTRAINT constraint-name ]
{
... PRIMARY KEY (column[,... ]) [ ENABLED | DISABLED ]
... | FOREIGN KEY (column[,... ] ) REFERENCES table [ (column[,...]) ]
... | UNIQUE (column[,...]) [ ENABLED | DISABLED ]
... | CHECK (expression) [ ENABLED | DISABLED ]
}

Parameters

CONSTRAINT constraint-name
Assigns a name to the constraint. Vertica recommends that you name all constraints.
PRIMARY KEY
Defines one or more NOT NULL columns as the primary key as follows:
PRIMARY KEY (column[,...]) [ ENABLED | DISABLED]

You can qualify this constraint with the keyword ENABLED or DISABLED. See Enforcing Constraints below.

If you do not name a primary key constraint, Vertica assigns the name C_PRIMARY.

FOREIGN KEY
Adds a referential integrity constraint defining one or more columns as foreign keys as follows:
FOREIGN KEY (column[,... ]) REFERENCES table [(column[,... ])]

If you omit column, Vertica references the primary key in table.

If you do not name a foreign key constraint, Vertica assigns the name C_FOREIGN.

UNIQUE
Specifies that the data in a column or group of columns is unique with respect to all table rows, as follows:
UNIQUE (column[,...]) [ENABLED | DISABLED]

You can qualify this constraint with the keyword ENABLED or DISABLED. See Enforcing Constraints below.

If you do not name a unique constraint, Vertica assigns the name C_UNIQUE.

CHECK
Specifies a check condition as an expression that returns a Boolean value, as follows:
CHECK (expression) [ENABLED | DISABLED]

You can qualify this constraint with the keyword ENABLED or DISABLED. See Enforcing Constraints below.

If you do not name a check constraint, Vertica assigns the name C_CHECK.

Privileges

Non-superusers: table owner, or the following privileges:

Enforcing constraints

A table can specify whether Vertica automatically enforces a primary key, unique key, or check constraint with the keyword ENABLED or DISABLED. If you omit ENABLED or DISABLED, Vertica determines whether to enable the constraint automatically by checking the appropriate configuration parameter:

  • EnableNewPrimaryKeysByDefault

  • EnableNewUniqueKeysByDefault

  • EnableNewCheckConstraintsByDefault

For details, see Constraint enforcement.
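
For illustration, a hedged sketch of enabling automatic enforcement for new primary key constraints database-wide, using the ALTER DATABASE form that appears elsewhere in this document:

=> ALTER DATABASE DEFAULT SET EnableNewPrimaryKeysByDefault = 1;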

Examples

The following example creates a table (t01) with a primary key constraint.

CREATE TABLE t01 (id int CONSTRAINT sampleconstraint PRIMARY KEY);
CREATE TABLE

This example creates the same table without the constraint, and then adds the constraint with ALTER TABLE ADD CONSTRAINT:

CREATE TABLE t01 (id int);
CREATE TABLE

ALTER TABLE t01 ADD CONSTRAINT sampleconstraint PRIMARY KEY(id);
WARNING 2623:  Column "id" definition changed to NOT NULL
ALTER TABLE

The following example creates a table (addapk) with two columns, adds a third column to the table, and then adds a primary key constraint on the third column.

=> CREATE TABLE addapk (col1 INT, col2 INT);
CREATE TABLE

=> ALTER TABLE addapk ADD COLUMN col3 INT;
ALTER TABLE

=> ALTER TABLE addapk ADD CONSTRAINT col3constraint PRIMARY KEY (col3) ENABLED;
WARNING 2623:  Column "col3" definition changed to NOT NULL
ALTER TABLE

Using the sample table addapk, check that the primary key constraint is enabled (is_enabled is t).

=> SELECT constraint_name, column_name, constraint_type, is_enabled FROM PRIMARY_KEYS WHERE table_name IN ('addapk');

 constraint_name | column_name | constraint_type | is_enabled
-----------------+-------------+-----------------+------------
 col3constraint  | col3        | p               | t
(1 row)

This example disables the constraint using ALTER TABLE ALTER CONSTRAINT.

=> ALTER TABLE addapk ALTER CONSTRAINT col3constraint DISABLED;

Check that the primary key is now disabled (is_enabled is f).

=> SELECT constraint_name, column_name, constraint_type, is_enabled FROM PRIMARY_KEYS WHERE table_name IN ('addapk');

 constraint_name | column_name | constraint_type | is_enabled
-----------------+-------------+-----------------+------------
 col3constraint  | col3        | p               | f
(1 row)

For a general discussion of constraints, see Constraints. For additional examples of creating and naming constraints, see Naming constraints.

744 - table-eviction-policy-supersession.md

If the table has another eviction policy already set on it, the new policy supersedes it.

745 - table-privileges.md

746 - target-primary-key-restrictions-warning.md

747 - tensorflow-ct-example.md

For instance, the MNIST handwritten digit classification dataset contains 784 input features for each input row, one feature for each pixel in the images of handwritten digits. The PREDICT_TENSORFLOW function requires that each of these input features is contained in a separate input column. By encapsulating these features into a single ARRAY, the PREDICT_TENSORFLOW_SCALAR function needs only a single input column of type ROW, where the pixel values are the array elements of an input field:

--Each array for the "image" field has 784 elements. 
=> SELECT * FROM mnist_train;
id |                   inputs
---+---------------------------------------------
 1 | {"image":[0, 0, 0,..., 244, 222, 210,...]}
 2 | {"image":[0, 0, 0,..., 185, 84, 223,...]}
 3 | {"image":[0, 0, 0,..., 133, 254, 78,...]}
 ...

In this case, the function output consists of a single operation with one tensor. The value of this field is an array of ten elements, which are all zero except for the element whose index is the predicted digit:

=> SELECT id, PREDICT_TENSORFLOW_SCALAR(inputs USING PARAMETERS model_name='tf_mnist_ct') FROM mnist_test;
 id |                   PREDICT_TENSORFLOW_SCALAR                      
----+-------------------------------------------------------------------
  1 | {"prediction:0":["0", "0", "0", "0", "1", "0", "0", "0", "0", "0"]} 
  2 | {"prediction:0":["0", "1", "0", "0", "0", "0", "0", "0", "0", "0"]} 
  3 | {"prediction:0":["0", "0", "0", "0", "0", "0", "0", "1", "0", "0"]} 
...

748 - tf-integration-install.md

$ /opt/vertica/bin/admintools -t install_package -d database_name -p 'password' --package TFIntegration

749 - time-zone-literals.md

Time zone literals

To view the default list of valid literals, see the files in the following directory:

/opt/vertica/share/timezonesets

For example:

$ cat Antarctica.txt
...
# src/timezone/tznames/Antarctica.txt
#

AWST    28800    # Australian Western Standard Time
                 #     (Antarctica/Casey)
                 #     (Australia/Perth)
...

NZST    43200    # New Zealand Standard Time
                 #     (Antarctica/McMurdo)
                 #     (Pacific/Auckland)
ROTT   -10800    # Rothera Time
                 #     (Antarctica/Rothera)
SYOT    10800    # Syowa Time
                 #     (Antarctica/Syowa)
VOST    21600    # Vostok time
                 #     (Antarctica/Vostok)

750 - time-zone-specification1.md

Specifying time zones

You can specify time zones in two ways:

  • A string literal such as America/Chicago or PST

  • An interval that specifies a UTC offset—for example, INTERVAL '-08:00'

It is generally good practice to specify time zones with literals that indicate a geographic location. Vertica makes the necessary seasonal adjustments, and thereby avoids inconsistent results. For example, the following two queries are issued when daylight time is in effect. Because the local UTC offset during daylight time is -04, both queries return the same results:

751 - time-zone-specification2.md

If you use the UTC offset in a similar query when standard time is in effect, you must adjust the UTC offset accordingly (for Denver time, to -07); otherwise, Vertica returns a different (and erroneous) result:

752 - timeout-option-description.md

Integer number of seconds to wait for a response from the Kafka cluster.

753 - tls-auth-record-example.md

For example, to reject all plaintext client connections, specify the reject authentication method and the HOST NO TLS access method as follows:

=> CREATE AUTHENTICATION RejectNoSSL METHOD 'reject' HOST NO TLS '0.0.0.0/0';  --IPv4
=> CREATE AUTHENTICATION RejectNoSSL METHOD 'reject' HOST NO TLS '::/0';       --IPv6

754 - tls-config-privileges.md

A privilege, one of the following:

755 - tls-notifier-example.md

=> CREATE NOTIFIER encrypted_notifier
    ACTION 'kafka://127.0.0.1:9092'
    MAXMEMORYSIZE '10M'
    TLS CONFIGURATION 'notifier_tls_config';

756 - tlsmode-client-server-tls.md

  • DISABLE: Disables TLS. All other options for this parameter enable TLS.

  • ENABLE: Enables TLS. Vertica does not verify client certificates.

  • TRY_VERIFY: Establishes a TLS connection if one of the following is true:

    • The client presents a valid certificate.
    • The client doesn't present a certificate.

    If the client presents an invalid certificate, the connection is rejected.

  • VERIFY_CA: Connection succeeds if Vertica verifies that the client certificate is from a trusted CA. If the client does not present a client certificate, the connection is rejected.

757 - tlsmode.md

  • DISABLE: Disables TLS. All other options for this parameter enable TLS.

  • ENABLE: Enables TLS. Vertica does not check client certificates.

  • TRY_VERIFY: Establishes a TLS connection if one of the following is true:

    • the other host presents a valid certificate

    • the other host doesn't present a certificate

    If the other host presents an invalid certificate, the connection will use plaintext.

  • VERIFY_CA: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA. If the other host does not present a certificate, the connection uses plaintext.

  • VERIFY_FULL: Connection succeeds if Vertica verifies that the other host's certificate is from a trusted CA and the certificate's cn (Common Name) or subjectAltName attribute matches the hostname or IP address of the other host.

    Note that for client certificates, cn is used for the username, so subjectAltName must match the hostname or IP address of the other host.

758 - tlsmodeldap.md

Enable TLS or LDAPS (the exact protocol used depends on the value of host in the AUTHENTICATION object) by setting the TLSMODE to one of the following. TRY_VERIFY or higher requires a CA certificate:

  • ENABLE: Enables TLS. Vertica does not check the LDAP server's certificate.

  • TRY_VERIFY: Establishes a TLS connection if one of the following is true:

    • The LDAP server presents a valid certificate.

    • The LDAP server doesn't present a certificate.

    If the LDAP server presents an invalid certificate, a plaintext connection is used.

  • VERIFY_CA: Connection succeeds if Vertica verifies that the LDAP server's certificate is from a trusted CA. Using this TLSMODE forces all connections without a certificate to use plaintext.

  • VERIFY_FULL: Connection succeeds if Vertica verifies that the LDAP server's certificate is from a trusted CA and the cn (Common Name) or subjectAltName attribute matches the hostname or IP address of the LDAP server.

    The cn is used for the username, so subjectAltName must match the hostname or IP address of the LDAP server.

759 - tm-resource-pool-maxconcurrency.md

Sets the maximum number of concurrent execution slots available to the TM pool across all nodes. In databases created in Vertica releases ≥9.3, the default value is 7. In databases created in earlier versions, the default is 3. This setting specifies the maximum number of merges that can occur simultaneously on multiple threads.
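
For example, a hedged sketch of adjusting this setting on the TM pool:

=> ALTER RESOURCE POOL tm MAXCONCURRENCY 5;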

760 - tolerations.md

Any tolerations and taints that aid in determining where to schedule a pod.

761 - transaction-id-column-description.md

Identifier for the transaction within the session, if any. If a session is active but no transaction has begun, TRANSACTION_ID returns NULL.

762 - transaction-isolation-level-retention-per-session.md

A transaction retains its isolation level until it completes, even if the session's isolation level changes during the transaction. Vertica internal processes (such as the Tuple Mover and refresh operations) and DDL operations always run at the SERIALIZABLE isolation level to ensure consistency.

763 - transform-function-c-.md

The TransformFunction API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes);

virtual void processPartition(ServerInterface &srvInterface,
        PartitionReader &input_reader, PartitionWriter &output_writer)=0;

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes);

The PartitionReader and PartitionWriter classes provide getters and setters for column values, along with next() to iterate through partitions. See the API reference documentation for details.

764 - transform-function-factory-c-.md

The TransformFunctionFactory API provides the following methods for extension by subclasses:

virtual TransformFunction *
    createTransformFunction (ServerInterface &srvInterface)=0;

virtual void getPrototype(ServerInterface &srvInterface,
            ColumnTypes &argTypes, ColumnTypes &returnType)=0;

virtual void getReturnType(ServerInterface &srvInterface,
            const SizedColumnTypes &argTypes,
            SizedColumnTypes &returnType)=0;

virtual void getParameterType(ServerInterface &srvInterface,
            SizedColumnTypes &parameterTypes);

virtual void getTransformFunctionProperties(ServerInterface &srvInterface,
            const SizedColumnTypes &argTypes,
            Properties &properties);

765 - transform-function-factory-java.md

The TransformFunctionFactory API provides the following methods for extension by subclasses:

public abstract TransformFunction createTransformFunction(ServerInterface srvInterface);

public abstract void getPrototype(ServerInterface srvInterface, ColumnTypes argTypes, ColumnTypes returnType);

public abstract void getReturnType(ServerInterface srvInterface, SizedColumnTypes argTypes,
        SizedColumnTypes returnType) throws UdfException;

public void getParameterType(ServerInterface srvInterface, SizedColumnTypes parameterTypes);

766 - transform-function-factory-python.md

The TransformFunctionFactory API provides the following methods for extension by subclasses:


def createTransformFunction(self, srv)

def getPrototype(self, srv_interface, arg_types, return_type)

def getReturnType(self, srv_interface, arg_types, return_type)

def getParameterType(self, server_interface, parameterTypes)

def getTransformFunctionProperties(self, server_interface, arg_types)

767 - transform-function-factory-r.md

Implement the Factory function API to define a transform function factory:

FunctionNameFactory <- function() {
  list(name    = FunctionName,
       udxtype = c("scalar"),
       intype  = c("int"),
       outtype = c("int"))
}

768 - transform-function-java.md

The TransformFunction API provides the following methods for extension by subclasses:

public void setup(ServerInterface srvInterface, SizedColumnTypes argTypes);

public abstract void processPartition(ServerInterface srvInterface,
        PartitionReader input_reader, PartitionWriter output_writer)
    throws UdfException, DestroyInvocation;

protected void cancel(ServerInterface srvInterface);

public void destroy(ServerInterface srvInterface, SizedColumnTypes argTypes);

The PartitionReader and PartitionWriter classes provide getters and setters for column values, along with next() to iterate through partitions. See the API reference documentation for details.

769 - transform-function-prop-c-.md

To mark a UDTF as one-to-many, you must set the isExploder class property to true within the getTransformFunctionProperties() method. Whether a UDTF is one-to-many can depend on the transform function's arguments and parameters, for example:

void getTransformFunctionProperties(ServerInterface &srvInterface,
        const SizedColumnTypes &argTypes,
        Properties &properties) override
{
    if (argTypes.getColumnCount() > 1) {
        properties.isExploder = false;
    }
    else {
        properties.isExploder = true;
    }
}

770 - transform-function-prop-python.md

To mark a UDTF as one-to-many, you must set the is_exploder class property to True within the getTransformFunctionProperties() method. Whether a UDTF is one-to-many can depend on the transform function's arguments and parameters, for example:

def getTransformFunctionProperties(self, server_interface, arg_types):
    props = vertica_sdk.TransformFunctionFactory.Properties()
    if arg_types.getColumnCount() != 1:
        props.is_exploder = False
    else:
        props.is_exploder = True
    return props

771 - transform-function-python.md

The TransformFunction API provides the following methods for extension by subclasses:


def setup(self, server_interface, col_types)

def processPartition(self, server_interface, partition_reader, partition_writer)

def destroy(self, server_interface, col_types)

The PartitionReader and PartitionWriter classes provide getters and setters for column values, along with next() to iterate through partitions. See the API reference documentation for details.

772 - transform-function-r.md

Implement the Main function API to define a transform function:

FunctionName <- function(input.data.frame, parameters.data.frame) {
  # Computations

  # The function must return a data frame.
  return(output.data.frame)
}

773 - troubleshooting-information.md

774 - truncated-table-output.md

Query results for the fields INPUT_QUERY and ANNOTATED_QUERY are truncated after ~32K characters. You can get the full content of both fields in two ways:

775 - ud-an-f-example-rank.md

An example of running this rank function, named an_rank, is:

=> SELECT * FROM hits;
      site       |    date    | num_hits
-----------------+------------+----------
 www.example.com | 2012-01-02 |       97
 www.vertica.com | 2012-01-01 |   343435
 www.example.com | 2012-01-01 |      123
 www.example.com | 2012-01-04 |      112
 www.vertica.com | 2012-01-02 |   503695
 www.vertica.com | 2012-01-03 |   490387
 www.example.com | 2012-01-03 |      123
(7 rows)
=> SELECT site,date,num_hits,an_rank()
   OVER (PARTITION BY site ORDER BY num_hits DESC)
   AS an_rank FROM hits;
      site       |    date    | num_hits | an_rank
-----------------+------------+----------+---------
 www.example.com | 2012-01-03 |      123 |       1
 www.example.com | 2012-01-01 |      123 |       1
 www.example.com | 2012-01-04 |      112 |       3
 www.example.com | 2012-01-02 |       97 |       4
 www.vertica.com | 2012-01-02 |   503695 |       1
 www.vertica.com | 2012-01-03 |   490387 |       2
 www.vertica.com | 2012-01-01 |   343435 |       3
(7 rows)

As with the built-in RANK analytic function, rows that have the same value for the ORDER BY column (num_hits in this example) have the same rank, but the rank continues to increase, so that the next row that has a different ORDER BY key gets a rank value based on the number of rows that preceded it.

776 - ud-chunker-c-.md

The UDChunker API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface,
            SizedColumnTypes &returnType);

virtual StreamState alignPortion(ServerInterface &srvInterface,
            DataBuffer &input, InputState state);

virtual StreamState process(ServerInterface &srvInterface,
            DataBuffer &input, InputState input_state)=0;

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface,
            SizedColumnTypes &returnType);

777 - ud-filter-c-.md

The UDFilter API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface);

virtual bool useSideChannel();

virtual StreamState process(ServerInterface &srvInterface, DataBuffer &input, InputState input_state, DataBuffer &output)=0;

virtual StreamState processWithMetadata(ServerInterface &srvInterface, DataBuffer &input,
    LengthBuffer &input_lengths, InputState input_state, DataBuffer &output, LengthBuffer &output_lengths)=0;

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface);

778 - ud-filter-java.md

The UDFilter API provides the following methods for extension by subclasses:

public void setup(ServerInterface srvInterface) throws UdfException;

public abstract StreamState process(ServerInterface srvInterface, DataBuffer input,
                InputState input_state, DataBuffer output)
    throws UdfException;

protected void cancel(ServerInterface srvInterface);

public void destroy(ServerInterface srvInterface) throws UdfException;

779 - ud-filter-python.md

The UDFilter API provides the following methods for extension by subclasses:

class PyUDFilter(vertica_sdk.UDFilter):
    def __init__(self):
        pass

    def setup(self, srvInterface):
        pass

    def process(self, srvInterface, inputbuffer, outputbuffer, inputstate):
        # Process the input data here and write the result to outputbuffer.
        return StreamState.DONE

780 - ud-parser-c-.md

The UDParser API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface, SizedColumnTypes &returnType);

virtual bool useSideChannel();

virtual StreamState process(ServerInterface &srvInterface, DataBuffer &input, InputState input_state)=0;

virtual StreamState processWithMetadata(ServerInterface &srvInterface, DataBuffer &input,
                            LengthBuffer &input_lengths, InputState input_state)=0;

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface, SizedColumnTypes &returnType);

virtual RejectedRecord getRejectedRecord();

781 - ud-parser-java.md

The UDParser API provides the following methods for extension by subclasses:

public void setup(ServerInterface srvInterface, SizedColumnTypes returnType)
    throws UdfException;

public abstract StreamState process(ServerInterface srvInterface,
                DataBuffer input, InputState input_state)
    throws UdfException, DestroyInvocation;

protected void cancel(ServerInterface srvInterface);

public void destroy(ServerInterface srvInterface, SizedColumnTypes returnType)
    throws UdfException;

public RejectedRecord getRejectedRecord() throws UdfException;

A UDParser uses a StreamWriter to write its output. StreamWriter provides methods for all the basic types, such as setBooleanValue(), setStringValue(), and so on. In the Java API this class also provides the setValue() method, which automatically sets the data type.

The methods described so far write single column values. StreamWriter also provides a method to write a complete row from a map. The setRowFromMap() method takes a map of column names and values and writes all the values into their corresponding columns. This method does not define new columns but instead writes values only to existing columns. The JsonParser example uses this method to write arbitrary JSON input. (See Java example: JSON parser.)

setRowFromMap() also populates any VMap ('raw') column of Flex Tables (see Flex tables) with the entire provided map. For most cases, setRowFromMap() is the appropriate way to populate a Flex Table. However, you can also generate a VMap value into a specified column using setVMap(), similar to other setValue() methods.

The setRowFromMap() method automatically coerces the input values into the types defined for those columns using an associated TypeCoercion. In most cases, using the default implementation (StandardTypeCoercion) is appropriate.

TypeCoercion uses policies to govern its behavior. For example, the FAIL_INVALID_INPUT_VALUE policy means invalid input is treated as an error instead of using a null value. Errors are caught and handled as rejections (see "Rejecting Rows" in User-defined parser). Policies also govern whether input that is too long is truncated. Use the setPolicy() method on the parser's TypeCoercion to set policies. See the API documentation for supported values.

You might need to customize type coercion beyond setting these policies. To do so, subclass one of the provided implementations of TypeCoercion and override the asType() methods. Such customization could be necessary if your parser reads objects that come from a third-party library. A parser handling geo-coordinates, for example, might override asLong to translate inputs like "40.4397N" into numbers. See the Vertica API documentation for a list of implementations.

782 - ud-parser-python.md

The UDParser API provides the following methods for extension by subclasses:


class PyUDParser(vertica_sdk.UDParser):
    def __init__(self):
        pass
    def setup(self, srvInterface, returnType):
        pass
    def process(self, srvInterface, inputbuffer, inputstate, streamwriter):
        # Implement the process function here.
        # Read data from inputbuffer and parse it.
        # Emit output rows via the streamwriter argument.
        return StreamState.DONE

In Python, the process() method requires both an input buffer and an output buffer (see InputBuffer and OutputBuffer APIs). The input buffer represents the source of the information that you want to parse. The output buffer delivers the parsed information to Vertica.

In the event the parser rejects a record, use the method REJECT() to identify the rejected data and the reason for the rejection.

783 - ud-source-c-.md

The UDSource API provides the following methods for extension by subclasses:

virtual void setup(ServerInterface &srvInterface);

virtual bool useSideChannel();

virtual StreamState process(ServerInterface &srvInterface, DataBuffer &output)=0;

virtual StreamState processWithMetadata(ServerInterface &srvInterface, DataBuffer &output, LengthBuffer &output_lengths)=0;

virtual void cancel(ServerInterface &srvInterface);

virtual void destroy(ServerInterface &srvInterface);

virtual vint getSize();

virtual std::string getUri();

784 - ud-source-java.md

The UDSource API provides the following methods for extension by subclasses:

public void setup(ServerInterface srvInterface) throws UdfException;

public abstract StreamState process(ServerInterface srvInterface, DataBuffer output) throws UdfException;

protected void cancel(ServerInterface srvInterface);

public void destroy(ServerInterface srvInterface) throws UdfException;

public Integer getSize();

public String getUri();

785 - udf.md

User-defined extensions (UDxs) are functions contained in external libraries that are developed in C++, Python, Java, or R using the Vertica SDK. The external libraries are defined in the Vertica catalog using the CREATE LIBRARY statement. They are best suited for analytic operations that are difficult to perform in SQL, or that need to be performed frequently enough that their speed is a major concern.
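
For illustration, a minimal sketch of loading a UDx library into the catalog; the library name and path are hypothetical:

=> CREATE LIBRARY MyFunctions AS '/home/dbadmin/MyFunctions.so' LANGUAGE 'C++';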

786 - udsf-get-output-range.md


    /*
     * This method computes the output range for this scalar function from
     *   the ranges of its inputs in a single invocation.
     *
     * The input ranges are retrieved via inRange
     * The output range is returned via outRange
     */
    virtual void getOutputRange(Vertica::ServerInterface &srvInterface,
                                Vertica::ValueRangeReader &inRange,
                                Vertica::ValueRangeWriter &outRange)
    {
        if (inRange.hasBounds(0) && inRange.hasBounds(1)) {
            // Input ranges have bounds defined
            if (inRange.isNull(0) || inRange.isNull(1)) {
                // At least one range has only NULL values.
                // Output range can only have NULL values.
                outRange.setNull();
                outRange.setHasBounds();
                return;
            } else {
                // Compute output range
                const vint& a1LoBound = inRange.getIntRefLo(0);
                const vint& a2LoBound = inRange.getIntRefLo(1);
                outRange.setIntLo(a1LoBound + a2LoBound);

                const vint& a1UpBound = inRange.getIntRefUp(0);
                const vint& a2UpBound = inRange.getIntRefUp(1);
                outRange.setIntUp(a1UpBound + a2UpBound);
            }
        } else {
            // Input ranges are unbounded. No output range can be defined
            return;
        }

        if (!inRange.canHaveNulls(0) && !inRange.canHaveNulls(1)) {
            // There cannot be NULL values in the output range
            outRange.setCanHaveNulls(false);
        }

        // Let Vertica know that the output range is bounded
        outRange.setHasBounds();
    }

787 - udsf-row-restrictions.md

A ROW returned from a UDSF cannot be used as an argument to COUNT.

788 - unbounded-collections-max-binary-size.md

Instead of specifying a bound, you can specify a maximum binary size for an unbounded collection. The binary size acts as an absolute limit, regardless of how many elements the collection contains. Collections that do not specify a maximum binary size use the value of DefaultArrayBinarySize. This size is set at the time the collection is defined and is not affected by later changes to the value of DefaultArrayBinarySize.

You cannot set a maximum binary size for a bounded collection, only an unbounded one.
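
For illustration, a hedged sketch of changing the default before defining new unbounded collections; the value shown is arbitrary:

=> ALTER DATABASE DEFAULT SET DefaultArrayBinarySize = 100000;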

789 - unique-identifiers.md

It must also be unique among all names of sequences, tables, projections, views, and models within the same schema.

790 - unix-odbc-value-length-limit.md

791 - unmatched-keys.md

Data being loaded can contain keys that are not part of the table definition. If you are loading into a flex table (or a flexible complex type column), no data is lost. For a table with strongly-defined columns, however, new keys cannot be loaded because the table does not have a place to put them.

This parser emits warnings if it finds new keys and if both of the following are true:

  • The target table is not a flex table.

  • The new key is not nested within a flexible complex type column.

New keys are logged in the UDX_EVENTS system table. If a new key is a complex type with nested keys, only the top-level key is logged. When you see a warning about unmatched keys, you can query this table and then use ALTER TABLE to modify your table definition for future loads.

Querying an external table loads data and thus can trigger these warnings. To prevent them, set the suppress_warnings parameter to 'unmatched_keys' or 'true':

792 - unsegmented-projection-example.md

The following example shows how to create an unsegmented projection for the table store.store_dimension:


=> CREATE PROJECTION store.store_dimension_proj (storekey, name, city, state)
             AS SELECT store_key, store_name, store_city, store_state
             FROM store.store_dimension
             UNSEGMENTED ALL NODES;
CREATE PROJECTION

Vertica uses the same name to identify all instances of the unsegmented projection—in this example, store.store_dimension_proj. The keyword ALL NODES specifies to replicate the projection on all nodes:


=> \dj store.store_dimension_proj
                         List of projections
 Schema |         Name         |  Owner  |       Node       | Comment
--------+----------------------+---------+------------------+---------
 store  | store_dimension_proj | dbadmin | v_vmart_node0001 |
 store  | store_dimension_proj | dbadmin | v_vmart_node0002 |
 store  | store_dimension_proj | dbadmin | v_vmart_node0003 |
(3 rows)

For more information about projection name conventions, see Projection naming.

793 - unsegmented-projections-intro.md

In many cases, dimension tables are relatively small, so you do not need to segment them. Accordingly, you should design a K-safe database so projections for its dimension tables are replicated without segmentation on all cluster nodes. You create unsegmented projections with a CREATE PROJECTION statement that includes the clause UNSEGMENTED ALL NODES. This clause specifies to create identical instances of the projection on all cluster nodes.

794 - update-description.md

Cannot be used with --create, --delete, or --read.

795 - uri-azure-blob-details.md

azb://[account@]host[:port]/container/path

796 - uri-azure-blob.md

azb://account/container/path

797 - uri-gcs.md

gs://bucket/path

798 - uri-hdfs-name-node.md

[[s]web]hdfs://namenode-host:port/path

799 - uri-hdfs-ns.md

[[s]web]hdfs://[nameservice]/path

800 - uri-s3.md

s3://bucket/path

801 - user-defined-sql-function_snip.md

User-defined SQL functions let you define and store commonly-used SQL expressions as a function. User-defined SQL functions are useful for executing complex queries and combining Vertica built-in functions. You simply call the function name you assigned in your query.

A user-defined SQL function can be used anywhere in a query where an ordinary SQL expression can be used, except in a table partition clause or the projection segmentation clause.
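
For illustration, a minimal sketch of defining and calling one; the function name and logic are hypothetical:

=> CREATE FUNCTION zero_if_null(x INT) RETURN INT
   AS BEGIN
      RETURN (CASE WHEN x IS NULL THEN 0 ELSE x END);
   END;
=> SELECT zero_if_null(NULL);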

802 - user-id-column-description.md

Unique numeric ID assigned by the Vertica catalog, which identifies the user.

803 - users-ls.md

Restrictions

You cannot change a storage location from a USER usage type if you created the location that way, or to a USER type if you did not. You can change a USER storage location to specify DATA (storing TEMP files is not supported). However, doing so does not affect the primary objective of a USER storage location, to be accessible by non-dbadmin users with assigned privileges.

You cannot change a storage location from SHARED TEMP or SHARED USER to SHARED DATA or the reverse.
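
For example, a hedged sketch of changing a USER location to store DATA with ALTER_LOCATION_USE; the path is hypothetical, and the empty string for the node argument is assumed to apply the change on all nodes:

=> SELECT ALTER_LOCATION_USE('/home/dbadmin/UserStorage', '', 'DATA');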

804 - users-must-have-same-parameters-note.md

The SecureData integration only supports one configuration for the SecureData Appliance at a time.

805 - uuid-backwards-compatibility-note.md

Vertica version 9.0.0 introduced the UUID data type, including JDBC support for UUIDs. The Vertica ADO.NET, ODBC, and OLE DB clients added full support for UUIDs in version 9.0.1. Vertica maintains backwards compatibility with older supported client driver versions that do not support the UUID data type, as follows:

  • When an older client queries tables with UUID columns, Vertica translates the native UUID values to CHAR values.

  • When an older client inserts data into a UUID column, Vertica converts the CHAR value sent by the client into a native UUID value.

  • When an older client queries a UUID column's metadata, Vertica reports its data type as CHAR.

806 - v-map-data-api-description.md

Any VMap data. The VMap can exist as:

  • The __raw__ column of a flex table

  • Data returned from a map function such as MAPLOOKUP

  • Other database content

807 - v1beta1-deprecation.md

808 - var-overview.md

VAR is a multivariate autoregressive time series algorithm that captures the relationship between multiple time series variables over time. Unlike AR, which only considers a single variable, VAR models incorporate feedback between different variables in the model, enabling the model to analyze how variables interact across lagged time steps. For example, with two variables—atmospheric pressure and rain accumulation—a VAR model could determine whether a drop in pressure tends to result in rain at a future date.

809 - vbr-namespace.md

vbr behaves differently depending on whether the target namespace exists:

  • Exists: vbr attempts to restore or replicate the objects to the existing namespace, which must have the same shard count, shard boundaries, and node subscriptions as the source namespace. If these conditions are not met, the vbr task fails.
  • Nonexistent: vbr creates a namespace in the target database with the name specified in --target-namespace and the shard count of the source namespace, and then replicates or restores the objects to that namespace.

If no target namespace is specified, vbr attempts to restore or replicate objects to a namespace with the same name as the source namespace.

810 - vcluster-desc-inline.md

a Go library that uses a high-level REST interface to perform database operations with the Node Management Agent (NMA) and HTTPS service. The vclusterops library replaces Administration tools (admintools), a traditional command-line interface that executes administrator commands through STDIN and requires SSH keys for internal node communications. The vclusterops deployment is more efficient in containerized environments than the admintools deployment.

811 - versioning-statuses.md

  • under_review: Status assigned to newly registered models.

  • staging: Model is targeted for A/B testing against the model currently in production.

  • production: Model is in production for its specified application. Only one model can be in production for a given registered_name at one time.

  • archived: Status of models that were previously in production. Archived models can be returned to production at any time.

  • declined: Model is no longer in consideration for production.

  • unregistered: Model is removed from the versioning environment. The model does not appear in the REGISTERED_MODELS system table.

812 - vertica-user-community-link.md

Vertica User Community

813 - view-cluster-page.md

  1. On the MC Home page, click View Infrastructure to go to the Infrastructure page. This page lists all the clusters the MC is monitoring.

  2. Click any cluster shown on the Infrastructure page.

  3. In the dialog that appears, select View or Manage to open that cluster's Cluster page. (In a cloud environment, the button says "Manage" if MC was deployed from a cloud template; otherwise, the button says "View".)

814 - view-inherited-privileges-disabled.md

815 - volatile-functions_snip.md

Regardless of their arguments or environment, volatile functions can return a different result with each invocation—for example, UUID_GENERATE.
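
For example, each call to UUID_GENERATE produces a new value, so repeating the same statement returns different results:

=> SELECT UUID_GENERATE();
=> SELECT UUID_GENERATE(); -- returns a different UUID than the previous call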

816 - voltage-secure-configure-example.md

=> \x
Expanded display is on.
=> SELECT VoltageSecureConfigure(USING PARAMETERS config_dfs_path='voltage.conf',
                                 username='alice', identity='alice@example.com', store_password=false
                                ) OVER ();
-[ RECORD 1 ]-----+----------------------------------------------------------------
config_dfs_path   | voltage.conf
identity          | alice@example.com
username          | alice

817 - voltage-secure-confugure-global-example.md

=> SELECT VoltageSecureConfigureGlobal(USING PARAMETERS
                                       policy_url='https://voltage-pp-0000.example.com/policy/clientPolicy.xml',
                                       NETWORK_TIMEOUT=200)
                                       OVER ();

                           policy_url                            | allow_short_fpe | enable_file_cache | network_timeout
-----------------------------------------------------------------+-----------------+-------------------+-----------------
 https://voltage-pp-0000.example.com/policy/clientPolicy.xml     |                 |                   | 200
(1 row)

818 - voltage-secure-refresh-policy-example.md

Manually refresh the client policy across the nodes:

=> SELECT VoltageSecureRefreshPolicy() OVER ();
                                  PolicyRefresh
-------------------------------------------------------------------------------------
Successfully refreshed policy on node [v_sandbox_node0001]. Policy on other nodes
will be refreshed the next time a Voltage operation is run on them.
(1 row)

819 - vsql_snip.md

vsql is a character-based, interactive, front-end utility that lets you type SQL statements and see the results. It also provides a number of meta-commands and various shell-like features that facilitate writing scripts and automating a variety of tasks.

820 - webhook-enable-caution.md

821 - webhooks-api.md

| Endpoint                       | Description                        |
|--------------------------------|------------------------------------|
| GET webhooks                   | Returns a list of active webhooks. |
| POST webhooks/subscribe        | Creates a new webhook.             |
| DELETE webhooks/:subscriber_id | Deletes an existing webhook.       |

822 - welcome-v-sql.md

Welcome to vsql, the Vertica Analytic Database interactive terminal.
Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

=>

823 - window-frame-clause_snip.md

{ ROWS | RANGE } { BETWEEN start-point AND end-point } | start-point

start-point | end-point:

  { UNBOUNDED {PRECEDING | FOLLOWING}
    | CURRENT ROW
    | constant-value  {PRECEDING | FOLLOWING}}
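
For example, the following query (table and column names are hypothetical) uses a ROWS frame to compute a three-row moving average:

=> SELECT sale_date, amount,
          AVG(amount) OVER (ORDER BY sale_date
                            ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg
   FROM sales;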

824 - window-order-clause_snip.md

ORDER BY { expression [ ASC | DESC [ NULLS { FIRST | LAST | AUTO } ] ] }[,...]

825 - window-partition-clause-syntax.md

{ PARTITION BY expression[,...] 
  | PARTITION BEST 
  | PARTITION NODES 
  | PARTITION ROW 
  | PARTITION LEFT JOIN }

826 - windows-driver-installation.md

  1. Download the Windows client driver installer. For details on the drivers included in this installer, see Windows client driver installer.

  2. Run the installer and follow the prompts to install the drivers.

  3. Reboot your system.

827 - windows-driver-upgrade.md

  1. Download the Windows client driver installer.

  2. Run the installer and follow the prompts to upgrade the driver. The installer upgrades existing drivers in place.

  3. Reboot your system.

828 - with-grant-option.md

Allows the grantee to grant and revoke the same privileges to other users or roles. For details, see Granting privileges.
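
For example (user and table names are hypothetical):

=> GRANT SELECT ON TABLE public.orders TO alice WITH GRANT OPTION;

With this grant, alice can use SELECT on public.orders and can also grant or revoke that privilege for other users and roles.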

829 - without-with-timezone.md

Specifies whether to include a time zone with the stored value:

  • WITHOUT TIME ZONE (default): Specifies that input-string does not include a time zone. If the input string contains a time zone, Vertica ignores this qualifier and instead conforms to WITH TIME ZONE behavior.

  • WITH TIME ZONE: Specifies to convert input-string to UTC, using the UTC offset for the specified time zone. If the input string omits a time zone, Vertica uses the UTC offset of the time zone that is configured for your system.

830 - wkb.md

The Well-Known Binary (WKB) representation of the spatial object.

831 - workload-routing-param-desc.md

The name of the workload for the session. For details, see Workload routing.

832 - wos-configuration.md

833 - x-cord.md

x-coordinate or longitude, FLOAT.

834 - xg-boost-overview.md

XGBoost (eXtreme Gradient Boosting) is a popular supervised-learning algorithm used for regression and classification on large datasets. It uses sequentially built shallow decision trees to provide accurate results and a highly scalable training method that avoids overfitting.
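
For example, a minimal training call might look like the following sketch (model, table, and column names are hypothetical; see the XGB_CLASSIFIER reference for the full parameter list):

=> SELECT XGB_CLASSIFIER('xgb_churn', 'customer_train', 'churned', 'tenure, monthly_charges'
                         USING PARAMETERS max_ntree=50, max_depth=4);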

835 - xgb-subsampling-param-desc.md

col_sample_by parameters "stack" on top of each other if several are specified. That is, given a set of 24 columns, for col_sample_by_tree=0.5 and col_sample_by_node=0.5, col_sample_by_tree samples 12 columns, reducing the available, unsampled column pool to 12. col_sample_by_node then samples half of the remaining pool, so each node samples 6 columns.

This algorithm will always sample at least one column.
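
For example, the 24-column scenario above corresponds to a training call like the following sketch (model, table, and column names are hypothetical):

=> SELECT XGB_CLASSIFIER('xgb_sampled', 'wide_table', 'label', '*'
                         USING PARAMETERS exclude_columns='label',
                                          col_sample_by_tree=0.5, -- each tree samples 12 of the 24 columns
                                          col_sample_by_node=0.5); -- each node samples 6 of those 12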

836 - y-coord.md

y-coordinate or latitude, FLOAT.