Configuration parameters
Configuration parameters are settings that affect database behavior. You can use configuration parameters to enable, disable, or tune features related to different database aspects like Tuple Mover, security, Database Designer, or projections. Configuration parameters have default values, stored in the Vertica database.
You can modify certain parameters to configure your Vertica database, as described in the sections that follow.
Before you modify a database parameter, review all documentation about the parameter to determine the context under which you can change it. Some parameter changes require a database restart to take effect. The CHANGE_REQUIRES_RESTART column in the system table CONFIGURATION_PARAMETERS indicates whether a parameter requires a restart. You can also query CONFIGURATION_PARAMETERS to determine what levels (node, session, database) are valid for a given parameter.
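For example, to list the parameters that cannot take effect without a restart, you can filter on that column (a sketch, using the CONFIGURATION_PARAMETERS columns described above):

```sql
=> SELECT parameter_name, current_value, allowed_levels
   FROM configuration_parameters
   WHERE change_requires_restart = 't';
```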
1 - Managing configuration parameters: VSQL
You can configure Vertica configuration parameters at the following levels, in ascending order of precedence:
- Database
- Node
- Session
- User
Most configuration parameters can be set at the database level. You can query system table CONFIGURATION_PARAMETERS to determine at which other levels specific configuration parameters can be set. For example, the following query obtains all partitioning parameters and their allowed levels:
=> SELECT parameter_name, current_value, default_value, allowed_levels FROM configuration_parameters
WHERE parameter_name ILIKE '%partition%';
parameter_name | current_value | default_value | allowed_levels
------------------------------------+---------------+---------------+----------------
MaxPartitionCount | 1024 | 1024 | NODE, DATABASE
PartitionSortBufferSize | 67108864 | 67108864 | DATABASE
DisableAutopartition | 0 | 0 | SESSION, USER
PatternMatchingMaxPartitionMatches | 3932160 | 3932160 | NODE, DATABASE
PatternMatchingMaxPartition | 20971520 | 20971520 | NODE, DATABASE
ActivePartitionCount | 1 | 1 | NODE, DATABASE
(6 rows)
Setting and clearing configuration parameters
You change specific configuration parameters with the appropriate ALTER statements; the same statements also let you reset configuration parameters to their default values.
For example, the following ALTER statements change ActivePartitionCount at the database level from 1 to 2, and DisableAutopartition at the session level from 0 to 1:
=> ALTER DATABASE DEFAULT SET ActivePartitionCount = 2;
ALTER DATABASE
=> ALTER SESSION SET DisableAutopartition = 1;
ALTER SESSION
=> SELECT parameter_name, current_value, default_value FROM configuration_parameters
WHERE parameter_name IN ('ActivePartitionCount', 'DisableAutopartition');
parameter_name | current_value | default_value
----------------------+---------------+---------------
ActivePartitionCount | 2 | 1
DisableAutopartition | 1 | 0
(2 rows)
You can later reset the same configuration parameters to their default values:
=> ALTER DATABASE DEFAULT CLEAR ActivePartitionCount;
ALTER DATABASE
=> ALTER SESSION CLEAR DisableAutopartition;
ALTER SESSION
=> SELECT parameter_name, current_value, default_value FROM configuration_parameters
WHERE parameter_name IN ('ActivePartitionCount', 'DisableAutopartition');
parameter_name | current_value | default_value
----------------------+---------------+---------------
DisableAutopartition | 0 | 0
ActivePartitionCount | 1 | 1
(2 rows)
Caution
Vertica is designed to operate with minimal configuration changes. Be careful to set and change configuration parameters according to documented guidelines.
2 - Viewing configuration parameter values
You can view active configuration parameter values in two ways:
SHOW statements
Use the following SHOW statements to view active configuration parameters:
- SHOW CURRENT: Returns settings of active configuration parameter values. Vertica checks settings at all levels, in ascending order of precedence: database, node, session, and user. If no values are set at any level, SHOW CURRENT returns the parameter's default value.
- SHOW DATABASE: Displays configuration parameter values set for the database.
- SHOW USER: Displays configuration parameters set for the specified user, and for all users.
- SHOW SESSION: Displays configuration parameter values set for the current session.
- SHOW NODE: Displays configuration parameter values set for a node.
If a configuration parameter requires a restart to take effect, the values returned by SHOW CURRENT might differ from the values in other SHOW statements. To see which parameters require a restart, query the CONFIGURATION_PARAMETERS system table.
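For example, to check the active setting of a single parameter, and the value set at the database level (a sketch, using the ActivePartitionCount parameter shown earlier):

```sql
=> SHOW CURRENT ActivePartitionCount;
=> SHOW DATABASE DEFAULT ActivePartitionCount;
```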
System tables
You can also query system tables, such as CONFIGURATION_PARAMETERS, for configuration parameter settings.
3 - Configuration parameter categories
Use the configuration parameters in the following categories to configure Vertica. Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
3.1 - General parameters
The following parameters configure basic database operations. Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
ApplyEventsDuringSALCheck |
Boolean, specifies whether Vertica uses catalog events to filter out dropped corrupt partitions during node startup. Dropping corrupt partitions can speed node recovery.
When disabled (0), Vertica reports corrupt partitions, but takes no action. Leaving corrupt partitions in place can reset the current projection checkpoint epoch to the epoch before the corruption occurred.
This parameter has no effect on unpartitioned tables.
Default: 0
|
ApportionedFileMinimumPortionSizeKB |
Specifies the minimum portion size (in kilobytes) for use with apportioned file loads. Vertica apportions a file load across multiple nodes only if the conditions described under EnableApportionedFileLoad are met.
See also EnableApportionLoad and EnableApportionedFileLoad.
Default: 1024
|
BlockedSocketGracePeriod |
Sets how long a session socket remains blocked while awaiting client input or output for a given query. See Handling session socket blocking.
Default: None (Socket blocking can continue indefinitely.)
|
CatalogCheckpointPercent |
Specifies the threshold at which a checkpoint is created for the database catalog.
By default, this parameter is set to 50 (percent), so when transaction logs reach 50% of the size of the last checkpoint, Vertica adds a checkpoint. Each checkpoint demarcates all changes to the catalog since the last checkpoint.
Default: 50 (percent)
|
ClusterSequenceCacheMode |
Boolean, specifies whether the initiator node requests cache for other nodes in a cluster, and then sends cache to other nodes along with the execution plan.
See Distributing named sequences.
Default: 1 (enabled)
|
CompressCatalogOnDisk |
Specifies whether to compress the size of the catalog on disk (1) or not (0).
Consider enabling this parameter if the catalog disk partition is small (<50 GB) and the metadata is large (hundreds of tables, partitions, or nodes).
Default: 0
|
CompressNetworkData |
Boolean, specifies whether to compress all data sent over the internal network when enabled (set to 1). This compression speeds up network traffic at the expense of added CPU load. If the network is throttling database performance, enable compression to correct the issue.
Default: 0
|
CopyFaultTolerantExpressions |
Boolean, indicates whether COPY reports record rejections during transformations and proceeds (true), or aborts the operation if a transformation fails (false).
Default: 0 (false)
|
CopyFromVerticaWithIdentity |
Allows COPY FROM VERTICA and EXPORT TO VERTICA to load values into Identity (or Auto-increment) columns. The destination Identity column is not incremented automatically. To disable the default behavior, set this parameter to 0 (zero).
Default: 1
|
DatabaseHeartbeatInterval |
Determines the interval (in seconds) at which each node performs a health check and communicates a heartbeat. If a node does not receive a message within five times the specified interval, the node is evicted from the cluster. Setting the interval to 0 disables the feature.
See Automatic eviction of unhealthy nodes.
Default: 120
|
DefaultArrayBinarySize |
The maximum binary size, in bytes, for an unbounded collection, if a maximum size is not specified at creation time.
Default: 65000
|
DefaultTempTableLocal |
Boolean, specifies whether CREATE TEMPORARY TABLE creates a local (1) or global (0) temporary table by default.
For details, see Creating temporary tables.
Default: 0
|
DivideZeroByZeroThrowsError |
Boolean, specifies whether to return an error if a division by zero operation is requested:
- 0: Returns 0.
- 1: Returns an error.
Default: 1
|
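For example, disabling this parameter for the current session makes a zero-divided-by-zero operation return 0 instead of raising an error (a sketch):

```sql
=> ALTER SESSION SET DivideZeroByZeroThrowsError = 0;
=> SELECT 0/0;  -- returns 0 rather than an error
```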
EnableApportionedChunkingInDefaultLoadParser |
Boolean, specifies whether to enable the built-in parser for delimited files to take advantage of both apportioned load and cooperative parse for potentially better performance.
Default: 1 (enable)
|
EnableApportionedFileLoad |
Boolean, specifies whether to enable automatic apportioning across nodes of file loads using COPY FROM VERTICA. Vertica attempts to apportion the load if:
- This parameter and EnableApportionLoad are both enabled.
- The parser supports apportioning.
- The load is divisible into portion sizes of at least the value of ApportionedFileMinimumPortionSizeKB.
Setting this parameter does not guarantee that loads will be apportioned, but disabling it guarantees that they will not be.
See Distributing a load.
Default: 1 (enable)
|
EnableApportionLoad |
Boolean, specifies whether to enable automatic apportioning across nodes of data loads using COPY...WITH SOURCE. Vertica attempts to apportion the load if the source and parser both support apportioning.
Setting this parameter does not guarantee that loads will be apportioned, but disabling it guarantees that they will not be.
For details, see Distributing a load.
Default: 1 (enable)
|
EnableBetterFlexTypeGuessing |
Boolean, specifies whether to enable more accurate type guessing when assigning data types to non-string keys in a flex table __raw__ column with COMPUTE_FLEXTABLE_KEYS or COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW . If this parameter is disabled (0), Vertica uses a limited set of Vertica data type assignments. For examples, see Computing flex table keys.
Default: 1 (enable)
|
EnableCooperativeParse |
Boolean, specifies whether to implement multi-threaded parsing capabilities on a node. You can use this parameter for both delimited and fixed-width loads.
Default: 1 (enable)
|
EnableForceOuter |
Boolean, specifies whether Vertica uses a table's force_outer value to implement a join. For more information, see Controlling join inputs.
Default: 0 (forced join inputs disabled)
|
EnableMetadataMemoryTracking |
Boolean, specifies whether to enable Vertica to track memory used by database metadata in the METADATA resource pool.
Default: 1 (enable)
|
EnableResourcePoolCPUAffinity |
Boolean, specifies whether Vertica aligns queries to the resource pool of the processing CPU. When disabled (0), queries run on any CPU, regardless of the CPU_AFFINITY_SET of the resource pool.
Default: 1
|
EnableStrictTimeCasts |
Specifies whether all cast failures result in an error.
Default: 0 (disable)
|
EnableUniquenessOptimization |
Boolean, specifies whether to enable query optimization that is based on guaranteed uniqueness of column values.
Default: 1 (enable)
|
EnableWithClauseMaterialization |
Superseded by WithClauseMaterialization. |
EnableWITHTempRelReuseLimit |
Sets EE5 temp relation support for WITH materialization. Accepted values are:
- 0: Disables this feature.
- 1: Force-saves all WITH clause queries into EE5 temp relations, whether or not they are reused.
- 2: Enables this feature only when queries are reused.
Default: 2
|
ExternalTablesExceptionsLimit |
Determines the maximum number of COPY exceptions and rejections allowed when a SELECT statement references an external table. Set to -1 to remove any exceptions limit. See Querying external tables.
Default: 100
|
FailoverToStandbyAfter |
Specifies the length of time that an active standby node waits before taking the place of a failed node.
This parameter is set to an interval literal.
Default: None
|
FencedUDxMemoryLimitMB |
Sets the maximum amount of memory, in megabytes (MB), that a fenced-mode UDF can use. If a UDF attempts to allocate more memory than this limit, that attempt triggers an exception. For more information, see Fenced and unfenced modes.
Default: -1 (no limit)
|
FlexTableDataTypeGuessMultiplier |
Specifies a multiplier that the COMPUTE_FLEXTABLE_KEYS and COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW functions use when assigning a data type and column width for the flex keys table. Both functions assign each key a data type, and multiply the longest key value by this factor to estimate column width. This value is not used to calculate the width of any real columns in a flex table.
Default: 2.0
|
FlexTableRawSize |
Specifies the default column width for the __raw__ column of new flex tables, a value between 1 and 32000000, inclusive.
Default: 130000
|
ForceUDxFencedMode |
When enabled (1), forces all UDxs that support fenced mode to run in fenced mode even if their definition specified NOT FENCED.
Default: 0
|
JavaBinaryForUDx |
Sets the full path to the Java executable that Vertica uses to run Java UDxs. See Installing Java on Vertica hosts. |
JoinDefaultTupleFormat |
Specifies how to size VARCHAR column data when joining tables on those columns, and buffers accordingly, one of the following:
- fixed: Use join column metadata to size column data to a fixed length, and buffer accordingly.
- variable: Use the actual length of join column data, so buffer size varies for each join.
Default: fixed
|
KeepAliveIdleTime |
Length (in seconds) of the idle period before the first TCP keepalive probe is sent to ensure that the client is still connected. If set to 0, Vertica uses the kernel's tcp_keepalive_time parameter setting.
Default: 0
|
KeepAliveProbeCount |
Number of consecutive keepalive probes that must go unacknowledged by the client before the client connection is considered lost and closed. If set to 0, Vertica uses the kernel's tcp_keepalive_probes parameter setting.
Default: 0
|
KeepAliveProbeInterval |
Time interval (in seconds) between keepalive probes. If set to 0, Vertica uses the kernel's tcp_keepalive_intvl parameter setting.
Default: 0
|
LockTimeout |
Specifies in seconds how long a table can be locked. You can set this parameter at all levels: session, node, and database.
Default: 300
|
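Because LockTimeout can be set at the session level, you can raise it temporarily for an operation that must wait longer for a lock, then restore the default (a sketch):

```sql
=> ALTER SESSION SET LockTimeout = 600;
-- ...run the operation that needs the longer lock wait...
=> ALTER SESSION CLEAR LockTimeout;
```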
LoadSourceStatisticsLimit |
Specifies the maximum number of sources per load operation that are profiled in the LOAD_SOURCES system table. Set it to 0 to disable profiling.
Default: 256
|
MaxBundleableROSSizeKB |
Specifies the minimum size, in kilobytes, of an independent ROS file. Vertica bundles storage container ROS files below this size into a single file. Bundling improves the performance of any file-intensive operations, including backups, restores, and mergeouts.
If you set this parameter to a value of 0, Vertica bundles .fdb and .pidx files without bundling other storage container files.
Default: 1024
|
MaxClientSessions |
Determines the maximum number of client sessions that can run on a single node of the database. The default value allows for five additional administrative logins. These logins prevent DBAs from being locked out of the system if non-dbadmin users reach the login limit.
Tip
Setting this parameter to 0 prevents new client sessions from being opened while you are shutting down the database. Restore the parameter to its original setting after you restart the database. For details, see Managing Sessions.
Default: 50 user logins and 5 additional administrative logins
|
ParquetMetadataCacheSizeMB |
Size of the cache used for metadata when reading Parquet data. The cache uses local TEMP storage.
Default: 4096
|
PatternMatchingUseJit |
Boolean, specifies whether to enable just-in-time compilation (to machine code) of regular expression pattern matching functions used in queries. Enabling this parameter can usually improve pattern matching performance on large tables. The Perl Compatible Regular Expressions (PCRE) pattern-match library evaluates regular expressions. Restart the database for this parameter to take effect.
See also Regular expression functions.
Default: 1 (enable)
|
PatternMatchStackAllocator |
Boolean, specifies whether to override the stack memory allocator for the pattern-match library. The Perl Compatible Regular Expressions (PCRE) pattern-match library evaluates regular expressions. Restart the database for this parameter to take effect.
See also Regular expression functions.
Default: 1 (enable override)
|
TerraceRoutingFactor |
Specifies whether to enable or disable terrace routing on any Enterprise Mode large cluster that implements rack-based fault groups.
- To enable, set as follows, where numRackNodes is the number of nodes in a rack and numRacks is the number of racks in the cluster:
- To disable, set to a value so large that terrace routing cannot be enabled for the largest clusters, for example, 1000.
For details, see Terrace routing.
Default: 2
|
TransactionIsolationLevel |
Changes the isolation level for the database. After modification, Vertica uses the new transaction level for every new session. Existing sessions and their transactions continue to use the original isolation level.
See also Change transaction isolation levels.
Default: READ COMMITTED
|
TransactionMode |
Specifies whether transactions are in read/write or read-only mode. Read/write is the default. Existing sessions and their transactions continue to use the original mode.
Default: READ WRITE
|
UDxFencedBlockTimeout |
Specifies the number of seconds to wait for output before aborting a UDx running in fenced mode. If the server aborts a UDx for this reason, it produces an error message similar to "ERROR 3399: Failure in UDx RPC call: timed out in receiving a UDx message". If you see this error frequently, you can increase this limit. UDxs running in fenced mode do not run in the server process, so increasing this value does not impede server performance. See Fenced and unfenced modes.
Default: 60
|
UseLocalTzForParquetTimestampConversion |
Boolean, specifies whether to do timezone conversion when reading Parquet files. Hive version 1.2.1 introduced an option to localize timezones when writing Parquet files. Previously it wrote them in UTC and Vertica adjusted the value when reading the files.
Set to 0 if Hive already adjusted the timezones.
Default: 1 (enable conversion)
|
UseServerIdentityOverUserIdentity |
Boolean, specifies whether to ignore user-supplied credentials for non-Linux file systems and always use a USER storage location to govern access to data. See Creating a Storage Location for USER Access.
Default: 0 (disable)
|
WithClauseMaterialization |
Boolean, specifies whether to enable materialization of WITH clause results. When materialization is enabled (1), Vertica evaluates each WITH clause once and stores results in a temporary table.
Default: 0 (disable)
|
WithClauseRecursionLimit |
Specifies the maximum number of times a WITH RECURSIVE clause iterates over the content of its own result set before it exits. For details, see WITH clause recursion.
Important
Be careful to set WithClauseRecursionLimit only as high as needed to traverse the deepest hierarchies. Vertica sets no limit on this parameter; however, a high value can incur considerable overhead that adversely affects performance and exhausts system resources.
If a high recursion count is required, then consider enabling materialization. For details, see WITH RECURSIVE Materialization.
Default: 8
|
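For example, to raise the limit at the database level for a hierarchy deeper than eight levels, and verify the change (a sketch, reusing the CONFIGURATION_PARAMETERS query pattern shown earlier):

```sql
=> ALTER DATABASE DEFAULT SET WithClauseRecursionLimit = 64;
=> SELECT parameter_name, current_value, default_value
   FROM configuration_parameters
   WHERE parameter_name = 'WithClauseRecursionLimit';
```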
3.2 - Eon Mode parameters
The following parameters configure how the database operates when running in Eon Mode.
The following parameters configure how the database operates when running in Eon Mode. Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
BackgroundDepotWarming |
Specifies background depot warming behavior:
- 1: The depot loads objects while it is warming, and continues to do so in the background after the node becomes active and starts executing queries.
- 0: Node activation is deferred until the depot fetches and loads all queued objects.
For details, see Depot Warming.
Default: 1
|
CatalogSyncInterval |
Specifies in minutes how often the transaction log sync service syncs metadata to communal storage. If you change this setting, Vertica restarts the interval count.
Default: 5
|
DelayForDeletes |
Specifies in hours how long to wait before deleting a file from communal storage. Vertica first deletes a file from the depot. After the specified time interval, the delete also occurs in communal storage.
Default: 0. Deletes the file from communal storage as soon as it is not in use by shard subscribers.
|
DepotOperationsForQuery |
Specifies behavior when the depot does not contain queried file data, one of the following:
- ALL (default): Fetch file data from communal storage, if necessary displacing existing files by evicting them from the depot.
- FETCHES: Fetch file data from communal storage only if space is available; otherwise, read the queried data directly from communal storage.
- NONE: Do not fetch file data to the depot; read the queried data directly from communal storage.
You can also specify query-level behavior with the hint DEPOT_FETCH.
|
ECSMode |
String parameter that sets the strategy Vertica uses when dividing the data in a shard among subscribing nodes during an ECS-enabled query, one of the following:
- AUTO: Optimizer automatically determines the strategy to use.
- COMPUTE_OPTIMIZED: Force use of the compute-optimized strategy.
- IO_OPTIMIZED: Force use of the I/O-optimized strategy.
For details, see Manually choosing an ECS strategy.
Default: AUTO
|
ElasticKSafety |
Boolean parameter that controls whether Vertica adjusts shard subscriptions due to the loss of a primary node:
- 1: When a primary node is lost, Vertica subscribes other primary nodes to the down node's shard subscriptions. This action helps reduce the chances of the database going into read-only mode due to loss of shard coverage.
- 0: Vertica does not change shard subscriptions in reaction to the loss of a primary node.
Default: 1
For details, see Maintaining Shard Coverage.
|
EnableDepotWarmingFromPeers |
Boolean parameter, specifies whether Vertica warms a node depot while the node is starting up and not ready to process queries.
For details, see Depot Warming.
Default: 0
|
FileDeletionServiceInterval |
Specifies in seconds the interval between each execution of the reaper cleaner service task.
Default: 60 seconds
|
PreFetchPinnedObjectsToDepotAtStartup |
If enabled (set to 1), a warming depot fetches objects that are pinned on its subcluster. For details, see Depot Warming.
Default: 0
|
ReaperCleanUpTimeoutAtShutdown |
Specifies in seconds how long Vertica waits for the reaper to delete files from communal storage before shutting down. If set to a negative value, Vertica shuts down without waiting for the reaper.
Note
The reaper is a service task that deletes disk files.
Default: 300
|
StorageMergeMaxTempCacheMB |
The size of temp space allocated per query to the StorageMerge operator for caching the data of S3 storage containers.
Note
The actual temp space that is allocated is the lesser of two settings.
For details, see Local caching of storage containers.
|
UseCommunalStorageForBatchDepotWarming |
Boolean parameter, specifies where a node retrieves data when warming its depot.
Default: 1
Important
This parameter is for internal use only. Do not change it unless directed to do so by Vertica support.
|
UseDepotForReads |
Boolean parameter, specifies whether Vertica accesses the depot to answer queries, or accesses only communal storage:
- 1: Vertica first searches the depot for the queried data; if not there, Vertica fetches the data from communal storage for this and future queries.
- 0: Vertica bypasses the depot and always obtains queried data from communal storage.
Note
Enable depot reads to improve query performance and support K-safety.
Default: 1
|
UseDepotForWrites |
Boolean parameter, specifies whether Vertica writes loaded data to the depot and then uploads files to communal storage:
- 1: Write loaded data to the depot, then upload files to communal storage.
- 0: Bypass the depot and always write directly to communal storage.
Default: 1
|
UsePeerToPeerDataTransfer |
Boolean parameter, specifies whether Vertica pushes loaded data to other shard subscribers.
Note
Setting to 1 helps improve performance when a node is down.
Default: 0
Important
This parameter is for internal use only. Do not change it unless directed to do so by Vertica support.
|
3.3 - S3 parameters
Use the following parameters to configure reading from S3 file systems and on-premises storage with S3-compatible APIs, such as Pure Storage, using COPY FROM.
Use the following parameters to configure reading from S3 file systems and on-premises storage with S3-compatible APIs, such as Pure Storage, using COPY FROM. For more information about reading data from S3, see S3 Object Store.
For the parameters to control the AWS Library (UDSource), see Configure the Vertica library for Amazon Web Services.
Note
When using AWS, using ALTER SESSION to change these parameters also changes the corresponding parameters for the AWS Library (UDSource).
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
AWSAuth |
ID and secret key for authentication. For extra security, do not store credentials in the database; use ALTER SESSION...SET PARAMETER to set this value for the current session only. If you use a shared credential, you can set it in the database with ALTER DATABASE...SET PARAMETER. For example:
=> ALTER SESSION SET AWSAuth='ID:secret';
AWS calls these AccessKeyID and SecretAccessKey.
To use admintools create_db or revive_db for Eon Mode on-premises, create a configuration file called auth_params.conf with these settings:
AWSAuth = key:secret
AWSEndpoint = IP:port
|
AWSCAFile |
File name of the TLS server certificate bundle to use. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter.
=> ALTER DATABASE DEFAULT SET AWSCAFile = '/etc/ssl/ca-bundle.pem';
Default: system-dependent
|
AWSCAPath |
Path Vertica uses to look up TLS server certificates.
If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter.
=> ALTER DATABASE DEFAULT SET AWSCAPath = '/etc/ssl/';
Default: system-dependent
|
AWSEnableHttps |
Boolean, specifies whether to use the HTTPS protocol when connecting to S3, can be set only at the database level with ALTER DATABASE...SET PARAMETER. If you choose not to use TLS, this parameter must be set to 0.
Default: 1 (enabled)
|
AWSEndpoint |
Endpoint to use when interpreting S3 URLs, set as follows.
Important
Do not include http(s):// for AWS endpoints.
- AWS: the S3 endpoint host name. If you set AWSEndpoint to a FIPS-compliant S3 endpoint, you must also enable S3EnableVirtualAddressing:
```
AWSEndpoint = s3-fips.dualstack.us-east-1.amazonaws.com
S3EnableVirtualAddressing = 1
```
- On-premises/Pure: IP address of the Pure Storage server. If using admintools
create_db or revive_db , create configuration file auth_params.conf and include these settings:
awsauth = key:secret
awsendpoint = IP:port
- When AWSEndpoint is not set, the default behavior is to use virtual-hosted request URLs.
Default: s3.amazonaws.com
|
AWSLogLevel |
Log level, one of the following:
- OFF
- FATAL
- ERROR
- WARN
- INFO
- DEBUG
- TRACE
Default: ERROR
|
AWSRegion |
AWS region containing the S3 bucket from which to read files. This parameter can only be configured with one region at a time. If you need to access buckets in multiple regions, change the parameter each time you change regions.
If you do not set the correct region, you might experience a delay before queries fail because Vertica retries several times before giving up.
Default: us-east-1
|
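For example, to read from a bucket in a different region for the current session only (a sketch; the region name is illustrative):

```sql
=> ALTER SESSION SET AWSRegion = 'eu-west-1';
```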
AWSSessionToken |
Temporary security token generated by running the get-session-token command, which generates temporary credentials you can use to configure multi-factor authentication.
Note
If you use session tokens, you must set all parameters at the session level, even if some of them are set at the database level. Use ALTER SESSION to set session parameters.
|
S3BucketConfig |
Contains S3 bucket configuration information as a JSON object with the following properties. Each property has an equivalent parameter (shown in parentheses). If both the property in S3BucketConfig and the equivalent S3 parameter are set, the S3BucketConfig property takes precedence.
Properties:
- bucket: The name of the bucket
- region: The name of the region (AWSRegion)
- protocol: The connection scheme, http or https
- endpoint: The endpoint URL or IP address (AWSEndpoint)
- enableVirtualAddressing: Whether to rewrite the S3 URL to use a virtual hosted path (S3EnableVirtualAddressing)
- requesterPays: Boolean, specifies whether the requester (instead of the bucket owner) pays the cost of accessing data on the bucket; must be set in order to access S3 buckets configured as Requester Pays buckets. By setting this property to true, you are accepting the charges for accessing data. If not specified, the default value is false.
The configuration properties for a given bucket may differ based on its type. For example, the following S3BucketConfig is for an AWS bucket AWSBucket and a Pure Storage bucket PureStorageBucket . AWSBucket doesn't specify an endpoint, so Vertica uses the AWSEndpoint, which defaults to s3.amazonaws.com :
ALTER DATABASE DEFAULT SET S3BucketConfig=
'[
{
"bucket": "AWSBucket",
"region": "us-east-2",
"protocol": "https",
"requesterPays": true
},
{
"bucket": "PureStorageBucket",
"endpoint": "pure.mycorp.net:1234",
"protocol": "http",
"enableVirtualAddressing": false
}
]';
|
S3BucketCredentials |
Contains credentials for accessing an S3 bucket. Each property in S3BucketCredentials has an equivalent parameter (shown in parentheses). When set, S3BucketCredentials takes precedence over both AWSAuth and AWSSessionToken.
Providing credentials for more than one bucket authenticates to them simultaneously, allowing you to perform cross-endpoint joins, export from one bucket to another, etc.
Properties:
- bucket: The name of the bucket
- accessKey: The access key for the bucket (the ID in AWSAuth)
- secretAccessKey: The secret access key for the bucket (the secret in AWSAuth)
- sessionToken: The session token, only used when S3BucketCredentials is set at the session level (AWSSessionToken)
For example, the following S3BucketCredentials is for an AWS bucket AWSBucket and a Pure Storage bucket PureStorageBucket and sets all possible properties:
ALTER SESSION SET S3BucketCredentials='
[
{
"bucket": "AWSBucket",
"accessKey": "<AK0>",
"secretAccessKey": "<SAK0>",
"sessionToken": "1234567890"
},
{
"bucket": "PureStorageBucket",
"accessKey": "<AK1>",
"secretAccessKey": "<SAK1>"
}
]';
This parameter is only visible to the superuser. Users can set this parameter at the session level with ALTER SESSION.
|
S3EnableVirtualAddressing |
Boolean, specifies whether to rewrite S3 URLs to use virtual-hosted paths. For example, if you use AWS, the S3 URLs change to bucketname.s3.amazonaws.com instead of s3.amazonaws.com/bucketname. This configuration setting takes effect only when you have specified a value for AWSEndpoint.
If you set AWSEndpoint to a FIPS-compliant S3 Endpoint, you must enable S3EnableVirtualAddressing in auth_params.conf:
AWSEndpoint = s3-fips.dualstack.us-east-1.amazonaws.com
S3EnableVirtualAddressing = 1
The value of this parameter does not affect how you specify S3 paths.
Default: 0 (disabled)
Note
As of September 30, 2020, AWS requires virtual address paths for newly created buckets.
|
S3RequesterPays |
Boolean, specifies whether the requester (instead of the bucket owner) pays the cost of accessing data on the bucket. When true, the bucket owner is responsible only for the cost of storing the data, rather than all costs associated with the bucket. This parameter must be set to true to access S3 buckets configured as Requester Pays buckets; by setting it, you accept the charges for accessing data. If not specified, the default value is false. |
AWSStreamingConnectionPercentage |
Controls the number of connections to the communal storage that Vertica uses for streaming reads. In a cloud environment, this setting helps prevent streaming data from communal storage using up all available file handles. It leaves some file handles available for other communal storage operations.
Due to the low latency of on-premises object stores, this option is unnecessary for an Eon Mode database that uses on-premises communal storage, such as Pure Storage and MinIO. In this case, disable the parameter by setting it to 0.
|
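For example, to accept requester-pays charges for all S3 access and disable streaming-connection throttling for an on-premises object store, you might combine the two parameters above as follows; this is a sketch, and you should adjust the values for your own deployment:
=> ALTER DATABASE DEFAULT SET S3RequesterPays = 1;
=> ALTER DATABASE DEFAULT SET AWSStreamingConnectionPercentage = 0;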
3.4 - AWS library S3-Compatible user-defined session parameters
The AWS library is deprecated.
Deprecated
The AWS library is deprecated. To export delimited data to S3 or any other destination, use
EXPORT TO DELIMITED.
Use these parameters to configure the Vertica library for all S3-compatible file systems. You use this library to export data from Vertica to S3. All parameters listed are case-sensitive.
Note
While the name of this library and its parameters specify AWS, you can use this library to configure all S3-compatible file systems, such as Pure Storage.
Using ALTER SESSION to change the S3 configuration parameters described in S3 parameters also changes the corresponding parameters for this library. The reverse is not true: setting these parameters sets the values only for the library.
Parameter |
Description |
aws_ca_bundle |
The path Vertica uses when looking for an SSL server certificate bundle. On SUSE Linux, you must set a value. Setting the AWSCAFile configuration parameter for a session also sets this UDParameter.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_ca_bundle='/home/user/ssl_folder/ca_bundle';
Default: system dependent
|
aws_ca_path |
The path Vertica uses when looking for SSL server certificates. On SUSE Linux, you must set a value. Setting the AWSCAPath configuration parameter for a session also sets this UDParameter.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_ca_path='/home/user/ssl_folder';
Default: system dependent
|
aws_endpoint |
The endpoint to use when interpreting S3 URLs. For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_endpoint='localhost:8080';
Note
The AWS endpoint must include only the host name or IP address and port number; do not include the http(s):// prefix.
Default: s3.amazonaws.com
|
aws_id |
Your access key ID. Setting the AWSAuth configuration parameter for a session also sets this UDParameter.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_id='aws-id';
|
aws_max_recv_speed |
The maximum transfer speed, in bytes per second, when receiving data from S3. For example, to set a maximum receive speed of 100KB/s:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_max_recv_speed=102400;
|
aws_max_send_speed |
The maximum transfer speed, in bytes per second, when sending data to S3. For example, to set a maximum send speed of 1KB/s:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_max_send_speed=1024;
|
aws_proxy |
A string value that sets an HTTP/HTTPS proxy for the library.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_proxy='192.168.1.1:8080';
|
aws_region |
The AWS region containing your S3 bucket. aws_region can be configured with only one region at a time. If you need to access buckets in multiple regions, reset the parameter each time you change regions.
Setting the AWSRegion configuration parameter for a session also sets this UDParameter.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_region='region-id';
Default: us-east-1
|
aws_secret |
Your secret access key. Setting the AWSAuth configuration parameter for a session also sets this UDParameter.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_secret='aws-key';
|
aws_session_token |
The temporary security token generated by running the AWS STS command get-session-token. This command generates temporary credentials that you can use to implement multi-factor authentication for security purposes. For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_session_token='session-token';
|
aws_verbose |
When enabled, logs libcurl debug messages to /opt/vertica/packages/AWS/logs.
For example:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_verbose=1;
Default: false
|
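Taken together, a minimal session setup for exporting to S3 with this library might look like the following sketch; the credentials and region shown are placeholders, so substitute your own:
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_id='your-access-key-id';
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_secret='your-secret-key';
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_region='us-east-1';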
See also
3.5 - Google Cloud Storage parameters
Use the following parameters to configure reading from Google Cloud Storage (GCS) using COPY FROM.
Use the following parameters to configure reading from Google Cloud Storage (GCS) using COPY FROM. For more information about reading data from GCS, see Specifying where to load data from.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
GCSAuth |
An ID and secret key to authenticate to GCS. You can set this parameter globally and for the current session with ALTER DATABASE...SET PARAMETER and ALTER SESSION...SET PARAMETER, respectively. For extra security, do not store credentials in the database; instead, set them for the current session with ALTER SESSION. For example:
=> ALTER SESSION SET GCSAuth='ID:secret';
If you use a shared credential, set it in the database with ALTER DATABASE.
|
GCSEnableHttps |
Boolean, specifies whether to use the HTTPS protocol when connecting to GCS. This parameter can be set only at the database level with ALTER DATABASE...SET PARAMETER.
Default: 1 (enabled)
|
GCSEndpoint |
The connection endpoint address.
Default: storage.googleapis.com
|
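For example, a minimal GCS setup might set the endpoint for the database and credentials for the current session; this is a sketch, and the ID and secret shown are placeholders:
=> ALTER DATABASE DEFAULT SET GCSEndpoint = 'storage.googleapis.com';
=> ALTER SESSION SET GCSAuth = 'myid:mysecret';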
3.6 - Azure parameters
Use the following parameters to configure reading from Azure blob storage.
Use the following parameters to configure reading from Azure blob storage. For more information about reading data from Azure, see Azure Blob Storage object store.
Query the
CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AzureStorageCredentials
- Collection of JSON objects, each of which specifies connection credentials for one endpoint. This parameter takes precedence over Azure managed identities.
The collection must contain at least one object and may contain more. Each object must specify at least one of accountName or blobEndpoint, and at least one of accountKey or sharedAccessSignature.
accountName : If not specified, uses the label of blobEndpoint.
blobEndpoint : Host name with optional port (host:port). If not specified, uses account.blob.core.windows.net.
accountKey : Access key for the account or endpoint.
sharedAccessSignature : Access token for finer-grained access control, if being used by the Azure endpoint.
- AzureStorageEndpointConfig
- Collection of JSON objects, each of which specifies configuration elements for one endpoint. Each object must specify at least one of accountName or blobEndpoint.
accountName : If not specified, uses the label of blobEndpoint.
blobEndpoint : Host name with optional port (host:port). If not specified, uses account.blob.core.windows.net.
protocol : HTTPS (default) or HTTP.
isMultiAccountEndpoint : true if the endpoint supports multiple accounts, false otherwise (default is false). To use multiple-account access, you must include the account name in the URI. If a URI path contains an account, this value is assumed to be true unless explicitly set to false.
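For example, the following sketch sets session credentials and database-level endpoint configuration for a hypothetical account myaccount; the account name and key are placeholders:
=> ALTER SESSION SET AzureStorageCredentials =
   '[{"accountName": "myaccount", "accountKey": "<key>"}]';
=> ALTER DATABASE DEFAULT SET AzureStorageEndpointConfig =
   '[{"accountName": "myaccount", "protocol": "HTTPS"}]';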
3.7 - Apache Hadoop parameters
The following table describes general parameters for configuring integration with Apache Hadoop.
The following table describes general parameters for configuring integration with Apache Hadoop. See Apache Hadoop integration for more information.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
EnableHDFSBlockInfoCache |
Boolean, whether to distribute block location metadata collected during planning on the initiator to all database nodes for execution. Distributing this metadata reduces name node accesses (and thus load), but can somewhat degrade database performance in deployments where the name node isn't contended, because the data must be serialized and distributed. Enable distribution if protecting the name node is more important than query performance; usually this applies to large HDFS clusters where name node contention is already an issue.
Default: 0 (disabled)
|
HadoopConfDir |
Directory path containing the XML configuration files copied from Hadoop. The same path must be valid on every Vertica node. You can use the VERIFY_HADOOP_CONF_DIR meta-function to test that the value is set correctly. Setting this parameter is required to read data from HDFS.
For all Vertica users, the files are accessed by the Linux user under which the Vertica server process runs.
When you set this parameter, previously-cached configuration information is flushed.
You can set this parameter at the session level. Doing so overrides the database value; it does not append to it. For example:
=> ALTER SESSION SET HadoopConfDir='/test/conf:/hadoop/hcat/conf';
To append, get the current value and include it on the new path after your additions. Setting this parameter at the session level does not change how the files are accessed.
Default: obtained from environment if possible
|
HadoopFSAuthentication |
How (or whether) to use Kerberos authentication with HDFS. By default, if KerberosKeytabFile is set, Vertica uses that credential for both Vertica and HDFS. Usually this is the desired behavior. However, if you are using a Kerberized Vertica cluster with a non-Kerberized HDFS cluster, set this parameter to "none" to indicate that Vertica should not use the Vertica Kerberos credential to access HDFS.
Default: "keytab" if KerberosKeytabFile is set, otherwise "none"
|
HadoopFSBlockSizeBytes |
Block size to write to HDFS. Larger files are divided into blocks of this size.
Default: 64MB
|
HadoopFSNNOperationRetryTimeout |
Number of seconds a metadata operation (such as list directory) waits for a response before failing. Accepts float values for millisecond precision.
Default: 6 seconds
|
HadoopFSReadRetryTimeout |
Number of seconds a read operation waits before failing. Accepts float values for millisecond precision. If you are confident that your file system will fail more quickly, you can improve performance by lowering this value.
Default: 180 seconds
|
HadoopFSReplication |
Number of replicas HDFS makes. This is independent of the replication that Vertica does to provide K-safety. Do not change this setting unless directed otherwise by Vertica support.
Default: 3
|
HadoopFSRetryWaitInterval |
Initial number of seconds to wait before retrying read, write, and metadata operations. Accepts float values for millisecond precision. The retry interval increases exponentially with every retry.
Default: 3 seconds
|
HadoopFSTokenRefreshFrequency |
How often, in seconds, to refresh the Hadoop tokens used to hold Kerberos tickets (see Token expiration).
Default: 0 (refresh when token expires)
|
HadoopFSWriteRetryTimeout |
Number of seconds a write operation waits before failing. Accepts float values for millisecond precision. If you are confident that your file system will fail more quickly, you can improve performance by lowering this value.
Default: 180 seconds
|
HadoopImpersonationConfig |
Session parameter specifying the delegation token or Hadoop user for HDFS access. See HadoopImpersonationConfig format for information about the value of this parameter and Proxy users and delegation tokens for more general context. |
HDFSUseWebHDFS |
Boolean, whether URLs in the hdfs scheme use WebHDFS instead of LibHDFS++ to access data.
Deprecated
This parameter is deprecated because LibHDFS++ is deprecated. In the future, Vertica will use WebHDFS for all hdfs URLs.
Default: 0 (disabled)
|
WebhdfsClientCertConf |
mTLS configurations for accessing one or more WebHDFS servers. The value is a JSON string; each member identifies a server by either nameservice or authority (host:port), plus certName, the name of the TLS certificate to use.
nameservice and authority are mutually exclusive.
For example:
=> ALTER SESSION SET WebhdfsClientCertConf =
'[{"authority" : "my.authority.com:50070", "certName" : "myCert"},
{"nameservice" : "prod", "certName" : "prodCert"}]';
|
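For example, a typical HDFS setup sets HadoopConfDir at the database level and then verifies it with the VERIFY_HADOOP_CONF_DIR meta-function; the path shown here is hypothetical:
=> ALTER DATABASE DEFAULT SET HadoopConfDir = '/etc/hadoop/conf';
=> SELECT VERIFY_HADOOP_CONF_DIR();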
HCatalog Connector parameters
The following table describes the parameters for configuring the HCatalog Connector. See Using the HCatalog Connector for more information.
Note
You can override HCatalog configuration parameters when you create an HCatalog schema with
CREATE HCATALOG SCHEMA.
Parameter |
Description |
EnableHCatImpersonation |
Boolean, whether the HCatalog Connector uses (impersonates) the current Vertica user when accessing Hive. If impersonation is enabled, the HCatalog Connector uses the Kerberos credentials of the logged-in Vertica user to access Hive data. Disable impersonation if you are using an authorization service to manage access without also granting users access to the underlying files. For more information, see Configuring security.
Default: 1 (enabled)
|
HCatalogConnectorUseHiveServer2 |
Boolean, whether Vertica internally uses HiveServer2 instead of WebHCat to get metadata from Hive.
Default: 1 (enabled)
|
HCatalogConnectorUseLibHDFSPP |
Boolean, whether the HCatalog Connector uses the hdfs scheme instead of webhdfs.
Deprecated
Vertica uses the hdfs scheme by default. To use webhdfs , set the HDFSUseWebHDFS parameter.
Default: 1 (enabled)
|
HCatConnectionTimeout |
The number of seconds the HCatalog Connector waits for a successful connection to the HiveServer2 (or WebHCat) server before returning a timeout error.
Default: 0 (Wait indefinitely)
|
HCatSlowTransferLimit |
Lowest transfer speed (in bytes per second) that the HCatalog Connector allows when retrieving data from the HiveServer2 (or WebHCat) server. If the data transfer rate from the server to Vertica stays below this threshold for the number of seconds specified by the HCatSlowTransferTime parameter, the HCatalog Connector cancels the query and closes the connection.
Default: 65536
|
HCatSlowTransferTime |
Number of seconds the HCatalog Connector waits before testing whether the data transfer from the server is too slow. See the HCatSlowTransferLimit parameter.
Default: 60
|
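For example, to make the connector tolerate slower Hive transfers, you might lower the speed threshold and raise the grace period; the values below are illustrative only:
=> ALTER DATABASE DEFAULT SET HCatSlowTransferLimit = 32768;
=> ALTER DATABASE DEFAULT SET HCatSlowTransferTime = 120;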
3.8 - Security parameters
Use these client authentication configuration parameters and general security parameters to configure TLS.
Use these client authentication configuration parameters and general security parameters to configure TLS.
Query the
CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Database parameters
Parameter |
Description |
DataSSLParams |
This parameter has been deprecated. Use the data_channel TLS CONFIGURATION instead.
Enables encryption using SSL on the data channel. The value of this parameter is a comma-separated list of the following:
-
An SSL certificate (chainable)
-
The corresponding SSL private key
-
The SSL CA (Certificate Authority) certificate.
You should set EncryptSpreadComm before setting this parameter.
In the following example, the SSL Certificate contains two certificates, where the certificate for the non-root CA verifies the certificate for the cluster. This is called an SSL Certificate Chain.
=> ALTER DATABASE DEFAULT SET PARAMETER DataSSLParams =
'----BEGIN CERTIFICATE-----<certificate for Cluster>-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----<certificate for non-root CA>-----END CERTIFICATE-----,
-----BEGIN RSA PRIVATE KEY-----<private key for Cluster>-----END RSA PRIVATE KEY-----,
-----BEGIN CERTIFICATE-----<certificate for public CA>-----END CERTIFICATE-----';
|
DefaultIdleSessionTimeout |
Indicates a default session timeout value for all users where
IDLESESSIONTIMEOUT is not set. For example:
=> ALTER DATABASE DEFAULT SET defaultidlesessiontimeout = '300 secs';
|
DHParams |
String, a Diffie-Hellman group of at least 2048 bits in the form:
-----BEGIN DH PARAMETERS-----...-----END DH PARAMETERS-----
You can generate your own or use the pre-calculated Modular Exponential (MODP) Diffie-Hellman groups specified in RFC 3526.
Changes to this parameter do not take effect until you restart the database.
Default: RFC 3526 2048-bit MODP Group 14:
-----BEGIN DH PARAMETERS-----MIIBCAKCAQEA///////////
JD9qiIWjCNMTGYouA3BzRKQJOCIpnzHQCC76mOxObIlFKCHmONAT
d75UZs806QxswKwpt8l8UN0/hNW1tUcJF5IW1dmJefsb0TELppjf
tawv/XLb0Brft7jhr+1qJn6WunyQRfEsf5kkoZlHs5Fs9wgB8uKF
jvwWY2kg2HFXTmmkWP6j9JM9fg2VdI9yjrZYcYvNWIIVSu57VKQd
wlpZtZww1Tkq8mATxdGwIyhghfDKQXkYuNs474553LBgOhgObJ4O
i7Aeij7XFXfBvTFLJ3ivL9pVYFxg5lUl86pVq5RXSJhiY+gUQFXK
OWoqsqmj//////////wIBAg==-----END DH PARAMETERS-----
|
DoUserSpecificFilteringInSysTables |
Boolean, specifies whether a non-superuser can view details of another user:
Default: 0
|
EnableAllRolesOnLogin |
Boolean, specifies whether to automatically enable all roles granted to a user on login:
-
0: Do not automatically enable roles
-
1: Automatically enable roles. With this setting, users do not need to run SET ROLE.
Default: 0 (disable)
|
EnabledCipherSuites |
Specifies which SSL cipher suites to use for secure client-server communication. Changes to this parameter apply only to new connections.
Default: Vertica uses the Microsoft Schannel default cipher suites. For more information, see the Schannel documentation.
|
EncryptSpreadComm |
Enables Spread encryption on the control channel, set to one of the following strings:
-
vertica : Specifies that Vertica generates the Spread encryption key for the database cluster.
-
aws-kms|key-name, where key-name is a named key in the AWS Key Management Service (KMS). On database restart, Vertica fetches the named key from the KMS instead of generating its own.
If the parameter is empty, Spread communication is unencrypted. In general, you should enable this parameter before modifying other security parameters.
Enabling this parameter requires a database restart.
|
GlobalHeirUsername |
A string that specifies which user inherits objects after their owners are dropped. This setting ensures preservation of data that would otherwise be lost.
Set this parameter to one of the following string values:
-
Empty string: Objects of dropped users are removed from the database.
-
username : Reassigns objects of dropped users to username . If username does not exist, Vertica creates that user and sets GlobalHeirUsername to it.
-
<auto> : Reassigns objects of dropped LDAP users to user dbadmin .
Note
Be sure to include the angle brackets < >.
For more information about usage, see Examples.
Default: <auto>
|
ImportExportTLSMode |
When using CONNECT TO VERTICA to connect to another Vertica cluster for import or export, specifies the degree of stringency for using TLS. Possible values are:
-
PREFER : Try TLS but fall back to plaintext if TLS fails.
-
REQUIRE : Use TLS and fail if the server does not support TLS.
-
VERIFY_CA : Require TLS (as with REQUIRE), and also validate the other server's certificate using the CA specified by the "server" TLS CONFIGURATION's CA certificates (in this case, "ca_cert" and "ica_cert"):
=> SELECT name, certificate, ca_certificate, mode FROM tls_configurations WHERE name = 'server';
name | certificate | ca_certificate | mode
--------+------------------+--------------------+-----------
server | server_cert | ca_cert,ica_cert | VERIFY_CA
(1 row)
-
VERIFY_FULL : Require TLS and validate the certificate (as with VERIFY_CA), and also validate the server certificate's hostname.
-
REQUIRE_FORCE , VERIFY_CA_FORCE , and VERIFY_FULL_FORCE : Same behavior as REQUIRE , VERIFY_CA , and VERIFY_FULL , respectively, and cannot be overridden by CONNECT TO VERTICA.
Default: PREFER
|
PasswordLockTimeUnit |
The time unit for which an account is locked by PASSWORD_LOCK_TIME after FAILED_LOGIN_ATTEMPTS, one of the following:
-
'd' : days (default)
-
'h' : hours
-
'm' : minutes
-
's' : seconds
For example, to configure the default profile to lock user accounts for 30 minutes after three unsuccessful login attempts:
=> ALTER DATABASE DEFAULT SET PasswordLockTimeUnit = 'm';
=> ALTER PROFILE DEFAULT LIMIT PASSWORD_LOCK_TIME 30;
|
RequireFIPS |
Boolean, specifies whether the FIPS mode is enabled:
On startup, Vertica automatically sets this parameter from the contents of the file crypto.fips_enabled . You cannot modify this parameter.
For details, see FIPS compliance for the Vertica server.
Default: 0
|
SecurityAlgorithm |
Sets the algorithm for the function that hash authentication uses, one of the following:
For example:
=> ALTER DATABASE DEFAULT SET SecurityAlgorithm = 'SHA512';
Default: SHA512
|
SystemCABundlePath |
The absolute path to a certificate bundle of trusted CAs. This CA bundle is used when establishing TLS connections to external services such as AWS or Azure through their respective SDKs and libcurl. The CA bundle file must be in the same location on all nodes.
If this parameter is empty, Vertica searches the "standard" paths for the CA bundles, which differs between distributions:
- Red Hat-based:
/etc/pki/tls/certs/ca-bundle.crt
- Debian-based:
/etc/ssl/certs/ca-certificates.crt
- SUSE:
/var/lib/ca-certificates/ca-bundle.pem
Example:
=> ALTER DATABASE DEFAULT SET SystemCABundlePath = 'path/to/ca_bundle.pem';
Default: Empty
|
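For example, the following sketch enables all granted roles on login and sets a default idle-session timeout; the timeout value is illustrative:
=> ALTER DATABASE DEFAULT SET EnableAllRolesOnLogin = 1;
=> ALTER DATABASE DEFAULT SET DefaultIdleSessionTimeout = '300 secs';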
TLS CONFIGURATION parameters
To set your Vertica database's TLSMode, private key, server certificate, and CA certificate(s), see ALTER TLS CONFIGURATION. In versions prior to 11.0.0, these parameters were known as EnableSSL, SSLPrivateKey, SSLCertificate, and SSLCA, respectively.
Examples
Set the database parameter GlobalHeirUsername
:
=> \du
List of users
User name | Is Superuser
-----------+--------------
Joe | f
SuzyQ | f
dbadmin | t
(3 rows)
=> ALTER DATABASE DEFAULT SET PARAMETER GlobalHeirUsername='SuzyQ';
ALTER DATABASE
=> \c - Joe
You are now connected as user "Joe".
=> CREATE TABLE t1 (a int);
CREATE TABLE
=> \c
You are now connected as user "dbadmin".
=> \dt t1
List of tables
Schema | Name | Kind | Owner | Comment
--------+------+-------+-------+---------
public | t1 | table | Joe |
(1 row)
=> DROP USER Joe;
NOTICE 4927: The Table t1 depends on User Joe
ROLLBACK 3128: DROP failed due to dependencies
DETAIL: Cannot drop User Joe because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too
=> DROP USER Joe CASCADE;
DROP USER
=> \dt t1
List of tables
Schema | Name | Kind | Owner | Comment
--------+------+-------+-------+---------
public | t1 | table | SuzyQ |
(1 row)
3.9 - Kerberos configuration parameters
The following parameters let you configure the Vertica principal for Kerberos authentication and specify the location of the Kerberos keytab file.
The following parameters let you configure the Vertica principal for Kerberos authentication and specify the location of the Kerberos keytab file.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
KerberosServiceName |
Provides the service name portion of the Vertica Kerberos principal. By default, this parameter is vertica . For example:
vertica/host@EXAMPLE.COM
Default: vertica
|
KerberosHostname |
Provides the instance or host name portion of the Vertica Kerberos principal. For example:
vertica/host@EXAMPLE.COM
If you omit the optional KerberosHostname parameter, Vertica uses the return value from the function gethostname() . Assuming each cluster node has a different host name, those nodes will each have a different principal, which you must manage in that node's keytab file.
|
KerberosRealm |
Provides the realm portion of the Vertica Kerberos principal. A realm is the authentication administrative domain and is usually formed in uppercase letters. For example:
vertica/host@EXAMPLE.COM
|
KerberosKeytabFile |
Provides the location of the keytab file that contains credentials for the Vertica Kerberos principal. By default, this file is located in /etc . For example:
KerberosKeytabFile=/etc/krb5.keytab
Note
-
The principal must take the form KerberosServiceName/KerberosHostName@KerberosRealm
-
The keytab file must be readable by the file owner who is running the process (typically the Linux dbadmin user assigned file permissions 0600).
|
KerberosTicketDuration |
Determines the lifetime of the ticket retrieved from performing a kinit. The default is 0 (zero), which disables this parameter.
If you omit setting this parameter, the lifetime is determined by the default Kerberos configuration.
|
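For example, the following sketch configures a principal of the form vertica/host@EXAMPLE.COM; the realm and keytab path are examples, so substitute your own:
=> ALTER DATABASE DEFAULT SET KerberosServiceName = 'vertica';
=> ALTER DATABASE DEFAULT SET KerberosRealm = 'EXAMPLE.COM';
=> ALTER DATABASE DEFAULT SET KerberosKeytabFile = '/etc/krb5.keytab';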
3.10 - Machine learning parameters
You use machine learning parameters to configure various aspects of machine learning functionality in Vertica.
You use machine learning parameters to configure various aspects of machine learning functionality in Vertica.
Parameter |
Description |
MaxModelSizeKB |
Sets the maximum size of models that can be imported. The sum of the size of files specified in the metadata.json file determines the model size. The unit of this parameter is KBytes. The native Vertica model (category=VERTICA_MODELS) is exempted from this limit. If you can export the model from Vertica, and the model is not altered while outside Vertica, you can import it into Vertica again.
The MaxModelSizeKB parameter can be set only by a superuser and only at the database level. It is visible only to a superuser. Its default value is 4GB, and its valid range is between 1KB and 64GB (inclusive).
Examples:
To set this parameter to 3KB:
=> ALTER DATABASE DEFAULT SET MaxModelSizeKB = 3;
To set this parameter to 64GB (the maximum allowed):
=> ALTER DATABASE DEFAULT SET MaxModelSizeKB = 67108864;
To reset this parameter to the default value:
=> ALTER DATABASE DEFAULT CLEAR MaxModelSizeKB;
Default: 4GB
|
3.11 - Tuple mover parameters
These parameters control how the Tuple Mover operates.
These parameters control how the Tuple Mover operates.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
ActivePartitionCount |
Sets the number of active partitions. The active partitions are those most recently created. For example:
=> ALTER DATABASE DEFAULT SET ActivePartitionCount = 2;
For information about how the Tuple Mover treats active and inactive partitions during a mergeout operation, see Partition mergeout.
Default: 1
|
CancelTMTimeout |
When partition, copy table, and rebalance operations encounter a conflict with an internal Tuple Mover job, those operations attempt to cancel the conflicting Tuple Mover job. This parameter specifies the amount of time, in seconds, that the blocked operation waits for the Tuple Mover cancellation to take effect. If the operation is unable to cancel the Tuple Mover job within limit specified by this parameter, the operation displays an error and rolls back.
Default: 300
|
EnableTMOnRecoveringNode |
Boolean, specifies whether Tuple Mover performs mergeout activities on nodes with a node state of RECOVERING. Enabling Tuple Mover reduces the number of ROS containers generated during recovery. Having fewer than 1024 ROS containers per projection allows Vertica to maintain optimal recovery performance.
Default: 1 (enabled)
|
MaxMrgOutROSSizeMB |
Specifies in MB the maximum size of ROS containers that are candidates for mergeout operations. The Tuple Mover avoids merging ROS containers that are larger than this setting.
Note
After a rebalance operation, the Tuple Mover groups ROS containers in batches that are smaller than MaxMrgOutROSSizeMB. ROS containers that are larger than MaxMrgOutROSSizeMB are merged individually.
Default: -1 (no maximum limit)
|
MergeOutInterval |
Specifies in seconds how frequently the Tuple Mover checks the mergeout request queue for pending requests:
-
If the queue contains mergeout requests, the Tuple Mover does nothing and goes back to sleep.
-
If the queue is empty, the Tuple Mover:
It then goes back to sleep.
Default: 600
|
PurgeMergeoutPercent |
Specifies as a percentage the threshold of deleted records in a ROS container that invokes an automatic mergeout operation, to purge those records. Vertica only counts the number of 'aged-out' delete vectors—that is, delete vectors that are as 'old' or older than the ancient history mark (AHM) epoch.
This threshold applies to all ROS containers for non-partitioned tables. It also applies to ROS containers of all inactive partitions. In both cases, aged-out delete vectors are permanently purged from the ROS container.
Note
This configuration parameter only applies to automatic mergeout operations. It does not apply to manual mergeout operations that are invoked by calling meta-functions DO_TM_TASK('mergeout') and PURGE.
Default: 20 (percent)
|
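For example, to make mergeout more aggressive on tables with heavy delete activity, you might check the mergeout queue more often and purge at a lower deleted-record threshold; the values below are illustrative, so test them before applying in production:
=> ALTER DATABASE DEFAULT SET MergeOutInterval = 300;
=> ALTER DATABASE DEFAULT SET PurgeMergeoutPercent = 10;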
3.12 - Projection parameters
The following configuration parameters help you manage projections.
The following configuration parameters help you manage projections.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
AnalyzeRowCountInterval |
Specifies how often Vertica checks the number of projection rows and whether the threshold set by ARCCommitPercentage has been crossed.
For more information, see Collecting database statistics.
Default: 86400 seconds (24 hours)
|
ARCCommitPercentage |
Sets the threshold percentage of difference between the last-recorded aggregate projection row count and current row count for a given table. When the difference exceeds this threshold, Vertica updates the catalog with the current row count.
Default: 3 (percent)
|
ContainersPerProjectionLimit |
Specifies how many ROS containers Vertica creates per projection before ROS pushback occurs.
Caution
Increasing this parameter's value can cause serious degradation of database performance. Vertica strongly recommends that you not modify this parameter without first consulting with Customer Support professionals.
Default: 1024
|
MaxAutoSegColumns |
Specifies the number of columns (0–1024) to use in an auto-projection's hash segmentation clause. Set to 0 to use all columns.
Default: 8
|
MaxAutoSortColumns |
Specifies the number of columns (0–1024) to use in an auto-projection's sort expression. Set to 0 to use all columns.
Default: 8
|
RebalanceQueryStorageContainers |
By default, prior to performing a rebalance, Vertica performs a system table query to compute the size of all projections involved in the rebalance task. This query enables Vertica to optimize the rebalance to most efficiently utilize available disk space. This query can, however, significantly increase the time required to perform the rebalance.
By disabling the system table query, you can reduce the time required to perform the rebalance. If your nodes are low on disk space, disabling the query increases the chance that a node runs out of disk space. In that situation, the rebalance fails.
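If rebalance duration matters more to you than the disk-space optimization, the query can be disabled database-wide; for example:

```sql
-- Skip the pre-rebalance projection-size query.
=> ALTER DATABASE DEFAULT SET RebalanceQueryStorageContainers = 0;
```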
Default: 1 (enable)
|
RewriteQueryForLargeDim |
If enabled (1), Vertica rewrites a SET USING or DEFAULT USING query during a REFRESH_COLUMNS operation by reversing the inner and outer join between the target and source tables. Doing so can optimize refresh performance in cases where the source data is in a table that is larger than the target table.
Important
Enable this parameter only if the SET USING source data is in a table that is larger than the target table. If the source data is in a table smaller than the target table, then enabling RewriteQueryForLargeDim can adversely affect refresh performance.
Default: 0
|
SegmentAutoProjection |
Determines whether auto-projections are segmented if the table definition omits a segmentation clause. You can set this parameter at database and session scopes.
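For example, to suppress segmented auto-projections for the current session only:

```sql
-- Session-scope override; the database-level setting is unchanged.
=> ALTER SESSION SET SegmentAutoProjection = 0;
```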
Default: 1 (create segmented auto projections)
|
3.13 - Epoch management parameters
The following table describes the epoch management parameters for configuring Vertica.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
AdvanceAHMInterval |
Determines how frequently (in seconds) Vertica checks the history retention status.
AdvanceAHMInterval cannot be set to a value that is less than the EpochMapInterval.
Default: 180 (seconds)
|
AHMBackupManagement |
Blocks the advancement of the Ancient History Mark (AHM). When this parameter is enabled, the AHM epoch cannot be later than the epoch of your latest full backup. If you advance the AHM to purge and delete data, do not enable this parameter.
Caution
Do not enable this parameter before taking full backups, as it would prevent the AHM from advancing.
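For example, after taking a full backup you might enable the parameter and then confirm where the AHM currently stands:

```sql
-- Pin the AHM so it cannot advance past the latest full backup.
=> ALTER DATABASE DEFAULT SET AHMBackupManagement = 1;
-- Report the current AHM as a timestamp.
=> SELECT GET_AHM_TIME();
```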
Default: 0
|
EpochMapInterval |
Determines the granularity of mapping between epochs and time available to historical queries. When a historical query AT TIME T is issued, Vertica maps it to an epoch within a granularity of EpochMapInterval seconds. It similarly affects the time reported for the Last Good Epoch during failure recovery. Note that it does not affect the internal precision of epochs themselves.
Tip
Decreasing this interval increases the number of epochs saved on disk. Therefore, consider reducing the HistoryRetentionTime parameter to limit the number of history epochs that Vertica retains.
Default: 180 (seconds)
|
HistoryRetentionTime |
Determines how long deleted data is saved (in seconds) as a historical reference. When the specified time since the deletion has passed, you can purge the data. Set this parameter to -1 if you prefer to use HistoryRetentionEpochs to determine which deleted data can be purged.
Note
The default setting of 0 effectively prevents use of the Administration Tools 'Roll Back Database to Last Good Epoch' option, because the AHM remains close to the current epoch and a rollback is not permitted to an epoch prior to the AHM.
Tip
If you rely on the Roll Back option to remove recently loaded data, consider setting a day-wide window to remove loaded data. For example:
=> ALTER DATABASE DEFAULT SET HistoryRetentionTime = 86400;
Default: 0 (Data saved when nodes are down.)
|
HistoryRetentionEpochs |
Specifies the number of historical epochs to save, and therefore, the amount of deleted data.
Unless you have a reason to limit the number of epochs, Vertica recommends that you specify the time over which deleted data is saved.
If you specify both History parameters, HistoryRetentionTime takes precedence. Setting both parameters to -1 preserves all historical data.
See Setting a purge policy.
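For example, to retain the last 500 epochs of deleted data and disable the time-based policy:

```sql
-- Disable the time-based retention policy.
=> ALTER DATABASE DEFAULT SET HistoryRetentionTime = -1;
-- Keep deleted data for the most recent 500 epochs.
=> ALTER DATABASE DEFAULT SET HistoryRetentionEpochs = 500;
```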
Default: -1 (disabled)
|
3.14 - Monitoring parameters
The following table describes parameters that control options for monitoring the Vertica database.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
EnableDataCollector |
Enables and disables the Data Collector, which is the Workload Analyzer's internal diagnostics utility. Affects all sessions on all nodes. Use 0 to turn off data collection.
Default: 1 (enabled)
|
SnmpTrapDestinationsList |
Defines where Vertica sends traps for SNMP. See Configuring reporting for SNMP. For example:
=> ALTER DATABASE DEFAULT SET SnmpTrapDestinationsList = 'localhost 162 public';
Default: none
|
SnmpTrapsEnabled |
Enables event trapping for SNMP. See Configuring reporting for SNMP.
Default: 0
|
SnmpTrapEvents |
Define which events Vertica traps through SNMP. See Configuring reporting for SNMP. For example:
=> ALTER DATABASE DEFAULT SET SnmpTrapEvents = 'Low Disk Space, Recovery Failure';
Default: Low Disk Space, Read Only File System, Loss of K Safety, Current Fault Tolerance at Critical Level, Too Many ROS Containers, Node State Change, Recovery Failure, Stale Checkpoint, and CRC Mismatch.
|
SyslogEnabled |
Enables event trapping for syslog. See Configuring reporting for syslog.
Default: 0
|
SyslogEvents |
Defines events that generate a syslog entry. See Configuring reporting for syslog. For example:
=> ALTER DATABASE DEFAULT SET SyslogEvents = 'Low Disk Space, Recovery Failure';
Default: none
|
SyslogFacility |
Defines which SyslogFacility Vertica uses. See Configuring reporting for syslog.
Default: user
|
3.15 - Profiling parameters
The following table describes the profiling parameters for configuring Vertica. See Profiling database performance for more information on profiling queries.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
GlobalEEProfiling |
Enables profiling for query execution runs in all sessions on all nodes.
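These profiling parameters are Boolean and are toggled like any other configuration parameter; for example, to enable execution-engine profiling database-wide:

```sql
=> ALTER DATABASE DEFAULT SET GlobalEEProfiling = 1;
```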
Default: 0
|
GlobalQueryProfiling |
Enables query profiling for all sessions on all nodes.
Default: 0
|
GlobalSessionProfiling |
Enables session profiling for all sessions on all nodes.
Default: 0
|
SaveDCEEProfileThresholdUS |
Sets in microseconds the query duration threshold for saving profiling information to system tables QUERY_CONSUMPTION and EXECUTION_ENGINE_PROFILES. You can set this parameter to a maximum value of 2147483647 (2^31 - 1, or approximately 35.79 minutes).
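For example, to capture profiles for any query that runs longer than one second:

```sql
-- 1 second = 1,000,000 microseconds.
=> ALTER DATABASE DEFAULT SET SaveDCEEProfileThresholdUS = 1000000;
```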
Default: 60000000 (60 seconds)
|
3.16 - Database Designer parameters
The following table describes the parameters for configuring the Vertica Database Designer.
Parameter |
Description |
DBDCorrelationSampleRowCount |
Minimum number of table rows at which Database Designer discovers and records correlated columns.
Default: 4000
|
DBDLogInternalDesignProcess |
Enables or disables Database Designer logging.
Default: 0 (False)
|
DBDUseOnlyDesignerResourcePool |
Enables use of the DBD pool by the Vertica Database Designer.
When set to false, design processing is mostly contained by the user's resource pool, but might spill over into some system resource pools for less-intensive tasks.
Default: 0 (False)
|
3.17 - Internationalization parameters
The following table describes internationalization parameters for configuring Vertica.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
DefaultIntervalStyle |
Sets the default interval style. If set to 0 (default), intervals are output in PLAIN style (the SQL standard), without interval units. If set to 1, intervals include units on output. This parameter does not take effect until the database is restarted.
Default: 0
|
DefaultSessionLocale |
Sets the default session startup locale for the database. This parameter does not take effect until the database is restarted.
Default: en_US@collation=binary
|
EscapeStringWarning |
Issues a warning when backslashes are used in a string literal. This can help locate backslashes that are being treated as escape characters so they can be fixed to follow the SQL standard-conforming string syntax instead.
Default: 1
|
StandardConformingStrings |
Determines whether character string literals treat backslashes (\) as string literals or escape characters. When set to -1, backslashes are treated as string literals; when set to 0, backslashes are treated as escape characters.
Tip
To treat backslashes as escape characters, use the extended string syntax (E'...').
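For example, with the default setting the two literals below behave differently: the plain string keeps the backslash as a literal character, while the extended string interprets it as an escape:

```sql
-- Backslash kept as a literal character (StandardConformingStrings = -1):
=> SELECT 'a\nb';
-- Backslash interpreted as a newline escape via extended string syntax:
=> SELECT E'a\nb';
```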
Default: -1
|
3.18 - Text search parameters
You can configure Vertica for text search using the following parameter.
Parameters |
Description |
TextIndexMaxTokenLength |
Controls the maximum size of a token in a text index.
For example:
=> ALTER DATABASE database_name SET PARAMETER TextIndexMaxTokenLength=760;
If the parameter is set to a value greater than 65000 characters, then the tokenizer truncates the token at 65000 characters.
Caution
Avoid setting this parameter near its maximum value, 65000. Doing so can result in a significant decrease in performance. For optimal performance, set this parameter to the maximum token value of your tokenizer.
Default: 128 (characters)
|
3.19 - Kafka user-defined session parameters
Use the following Vertica user-defined session parameters to configure Kafka SSL when not using a scheduler. The kafka_ parameters configure SSL authentication for Kafka. Refer to TLS/SSL encryption with Kafka for more information.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameter |
Description |
kafka_SSL_CA |
The contents of the certificate authority certificate. For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_CA='MIIBOQIBAAJBAIOL';
Default: none
|
kafka_SSL_Certificate |
The contents of the SSL certificate. For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_Certificate='XrM07O4dV/nJ5g';
This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.
Default: none
|
kafka_SSL_PrivateKey_secret |
The private key used to encrypt the session. Vertica does not log this information. For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_PrivateKey_secret='A60iThKtezaCk7F';
This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.
Default: none
|
kafka_SSL_PrivateKeyPassword_secret |
The password used to create the private key. Vertica does not log this information.
For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_PrivateKeyPassword_secret='secret';
This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.
Default: none
|
kafka_Enable_SSL |
Enables SSL authentication for Vertica-Kafka integration. For example:
=> ALTER SESSION SET UDPARAMETER kafka_Enable_SSL=1;
Default: 0
|
MaxSessionUDParameterSize |
Sets the maximum length of a value in a user-defined session parameter. For example:
=> ALTER SESSION SET MaxSessionUDParameterSize = 2000;
Default: 1000
|
See also: User-defined session parameters.
3.20 - Constraint parameters
The following configuration parameters control how Vertica evaluates and enforces constraints. All parameters are set at the database level through ALTER DATABASE.
Three of these parameters—EnableNewCheckConstraintsByDefault, EnableNewPrimaryKeysByDefault, and EnableNewUniqueKeysByDefault—can be used to enforce CHECK, PRIMARY KEY, and UNIQUE constraints, respectively. For details, see Constraint enforcement.
Parameters |
Description |
EnableNewCheckConstraintsByDefault |
Boolean parameter, set to 0 or 1:
-
0: Disable enforcement of new CHECK constraints except where the table DDL explicitly enables them.
-
1 (default): Enforce new CHECK constraints except where the table DDL explicitly disables them.
|
EnableNewPrimaryKeysByDefault |
Boolean parameter, set to 0 or 1:
-
0 (default): Disable enforcement of new PRIMARY KEY constraints except where the table DDL explicitly enables them.
-
1: Enforce new PRIMARY KEY constraints except where the table DDL explicitly disables them.
Note
Vertica recommends enforcing PRIMARY KEY and UNIQUE constraints together.
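Following that recommendation, you might enable both enforcement defaults together:

```sql
=> ALTER DATABASE DEFAULT SET EnableNewPrimaryKeysByDefault = 1;
=> ALTER DATABASE DEFAULT SET EnableNewUniqueKeysByDefault = 1;
```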
|
EnableNewUniqueKeysByDefault |
Boolean parameter, set to 0 or 1:
-
0 (default): Disable enforcement of new UNIQUE constraints except where the table DDL explicitly enables them.
-
1: Enforce new UNIQUE constraints except where the table DDL explicitly disables them.
|
MaxConstraintChecksPerQuery |
Sets the maximum number of constraints that ANALYZE_CONSTRAINTS can handle with a single query:
-
-1 (default): No maximum set, ANALYZE_CONSTRAINTS uses a single query to evaluate all constraints within the specified scope.
-
Integer > 0: The maximum number of constraints per query. If the number of constraints to evaluate exceeds this value, ANALYZE_CONSTRAINTS handles it with multiple queries.
For details, see Distributing Constraint Analysis.
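For example, to cap each ANALYZE_CONSTRAINTS query at 10 constraints and then analyze a table; the table name is a placeholder:

```sql
=> ALTER DATABASE DEFAULT SET MaxConstraintChecksPerQuery = 10;
-- 'public.orders' is a hypothetical table.
=> SELECT ANALYZE_CONSTRAINTS('public.orders');
```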
|
3.21 - Numeric precision parameters
The following configuration parameters let you configure numeric precision for numeric data types. For more about using these parameters, see Numeric data type overflow with SUM, SUM_FLOAT, and AVG.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
AllowNumericOverflow |
Boolean, set to one of the following:
-
1 (true): Allows silent numeric overflow. Vertica does not implicitly extend precision of numeric data types. Vertica ignores the value of NumericSumExtraPrecisionDigits.
-
0 (false): Vertica produces an overflow error if a result exceeds the precision set by NumericSumExtraPrecisionDigits.
Default: 1 (true)
|
NumericSumExtraPrecisionDigits |
An integer between 0 and 20, inclusive. Vertica produces an overflow error if a result exceeds the specified precision. This parameter setting only applies if AllowNumericOverflow is set to 0 (false).
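For example, to make Vertica raise overflow errors and allow eight extra digits of precision beyond the DDL-specified precision:

```sql
-- NumericSumExtraPrecisionDigits only applies when overflow is disallowed.
=> ALTER DATABASE DEFAULT SET AllowNumericOverflow = 0;
=> ALTER DATABASE DEFAULT SET NumericSumExtraPrecisionDigits = 8;
```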
Default: 6 (places beyond the DDL-specified precision)
|
3.22 - Memory management parameters
The following table describes parameters for managing Vertica memory usage.
Caution
Modify these parameters only under guidance from Vertica Support.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Parameters |
Description |
MemoryPollerIntervalSec |
Specifies in seconds how often the Vertica memory poller checks whether Vertica memory usage is below the thresholds of several configuration parameters (see below):
-
MemoryPollerMallocBloatThreshold
-
MemoryPollerReportThreshold
-
MemoryPollerTrimThreshold
Important
To disable polling of all thresholds, set this parameter to 0. Doing so effectively disables automatic memory usage reporting and trimming.
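For example, to poll every 5 seconds instead of the default 2, or to disable the poller entirely:

```sql
=> ALTER DATABASE DEFAULT SET MemoryPollerIntervalSec = 5;
-- Setting 0 disables polling of all memory-poller thresholds:
-- => ALTER DATABASE DEFAULT SET MemoryPollerIntervalSec = 0;
```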
Default: 2
|
MemoryPollerMallocBloatThreshold |
Threshold of glibc memory bloat.
The memory poller calls glibc function malloc_info() to obtain the amount of free memory in malloc. It then compares MemoryPollerMallocBloatThreshold (by default, 0.3) with the following expression:
free-memory-in-malloc / RSS
If this expression evaluates to a value higher than MemoryPollerMallocBloatThreshold, the memory poller calls glibc function malloc_trim(). This function reclaims free memory from malloc and returns it to the operating system. Details on calls to malloc_trim() are written to system table MEMORY_EVENTS.
To disable polling of this threshold, set the parameter to 0.
Default: 0.3
|
MemoryPollerReportThreshold |
Threshold of memory usage that determines whether the Vertica memory poller writes a report.
The memory poller compares MemoryPollerReportThreshold with the following expression:
RSS / available-memory
When this expression evaluates to a value higher than MemoryPollerReportThreshold (by default, 0.93), the memory poller writes a report to MemoryReport.log in the Vertica working directory. This report includes information about Vertica memory pools, how much memory is consumed by individual queries and sessions, and so on. The memory poller also logs the report as an event in system table MEMORY_EVENTS, where it sets EVENT_TYPE to MEMORY_REPORT.
To disable polling of this threshold, set the parameter to 0.
Default: 0.93
|
MemoryPollerTrimThreshold |
Threshold for the memory poller to start checking whether to trim glibc-allocated memory.
The memory poller compares MemoryPollerTrimThreshold (by default, 0.83) with the following expression:
RSS / available-memory
If this expression evaluates to a value higher than MemoryPollerTrimThreshold, then the memory poller starts checking the next threshold, set in MemoryPollerMallocBloatThreshold, for glibc memory bloat.
To disable polling of this threshold, set the parameter to 0. Doing so also disables polling of MemoryPollerMallocBloatThreshold .
Default: 0.83
|