Configuration parameters
Vertica supports a wide variety of configuration parameters that affect many facets of database behavior. These parameters can be set with the appropriate ALTER statements at one or more levels, listed here in descending order of precedence:
- User (ALTER USER)
- Session (ALTER SESSION)
- Node (ALTER NODE)
- Database (ALTER DATABASE)
Not all parameters can be set at all levels. Consult the documentation of individual parameters for restrictions.
You can query the CONFIGURATION_PARAMETERS system table to obtain the current settings for all user-accessible parameters. For example, the following query returns settings for partitioning parameters: their current and default values, which levels they can be set at, and whether changes require a database restart to take effect:
=> SELECT parameter_name, current_value, default_value, allowed_levels, change_requires_restart
FROM configuration_parameters WHERE parameter_name ILIKE '%partitioncount%';
parameter_name | current_value | default_value | allowed_levels | change_requires_restart
----------------------+---------------+---------------+----------------+-------------------------
MaxPartitionCount | 1024 | 1024 | NODE, DATABASE | f
ActivePartitionCount | 1 | 1 | NODE, DATABASE | f
(2 rows)
1 - General parameters
The following parameters configure basic database operations. Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AddEnforceLengthDuringCopyRewrite
- When using prepared statements, Vertica rewrites INSERT statements into a COPY statement for enhanced performance. This configuration parameter controls the inclusion of the ENFORCELENGTH parameter in such COPY statements.
Default: True
- ApplyEventsDuringSALCheck
- Boolean, specifies whether Vertica uses catalog events to filter out dropped corrupt partitions during node startup. Dropping corrupt partitions can speed node recovery.
When disabled (0), Vertica reports corrupt partitions, but takes no action. Leaving corrupt partitions in place can reset the current projection checkpoint epoch to the epoch before the corruption occurred.
This parameter has no effect on unpartitioned tables.
Default: 0
- ApportionedFileMinimumPortionSizeKB
- Specifies the minimum portion size (in kilobytes) for use with apportioned file loads. Vertica apportions a file load across multiple nodes only if:
See also EnableApportionLoad and EnableApportionedFileLoad.
Default: 1024
- BlockedSocketGracePeriod
- Sets how long a session socket remains blocked while awaiting client input or output for a given query. See Handling session socket blocking.
Default: None (Socket blocking can continue indefinitely.)
- CatalogCheckpointPercent
- Specifies the threshold at which a checkpoint is created for the database catalog.
By default, this parameter is set to 50 (percent), so when transaction logs reach 50% of the size of the last checkpoint, Vertica adds a checkpoint. Each checkpoint demarcates all changes to the catalog since the last checkpoint.
Default: 50 (percent)
- ClusterSequenceCacheMode
- Boolean, specifies whether the initiator node requests cache for other nodes in a cluster, and then sends cache to other nodes along with the execution plan.
See Distributing sequences.
Default: 1 (enabled)
- CompressCatalogOnDisk
- Boolean, specifies whether to compress the catalog on disk.
This parameter is most effective if the catalog disk partition is small (<50 GB) and the metadata is large (hundreds of tables, partitions, or nodes).
Default: 1
- CompressNetworkData
- Boolean, specifies whether to compress all data sent over the internal network when enabled (set to 1). This compression speeds up network traffic at the expense of added CPU load. If the network is throttling database performance, enable compression to correct the issue.
Default: 0
- CopyFaultTolerantExpressions
- Boolean, indicates whether to report record rejections during transformations and proceed (true) or abort COPY operations if a transformation fails.
Default: 0 (false)
- CopyFromVerticaWithIdentity
- Allows COPY FROM VERTICA and EXPORT TO VERTICA to load values into IDENTITY columns. The destination IDENTITY column is not incremented automatically. To disable the default behavior, set this parameter to 0 (zero).
Default: 1
- DatabaseHeartbeatInterval
- Determines the interval (in seconds) at which each node performs a health check and communicates a heartbeat. If a node does not receive a message within five times the specified interval, the node is evicted from the cluster. Setting the interval to 0 disables the feature.
See Automatic eviction of unhealthy nodes.
Default: 120
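For example, to tighten failure detection by halving the heartbeat interval at the database level (the value shown is illustrative):
=> ALTER DATABASE DEFAULT SET DatabaseHeartbeatInterval = 60;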
- DataLoaderDefaultRetryLimit
- Default number of times a data loader retries failed loads. Changing this value changes the retry limit for existing data loaders that do not specify another limit.
Default: 3
- DefaultArrayBinarySize
- The maximum binary size, in bytes, for an unbounded collection, if a maximum size is not specified at creation time.
Default: 65000
- DefaultResourcePoolForUsers
- Resource pool that is assigned to the profile of a new user, whether created in Vertica or LDAP. This pool is also assigned to users when their assigned resource pool is dropped.
You can set DefaultResourcePoolForUsers only to a global resource pool; attempts to set it to a subcluster resource pool return an error.
Note
If an LDAP user is merged with an existing Vertica user, the resource pool setting on the existing user remains unchanged.
For details, see User resource allocation.
Default: GENERAL
- DefaultTempTableLocal
- Boolean, specifies whether CREATE TEMPORARY TABLE creates a local or global temporary table.
For details, see Creating temporary tables.
Default: 0
- DivideZeroByZeroThrowsError
- Boolean, specifies whether to return an error if a division by zero operation is requested:
- 0: Return 0.
- 1: Return an error.
Default: 1
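For example, assuming the parameter can be set at the session level (check CONFIGURATION_PARAMETERS to confirm the allowed levels), you could have division by zero return 0 for the current session only:
=> ALTER SESSION SET DivideZeroByZeroThrowsError = 0;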
- EnableApportionedChunkingInDefaultLoadParser
- Boolean, specifies whether to enable the built-in parser for delimited files to take advantage of both apportioned load and cooperative parse for potentially better performance.
Default: 1 (enable)
- EnableApportionedFileLoad
- Boolean, specifies whether to enable automatic apportioning across nodes of file loads using COPY FROM VERTICA. Vertica attempts to apportion the load if:
- This parameter and EnableApportionLoad are both enabled.
- The parser supports apportioning.
- The load is divisible into portion sizes of at least the value of ApportionedFileMinimumPortionSizeKB.
Setting this parameter does not guarantee that loads will be apportioned, but disabling it guarantees that they will not be.
See Distributing a load.
Default: 1 (enable)
- EnableApportionLoad
- Boolean, specifies whether to enable automatic apportioning across nodes of data loads using COPY...WITH SOURCE. Vertica attempts to apportion the load if:
Setting this parameter does not guarantee that loads will be apportioned, but disabling it guarantees that they will not be.
For details, see Distributing a load.
Default: 1 (enable)
- EnableBetterFlexTypeGuessing
- Boolean, specifies whether to enable more accurate type guessing when assigning data types to non-string keys in a flex table __raw__ column with COMPUTE_FLEXTABLE_KEYS or COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW. If this parameter is disabled (0), Vertica uses a limited set of Vertica data type assignments.
Default: 1 (enable)
- EnableCooperativeParse
- Boolean, specifies whether to implement multi-threaded parsing capabilities on a node. You can use this parameter for both delimited and fixed-width loads.
Default: 1 (enable)
- EnableForceOuter
- Boolean, specifies whether Vertica uses a table's force_outer value to implement a join. For more information, see Controlling join inputs.
Default: 0 (forced join inputs disabled)
- EnableMetadataMemoryTracking
- Boolean, specifies whether to enable Vertica to track memory used by database metadata in the METADATA resource pool.
Default: 1 (enable)
- EnableResourcePoolCPUAffinity
- Boolean, specifies whether Vertica aligns queries to the resource pool of the processing CPU. When disabled (0), queries run on any CPU, regardless of the CPU_AFFINITY_SET of the resource pool.
Default: 1
- EnableStrictTimeCasts
- Specifies whether all cast failures result in an error.
Default: 0 (disable)
- EnableUniquenessOptimization
- Boolean, specifies whether to enable query optimization that is based on guaranteed uniqueness of column values. Columns that can be guaranteed to include unique values include:
Default: 1 (enable)
- EnableWithClauseMaterialization
- Superseded by WithClauseMaterialization.
- ExternalTablesExceptionsLimit
- Determines the maximum number of COPY exceptions and rejections allowed when a SELECT statement references an external table. Set to -1 to remove any exceptions limit. See Querying external tables.
Default: 100
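For example, to remove the exceptions limit at the database level, as described above (-1 disables the limit):
=> ALTER DATABASE DEFAULT SET ExternalTablesExceptionsLimit = -1;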
- FailoverToStandbyAfter
- Specifies the length of time that an active standby node waits before taking the place of a failed node.
This parameter is set to an interval literal.
Default: None
- FencedUDxMemoryLimitMB
- Sets the maximum amount of memory, in megabytes (MB), that a fenced-mode UDF can use. If a UDF attempts to allocate more memory than this limit, that attempt triggers an exception. For more information, see Fenced and unfenced modes.
Default: -1 (no limit)
- FlexTableDataTypeGuessMultiplier
- Specifies a multiplier that the COMPUTE_FLEXTABLE_KEYS and COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW functions use when assigning a data type and column width for the flex keys table. Both functions assign each key a data type, and multiply the longest key value by this factor to estimate column width. This value is not used to calculate the width of any real columns in a flex table.
Default: 2.0
- FlexTableRawSize
- Specifies the default column width for the __raw__ column of new flex tables, a value between 1 and 32000000, inclusive.
Default: 130000
- ForceUDxFencedMode
- When enabled (1), forces all UDxs that support fenced mode to run in fenced mode, even if their definitions specify NOT FENCED. File exporters cannot be run in fenced mode.
Default: 0
- HTTPServerPortOffset
- Controls the offset for the HTTPS port. The default HTTPS port is 8443, the sum of the client port (5433) and default HTTPServerPortOffset (3010).
Caution
This parameter should not be changed without support guidance.
Default: 3010
- HTTPSlowRequestThreshold
- Threshold in milliseconds for logging HTTP requests in UDFS_EVENTS. Requests that take longer than this value are logged; faster ones are not.
Default: 500
- IcebergPathMapping
- For external tables using Iceberg data, a map of URI prefixes used by Iceberg to prefixes accessible to Vertica. The value is a JSON object:
'{"source-prefix":"new-prefix"[,...]}'
Specify prefixes only (up through port number), not complete paths.
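For example, a hypothetical mapping that rewrites an Iceberg s3a:// prefix to an s3:// prefix that Vertica can access (both prefixes are placeholders for your own endpoints):
=> ALTER DATABASE DEFAULT SET IcebergPathMapping =
   '{"s3a://warehouse.example.com:9000":"s3://warehouse"}';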
- IdleTimeoutInitializingConnectionsMs
- The length of time (in milliseconds) that the server waits before timing out, during each step in connection initialization. After connection initialization, the session is created.
Default: 10000
- JavaBinaryForUDx
- Sets the full path to the Java executable that Vertica uses to run Java UDxs. See Installing Java on Vertica hosts.
- JoinDefaultTupleFormat
- Specifies how to size VARCHAR column data when joining tables on those columns, and buffers accordingly, one of the following:
- fixed: Use join column metadata to size column data to a fixed length, and buffer accordingly.
- variable: Use the actual length of join column data, so buffer size varies for each join.
Default: fixed
- KeepAliveIdleTime
- Length (in seconds) of the idle period before the first TCP keepalive probe is sent to ensure that the client is still connected. If set to 0, Vertica uses the kernel's tcp_keepalive_time parameter setting.
Default: 0
- KeepAliveProbeCount
- Number of consecutive keepalive probes that must go unacknowledged by the client before the client connection is considered lost and closed. If set to 0, Vertica uses the kernel's tcp_keepalive_probes parameter setting.
Default: 0
- KeepAliveProbeInterval
- Time interval (in seconds) between keepalive probes. If set to 0, Vertica uses the kernel's tcp_keepalive_intvl parameter setting.
Default: 0
- LockTimeout
- Specifies in seconds how long a table waits to acquire a lock.
Default: 300
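For example, to allow tables to wait up to ten minutes to acquire a lock (600 is an illustrative value):
=> ALTER DATABASE DEFAULT SET LockTimeout = 600;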
- LoadSourceStatisticsLimit
- Specifies the maximum number of sources per load operation that are profiled in the LOAD_SOURCES system table. Set it to 0 to disable profiling.
Default: 256
- MaxBundleableROSSizeKB
- Specifies the minimum size, in kilobytes, of an independent ROS file. Vertica bundles storage container ROS files below this size into a single file. Bundling improves the performance of any file-intensive operations, including backups, restores, and mergeouts.
If you set this parameter to a value of 0, Vertica bundles .fdb and .pidx files without bundling other storage container files.
Default: 1024
- MaxClientSessions
- Determines the maximum number of client sessions that can run on a single node of the database. The default value allows for five additional administrative logins. These logins prevent DBAs from being locked out of the system if non-dbadmin users reach the login limit.
Tip
Setting this parameter to 0 prevents new client sessions from being opened while you are shutting down the database. Restore the parameter to its original setting after you restart the database. For details, see
Managing Sessions.
Default: 50 user logins and 5 additional administrative logins
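For example, following the tip above, you might block new client sessions before a shutdown and restore the original setting afterward (the values shown assume the documented default of 50):
=> ALTER DATABASE DEFAULT SET MaxClientSessions = 0;
-- ... shut down and restart the database ...
=> ALTER DATABASE DEFAULT SET MaxClientSessions = 50;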
- ObjectStoreGlobStrategy
- For partitioned external tables in object stores, the strategy to use for expanding globs before pruning partitions:
- Flat: COPY fetches a list of all full object names with a given prefix, which can incur latency if partitions are numerous or deeply nested.
- Hierarchical: COPY fetches object names one partition layer at a time, allowing earlier pruning but possibly requiring more calls to the object store when queries are not selective or there are not many partition directory levels.
For details, see Partitions on Object Stores.
Default: Flat
- ObjStoreUploadParallelism
- Integer (maximum 1000), size of the thread pool used when writing files to S3. Using more threads parallelizes uploads, but uses more memory. Each node has a single thread pool used for all file writes.
This configuration parameter can be set only at the database level.
Default: 0 (disabled)
- ParquetMetadataCacheSizeMB
- Size of the cache used for metadata when reading Parquet data. The cache uses local TEMP storage.
Default: 4096
- PatternMatchingUseJit
- Boolean, specifies whether to enable just-in-time compilation (to machine code) of regular expression pattern matching functions used in queries. Enabling this parameter can usually improve pattern matching performance on large tables. The Perl Compatible Regular Expressions (PCRE) pattern-match library evaluates regular expressions. Restart the database for this parameter to take effect.
See also Regular expression functions.
Default: 1 (enable)
- PatternMatchStackAllocator
- Boolean, specifies whether to override the stack memory allocator for the pattern-match library. The Perl Compatible Regular Expressions (PCRE) pattern-match library evaluates regular expressions. Restart the database for this parameter to take effect.
See also Regular expression functions.
Default: 1 (enable override)
- TerraceRoutingFactor
- Specifies whether to enable or disable terrace routing on any Enterprise Mode large cluster that implements rack-based fault groups.
- To enable, set as follows, where:
- numRackNodes: Number of nodes in a rack
- numRacks: Number of racks in the cluster
- To disable, set to a value so large that terrace routing cannot be enabled for the largest clusters—for example, 1000.
For details, see Terrace routing.
Default: 2
- TransactionIsolationLevel
- Changes the isolation level for the database. After modification, Vertica uses the new transaction level for every new session. Existing sessions and their transactions continue to use the original isolation level.
See also Change transaction isolation levels.
Default: READ COMMITTED
- TransactionMode
- Whether transactions are in read/write or read-only mode. Read/write is the default. Existing sessions and their transactions continue to use the original mode.
Default: READ WRITE
- UDFSSlowRequestThreshold
- Threshold in milliseconds for logging events in UDFS_EVENTS. Requests that take longer than this value are logged; faster ones are not.
Default: 1000
- UDxFencedBlockTimeout
- Number of seconds to wait for output before aborting a UDx running in Fenced and unfenced modes. If the server aborts a UDx for this reason, it produces an error message similar to "ERROR 3399: Failure in UDx RPC call: timed out in receiving a UDx message". If you see this error frequently, you can increase this limit. UDxs running in fenced mode do not run in the server process, so increasing this value does not impede server performance.
Default: 60
- UseLocalTzForParquetTimestampConversion
- Boolean, specifies whether to do timezone conversion when reading Parquet files. Hive version 1.2.1 introduced an option to localize timezones when writing Parquet files. Previously it wrote them in UTC and Vertica adjusted the value when reading the files.
Set to 0 if Hive already adjusted the timezones.
Default: 1 (enable conversion)
- UseServerIdentityOverUserIdentity
- Boolean, specifies whether to ignore user-supplied credentials for non-Linux file systems and always use a USER storage location to govern access to data. See Creating a Storage Location for USER Access.
Default: 0 (disable)
- WithClauseMaterialization
- Boolean, specifies whether to enable materialization of WITH clause results. When materialization is enabled (1), Vertica evaluates each WITH clause once and stores results in a temporary table.
For WITH queries with complex types, temp relations are disabled.
Default: 0 (disable)
- WithClauseRecursionLimit
- Specifies the maximum number of times a WITH RECURSIVE clause iterates over the content of its own result set before it exits. For details, see WITH clause recursion.
Important
Be careful to set WithClauseRecursionLimit only as high as needed to traverse the deepest hierarchies. Vertica sets no limit on this parameter; however, a high value can incur considerable overhead that adversely affects performance and exhausts system resources.
If a high recursion count is required, then consider enabling materialization. For details, see WITH RECURSIVE Materialization.
Default: 8
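For example, to raise the recursion limit for the current session only, per the guidance above (the value 100 is illustrative; set it only as high as your deepest hierarchy requires):
=> ALTER SESSION SET WithClauseRecursionLimit = 100;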
2 - Azure parameters
Use the following parameters to configure reading from Azure blob storage. For more information about reading data from Azure, see Azure Blob Storage object store.
For external tables using highly partitioned data in an object store, see the ObjectStoreGlobStrategy configuration parameter and Partitions on Object Stores.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AzureStorageCredentials
- Collection of JSON objects, each of which specifies connection credentials for one endpoint. This parameter takes precedence over Azure managed identities.
The collection must contain at least one object and may contain more. Each object must specify at least one of accountName or blobEndpoint, and at least one of accountKey or sharedAccessSignature.
- accountName: If not specified, uses the label of blobEndpoint.
- blobEndpoint: Host name with optional port (host:port). If not specified, uses account.blob.core.windows.net.
- accountKey: Access key for the account or endpoint.
- sharedAccessSignature: Access token for finer-grained access control, if being used by the Azure endpoint.
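For example, a hypothetical credential collection with one named account and one explicit endpoint (the account name, host, key, and token are placeholders). As with other credential parameters, setting this at the session level avoids storing secrets in the database:
=> ALTER SESSION SET AzureStorageCredentials =
   '[{"accountName": "myaccount", "accountKey": "placeholder-key"},
     {"blobEndpoint": "mystorage.example.com:8080", "sharedAccessSignature": "placeholder-token"}]';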
- AzureStorageEndpointConfig
- Collection of JSON objects, each of which specifies configuration elements for one endpoint. Each object must specify at least one of accountName or blobEndpoint.
- accountName: If not specified, uses the label of blobEndpoint.
- blobEndpoint: Host name with optional port (host:port). If not specified, uses account.blob.core.windows.net.
- protocol: HTTPS (default) or HTTP.
- isMultiAccountEndpoint: true if the endpoint supports multiple accounts, false otherwise (default is false). To use multiple-account access, you must include the account name in the URI. If a URI path contains an account, this value is assumed to be true unless explicitly set to false.
3 - Constraints parameters
The following configuration parameters control how Vertica evaluates and enforces constraints. All parameters are set at the database level through ALTER DATABASE.
Three of these parameters—EnableNewCheckConstraintsByDefault, EnableNewPrimaryKeysByDefault, and EnableNewUniqueKeysByDefault—can be used to enforce CHECK, PRIMARY KEY, and UNIQUE constraints, respectively. For details, see Constraint enforcement.
- EnableNewCheckConstraintsByDefault
- Boolean parameter, set to 0 or 1:
- EnableNewPrimaryKeysByDefault
- Boolean parameter, set to 0 or 1:
- 0 (default): Disable enforcement of new PRIMARY KEY constraints except where the table DDL explicitly enables them.
- 1: Enforce new PRIMARY KEY constraints except where the table DDL explicitly disables them.
Note
Vertica recommends enforcing constraints PRIMARY KEY and UNIQUE together.
- EnableNewUniqueKeysByDefault
- Boolean parameter, set to 0 or 1:
- MaxConstraintChecksPerQuery
- Sets the maximum number of constraints that ANALYZE_CONSTRAINTS can handle with a single query:
- -1 (default): No maximum set; ANALYZE_CONSTRAINTS uses a single query to evaluate all constraints within the specified scope.
- Integer > 0: The maximum number of constraints per query. If the number of constraints to evaluate exceeds this value, ANALYZE_CONSTRAINTS handles it with multiple queries.
For details, see Distributing Constraint Analysis.
4 - Database Designer parameters
The following table describes the parameters for configuring the Vertica Database Designer.
DBDCorrelationSampleRowCount
- Minimum number of table rows at which Database Designer discovers and records correlated columns.
Default: 4000
DBDLogInternalDesignProcess
- Enables or disables Database Designer logging.
Default: 0 (False)
DBDUseOnlyDesignerResourcePool
- Enables use of the DBD pool by the Vertica Database Designer.
When set to false, design processing is mostly contained by the user's resource pool, but might spill over into some system resource pools for less-intensive tasks.
Default: 0 (False)
5 - Eon Mode parameters
The following parameters configure how the database operates when running in Eon Mode. Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- BackgroundDepotWarming
- Specifies background depot warming behavior:
- 1: The depot loads objects while it is warming, and continues to do so in the background after the node becomes active and starts executing queries.
- 0: Node activation is deferred until the depot fetches and loads all queued objects.
For details, see Depot Warming.
Default: 1
- CatalogSyncInterval
- Specifies in minutes how often the transaction log sync service syncs metadata to communal storage. If you change this setting, Vertica restarts the interval count.
Default: 5
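For example, to sync metadata to communal storage every 10 minutes instead of the default 5 (the value is illustrative):
=> ALTER DATABASE DEFAULT SET CatalogSyncInterval = 10;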
- DelayForDeletes
- Specifies in hours how long to wait before deleting a file from communal storage. Vertica first deletes a file from the depot. After the specified time interval, the delete also occurs in communal storage.
Default: 0. Deletes the file from communal storage as soon as it is not in use by shard subscribers.
- DepotOperationsForQuery
- Specifies behavior when the depot does not contain queried file data, one of the following:
- ALL (default): Fetch file data from communal storage, if necessary displacing existing files by evicting them from the depot.
- FETCHES: Fetch file data from communal storage only if space is available; otherwise, read the queried data directly from communal storage.
- NONE: Do not fetch file data to the depot; read the queried data directly from communal storage.
You can also specify query-level behavior with the hint DEPOT_FETCH.
- ECSMode
- String parameter that sets the strategy Vertica uses when dividing the data in a shard among subscribing nodes during an ECS-enabled query, one of the following:
- AUTO: Optimizer automatically determines the strategy to use.
- COMPUTE_OPTIMIZED: Force use of the compute-optimized strategy.
- IO_OPTIMIZED: Force use of the I/O-optimized strategy.
For details, see Manually choosing an ECS strategy.
Default: AUTO
- ElasticKSafety
- Boolean parameter that controls whether Vertica adjusts shard subscriptions due to the loss of a primary node:
- 1: When a primary node is lost, Vertica subscribes other primary nodes to the down node's shard subscriptions. This action helps reduce the chances of the database going into read-only mode due to loss of shard coverage.
- 0: Vertica does not change shard subscriptions in reaction to the loss of a primary node.
Default: 1
For details, see Maintaining Shard Coverage.
- EnableDepotWarmingFromPeers
- Boolean parameter, specifies whether Vertica warms a node depot while the node is starting up and not ready to process queries:
For details, see Depot Warming.
Default: 0
- FileDeletionServiceInterval
- Specifies in seconds the interval between each execution of the reaper cleaner service task.
Default: 60 seconds
- MaxDepotSizePercent
- An integer value that specifies the maximum size of the depot as a percentage of disk size.
Default: 80
- PreFetchPinnedObjectsToDepotAtStartup
- If enabled (set to 1), a warming depot fetches objects that are pinned on its subcluster. For details, see Depot Warming.
Default: 0
- ReaperCleanUpTimeoutAtShutdown
- Specifies in seconds how long Vertica waits for the reaper to delete files from communal storage before shutting down. If set to a negative value, Vertica shuts down without waiting for the reaper.
Note
The reaper is a service task that deletes disk files.
Default: 300
- StorageMergeMaxTempCacheMB
- The size of temp space allocated per query to the StorageMerge operator for caching the data of S3 storage containers.
Note
The actual temp space that is allocated is the lesser of:
For details, see Local caching of storage containers.
- UseCommunalStorageForBatchDepotWarming
- Boolean parameter, specifies where a node retrieves data when warming its depot:
Default: 1
Important
This parameter is for internal use only. Do not change it unless directed to do so by Vertica support.
- UseDepotForReads
- Boolean parameter, specifies whether Vertica accesses the depot to answer queries, or accesses only communal storage:
- 1: Vertica first searches the depot for the queried data; if it is not there, Vertica fetches the data from communal storage for this and future queries.
- 0: Vertica bypasses the depot and always obtains queried data from communal storage.
Note
Enable depot reads to improve query performance and support K-safety.
Default: 1
- UseDepotForWrites
- Boolean parameter, specifies whether Vertica writes loaded data to the depot and then uploads files to communal storage:
- 1: Write loaded data to the depot, then upload files to communal storage.
- 0: Bypass the depot and always write directly to communal storage.
Default: 1
- UsePeerToPeerDataTransfer
- Boolean parameter, specifies whether Vertica pushes loaded data to other shard subscribers:
Note
Setting to 1 helps improve performance when a node is down.
Default: 0
Important
This parameter is for internal use only. Do not change it unless directed to do so by Vertica support.
6 - Epoch management parameters
The following table describes the epoch management parameters for configuring Vertica.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AdvanceAHMInterval
- Determines how frequently (in seconds) Vertica checks the history retention status.
AdvanceAHMInterval cannot be set to a value that is less than the EpochMapInterval.
Default: 180 (seconds)
- AHMBackupManagement
- Blocks the advancement of the Ancient History Mark (AHM). When this parameter is enabled, the AHM epoch cannot be later than the epoch of your latest full backup. If you advance the AHM to purge and delete data, do not enable this parameter.
Caution
Do not enable this parameter before taking full backups, as it would prevent the AHM from advancing.
Default: 0
- EpochMapInterval
- Determines the granularity of mapping between epochs and time available to historical queries. When a historical query AT TIME T request is issued, Vertica maps it to an epoch within a granularity of EpochMapInterval seconds. It similarly affects the time reported for Last Good Epoch during failure recovery. Note that it does not affect the internal precision of epochs themselves.
Tip
Decreasing this interval increases the number of epochs saved on disk. Therefore, consider reducing the HistoryRetentionTime parameter to limit the number of history epochs that Vertica retains.
Default: 180 (seconds)
- HistoryRetentionTime
- Determines how long deleted data is saved (in seconds) as an historical reference. When the specified time since the deletion has passed, you can purge the data. Use the -1 setting if you prefer to use HistoryRetentionEpochs to determine which deleted data can be purged.
Note
The default setting of 0 effectively prevents the use of the Administration Tools 'Roll Back Database to Last Good Epoch' option, because the AHM remains close to the current epoch and a rollback is not permitted to an epoch prior to the AHM.
Tip
If you rely on the Roll Back option to remove recently loaded data, consider setting a day-wide window to remove loaded data. For example:
ALTER DATABASE DEFAULT SET HistoryRetentionTime = 86400;
Default: 0 (Data saved when nodes are down.)
- HistoryRetentionEpochs
- Specifies the number of historical epochs to save, and therefore, the amount of deleted data.
Unless you have a reason to limit the number of epochs, Vertica recommends that you specify the time over which deleted data is saved.
If you specify both History parameters, HistoryRetentionTime takes precedence. Setting both parameters to -1 preserves all historical data.
See Setting a purge policy.
Default: -1 (disabled)
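For example, a purge policy based on time rather than epochs, retaining two days (172800 seconds) of deleted data while leaving HistoryRetentionEpochs disabled:
=> ALTER DATABASE DEFAULT SET HistoryRetentionTime = 172800;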
7 - Google Cloud Storage parameters
Use the following parameters to configure reading from Google Cloud Storage (GCS) using COPY FROM. For more information about reading data from GCS, see Specifying where to load data from.
For external tables using highly partitioned data in an object store, see the ObjectStoreGlobStrategy configuration parameter and Partitions on Object Stores.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- GCSAuth
- An ID and secret key to authenticate to GCS. You can set parameters globally and for the current session with ALTER DATABASE...SET PARAMETER and ALTER SESSION...SET PARAMETER, respectively. For extra security, do not store credentials in the database; instead, set it for the current session with ALTER SESSION. For example:
=> ALTER SESSION SET GCSAuth='ID:secret';
If you use a shared credential, set it in the database with ALTER DATABASE.
- GCSEnableHttps
- Specifies whether to use the HTTPS protocol when connecting to GCS. This parameter can be set only at the database level with ALTER DATABASE...SET PARAMETER.
Default: 1 (enabled)
- GCSEndpoint
- The connection endpoint address.
Default: storage.googleapis.com
8 - Hadoop parameters
The following table describes general parameters for configuring integration with Apache Hadoop. See Apache Hadoop integration for more information.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- EnableHDFSBlockInfoCache
- Boolean, whether to distribute block location metadata collected during planning on the initiator to all database nodes for execution. Distributing this metadata reduces name node accesses, and thus load, but can degrade database performance somewhat in deployments where the name node isn't contended. This performance effect is because the data must be serialized and distributed. Enable distribution if protecting the name node is more important than query performance; usually this applies to large HDFS clusters where name node contention is already an issue.
Default: 0 (disabled)
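For example, on a large HDFS cluster where name node contention is already an issue, you might enable metadata distribution (an illustrative database-level setting):
=> ALTER DATABASE DEFAULT SET EnableHDFSBlockInfoCache = 1;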
- HadoopConfDir
- Directory path containing the XML configuration files copied from Hadoop. The same path must be valid on every Vertica node. You can use the VERIFY_HADOOP_CONF_DIR meta-function to test that the value is set correctly. Setting this parameter is required to read data from HDFS.
For all Vertica users, the files are accessed by the Linux user under which the Vertica server process runs.
When you set this parameter, previously-cached configuration information is flushed.
You can set this parameter at the session level. Doing so overrides the database value; it does not append to it. For example:
=> ALTER SESSION SET HadoopConfDir='/test/conf:/hadoop/hcat/conf';
To append, get the current value and include it on the new path after your additions. Setting this parameter at the session level does not change how the files are accessed.
Default: obtained from environment if possible
- HadoopFSAuthentication
- How (or whether) to use Kerberos authentication with HDFS. By default, if KerberosKeytabFile is set, Vertica uses that credential for both Vertica and HDFS. Usually this is the desired behavior. However, if you are using a Kerberized Vertica cluster with a non-Kerberized HDFS cluster, set this parameter to "none" to indicate that Vertica should not use the Vertica Kerberos credential to access HDFS.
Default: "keytab" if KerberosKeytabFile is set, otherwise "none"
- HadoopFSBlockSizeBytes
- Block size to write to HDFS. Larger files are divided into blocks of this size.
Default: 64MB
- HadoopFSNNOperationRetryTimeout
- Number of seconds a metadata operation (such as list directory) waits for a response before failing. Accepts float values for millisecond precision.
Default: 6 seconds
- HadoopFSReadRetryTimeout
- Number of seconds a read operation waits before failing. Accepts float values for millisecond precision. If you are confident that your file system will fail more quickly, you can improve performance by lowering this value.
Default: 180 seconds
- HadoopFSReplication
- Number of replicas HDFS makes. This is independent of the replication that Vertica does to provide K-safety. Do not change this setting unless directed otherwise by Vertica support.
Default: 3
- HadoopFSRetryWaitInterval
- Initial number of seconds to wait before retrying read, write, and metadata operations. Accepts float values for millisecond precision. The retry interval increases exponentially with every retry.
Default: 3 seconds
- HadoopFSTokenRefreshFrequency
- How often, in seconds, to refresh the Hadoop tokens used to hold Kerberos tickets (see Token expiration).
Default: 0 (refresh when token expires)
- HadoopFSWriteRetryTimeout
- Number of seconds a write operation waits before failing. Accepts float values for millisecond precision. If you are confident that your file system will fail more quickly, you can improve performance by lowering this value.
Default: 180 seconds
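For example, if you are confident your file system fails quickly, you might lower the read and write retry timeouts for the current session (illustrative values; float values give millisecond precision):
=> ALTER SESSION SET HadoopFSReadRetryTimeout = 30.5;
=> ALTER SESSION SET HadoopFSWriteRetryTimeout = 30.5;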
- HadoopImpersonationConfig
- Session parameter specifying the delegation token or Hadoop user for HDFS access. See HadoopImpersonationConfig format for information about the value of this parameter and Proxy users and delegation tokens for more general context.
- HDFSUseWebHDFS
- Boolean, whether URLs in the hdfs scheme use WebHDFS instead of LibHDFS++ to access data.
Deprecated
This parameter is deprecated because LibHDFS++ is deprecated. In the future, Vertica will use WebHDFS for all hdfs URLs.
Default: 1 (enabled)
- WebhdfsClientCertConf
- mTLS configurations for accessing one or more WebHDFS servers. The value is a JSON string; each member has the following properties:
nameservice and authority are mutually exclusive.
For example:
=> ALTER SESSION SET WebhdfsClientCertConf =
'[{"authority" : "my.authority.com:50070", "certName" : "myCert"},
{"nameservice" : "prod", "certName" : "prodCert"}]';
HCatalog Connector parameters
The following table describes the parameters for configuring the HCatalog Connector. See Using the HCatalog Connector for more information.
Note
You can override HCatalog configuration parameters when you create an HCatalog schema with
CREATE HCATALOG SCHEMA.
- EnableHCatImpersonation
- Boolean, whether the HCatalog Connector uses (impersonates) the current Vertica user when accessing Hive. If impersonation is enabled, the HCatalog Connector uses the Kerberos credentials of the logged-in Vertica user to access Hive data. Disable impersonation if you are using an authorization service to manage access without also granting users access to the underlying files. For more information, see Configuring security.
Default: 1 (enabled)
- HCatalogConnectorUseHiveServer2
- Boolean, whether Vertica internally uses HiveServer2 instead of WebHCat to get metadata from Hive.
Default: 1 (enabled)
- HCatalogConnectorUseLibHDFSPP
- Boolean, whether the HCatalog Connector uses the hdfs scheme instead of webhdfs.
Deprecated
Vertica uses the hdfs scheme by default. To use webhdfs, set the HDFSUseWebHDFS parameter.
Default: 1 (enabled)
- HCatConnectionTimeout
- The number of seconds the HCatalog Connector waits for a successful connection to the HiveServer2 (or WebHCat) server before returning a timeout error.
Default: 0 (Wait indefinitely)
- HCatSlowTransferLimit
- Lowest transfer speed (in bytes per second) that the HCatalog Connector allows when retrieving data from the HiveServer2 (or WebHCat) server. If the data transfer rate from the server to Vertica stays below this threshold for longer than the number of seconds specified by the HCatSlowTransferTime parameter, the HCatalog Connector cancels the query and closes the connection.
Default: 65536
- HCatSlowTransferTime
- Number of seconds the HCatalog Connector waits before testing whether the data transfer from the server is too slow. See the HCatSlowTransferLimit parameter.
Default: 60
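For example, to cancel queries whose transfer rate stays below 128 KB per second for more than 30 seconds (illustrative values):
=> ALTER DATABASE DEFAULT SET HCatSlowTransferLimit = 131072;
=> ALTER DATABASE DEFAULT SET HCatSlowTransferTime = 30;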
9 - Internationalization parameters
The following table describes internationalization parameters for configuring Vertica.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- DefaultIntervalStyle
- Sets the default interval style to use. If set to 0 (default), the interval uses PLAIN style (the SQL standard), with no interval units on output. If set to 1, interval units are included in the output. This parameter does not take effect until the database is restarted.
Default: 0
- DefaultSessionLocale
- Sets the default session startup locale for the database. This parameter does not take effect until the database is restarted.
Default: en_US@collation=binary
- EscapeStringWarning
- Issues a warning when backslashes are used in a string literal. This can help locate backslashes that are being treated as escape characters so they can be fixed to follow the SQL standard-conforming string syntax instead.
Default: 1
- StandardConformingStrings
- Determines whether character string literals treat backslashes (\) as string literals or escape characters. When set to -1, backslashes are treated as string literals; when set to 0, backslashes are treated as escape characters.
Tip
To treat backslashes as escape characters, use the extended string syntax (E'...').
Default: -1
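To illustrate the difference: with the default of -1, the backslash in the first literal below is an ordinary character, while the extended string syntax interprets \n as a newline:
=> SELECT 'a\nb', E'a\nb';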
10 - Kafka user-defined session parameters
Set Vertica user-defined session parameters to configure Kafka SSL when not using a scheduler, using ALTER SESSION SET UDPARAMETER. The kafka-prefixed parameters configure SSL authentication for Kafka. For details, see TLS/SSL encryption with Kafka.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- kafka_SSL_CA
- The contents of the certificate authority certificate. For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_CA='MIIBOQIBAAJBAIOL';
Default: none
- kafka_SSL_Certificate
- The contents of the SSL certificate. For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_Certificate='XrM07O4dV/nJ5g';
This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.
Default: none
- kafka_SSL_PrivateKey_secret
- The private key used to encrypt the session. Vertica does not log this information. For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_PrivateKey_secret='A60iThKtezaCk7F';
This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.
Default: none
- kafka_SSL_PrivateKeyPassword_secret
- The password used to create the private key. Vertica does not log this information.
For example:
=> ALTER SESSION SET UDPARAMETER kafka_SSL_PrivateKeyPassword_secret='secret';
This parameter is optional when the Kafka server's parameter ssl.client.auth is set to none or requested.
Default: none
- kafka_Enable_SSL
- Enables SSL authentication for Vertica-Kafka integration. For example:
=> ALTER SESSION SET UDPARAMETER kafka_Enable_SSL=1;
Default: 0
- MaxSessionUDParameterSize
- Sets the maximum length of a value in a user-defined session parameter. For example:
=> ALTER SESSION SET MaxSessionUDParameterSize = 2000;
Default: 1000
See also: User-defined session parameters
11 - Kerberos parameters
The following parameters let you configure the Vertica principal for Kerberos authentication and specify the location of the Kerberos keytab file.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- KerberosEnableKeytabPermissionCheck
- Whether the Vertica server verifies permissions on the keytab file. By default, the Vertica server verifies these permissions.
In a hybrid Kubernetes cluster, set this parameter to 0 so that there is no permissions check. Otherwise, Kerberos authentication fails because the keytab file is stored in a Secret, and the VerticaDB operator cannot verify permissions on a Secret.
Default: 1
- KerberosHostname
- Instance or host name portion of the Vertica Kerberos principal. For example:
vertica/host@EXAMPLE.COM
If you omit the optional KerberosHostname parameter, Vertica uses the return value from the function gethostname(). Assuming each cluster node has a different host name, those nodes will each have a different principal, which you must manage in that node's keytab file.
- KerberosKeytabFile
- Location of the keytab file that contains credentials for the Vertica Kerberos principal. By default, this file is located in /etc. For example:
KerberosKeytabFile=/etc/krb5.keytab
Note
- The principal must take the form KerberosServiceName/KerberosHostName@KerberosRealm.
- The keytab file must be readable by the file owner who is running the process (typically the Linux dbadmin user, assigned file permissions 0600).
- KerberosRealm
- Realm portion of the Vertica Kerberos principal. A realm is the authentication administrative domain and is usually formed in uppercase letters. For example:
vertica/host@EXAMPLE.COM
- KerberosServiceName
- Service name portion of the Vertica Kerberos principal. By default, this parameter is vertica. For example:
vertica/host@EXAMPLE.COM
Default: vertica
- KerberosTicketDuration
- Lifetime of the ticket retrieved from performing a kinit. The default is 0 (zero), which disables this parameter.
If you omit setting this parameter, the lifetime is determined by the default Kerberos configuration.
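If a previously set value is no longer wanted, you can clear the parameter so that ticket lifetime again falls back to the default Kerberos configuration. For example:
=> ALTER DATABASE DEFAULT CLEAR KerberosTicketDuration;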
12 - Machine learning parameters
You use machine learning parameters to configure various aspects of machine learning functionality in Vertica.
- MaxModelSizeKB
- Sets the maximum size of models that can be imported. The sum of the size of files specified in the metadata.json file determines the model size. The unit of this parameter is KBytes. The native Vertica model (category=VERTICA_MODELS) is exempted from this limit. If you can export the model from Vertica, and the model is not altered while outside Vertica, you can import it into Vertica again.
The MaxModelSizeKB parameter can be set only by a superuser and only at the database level. It is visible only to a superuser. Its default value is 4GB, and its valid range is between 1KB and 64GB (inclusive).
Examples:
To set this parameter to 3KB:
=> ALTER DATABASE DEFAULT SET MaxModelSizeKB = 3;
To set this parameter to 64GB (the maximum allowed):
=> ALTER DATABASE DEFAULT SET MaxModelSizeKB = 67108864;
To reset this parameter to the default value:
=> ALTER DATABASE DEFAULT CLEAR MaxModelSizeKB;
Default: 4GB
13 - Memory management parameters
The following table describes parameters for managing Vertica memory usage.
Caution
Modify these parameters only under guidance from Vertica Support.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- MemoryPollerIntervalSec
- Specifies in seconds how often the Vertica memory poller checks whether Vertica memory usage is below the thresholds of the following configuration parameters:
- MemoryPollerMallocBloatThreshold
- MemoryPollerReportThreshold
- MemoryPollerTrimThreshold
Important
To disable polling of all thresholds, set this parameter to 0. Doing so effectively disables automatic memory usage reporting and trimming.
Default: 2
- MemoryPollerMallocBloatThreshold
- Threshold of glibc memory bloat.
The memory poller calls glibc function malloc_info() to obtain the amount of free memory in malloc. It then compares MemoryPollerMallocBloatThreshold (by default, 0.3) with the following expression:
free-memory-in-malloc / RSS
If this expression evaluates to a value higher than MemoryPollerMallocBloatThreshold, the memory poller calls glibc function malloc_trim(). This function reclaims free memory from malloc and returns it to the operating system. Details on calls to malloc_trim() are written to system table MEMORY_EVENTS.
To disable polling of this threshold, set the parameter to 0.
Default: 0.3
- MemoryPollerReportThreshold
- Threshold of memory usage that determines whether the Vertica memory poller writes a report.
The memory poller compares MemoryPollerReportThreshold (by default, 0.93) with the following expression:
RSS / available-memory
When this expression evaluates to a value higher than MemoryPollerReportThreshold, the memory poller writes a report to MemoryReport.log in the Vertica working directory. This report includes information about Vertica memory pools, how much memory is consumed by individual queries and sessions, and so on. The memory poller also logs the report as an event in system table MEMORY_EVENTS, where it sets EVENT_TYPE to MEMORY_REPORT.
To disable polling of this threshold, set the parameter to 0.
Default: 0.93
- MemoryPollerTrimThreshold
- Threshold for the memory poller to start checking whether to trim glibc-allocated memory.
The memory poller compares MemoryPollerTrimThreshold (by default, 0.83) with the following expression:
RSS / available-memory
If this expression evaluates to a value higher than MemoryPollerTrimThreshold, then the memory poller starts checking the next threshold, set in MemoryPollerMallocBloatThreshold, for glibc memory bloat.
To disable polling of this threshold, set the parameter to 0. Doing so also disables polling of MemoryPollerMallocBloatThreshold.
Default: 0.83
14 - Monitoring parameters
The following parameters control options for monitoring the Vertica database.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- EnableDataCollector
- Enables the Data Collector, which is the Workload Analyzer's internal diagnostics utility. Affects all sessions on all nodes. To disable, set to 0.
Default: 1 (enabled)
- EnableLogRotate
- Enables and disables the timed LogRotate service.
Default: 1 (enabled)
- LogRotateInterval
- Interval literal, how often the LogRotate service runs.
Default: 8h (runs every 8 hours)
- LogRotateMaxAge
- Interval literal, the maximum age of rotated logs (.gz) to keep. Rotated logs older than this value are removed. For details, see Rotating log files.
Default: 7 days
- LogRotateMaxSize
- String, the maximum size of a log (.log) file. The string has format integer{K|M|G|T}, where K is kibibytes, M is mebibytes, G is gibibytes, and T is tebibytes. Logs larger than this value are rotated. For details, see Rotating log files.
Default: 100M
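For example, to rotate logs when they exceed 500 mebibytes and run the LogRotate service every 12 hours (illustrative values):
=> ALTER DATABASE DEFAULT SET LogRotateMaxSize = '500M';
=> ALTER DATABASE DEFAULT SET LogRotateInterval = '12h';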
- SnmpTrapDestinationsList
- Defines where Vertica sends traps for SNMP. See Configuring reporting for SNMP. For example:
=> ALTER DATABASE DEFAULT SET SnmpTrapDestinationsList = 'localhost 162 public';
Default: none
- SnmpTrapsEnabled
- Enables event trapping for SNMP. See Configuring reporting for SNMP.
Default: 0
- SnmpTrapEvents
- Defines which events Vertica traps through SNMP. See Configuring reporting for SNMP. For example:
=> ALTER DATABASE DEFAULT SET SnmpTrapEvents = 'Low Disk Space, Recovery Failure';
Default: Low Disk Space, Read Only File System, Loss of K Safety, Current Fault Tolerance at Critical Level, Too Many ROS Containers, Node State Change, Recovery Failure, Stale Checkpoint, and CRC Mismatch.
- SyslogEnabled
- Enables event trapping for syslog. See Configuring reporting for syslog.
Default: 0 (disabled)
- SyslogEvents
- Defines events that generate a syslog entry. See Configuring reporting for syslog. For example:
=> ALTER DATABASE DEFAULT SET SyslogEvents = 'Low Disk Space, Recovery Failure';
Default: none
- SyslogFacility
- Defines which SyslogFacility Vertica uses. See Configuring reporting for syslog.
Default: user
15 - Numeric precision parameters
The following configuration parameters let you configure numeric precision for numeric data types. For more about using these parameters, see Numeric data type overflow with SUM, SUM_FLOAT, and AVG.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AllowNumericOverflow
- Boolean, set to one of the following:
- 1 (true): Allows silent numeric overflow. Vertica does not implicitly extend precision of numeric data types. Vertica ignores the value of NumericSumExtraPrecisionDigits.
- 0 (false): Vertica produces an overflow error if a result exceeds the precision set by NumericSumExtraPrecisionDigits.
Default: 1 (true)
- NumericSumExtraPrecisionDigits
- An integer between 0 and 20, inclusive. Vertica produces an overflow error if a result exceeds the specified precision. This parameter setting only applies if AllowNumericOverflow is set to 0 (false).
Default: 6 (places beyond the DDL-specified precision)
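For example, to produce overflow errors instead of allowing silent overflow, permitting up to 10 extra digits beyond the DDL-specified precision (illustrative values):
=> ALTER DATABASE DEFAULT SET AllowNumericOverflow = 0;
=> ALTER DATABASE DEFAULT SET NumericSumExtraPrecisionDigits = 10;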
16 - Profiling parameters
The following table describes the profiling parameters for configuring Vertica. See Profiling database performance for more information on profiling queries.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- GlobalEEProfiling
- Enables profiling for query execution runs in all sessions on all nodes.
Default: 0
- GlobalQueryProfiling
- Enables query profiling for all sessions on all nodes.
Default: 0
- GlobalSessionProfiling
- Enables session profiling for all sessions on all nodes.
Default: 0
- SaveDCEEProfileThresholdUS
- Sets in microseconds the query duration threshold for saving profiling information to system tables QUERY_CONSUMPTION and EXECUTION_ENGINE_PROFILES. You can set this parameter to a maximum value of 2147483647 (2^31-1, or ~35.79 minutes).
Default: 60000000 (60 seconds)
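For example, to save profiling information only for queries that run longer than 10 seconds (an illustrative threshold):
=> ALTER DATABASE DEFAULT SET SaveDCEEProfileThresholdUS = 10000000;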
17 - Projection parameters
The following configuration parameters help you manage projections.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AnalyzeRowCountInterval
- Specifies how often Vertica checks the number of projection rows and whether the threshold set by ARCCommitPercentage has been crossed. For more information, see Collecting database statistics.
Default: 86400 seconds (24 hours)
- ARCCommitPercentage
- Sets the threshold percentage of difference between the last-recorded aggregate projection row count and current row count for a given table. When the difference exceeds this threshold, Vertica updates the catalog with the current row count.
Default: 3 (percent)
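For example, to check row counts every 12 hours and update the catalog when the recorded and current counts diverge by more than 5 percent (illustrative values):
=> ALTER DATABASE DEFAULT SET AnalyzeRowCountInterval = 43200;
=> ALTER DATABASE DEFAULT SET ARCCommitPercentage = 5;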
- ContainersPerProjectionLimit
- Specifies how many ROS containers Vertica creates per projection before ROS pushback occurs.
Caution
Increasing this parameter's value can cause serious degradation of database performance. Vertica strongly recommends that you not modify this parameter without first consulting with Customer Support professionals.
Default: 1024
- MaxAutoSegColumns
- Specifies the number of columns (0–1024) to use in an auto-projection's hash segmentation clause. Set to 0 to use all columns.
Default: 8
- MaxAutoSortColumns
- Specifies the number of columns (0–1024) to use in an auto-projection's sort expression. Set to 0 to use all columns.
Default: 8
- RebalanceQueryStorageContainers
- By default, prior to performing a rebalance, Vertica performs a system table query to compute the size of all projections involved in the rebalance task. This query enables Vertica to optimize the rebalance to most efficiently utilize available disk space. This query can, however, significantly increase the time required to perform the rebalance.
By disabling the system table query, you can reduce the time required to perform the rebalance. If your nodes are low on disk space, disabling the query increases the chance that a node runs out of disk space. In that situation, the rebalance fails.
Default: 1 (enabled)
- RewriteQueryForLargeDim
- If enabled (1), Vertica rewrites a SET USING or DEFAULT USING query during a REFRESH_COLUMNS operation by reversing the inner and outer join between the target and source tables. Doing so can optimize refresh performance in cases where the source data is in a table that is larger than the target table.
Important
Enable this parameter only if the SET USING source data is in a table that is larger than the target table. If the source data is in a table smaller than the target table, then enabling RewriteQueryForLargeDim can adversely affect refresh performance.
Default: 0
- SegmentAutoProjection
- Determines whether auto-projections are segmented if the table definition omits a segmentation clause. You can set this parameter at database and session scopes.
Default: 1 (create segmented auto projections)
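For example, to create unsegmented auto-projections for tables created in the current session (an illustrative session-level override):
=> ALTER SESSION SET SegmentAutoProjection = 0;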
18 - S3 parameters
Use the following parameters to configure reading from S3 file systems and on-premises storage with S3-compatible APIs, using COPY. For more information about reading data from S3, see S3 Object Store.
For external tables using highly partitioned data in an object store, see the ObjectStoreGlobStrategy configuration parameter and Partitions on Object Stores.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- AWSAuth
- ID and secret key for authentication. For extra security, do not store credentials in the database; use ALTER SESSION...SET PARAMETER to set this value for the current session only. If you use a shared credential, you can set it in the database with ALTER DATABASE...SET PARAMETER. For example:
=> ALTER SESSION SET AWSAuth='ID:secret';
In AWS, these arguments are named AccessKeyID and SecretAccessKey.
To use admintools create_db or revive_db for Eon Mode on-premises, create a configuration file called auth_params.conf with these settings:
AWSAuth = key:secret
AWSEndpoint = IP:port
- AWSCAFile
- File name of the TLS server certificate bundle to use. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter. For example:
=> ALTER DATABASE DEFAULT SET AWSCAFile = '/etc/ssl/ca-bundle.pem';
Default: system-dependent
- AWSCAPath
- Path Vertica uses to look up TLS server certificate bundles. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter. For example:
=> ALTER DATABASE DEFAULT SET AWSCAPath = '/etc/ssl/';
Default: system-dependent
- AWSEnableHttps
- Boolean, specifies whether to use the HTTPS protocol when connecting to S3. This parameter can be set only at the database level with ALTER DATABASE. If you choose not to use TLS, set this parameter to 0.
Default: 1 (enabled)
- AWSEndpoint
- Endpoint to use when interpreting S3 URLs, set as follows.
Important
Do not include http(s):// for AWS endpoints.
- AWS: hostname_or_ip:port_number.
- AWS with a FIPS-compliant S3 Endpoint: S3_hostname, and enable virtual addressing:
AWSEndpoint = s3-fips.dualstack.us-east-1.amazonaws.com
S3EnableVirtualAddressing = 1
- On-premises/Pure: IP address of the Pure Storage server. If using admintools create_db or revive_db, create configuration file auth_params.conf and include these settings:
awsauth = key:secret
awsendpoint = IP:port
When AWSEndpoint is not set, the default behavior is to use virtual-hosted request URLs.
Default: s3.amazonaws.com
- AWSLogLevel
- Log level, one of the following:
- OFF
- FATAL
- ERROR
- WARN
- INFO
- DEBUG
- TRACE
Default: ERROR
- AWSRegion
- AWS region containing the S3 bucket from which to read files. This parameter can only be configured with one region at a time. If you need to access buckets in multiple regions, change the parameter each time you change regions.
If you do not set the correct region, you might experience a delay before queries fail because Vertica retries several times before giving up.
Default: us-east-1
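For example, before querying a bucket in a different region (an illustrative region shown):
=> ALTER DATABASE DEFAULT SET AWSRegion = 'eu-west-1';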
- AWSSessionToken
- Temporary security token generated by running the get-session-token command, which generates temporary credentials you can use to configure multi-factor authentication.
Set this parameter in a user session using ALTER SESSION. You can set this parameter at the database level, but be aware that session tokens are temporary. When the token expires, any attempt to access AWS fails.
Note
If you use session tokens at the session level, you must set all parameters at the session level, even if some of them are set at the database level. Use ALTER SESSION to set session parameters.
- AWSStreamingConnectionPercentage
- Controls the number of connections to the communal storage that Vertica uses for streaming reads. In a cloud environment, this setting helps prevent streaming data from communal storage using up all available file handles. It leaves some file handles available for other communal storage operations.
Due to the low latency of on-premises object stores, this option is unnecessary for an Eon Mode database that uses on-premises communal storage. In this case, disable the parameter by setting it to 0.
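For example, to disable this setting for an Eon Mode database that uses on-premises communal storage:
=> ALTER DATABASE DEFAULT SET AWSStreamingConnectionPercentage = 0;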
- S3BucketConfig
- A JSON array of objects specifying per-bucket configuration overrides. Each property other than the bucket name has a corresponding configuration parameter (shown in parentheses). If both the database-level parameter and its equivalent in S3BucketConfig are set, the value in S3BucketConfig takes precedence.
Properties:
- bucket: Name of the bucket
- region (AWSRegion): Name of the region
- protocol (AWSEnableHttps): Connection protocol, either http or https
- endpoint (AWSEndpoint): Endpoint URL or IP address
- enableVirtualAddressing (S3EnableVirtualAddressing): Whether to rewrite the S3 URL to use a virtual hosted path
- requesterPays (S3RequesterPays): Whether the requester (instead of the bucket owner) pays the cost of accessing data on the bucket
- serverSideEncryption (S3ServerSideEncryption): Encryption algorithm if using SSE-S3 or SSE-KMS, one of AES256, aws:kms, or an empty string
- sseCustomerAlgorithm (S3SseCustomerAlgorithm): Encryption algorithm if using SSE-C; must be AES256
- sseCustomerKey (S3SseCustomerKey): Key if using SSE-C encryption, either 32-character plaintext or 44-character base64-encoded
- sseKmsKeyId (S3SseKmsKeyId): Key ID if using SSE-KMS encryption
- proxy (S3Proxy): HTTP(S) proxy string
The configuration properties for a given bucket might differ based on its type. For example, the following S3BucketConfig is for an AWS bucket AWSBucket and a Pure Storage bucket PureStorageBucket. AWSBucket doesn't specify an endpoint, so Vertica uses the value of AWSEndpoint, which defaults to s3.amazonaws.com:
=> ALTER DATABASE DEFAULT SET S3BucketConfig=
'[
{
"bucket": "AWSBucket",
"region": "us-east-2",
"protocol": "https",
"requesterPays": true
},
{
"bucket": "PureStorageBucket",
"endpoint": "pure.mycorp.net:1234",
"protocol": "http",
"enableVirtualAddressing": false
}
]';
- S3BucketCredentials
- Contains credentials for accessing an S3 bucket. Each property in S3BucketCredentials has an equivalent parameter (shown in parentheses). When set, S3BucketCredentials takes precedence over both AWSAuth and AWSSessionToken.
Providing credentials for more than one bucket authenticates to them simultaneously, allowing you to perform cross-endpoint joins, export from one bucket to another, etc.
Properties:
- bucket: Name of the bucket
- accessKey: Access key for the bucket (the ID in AWSAuth)
- secretAccessKey: Secret access key for the bucket (the secret in AWSAuth)
- sessionToken: Session token, only used when S3BucketCredentials is set at the session level (AWSSessionToken)
For example, the following S3BucketCredentials is for an AWS bucket AWSBucket and a Pure Storage bucket PureStorageBucket and sets all possible properties:
ALTER SESSION SET S3BucketCredentials='
[
{
"bucket": "AWSBucket",
"accessKey": "<AK0>",
"secretAccessKey": "<SAK0>",
"sessionToken": "1234567890"
},
{
"bucket": "PureStorageBucket",
"accessKey": "<AK1>",
"secretAccessKey": "<SAK1>"
}
]';
This parameter is only visible to the superuser. Users can set this parameter at the session level with ALTER SESSION.
- S3EnableVirtualAddressing
- Boolean, specifies whether to rewrite S3 URLs to use virtual-hosted paths. For example, if you use AWS, the S3 URLs change to bucketname.s3.amazonaws.com instead of s3.amazonaws.com/bucketname. This configuration setting takes effect only when you have specified a value for AWSEndpoint.
If you set AWSEndpoint to a FIPS-compliant S3 Endpoint, you must enable S3EnableVirtualAddressing in auth_params.conf:
AWSEndpoint = s3-fips.dualstack.us-east-1.amazonaws.com
S3EnableVirtualAddressing = 1
The value of this parameter does not affect how you specify S3 paths.
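For example, to enable virtual-hosted addressing at the database level (assuming AWSEndpoint is already set):
=> ALTER DATABASE DEFAULT SET S3EnableVirtualAddressing = 1;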
Default: 0 (disabled)
Note
As of September 30, 2020, AWS requires virtual address paths for newly created buckets.
- S3Proxy
- HTTP(S) proxy settings, if needed, a string in the following format:
http[s]://[user:password@]host[:port]
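For example, to route S3 traffic through a proxy (the host, port, and credentials shown are illustrative):
=> ALTER DATABASE DEFAULT SET S3Proxy = 'http://user:password@proxy.example.com:3128';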
Default: "" (empty string)
- S3RequesterPays
- Boolean, specifies whether the requester (instead of the bucket owner) pays the cost of accessing data on the bucket. When true, the bucket owner pays only the cost of storing the data, rather than all costs associated with the bucket. This parameter must be set in order to access S3 buckets configured as Requester Pays buckets; by setting it to true, you accept the charges for accessing data. If not specified, the default value is false.
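For example, to accept Requester Pays charges for the database:
=> ALTER DATABASE DEFAULT SET S3RequesterPays = 1;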
- S3ServerSideEncryption
- String, encryption algorithm to use when reading or writing to S3. The value depends on which type of encryption at rest is configured for S3:
- AES256: Use for SSE-S3 encryption
- aws:kms: Use for SSE-KMS encryption
- Empty string (""): No encryption
SSE-C encryption does not use this parameter. Instead, see S3SseCustomerAlgorithm.
For details on using SSE parameters, see S3 object store.
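For example, to use SSE-KMS encryption:
=> ALTER DATABASE DEFAULT SET S3ServerSideEncryption = 'aws:kms';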
Default: ""
(no encryption)
- S3SseCustomerAlgorithm
- String, the encryption algorithm to use when reading or writing to S3 using SSE-C encryption. The only supported values are AES256 and "".
For SSE-S3 and SSE-KMS, instead use S3ServerSideEncryption.
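For example, to enable SSE-C encryption (S3SseCustomerKey must also be set):
=> ALTER DATABASE DEFAULT SET S3SseCustomerAlgorithm = 'AES256';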
Default: ""
(no encryption)
- S3SseCustomerKey
- If using SSE-C encryption, the client key for S3 access.
- S3SseKmsKeyId
- If using SSE-KMS encryption, the key identifier (not the key) to pass to the Key Management Server. Vertica must have permission to use the key, which is managed through KMS.
19 - SNS parameters
The following parameters configure Amazon Simple Notification Service (SNS) notifiers. These parameters can only be set at the database level and some, if unset, fall back to their equivalent S3 parameters.
Notifiers must be disabled and then reenabled for these parameters to take effect:
=> ALTER NOTIFIER sns_notifier DISABLE;
=> ALTER NOTIFIER sns_notifier ENABLE;
- SNSAuth (falls back to AWSAuth)
- ID and secret key for authentication, equivalent to the AccessKeyID and SecretAccessKey in AWS. For example:
=> ALTER DATABASE DEFAULT SET SNSAuth='myID:mySecret';
Default: "" (empty string)
- SNSCAFile (falls back to AWSCAFile)
- File name of the TLS server certificate bundle to use. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter.
=> ALTER DATABASE DEFAULT SET SNSCAFile = '/etc/ssl/ca-bundle.pem';
Default: "" (empty string)
- SNSCAPath (falls back to AWSCAPath)
- Path Vertica uses to look up TLS server certificate bundles. If set, this parameter overrides the Vertica default CA bundle path specified in the SystemCABundlePath parameter.
=> ALTER DATABASE DEFAULT SET SNSCAPath = '/etc/ssl/';
Default: "" (empty string)
- SNSEnableHttps
- Boolean, specifies whether to use the HTTPS protocol when connecting to SNS; can be set only at the database level with ALTER DATABASE. If you choose not to use TLS, this parameter must be set to 0.
Default: 1 (enabled)
- SNSEndpoint
- URL of the SNS API endpoint. If this parameter is set to an empty string and the region is specified (either by SNSRegion or its fallback to AWSRegion), Vertica automatically infers the appropriate endpoint.
If you use FIPS, you must manually specify a FIPS-compliant endpoint.
Default: "" (empty string)
- SNSRegion (falls back to AWSRegion)
- AWS region for the SNSEndpoint. This parameter can only be configured with one region at a time.
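For example (the region shown is illustrative):
=> ALTER DATABASE DEFAULT SET SNSRegion = 'us-east-1';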
Default: "" (empty string)
20 - SQS parameters
The following parameters configure access to Amazon Simple Queue Service (SQS) queues. SQS is used with CREATE DATA LOADER to automatically load data from a queue into Vertica. These parameters can be set at the database or session level and some, if unset, fall back to their equivalent S3 parameters.
- SQSAuth (falls back to AWSAuth)
- ID and secret key for authentication, equivalent to the AccessKeyID and SecretAccessKey in AWS. For example:
=> ALTER DATABASE DEFAULT SET SQSAuth='myID:mySecret';
Default: "" (empty string)
- SQSQueueCredentials
- Per-queue credentials for accessing SQS queues, a JSON array. When set, these values take precedence over SQSAuth.
Properties:
- queue: Name of the queue
- accessKey: Access key for the queue (the ID in AWSAuth)
- secretAccessKey: Secret access key for the queue (the secret in AWSAuth)
- sessionToken: AWS session token, only used when SQSQueueCredentials is set at the session level
The following example sets credentials for two queues:
=> ALTER SESSION SET SQSQueueCredentials='[
{"queue":"queue0","accessKey":"<AK0>","secretAccessKey":"<SAK0>","sessionToken":"1234567890"},
{"queue":"queue1","accessKey":"foo","secretAccessKey":"bar"}]';
Default: "" (empty string)
- SQSSessionToken
- Temporary, session-level SQS access token. Set this parameter in a user session using ALTER SESSION.
If you set a session token at the session level, you must also set SQSAuth at the session level, even if it is already set at the database level.
In general, SQSSessionToken should be set only at the session level. While you can set this parameter at the database level, session tokens are ephemeral, so when the database-level token expires, you must set it again either at the session or database level.
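For example, to use a temporary token in a session (credential values here are placeholders), set both SQSAuth and SQSSessionToken:
=> ALTER SESSION SET SQSAuth = 'myID:mySecret';
=> ALTER SESSION SET SQSSessionToken = '<token>';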
21 - Security parameters
Use these client authentication configuration parameters and general security parameters to configure TLS.
Query the CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
Database parameters
- DataSSLParams
- This parameter has been deprecated. Use the data_channel TLS Configuration instead.
Enables encryption using SSL on the data channel. The value of this parameter is a comma-separated list of the following:
- An SSL certificate (chainable)
- The corresponding SSL private key
- The SSL CA (Certificate Authority) certificate
You should set EncryptSpreadComm before setting this parameter.
In the following example, the SSL Certificate contains two certificates, where the certificate for the non-root CA verifies the certificate for the cluster. This is called an SSL Certificate Chain.
=> ALTER DATABASE DEFAULT SET PARAMETER DataSSLParams =
'-----BEGIN CERTIFICATE-----<certificate for Cluster>-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----<certificate for non-root CA>-----END CERTIFICATE-----,
-----BEGIN RSA PRIVATE KEY-----<private key for Cluster>-----END RSA PRIVATE KEY-----,
-----BEGIN CERTIFICATE-----<certificate for public CA>-----END CERTIFICATE-----';
- DefaultIdleSessionTimeout
- Indicates a default session timeout value for all users where IDLESESSIONTIMEOUT is not set. For example:
=> ALTER DATABASE DEFAULT SET defaultidlesessiontimeout = '300 secs';
- DHParams
- Deprecated: This parameter has been deprecated and will be removed in a future release.
String, a Diffie-Hellman group of at least 2048 bits in the form:
-----BEGIN DH PARAMETERS-----...-----END DH PARAMETERS-----
You can generate your own or use the pre-calculated Modular Exponential (MODP) Diffie-Hellman groups specified in RFC 3526.
Changes to this parameter do not take effect until you restart the database.
Default: RFC 3526 2048-bit MODP Group 14:
-----BEGIN DH PARAMETERS-----MIIBCAKCAQEA///////////
JD9qiIWjCNMTGYouA3BzRKQJOCIpnzHQCC76mOxObIlFKCHmONAT
d75UZs806QxswKwpt8l8UN0/hNW1tUcJF5IW1dmJefsb0TELppjf
tawv/XLb0Brft7jhr+1qJn6WunyQRfEsf5kkoZlHs5Fs9wgB8uKF
jvwWY2kg2HFXTmmkWP6j9JM9fg2VdI9yjrZYcYvNWIIVSu57VKQd
wlpZtZww1Tkq8mATxdGwIyhghfDKQXkYuNs474553LBgOhgObJ4O
i7Aeij7XFXfBvTFLJ3ivL9pVYFxg5lUl86pVq5RXSJhiY+gUQFXK
OWoqsqmj//////////wIBAg==-----END DH PARAMETERS-----
- DisableInheritedPrivileges
- Boolean, whether to disable privilege inheritance at the database level. Privilege inheritance is enabled by default.
- DoUserSpecificFilteringInSysTables
- Boolean, specifies whether a non-superuser can view details of another user.
Default: 0
- EnableAllRolesOnLogin
- Boolean, specifies whether to automatically enable all roles granted to a user on login:
- 0: Do not automatically enable roles
- 1: Automatically enable roles. With this setting, users do not need to run SET ROLE.
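For example, to enable all granted roles on login:
=> ALTER DATABASE DEFAULT SET EnableAllRolesOnLogin = 1;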
Default: 0 (disable)
- EnabledCipherSuites
- Specifies which SSL cipher suites to use for secure client-server communication. Changes to this parameter apply only to new connections.
Default: Vertica uses the Microsoft Schannel default cipher suites. For more information, see the Schannel documentation.
- EnableConnectCredentialForwarding
- When using CONNECT TO VERTICA, whether to use the password of the current user to authenticate to a target database.
Default: 0 (disable)
- EnableOAuth2JITCleanup
- If enabled, users created by just-in-time OAuth provisioning are automatically dropped if the user does not log in after the number of days specified by OAuth2UserExpiredInterval.
To view provisioned users, see USERS.
Default: 0 (disable)
- EncryptSpreadComm
- Enables Spread encryption on the control channel, set to one of the following strings:
- vertica: Specifies that Vertica generates the Spread encryption key for the database cluster.
- aws-kms|key-name, where key-name is a named key in the AWS Key Management Service (KMS). On database restart, Vertica fetches the named key from the KMS instead of generating its own.
If the parameter is empty, Spread communication is unencrypted. In general, you should enable this parameter before modifying other security parameters.
Enabling this parameter requires a database restart.
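For example, to have Vertica generate the Spread encryption key for the cluster:
=> ALTER DATABASE DEFAULT SET EncryptSpreadComm = 'vertica';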
- GlobalHeirUsername
- A string that specifies which user inherits objects after their owners are dropped. This setting ensures preservation of data that would otherwise be lost.
Set this parameter to one of the following string values:
- Empty string: Objects of dropped users are removed from the database.
- username: Reassigns objects of dropped users to username. If username does not exist, Vertica creates that user and sets GlobalHeirUsername to it.
- <auto>: Reassigns objects of dropped LDAP or just-in-time-provisioned users to the dbadmin user. The brackets (< and >) are required for this option.
For more information about usage, see Examples.
Default: <auto>
- ImportExportTLSMode
- When using CONNECT TO VERTICA to connect to another Vertica cluster for import or export, specifies the degree of stringency for using TLS. Possible values are:
- PREFER: Try TLS but fall back to plaintext if TLS fails.
- REQUIRE: Use TLS and fail if the server does not support TLS.
- VERIFY_CA: Require TLS (as with REQUIRE), and also validate the other server's certificate using the CA specified by the "server" TLS Configuration's CA certificates (in this case, "ca_cert" and "ica_cert"):
=> SELECT name, certificate, ca_certificate, mode FROM tls_configurations WHERE name = 'server';
name | certificate | ca_certificate | mode
--------+------------------+--------------------+-----------
server | server_cert | ca_cert,ica_cert | VERIFY_CA
(1 row)
- VERIFY_FULL: Require TLS and validate the certificate (as with VERIFY_CA), and also validate the server certificate's hostname.
- REQUIRE_FORCE, VERIFY_CA_FORCE, and VERIFY_FULL_FORCE: Same behavior as REQUIRE, VERIFY_CA, and VERIFY_FULL, respectively, and cannot be overridden by CONNECT TO VERTICA.
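For example, to require TLS with CA validation for import and export connections:
=> ALTER DATABASE DEFAULT SET ImportExportTLSMode = 'VERIFY_CA';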
Default: PREFER
- InternodeTLSConfig
- The TLS Configuration to use for internode encryption.
For example:
=> ALTER DATABASE DEFAULT SET InternodeTLSConfig = my_tls_config;
Default: data_channel
- LDAPAuthTLSConfig
- The TLS Configuration to use for TLS with LDAP authentication.
For example:
=> ALTER DATABASE DEFAULT SET LDAPAuthTLSConfig = my_tls_config;
Default: ldapauth
- LDAPLinkTLSConfig
- The TLS Configuration to use for TLS for the LDAP Link service.
For example:
=> ALTER DATABASE DEFAULT SET LDAPLinkTLSConfig = my_tls_config;
Default: ldaplink
- OAuth2JITClient
- Deprecated: This parameter is deprecated. Use the OAuth2JITRolesClaimName parameter instead.
The client/application name that contains Vertica roles in the Keycloak identity provider. The Vertica roles under resource_access.OAuth2JITClient.roles are automatically granted to and set as default roles for users created by just-in-time provisioning. Roles that do not exist in Vertica are ignored.
For details, see Just-in-time user provisioning.
Default: vertica
- OAuth2JITRolesClaimName
- Specifies an IdP client with one or more roles that correspond to a Vertica database role. During just-in-time user provisioning, each client role that matches an existing Vertica role is automatically assigned to the user. Roles that do not exist in Vertica are ignored.
For authentication records that use JWT validation, Vertica retrieves the client roles from the access token.
To retrieve client roles for authentication records that use IdP validation, Vertica sends a request to either the introspect_url or the userinfo_url set in the authentication record. This parameter requires the full path to the roles claim in that request. For example, the full path to the roles claim in a Keycloak request uses the following format:
resource_access.clientName.roles
For details, see Just-in-time user provisioning.
Default: resource_access.vertica.roles
- OAuth2JITForbiddenRoles
- Vertica database roles that you do not want Vertica to automatically assign to users during just-in-time provisioning. This parameter accepts one or more roles as a comma-separated list.
- OAuth2JITGroupsClaimName
- The name of the IdP group whose name or role mappings correspond to Vertica database roles. During just-in-time user provisioning, each group name or group role mapping that matches an existing Vertica role is automatically assigned to the user. Roles that do not exist in Vertica are ignored.
For authentication records that use JWT validation, Vertica retrieves the JIT-provisioned user's groups from the access token.
To retrieve the JIT-provisioned user's groups for authentication records that use IdP validation, Vertica sends a request to either the introspect_url or the userinfo_url set in the authentication record.
Default: groups
- OAuth2UserExpiredInterval
- If EnableOAuth2JITCleanup is enabled, users created by just-in-time OAuth provisioning are automatically dropped after not logging in for the number of days specified by OAuth2UserExpiredInterval. The number of days a user has not logged in is calculated relative to the LAST_LOGIN_TIME column in the USERS system table.
Note
The LAST_LOGIN_TIME recorded in the USERS system table is not persistent; if the database is restarted, the LAST_LOGIN_TIME for users created by just-in-time user provisioning is set to the database start time (this appears as an empty value in LAST_LOGIN_TIME).
You can view the database start time by querying the DATABASES system table:
=> SELECT database_name, start_time FROM databases;
database_name | start_time
---------------+-------------------------------
VMart | 2023-02-06 14:26:50.630054-05
(1 row)
Default: 14
- PasswordLockTimeUnit
- The time units for which an account is locked by PASSWORD_LOCK_TIME after FAILED_LOGIN_ATTEMPTS, one of the following:
- 'd': days (default)
- 'h': hours
- 'm': minutes
- 's': seconds
For example, to configure the default profile to lock user accounts for 30 minutes after three unsuccessful login attempts:
=> ALTER DATABASE DEFAULT SET PasswordLockTimeUnit = 'm';
=> ALTER PROFILE DEFAULT LIMIT PASSWORD_LOCK_TIME 30;
- RequireFIPS
- Boolean, specifies whether FIPS mode is enabled. On startup, Vertica automatically sets this parameter from the contents of the file crypto.fips_enabled. You cannot modify this parameter.
For details, see FIPS compliance for the Vertica server.
Default: 0
- SecurityAlgorithm
- Sets the algorithm for the function that hash authentication uses. For example:
=> ALTER DATABASE DEFAULT SET SecurityAlgorithm = 'SHA512';
Default: SHA512
- ServerTLSConfig
- The TLS Configuration to use for client-server TLS.
For example:
=> ALTER DATABASE DEFAULT SET ServerTLSConfig = my_tls_config;
Default: server
- SystemCABundlePath
- The absolute path to a certificate bundle of trusted CAs. This CA bundle is used when establishing TLS connections to external services such as AWS or Azure through their respective SDKs and libcurl. The CA bundle file must be in the same location on all nodes.
If this parameter is empty, Vertica searches the "standard" paths for the CA bundles, which differs between distributions:
- Red Hat-based: /etc/pki/tls/certs/ca-bundle.crt
- Debian-based: /etc/ssl/certs/ca-certificates.crt
- SUSE: /var/lib/ca-certificates/ca-bundle.pem
Example:
=> ALTER DATABASE DEFAULT SET SystemCABundlePath = 'path/to/ca_bundle.pem';
Default: Empty
TLS parameters
To set your Vertica database's TLSMode, private key, server certificate, and CA certificate(s), see TLS configurations. In versions prior to 11.0.0, these parameters were known as EnableSSL, SSLPrivateKey, SSLCertificate, and SSLCA, respectively.
Examples
Set the database parameter GlobalHeirUsername:
=> \du
List of users
User name | Is Superuser
-----------+--------------
Joe | f
SuzyQ | f
dbadmin | t
(3 rows)
=> ALTER DATABASE DEFAULT SET PARAMETER GlobalHeirUsername='SuzyQ';
ALTER DATABASE
=> \c - Joe
You are now connected as user "Joe".
=> CREATE TABLE t1 (a int);
CREATE TABLE
=> \c
You are now connected as user "dbadmin".
=> \dt t1
List of tables
Schema | Name | Kind | Owner | Comment
--------+------+-------+-------+---------
public | t1 | table | Joe |
(1 row)
=> DROP USER Joe;
NOTICE 4927: The Table t1 depends on User Joe
ROLLBACK 3128: DROP failed due to dependencies
DETAIL: Cannot drop User Joe because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too
=> DROP USER Joe CASCADE;
DROP USER
=> \dt t1
List of tables
Schema | Name | Kind | Owner | Comment
--------+------+-------+-------+---------
public | t1 | table | SuzyQ |
(1 row)
22 - Stored procedure parameters
The following parameters configure the behavior of stored procedures and triggers.
- EnableNestedStoredProcedures
- Boolean, whether to enable nested stored procedures.
Default: 0
- EnableStoredProcedureScheduler
- Boolean, whether to enable the scheduler. For details, see Scheduled execution.
Default: 1
- PLvSQLCoerceNull
- Boolean, whether to allow NULL-to-false type coercion to improve compatibility with PLpgSQL. For details, see PL/pgSQL to PL/vSQL migration guide.
Default: 0
23 - Text search parameters
You can configure Vertica for text search using the following parameter.
- TextIndexMaxTokenLength
- Controls the maximum size of a token in a text index.
For example:
ALTER DATABASE database_name SET PARAMETER TextIndexMaxTokenLength=760;
If the parameter is set to a value greater than 65000 characters, then the tokenizer truncates the token at 65000 characters.
Caution
Avoid setting this parameter near its maximum value, 65000. Doing so can result in a significant decrease in performance. For optimal performance, set this parameter to the maximum token value of your tokenizer.
Default: 128 (characters)
24 - Tuple mover parameters
These parameters control how the Tuple Mover operates.
Query the
CONFIGURATION_PARAMETERS system table to determine what levels (node, session, user, database) are valid for a given parameter.
- ActivePartitionCount
- Sets the number of active partitions. The active partitions are those most recently created. For example:
=> ALTER DATABASE DEFAULT SET ActivePartitionCount = 2;
For information about how the Tuple Mover treats active and inactive partitions during a mergeout operation, see Partition mergeout.
Default: 1
- CancelTMTimeout
- When partition, copy table, and rebalance operations encounter a conflict with an internal Tuple Mover job, those operations attempt to cancel the conflicting Tuple Mover job. This parameter specifies the amount of time, in seconds, that the blocked operation waits for the Tuple Mover cancellation to take effect. If the operation is unable to cancel the Tuple Mover job within the limit specified by this parameter, the operation displays an error and rolls back.
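For example, to double the default wait time (the value shown is illustrative):
=> ALTER DATABASE DEFAULT SET CancelTMTimeout = 600;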
Default: 300
- EnableTMOnRecoveringNode
- Boolean, specifies whether Tuple Mover performs mergeout activities on nodes with a node state of RECOVERING. Enabling Tuple Mover reduces the number of ROS containers generated during recovery. Having fewer than 1024 ROS containers per projection allows Vertica to maintain optimal recovery performance.
Default: 1 (enabled)
- MaxMrgOutROSSizeMB
- Specifies in MB the maximum size of ROS containers that are candidates for mergeout operations. The Tuple Mover avoids merging ROS containers that are larger than this setting.
Note
After a
rebalance operation, Tuple Mover groups ROS containers in batches that are smaller than MaxMrgOutROSSizeMB. ROS containers that are larger than MaxMrgOutROSSizeMB are merged individually
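For example, to exclude ROS containers larger than 5 GB from mergeout (the value shown is illustrative):
=> ALTER DATABASE DEFAULT SET MaxMrgOutROSSizeMB = 5120;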
Default: -1 (no maximum limit)
- MergeOutInterval
- Specifies in seconds how frequently the Tuple Mover checks the mergeout request queue for pending requests:
- If the queue contains mergeout requests, the Tuple Mover does nothing and goes back to sleep.
- If the queue is empty, the Tuple Mover performs any other pending Tuple Mover work and then goes back to sleep.
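For example, to check the queue every five minutes (the value shown is illustrative):
=> ALTER DATABASE DEFAULT SET MergeOutInterval = 300;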
Default: 600
- PurgeMergeoutPercent
- Specifies as a percentage the threshold of deleted records in a ROS container that invokes an automatic mergeout operation, to purge those records. Vertica only counts the number of 'aged-out' delete vectors—that is, delete vectors that are as 'old' or older than the ancient history mark (AHM) epoch.
This threshold applies to all ROS containers for non-partitioned tables. It also applies to ROS containers of all inactive partitions. In both cases, aged-out delete vectors are permanently purged from the ROS container.
Note
This configuration parameter only applies to automatic mergeout operations. It does not apply to manual mergeout operations that are invoked by calling the meta-functions DO_TM_TASK('mergeout') and PURGE.
Default: 20 (percent)
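For example, to trigger automatic purge mergeout at a lower threshold of deleted records (the value shown is illustrative):
=> ALTER DATABASE DEFAULT SET PurgeMergeoutPercent = 10;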