The following functions are meant to be used in Eon Mode.
Eon Mode functions
- 1: ALTER_LOCATION_SIZE
- 2: BACKGROUND_DEPOT_WARMING
- 3: CANCEL_DEPOT_WARMING
- 4: CANCEL_DRAIN_SUBCLUSTER
- 5: CLEAN_COMMUNAL_STORAGE
- 6: CLEAR_DATA_DEPOT
- 7: CLEAR_DEPOT_ANTI_PIN_POLICY_PARTITION
- 8: CLEAR_DEPOT_ANTI_PIN_POLICY_PROJECTION
- 9: CLEAR_DEPOT_ANTI_PIN_POLICY_TABLE
- 10: CLEAR_DEPOT_PIN_POLICY_PARTITION
- 11: CLEAR_DEPOT_PIN_POLICY_PROJECTION
- 12: CLEAR_DEPOT_PIN_POLICY_TABLE
- 13: CLEAR_FETCH_QUEUE
- 14: DEMOTE_SUBCLUSTER_TO_SECONDARY
- 15: FINISH_FETCHING_FILES
- 16: FLUSH_REAPER_QUEUE
- 17: MIGRATE_ENTERPRISE_TO_EON
- 18: PROMOTE_SUBCLUSTER_TO_PRIMARY
- 19: REBALANCE_SHARDS
- 20: RESHARD_DATABASE
- 21: SANDBOX_SUBCLUSTER
- 22: SET_DEPOT_ANTI_PIN_POLICY_PARTITION
- 23: SET_DEPOT_ANTI_PIN_POLICY_PROJECTION
- 24: SET_DEPOT_ANTI_PIN_POLICY_TABLE
- 25: SET_DEPOT_PIN_POLICY_PARTITION
- 26: SET_DEPOT_PIN_POLICY_PROJECTION
- 27: SET_DEPOT_PIN_POLICY_TABLE
- 28: SHUTDOWN_SUBCLUSTER
- 29: SHUTDOWN_WITH_DRAIN
- 30: START_DRAIN_SUBCLUSTER
- 31: START_REAPING_FILES
- 32: SYNC_CATALOG
- 33: UNSANDBOX_SUBCLUSTER
1 - ALTER_LOCATION_SIZE
Eon Mode only
Resizes the depot on one node, all nodes in a subcluster, or all nodes in the database.
Important
Reducing the size of the depot is liable to increase contention over depot usage and require frequent evictions. This behavior can increase the number of queries and load operations that are routed to communal storage for processing, which can incur slower performance and increased access charges.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Immutable
Syntax
ALTER_LOCATION_SIZE( 'location', '[target]', 'size')
Parameters
location
- Specifies the location to resize, one of the following:
  - depot: Resizes the node's current depot.
  - The depot's absolute path in the Linux filesystem. If you change the depot size on multiple nodes and specify a path, the path must be identical on all affected nodes. By default, this is not the case, as the node's name is typically part of the path. For example, the default depot path for node 1 in the verticadb database is /vertica/data/verticadb/v_verticadb_node0001_depot.
target
- The node or nodes on which to change the depot size, one of the following:
  - Node name: Resize the depot of the specified node.
  - Subcluster name: Resize the depots of all nodes in the specified subcluster.
  - Empty string: Resize all depots in the database.
size
- Valid only if the storage location usage type is set to DEPOT, specifies the maximum amount of disk space that the depot can allocate from the storage location's file system. You can specify size in two ways:
  - integer%: Percentage of storage location disk size.
  - integer{K|M|G|T}: Amount of storage location disk size in kilobytes, megabytes, gigabytes, or terabytes.
Important
The depot size cannot exceed 80 percent of the file system disk space where the depot is stored. If you specify a value that is too large, Vertica issues a warning and automatically changes the value to 80 percent of the file system size.
Privileges
Superuser
Examples
Increase depot size on all nodes to 80 percent of file system:
=> SELECT node_name, location_label, location_path, max_size, disk_percent FROM storage_locations WHERE location_usage = 'DEPOT' ORDER BY node_name;
node_name | location_label | location_path | max_size | disk_percent
------------------+-----------------+-------------------------+-------------+--------------
v_vmart_node0001 | auto-data-depot | /home/dbadmin/verticadb | 36060108800 | 70%
v_vmart_node0002 | auto-data-depot | /home/dbadmin/verticadb | 36059377664 | 70%
v_vmart_node0003 | auto-data-depot | /home/dbadmin/verticadb | 36060108800 | 70%
(3 rows)
=> SELECT alter_location_size('depot', '','80%');
alter_location_size
---------------------
depotSize changed.
(1 row)
=> SELECT node_name, location_label, location_path, max_size, disk_percent FROM storage_locations WHERE location_usage = 'DEPOT' ORDER BY node_name;
node_name | location_label | location_path | max_size | disk_percent
------------------+-----------------+-------------------------+-------------+--------------
v_vmart_node0001 | auto-data-depot | /home/dbadmin/verticadb | 41211552768 | 80%
v_vmart_node0002 | auto-data-depot | /home/dbadmin/verticadb | 41210717184 | 80%
v_vmart_node0003 | auto-data-depot | /home/dbadmin/verticadb | 41211552768 | 80%
(3 rows)
Change the depot size to 75% of the filesystem size for all nodes in the analytics subcluster:
=> SELECT subcluster_name, subclusters.node_name, storage_locations.max_size, storage_locations.disk_percent FROM subclusters INNER JOIN storage_locations ON subclusters.node_name = storage_locations.node_name WHERE storage_locations.location_usage='DEPOT';
subcluster_name | node_name | max_size | disk_percent
--------------------+----------------------+-------------+--------------
default_subcluster | v_verticadb_node0001 | 25264737485 | 60%
default_subcluster | v_verticadb_node0002 | 25264737485 | 60%
default_subcluster | v_verticadb_node0003 | 25264737485 | 60%
analytics | v_verticadb_node0004 | 25264737485 | 60%
analytics | v_verticadb_node0005 | 25264737485 | 60%
analytics | v_verticadb_node0006 | 25264737485 | 60%
analytics | v_verticadb_node0007 | 25264737485 | 60%
analytics | v_verticadb_node0008 | 25264737485 | 60%
analytics | v_verticadb_node0009 | 25264737485 | 60%
(9 rows)
=> SELECT ALTER_LOCATION_SIZE('depot','analytics','75%');
ALTER_LOCATION_SIZE
---------------------
depotSize changed.
(1 row)
=> SELECT subcluster_name, subclusters.node_name, storage_locations.max_size, storage_locations.disk_percent FROM subclusters INNER JOIN storage_locations ON subclusters.node_name = storage_locations.node_name WHERE storage_locations.location_usage='DEPOT';
subcluster_name | node_name | max_size | disk_percent
--------------------+----------------------+-------------+--------------
default_subcluster | v_verticadb_node0001 | 25264737485 | 60%
default_subcluster | v_verticadb_node0002 | 25264737485 | 60%
default_subcluster | v_verticadb_node0003 | 25264737485 | 60%
analytics | v_verticadb_node0004 | 31580921856 | 75%
analytics | v_verticadb_node0005 | 31580921856 | 75%
analytics | v_verticadb_node0006 | 31580921856 | 75%
analytics | v_verticadb_node0007 | 31580921856 | 75%
analytics | v_verticadb_node0008 | 31580921856 | 75%
analytics | v_verticadb_node0009 | 31580921856 | 75%
(9 rows)
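You can also resize the depot on a single node by passing a node name as the target. A minimal sketch, assuming a 90% target (a hypothetical value) for one of the nodes shown above:
=> SELECT ALTER_LOCATION_SIZE('depot', 'v_vmart_node0001', '90%');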
See also
Eon Mode architecture
2 - BACKGROUND_DEPOT_WARMING
Eon Mode only
Deprecated
Vertica version 10.0.0 removes support for foreground depot warming. When enabled, depot warming always happens in the background. Because foreground depot warming no longer exists, this function serves no purpose and has been deprecated. Calling it has no effect.
Forces a node that is warming its depot to start processing queries while continuing to warm its depot in the background. Depot warming only occurs when a node is joining the database and is activating its subscriptions. This function only has an effect if:
- The database is running in Eon Mode.
- The node is currently warming its depot.
- The node is warming its depot from communal storage. This is the case when the UseCommunalStorageForBatchDepotWarming configuration parameter is set to the default value of 1. See Eon Mode parameters for more information about this parameter.
After calling this function, the node warms its depot in the background while taking part in queries.
This function has no effect on a node that is not warming its depot.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
BACKGROUND_DEPOT_WARMING('node-name' [, 'subscription-name'])
Arguments
node-name
- The name of the node whose depot you want warmed in the background.
subscription-name
- The name of a shard that the node subscribes to that you want the node to warm in the background. You can find the names of the shards a node subscribes to in the SHARD_NAME column of the NODE_SUBSCRIPTIONS system table.
Note
When you supply the name of a specific shard subscription to warm in the background, the node may not immediately begin processing queries. It continues to warm any other shard subscriptions in the foreground if they are not yet warm. The node does not begin taking part in queries until it finishes warming the other subscriptions.
Return value
A message indicating that the node's warming will continue in the background.
Privileges
The user must be a superuser .
Examples
The following example demonstrates having node 6 of the verticadb database warm its depot in the background:
=> SELECT BACKGROUND_DEPOT_WARMING('v_verticadb_node0006');
BACKGROUND_DEPOT_WARMING
----------------------------------------------------------------------------
Depot warming running in background. Check monitoring tables for progress.
(1 row)
3 - CANCEL_DEPOT_WARMING
Eon Mode only
Cancels depot warming on a node. Depot warming only occurs when a node is joining the database and is activating its subscriptions. You can cancel all warming on the node, or cancel the warming of a specific shard's subscription. The node finishes whatever data transfers it is currently carrying out to warm its depot and removes pending warming-related transfers from its queue. It keeps any data it has already loaded into its depot. If you cancel warming for a specific subscription, the node stops warming its depot if all of its other subscriptions are already warm. If they are not, the node continues to warm those other subscriptions.
This function only has an effect if:
- The database is running in Eon Mode.
- The node is currently warming its depot.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CANCEL_DEPOT_WARMING('node-name' [, 'subscription-name'])
Arguments
node-name
- The name of the node whose depot warming you want canceled.
subscription-name
- The name of a shard that the node subscribes to that you want the node to stop warming. You can find the names of the shards a node subscribes to in the SHARD_NAME column of the NODE_SUBSCRIPTIONS system table.
Return value
Returns a message indicating warming has been canceled.
Privileges
The user must be a superuser.
Usage considerations
Canceling depot warming can negatively impact the performance of your queries. A node with a cold depot may have to retrieve much of its data from communal storage, which is slower than accessing the depot.
Examples
The following demonstrates canceling the depot warming taking place on node 7:
=> SELECT CANCEL_DEPOT_WARMING('v_verticadb_node0007');
CANCEL_DEPOT_WARMING
--------------------------
Depot warming cancelled.
(1 row)
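To cancel warming for a single shard subscription instead of the whole depot, pass the shard name as the second argument. A minimal sketch, assuming the node subscribes to a shard named segment0001 (a hypothetical shard name):
=> SELECT CANCEL_DEPOT_WARMING('v_verticadb_node0007', 'segment0001');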
4 - CANCEL_DRAIN_SUBCLUSTER
Eon Mode only
Cancels the draining of a subcluster or subclusters. This function can cancel draining operations that were started by either START_DRAIN_SUBCLUSTER or the draining portion of the SHUTDOWN_WITH_DRAIN function. CANCEL_DRAIN_SUBCLUSTER marks all nodes in the designated subclusters as not draining. The previously draining nodes again accept new client connections and connections redirected from load-balancing.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CANCEL_DRAIN_SUBCLUSTER( 'subcluster-name' )
Arguments
subcluster-name
- Name of the subcluster whose draining operation to cancel. Enter an empty string to cancel the draining operation on all subclusters.
Privileges
Superuser
Examples
The following example demonstrates how to cancel a draining operation on a subcluster.
First, you can query the DRAINING_STATUS system table to view which subclusters are currently draining:
=> SELECT node_name, subcluster_name, is_draining FROM draining_status ORDER BY 1;
node_name | subcluster_name | is_draining
-------------------+--------------------+-------------
verticadb_node0001 | default_subcluster | f
verticadb_node0002 | default_subcluster | f
verticadb_node0003 | default_subcluster | f
verticadb_node0004 | analytics | t
verticadb_node0005 | analytics | t
verticadb_node0006 | analytics          | t
(6 rows)
The following function call cancels the draining of the analytics subcluster:
=> SELECT CANCEL_DRAIN_SUBCLUSTER('analytics');
CANCEL_DRAIN_SUBCLUSTER
--------------------------------------------------------
Targeted subcluster: 'analytics'
Action: CANCEL DRAIN
(1 row)
To confirm that the subcluster is no longer draining, you can again query the DRAINING_STATUS system table:
=> SELECT node_name, subcluster_name, is_draining FROM draining_status ORDER BY 1;
node_name | subcluster_name | is_draining
-------------------+--------------------+-------------
verticadb_node0001 | default_subcluster | f
verticadb_node0002 | default_subcluster | f
verticadb_node0003 | default_subcluster | f
verticadb_node0004 | analytics | f
verticadb_node0005 | analytics | f
verticadb_node0006 | analytics | f
(6 rows)
5 - CLEAN_COMMUNAL_STORAGE
Eon Mode only
Marks for deletion invalid data in communal storage, often data that leaked due to an event where Vertica cleanup mechanisms failed. Events that require calling this function include:
- Node failure
- Interrupted migration of an Enterprise database to Eon
- Restoring objects from backup
Tip
It is generally good practice to call CLEAN_COMMUNAL_STORAGE soon after completing an Enterprise-to-Eon migration and reviving the migrated Eon database.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAN_COMMUNAL_STORAGE ( ['actually-delete'] )
Parameters
actually-delete
- BOOLEAN, specifies whether to queue data files for deletion:
  - true (default): Add files to the reaper queue and return immediately. The queued files are removed automatically by the reaper service, or can be removed manually by calling FLUSH_REAPER_QUEUE.
  - false: Report information about extra files but do not queue them for deletion.
Privileges
Superuser
Examples
=> SELECT CLEAN_COMMUNAL_STORAGE('true');
CLEAN_COMMUNAL_STORAGE
------------------------------------------------------------------
CLEAN COMMUNAL STORAGE
Task was canceled.
Total leaked files: 9265
Total size: 4236501526
Files have been queued for deletion.
Check communal_cleanup_records for more information.
(1 row)
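To audit communal storage without queuing anything for deletion, pass false. A minimal sketch; the reported counts vary by database:
=> SELECT CLEAN_COMMUNAL_STORAGE('false');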
6 - CLEAR_DATA_DEPOT
Eon Mode only
Deletes the specified depot data. You can clear depot data of a single table or all tables, from one subcluster, a single node, or the entire database cluster. Clearing depot data can incur extra processing time for any subsequent queries that require that data and must now fetch it from communal storage. Clearing depot data has no effect on communal storage.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DATA_DEPOT( [ '[table-name]' [, '[target-depots]'] ] )
Arguments
To clear all depot data from the database cluster, call this function with no arguments.
table-name
- Name of the table to delete from the target depots. If you omit a table name or supply an empty string, data of all tables is deleted from the target depots.
target-depots
- The depots to clear, one of the following:
  - subcluster-name: Name of the depot subcluster, default_subcluster to specify the default database subcluster.
  - node-name: Clears depot data from the specified node. Depot data on other nodes in the same subcluster is unaffected.
This argument optionally qualifies the argument for table-name. If you omit this argument or supply an empty string, Vertica clears all depot data from the database cluster.
Privileges
Superuser
Examples
Clear the cached data of one table from the specified subcluster depot:
=> SELECT CLEAR_DATA_DEPOT('t1', 'subcluster_1');
clear_data_depot
------------------
Depot cleared
(1 row)
Clear all depot data that is cached on the specified subcluster:
=> SELECT CLEAR_DATA_DEPOT('', 'subcluster_1');
clear_data_depot
------------------
Depot cleared
(1 row)
Clear all depot data that is cached on the specified node:
=> select clear_data_depot('','v_vmart_node0001');
clear_data_depot
------------------
Depot cleared
(1 row)
Clear all data of the specified table from the depots of all cluster nodes:
=> SELECT CLEAR_DATA_DEPOT('t1');
clear_data_depot
------------------
Depot cleared
(1 row)
Clear all depot data from the database cluster:
=> SELECT CLEAR_DATA_DEPOT();
clear_data_depot
------------------
Depot cleared
(1 row)
See also
Managing depot caching
7 - CLEAR_DEPOT_ANTI_PIN_POLICY_PARTITION
Eon Mode only
Removes an anti-pinning policy from the specified partition.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DEPOT_ANTI_PIN_POLICY_PARTITION( '[[database.]schema.]object-name', 'min-range-value', 'max-range-value' [, 'subcluster'] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
object-name
- Table or projection with a partition anti-pinning policy to clear.
min-range-value, max-range-value
- Range of partition keys in object-name from which to clear an anti-pinning policy, where min-range-value must be ≤ max-range-value. To specify a single partition key, min-range-value and max-range-value must be equal.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
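Examples
A minimal sketch, assuming a table public.sales partitioned by year with an anti-pinning policy on partition keys 2014 through 2016 in the analytics subcluster (hypothetical names):
=> SELECT CLEAR_DEPOT_ANTI_PIN_POLICY_PARTITION('public.sales', '2014', '2016', 'analytics');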
See also
- SET_DEPOT_ANTI_PIN_POLICY_PARTITION
8 - CLEAR_DEPOT_ANTI_PIN_POLICY_PROJECTION
Eon Mode only
Removes an anti-pinning policy from the specified projection.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DEPOT_ANTI_PIN_POLICY_PROJECTION( '[[database.]schema.]projection' [, 'subcluster' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
projection
- Projection with the anti-pinning policy to clear.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
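Examples
A minimal sketch, assuming a projection public.sales_p1 (a hypothetical name) with an anti-pinning policy that applies to all database depots:
=> SELECT CLEAR_DEPOT_ANTI_PIN_POLICY_PROJECTION('public.sales_p1');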
See also
- SET_DEPOT_ANTI_PIN_POLICY_PROJECTION
9 - CLEAR_DEPOT_ANTI_PIN_POLICY_TABLE
Eon Mode only
Removes an anti-pinning policy from the specified table.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DEPOT_ANTI_PIN_POLICY_TABLE( '[[database.]schema.]table' [, 'subcluster' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table with the anti-pinning policy to clear.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
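Examples
A minimal sketch, assuming a table public.sales with an anti-pinning policy in the analytics subcluster (hypothetical names):
=> SELECT CLEAR_DEPOT_ANTI_PIN_POLICY_TABLE('public.sales', 'analytics');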
See also
- SET_DEPOT_ANTI_PIN_POLICY_TABLE
10 - CLEAR_DEPOT_PIN_POLICY_PARTITION
Eon Mode only
Clears a depot pinning policy from the specified table or projection partitions. After the object is unpinned, it can be evicted from the depot by any unpinned or pinned object.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DEPOT_PIN_POLICY_PARTITION( '[[database.]schema.]object-name', 'min-range-value', 'max-range-value' [, 'subcluster' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
object-name
- Table or projection with a partition pinning policy to clear.
min-range-value, max-range-value
- Range of partition keys in object-name from which to clear a pinning policy, where min-range-value must be ≤ max-range-value. To specify a single partition key, min-range-value and max-range-value must be equal.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
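Examples
A minimal sketch, assuming a table public.sales partitioned by year with a pinning policy on partition keys 2014 through 2016 in all database depots (hypothetical names):
=> SELECT CLEAR_DEPOT_PIN_POLICY_PARTITION('public.sales', '2014', '2016');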
11 - CLEAR_DEPOT_PIN_POLICY_PROJECTION
Eon Mode only
Clears a depot pinning policy from the specified projection. After the object is unpinned, it can be evicted from the depot by any unpinned or pinned object.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DEPOT_PIN_POLICY_PROJECTION( '[[database.]schema.]projection' [, 'subcluster' ] )
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
projection
- Projection with a pinning policy to clear.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
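Examples
A minimal sketch, assuming a projection public.sales_p1 pinned in the analytics subcluster (hypothetical names):
=> SELECT CLEAR_DEPOT_PIN_POLICY_PROJECTION('public.sales_p1', 'analytics');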
12 - CLEAR_DEPOT_PIN_POLICY_TABLE
Eon Mode only
Clears a depot pinning policy from the specified table. After the object is unpinned, it can be evicted from the depot by any unpinned or pinned object.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_DEPOT_PIN_POLICY_TABLE( '[[database.]schema.]table' [, 'subcluster' ] )
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table with a pinning policy to clear.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
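Examples
A minimal sketch, assuming a table public.sales pinned in all database depots (a hypothetical name):
=> SELECT CLEAR_DEPOT_PIN_POLICY_TABLE('public.sales');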
13 - CLEAR_FETCH_QUEUE
Eon Mode only
Removes all entries, or entries for a specific transaction, from the queue of fetch requests for data from communal storage. You can view the fetch queue by querying the DEPOT_FETCH_QUEUE system table. This function removes queued requests synchronously; it returns after all the fetches have been removed from the queue.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_FETCH_QUEUE([transaction_id])
Parameters
transaction_id
- The ID of the transaction whose fetches will be cleared from the queue. If this value is not specified, all fetches are removed from the fetch queue.
Examples
This example clears all of the queued fetches for all transactions.
=> SELECT CLEAR_FETCH_QUEUE();
CLEAR_FETCH_QUEUE
--------------------------
Cleared the fetch queue.
(1 row)
This example clears the fetch queue for a specific transaction.
=> SELECT node_name,transaction_id FROM depot_fetch_queue;
node_name | transaction_id
----------------------+-------------------
v_verticadb_node0001 | 45035996273719510
v_verticadb_node0003 | 45035996273719510
v_verticadb_node0002 | 45035996273719510
v_verticadb_node0001 | 45035996273719777
v_verticadb_node0003 | 45035996273719777
v_verticadb_node0002 | 45035996273719777
(6 rows)
=> SELECT clear_fetch_queue(45035996273719510);
clear_fetch_queue
--------------------------
Cleared the fetch queue.
(1 row)
=> SELECT node_name,transaction_id from depot_fetch_queue;
node_name | transaction_id
----------------------+-------------------
v_verticadb_node0001 | 45035996273719777
v_verticadb_node0003 | 45035996273719777
v_verticadb_node0002 | 45035996273719777
(3 rows)
14 - DEMOTE_SUBCLUSTER_TO_SECONDARY
Eon Mode only
Converts a primary subcluster to a secondary subcluster.
Vertica will not allow you to demote a primary subcluster if any of the following are true:
- The subcluster contains a critical node.
- The subcluster is the only primary subcluster in the database. You must have at least one primary subcluster.
- The initiator node is a member of the subcluster you are trying to demote. You must call DEMOTE_SUBCLUSTER_TO_SECONDARY from another subcluster.
Important
This function call can take a long time to complete because all the nodes in the subcluster you are promoting or demoting take a global catalog lock, write a checkpoint, and then commit. This global catalog lock can cause other database tasks to fail with errors.
Schedule calls to this function when other database activity is low.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
DEMOTE_SUBCLUSTER_TO_SECONDARY('subcluster-name')
Parameters
subcluster-name
- The name of the primary subcluster to demote to a secondary subcluster.
Privileges
Superuser
Examples
The following example demotes the subcluster analytics_cluster to a secondary subcluster:
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | t
load_subcluster | t
(2 rows)
=> SELECT DEMOTE_SUBCLUSTER_TO_SECONDARY('analytics_cluster');
DEMOTE_SUBCLUSTER_TO_SECONDARY
--------------------------------
DEMOTE SUBCLUSTER TO SECONDARY
(1 row)
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | f
load_subcluster | t
(2 rows)
Attempting to demote the subcluster that contains the initiator node results in an error:
=> SELECT node_name FROM sessions WHERE user_name = 'dbadmin'
AND client_type = 'vsql';
node_name
----------------------
v_verticadb_node0004
(1 row)
=> SELECT node_name, is_primary FROM subclusters WHERE subcluster_name = 'analytics';
node_name | is_primary
----------------------+------------
v_verticadb_node0004 | t
v_verticadb_node0005 | t
v_verticadb_node0006 | t
(3 rows)
=> SELECT DEMOTE_SUBCLUSTER_TO_SECONDARY('analytics');
ERROR 9204: Cannot promote or demote subcluster including the initiator node
HINT: Run this command on another subcluster
15 - FINISH_FETCHING_FILES
Eon Mode only
Fetches to the depot all files that are queued for download from communal storage.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
FINISH_FETCHING_FILES()
Privileges
Superuser
Examples
Get all files queued for download:
=> SELECT FINISH_FETCHING_FILES();
FINISH_FETCHING_FILES
---------------------------------
Finished fetching all the files
(1 row)
See also
Eon Mode concepts
16 - FLUSH_REAPER_QUEUE
Eon Mode only
Deletes all data marked for deletion in the database. Use this function to remove all data marked for deletion before the reaper service deletes disk files.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
FLUSH_REAPER_QUEUE( [sync-catalog] )
Parameters
sync-catalog
- Specifies to sync metadata in the database catalog on all nodes before the function executes:
  - true (default): Sync the database catalog.
  - false: Run without syncing.
Privileges
Superuser
Examples
Remove all files that are marked for deletion:
=> SELECT FLUSH_REAPER_QUEUE();
FLUSH_REAPER_QUEUE
-----------------------------------------------------
Sync'd catalog and deleted all files in the reaper queue.
(1 row)
See also
CLEAN_COMMUNAL_STORAGE
17 - MIGRATE_ENTERPRISE_TO_EON
Enterprise Mode only
Migrates an Enterprise database to an Eon Mode database. MIGRATE_ENTERPRISE_TO_EON runs in the foreground; until it returns—either with success or an error—it blocks all operations in the same session on the source Enterprise database. If successful, MIGRATE_ENTERPRISE_TO_EON returns with a list of nodes in the migrated database.
If migration is interrupted before the meta-function returns—for example, the client disconnects, or a network outage occurs—the migration returns an error. In this case, call MIGRATE_ENTERPRISE_TO_EON again to restart migration. For details, see Handling Interrupted Migration.
You can repeat migration multiple times to the same communal storage location—for example, to capture changes that occurred in the source database during the previous migration. For details, see Repeating Migration.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
MIGRATE_ENTERPRISE_TO_EON ( 'communal-storage-location', 'depot-location' [, is-dry-run] )
Parameters
communal-storage-location
- URI of communal storage location. For URI syntax examples for each supported schema, see File systems and object stores.
depot-location
- Path of Eon depot location, typically:
/vertica/depot
Important
Management Console requires this convention to enable access to depot data and activity.
is-dry-run
- Boolean. If set to true, MIGRATE_ENTERPRISE_TO_EON only checks whether the Enterprise source database complies with all migration prerequisites. If the meta-function discovers any compliance issues, it writes these to the migration error log migrate_enterprise_to_eon_error.log in the database directory.
Default: false
Privileges
Superuser
Examples
Migrate an Enterprise database to Eon Mode on AWS:
=> SELECT MIGRATE_ENTERPRISE_TO_EON ('s3://verticadbbucket', '/vertica/depot');
migrate_enterprise_to_eon
---------------------------------------------------------------------
v_vmart_node0001,v_vmart_node0002,v_vmart_node0003,v_vmart_node0004
(1 row)
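To check compliance without migrating, you can first run the function as a dry run. A sketch, reusing the same locations with is-dry-run set to true:
=> SELECT MIGRATE_ENTERPRISE_TO_EON ('s3://verticadbbucket', '/vertica/depot', true);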
See also
Migrating an enterprise database to Eon Mode
18 - PROMOTE_SUBCLUSTER_TO_PRIMARY
Eon Mode only
Converts a secondary subcluster to a primary subcluster. You cannot use this function to promote the subcluster that contains the initiator node. You must call it while connected to a node in another subcluster.
Important
This function call can take a long time to complete because all the nodes in the subcluster you are promoting or demoting take a global catalog lock, write a checkpoint, and then commit. This global catalog lock can cause other database tasks to fail with errors.
Schedule calls to this function when other database activity is low.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
PROMOTE_SUBCLUSTER_TO_PRIMARY('subcluster-name')
Parameters
subcluster-name
- The name of the secondary subcluster to promote to a primary subcluster.
Privileges
Superuser
Examples
The following example promotes the subcluster named analytics_cluster to a primary subcluster:
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | f
load_subcluster | t
(2 rows)
=> SELECT PROMOTE_SUBCLUSTER_TO_PRIMARY('analytics_cluster');
PROMOTE_SUBCLUSTER_TO_PRIMARY
-------------------------------
PROMOTE SUBCLUSTER TO PRIMARY
(1 row)
=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
subcluster_name | is_primary
-------------------+------------
analytics_cluster | t
load_subcluster | t
(2 rows)
19 - REBALANCE_SHARDS
Eon Mode only
Rebalances shard assignments in a subcluster or across the entire cluster in Eon Mode. If the current session ends, the operation immediately aborts. The amount of time required to rebalance shards scales in a roughly linear fashion based on the number of objects in your database.
Important
If your database has multiple namespaces, REBALANCE_SHARDS rebalances the shards across all namespaces.
Run REBALANCE_SHARDS after you modify your cluster using ALTER NODE or when you add nodes to a subcluster.
Note
Vertica rebalances shards in a subcluster automatically when you:
- Remove a node from a subcluster.
- Add a new subcluster with the admintools command db_add_subcluster with the -s option followed by a list of hosts.
After you rebalance shards, you will no longer be able to restore objects from a backup taken before the rebalancing. (Full backups are always possible.) After you rebalance, make another full backup so you will be able to restore objects from it in the future.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
REBALANCE_SHARDS(['subcluster-name'])
Parameters
subcluster-name
- The name of the subcluster where shards will be rebalanced. If you do not supply this parameter, all subclusters in the database rebalance their shards.
Privileges
Superuser
Examples
The following shows that the nodes in the newly added analytics_subcluster do not yet have shard subscriptions:
=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
= ns.node_name) ORDER BY 1,2,3;
subcluster_name | node_name | shard_name | subscription_state
----------------------+----------------------+-------------+--------------------
analytics_subcluster | v_verticadb_node0004 | |
analytics_subcluster | v_verticadb_node0005 | |
analytics_subcluster | v_verticadb_node0006 | |
default_subcluster | v_verticadb_node0001 | replica | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(12 rows)
Rebalance the shards to analytics_subcluster, then confirm the rebalance was successful by querying the NODES system table:
=> SELECT REBALANCE_SHARDS('analytics_subcluster');
REBALANCE_SHARDS
-------------------
REBALANCED SHARDS
(1 row)
=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
= ns.node_name) ORDER BY 1,2,3;
subcluster_name | node_name | shard_name | subscription_state
----------------------+----------------------+-------------+--------------------
analytics_subcluster | v_verticadb_node0004 | replica | ACTIVE
analytics_subcluster | v_verticadb_node0004 | segment0001 | ACTIVE
analytics_subcluster | v_verticadb_node0004 | segment0003 | ACTIVE
analytics_subcluster | v_verticadb_node0005 | replica | ACTIVE
analytics_subcluster | v_verticadb_node0005 | segment0001 | ACTIVE
analytics_subcluster | v_verticadb_node0005 | segment0002 | ACTIVE
analytics_subcluster | v_verticadb_node0006 | replica | ACTIVE
analytics_subcluster | v_verticadb_node0006 | segment0002 | ACTIVE
analytics_subcluster | v_verticadb_node0006 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0001 | replica | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)
If your database has multiple namespaces, you can run the following query to confirm the rebalance was successful across all namespaces:
=> SELECT nodes.subcluster_name AS subcluster_name, vs_namespaces.name AS namespace_name, vs_shards.shardname, vs_nodes.name AS nodename, vs_node_subscriptions.type, vs_node_subscriptions.state
FROM vs_node_subscriptions
INNER JOIN vs_shards
ON vs_node_subscriptions.shardoid=vs_shards.oid
INNER JOIN vs_shard_groups ON vs_shard_groups.oid=vs_shards.shardgroupoid
INNER JOIN vs_nodes ON vs_nodes.oid=vs_node_subscriptions.nodeoid
INNER JOIN nodes ON nodes.node_id=vs_nodes.oid
INNER JOIN vs_namespaces ON vs_namespaces.oid=vs_shard_groups.namespaceoid;
subcluster_name | namespace_name | shardname | nodename | type | state
-----------------------+--------------------+-------------+----------------------+-----------+--------
default_subcluster | default_namespace | replica | v_verticadb_node0002 | SECONDARY | ACTIVE
default_subcluster | default_namespace | replica | v_verticadb_node0003 | SECONDARY | ACTIVE
default_subcluster | default_namespace | replica | v_verticadb_node0001 | PRIMARY | ACTIVE
default_subcluster | default_namespace | segment0001 | v_verticadb_node0002 | SECONDARY | ACTIVE
default_subcluster | default_namespace | segment0001 | v_verticadb_node0001 | PRIMARY | ACTIVE
default_subcluster | default_namespace | segment0002 | v_verticadb_node0002 | PRIMARY | ACTIVE
default_subcluster | default_namespace | segment0002 | v_verticadb_node0003 | SECONDARY | ACTIVE
default_subcluster | default_namespace | segment0003 | v_verticadb_node0003 | SECONDARY | ACTIVE
default_subcluster | default_namespace | segment0003 | v_verticadb_node0001 | PRIMARY | ACTIVE
default_subcluster | ns1 | replica | v_verticadb_node0002 | SECONDARY | ACTIVE
default_subcluster | ns1 | replica | v_verticadb_node0003 | SECONDARY | ACTIVE
default_subcluster | ns1 | replica | v_verticadb_node0001 | PRIMARY | ACTIVE
default_subcluster | ns1 | segment0001 | v_verticadb_node0002 | SECONDARY | ACTIVE
default_subcluster | ns1 | segment0001 | v_verticadb_node0001 | PRIMARY | ACTIVE
default_subcluster | ns1 | segment0002 | v_verticadb_node0002 | PRIMARY | ACTIVE
default_subcluster | ns1 | segment0002 | v_verticadb_node0003 | SECONDARY | ACTIVE
default_subcluster | ns1 | segment0003 | v_verticadb_node0003 | PRIMARY | ACTIVE
default_subcluster | ns1 | segment0003 | v_verticadb_node0001 | SECONDARY | ACTIVE
default_subcluster | ns1 | segment0004 | v_verticadb_node0002 | SECONDARY | ACTIVE
default_subcluster | ns1 | segment0004 | v_verticadb_node0001 | PRIMARY | ACTIVE
analytics_subcluster | default_namespace | replica | v_verticadb_node0005 | SECONDARY | ACTIVE
analytics_subcluster | default_namespace | replica | v_verticadb_node0006 | SECONDARY | ACTIVE
analytics_subcluster | default_namespace | replica | v_verticadb_node0004 | PRIMARY | ACTIVE
analytics_subcluster | default_namespace | segment0001 | v_verticadb_node0005 | SECONDARY | ACTIVE
analytics_subcluster | default_namespace | segment0001 | v_verticadb_node0004 | PRIMARY | ACTIVE
analytics_subcluster | default_namespace | segment0002 | v_verticadb_node0005 | PRIMARY | ACTIVE
analytics_subcluster | default_namespace | segment0002 | v_verticadb_node0006 | SECONDARY | ACTIVE
analytics_subcluster | default_namespace | segment0003 | v_verticadb_node0006 | SECONDARY | ACTIVE
analytics_subcluster | default_namespace | segment0003 | v_verticadb_node0004 | PRIMARY | ACTIVE
analytics_subcluster | ns1 | replica | v_verticadb_node0005 | SECONDARY | ACTIVE
analytics_subcluster | ns1 | replica | v_verticadb_node0006 | SECONDARY | ACTIVE
analytics_subcluster | ns1 | replica | v_verticadb_node0004 | PRIMARY | ACTIVE
analytics_subcluster | ns1 | segment0001 | v_verticadb_node0005 | SECONDARY | ACTIVE
analytics_subcluster | ns1 | segment0001 | v_verticadb_node0004 | PRIMARY | ACTIVE
analytics_subcluster | ns1 | segment0002 | v_verticadb_node0006 | PRIMARY | ACTIVE
analytics_subcluster | ns1 | segment0002 | v_verticadb_node0006 | SECONDARY | ACTIVE
analytics_subcluster | ns1 | segment0003 | v_verticadb_node0006 | PRIMARY | ACTIVE
analytics_subcluster | ns1 | segment0003 | v_verticadb_node0004 | SECONDARY | ACTIVE
analytics_subcluster | ns1 | segment0004 | v_verticadb_node0005 | SECONDARY | ACTIVE
analytics_subcluster | ns1 | segment0004 | v_verticadb_node0004 | PRIMARY | ACTIVE
(40 rows)
20 - RESHARD_DATABASE
Eon Mode only
Changes the number of shards in the default_namespace. You can only change the number of shards in the default_namespace if it is the only namespace in your database. If your database contains any non-default namespaces, running RESHARD_DATABASE results in an error.
RESHARD_DATABASE does not immediately affect the storage containers in communal storage. After re-sharding, the new shards still point to the existing containers. If you increase the number of shards in the namespace, multiple shards will point to the same storage containers. Eventually, the Tuple Mover (TM) mergeout tasks will realign the storage containers with the new shard segmentation bounds. If you want the TM to immediately realign storage containers, call DO_TM_TASK to run a 'RESHARDMERGEOUT' task.
This function requires a global catalog lock (GCLX) during runtime. The runtime depends on the size of your catalog. The function does not disrupt most queries. However, the global catalog lock might affect data loads and DDL statements.
Important
RESHARD_DATABASE might be rolled back if you call REBALANCE_SHARDS during runtime. In some cases, rollback is caused by down nodes or nodes that fail during the re-shard process.
Syntax
RESHARD_DATABASE(shard-count)
Arguments
shard-count
- A positive integer, the number of shards in the re-sharded default_namespace. For information about choosing a suitable shard-count, see Choosing the initial node and shard counts.
Privileges
Superuser
Examples
See Reshard the default namespace.
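As a sketch, the following calls re-shard the default namespace to a hypothetical target of 24 shards, then run a Tuple Mover RESHARDMERGEOUT task to realign storage containers immediately:
=> SELECT RESHARD_DATABASE(24);
=> SELECT DO_TM_TASK('RESHARDMERGEOUT');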
21 - SANDBOX_SUBCLUSTER
Eon Mode only
Creates a sandbox for a secondary subcluster.
Note
Vertica recommends using the admintools sandbox_subcluster command to create sandboxes. This command includes additional sanity checks and validates that the sandboxed nodes are UP after sandbox creation. However, you must use the SANDBOX_SUBCLUSTER function to add additional subclusters to an existing sandbox.
If sandboxing the first subcluster in a sandbox, the nodes in the specified subcluster create a checkpoint of the catalog at function runtime. When these nodes auto-restart in the sandbox cluster, they form a primary subcluster that uses the data and catalog checkpoint from the main cluster. After the nodes successfully restart, the sandbox cluster and the main cluster are mutually isolated and can diverge.
While the nodes in the main cluster sync their metadata to /path-to-communal-storage/metadata/db_name, the nodes in the sandbox sync to /path-to-communal-storage/metadata/sandbox_name.
You can perform standard database operations and queries, such as loading data or creating new tables, in either cluster without affecting the other cluster. For example, dropping a table in the sandbox cluster does not drop the table in the main cluster, and vice versa.
Because both clusters reference the same data files, neither cluster can delete files that existed at the time of sandbox creation. However, files that are created in the sandbox can be removed. Files in the main cluster can be queued for removal, but they are not processed until all active sandboxes are removed.
You cannot nest sandboxes, but you can have more than one subcluster in a sandbox and multiple sandboxes active at the same time. To add an additional secondary subcluster to an existing sandbox, you must first call SANDBOX_SUBCLUSTER in the sandbox cluster and then in the main cluster. For details, see Adding subclusters to existing sandboxes.
This is a meta-function. You must call meta-functions in a top-level SELECT statement. The function also requires a global catalog lock (GCLX) during runtime.
Behavior type
Volatile
Syntax
SANDBOX_SUBCLUSTER( 'sandbox-name', 'subcluster-name', 'options' )
Arguments
sandbox-name
- Name of the sandbox. The name must conform to the following rules:
  - Consist of at most 30 characters, all of which must have an ASCII code between 36 and 126
  - Begin with a letter
  - Be unique among all existing databases and sandboxes
subcluster-name
- Name of the secondary subcluster to sandbox. Attempting to sandbox a primary subcluster or a subcluster that is already sandboxed results in an error. The nodes in the subcluster must all have a status of UP and provide full subscription coverage for all shards.
options
- Currently, there are no options for this function.
Privileges
Superuser
Examples
The following example sandboxes the sc_02 secondary subcluster into a sandbox named sand:
=> SELECT SANDBOX_SUBCLUSTER('sand', 'sc_02', '');
SANDBOX_SUBCLUSTER
-----------------------------------------------------------------------------------------------
Subcluster 'sc_02' has been sandboxed to 'sand'. It is going to auto-restart and re-form.
(1 row)
If you query the NODES system table from the main cluster, you can see that the nodes of sc_02 have a status of UNKNOWN and are listed as members of the sand sandbox:
=> SELECT node_name, subcluster_name, node_state, sandbox FROM NODES;
node_name | subcluster_name | node_state | sandbox
----------------------+--------------------+------------+---------
v_verticadb_node0001 | default_subcluster | UP |
v_verticadb_node0002 | default_subcluster | UP |
v_verticadb_node0003 | default_subcluster | UP |
v_verticadb_node0004 | sc_02 | UNKNOWN | sand
v_verticadb_node0005 | sc_02 | UNKNOWN | sand
v_verticadb_node0006 | sc_02 | UNKNOWN | sand
(6 rows)
When you issue the same query on one of the sandboxed nodes, the table shows that the sandboxed nodes are UP and the nodes from the main cluster are UNKNOWN, confirming that the cluster is successfully sandboxed:
=> SELECT node_name, subcluster_name, node_state, sandbox FROM NODES;
node_name | subcluster_name | node_state | sandbox
----------------------+--------------------+------------+---------
v_verticadb_node0001 | default_subcluster | UNKNOWN |
v_verticadb_node0002 | default_subcluster | UNKNOWN |
v_verticadb_node0003 | default_subcluster | UNKNOWN |
v_verticadb_node0004 | sc_02 | UP | sand
v_verticadb_node0005 | sc_02 | UP | sand
v_verticadb_node0006 | sc_02 | UP | sand
(6 rows)
You can now perform standard database operations in either cluster without impacting the other cluster. For instance, if you create a machine learning dataset named train_data in the sandboxed subcluster, the new table does not propagate to the main cluster:
--In the sandboxed subcluster
=> CREATE TABLE train_data(time timestamp, Temperature float);
CREATE TABLE
=> COPY train_data FROM LOCAL 'daily-min-temperatures.csv' DELIMITER ',';
Rows Loaded
-------------
3650
(1 row)
=> SELECT * FROM train_data LIMIT 5;
time | Temperature
---------------------+-------------
1981-01-27 00:00:00 | 19.4
1981-02-20 00:00:00 | 15.7
1981-02-27 00:00:00 | 17.5
1981-03-04 00:00:00 | 16
1981-04-24 00:00:00 | 11.5
(5 rows)
--In the main cluster
=> SELECT * FROM train_data LIMIT 5;
ERROR 4566: Relation "train_data" does not exist
22 - SET_DEPOT_ANTI_PIN_POLICY_PARTITION
Eon Mode only
Assigns the highest depot eviction priority to a partition. Among other depot-cached objects, objects with an anti-pinning policy are the most susceptible to eviction from the depot. After eviction, the object must be read directly from communal storage the next time it is needed.
If the table has another partition-level eviction policy already set on it, then Vertica combines the policies based on policy type.
If you alter or remove table partitioning, Vertica automatically clears all eviction policies previously set on partitions of that table. The table's eviction policy, if any, is unaffected.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SET_DEPOT_ANTI_PIN_POLICY_PARTITION (
'[[database.]schema.]object-name', 'min-range-value', 'max-range-value' [, 'subcluster' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
object-name
- Target of this policy.
min-range-value, max-range-value
- Minimum and maximum value of partition keys in object-name to anti-pin, where min-range-value must be ≤ max-range-value. To specify a single partition key, min-range-value and max-range-value must be equal.
If the new policy's partition key range overlaps the range of an existing partition-level eviction policy, Vertica gives precedence to the new policy, as described in Overlapping Policies.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
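Examples
A minimal sketch, assuming a table public.sales partitioned by year (hypothetical names); the call anti-pins partition keys 2014 through 2016 in all database depots, making them the first candidates for eviction:
=> SELECT SET_DEPOT_ANTI_PIN_POLICY_PARTITION('public.sales', '2014', '2016');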
23 - SET_DEPOT_ANTI_PIN_POLICY_PROJECTION
Eon Mode only
Assigns the highest depot eviction priority to a projection. Among other depot-cached objects, objects with an anti-pinning policy are the most susceptible to eviction from the depot. After eviction, the object must be read directly from communal storage the next time it is needed.
If the projection has another eviction policy already set on it, the new policy supersedes it.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SET_DEPOT_ANTI_PIN_POLICY_PROJECTION ( '[[database.]schema.]projection' [, 'subcluster' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
projection
- Target of this policy.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
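Examples
A minimal sketch, assuming a projection public.sales_p1 and a subcluster named analytics (hypothetical names):
=> SELECT SET_DEPOT_ANTI_PIN_POLICY_PROJECTION('public.sales_p1', 'analytics');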
24 - SET_DEPOT_ANTI_PIN_POLICY_TABLE
Eon Mode only
Assigns the highest depot eviction priority to a table. Among other depot-cached objects, objects with an anti-pinning policy are the most susceptible to eviction from the depot. After eviction, the object must be read directly from communal storage the next time it is needed.
If the table has another eviction policy already set on it, the new policy supersedes it.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SET_DEPOT_ANTI_PIN_POLICY_TABLE ( '[[database.]schema.]table' [, 'subcluster' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Target of this policy.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
Privileges
Superuser
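Examples
A minimal sketch, assuming a table public.sales (a hypothetical name); omitting the subcluster argument targets all database depots:
=> SELECT SET_DEPOT_ANTI_PIN_POLICY_TABLE('public.sales');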
25 - SET_DEPOT_PIN_POLICY_PARTITION
Eon Mode only
Pins the specified partitions of a table or projection to a subcluster depot, or all database depots, to reduce exposure to depot eviction.
If the table has another partition-level eviction policy already set on it, then Vertica combines the policies based on policy type.
If you alter or remove table partitioning, Vertica automatically clears all eviction policies previously set on partitions of that table. The table's eviction policy, if any, is unaffected.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SET_DEPOT_PIN_POLICY_PARTITION (
'[[database.]schema.]object-name', 'min-range-value', 'max-range-value' [, 'subcluster' ] [, 'download' ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
object-name
- Table or projection to pin. If you specify a projection, it must store the partition keys.
min-range-value, max-range-value
- Minimum and maximum value of partition keys in object-name to pin, where min-range-value must be ≤ max-range-value. To specify a single partition key, min-range-value and max-range-value must be equal.
If the new policy's partition key range overlaps the range of an existing partition-level eviction policy, Vertica gives precedence to the new policy, as described in Overlapping policies below.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
download
- Boolean, if true, SET_DEPOT_PIN_POLICY_PARTITION immediately queues the specified partitions for download from communal storage.
Default: false
Privileges
Superuser
Overlapping policies
If a new partition pinning policy overlaps the partition key range of an existing eviction policy, Vertica determines how to apply the policy based on the type of the new and existing policies.
Both policies are pinning policies
If both the new and existing policies are pinning policies, then Vertica collates the two ranges. For example, if you create two partition pinning policies with key ranges of 1-3 and 2-10, Vertica creates a single policy with a key range of 1-10.
Partition pinning policy overlaps anti-pinning policy
If the new partition pinning policy overlaps an anti-pinning policy, then Vertica issues a warning and informational message that it reassigned the range of overlapping keys from the anti-pinning policy to the new pinning policy.
For example, if you create a partition anti-pinning policy and then a pinning policy with key ranges of 1-10 and 5-20, respectively, Vertica truncates the earlier anti-pinning policy's key range:
 policy_type | min_value | max_value
-------------+-----------+-----------
 PIN         |         5 |        20
 ANTI_PIN    |         1 |         4
If the new pinning policy's partition range falls inside the range of an older anti-pinning policy, Vertica splits the anti-pinning policy. So, given an existing partition anti-pinning policy with a key range of 1-20, a new partition pinning policy with a key range of 5-10 splits the anti-pinning policy:
 policy_type | min_value | max_value
-------------+-----------+-----------
 ANTI_PIN    |         1 |         4
 PIN         |         5 |        10
 ANTI_PIN    |        11 |        20
Precedence of pinning policies
In general, partition management functions that involve two partitioned tables give precedence to the target table's pinning policy, as follows:
- COPY_PARTITIONS_TO_TABLE: Partition-level pinning is reliable if the source and target tables have pinning policies on the same partition keys. If the two tables have different pinning policies, then the partition pinning policies of the target table apply.
- MOVE_PARTITIONS_TO_TABLE: Partition-level pinning policies of the target table apply.
- SWAP_PARTITIONS_BETWEEN_TABLES: Partition-level pinning policies of the target table apply.
For example, the following statement copies partitions from table t1 to table t2:
=> SELECT COPY_PARTITIONS_TO_TABLE('t1', '1', '5', 't2');
In this case, the following logic applies:
- If the two tables have different partition pinning policies, then the pinning policy of target table t2 for partition keys 1-5 applies.
- If table t2 does not exist, then Vertica creates it from table t1, and copies t1's policy on partition keys 1-5. Subsequently, if you clear the partition pinning policy from either table, it is also cleared from the other.
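Examples
A minimal sketch, assuming a table public.sales partitioned by year and a subcluster named analytics (hypothetical names); the call pins partition keys 2014 through 2016 and queues them for immediate download. You can then inspect the result by querying the DEPOT_PIN_POLICIES system table:
=> SELECT SET_DEPOT_PIN_POLICY_PARTITION('public.sales', '2014', '2016', 'analytics', true);
=> SELECT * FROM depot_pin_policies;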
26 - SET_DEPOT_PIN_POLICY_PROJECTION
Eon Mode only
Pins a projection to a subcluster depot, or all database depots, to reduce its exposure to depot eviction. For details on pinning policies and usage guidelines, see Pinning Policies.
If the projection has another eviction policy already set on it, the new policy supersedes it.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SET_DEPOT_PIN_POLICY_PROJECTION ( '[[database.]schema.]projection' [, 'subcluster' ] [, download ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
projection
- Projection to pin.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
download
- Boolean, if true, SET_DEPOT_PIN_POLICY_PROJECTION immediately queues the specified projection for download from communal storage.
Default: false
Privileges
Superuser
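Examples
A minimal sketch, assuming a projection public.sales_p1 and a subcluster named analytics (hypothetical names):
=> SELECT SET_DEPOT_PIN_POLICY_PROJECTION('public.sales_p1', 'analytics');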
27 - SET_DEPOT_PIN_POLICY_TABLE
Eon Mode only
Pins a table to a subcluster depot, or all database depots, to reduce its exposure to depot eviction.
If the table has another eviction policy already set on it, the new policy supersedes it. After you pin a table to a subcluster depot, you cannot subsequently pin any of its partitions and projections in that depot.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SET_DEPOT_PIN_POLICY_TABLE ( '[[database.]schema.]table' [, 'subcluster' ] [, download ] )
Arguments
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table to pin.
subcluster
- Name of a depot subcluster, default_subcluster to specify the default database subcluster. If this argument is omitted, all database depots are targeted.
download
- Boolean, if true, SET_DEPOT_PIN_POLICY_TABLE immediately queues the specified table for download from communal storage.
Default: false
Privileges
Superuser
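Examples
A minimal sketch, assuming a table public.sales (a hypothetical name); the call pins the table in the default subcluster's depot and queues its data for immediate download:
=> SELECT SET_DEPOT_PIN_POLICY_TABLE('public.sales', 'default_subcluster', true);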
28 - SHUTDOWN_SUBCLUSTER
Eon Mode only
Shuts down a subcluster. This function shuts down the subcluster synchronously, returning with the message Subcluster shutdown when the shutdown is complete. If the subcluster is already down, the function returns without error.
Stopping a subcluster does not warn you if there are active user sessions connected to the subcluster. This behavior is the same as stopping an individual node. Before stopping a subcluster, verify that no users are connected to it.
If you want to drain client connections before shutting down a subcluster, you can gracefully shut down the subcluster using SHUTDOWN_WITH_DRAIN.
Caution
This function does not test whether the target subcluster is critical (a subcluster whose loss would cause the database to shut down). Using this function to shut down a critical subcluster results in the database shutting down. Always verify that the subcluster you want to shut down is not critical by querying the CRITICAL_SUBCLUSTERS system table before calling this function.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SHUTDOWN_SUBCLUSTER('subcluster-name')
Arguments
subcluster-name
- Name of the subcluster to shut down.
Privileges
Superuser
Examples
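Before shutting down a subcluster, verify that it is not critical by querying the CRITICAL_SUBCLUSTERS system table. The target subcluster should not appear in the results:
=> SELECT * FROM critical_subclusters;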
The following example demonstrates shutting down the subcluster analytics:
=> SELECT subcluster_name, node_name, node_state FROM nodes order by 1,2;
subcluster_name | node_name | node_state
--------------------+----------------------+------------
analytics | v_verticadb_node0004 | UP
analytics | v_verticadb_node0005 | UP
analytics | v_verticadb_node0006 | UP
default_subcluster | v_verticadb_node0001 | UP
default_subcluster | v_verticadb_node0002 | UP
default_subcluster | v_verticadb_node0003 | UP
(6 rows)
=> SELECT SHUTDOWN_SUBCLUSTER('analytics');
WARNING 4539: Received no response from v_verticadb_node0004 in stop subcluster
WARNING 4539: Received no response from v_verticadb_node0005 in stop subcluster
WARNING 4539: Received no response from v_verticadb_node0006 in stop subcluster
SHUTDOWN_SUBCLUSTER
---------------------
Subcluster shutdown
(1 row)
=> SELECT subcluster_name, node_name, node_state FROM nodes order by 1,2;
subcluster_name | node_name | node_state
--------------------+----------------------+------------
analytics | v_verticadb_node0004 | DOWN
analytics | v_verticadb_node0005 | DOWN
analytics | v_verticadb_node0006 | DOWN
default_subcluster | v_verticadb_node0001 | UP
default_subcluster | v_verticadb_node0002 | UP
default_subcluster | v_verticadb_node0003 | UP
(6 rows)
Note
The "WARNING 4539" messages after calling SHUTDOWN_SUBCLUSTER occur because the nodes are in the process of shutting down. They are expected.See also
29 - SHUTDOWN_WITH_DRAIN
Eon Mode only
Gracefully shuts down a subcluster or subclusters. The function drains client connections on the subcluster's nodes and then shuts down the subcluster. This is a synchronous function that returns when the shutdown message has been sent to the subcluster.
Work from existing user sessions continues on draining nodes, but the nodes refuse new client connections and are excluded from load-balancing operations. dbadmin can still connect to draining nodes.
The nodes drain until either the existing connections complete their work and close or the user-specified timeout is reached. When one of these conditions is met, the function proceeds to shut down the subcluster.
For more information about the graceful shutdown process, see Graceful Shutdown.
Caution
This function does not test whether the target subcluster is critical (a subcluster whose loss would cause the database to shut down). Using this function to shut down a critical subcluster results in the database shutting down. Always verify that the subcluster you want to shut down is not critical by querying the CRITICAL_SUBCLUSTERS system table before calling this function.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SHUTDOWN_WITH_DRAIN( 'subcluster-name', timeout-seconds )
Arguments
subcluster-name
- Name of the subcluster to shut down. Enter an empty string to shut down all subclusters in the database.
timeout-seconds
- Number of seconds to wait before forcefully closing subcluster-name's client connections and shutting down. The behavior depends on the sign of timeout-seconds:
- Positive integer: The function waits until either the runtime reaches timeout-seconds or the client connections finish their work and close. As soon as one of these conditions is met, the function immediately proceeds to shut down the subcluster.
- Zero: The function immediately closes any open client connections and shuts down the subcluster.
- Negative integer: The function marks the subcluster as draining and waits indefinitely to shut down the subcluster until all active user sessions disconnect.
Privileges
Superuser
Examples
In the following example, the function marks the subcluster named analytics as draining and then shuts it down as soon as either the existing client connections close or 120 seconds pass:
=> SELECT SHUTDOWN_WITH_DRAIN('analytics', 120);
NOTICE 0: Draining has started on subcluster (analytics)
NOTICE 0: Begin shutdown of subcluster (analytics)
SHUTDOWN_WITH_DRAIN
--------------------------------------------------------------------------------------------------------------------
Set subcluster (analytics) to draining state
Waited for 3 nodes to drain
Shutdown message sent to subcluster (analytics)
(1 row)
You can query the DC_DRAINING_EVENTS table to see more information about draining and shutdown events, such as whether any user sessions were forcibly closed. This subcluster had one active user session when the shutdown began, but it closed before the timeout was reached:
=> SELECT event_type, event_type_name, event_description, event_result, event_result_name FROM dc_draining_events;
event_type | event_type_name | event_description | event_result | event_result_name
------------+------------------------------+---------------------------------------------------------------------+--------------+-------------------
0 | START_DRAIN_SUBCLUSTER | START_DRAIN for SHUTDOWN of subcluster (analytics) | 0 | SUCCESS
2 | START_WAIT_FOR_NODE_DRAIN | Wait timeout is 120 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 0 seconds | 4 | INFORMATIONAL
4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 30 seconds | 4 | INFORMATIONAL
3 | END_WAIT_FOR_NODE_DRAIN | Wait for drain ended with 0 sessions remaining | 0 | SUCCESS
5 | BEGIN_SHUTDOWN_AFTER_DRAIN | Starting shutdown of subcluster (analytics) following drain | 4 | INFORMATIONAL
(6 rows)
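To skip draining and shut the subcluster down immediately, pass a timeout of zero. This variant of the example above (reusing the analytics subcluster) immediately closes any open client connections:
=> SELECT SHUTDOWN_WITH_DRAIN('analytics', 0);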
See also
30 - START_DRAIN_SUBCLUSTER
Eon Mode only
Drains a subcluster or subclusters. The function marks all nodes in the designated subcluster as draining. Work from existing user sessions continues on draining nodes, but the nodes refuse new client connections and are excluded from load balancing operations. dbadmin can still connect to draining nodes.
To drain connections on a subcluster as part of a graceful shutdown process, you can call SHUTDOWN_WITH_DRAIN. For details, see Graceful Shutdown.
To cancel a draining operation on a subcluster, call CANCEL_DRAIN_SUBCLUSTER. If all draining nodes in a subcluster are stopped, they are marked as not draining upon restart.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
VolatileSyntax
START_DRAIN_SUBCLUSTER( 'subcluster-name' )
Arguments
subcluster-name
- Name of the subcluster to drain. Enter an empty string to drain all subclusters in the database.
Privileges
Superuser
Examples
The following example demonstrates how to drain a subcluster named analytics:
=> SELECT subcluster_name, node_name, node_state FROM nodes;
subcluster_name | node_name | node_state
-------------------+--------------------+------------
default_subcluster | verticadb_node0001 | UP
default_subcluster | verticadb_node0002 | UP
default_subcluster | verticadb_node0003 | UP
analytics | verticadb_node0004 | UP
analytics | verticadb_node0005 | UP
analytics | verticadb_node0006 | UP
(6 rows)
=> SELECT START_DRAIN_SUBCLUSTER('analytics');
START_DRAIN_SUBCLUSTER
-------------------------------------------------------
Targeted subcluster: 'analytics'
Action: START DRAIN
(1 row)
You can confirm that the subcluster is draining by querying the DRAINING_STATUS system table:
=> SELECT node_name, subcluster_name, is_draining FROM draining_status ORDER BY 1;
     node_name      |  subcluster_name   | is_draining
--------------------+--------------------+-------------
 verticadb_node0001 | default_subcluster | f
 verticadb_node0002 | default_subcluster | f
 verticadb_node0003 | default_subcluster | f
 verticadb_node0004 | analytics          | t
 verticadb_node0005 | analytics          | t
 verticadb_node0006 | analytics          | t
(6 rows)
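To cancel the drain and return the nodes to their normal state, call CANCEL_DRAIN_SUBCLUSTER (described elsewhere in this reference) with the same subcluster name:
=> SELECT CANCEL_DRAIN_SUBCLUSTER('analytics');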
See also
31 - START_REAPING_FILES
Eon Mode only
Starts disk file deletion in the background as an asynchronous task. By default, this meta-function syncs the catalog before beginning deletion. Disk file deletion is handled in the foreground by FLUSH_REAPER_QUEUE.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
START_REAPING_FILES( [sync-catalog] )
Parameters
sync-catalog
- Specifies whether to sync metadata in the database catalog on all nodes before the function executes:
- true (default): Sync the database catalog.
- false: Run without syncing.
Privileges
Superuser
Examples
Start the reaper service:
=> SELECT START_REAPING_FILES();
Start the reaper service and skip the initial catalog sync:
=> SELECT START_REAPING_FILES(false);
32 - SYNC_CATALOG
Eon Mode only
Synchronizes the catalog to communal storage to enable reviving the current catalog version in case of an imminent crash. Vertica synchronizes all pending checkpoint and transaction logs to communal storage.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SYNC_CATALOG( [ 'node-name' ] )
Parameters
node-name
- The node to synchronize. If you omit this argument, Vertica synchronizes the catalog on all nodes.
Privileges
Superuser
Examples
Synchronize the catalog on all nodes:
=> SELECT SYNC_CATALOG();
Synchronize the catalog on one node:
=> SELECT SYNC_CATALOG( 'node001' );
33 - UNSANDBOX_SUBCLUSTER
Eon Mode only
Removes a subcluster from a sandbox.
Note
Vertica recommends using the admintools unsandbox_subcluster command to remove the sandbox's primary subcluster. This command automatically stops the sandboxed nodes, wipes the nodes' catalog subdirectories, and restarts the nodes. If you use the UNSANDBOX_SUBCLUSTER function, you must complete these steps manually.
After stopping the nodes in the sandboxed subcluster, you must run this function in the main cluster from which the sandboxed subcluster was spun off. The function changes the metadata in the main cluster that designates the specified subcluster as sandboxed, but it does not restart the subcluster or rejoin it to the main cluster.
If you are unsandboxing a secondary subcluster from the sandbox, Vertica recommends that you also call the UNSANDBOX_SUBCLUSTER function in the sandbox cluster. This ensures that both clusters are aware of the state of the subcluster and that relevant system tables accurately reflect the subcluster's status.
To rejoin the subcluster to the main cluster and return the nodes to their normal state, you must complete the following tasks:
- Wipe the catalog subdirectory from the sandboxed nodes. The main cluster provides the current catalog information on node restart.
- Restart the nodes. On successful restart, the nodes should rejoin the main cluster.
- If unsandboxing the last subcluster in a sandbox, remove the sandbox metadata prefix from the shared communal storage location. This helps avoid problems that might arise from reusing the same sandbox name.
Note
If you upgraded the Vertica version of the sandboxed subcluster, you must downgrade the version of the subcluster before rejoining it to the main cluster.
If there are no more active sandboxes, you can run CLEAN_COMMUNAL_STORAGE to remove any data created in the sandbox. The main cluster can also resume processing data queued for deletion.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
UNSANDBOX_SUBCLUSTER( 'subcluster-name', 'options' )
Arguments
subcluster-name
- Identifies the subcluster to unsandbox. This must be a currently sandboxed subcluster.
options
- Currently, there are no options for this function.
Privileges
Superuser
Examples
In the following example, the function unsandboxes the sc_02 secondary subcluster from the sand sandbox. After stopping the nodes in the subcluster, you can unsandbox the subcluster by calling the UNSANDBOX_SUBCLUSTER function from the main cluster:
=> SELECT UNSANDBOX_SUBCLUSTER('sc_02', '');
UNSANDBOX_SUBCLUSTER
---------------------------------------------------------------------------------------------------------------
Subcluster 'sc_02' has been unsandboxed. If wiped out and restarted, it should be able to rejoin the cluster.
(1 row)
To rejoin the nodes to the main cluster, you must wipe the local catalog from each of the previously sandboxed nodes (you can find the catalog location by querying NODES) and then restart the nodes:
$ rm -rf paths-to-node-catalogs
$ admintools -t restart_node -s list-of-nodes -p password
After the nodes restart, you can query the NODES system table to confirm that the previously sandboxed nodes are UP and are no longer members of sand:
=> SELECT node_name, subcluster_name, node_state, sandbox FROM NODES;
node_name | subcluster_name | node_state | sandbox
----------------------+--------------------+------------+---------
v_verticadb_node0001 | default_subcluster | UP |
v_verticadb_node0002 | default_subcluster | UP |
v_verticadb_node0003 | default_subcluster | UP |
v_verticadb_node0004 | sc_01 | UNKNOWN | sand
v_verticadb_node0005 | sc_01 | UNKNOWN | sand
v_verticadb_node0006 | sc_01 | UNKNOWN | sand
v_verticadb_node0007 | sc_02 | UP |
v_verticadb_node0008 | sc_02 | UP |
v_verticadb_node0009 | sc_02 | UP |
(9 rows)
Because sc_02 was a secondary subcluster in the sandbox, you should also call the UNSANDBOX_SUBCLUSTER function in the sandbox cluster. This ensures that both clusters are aware of the state of the subcluster and that relevant system tables accurately reflect the subcluster's status:
=> SELECT UNSANDBOX_SUBCLUSTER('sc_02', '');
UNSANDBOX_SUBCLUSTER
-------------------------------------------------------------------------------------------------------------------
Subcluster 'sc_02' has been unsandboxed from 'sand'. This command should be executed in the main cluster as well.
(1 row)
If there are no more active sandboxes, you can run the CLEAN_COMMUNAL_STORAGE function to remove any data created in the sandbox. You should also remove the sandbox's metadata from the shared communal storage location, which can be found at /path-to-communal-storage/metadata/sandbox_name.
The following example removes the sandbox's metadata from an S3 bucket and then calls CLEAN_COMMUNAL_STORAGE to clean up any data from the sandbox:
$ aws s3 rm --recursive s3://path-to-communal/metadata/sandbox_name
=> SELECT CLEAN_COMMUNAL_STORAGE('true');
CLEAN_COMMUNAL_STORAGE
-----------------------------------------------------------------
CLEAN COMMUNAL STORAGE
Total leaked files: 143
Files have been queued for deletion.
Check communal_cleanup_records for more information.
(1 row)