The functions in this section support profiling and analyzing database and query performance.
Performance analysis functions
- 1: Profiling functions
- 1.1: CLEAR_PROFILING
- 1.2: DISABLE_PROFILING
- 1.3: ENABLE_PROFILING
- 1.4: SHOW_PROFILING_CONFIG
- 2: Statistics management functions
- 2.1: ANALYZE_EXTERNAL_ROW_COUNT
- 2.2: ANALYZE_STATISTICS
- 2.3: ANALYZE_STATISTICS_PARTITION
- 2.4: DROP_EXTERNAL_ROW_COUNT
- 2.5: DROP_STATISTICS
- 2.6: DROP_STATISTICS_PARTITION
- 2.7: EXPORT_STATISTICS
- 2.8: EXPORT_STATISTICS_PARTITION
- 2.9: IMPORT_STATISTICS
- 2.10: VALIDATE_STATISTICS
- 3: Workload management functions
- 3.1: ANALYZE_WORKLOAD
- 3.2: CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY
- 3.3: CHANGE_RUNTIME_PRIORITY
- 3.4: MOVE_STATEMENT_TO_RESOURCE_POOL
- 3.5: SLEEP
1 - Profiling functions
This section contains profiling functions specific to Vertica.
1.1 - CLEAR_PROFILING
Clears from memory data for the specified profiling type.
Note
Vertica stores profiled data in memory, so profiling can be memory intensive depending on how much data you collect.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CLEAR_PROFILING( 'profiling-type' [, 'scope'] )
Parameters
profiling-type
- The type of profiling data to clear:
  - session: Clear profiling for basic session parameters and lock time out data.
  - query: Clear profiling for general information about queries that ran, such as the query strings used and the duration of queries.
  - ee: Clear profiling for information about the execution run of each query.
scope
- Specifies at what scope to clear profiling on the specified data, one of the following:
  - local: Clear profiling data for the current session.
  - global: Clear profiling data across all database sessions.
Examples
The following statement clears profiled data for queries:
=> SELECT CLEAR_PROFILING('query');
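A call of the following form uses the optional scope argument described above to clear query profiling data across all database sessions:
=> SELECT CLEAR_PROFILING('query', 'global');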
1.2 - DISABLE_PROFILING
Disables collection of profiling data of the specified type for the current session. For detailed information, see Enabling profiling.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
DISABLE_PROFILING( 'profiling-type' )
Parameters
profiling-type
- The type of profiling data to disable:
  - session: Disables profiling for basic session parameters and lock time out data.
  - query: Disables profiling for general information about queries that ran, such as the query strings used and the duration of queries.
  - ee: Disables profiling for information about the execution run of each query.
Examples
The following statement disables profiling on query execution runs:
=> SELECT DISABLE_PROFILING('ee');
DISABLE_PROFILING
-----------------------
EE Profiling Disabled
(1 row)
1.3 - ENABLE_PROFILING
Enables collection of profiling data of the specified type for the current session. For detailed information, see Enabling profiling.
Note
Vertica stores session and query profiling data in memory, so profiling can be memory intensive, depending on how much data you collect.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
ENABLE_PROFILING( 'profiling-type' )
Parameters
profiling-type
- The type of profiling data to enable:
  - session: Enable profiling for basic session parameters and lock time out data.
  - query: Enable profiling for general information about queries that ran, such as the query strings used and the duration of queries.
  - ee: Enable profiling for information about the execution run of each query.
Examples
The following statement enables profiling on query execution runs:
=> SELECT ENABLE_PROFILING('ee');
ENABLE_PROFILING
----------------------
EE Profiling Enabled
(1 row)
1.4 - SHOW_PROFILING_CONFIG
Shows whether profiling is enabled.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Stable
Syntax
SHOW_PROFILING_CONFIG ()
Examples
The following statement shows that profiling is enabled globally for all profiling types (session, execution engine, and query):
=> SELECT SHOW_PROFILING_CONFIG();
SHOW_PROFILING_CONFIG
------------------------------------------
Session Profiling: Session off, Global on
EE Profiling: Session off, Global on
Query Profiling: Session off, Global on
(1 row)
2 - Statistics management functions
This section contains Vertica functions for collecting and managing table data statistics.
2.1 - ANALYZE_EXTERNAL_ROW_COUNT
Calculates the exact number of rows in an external table. ANALYZE_EXTERNAL_ROW_COUNT runs in the background.
Note
You cannot calculate row counts on external tables with DO_TM_TASK.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
ANALYZE_EXTERNAL_ROW_COUNT ('[[[database.]schema.]table-name ]')
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table-name
- Specifies the name of the external table for which to calculate the exact row count. If you supply an empty string, Vertica calculates the exact number of rows for all external tables.
Privileges
Any INSERT/UPDATE/DELETE privilege on the external table
Examples
Calculate the exact row count for all external tables:
=> SELECT ANALYZE_EXTERNAL_ROW_COUNT('');
Calculate the exact row count for table loader_rejects:
=> SELECT ANALYZE_EXTERNAL_ROW_COUNT('loader_rejects');
2.2 - ANALYZE_STATISTICS
Collects and aggregates data samples and storage information from all nodes that store projections associated with the specified table. The function skips columns of complex data types.
By default, Vertica analyzes multiple columns in a single-query execution plan, depending on resource limits. Such multi-column analysis facilitates the following objectives:
- Reduce plan execution latency.
- Speed up analysis of relatively small tables with many columns.
Vertica writes statistics to the database catalog. The query optimizer uses this collected data to create query plans. Without this data, the query optimizer assumes uniform distribution of data values and equal storage usage for all projections.
You can cancel statistics collection with CTRL+C or by calling INTERRUPT_STATEMENT.
ANALYZE_STATISTICS is an alias of the function ANALYZE_HISTOGRAM, which is no longer documented.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
ANALYZE_STATISTICS ('[[[database.]schema.]table]' [, 'column-list' [, percent ]] )
Returns
0: Success
If an error occurs, refer to vertica.log for details.
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table on which to collect data. If set to an empty string, Vertica collects statistics for all database tables and their projections.
column-list
- Comma-delimited list of columns in table, typically predicate columns. Vertica narrows the scope of the data collection to the specified columns. Columns of complex types are not supported.
  If you alter a table to add a column and populate its contents with either default or other values, call ANALYZE_STATISTICS on this column to get the most current statistics.
percent
- A float value between 0 and 100 that specifies what percentage of data to read from disk (not the amount of data to analyze). If you omit this argument, Vertica sets the percentage to 10.
  Analyzing more than 10 percent of disk space takes proportionally longer to process, but produces a higher level of sampling accuracy.
Privileges
Non-superuser:
- Schema: USAGE
- Table: One of INSERT, DELETE, or UPDATE
Restrictions
- Vertica supports ANALYZE_STATISTICS on local and global temporary tables. In both cases, you can obtain statistics only on tables that are created with the option ON COMMIT PRESERVE ROWS. Otherwise, Vertica deletes table content on committing the current transaction, so no table data is available for analysis.
- Vertica collects no statistics from the following projections:
  - Live aggregate and Top-K projections
  - Projections that are defined to include an SQL function within an expression
- Vertica collects no statistics on columns of ARRAY, SET, or ROW types.
Examples
See Collecting table statistics.
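As an additional sketch of the syntax above, the following call narrows analysis to two predicate columns and reads 20 percent of the data from disk; the table and column names are hypothetical:
=> SELECT ANALYZE_STATISTICS('store.store_sales_fact', 'store_key,sales_quantity', 20);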
See also
ANALYZE_STATISTICS_PARTITION
2.3 - ANALYZE_STATISTICS_PARTITION
Collects and aggregates data samples and storage information for a range of partitions in the specified table. Vertica writes the collected statistics to the database catalog.
You can cancel statistics collection with CTRL+C or meta-function INTERRUPT_STATEMENT.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
ANALYZE_STATISTICS_PARTITION ('[[database.]schema.]table', 'min-range-value', 'max-range-value' [, 'column-list' [, percent ]] )
Returns
0: Success
If an error occurs, refer to vertica.log for details.
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table on which to collect data.
min-range-value, max-range-value
- Minimum and maximum value of partition keys to analyze, where min-range-value must be ≤ max-range-value. To analyze one partition, min-range-value and max-range-value must be equal.
column-list
- Comma-delimited list of columns in table, typically a predicate column. Vertica narrows the scope of the data collection to the specified columns.
percent
- Float value between 0 and 100 that specifies what percentage of data to read from disk (not the amount of data to analyze). If you omit this argument, Vertica sets the percentage to 10.
  Analyzing more than 10 percent of disk space takes proportionally longer to process, but produces a higher level of sampling accuracy.
Privileges
Non-superuser:
- Schema: USAGE
- Table: One of INSERT, DELETE, or UPDATE
Requirements and restrictions
The following requirements and restrictions apply to ANALYZE_STATISTICS_PARTITION:
- The table must be partitioned and cannot contain unpartitioned data.
- The table partition expression must specify a single column. The following expressions are supported:
  - Expressions that specify only the column—that is, partition on all column values. For example:
    PARTITION BY ship_date GROUP BY CALENDAR_HIERARCHY_DAY(ship_date, 2, 2)
  - If the column is a DATE or TIMESTAMP/TIMESTAMPTZ, the partition expression can specify a supported date/time function that returns that column or any portion of it, such as month or year. For example, the following partition expression specifies to partition on the year portion of column order_date:
    PARTITION BY YEAR(order_date)
  - Expressions that perform addition or subtraction on the column. For example:
    PARTITION BY YEAR(order_date) -1
- The table partition expression cannot coerce the specified column to another data type.
- Vertica collects no statistics from the following projections:
  - Live aggregate and Top-K projections
  - Projections that are defined to include an SQL function within an expression
Examples
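As a sketch of the syntax above, with a hypothetical table and partition key, the following call collects statistics for the single partition whose key is 2017 in a table partitioned by year:
=> SELECT ANALYZE_STATISTICS_PARTITION('store.store_orders_fact', '2017', '2017');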
2.4 - DROP_EXTERNAL_ROW_COUNT
Removes external table row count statistics compiled by ANALYZE_EXTERNAL_ROW_COUNT. DROP_EXTERNAL_ROW_COUNT runs in the background.
Caution
Statistics can be time consuming to regenerate.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
DROP_EXTERNAL_ROW_COUNT ('[[[database.]schema.]table-name ]');
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table-name
- The external table for which to remove the exact row count. If you specify an empty string, Vertica drops the exact row count statistic for all external tables.
Privileges
- INSERT/UPDATE/DELETE privilege on table
- USAGE privilege on schema that contains the table
Examples
Drop row count statistics for external table loader_rejects:
=> SELECT DROP_EXTERNAL_ROW_COUNT('loader_rejects');
See also
Collecting database statistics
2.5 - DROP_STATISTICS
Removes statistical data on database projections previously generated by ANALYZE_STATISTICS. When you drop this data, the Vertica optimizer creates query plans using default statistics.
Caution
Regenerating statistics can incur significant overhead.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
DROP_STATISTICS ('[[[database.]schema.]table]' [, 'category' [, '[column-list]']] )
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table on which to drop statistics. If set to an empty string, Vertica drops statistics for all database tables and their projections.
category
- Category of statistics to drop, one of the following:
  - ALL (default): Drop all statistics, including histograms and row counts.
  - HISTOGRAMS: Drop only histograms. Row count statistics remain.
column-list
- Comma-delimited list of columns in table, typically predicate columns. Vertica narrows the scope of dropped statistics to the specified columns. If you omit this parameter or supply an empty string, Vertica drops statistics on all columns.
Privileges
Non-superuser:
- Schema: USAGE
- Table: One of INSERT, DELETE, or UPDATE
Examples
Drop all base statistics for the table store.store_sales_fact:
=> SELECT DROP_STATISTICS('store.store_sales_fact');
DROP_STATISTICS
-----------------
0
(1 row)
Drop statistics for all table projections:
=> SELECT DROP_STATISTICS ('');
DROP_STATISTICS
-----------------
0
(1 row)
See also
DROP_STATISTICS_PARTITION
2.6 - DROP_STATISTICS_PARTITION
Removes statistical data on database projections previously generated by ANALYZE_STATISTICS_PARTITION. When you drop this data, the Vertica optimizer creates query plans using table-level statistics, if available, or default statistics.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
DROP_STATISTICS_PARTITION ('[[database.]schema.]table', '[min-range-value]', '[max-range-value]' [, category [, '[column-list]']] )
Parameters
[database.]schema
- Database and schema. The default schema is public. If you specify a database, it must be the current database.
table
- Table on which to drop statistics.
min-range-value, max-range-value
- The minimum and maximum value of partition keys on which to drop statistics, where min-range-value must be ≤ max-range-value. If you supply empty strings for both parameters, Vertica drops all partition-level statistics for this table or the specified columns.
  Important
  The range of keys to drop must be equal to, or a superset of, the full range of partitions previously analyzed by ANALYZE_STATISTICS_PARTITION. If the range omits any analyzed partition, DROP_STATISTICS_PARTITION drops no statistics.
category
- The category of statistics to drop, one of the following:
  - BASE (default): Drop histograms and row counts (min/max column values, histogram).
  - HISTOGRAMS: Drop only histograms. Row count statistics remain.
  - ALL: Drop all statistics.
column-list
- A comma-delimited list of columns in table, typically predicate columns. Vertica narrows the scope of dropped statistics to the specified columns. If you omit this parameter or supply an empty string, Vertica drops statistics on all columns.
Privileges
Non-superuser:
- Schema: USAGE
- Table: One of INSERT, DELETE, or UPDATE
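Examples
The following call is a sketch of the syntax above with a hypothetical table and partition keys. With the default BASE category, it drops partition-level statistics for partition keys 2014 through 2016:
=> SELECT DROP_STATISTICS_PARTITION('store.store_orders_fact', '2014', '2016');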
See also
DROP_STATISTICS
2.7 - EXPORT_STATISTICS
Generates statistics in XML format from data previously collected by ANALYZE_STATISTICS. Before you export statistics, collect the latest data by calling ANALYZE_STATISTICS.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Stable
Syntax
EXPORT_STATISTICS ('[ filename ]' [,'table-spec' [,'column[,...]']])
Arguments
filename
- Specifies where to write the generated XML. If filename already exists, EXPORT_STATISTICS overwrites it. If you supply an empty string, EXPORT_STATISTICS writes the XML to standard output.
table-spec
- Specifies the table on which to export projection statistics:
  [[database.]schema.]table
  The default schema is public. If you specify a database, it must be the current database.
  If table-spec is omitted or set to an empty string, Vertica exports all statistics for the database.
column
- The name of a column in table-spec, typically a predicate column. You can specify multiple comma-delimited columns. Vertica narrows the scope of exported statistics to the specified columns.
Privileges
Superuser
Restrictions
EXPORT_STATISTICS does not export statistics for LONG data type columns.
Examples
The following statement exports statistics on the VMart example database to a file:
=> SELECT EXPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/vmart_stats.xml');
EXPORT_STATISTICS
-----------------------------------
Statistics exported successfully
(1 row)
The next statement exports statistics on a single column (price) from a table named food:
=> SELECT EXPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/price.xml', 'food.price');
EXPORT_STATISTICS
-----------------------------------
Statistics exported successfully
(1 row)
2.8 - EXPORT_STATISTICS_PARTITION
Generates partition-level statistics in XML format from data previously collected by ANALYZE_STATISTICS_PARTITION.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Stable
Syntax
EXPORT_STATISTICS_PARTITION ('[ filename ]', 'table-spec', 'min-range-value', 'max-range-value' [, 'column[,...]'] )
Arguments
filename
- Specifies where to write the generated XML. If filename already exists, EXPORT_STATISTICS_PARTITION overwrites it. If you supply an empty string, the function writes to standard output.
table-spec
- Specifies the table on which to export partition statistics:
  [[database.]schema.]table
  The default schema is public. If you specify a database, it must be the current database.
min-range-value, max-range-value
- The minimum and maximum value of partition keys on which to export statistics, where min-range-value must be ≤ max-range-value.
  Important
  The range of keys to export must be equal to, or a superset of, the full range of partitions previously analyzed by ANALYZE_STATISTICS_PARTITION. If the range omits any analyzed partition, EXPORT_STATISTICS_PARTITION exports no statistics.
column
- The name of a column in table, typically a predicate column. You can specify multiple comma-delimited columns. Vertica narrows the scope of exported statistics to the specified columns.
Privileges
Superuser
Restrictions
EXPORT_STATISTICS_PARTITION does not export statistics for LONG data type columns.
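Examples
The following call is a sketch of the syntax above, with a hypothetical output file, table, and partition keys. It exports statistics previously collected by ANALYZE_STATISTICS_PARTITION for partition keys 2014 through 2016:
=> SELECT EXPORT_STATISTICS_PARTITION('/tmp/store_orders_stats.xml', 'store.store_orders_fact', '2014', '2016');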
See also
EXPORT_STATISTICS
2.9 - IMPORT_STATISTICS
Imports statistics from the XML file that was generated by EXPORT_STATISTICS. Imported statistics override existing statistics for the projections that are referenced in the XML file.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Stable
Syntax
IMPORT_STATISTICS ( 'filename' )
Parameters
filename
- The path and name of an XML input file that was generated by EXPORT_STATISTICS.
Privileges
Superuser
Restrictions
- IMPORT_STATISTICS imports only valid statistics. If the source XML file has invalid statistics for a specific column, those statistics are not imported and Vertica throws a warning. If the statistics file has an invalid structure, the import operation fails. To check a statistics file for validity, run VALIDATE_STATISTICS.
- IMPORT_STATISTICS returns warnings for LONG data type columns, as the source XML file generated by EXPORT_STATISTICS contains no statistics for columns of that type.
Examples
Import the statistics for the VMart database from an XML file previously created by EXPORT_STATISTICS:
=> SELECT IMPORT_STATISTICS('/opt/vertica/examples/VMart_Schema/vmart_stats.xml');
IMPORT_STATISTICS
----------------------------------------------------------------------------
Importing statistics for projection date_dimension_super column date_key failure (stats did not contain row counts)
Importing statistics for projection date_dimension_super column date failure (stats did not contain row counts)
Importing statistics for projection date_dimension_super column full_date_description failure (stats did not contain row counts)
...
(1 row)
2.10 - VALIDATE_STATISTICS
Validates statistics in the XML file generated by EXPORT_STATISTICS.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Stable
Syntax
VALIDATE_STATISTICS ( 'XML-file' )
Parameters
XML-file
- The path and name of the XML file that contains the statistics to validate.
Privileges
Superuser
Reporting valid statistics
The following example shows the results when the statistics are valid:
=> SELECT EXPORT_STATISTICS('cust_dim_stats.xml','customer_dimension');
EXPORT_STATISTICS
-----------------------------------
Statistics exported successfully
(1 row)
=> SELECT VALIDATE_STATISTICS('cust_dim_stats.xml');
VALIDATE_STATISTICS
---------------------
(1 row)
Identifying invalid statistics
If VALIDATE_STATISTICS is unable to read a document's XML, it throws this error:
=> SELECT VALIDATE_STATISTICS('/home/dbadmin/stats.xml');
VALIDATE_STATISTICS
----------------------------------------------------------------------------
Error validating statistics file: At line 1:1. Invalid document structure
(1 row)
If some table statistics are invalid, VALIDATE_STATISTICS returns a report that identifies them. In the following example, the function reports that attributes distinct, buckets, rows, count, and distinctCount cannot be negative numbers.
=> SELECT VALIDATE_STATISTICS('/stats.xml');
WARNING 0: Invalid value '-1' for attribute 'distinct' under column 'public.t.x'.
Please use a positive value.
WARNING 0: Invalid value '-1' for attribute 'buckets' under column 'public.t.x'.
Please use a positive value.
WARNING 0: Invalid value '-1' for attribute 'rows' under column 'public.t.x'.
Please use a positive value.
WARNING 0: Invalid value '-1' for attribute 'count' under bound '1', column 'public.t.x'.
Please use a positive value.
WARNING 0: Invalid value '-1' for attribute 'distinctCount' under bound '1', column 'public.t.x'.
Please use a positive value.
VALIDATE_STATISTICS
---------------------
(1 row)
In this case, run ANALYZE_STATISTICS on the table again to create valid statistics.
3 - Workload management functions
This section contains workload management functions specific to Vertica.
3.1 - ANALYZE_WORKLOAD
Runs Workload Analyzer, a utility that analyzes system information held in system tables.
Workload Analyzer intelligently monitors the performance of SQL queries and workload history, resources, and configurations to identify the root causes for poor query performance. ANALYZE_WORKLOAD returns tuning recommendations for all events within the scope and time that you specify, from system table TUNING_RECOMMENDATIONS.
Tuning recommendations are based on a combination of statistics, system and data collector events, and database-table-projection design. Workload Analyzer recommendations can help you quickly and easily tune query performance.
See Workload analyzer recommendations for the common triggering conditions and recommendations.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
ANALYZE_WORKLOAD ( '[ scope ]' [, 'since-time' | save-data ] );
Parameters
scope
- Specifies the catalog objects to analyze, as follows:
  [[database.]schema.]table
  If set to an empty string, Vertica returns recommendations for all database objects.
  If you specify a database, it must be the current database.
since-time
- Specifies the start time for the analysis time span, which continues up to the current system status, inclusive. If you omit this parameter, ANALYZE_WORKLOAD returns recommendations on events since the last time you called this function.
  Note
  You must explicitly cast strings to TIMESTAMP or TIMESTAMPTZ. For example:
  SELECT ANALYZE_WORKLOAD('T1', '2010-10-04 11:18:15'::TIMESTAMPTZ);
  SELECT ANALYZE_WORKLOAD('T1', TIMESTAMPTZ '2010-10-04 11:18:15');
save-data
- Specifies whether to save returned values from ANALYZE_WORKLOAD:
  - false (default): Results are discarded.
  - true: Saves the results returned by ANALYZE_WORKLOAD. Subsequent calls to ANALYZE_WORKLOAD return results that start from the last invocation when results were saved. Object events preceding that invocation are ignored.
Return values
Returns aggregated tuning recommendations from TUNING_RECOMMENDATIONS.
Privileges
Superuser
Examples
See Getting tuning recommendations.
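As an additional sketch of the syntax above, the following call analyzes all database objects and saves the returned results, so that a subsequent call reports only events that occur after this invocation:
=> SELECT ANALYZE_WORKLOAD('', true);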
3.2 - CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY
Changes the run-time priority of an active query.
Note
This function replaces deprecated function CHANGE_RUNTIME_PRIORITY.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY(transaction-id, 'value')
Parameters
transaction-id
- Identifies the transaction, obtained from the system table SESSIONS.
value
- The RUNTIMEPRIORITY value: HIGH, MEDIUM, or LOW.
Privileges
- Superuser: None
- Non-superusers can only change the runtime priority of their own queries, and cannot raise the runtime priority of a query to a level higher than that of the resource pool.
Examples
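As a sketch of the documented syntax, the following statements first look up the transaction ID of an active query in the SESSIONS system table, then lower the run-time priority of that transaction's current statement; the transaction ID shown is illustrative:
=> SELECT transaction_id, statement_id FROM SESSIONS;
=> SELECT CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY(45035996273705748, 'low');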
3.3 - CHANGE_RUNTIME_PRIORITY
Changes the run-time priority of a query that is actively running. Note that, while this function is still valid, you should instead use CHANGE_CURRENT_STATEMENT_RUNTIME_PRIORITY to change run-time priority. CHANGE_RUNTIME_PRIORITY will be deprecated in a future release of Vertica.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
CHANGE_RUNTIME_PRIORITY(TRANSACTION_ID, STATEMENT_ID, 'value')
Parameters
TRANSACTION_ID
- An identifier for the transaction within the session. TRANSACTION_ID cannot be NULL.
  You can find the transaction ID in the Sessions table.
STATEMENT_ID
- A unique numeric ID assigned by the Vertica catalog, which identifies the currently executing statement.
  You can find the statement ID in the Sessions table.
  You can specify NULL to change the run-time priority of the currently running query within the transaction.
'value'
- The RUNTIMEPRIORITY value. Can be HIGH, MEDIUM, or LOW.
Privileges
No special privileges required. However, non-superusers can change the run-time priority of their own queries only. In addition, non-superusers can never raise the run-time priority of a query to a level higher than that of the resource pool.
Examples
=> SELECT CHANGE_RUNTIME_PRIORITY(45035996273705748, NULL, 'low');
3.4 - MOVE_STATEMENT_TO_RESOURCE_POOL
Attempts to move the specified query to the specified target pool.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
MOVE_STATEMENT_TO_RESOURCE_POOL (session_id, transaction_id, statement_id, target_resource_pool_name)
Parameters
session_id
- Identifier for the session where the query you want to move is currently executing.
transaction_id
- Identifier for the transaction within the session.
statement_id
- Unique numeric ID for the statement you want to move.
target_resource_pool_name
- Name of the existing resource pool to which you want to move the specified query.
Outputs
The function may return the following results:
- MOV_REPLAN: Target pool does not have sufficient resources. See v_monitor.resource_pool_move for details. Vertica will attempt to replan the statement on target pool.
- MOV_REPLAN: Target pool has priority HOLD. Vertica will attempt to replan the statement on target pool.
- MOV_FAILED: Statement not found.
- MOV_NO_OP: Statement already on target pool.
- MOV_REPLAN: Statement is in queue. Vertica will attempt to replan the statement on target pool.
- MOV_SUCC: Statement successfully moved to target pool.
Privileges
Superuser
Examples
The following example shows how you can move a specific statement to a resource pool called my_target_pool:
=> SELECT MOVE_STATEMENT_TO_RESOURCE_POOL ('v_vmart_node0001.example.-31427:0x82fbm', 45035996273711993, 1, 'my_target_pool');
3.5 - SLEEP
Waits a specified number of seconds before executing another statement or command.
This is a meta-function. You must call meta-functions in a top-level SELECT statement.
Behavior type
Volatile
Syntax
SLEEP( seconds )
Parameters
seconds
- The wait time, specified in one or more seconds (0 or higher), expressed as a positive integer. Single quotes are optional; for example, SLEEP(3) is the same as SLEEP('3').
Notes
- This function returns value 0 when successful; otherwise it returns an error message due to syntax errors.
- You cannot cancel a sleep operation.
- Be cautious when using SLEEP() in an environment with shared resources, such as in combination with transactions that take exclusive locks.
Examples
The following command suspends execution for 100 seconds:
=> SELECT SLEEP(100);
sleep
-------
0
(1 row)