The nodes of an Eon Mode database fetch data from communal storage as needed to process queries, and cache that data locally on disk. The cached data of all nodes within a subcluster comprise that subcluster's depot. Vertica uses depots to facilitate query execution: when processing a query, Vertica first checks the current depot for the required data. If the data is unavailable, Vertica fetches it from communal storage and saves a copy in the depot to expedite future queries. Vertica also uses the depot for load operations, caching newly loaded data in the depot before uploading it to communal storage.
Depot management
1 - Managing depot caching
You can control depot caching in several ways:
- Configure gateway parameters so a depot caches only queried data or loaded data.
- Control fetching of queried data from communal storage.
- Manage eviction of cached data.
- Enable depot warming on new and restarted nodes.
You can monitor depot activity and settings with several V_MONITOR system tables, or with the Management Console.
Note
Depot caching is supported only on primary shard subscriber nodes.
Depot gateway parameters
Vertica depots can cache two types of data:
- Queried data: The depot facilitates query execution by fetching queried data from communal storage and caching it in the depot. The cached data remains available until it is evicted to make room for fresher data, or data that is fetched for more recent queries.
- Loaded data: The depot expedites load operations such as COPY by temporarily caching data until it is uploaded to communal storage.
By default, depots are configured to cache both types of data.
Two configuration parameters determine whether a depot caches queried or loaded data:
UseDepotForReads (BOOLEAN)
If 1 (default), search the depot for the queried data and, if it is not found, fetch the data from communal storage. If 0, bypass the depot and fetch queried data from communal storage.
UseDepotForWrites (BOOLEAN)
If 1 (default), write loaded data to the depot and then upload files to communal storage. If 0, bypass the depot and write directly to communal storage.
Both parameters can be set at session, user, and database levels.
If set at session or user levels, these parameters can be used to segregate read and write activity on the depots of different subclusters. For example, parameters UseDepotForReads and UseDepotForWrites might be set as follows for users joe and rhonda:
=> SHOW USER joe ALL;
name | setting
-------------------------+---------
UseDepotForReads | 1
UseDepotForWrites | 0
(2 rows)
=> SHOW USER rhonda ALL;
name | setting
-------------------------+---------
UseDepotForReads | 0
UseDepotForWrites | 1
(2 rows)
Given these user settings, when joe connects to a Vertica subcluster, his session only uses the current depot to process queries; all load operations are uploaded to communal storage. Conversely, rhonda's sessions only use the depot to process load operations; all queries must fetch their data from communal storage.
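For reference, these user-level settings could be applied with the ALTER USER...SET PARAMETER form described later in this section. A minimal sketch, assuming users joe and rhonda already exist and you have the privileges required to alter them:
=> ALTER USER joe SET PARAMETER UseDepotForReads = 1;
ALTER USER
=> ALTER USER joe SET PARAMETER UseDepotForWrites = 0;
ALTER USER
=> ALTER USER rhonda SET PARAMETER UseDepotForReads = 0;
ALTER USER
=> ALTER USER rhonda SET PARAMETER UseDepotForWrites = 1;
ALTER USER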
Depot fetching
If a depot is enabled to cache queried data (UseDepotForReads = 1), you can configure how it fetches data from communal storage with the configuration parameter DepotOperationsForQuery. This parameter has three settings:
- ALL (default): Fetch file data from communal storage; if necessary, displace existing files by evicting them from the depot.
- FETCHES: Fetch file data from communal storage only if space is available; otherwise, read the queried data directly from communal storage.
- NONE: Do not fetch file data to the depot; read the queried data directly from communal storage.
You can set fetching behavior at four levels, in ascending levels of precedence:
- Database: ALTER DATABASE...SET PARAMETER
- Per user: ALTER USER...SET PARAMETER
- Per session: ALTER SESSION...SET PARAMETER
- Per query: DEPOT_FETCH hint
For example, you can set DepotOperationsForQuery at the database level as follows:
=> ALTER DATABASE default SET PARAMETER DepotOperationsForQuery = FETCHES;
ALTER DATABASE
This setting applies to all database depots unless overridden at other levels. For example, the following ALTER USER statement overrides database-level fetching behavior: file data is always fetched to the depot for all queries from user joe:
=> ALTER USER joe SET PARAMETER DepotOperationsForQuery = ALL;
ALTER USER
Finally, joe
can override his own DepotOperationsForQuery setting by including the DEPOT_FETCH hint in individual queries:
=> SELECT /*+DEPOT_FETCH(NONE)*/ count(*) FROM bar;
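A session-level override can also be useful for one-off workloads. The following sketch, which uses the ALTER SESSION...SET PARAMETER form listed above, bypasses depot fetching for the rest of the current session; the value is the NONE setting described earlier:
=> ALTER SESSION SET PARAMETER DepotOperationsForQuery = NONE;
ALTER SESSION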
Evicting depot data
In general, Vertica evicts data from the depot as needed to provide room for new data and expedite request processing. Before writing new data to the depot, Vertica evaluates it as follows:
- Data fetched from communal storage: Vertica sizes the download and evicts data from the depot accordingly.
- Data uploaded from a DML operation such as COPY: Vertica cannot estimate the total size of the upload before it is complete, so it sizes individual buffers and evicts data from the depot as needed.
In both cases, Vertica evicts objects from the depot in the following order, from highest eviction priority to lowest:
- Least recently used objects with an anti-pinning policy.
- Objects with an anti-pinning policy.
- Least recently used unpinned object evicted for any new object, pinned or unpinned.
- Least recently used pinned object evicted for a new pinned object. Only pinned storage can evict other pinned storage.
Depot eviction policies
Vertica supports two policy types to manage precedence of object eviction from the depot:
- Apply pinning policies to objects so Vertica is less likely to evict them from the depot than other objects.
- Apply anti-pinning policies to objects so Vertica is more likely to evict them than other objects.
You can apply either type of policy on individual subclusters, or on the entire database. Policies can apply at different levels of granularity: table, projection, and partition. Eviction policies that are set on an individual subcluster have no effect on how other subclusters handle depot object eviction.
Pinning policies
You can set pinning policies on database objects to reduce their exposure to eviction from the depot. Pinning policies can be set on individual subclusters, or on the entire database, at the table, projection, or partition level:
- Tables: SET_DEPOT_PIN_POLICY_TABLE
- Projections: SET_DEPOT_PIN_POLICY_PROJECTION
- Partitions: SET_DEPOT_PIN_POLICY_PARTITION
By default, pinned objects are queued for download from communal storage as needed to execute a query or DML operation. The SET_DEPOT_PIN_POLICY functions can override this behavior and immediately queue newly pinned objects for download: set the function's last Boolean argument to true. For example:
=> SELECT SET_DEPOT_PIN_POLICY_TABLE ('store.store_orders_fact', 'default_subcluster', true);
Tip
How soon Vertica downloads a pinned object from communal storage depends on a number of factors, including space availability and precedence of other pinned objects that are queued for download. You can force immediate download of queued objects by calling FINISH_FETCHING_FILES.
Anti-pinning policies
Vertica complements pinning policies with anti-pinning policies. Among all depot-cached objects, Vertica chooses objects with an anti-pinning policy for eviction before all others. Like pinning policies, you can set anti-pinning policies on individual subclusters or on the entire database, at the table, projection, or partition level:
- Tables: SET_DEPOT_ANTI_PIN_POLICY_TABLE
- Projections: SET_DEPOT_ANTI_PIN_POLICY_PROJECTION
- Partitions: SET_DEPOT_ANTI_PIN_POLICY_PARTITION
In some cases, object-specific anti-pinning might be preferable to depot-wide exclusions, such as setting the depot to be read- or write-only, or excluding specific users from fetching objects to the depot. For example, you might set anti-pinning on an infrequently used table to prevent it from displacing tables that are used more frequently.
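The following sketch applies an anti-pinning policy to a hypothetical, rarely queried archive table on the default_subcluster subcluster. The table name is illustrative, and the call assumes that SET_DEPOT_ANTI_PIN_POLICY_TABLE takes the same table and subcluster arguments as its pinning counterpart shown earlier:
=> SELECT SET_DEPOT_ANTI_PIN_POLICY_TABLE('store.store_orders_archive', 'default_subcluster');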
Overlapping eviction policies
If you set multiple eviction policies on a table or projection, Vertica gives precedence to the most recent policy. For example, if you issue an anti-pinning policy on a table that already has a pinning policy, Vertica favors the anti-pinning policy over the pinning policy.
If you issue partition-level eviction policies on the same partitioned table, and the key ranges of these policies overlap, Vertica acts as follows:
- If the overlapping policies are of the same type (that is, all are either anti-pinning or pinning partition policies), Vertica collates the key ranges. For example, if you create two anti-pinning partition policies with key ranges of 1-3 and 2-10, Vertica combines them into a single anti-pinning partition policy with a key range of 1-10.
- If there are overlapping pinning and anti-pinning policies, Vertica favors the newer policy, either truncating or splitting the older policy.
For example, if you create an anti-pinning partition policy and then a pinning policy with key ranges of 1-10 and 5-20, respectively, Vertica favors the newer pinning policy by truncating the earlier anti-pinning policy's key range:
 policy_type | min_value | max_value
-------------+-----------+-----------
 PIN         |         5 |        20
 ANTI_PIN    |         1 |         4
If the new pinning policy's partition range falls inside the range of an older anti-pinning policy, Vertica splits the anti-pinning policy. So, given an existing partition anti-pinning policy with a key range from 1 through 20, a new partition pinning policy with a key range from 5 through 10 splits the anti-pinning policy:
 policy_type | min_value | max_value
-------------+-----------+-----------
 ANTI_PIN    |         1 |         4
 PIN         |         5 |        10
 ANTI_PIN    |        11 |        20
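The following sketch shows one sequence of calls that could produce the split shown above. It assumes a hypothetical partitioned table store.store_sales_fact, the default_subcluster subcluster, and that the partition policy functions take the minimum and maximum partition key values as their second and third arguments:
=> SELECT SET_DEPOT_ANTI_PIN_POLICY_PARTITION('store.store_sales_fact', '1', '20', 'default_subcluster');
=> SELECT SET_DEPOT_PIN_POLICY_PARTITION('store.store_sales_fact', '5', '10', 'default_subcluster');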
Eviction policy guidelines
Pinning and anti-pinning policies give you granular control over which objects consume depot space. By pinning objects, you guarantee that those objects and their operations take precedence over operations that involve unpinned objects or objects with an anti-pinning policy. However, inefficient pinning and anti-pinning policies can increase eviction frequency and adversely affect overall performance.
To minimize contention over depot usage, consider the following guidelines:
- Pin only those objects that are most active in DML operations and queries.
- Minimize the size of pinned data by setting policies at the smallest effective level. For example, pin only the data of a table's active partition.
- Periodically review eviction policies across all database subclusters, and update as needed to optimize depot usage.
You can also use the configuration parameters UseDepotForReads or UseDepotForWrites to optimize distribution of query and load activity across database subcluster depots.
Clearing depot policies
You clear pinning and anti-pinning policies from objects with the following functions:
- Tables: CLEAR_DEPOT_PIN_POLICY_TABLE, CLEAR_DEPOT_ANTI_PIN_POLICY_TABLE
- Projections: CLEAR_DEPOT_PIN_POLICY_PROJECTION, CLEAR_DEPOT_ANTI_PIN_POLICY_PROJECTION
- Partitions: CLEAR_DEPOT_PIN_POLICY_PARTITION, CLEAR_DEPOT_ANTI_PIN_POLICY_PARTITION
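For example, the following sketch removes the table-level pinning policy that was set earlier on store.store_orders_fact. It assumes that CLEAR_DEPOT_PIN_POLICY_TABLE takes the table name and, optionally, the target subcluster, mirroring its SET counterpart:
=> SELECT CLEAR_DEPOT_PIN_POLICY_TABLE('store.store_orders_fact', 'default_subcluster');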
Depot warming
On startup, the depots of new nodes are empty, while the depots of restarted nodes often contain stale data that must be refreshed. When depot warming is enabled, a node that is undergoing startup preemptively loads its depot with frequently queried and pinned data. When the node completes startup and begins to execute queries, its depot already contains much of the data it needs to process those queries. This reduces the need to fetch data from communal storage, and expedites query performance accordingly.
Note
Fetching data to a warming depot can delay node startup.
By default, depot warming is disabled (EnableDepotWarmingFromPeers = 0). A node executes depot warming as follows:
- The node checks configuration parameter PreFetchPinnedObjectsToDepotAtStartup. If enabled (set to 1), the node:
  - Gets from the database catalog a list of all objects pinned by the subcluster.
  - Queues the pinned objects for fetching and calculates their total size.
- The node checks configuration parameter EnableDepotWarmingFromPeers. If enabled (set to 1), the node:
  - Identifies a peer node in the same subcluster whose depot contents it can copy.
  - After taking into account all pinned objects, calculates how much space remains available in the warming depot.
  - Gets from the peer node a list of the most recently used objects that can fit in the depot.
  - Queues the objects for fetching.
- If BackgroundDepotWarming is enabled (set to 1, default), the node loads queued objects into its depot while it is warming, and continues to do so in the background after the node becomes active and starts executing queries. Otherwise (BackgroundDepotWarming = 0), node activation is deferred until the depot fetches and loads all queued objects.
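The following sketch enables depot warming database-wide by setting the parameters described above with the same ALTER DATABASE...SET PARAMETER form used earlier in this section; whether to also enable prefetching of pinned objects depends on how much startup delay you can tolerate:
=> ALTER DATABASE default SET PARAMETER EnableDepotWarmingFromPeers = 1;
ALTER DATABASE
=> ALTER DATABASE default SET PARAMETER PreFetchPinnedObjectsToDepotAtStartup = 1;
ALTER DATABASE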
Monitoring the depot
You can monitor depot activity and settings with several V_MONITOR system tables.
Tip
You can also use the Management Console to monitor depot activity. For details, see Monitoring depot activity with MC.
- DATA_READS: All storage locations that a query reads to obtain data.
- DEPOT_EVICTIONS: Details about objects that were evicted from the depot.
- DEPOT_FETCH_QUEUE: Pending depot requests for queried file data to fetch from communal storage.
- DEPOT_FILES: Objects that are cached in database depots.
- DEPOT_PIN_POLICIES: Objects (tables and table partitions) that have depot eviction policies.
- DEPOT_SIZES: Depot caching capacity per node.
- DEPOT_UPLOADS: Details about depot uploads to communal storage.
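For example, a quick way to review per-node depot capacity is to query DEPOT_SIZES directly. A minimal sketch; the exact columns returned depend on your Vertica version:
=> SELECT * FROM V_MONITOR.DEPOT_SIZES;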
2 - Resizing depot caching capacity
Each node in an Eon Mode database caches depot data in a predefined storage location. The storage location path depends on your Vertica installation's filesystem. By default, each node in a cluster can use up to 60 percent of disk space on the storage location's filesystem to cache depot data. You can change caching capacity with ALTER_LOCATION_SIZE by specifying a fixed size or a percentage of total disk space. The function can specify a single node, a subcluster, or all nodes in the database cluster. You can increase depot caching capacity for each node up to 80 percent.
In the following example, ALTER_LOCATION_SIZE increases depot caching capacity to 80 percent of disk space on the storage location's filesystem. The function supplies an empty string as the second (node-name) argument, so the change applies to all nodes:
Important
By default, depot caching capacity cannot exceed 80 percent of disk space on the storage location's filesystem; attempts to set it to a higher value return an error. Vertica requires at least 20 percent of disk space for the catalog, Data Collector tables, and temp files.
=> SELECT node_name, location_label, location_path, max_size, disk_percent FROM storage_locations WHERE location_usage = 'DEPOT' ORDER BY node_name;
node_name | location_label | location_path | max_size | disk_percent
------------------+-----------------+-------------------------+-------------+--------------
v_vmart_node0001 | auto-data-depot | /home/dbadmin/verticadb | 36060108800 | 70%
v_vmart_node0002 | auto-data-depot | /home/dbadmin/verticadb | 36059377664 | 70%
v_vmart_node0003 | auto-data-depot | /home/dbadmin/verticadb | 36060108800 | 70%
(3 rows)
=> SELECT alter_location_size('depot', '','80%');
alter_location_size
---------------------
depotSize changed.
(1 row)
=> SELECT node_name, location_label, location_path, max_size, disk_percent FROM storage_locations WHERE location_usage = 'DEPOT' ORDER BY node_name;
node_name | location_label | location_path | max_size | disk_percent
------------------+-----------------+-------------------------+-------------+--------------
v_vmart_node0001 | auto-data-depot | /home/dbadmin/verticadb | 41211552768 | 80%
v_vmart_node0002 | auto-data-depot | /home/dbadmin/verticadb | 41210717184 | 80%
v_vmart_node0003 | auto-data-depot | /home/dbadmin/verticadb | 41211552768 | 80%
(3 rows)
Rescaling depot capacity
When a database is revived on an instance with more or less disk space than it had previously, Vertica evaluates the depot size settings that were previously in effect. If depot size was specified as a percentage of available disk space, Vertica proportionately rescales depot capacity. For example, if depot caching capacity for a given node was set to 70 percent, the revived node applies that setting to the new disk space and adjusts depot caching capacity accordingly. If depot capacity was set to a fixed size, Vertica applies that setting unless doing so would consume more than 80 percent of available disk space. In that case, Vertica automatically adjusts depot size as needed.
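For example, the following sketch sets a fixed depot size on a single node rather than a percentage. It reuses the node name from the earlier example and assumes that ALTER_LOCATION_SIZE accepts a size string such as '25G':
=> SELECT alter_location_size('depot', 'v_vmart_node0001', '25G');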