<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Best practices for managing workload resources</title>
    <link>/en/admin/managing-db/managing-workloads/workload-best-practices/</link>
    <description>Recent content in Best practices for managing workload resources on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/admin/managing-db/managing-workloads/workload-best-practices/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Admin: Basic principles for scalability and concurrency tuning</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/scalability-concurrency-tuning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/scalability-concurrency-tuning/</guid>
      <description>
        
        
        &lt;p&gt;A Vertica database runs on a cluster of commodity hardware. All loads and queries running against the database consume system resources, such as CPU, memory, disk I/O bandwidth, and file handles. The performance (run time) of a given query depends on how many resources it is allocated.&lt;/p&gt;
&lt;p&gt;When more than one query runs concurrently, the queries share system resources; each query can therefore take longer to run than if it were running by itself. In an efficient and scalable system, if a query takes up all the resources on the machine and runs in X time, then running two such queries doubles the run time of each to 2X. If each query runs in more than 2X, the system is not linearly scalable; if each runs in less than 2X, the single query was wasteful in its use of resources. This holds as long as the query obtains the minimum resources necessary to run and is limited by CPU cycles. If instead the system becomes bottlenecked and a query cannot get enough of a particular resource to run, then the system has reached a limit. To increase concurrency in such cases, you must expand the system by adding more of that resource.&lt;/p&gt;
&lt;p&gt;In practice, Vertica should achieve near-linear scalability in run times as concurrency increases, until a system resource limit is reached. When adequate concurrency is reached without hitting bottlenecks, the system can be considered ideally sized for the workload.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Typically Vertica queries on segmented tables run on multiple (likely all) nodes of the cluster. Adding more nodes generally improves the run time of the query almost linearly.

&lt;/div&gt;&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Setting a runtime limit for queries</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/setting-runtime-limit-queries/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/setting-runtime-limit-queries/</guid>
      <description>
        
        
        &lt;p&gt;You can set a limit for the amount of time a query is allowed to run. You can set this limit at three levels, listed in descending order of precedence:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The resource pool to which the user is assigned.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;User profile with &lt;code&gt;RUNTIMECAP&lt;/code&gt; configured by 
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/create-statements/create-user/#&#34;&gt;CREATE USER&lt;/a&gt;&lt;/code&gt;/
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/alter-statements/alter-user/#&#34;&gt;ALTER USER&lt;/a&gt;&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Session queries, set by 
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/set-statements/set-session-runtimecap/#&#34;&gt;SET SESSION RUNTIMECAP&lt;/a&gt;&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In all cases, you set the runtime limit with an &lt;a href=&#34;../../../../../en/sql-reference/language-elements/literals/datetime-literals/interval-literal/&#34;&gt;interval&lt;/a&gt; value that does not exceed one year. When you set the runtime limit at multiple levels, Vertica always uses the shortest value. If a runtime limit is set for a non-superuser, that user cannot set any session to a longer runtime limit. Superusers can set the runtime limit for other users, and for their own sessions, to any value up to one year, inclusive.&lt;/p&gt;
&lt;h2 id=&#34;example&#34;&gt;Example&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;user1&lt;/code&gt; is assigned to the &lt;code&gt;ad_hoc_queries&lt;/code&gt; resource pool:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE USER user1 RESOURCE POOL ad_hoc_queries;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;RUNTIMECAP&lt;/code&gt; for &lt;code&gt;user1&lt;/code&gt; is set to 1 hour:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER USER user1 RUNTIMECAP &amp;#39;60 minutes&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;RUNTIMECAP&lt;/code&gt; for the &lt;code&gt;ad_hoc_queries&lt;/code&gt; resource pool is set to 30 minutes:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER RESOURCE POOL ad_hoc_queries RUNTIMECAP &amp;#39;30 minutes&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In this example, Vertica terminates &lt;code&gt;user1&lt;/code&gt;&#39;s queries if they exceed 30 minutes. Although &lt;code&gt;user1&lt;/code&gt;&#39;s runtime limit is set to one hour, the pool on which the query runs, with its 30-minute runtime limit, takes precedence.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

If a secondary pool for the &lt;code&gt;ad_hoc_queries&lt;/code&gt; pool is specified with the &lt;code&gt;CASCADE TO&lt;/code&gt; parameter, the query executes on that pool when the &lt;code&gt;RUNTIMECAP&lt;/code&gt; on the &lt;code&gt;ad_hoc_queries&lt;/code&gt; pool is exceeded.

&lt;/div&gt;&lt;/p&gt;
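&lt;p&gt;To round out the example with the session level, &lt;code&gt;user1&lt;/code&gt; could lower the limit for the current session as follows; the value shown is hypothetical, and a non-superuser cannot set a session limit longer than their user-level &lt;code&gt;RUNTIMECAP&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SET SESSION RUNTIMECAP &amp;#39;10 minutes&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;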
&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/sql-reference/system-tables/v-catalog-schema/resource-pools/#&#34;&gt;RESOURCE_POOLS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/resource-pool-architecture/defining-secondary-resource-pools/#&#34;&gt;Defining secondary resource pools&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Handling session socket blocking</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/handling-session-socket-blocking/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/handling-session-socket-blocking/</guid>
      <description>
        
        
        &lt;p&gt;A session socket can be blocked while awaiting client input or output for a given query. Session sockets are typically blocked for numerous reasons—for example, when the Vertica execution engine transmits data to the client, or a 
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/copy-local/#&#34;&gt;COPY LOCAL&lt;/a&gt;&lt;/code&gt; operation awaits load data from the client.&lt;/p&gt;
&lt;p&gt;In rare cases, a session socket can remain blocked indefinitely. For example, a query times out on the client, which tries to forcibly cancel the query, or relies on the session &lt;a href=&#34;../../../../../en/sql-reference/statements/set-statements/set-session-runtimecap/&#34;&gt;&lt;code&gt;RUNTIMECAP&lt;/code&gt; setting&lt;/a&gt; to terminate it. In either case, if the query ends while awaiting messages or data, the socket can remain blocked and the session can hang until it is forcibly closed.&lt;/p&gt;
&lt;h2 id=&#34;configuring-a-grace-period&#34;&gt;Configuring a grace period&lt;/h2&gt;
&lt;p&gt;You can configure the system with a grace period, during which a lagging client or server can catch up and deliver a pending response. If the socket is blocked for a continuous period that exceeds the grace period setting, the server shuts down the socket and throws a fatal error. The session is then terminated. If no grace period is set, the query can maintain its block on the socket indefinitely.&lt;/p&gt;
&lt;p&gt;You should set the session grace period high enough to cover an acceptable range of latency and avoid closing sessions prematurely—for example, normal client-side delays in responding to the server. Very large load operations might require you to adjust the session grace period as needed.&lt;/p&gt;
&lt;p&gt;You can set the grace period at four levels, listed in descending order of precedence:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Session (highest)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;User&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Node&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Database&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;setting-grace-periods-for-the-database-and-nodes&#34;&gt;Setting grace periods for the database and nodes&lt;/h2&gt;
&lt;p&gt;At the database and node levels, you set the grace period to any &lt;a href=&#34;../../../../../en/sql-reference/language-elements/literals/datetime-literals/interval-literal/&#34;&gt;interval&lt;/a&gt; up to 20 days, through configuration parameter &lt;code&gt;BlockedSocketGracePeriod&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;

&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/alter-statements/alter-db/#&#34;&gt;ALTER DATABASE&lt;/a&gt; &lt;span class=&#34;code-variable&#34;&gt;db-name&lt;/span&gt; SET BlockedSocketGracePeriod = &#39;&lt;span class=&#34;code-variable&#34;&gt;interval&lt;/span&gt;&#39;;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;

&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/alter-statements/alter-node/#&#34;&gt;ALTER NODE&lt;/a&gt; &lt;span class=&#34;code-variable&#34;&gt;node-name&lt;/span&gt; SET BlockedSocketGracePeriod = &#39;&lt;span class=&#34;code-variable&#34;&gt;interval&lt;/span&gt;&#39;;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By default, the grace period for both levels is set to an empty string, which allows unlimited blocking.&lt;/p&gt;
&lt;h2 id=&#34;setting-grace-periods-for-users-and-sessions&#34;&gt;Setting grace periods for users and sessions&lt;/h2&gt;
&lt;p&gt;You can set the grace period for individual users and for a given session, as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;

&lt;code&gt;{ &lt;a href=&#34;../../../../../en/sql-reference/statements/create-statements/create-user/#&#34;&gt;CREATE&lt;/a&gt; | &lt;a href=&#34;../../../../../en/sql-reference/statements/alter-statements/alter-user/#&#34;&gt;ALTER USER&lt;/a&gt; } &lt;span class=&#34;code-variable&#34;&gt;user-name&lt;/span&gt; GRACEPERIOD {&#39;&lt;span class=&#34;code-variable&#34;&gt;interval&lt;/span&gt;&#39; | NONE };&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;

&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/set-statements/set-session-graceperiod/#&#34;&gt;SET SESSION GRACEPERIOD&lt;/a&gt; { &#39;&lt;span class=&#34;code-variable&#34;&gt;interval&lt;/span&gt;&#39; | = DEFAULT | NONE };&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A user can set a session to any interval equal to or less than the grace period set for that user. Superusers can set the grace period for other users, and for their own sessions, to any value up to 20 days, inclusive.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;Superuser &lt;code&gt;dbadmin&lt;/code&gt; sets the database grace period to 6 hours. This limit applies only to non-superusers. &lt;code&gt;dbadmin&lt;/code&gt; can set the session grace period for her own sessions to any value up to 20 days; in this case, 10 hours:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER DATABASE VMart SET BlockedSocketGracePeriod = &amp;#39;6 hours&amp;#39;;
ALTER DATABASE
=&amp;gt; SHOW CURRENT BlockedSocketGracePeriod;
  level   |           name           | setting
----------+--------------------------+---------
 DATABASE | BlockedSocketGracePeriod | 6 hours
(1 row)

=&amp;gt; SET SESSION GRACEPERIOD &amp;#39;10 hours&amp;#39;;
SET
=&amp;gt; SHOW GRACEPERIOD;
    name     | setting
-------------+---------
 graceperiod | 10:00
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;dbadmin&lt;/code&gt; creates user &lt;code&gt;user777&lt;/code&gt; with no grace period setting. Thus, the effective grace period for &lt;code&gt;user777&lt;/code&gt; is derived from the database setting of &lt;code&gt;BlockedSocketGracePeriod&lt;/code&gt;, which is 6 hours. Any attempt by &lt;code&gt;user777&lt;/code&gt; to set the session grace period to a value greater than 6 hours returns an error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
=&amp;gt; CREATE USER user777;
=&amp;gt; \c - user777
You are now connected as user &amp;#34;user777&amp;#34;.
=&amp;gt; SHOW GRACEPERIOD;
    name     | setting
-------------+---------
 graceperiod | 06:00
(1 row)

=&amp;gt; SET SESSION GRACEPERIOD &amp;#39;7 hours&amp;#39;;
ERROR 8175:  The new period 07:00 would exceed the database limit of 06:00
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;dbadmin&lt;/code&gt; sets a grace period of 5 minutes for &lt;code&gt;user777&lt;/code&gt;. Now, &lt;code&gt;user777&lt;/code&gt; can set the session grace period to any value equal to or less than the user-level setting:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
=&amp;gt; \c
You are now connected as user &amp;#34;dbadmin&amp;#34;.
=&amp;gt; ALTER USER user777 GRACEPERIOD &amp;#39;5 minutes&amp;#39;;
ALTER USER
=&amp;gt; \c - user777
You are now connected as user &amp;#34;user777&amp;#34;.
=&amp;gt; SET SESSION GRACEPERIOD &amp;#39;6 minutes&amp;#39;;
ERROR 8175:  The new period 00:06 would exceed the user limit of 00:05
=&amp;gt; SET SESSION GRACEPERIOD &amp;#39;4 minutes&amp;#39;;
SET
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Admin: Managing workloads with resource pools and user profiles</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/managing-workloads-with-resource-pools-and-user-profiles/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/managing-workloads-with-resource-pools-and-user-profiles/</guid>
      <description>
        
        
        &lt;p&gt;The scenarios in this section describe common workload-management issues, and provide solutions with examples.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Tuning built-in pools</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/</guid>
      <description>
        
        
        &lt;p&gt;The scenarios in this section describe how to tune built-in pools.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/restricting-to-take-only-60-of-memory/#&#34;&gt;Restricting Vertica to take only 60% of memory&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/tuning-recovery/#&#34;&gt;Tuning for recovery&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/tuning-refresh/#&#34;&gt;Tuning for refresh&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/tuning-tuple-mover-pool-settings/#&#34;&gt;Tuning tuple mover pool settings&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/tuning-ml/#&#34;&gt;Tuning for machine learning&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Admin: Reducing query run time</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/reducing-query-run-time/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/reducing-query-run-time/</guid>
      <description>
        
        
        &lt;p&gt;Query run time depends on the complexity of the query, the number of operators in the plan, data volumes, and projection design. I/O or CPU bottlenecks can cause queries to run slower than expected. You can often remedy high CPU usage with &lt;a href=&#34;../../../../../en/admin/configuring-db/creating-db-design/creating-custom-designs/&#34;&gt;better projection design&lt;/a&gt;. High I/O can often be traced to contention caused by joins and sorts that spill to disk. However, no single solution addresses all queries that incur high CPU or I/O usage. You must analyze and tune each query individually.&lt;/p&gt;
&lt;p&gt;You can evaluate a slow-running query in two ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Prefix the query with 
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/statements/explain/#&#34;&gt;EXPLAIN&lt;/a&gt;&lt;/code&gt; to view the optimizer&#39;s query plan.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Examine the execution profile by querying system tables 
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/system-tables/v-monitor-schema/query-consumption/#&#34;&gt;QUERY_CONSUMPTION&lt;/a&gt;&lt;/code&gt; or 
&lt;code&gt;&lt;a href=&#34;../../../../../en/sql-reference/system-tables/v-monitor-schema/execution-engine-profiles/#&#34;&gt;EXECUTION_ENGINE_PROFILES&lt;/a&gt;&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
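&lt;p&gt;For example, assuming a hypothetical table &lt;code&gt;public.sales&lt;/code&gt;, you might first view the plan and then inspect recent resource usage (the &lt;code&gt;QUERY_CONSUMPTION&lt;/code&gt; columns shown are a subset):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; EXPLAIN SELECT customer_id, SUM(amount) FROM public.sales GROUP BY customer_id;
=&amp;gt; SELECT transaction_id, statement_id, cpu_cycles_us
   FROM query_consumption ORDER BY end_time DESC LIMIT 5;
&lt;/code&gt;&lt;/pre&gt;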
&lt;p&gt;Examining the query plan can reveal one or more of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Suboptimal projection sort order&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Predicate evaluation on an unsorted or unencoded column&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use of &lt;code&gt;GROUPBY HASH&lt;/code&gt; instead of &lt;code&gt;GROUPBY PIPE&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;profiling&#34;&gt;Profiling&lt;/h2&gt;
&lt;p&gt;Vertica provides profiling mechanisms that help you evaluate database performance at different levels. For example, you can collect profiling data for a single statement, a single session, or for all sessions on all nodes. For details, see &lt;a href=&#34;../../../../../en/admin/profiling-db-performance/#&#34;&gt;Profiling database performance&lt;/a&gt;.&lt;/p&gt;
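&lt;p&gt;For example, to profile a single statement, prefix it with &lt;code&gt;PROFILE&lt;/code&gt;; the hint that Vertica returns identifies the transaction and statement IDs to use when querying &lt;code&gt;EXECUTION_ENGINE_PROFILES&lt;/code&gt;. The table in this query is hypothetical:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; PROFILE SELECT customer_id, SUM(amount) FROM public.sales GROUP BY customer_id;
&lt;/code&gt;&lt;/pre&gt;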

      </description>
    </item>
    
    <item>
      <title>Admin: Managing workload resources in an Eon Mode database</title>
      <link>/en/admin/managing-db/managing-workloads/workload-best-practices/managing-workload-resources-an-eon-db/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/managing-workloads/workload-best-practices/managing-workload-resources-an-eon-db/</guid>
      <description>
        
        
        &lt;p&gt;You primarily control workloads in an Eon Mode database using subclusters. For example, you can create subclusters for specific use cases, such as ETL or query workloads, or you can create subclusters for different groups of users to isolate workloads. Within each subcluster, you can create individual resource pools to optimize resource allocation according to workload. See &lt;a href=&#34;../../../../../en/eon/managing-subclusters/#&#34;&gt;Managing subclusters&lt;/a&gt; for more information about how Vertica uses subclusters.&lt;/p&gt;
&lt;h2 id=&#34;global-and-subcluster-specific-resource-pools&#34;&gt;Global and subcluster-specific resource pools&lt;/h2&gt;
&lt;p&gt;You can define global resource pool allocations that affect all nodes in the database. You can also create resource pool allocations at the subcluster level. If you create both, the subcluster-level settings override the global settings.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The &lt;span class=&#34;sql&#34;&gt;GENERAL&lt;/span&gt; pool requires at least 25% of available memory to function properly. If you attempt to set &lt;span class=&#34;sql&#34;&gt;MEMORYSIZE&lt;/span&gt; for a user-defined resource pool to more than 75%, Vertica returns an error.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;You can use this feature to remove global resource pools that the subcluster does not need. Additionally, you can create a resource pool with settings that are adequate for most subclusters, and then tailor the settings for specific subclusters as needed.&lt;/p&gt;
&lt;h2 id=&#34;optimizing-etl-and-query-subclusters&#34;&gt;Optimizing ETL and query subclusters&lt;/h2&gt;
&lt;p&gt;Overriding resource pool settings at the subcluster level allows you to isolate built-in and user-defined resource pools and optimize them by workload. You often assign specific roles to different subclusters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Subclusters dedicated to ETL workloads and DDL statements that alter the database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Subclusters dedicated to running in-depth, long-running analytics queries. These queries need more resources allocated for the best performance.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Subclusters that run many short-running &amp;quot;dashboard&amp;quot; queries that you want to finish quickly and run in parallel.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After you define the type of queries executed by each subcluster, you can create a subcluster-specific resource pool that is optimized to improve efficiency for that workload.&lt;/p&gt;
&lt;p&gt;The following scenario optimizes three subclusters by workload:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;etl: A subcluster that performs ETL that you want to optimize for Tuple Mover operations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;dashboard: A subcluster that you want to designate for short-running queries executed by a large number of users to refresh a web page.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;analytics: A subcluster that you want to designate for long-running queries.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/#&#34;&gt;Best practices for managing workload resources&lt;/a&gt; for additional scenarios about resource pool tuning.&lt;/p&gt;
&lt;h3 id=&#34;configure-an-etl-subcluster-to-improve-tm-performance&#34;&gt;Configure an ETL subcluster to improve TM performance&lt;/h3&gt;
&lt;p&gt;To execute a mergeout, Vertica chooses the subcluster whose depot contains the most ROS containers involved in the mergeout operation (see &lt;a href=&#34;../../../../../en/admin/managing-db/tuple-mover/#The&#34;&gt;The Tuple Mover in Eon Mode Databases&lt;/a&gt;). A subcluster performing ETL is often the best candidate for a mergeout, because the data it loaded is involved in that mergeout. You can improve the performance of mergeout operations on a subcluster by increasing the TM pool&#39;s &lt;span class=&#34;sql&#34;&gt;MAXCONCURRENCY&lt;/span&gt; setting, which raises the number of threads available for mergeout operations. You cannot change this setting at the subcluster level, so you must set it globally:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER RESOURCE POOL TM MAXCONCURRENCY 10;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See &lt;a href=&#34;../../../../../en/admin/managing-db/managing-workloads/workload-best-practices/tuning-built-pools/tuning-tuple-mover-pool-settings/#&#34;&gt;Tuning tuple mover pool settings&lt;/a&gt; for additional information about Tuple Mover resources.&lt;/p&gt;
&lt;h3 id=&#34;configure-the-dashboard-query-subcluster&#34;&gt;Configure the dashboard query subcluster&lt;/h3&gt;
&lt;p&gt;By default, secondary subclusters have memory allocated to Tuple Mover resource pools. This pool setting allows Vertica to assign mergeout operations to the subcluster, which can add a small overhead. If you primarily use a secondary subcluster for queries, the best practice is to reclaim the memory used by the TM pool and prevent mergeout operations from being assigned to the subcluster.&lt;/p&gt;
&lt;p&gt;To optimize your dashboard query secondary subcluster, set its TM pool&#39;s &lt;span class=&#34;sql&#34;&gt;MEMORYSIZE&lt;/span&gt; and &lt;span class=&#34;sql&#34;&gt;MAXMEMORYSIZE&lt;/span&gt; settings to 0:&lt;/p&gt;
&lt;p&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; ALTER RESOURCE POOL TM FOR SUBCLUSTER dashboard MEMORYSIZE &amp;#39;0%&amp;#39;
   MAXMEMORYSIZE &amp;#39;0%&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Do not set the TM pool&#39;s MEMORYSIZE and MAXMEMORYSIZE settings to 0 on &lt;a class=&#34;glosslink&#34; href=&#34;../../../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subclusters&lt;/a&gt;. They must always be able to run the Tuple Mover.
&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;To confirm the overrides, query the &lt;a href=&#34;../../../../../en/sql-reference/system-tables/v-catalog-schema/subcluster-resource-pool-overrides/#&#34;&gt;SUBCLUSTER_RESOURCE_POOL_OVERRIDES&lt;/a&gt; table:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT pool_oid, name, subcluster_name, memorysize, maxmemorysize
          FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;

     pool_oid      | name | subcluster_name | memorysize | maxmemorysize
-------------------+------+-----------------+------------+---------------
 45035996273705046 | tm   | dashboard       | 0%         | 0%
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To optimize the dashboard subcluster for short-running queries on a web page, create a &lt;span class=&#34;sql&#34;&gt;dash_pool&lt;/span&gt; subcluster-level resource pool that uses 70% of the subcluster&#39;s memory. Additionally, increase &lt;span class=&#34;sql&#34;&gt;PLANNEDCONCURRENCY&lt;/span&gt; to use all of the machine&#39;s logical cores, and limit &lt;span class=&#34;sql&#34;&gt;EXECUTIONPARALLELISM&lt;/span&gt; to no more than half of the machine&#39;s available cores:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE RESOURCE POOL dash_pool FOR SUBCLUSTER dashboard
     MEMORYSIZE &amp;#39;70%&amp;#39;
     PLANNEDCONCURRENCY 16
     EXECUTIONPARALLELISM 8;
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;configure-the-analytic-query-subcluster&#34;&gt;Configure the analytic query subcluster&lt;/h3&gt;
&lt;p&gt;To optimize the analytics subcluster for long-running queries, create an &lt;span class=&#34;sql&#34;&gt;analytics_pool&lt;/span&gt; subcluster-level resource pool that uses 60% of the subcluster&#39;s memory. In this scenario, you cannot allocate more memory to this pool because the nodes in this subcluster still have memory assigned to their TM pools. Additionally, set &lt;span class=&#34;sql&#34;&gt;EXECUTIONPARALLELISM&lt;/span&gt; to &lt;span class=&#34;sql&#34;&gt;AUTO&lt;/span&gt; to use all cores available on the node to process a query, and limit &lt;span class=&#34;sql&#34;&gt;PLANNEDCONCURRENCY&lt;/span&gt; to no more than 8 concurrent queries:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; CREATE RESOURCE POOL analytics_pool FOR SUBCLUSTER analytics
      MEMORYSIZE &amp;#39;60%&amp;#39;
      EXECUTIONPARALLELISM AUTO
      PLANNEDCONCURRENCY 8;
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
  </channel>
</rss>
