<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Tuple mover</title>
    <link>/en/admin/managing-db/tuple-mover/</link>
    <description>Recent content in Tuple mover on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
    <atom:link href="/en/admin/managing-db/tuple-mover/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Admin: Mergeout</title>
      <link>/en/admin/managing-db/tuple-mover/mergeout/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/tuple-mover/mergeout/</guid>
      <description>
        
        
        &lt;p&gt;Mergeout is a Tuple Mover process that consolidates ROS containers and purges deleted records. DML activities such as COPY and data partitioning generate new ROS containers that typically require consolidation, while deleting and repartitioning data require reorganization of existing containers. The Tuple Mover constantly monitors these activities and executes mergeout as needed to consolidate and reorganize containers. By doing so, the Tuple Mover seeks to avoid two problems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Performance degradation when column data is fragmented across multiple ROS containers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Risk of ROS pushback when ROS containers for a given projection increase faster than the Tuple Mover can handle them. A projection can have up to 1024 ROS containers; when it reaches that limit, Vertica starts to return ROS pushback errors on all attempts to query the projection (see the query after this list).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
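&lt;p&gt;To see how close a projection is to the 1024-container limit, you can count ROS containers in the &lt;code&gt;STORAGE_CONTAINERS&lt;/code&gt; system table; if necessary, you can also run mergeout manually with &lt;code&gt;DO_TM_TASK&lt;/code&gt;. A minimal sketch; the table name &lt;code&gt;store.store_orders_fact&lt;/code&gt; is a placeholder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- Count ROS containers per projection to gauge proximity to the limit.
SELECT node_name, projection_name, COUNT(*) AS ros_count
FROM storage_containers
GROUP BY node_name, projection_name
ORDER BY ros_count DESC;

-- Trigger mergeout on one table instead of waiting for the Tuple Mover
-- service (the table name is hypothetical).
SELECT DO_TM_TASK(&#39;mergeout&#39;, &#39;store.store_orders_fact&#39;);
&lt;/code&gt;&lt;/pre&gt;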

      </description>
    </item>
    
    <item>
      <title>Admin: Managing the tuple mover</title>
      <link>/en/admin/managing-db/tuple-mover/managing-tuple-mover/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/admin/managing-db/tuple-mover/managing-tuple-mover/</guid>
      <description>
        
        
        &lt;p&gt;The Tuple Mover is preconfigured to handle typical workloads. However, some situations might require you to adjust Tuple Mover behavior. You can do so in various ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#Configur&#34;&gt;Configure the TM resource pool&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#Managing2&#34;&gt;Manage active data partitions&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a name=&#34;Configur&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;configuring-the-tm-resource-pool&#34;&gt;Configuring the TM resource pool&lt;/h2&gt;
&lt;p&gt;The Tuple Mover uses the built-in &lt;a href=&#34;../../../../en/admin/managing-db/managing-workloads/resource-pool-architecture/built-resource-pools-config/#TM&#34;&gt;TM&lt;/a&gt; resource pool to handle its workload. Several settings of this resource pool can be adjusted to facilitate handling of high-volume loads (see the query after this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#MemorySize&#34;&gt;MEMORYSIZE&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#MAXMEMOR&#34;&gt;MAXMEMORYSIZE&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#MaxConcurrency&#34;&gt;MAXCONCURRENCY&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;#PlannedConcurrency&#34;&gt;PLANNEDCONCURRENCY&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
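&lt;p&gt;Before changing any of these settings, you might first inspect the pool&#39;s current configuration. A minimal sketch, assuming the standard &lt;code&gt;RESOURCE_POOLS&lt;/code&gt; catalog table:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- Show the current TM pool settings discussed in this section.
SELECT name, memorysize, maxmemorysize, maxconcurrency, plannedconcurrency
FROM resource_pools
WHERE name = &#39;tm&#39;;
&lt;/code&gt;&lt;/pre&gt;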
&lt;p&gt;&lt;a name=&#34;MemorySize&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;memorysize&#34;&gt;MEMORYSIZE&lt;/h3&gt;
&lt;p&gt;Specifies how much memory is reserved for the TM pool per node. The TM pool can grow beyond this lower limit by borrowing from the GENERAL pool. By default, this parameter is set to 5% of available memory. If MEMORYSIZE of the GENERAL resource pool is also set to a percentage, the TM pool can compete with it for memory. This value must always be less than or equal to the MAXMEMORYSIZE setting.

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

Increasing MEMORYSIZE to a large percentage can cause regressions in memory-sensitive queries that run in the GENERAL pool.

&lt;/div&gt;&lt;/p&gt;
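&lt;p&gt;For example, to reserve a fixed amount of memory per node for the TM pool instead of the 5% default (the value shown is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- Reserve 2 GB per node for the TM pool.
ALTER RESOURCE POOL tm MEMORYSIZE &#39;2G&#39;;
&lt;/code&gt;&lt;/pre&gt;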
&lt;p&gt;&lt;a name=&#34;MAXMEMOR&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;maxmemorysize&#34;&gt;MAXMEMORYSIZE&lt;/h3&gt;
&lt;p&gt;Sets the upper limit of memory that can be allocated to the TM pool. The TM pool can grow beyond the value set by MEMORYSIZE by borrowing memory from the GENERAL pool. This value must always be equal to or greater than the MEMORYSIZE setting.&lt;/p&gt;
&lt;p&gt;In an Eon Mode database, if you set this value to 0 at the subcluster level, the Tuple Mover is disabled on that subcluster.

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Never set the TM pool&#39;s MAXMEMORYSIZE to 0 on a &lt;a class=&#34;glosslink&#34; href=&#34;../../../../en/glossary/primary-subcluster/&#34; title=&#34;In Eon Mode, a primary subcluster is a type of subcluster that is intended to form the core of your database.&#34;&gt;primary subcluster&lt;/a&gt;. Primary subclusters must always run the Tuple Mover.
&lt;/div&gt;&lt;/p&gt;
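&lt;p&gt;A sketch of both adjustments, with illustrative values; &lt;code&gt;analytics_subcluster&lt;/code&gt; is a placeholder for a secondary subcluster name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- Raise the upper limit that the TM pool can grow to.
ALTER RESOURCE POOL tm MAXMEMORYSIZE &#39;10G&#39;;

-- Eon Mode only: disable the Tuple Mover on a secondary subcluster.
-- Never do this on a primary subcluster.
ALTER RESOURCE POOL tm FOR SUBCLUSTER analytics_subcluster MAXMEMORYSIZE &#39;0%&#39;;
&lt;/code&gt;&lt;/pre&gt;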
&lt;p&gt;&lt;a name=&#34;MaxConcurrency&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;maxconcurrency&#34;&gt;MAXCONCURRENCY&lt;/h3&gt;
&lt;p&gt;Sets the maximum number of concurrent execution slots available to the TM pool across all nodes; that is, the maximum number of mergeout operations that can run simultaneously on separate threads. The default value is 7.&lt;/p&gt;
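&lt;p&gt;For example, to allow more simultaneous mergeout operations during sustained heavy loading (the value is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- Allow up to 10 concurrent mergeout threads instead of the default 7.
ALTER RESOURCE POOL tm MAXCONCURRENCY 10;
&lt;/code&gt;&lt;/pre&gt;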

&lt;p&gt;&lt;a name=&#34;PlannedConcurrency&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;plannedconcurrency&#34;&gt;PLANNEDCONCURRENCY&lt;/h3&gt;
&lt;p&gt;Specifies the preferred number of queries to execute concurrently in the resource pool, across all nodes. By default, this parameter is set to 6. The Resource Manager uses PLANNEDCONCURRENCY to calculate the target memory that is available to a given query:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;TM-memory-size&lt;/code&gt;&lt;/em&gt; / PLANNEDCONCURRENCY&lt;/p&gt;
&lt;p&gt;The PLANNEDCONCURRENCY setting must be proportional to the size of RAM, the CPU, and the storage subsystem. Depending on the storage type, increasing PLANNEDCONCURRENCY for Tuple Mover threads might create a storage I/O bottleneck. Monitor the storage subsystem; if it becomes saturated, with long I/O queues (more than two) and high read and write latency, adjust the PLANNEDCONCURRENCY parameter to keep the storage subsystem below its saturation level.&lt;/p&gt;
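&lt;p&gt;As a worked example of the formula above: with a 6 GB TM pool and the default PLANNEDCONCURRENCY of 6, each mergeout operation targets about 1 GB of memory. Lowering PLANNEDCONCURRENCY reduces parallel I/O pressure and raises the per-operation memory target (the value below is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- 6 GB / 6 = ~1 GB per operation; at PLANNEDCONCURRENCY 4, ~1.5 GB each,
-- with fewer concurrent mergeouts competing for the storage subsystem.
ALTER RESOURCE POOL tm PLANNEDCONCURRENCY 4;
&lt;/code&gt;&lt;/pre&gt;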
&lt;p&gt;&lt;a name=&#34;Managing2&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;managing-active-data-partitions&#34;&gt;Managing active data partitions&lt;/h2&gt;
&lt;p&gt;The Tuple Mover assumes that all loads and updates to a partitioned table are targeted to one or more partitions that it identifies as &lt;em&gt;active&lt;/em&gt;. In general, the partitions with the largest partition keys—typically, the most recently created partitions—are regarded as active. As a partition ages, its workload typically shrinks and becomes mostly read-only.&lt;/p&gt;

&lt;p&gt;You can specify how many partitions are active for partitioned tables at two levels, in ascending order of precedence (see the sketch after this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configuration parameter &lt;a href=&#34;../../../../en/sql-reference/config-parameters/tuple-mover-parameters/&#34;&gt;ActivePartitionCount&lt;/a&gt; determines how many partitions are active for partitioned tables in the database. By default, ActivePartitionCount is set to 1. The Tuple Mover applies this setting to all tables that do not set their own active partition count.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Individual tables can supersede ActivePartitionCount by setting their own active partition count with &lt;a href=&#34;../../../../en/sql-reference/statements/create-statements/create-table/#&#34;&gt;CREATE TABLE&lt;/a&gt; and &lt;a href=&#34;../../../../en/sql-reference/statements/alter-statements/alter-table/#&#34;&gt;ALTER TABLE&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
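&lt;p&gt;A minimal sketch of both levels; the table name is a placeholder and the counts are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;-- Database-wide default: treat the two newest partitions as active.
ALTER DATABASE DEFAULT SET PARAMETER ActivePartitionCount = 2;

-- Per-table override, which takes precedence over the parameter.
ALTER TABLE store.store_orders_fact SET ACTIVEPARTITIONCOUNT 3;
&lt;/code&gt;&lt;/pre&gt;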

&lt;p&gt;For details, see &lt;a href=&#34;../../../../en/admin/partitioning-tables/active-and-inactive-partitions/#&#34;&gt;Active and inactive partitions&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;see-also&#34;&gt;See also&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/admin/managing-db/managing-workloads/workload-best-practices/#&#34;&gt;Best practices for managing workload resources&lt;/a&gt;&lt;/p&gt;

      </description>
    </item>
    
  </channel>
</rss>
