<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – After you upgrade</title>
    <link>/en/setup/upgrading/after-you-upgrade/</link>
    <description>Recent content in After you upgrade on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/setup/upgrading/after-you-upgrade/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Setup: Rebuilding partitioned projections with pre-aggregated data</title>
      <link>/en/setup/upgrading/after-you-upgrade/rebuilding-partitioned-projections-with-pre-aggregated-data/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/upgrading/after-you-upgrade/rebuilding-partitioned-projections-with-pre-aggregated-data/</guid>
      <description>
        
        
        &lt;p&gt;If you created projections in earlier (pre-10.0.x) releases with &lt;a href=&#34;../../../../en/data-analysis/data-aggregation/pre-aggregating-data-projections/&#34;&gt;pre-aggregated data&lt;/a&gt; (for example, LAPs and TopK projections) and the anchor tables were partitioned with a GROUP BY clause, their ROS containers can be corrupted by various DML and ILM operations. If so, you must rebuild the projections:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Run the meta-function &lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/projection-functions/refresh/#&#34;&gt;REFRESH&lt;/a&gt; on the database. If REFRESH detects problematic projections, it returns with failure messages. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT REFRESH();
                                               REFRESH
-----------------------------------------------------------------------------------------------------
Refresh completed with the following outcomes:
Projection Name: [Anchor Table] [Status] [ Refresh Method] [Error Count]
&amp;#34;public&amp;#34;.&amp;#34;store_sales_udt_sum&amp;#34;: [store_sales] [failed: Drop and recreate projection] [] [1]
&amp;#34;public&amp;#34;.&amp;#34;product_sales_largest&amp;#34;: [store_sales] [failed: Drop and recreate projection] [] [1]
&amp;#34;public&amp;#34;.&amp;#34;store_sales_recent&amp;#34;: [store_sales] [failed: Drop and recreate projection] [] [1]

(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The database also logs messages to &lt;code&gt;vertica.log&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;2020-07-07 11:28:41.618 Init Session:ox7fabbbfff700-aoo000000oosbs [Txnl &amp;lt;INFO&amp;gt; Be in Txn: aoooooooooo5b5 &amp;#39;Refresh: Evaluating which projection to refresh&amp;#39;
2020-07-07 11:28:41.640 Init Session:ex7fabbbfff7oe-aooooeeeeoosbs [Refresh] &amp;lt;INFO&amp;gt; Storage issues detected, unable to refresh projection &amp;#39;store_sales_recent&amp;#39;. Drop and recreate this projection, then refresh.
2020-07-07 11:28:41.641 Init Session:Ox7fabbbfff700-aooooeooooosbs [Refresh] &amp;lt;INFO&amp;gt; Storage issues detected, unable to refresh projection &amp;#39;product_sales_largest&amp;#39;. Drop and recreate this projection, then refresh.
2020-07-07 11:28:41.641 Init Session:Ox7fabbbfff700-aeoeeeaeeeosbs [Refresh] &amp;lt;INFO&amp;gt; Storage issues detected, unable to refresh projection &amp;#39;store_sales_udt_sum&amp;#39;. Drop and recreate this projection, then refresh.
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Export the DDL of these projections with &lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/catalog-functions/export-objects/#&#34;&gt;EXPORT_OBJECTS&lt;/a&gt; or &lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/catalog-functions/export-tables/#&#34;&gt;EXPORT_TABLES&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;../../../../en/sql-reference/statements/drop-statements/drop-projection/&#34;&gt;Drop&lt;/a&gt; the projections, then recreate them as defined in the exported DDL.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run REFRESH. The database rebuilds the projections with new storage containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
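&lt;p&gt;The steps above can be sketched as a vsql session. This is illustrative only: the projection name, anchor table, and export path are placeholders for your own objects:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;-- (2) Export the projection DDL to a file:
=&amp;gt; SELECT EXPORT_OBJECTS(&amp;#39;/home/dbadmin/store_sales_recent.sql&amp;#39;, &amp;#39;public.store_sales_recent&amp;#39;);

-- (3) Drop the projection, then recreate it from the exported DDL:
=&amp;gt; DROP PROJECTION public.store_sales_recent;
=&amp;gt; \i /home/dbadmin/store_sales_recent.sql

-- (4) Refresh the anchor table so the database rebuilds the projection:
=&amp;gt; SELECT REFRESH(&amp;#39;public.store_sales&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;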

      </description>
    </item>
    
    <item>
      <title>Setup: Verifying catalog memory consumption</title>
      <link>/en/setup/upgrading/after-you-upgrade/verifying-catalog-memory-consumption/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/upgrading/after-you-upgrade/verifying-catalog-memory-consumption/</guid>
      <description>
        
        
        &lt;p&gt;Database versions ≥ 9.2 significantly reduce how much memory database catalogs consume. After you upgrade, check catalog memory consumption on each node to verify that the upgrade refactored catalogs correctly. If memory consumption for a given catalog is as large as or larger than it was in the earlier database, restart the host node.&lt;/p&gt;
&lt;h2 id=&#34;known-issues&#34;&gt;Known issues&lt;/h2&gt;
&lt;p&gt;Certain operations might significantly inflate catalog memory consumption. For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You created a backup on a 9.1.1 database and restored objects from the backup to a new database of version ≥ 9.2.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You &lt;a href=&#34;../../../../en/admin/backup-and-restore/replicating-objects-to-another-db-cluster/&#34;&gt;replicated objects&lt;/a&gt; from a 9.1.1 database to a database of version ≥ 9.2.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To refactor database catalogs and reduce their memory footprint, restart the database.&lt;/p&gt;
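&lt;p&gt;One way to check per-node catalog memory is to query the built-in &lt;code&gt;metadata&lt;/code&gt; resource pool, which tracks memory allotted to the catalog. A sketch; compare the results against the values you recorded before the upgrade:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, pool_name, memory_size_kb
    FROM v_monitor.resource_pool_status
    WHERE pool_name = &amp;#39;metadata&amp;#39;
    ORDER BY node_name;
&lt;/code&gt;&lt;/pre&gt;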

      </description>
    </item>
    
    <item>
      <title>Setup: Reinstalling packages</title>
      <link>/en/setup/upgrading/after-you-upgrade/reinstalling-packages/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/upgrading/after-you-upgrade/reinstalling-packages/</guid>
      <description>
        
        
        &lt;p&gt;In most cases, the database automatically reinstalls all default packages when you restart your database for the first time after running the upgrade script. Occasionally, however, one or more packages might fail to reinstall correctly.&lt;/p&gt;
&lt;p&gt;To verify that the database succeeded in reinstalling all packages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Restart the database after upgrading.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When prompted, enter the database password.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If any packages failed to reinstall, the database issues a message that specifies the uninstalled packages. You can manually reinstall the packages with &lt;a href=&#34;../../../../en/admin/using-admin-tools/admin-tools-reference/&#34;&gt;admintools&lt;/a&gt; or the &lt;a href=&#34;../../../../en/admin/managing-db/https-service/#&#34;&gt;HTTPS service&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To reinstall with admintools, run the &lt;code&gt;install_package&lt;/code&gt; command with the option &lt;code&gt;--force-reinstall&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;$ admintools -t install_package -d &lt;span class=&#34;code-variable&#34;&gt;db-name&lt;/span&gt; -p &lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt; -P &lt;span class=&#34;code-variable&#34;&gt;pkg-spec&lt;/span&gt; --force-reinstall
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To reinstall with the HTTPS service, make a POST request to the &lt;code&gt;/v1/packages&lt;/code&gt; endpoint on any node in the cluster:&lt;/p&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Only the &lt;a href=&#34;../../../../en/admin/db-users-and-privileges/db-users/types-of-db-users/db-admin-user/&#34;&gt;dbadmin&lt;/a&gt; can access the HTTPS service.
&lt;/div&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;$ curl -X POST -k -w &lt;span class=&#34;s2&#34;&gt;&amp;#34;\n&amp;#34;&lt;/span&gt; --user dbadmin:&lt;span class=&#34;code-variable&#34;&gt;password&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;https://&lt;span class=&#34;code-variable&#34;&gt;ip-address&lt;/span&gt;:8443/v1/packages?force-install=true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The &lt;code&gt;-w&lt;/code&gt; flag appends the specified string (here, a newline) to the output after the request completes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;admintools-options&#34;&gt;admintools options&lt;/h2&gt;

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Option&lt;/th&gt; 

&lt;th &gt;
Function&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;-d &lt;/code&gt;&lt;em&gt;&lt;code&gt;db-name&lt;/code&gt;&lt;/em&gt;&lt;br /&gt;&lt;code&gt;--dbname=&lt;/code&gt;&lt;em&gt;&lt;code&gt;db-name&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
Database name&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;-p &lt;/code&gt;&lt;em&gt;&lt;code&gt;password&lt;/code&gt;&lt;/em&gt;&lt;br /&gt;&lt;code&gt;--password=&lt;/code&gt;&lt;em&gt;&lt;code&gt;password&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;
Database administrator password&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;-P &lt;/code&gt;&lt;em&gt;&lt;code&gt;pkg-spec&lt;/code&gt;&lt;/em&gt;&lt;br /&gt;&lt;code&gt;--package=&lt;/code&gt;&lt;em&gt;&lt;code&gt;pkg-spec&lt;/code&gt;&lt;/em&gt;&lt;/td&gt; 

&lt;td &gt;








&lt;p&gt;Specifies which packages to install, where &lt;em&gt;&lt;code&gt;pkg-spec&lt;/code&gt;&lt;/em&gt; is one of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The name of a package—for example, &lt;code&gt;flextable&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;all&lt;/code&gt;: All available packages&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;default&lt;/code&gt;: All default packages that are currently installed&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
&lt;code&gt;--force-reinstall&lt;/code&gt;&lt;/td&gt; 

&lt;td &gt;
Force installation of a package even if it is already installed.&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;

&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;Force reinstallation of default packages:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t install_package -d VMart -p &amp;#39;password&amp;#39; -P default --force-reinstall
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Force reinstallation of one package, &lt;code&gt;flextable&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ admintools -t install_package -d VMart -p &amp;#39;password&amp;#39; -P flextable --force-reinstall
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Setup: Writing bundle metadata to the catalog</title>
      <link>/en/setup/upgrading/after-you-upgrade/writing-bundle-metadata-to-catalog/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/upgrading/after-you-upgrade/writing-bundle-metadata-to-catalog/</guid>
      <description>
        
        
        &lt;p&gt;OpenText™ Analytics Database internally stores physical table data in bundles together with metadata on the bundle contents. The query optimizer uses bundle metadata to look up and fetch the data it needs for a given query.&lt;/p&gt;
&lt;p&gt;The database also stores bundle metadata in the database catalog. This is especially beneficial in Eon mode: instead of fetching this metadata from remote (S3) storage, the optimizer can find it in the local catalog. This minimizes S3 reads, and facilitates faster query planning and overall execution.&lt;/p&gt;
&lt;p&gt;The database writes bundle metadata to the catalog on two events:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Any DML operation that changes table content, such as &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, or &lt;code&gt;COPY&lt;/code&gt;. The database writes bundle metadata to the catalog on the new or changed table data. DML operations have no effect on bundle metadata for existing table data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Invocations of the function &lt;code&gt;UPDATE_STORAGE_CATALOG&lt;/code&gt; on existing data, passed as an argument to the meta-function 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/functions/management-functions/storage-functions/do-tm-task/#&#34;&gt;DO_TM_TASK&lt;/a&gt;&lt;/code&gt;. You can narrow the scope of the catalog update to a specific projection or table. If you specify no scope, the operation applies to the entire database.&lt;/p&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
After upgrading to any database version ≥ 9.2.1, you only need to call &lt;code&gt;UPDATE_STORAGE_CATALOG&lt;/code&gt; once on existing data. Bundle metadata on all new or updated data is always written automatically to the catalog.
&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, the following &lt;code&gt;DO_TM_TASK&lt;/code&gt; call writes bundle metadata on all projections in table &lt;code&gt;store.store_sales_fact&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT DO_TM_TASK (&amp;#39;update_storage_catalog&amp;#39;, &amp;#39;store.store_sales_fact&amp;#39;);
                                  do_tm_task
-------------------------------------------------------------------------------
 Task: update_storage_catalog
(Table: store.store_sales_fact) (Projection: store.store_sales_fact_b0)
(Table: store.store_sales_fact) (Projection: store.store_sales_fact_b1)
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;validating-bundle-metadata&#34;&gt;Validating bundle metadata&lt;/h2&gt;
&lt;p&gt;You can query system table 
&lt;code&gt;&lt;a href=&#34;../../../../en/sql-reference/system-tables/v-monitor-schema/storage-bundle-info-statistics/#&#34;&gt;STORAGE_BUNDLE_INFO_STATISTICS&lt;/a&gt;&lt;/code&gt; to determine which projections have invalid bundle metadata in the database catalog. For example, results from the following query show that the database catalog has invalid metadata for projections &lt;code&gt;inventory_fact_b0&lt;/code&gt; and &lt;code&gt;inventory_fact_b1&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; SELECT node_name, projection_name, total_ros_count, ros_without_bundle_info_count
    FROM v_monitor.storage_bundle_info_statistics where ros_without_bundle_info_count &amp;gt; 0
    ORDER BY projection_name, node_name;
    node_name     |  projection_name  | total_ros_count | ros_without_bundle_info_count
------------------+-------------------+-----------------+-------------------------------
 v_vmart_node0001 | inventory_fact_b0 |               1 |                             1
 v_vmart_node0002 | inventory_fact_b0 |               1 |                             1
 v_vmart_node0003 | inventory_fact_b0 |               1 |                             1
 v_vmart_node0001 | inventory_fact_b1 |               1 |                             1
 v_vmart_node0002 | inventory_fact_b1 |               1 |                             1
 v_vmart_node0003 | inventory_fact_b1 |               1 |                             1
(6 rows)
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;best-practices&#34;&gt;Best practices&lt;/h2&gt;
&lt;p&gt;Updating the database catalog with &lt;code&gt;UPDATE_STORAGE_CATALOG&lt;/code&gt; is recommended only for Eon users. Enterprise users are unlikely to see measurable performance improvements from this update.&lt;/p&gt;
&lt;p&gt;Calls to &lt;code&gt;UPDATE_STORAGE_CATALOG&lt;/code&gt; can incur considerable overhead, as the update process typically requires many expensive S3 reads. Avoid running this operation on the entire database. Instead, consider an incremental approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Call &lt;code&gt;UPDATE_STORAGE_CATALOG&lt;/code&gt; on a single large fact table. You can use performance metrics to estimate how much time updating other tables will require.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Identify which tables are subject to frequent queries and prioritize catalog updates accordingly.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
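&lt;p&gt;For example, to gather timing metrics on a single large fact table before updating others, you can enable the vsql &lt;code&gt;\timing&lt;/code&gt; meta-command and scope the call to that table (&lt;code&gt;store.store_sales_fact&lt;/code&gt; here is illustrative):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; \timing
Timing is on.
=&amp;gt; SELECT DO_TM_TASK (&amp;#39;update_storage_catalog&amp;#39;, &amp;#39;store.store_sales_fact&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;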

      </description>
    </item>
    
    <item>
      <title>Setup: Upgrading the streaming data scheduler utility</title>
      <link>/en/setup/upgrading/after-you-upgrade/upgrading-streaming-data-scheduler-utility/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/upgrading/after-you-upgrade/upgrading-streaming-data-scheduler-utility/</guid>
      <description>
        
        
        &lt;p&gt;If you have integrated your database with a streaming data application, such as Apache Kafka, you must upgrade the streaming data scheduler utility after you upgrade the database.&lt;/p&gt;
&lt;p&gt;From a command prompt, enter the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/opt/vertica/packages/kafka/bin/vkconfig scheduler --upgrade --upgrade-to-schema &lt;span class=&#34;code-variable&#34;&gt;schema_name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Running the upgrade task more than once has no effect.&lt;/p&gt;
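&lt;p&gt;For example, if the scheduler configuration schema is named &lt;code&gt;stream_config&lt;/code&gt; (a hypothetical name; substitute your own schema):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --upgrade --upgrade-to-schema stream_config
&lt;/code&gt;&lt;/pre&gt;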
&lt;p&gt;For more information on the Scheduler utility, refer to &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/scheduler-tool-options/#&#34;&gt;Scheduler tool options&lt;/a&gt;.&lt;/p&gt;

      </description>
    </item>
    
  </channel>
</rss>
