<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Manually configured operating system settings</title>
    <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/</link>
    <description>Recent content in Manually configured operating system settings on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Setup: SUSE control groups configuration</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/suse-control-groups-config/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/suse-control-groups-config/</guid>
      <description>
        
        
&lt;p&gt;On SUSE 12, the installer checks the control group (cgroup) settings for the cgroups that Vertica services may run under:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;verticad&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;vertica_agent&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;sshd&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The installer verifies that the &lt;code&gt;pids.max&lt;/code&gt; resource is large enough for all the threads that Vertica creates. The installer checks the contents of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;/sys/fs/cgroup/pids/system.slice/verticad.service/pids.max&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;/sys/fs/cgroup/pids/system.slice/vertica_agent.service/pids.max&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;/sys/fs/cgroup/pids/system.slice/sshd.service/pids.max&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If these files exist but do not contain the value &lt;code&gt;max&lt;/code&gt;, the installation stops and the installer returns a failure message (code S0340).&lt;/p&gt;
&lt;p&gt;If these files do not exist, they are created automatically when &lt;code&gt;systemd&lt;/code&gt; runs the &lt;code&gt;verticad&lt;/code&gt; and &lt;code&gt;vertica_agent&lt;/code&gt; startup scripts. However, their default values are managed by the site&#39;s cgroup configuration process; Vertica does not change those defaults.&lt;/p&gt;
&lt;h2 id=&#34;pre-installation-configuration&#34;&gt;Pre-installation configuration&lt;/h2&gt;
&lt;p&gt;Before installing Vertica, configure your system as follows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Create the following directories:
sudo mkdir /sys/fs/cgroup/pids/system.slice/verticad.service/
sudo mkdir /sys/fs/cgroup/pids/system.slice/vertica_agent.service/
# The sshd service directory should already exist, so you do not need to create it

# Set pids.max values:
sudo sh -c &amp;#39;echo &amp;#34;max&amp;#34; &amp;gt; /sys/fs/cgroup/pids/system.slice/verticad.service/pids.max&amp;#39;
sudo sh -c &amp;#39;echo &amp;#34;max&amp;#34; &amp;gt; /sys/fs/cgroup/pids/system.slice/vertica_agent.service/pids.max&amp;#39;
sudo sh -c &amp;#39;echo &amp;#34;max&amp;#34; &amp;gt; /sys/fs/cgroup/pids/system.slice/sshd.service/pids.max&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;persisting-configuration-for-restart&#34;&gt;Persisting configuration for restart&lt;/h2&gt;
&lt;p&gt;After installation, you can make the control group settings persist across subsequent reboots of the Vertica database. To do so, edit the configuration file &lt;code&gt;/etc/init.d/after.local&lt;/code&gt; and add the commands shown earlier.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Because &lt;code&gt;after.local&lt;/code&gt; is executed as root, it can omit &lt;code&gt;sudo&lt;/code&gt; commands.

&lt;/div&gt;&lt;/p&gt;
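The installer's pids.max check can be sketched as a small shell helper. This is illustrative only: the function name check_pids_max is our own, not part of Vertica, and the classification mirrors the S0340 behavior described above.

```shell
# Illustrative helper (not part of Vertica): classify a cgroup pids.max file
# the way the installer's S0340 check does. Pass the path to a pids.max file.
check_pids_max() {
  f="$1"
  if [ ! -e "$f" ]; then
    echo "missing"      # systemd creates the file when the service starts
  elif [ "$(cat "$f")" = "max" ]; then
    echo "ok"
  else
    echo "needs-fix"    # the installer would stop with code S0340
  fi
}

# Real usage (requires the cgroup hierarchy to exist):
# check_pids_max /sys/fs/cgroup/pids/system.slice/verticad.service/pids.max
```

Running the helper against each of the three paths listed above before installation shows whether the pre-installation configuration is still needed.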

      </description>
    </item>
    
    <item>
      <title>Setup: Cron required for scheduled jobs</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/cron-required-scheduled-jobs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/cron-required-scheduled-jobs/</guid>
      <description>
        
        
&lt;p&gt;Admintools uses the Linux &lt;code&gt;cron&lt;/code&gt; package to schedule jobs that regularly rotate the database logs. Without this package, the logs are never rotated, which can consume significant storage; on busy clusters, Vertica can produce hundreds of gigabytes of logs per day.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;cron&lt;/code&gt; is installed by default on most Linux distributions, but it may not be present on some SUSE 12 systems.&lt;/p&gt;
&lt;p&gt;To install &lt;code&gt;cron&lt;/code&gt;, run this command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ sudo zypper install cron
&lt;/code&gt;&lt;/pre&gt;
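After installing, you can confirm that a cron implementation is present. The helper name has_cron below is our own, shown only as a sketch; the daemon itself may be named cron or crond depending on the distribution.

```shell
# Illustrative check: report whether a cron implementation is on the PATH.
# (The service may be named "cron" or "crond" depending on the distribution.)
has_cron() {
  if command -v crontab >/dev/null 2>/dev/null; then
    echo yes
  else
    echo no
  fi
}
has_cron
```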
      </description>
    </item>
    
    <item>
      <title>Setup: Disk readahead</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/disk-readahead/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/disk-readahead/</guid>
      <description>
        
        
        &lt;p&gt;Vertica requires that &lt;a href=&#34;http://en.wikipedia.org/wiki/Readahead&#34;&gt;Disk Readahead&lt;/a&gt; be set to at least 2048. The installer reports this issue with the identifier: &lt;strong&gt;S0020&lt;/strong&gt;.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;These commands must be executed with root privileges and assume that the blockdev program is in &lt;code&gt;/sbin&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The blockdev program operates on whole devices, not individual partitions. You cannot set different readahead values for partitions on the same device. If you run blockdev against a partition, for example /dev/sda1, the setting is applied to the entire /dev/sda device. For instance, running &lt;code&gt;/sbin/blockdev --setra 2048 /dev/sda1&lt;/code&gt; also causes /dev/sda2 &lt;em&gt;through&lt;/em&gt; /dev/sda&lt;em&gt;N&lt;/em&gt; to use a readahead value of 2048.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/div&gt;&lt;/p&gt;
&lt;h2 id=&#34;redhatcentos-and-suse-based-systems&#34;&gt;Red Hat/CentOS and SUSE-based systems&lt;/h2&gt;
&lt;p&gt;For each drive in the Vertica system, set the readahead value to at least 2048; Vertica recommends this minimum for most deployments. The first command below immediately changes the readahead value for the specified disk. The second command adds the setting to &lt;code&gt;/etc/rc.local&lt;/code&gt; so that it is applied each time the system boots. Some deployments may require a higher value; under the guidance of support, the setting can be raised as high as 8192.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

For systems that do not support &lt;code&gt;/etc/rc.local&lt;/code&gt;, use the equivalent startup script that is run after the destination runlevel has been reached. For example, SUSE uses &lt;code&gt;/etc/init.d/after.local&lt;/code&gt;.&lt;br /&gt;

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;The following example sets the readahead value of the drive sda to 2048:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /sbin/blockdev --setra 2048 /dev/sda
$ echo &amp;#39;/sbin/blockdev --setra 2048 /dev/sda&amp;#39; &amp;gt;&amp;gt; /etc/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chmod +x /etc/rc.d/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;ubuntu-and-debian-systems&#34;&gt;Ubuntu and Debian systems&lt;/h2&gt;
&lt;p&gt;For each drive in the Vertica system, set the readahead value to 2048. Run the command once in your shell, then add it to &lt;code&gt;/etc/rc.local&lt;/code&gt; so that the setting is applied each time the system boots. On Ubuntu systems, the last line in rc.local must be &amp;quot;&lt;code&gt;exit 0&lt;/code&gt;&amp;quot;, so you must add the following line to &lt;code&gt;/etc/rc.local&lt;/code&gt; before that final &lt;code&gt;exit 0&lt;/code&gt; line.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

For systems that do not support &lt;code&gt;/etc/rc.local&lt;/code&gt;, use the equivalent startup script that is run after the destination runlevel has been reached. For example, SUSE uses &lt;code&gt;/etc/init.d/after.local&lt;/code&gt;.&lt;br /&gt;

&lt;/div&gt;&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/sbin/blockdev --setra 2048 /dev/sda
&lt;/code&gt;&lt;/pre&gt;
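For context on the 2048 figure: blockdev expresses readahead as a count of 512-byte sectors, so the recommended minimum corresponds to 1 MiB of readahead per request. A quick sanity check of the arithmetic:

```shell
# blockdev --setra takes a count of 512-byte sectors; 2048 sectors = 1 MiB.
sectors=2048
bytes=$((sectors * 512))
echo "$bytes bytes"
```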
      </description>
    </item>
    
    <item>
      <title>Setup: I/O scheduling</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/io-scheduling/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/io-scheduling/</guid>
      <description>
        
        
        &lt;p&gt;Vertica requires that &lt;a href=&#34;http://en.wikipedia.org/wiki/I/O_scheduling&#34;&gt;I/O Scheduling&lt;/a&gt; be set to 
&lt;code&gt;&lt;a href=&#34;https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/ch06s04s02.html&#34;&gt;deadline&lt;/a&gt;&lt;/code&gt; or 
&lt;code&gt;&lt;a href=&#34;https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/ch06s04s03.html&#34;&gt;noop&lt;/a&gt;&lt;/code&gt;. The installer checks what scheduler the system is using, reporting an unsupported scheduler issue with identifier: &lt;strong&gt;S0150&lt;/strong&gt;. If the installer cannot detect the type of scheduler in use (typically if your system is using a RAID array), it reports that issue with identifier: &lt;strong&gt;S0151&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If your system is not using a RAID array, then complete the following steps to change your system to a supported I/O Scheduler. If you are using a RAID array, then consult your RAID vendor documentation for the best performing scheduler for your hardware.&lt;/p&gt;
&lt;h2 id=&#34;configure-the-io-scheduler&#34;&gt;Configure the I/O scheduler&lt;/h2&gt;
&lt;p&gt;The Linux kernel can use several different I/O schedulers to prioritize disk input and output. Most Linux distributions use the Completely Fair Queuing (CFQ) scheme by default, which gives input and output requests equal priority. This scheduler is efficient on systems running multiple tasks that need equal access to I/O resources. However, it can create a bottleneck when used on Vertica drives containing the catalog and data directories, because it gives write requests equal priority to read requests, and its per-process I/O queues can penalize processes making more requests than other processes.&lt;/p&gt;
&lt;p&gt;Instead of the CFQ scheduler, configure your hosts to use either the Deadline or NOOP I/O scheduler for the drives containing the catalog and data directories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Deadline scheduler gives priority to read requests over write requests. It also imposes a deadline on all requests. After reaching the deadline, such requests gain priority over all other requests. This scheduling method helps prevent processes from becoming starved for I/O access. The Deadline scheduler is best used on physical media drives (disks using spinning platters), since it attempts to group requests for adjacent sectors on a disk, lowering the time the drive spends seeking.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The NOOP scheduler uses a simple FIFO approach, placing all input and output requests into a single queue. This scheduler is best used on solid state drives (SSDs). Because SSDs do not have a physical read head, no performance penalty exists when accessing non-adjacent sectors.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Failure to use one of these schedulers for the Vertica drives containing the catalog and data directories can result in slower database performance. Other drives on the system (such as the drive containing swap space, log files, or the Linux system files) can still use the default CFQ scheduler (although you should always use the NOOP scheduler for SSDs).&lt;/p&gt;
&lt;p&gt;You can set your disk device scheduler by writing the name of the scheduler to a file in the &lt;code&gt;/sys&lt;/code&gt; directory or using a kernel boot parameter.&lt;/p&gt;
&lt;h2 id=&#34;changing-the-scheduler-through-the-sys-directory&#34;&gt;Changing the scheduler through the /sys directory&lt;/h2&gt;
&lt;p&gt;You can view and change the scheduler Linux uses for I/O requests to a single drive using a virtual file under the &lt;code&gt;/sys&lt;/code&gt; directory. The name of the file that controls the scheduler a block device uses is:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/sys/block/&lt;span class=&#34;code-variable&#34;&gt;deviceName&lt;/span&gt;/queue/scheduler
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Where &lt;em&gt;&lt;code&gt;deviceName&lt;/code&gt;&lt;/em&gt; is the name of the disk device, such as &lt;code&gt;sda&lt;/code&gt; or &lt;code&gt;cciss!c0d1&lt;/code&gt; (the first disk on an OpenText RAID array). Viewing the contents of this file shows you all of the possible settings for the scheduler. The currently selected scheduler is surrounded by square brackets:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To change the scheduler, write the name of the scheduler you want the device to use to its scheduler file. You must have root privileges to write to this file. For example, to set the sda drive to use the deadline scheduler, run the following command as root:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
# echo deadline &amp;gt; /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Changing the scheduler immediately affects the I/O requests for the device. The Linux kernel starts using the new scheduler for all of the drive&#39;s input and output requests.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

While tests show that changing the scheduler settings while Vertica is running does not cause problems, Vertica recommends shutting down any running database before changing the I/O scheduler or making any other changes to the system configuration.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;Changes to the I/O scheduler made through the &lt;code&gt;/sys&lt;/code&gt; directory last only until the system is rebooted, so you need to add the commands that change the I/O scheduler to a startup script (such as those stored in &lt;code&gt;/etc/init.d&lt;/code&gt;, or through a command in &lt;code&gt;/etc/rc.local&lt;/code&gt;). You also need to use a separate command for each drive on the system whose scheduler you want to change.&lt;/p&gt;
&lt;p&gt;For example, the following commands make the configuration take effect immediately and add it to rc.local so that it is applied on subsequent reboots.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

For systems that do not support &lt;code&gt;/etc/rc.local&lt;/code&gt;, use the equivalent startup script that is run after the destination runlevel has been reached. For example, SUSE uses &lt;code&gt;/etc/init.d/after.local&lt;/code&gt;.&lt;br /&gt;

&lt;/div&gt;&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;echo deadline &amp;gt; /sys/block/sda/queue/scheduler
echo &amp;#39;echo deadline &amp;gt; /sys/block/sda/queue/scheduler&amp;#39; &amp;gt;&amp;gt; /etc/rc.local
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

On some Ubuntu/Debian systems, the last line in rc.local must be &amp;quot;&lt;code&gt;exit 0&lt;/code&gt;&amp;quot;, so you must add the preceding commands to &lt;code&gt;/etc/rc.local&lt;/code&gt; before that final &lt;code&gt;exit 0&lt;/code&gt; line.

&lt;/div&gt;
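When scripting this kind of check, it can help to extract the active scheduler (the bracketed entry) programmatically. The helper name current_scheduler below is our own, shown as a sketch; it takes the scheduler file path as an argument so it works for any device.

```shell
# Illustrative helper: print the active scheduler (the bracketed name) from
# a scheduler file such as /sys/block/sda/queue/scheduler.
current_scheduler() {
  sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}

# Real usage:
# current_scheduler /sys/block/sda/queue/scheduler
```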
&lt;p&gt;You may prefer to use this method of setting the I/O scheduler over using a boot parameter if your system has a mix of solid-state and physical media drives, or has many drives that do not store Vertica catalog and data directories.&lt;/p&gt;
&lt;p&gt;If you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chmod +x /etc/rc.d/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;changing-the-scheduler-with-a-boot-parameter&#34;&gt;Changing the scheduler with a boot parameter&lt;/h2&gt;
&lt;p&gt;Use the &lt;code&gt;elevator&lt;/code&gt; kernel boot parameter to change the default scheduler used by all disks on your system. This is the best method to use if most or all of the drives on your hosts are of the same type (physical media or SSD) and will contain catalog or data files. You can also use the boot parameter to change the default to the scheduler the majority of the drives on the system need, then use the &lt;code&gt;/sys&lt;/code&gt; files to change individual drives to another I/O scheduler. The format of the elevator boot parameter is:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;elevator=&lt;span class=&#34;code-variable&#34;&gt;schedulerName&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Where &lt;em&gt;&lt;code&gt;schedulerName&lt;/code&gt;&lt;/em&gt; is &lt;code&gt;deadline&lt;/code&gt;, &lt;code&gt;noop&lt;/code&gt;, or &lt;code&gt;cfq&lt;/code&gt;. You set the boot parameter using your bootloader (grub or grub2 on most recent Linux distributions). See your distribution&#39;s documentation for details on how to add a kernel boot parameter.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Setup: Enabling or disabling transparent hugepages</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-or-disabling-transparent-hugepages/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-or-disabling-transparent-hugepages/</guid>
      <description>
        
        
        &lt;p&gt;You can modify transparent hugepages to meet Vertica configuration requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;For Red Hat/CentOS and SUSE 15.1, Vertica provides recommended settings to optimize your system performance by workload.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For all other systems, you must disable transparent hugepages or set them to &lt;code&gt;madvise&lt;/code&gt;. The installer reports this issue with the identifier: &lt;strong&gt;S0310&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;recommended-settings-by-workload-for-red-hatcentos-and-suse-151&#34;&gt;Recommended settings by workload for Red Hat/CentOS and SUSE 15.1&lt;/h2&gt;
&lt;p&gt;Vertica recommends transparent hugepages settings to optimize performance by workload. The following table contains recommendations for systems that primarily run concurrent queries (such as short-running dashboard queries), or sequential SELECT or load (COPY) queries:

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Operating System&lt;/th&gt; 

&lt;th &gt;
Concurrent&lt;/th&gt; 

&lt;th &gt;
Sequential&lt;/th&gt; 

&lt;th &gt;
Important Notes&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Red Hat and CentOS&lt;/td&gt; 

&lt;td &gt;
Disable&lt;/td&gt; 

&lt;td &gt;
Enable&lt;/td&gt; 

&lt;td &gt;
&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
SUSE 15.1&lt;/td&gt; 

&lt;td &gt;
Disable&lt;/td&gt; 

&lt;td &gt;
Enable&lt;/td&gt; 

&lt;td &gt;











&lt;p&gt;Additionally, Vertica recommends the following &lt;code&gt;khugepaged&lt;/code&gt; settings to optimize for each workload:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Concurrent Workloads:&lt;/strong&gt;&lt;br /&gt;Disable &lt;code&gt;khugepaged&lt;/code&gt; with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;echo 0 &amp;gt; /sys/kernel/mm/transparent_hugepage/khugepaged/defrag&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sequential Workloads:&lt;/strong&gt;&lt;br /&gt;Enable &lt;code&gt;khugepaged&lt;/code&gt; with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;echo 1 &amp;gt; /sys/kernel/mm/transparent_hugepage/khugepaged/defrag&lt;/code&gt;&lt;/p&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../../en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-or-disabling-defrag/#&#34;&gt;Enabling or disabling defrag&lt;/a&gt; for additional settings that optimize your system performance by workload.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Red_Hat-CentOS_7_Users&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;enabling-transparent-hugepages-on-red-hatcentos-and-suse-151&#34;&gt;Enabling transparent hugepages on Red Hat/CentOS and SUSE 15.1&lt;/h2&gt;
&lt;p&gt;To determine whether transparent hugepages are enabled, run the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The setting returned in brackets is your current setting.&lt;/p&gt;
&lt;p&gt;For systems that do not support &lt;code&gt;/etc/rc.local&lt;/code&gt;, use the equivalent startup script that is run after the destination runlevel has been reached. For example, SUSE uses &lt;code&gt;/etc/init.d/after.local&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can enable transparent hugepages by editing &lt;code&gt;/etc/rc.local&lt;/code&gt; and adding the following script:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo always &amp;gt; /sys/kernel/mm/transparent_hugepage/enabled
fi
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You must reboot your system for the setting to take effect, or, as root, run the following echo line to proceed with the install without rebooting:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# echo always &amp;gt; /sys/kernel/mm/transparent_hugepage/enabled
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chmod +x /etc/rc.d/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;disabling-transparent-hugepages-on-other-systems&#34;&gt;Disabling transparent hugepages on other systems&lt;/h2&gt;

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

SUSE did not offer transparent hugepage support in its initial 11.0 release. However, subsequent SUSE service packs do include support for transparent hugepages.

&lt;/div&gt;
&lt;p&gt;To determine if transparent hugepages is enabled, run the following command.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The setting returned in brackets is your current setting. Depending on your platform OS, the &lt;code&gt;madvise&lt;/code&gt; setting may not be displayed.&lt;/p&gt;
&lt;p&gt;You can disable transparent hugepages one of two ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Edit your boot loader (for example &lt;code&gt;/etc/grub.conf&lt;/code&gt;). Typically, you add the following to the end of the kernel line. However, consult the documentation for your system before editing your bootloader configuration.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;transparent_hugepage=never
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/rc.local&lt;/code&gt; (on systems that support rc.local) and add the following script.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never &amp;gt; /sys/kernel/mm/transparent_hugepage/enabled
fi
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For systems that do not support &lt;code&gt;/etc/rc.local&lt;/code&gt;, use the equivalent startup script that is run after the destination runlevel has been reached. For example, SUSE uses &lt;code&gt;/etc/init.d/after.local&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Regardless of which approach you choose, you must reboot your system for the setting to take effect, or run the following echo line as root to proceed with the install without rebooting:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
echo never &amp;gt; /sys/kernel/mm/transparent_hugepage/enabled
&lt;/code&gt;&lt;/pre&gt;
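As with the I/O scheduler file, the active transparent hugepage mode is the bracketed entry in the enabled file, and you can extract it for scripting. The helper name thp_mode is our own, shown as a sketch; it takes the file path as an argument so it can be exercised against any file.

```shell
# Illustrative helper: print the active transparent hugepage mode (the
# bracketed entry: always, madvise, or never) from an "enabled" file.
thp_mode() {
  sed -n 's/.*\[\([a-z]*\)\].*/\1/p' "$1"
}

# Real usage:
# thp_mode /sys/kernel/mm/transparent_hugepage/enabled
```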
      </description>
    </item>
    
    <item>
      <title>Setup: Check for swappiness</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/check-swappiness/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/check-swappiness/</guid>
      <description>
        
        
&lt;p&gt;The swappiness kernel parameter controls how much, and how often, the kernel copies RAM contents to swap space. Vertica recommends a value of 0. The installer reports any swappiness issues with identifier &lt;strong&gt;S0112&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;You can check the swappiness value by running the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat /proc/sys/vm/swappiness
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To set the swappiness value, add or update the following line in &lt;code&gt;/etc/sysctl.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vm.swappiness = 0
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This also ensures that the value persists after a reboot.&lt;/p&gt;
&lt;p&gt;If necessary, you can change the swappiness value at runtime by logging in as root and running the following:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ echo 0 &amp;gt; /proc/sys/vm/swappiness
&lt;/code&gt;&lt;/pre&gt;
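An equivalent, commonly used way to make the runtime change is the sysctl utility (shown as a comment because the write requires root). Reading the current value needs no privileges:

```shell
# Runtime change via sysctl (requires root; equivalent to the echo above):
#   sysctl -w vm.swappiness=0
# Reading the current value needs no privileges:
cat /proc/sys/vm/swappiness
```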
      </description>
    </item>
    
    <item>
      <title>Setup: Enabling network time protocol (NTP)</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-network-time-protocol-ntp/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-network-time-protocol-ntp/</guid>
      <description>
        
        
        
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
&lt;p&gt;Data damage and performance issues might occur if you change host NTP settings while the database is running. Before you change the NTP settings, stop the database. If you cannot stop the database, stop the Vertica process on each host and change the NTP settings one host at a time.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;../../../../../en/admin/using-admin-tools/admin-tools-reference/advanced-menu-options/stopping-on-host/#&#34;&gt;Stopping the database on host&lt;/a&gt;.&lt;/p&gt;

&lt;/div&gt;
&lt;p&gt;The network time protocol (NTP) daemon must be running on all of the hosts in the cluster so that their clocks are synchronized. The spread daemon relies on all of the nodes to have their clocks synchronized for timing purposes. If your nodes do not have NTP running, the installation can fail with a spread configuration error or other errors.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

Different Linux distributions refer to the NTP daemon in different ways. For example, SUSE and Debian/Ubuntu refer to it as &lt;code&gt;ntp&lt;/code&gt;, while CentOS and Red Hat refer to it as &lt;code&gt;ntpd&lt;/code&gt;. If the following commands produce errors, try using the other NTP daemon reference name.

&lt;/div&gt;&lt;/p&gt;
&lt;h2 id=&#34;verify-that-ntp-is-running&#34;&gt;Verify that NTP is running&lt;/h2&gt;
&lt;p&gt;To verify that your hosts are configured to run the NTP daemon on startup, enter the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chkconfig --list ntpd
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Debian and Ubuntu do not support &lt;code&gt;chkconfig&lt;/code&gt;, but they do offer an optional package. You can install this package with the command &lt;code&gt;sudo apt-get install sysv-rc-conf&lt;/code&gt;. To verify that your hosts are configured to run the NTP daemon on startup with the &lt;code&gt;sysv-rc-conf&lt;/code&gt; utility, enter the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ sysv-rc-conf --list ntpd
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The &lt;code&gt;chkconfig&lt;/code&gt; command can produce an error similar to &lt;code&gt;ntpd: unknown service&lt;/code&gt;. If you get this error, verify that your Linux distribution refers to the NTP daemon as &lt;code&gt;ntpd&lt;/code&gt; rather than &lt;code&gt;ntp&lt;/code&gt;. If it does not, you need to install the NTP daemon package before you can configure it. Consult your Linux documentation for instructions on how to locate and install packages.&lt;/p&gt;
&lt;p&gt;If the NTP daemon is installed, your output should resemble the following:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;ntp 0:off 1:off 2:on 3:on 4:off 5:on 6:off
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The output indicates the runlevels where the daemon runs. Verify that the current runlevel of the system (usually 3 or 5) has the NTP daemon set to &lt;code&gt;on&lt;/code&gt;. If you do not know the current runlevel, you can find it using the &lt;code&gt;runlevel&lt;/code&gt; command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ runlevel
N 3
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;configure-ntp-for-red-hat-6centos-6-and-sles&#34;&gt;Configure NTP for Red Hat 6/CentOS 6 and SLES&lt;/h2&gt;
&lt;p&gt;If your system is based on Red Hat 6/CentOS 6 or SUSE Linux Enterprise Server, use the &lt;code&gt;service&lt;/code&gt; and &lt;code&gt;chkconfig&lt;/code&gt; utilities to start NTP and have it start at startup.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /sbin/service ntpd restart
$ /sbin/chkconfig ntpd on
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Red Hat 6/CentOS 6&lt;/strong&gt;—NTP uses the default time servers at ntp.org. You can change the default NTP servers by editing &lt;code&gt;/etc/ntp.conf&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SLES&lt;/strong&gt;—By default, no time servers are configured. You must edit &lt;code&gt;/etc/ntp.conf&lt;/code&gt; after the install completes and add time servers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;configure-ntp-for-ubuntu-and-debian&#34;&gt;Configure NTP for ubuntu and debian&lt;/h2&gt;
&lt;p&gt;By default, the &lt;a href=&#34;https://help.ubuntu.com/lts/serverguide/NTP.html&#34;&gt;NTP daemon&lt;/a&gt; is not installed on some Ubuntu and Debian systems. First, install NTP, and then start the NTP process, as shown below. You can change the default NTP servers by editing &lt;code&gt;/etc/ntp.conf&lt;/code&gt;.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ sudo apt-get install ntp
$ sudo /etc/init.d/ntp reload
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;verify-that-ntp-is-operating-correctly&#34;&gt;Verify that NTP is operating correctly&lt;/h2&gt;
&lt;p&gt;To verify that the Network Time Protocol Daemon (NTPD) is operating correctly, issue the following command on all nodes in the cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;For Red Hat 6/CentOS 6 and SLES:&lt;/strong&gt;&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /usr/sbin/ntpq -c rv | grep stratum
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;For Ubuntu and Debian:&lt;/strong&gt;&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ ntpq -c rv | grep stratum
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A stratum level of 16 indicates that NTP is not synchronizing correctly.&lt;/p&gt;
&lt;p&gt;If a stratum level of 16 is detected, wait 15 minutes and issue the command again. It may take this long for the NTP server to stabilize.&lt;/p&gt;
&lt;p&gt;If NTP continues to detect a stratum level of 16, verify that the NTP port (UDP Port 123) is open on all firewalls between the cluster and the remote machine to which you are attempting to synchronize.&lt;/p&gt;
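The stratum check described above can be scripted. The following sketch parses a sample `ntpq -c rv` response for the stratum value and flags the unsynchronized level of 16; the `rv` variable stands in for live `ntpq` output.

```shell
# Sample "rv" output; on a live host: rv=$(ntpq -c rv | grep stratum)
rv='version="ntpd 4.2.6p5", processor="x86_64", stratum=16, precision=-24'
# Extract the numeric stratum value.
stratum=$(echo "$rv" | sed -n 's/.*stratum=\([0-9]*\).*/\1/p')
if [ "$stratum" -eq 16 ]; then
    echo "NTP is not synchronizing correctly"
fi
```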
&lt;h2 id=&#34;red-hat-documentation-related-to-ntp&#34;&gt;Red Hat documentation related to NTP&lt;/h2&gt;
&lt;p&gt;The following links were current as of the last publication of the Vertica documentation and could change between releases.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;http://kbase.redhat.com/faq/docs/DOC-6731&#34;&gt;http://kbase.redhat.com/faq/docs/DOC-6731&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;http://kbase.redhat.com/faq/docs/DOC-6902&#34;&gt;http://kbase.redhat.com/faq/docs/DOC-6902&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;http://kbase.redhat.com/faq/docs/DOC-6991&#34;&gt;http://kbase.redhat.com/faq/docs/DOC-6991&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

      </description>
    </item>
    
    <item>
      <title>Setup: Enabling chrony or ntpd for Red Hat and CentOS systems</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-chrony-or-ntpd-red-hat-7centos-7-systems/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-chrony-or-ntpd-red-hat-7centos-7-systems/</guid>
      <description>
        
        
        &lt;p&gt;Before you can install Vertica, you must enable one of the following on your system for clock synchronization:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;chrony&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;NTPD&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must enable and activate the Network Time Protocol (NTP) before installation. Otherwise, the installer reports this issue with the identifier &lt;strong&gt;S0030&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For information on installing and using chrony, see the information below. For information on NTPD see &lt;a href=&#34;../../../../../en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-network-time-protocol-ntp/#&#34;&gt;Enabling network time protocol (NTP)&lt;/a&gt;. For more information about chrony, see &lt;a href=&#34;https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/using-chrony_configuring-basic-system-settings&#34;&gt;Using chrony&lt;/a&gt; in the Red Hat documentation.&lt;/p&gt;
&lt;h2 id=&#34;install-chrony&#34;&gt;Install chrony&lt;/h2&gt;
&lt;p&gt;The chrony suite consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;chronyd - the daemon for clock synchronization.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;chronyc - the command-line utility for configuring chronyd.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;chrony is installed by default on some versions of Red Hat/CentOS 7. However, if chrony is not installed on your system, you must install it. To install chrony, run the following command as sudo or root:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# dnf install chrony
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;verify-that-chrony-is-running&#34;&gt;Verify that chrony is running&lt;/h2&gt;
&lt;p&gt;To view the status of the chronyd daemon, run the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ systemctl status chronyd
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If chrony is running, an output similar to the following appears:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;chronyd.service - NTP client/server
    Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
    Active: active (running) since Mon 2015-07-06 16:29:54 EDT; 15s ago
Main PID: 2530 (chronyd)
    CGroup: /system.slice/chronyd.service
            └─2530 /usr/sbin/chronyd -u chrony
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If chrony is not running, execute the following command as sudo or root. This command starts chrony and also causes it to run at boot time:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# systemctl enable --now chronyd
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;verify-that-chrony-is-operating-correctly&#34;&gt;Verify that chrony is operating correctly&lt;/h2&gt;
&lt;p&gt;To verify that the chrony daemon is operating correctly, issue the following command on all nodes in the cluster:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chronyc tracking
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;An output similar to the following appears:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
Reference ID    : 198.247.63.98 (time01.website.org)
Stratum         : 3
Ref time (UTC)  : Thu Jul  9 14:58:01 2015
System time     : 0.000035685 seconds slow of NTP time
Last offset     : -0.000151098 seconds
RMS offset      : 0.000279871 seconds
Frequency       : 2.085 ppm slow
Residual freq   : -0.013 ppm
Skew            : 0.185 ppm
Root delay      : 0.042370 seconds
Root dispersion : 0.022658 seconds
Update interval : 1031.0 seconds
Leap status     : Normal
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A stratum level of 16 indicates that chrony is not synchronizing correctly. If chrony continues to report a stratum level of 16, verify that UDP port 123, the NTP port, is open. This port must be open on all firewalls between the cluster and the remote machine to which you are attempting to synchronize.&lt;/p&gt;
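The same check can be automated by parsing the `Stratum` line of `chronyc tracking`. This sketch operates on a sample line (taken from the output above) so it runs without a live chronyd; on a real host you would feed it `chronyc tracking` directly, as shown in the comment.

```shell
# Sample line from `chronyc tracking`; on a live host:
#   stratum=$(chronyc tracking | awk '/^Stratum/ {print $3}')
line='Stratum         : 3'
stratum=$(echo "$line" | awk '{print $3}')
if [ "$stratum" -eq 16 ]; then
    echo "chrony is not synchronizing correctly"
else
    echo "stratum $stratum: OK"
fi
```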

      </description>
    </item>
    
    <item>
      <title>Setup: SELinux configuration</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/selinux-config/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/selinux-config/</guid>
      <description>
        
        
&lt;p&gt;OpenText™ Analytics Database is supported on servers running SELinux in either permissive or enforcing mode. To run OpenText™ Analytics Database with SELinux in permissive mode, you only need a dbadmin user and group with the correct permissions for the files that OpenText™ Analytics Database touches. To run OpenText™ Analytics Database with SELinux in enforcing mode, the same user and group requirements apply, but you must also use a different install method. For information about installing OpenText™ Analytics Database on SELinux in enforcing mode, see &lt;a href=&#34;../../../../../en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/install-on-selinux-enforcing-mode/#&#34;&gt;Installing OpenText Analytics Database on SELinux in Enforcing mode&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
When using SELinux in permissive mode, you cannot use the server as an Orchestration node, and database creation must be done on a different host.
&lt;/div&gt;
&lt;p&gt;If you want to disable or run SELinux in permissive mode, use the section below.&lt;/p&gt;
&lt;h2 id=&#34;disabling-selinux&#34;&gt;Disabling SELinux&lt;/h2&gt;
&lt;p&gt;To disable SELinux on Red Hat and SUSE systems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/selinux/config&lt;/code&gt; and change the SELINUX setting to disabled (&lt;code&gt;SELINUX=disabled&lt;/code&gt;). This disables SELinux at boot time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As root/sudo, type &lt;code&gt;setenforce 0&lt;/code&gt; to disable SELinux immediately.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To disable SELinux on Ubuntu or Debian systems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/selinux/config&lt;/code&gt; and change the SELINUX setting to disabled (&lt;code&gt;SELINUX=disabled&lt;/code&gt;). This disables SELinux at boot time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As root/sudo, type &lt;code&gt;setenforce 0&lt;/code&gt; to disable SELinux immediately.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;running-selinux-in-permissive-mode&#34;&gt;Running SELinux in permissive mode&lt;/h2&gt;
&lt;p&gt;To set permissive mode on Red Hat and SUSE systems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/selinux/config&lt;/code&gt; and change the SELINUX setting to permissive (&lt;code&gt;SELINUX=permissive&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As root/sudo, type &lt;code&gt;setenforce Permissive&lt;/code&gt; to switch to permissive mode immediately.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To set permissive mode on Ubuntu or Debian systems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/selinux/config&lt;/code&gt; and change the SELINUX setting to permissive (&lt;code&gt;SELINUX=permissive&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As root/sudo, type &lt;code&gt;setenforce Permissive&lt;/code&gt; to switch to permissive mode immediately.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
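The config edit in the steps above can be scripted with `sed`. This is a minimal sketch that rewrites the `SELINUX=` line in a sample config string rather than the real config file, so it is safe to run anywhere; on a live system you would apply the same expression in place (for example with `sed -i`) as root.

```shell
# Sample SELinux config content; stands in for /etc/selinux/config.
cfg='SELINUX=enforcing
SELINUXTYPE=targeted'
# Rewrite only the SELINUX= line; SELINUXTYPE is left untouched because
# the pattern anchors on the literal "SELINUX=".
updated=$(echo "$cfg" | sed 's/^SELINUX=.*/SELINUX=permissive/')
echo "$updated"
```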

      </description>
    </item>
    
    <item>
      <title>Setup: Installing OpenText Analytics Database on SELinux in Enforcing mode</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/install-on-selinux-enforcing-mode/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/install-on-selinux-enforcing-mode/</guid>
      <description>
        
        
        &lt;p&gt;Use the steps below to install OpenText™ Analytics Database on a server with SELinux in enforcing mode.&lt;/p&gt;
&lt;h3 id=&#34;as-the-root-user-on-the-server&#34;&gt;As the root user on the server:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Copy the OpenText™ Analytics Database RPM from the Orchestration server to the &lt;code&gt;/tmp&lt;/code&gt; directory on the node.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following to install OpenText™ Analytics Database:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
rpm -Uvh /tmp/vertica-latest.rhel.x86_64.rpm
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Run the seinstall_root.sh script to set up the dbadmin user account and group:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
/opt/vertica/selinux/seinstall_root.sh
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;If the commands were run using sudo, log out and log back in to apply the new dbadmin SELinux context.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&#34;as-dbadmin-or-the-user-you-specified-previously&#34;&gt;As dbadmin (or the user you specified previously):&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Run the following on the node and copy the resulting .json and .pem files to /tmp on every node:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
/opt/vertica/selinux/gen_httpstls_json.sh
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Run the seinstall.sh script to set up the vertica node management agent (NMA) on each node:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
/opt/vertica/selinux/seinstall.sh 
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;On one node, run the following command to create the database specifying the information for your system:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
vcluster create_db --db-name &amp;lt;database name&amp;gt; --hosts &amp;lt;list of hosts&amp;gt; --catalog-path /vertica/data --data-path /vertica/data --depot-path /vertica/data/depot --password &amp;lt;password&amp;gt; --depot-size &amp;lt;depot size&amp;gt; --verbose --communal-storage-location &amp;lt;s3 storage location&amp;gt; --shard-count &amp;lt;shard count&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example command with system information:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
vcluster create_db --db-name selinux_vdb --hosts 10.10.10.1,10.10.10.2,10.10.10.3,10.10.10.4 --catalog-path /vertica/data --data-path /vertica/data --depot-path /vertica/data/depot --password pw --depot-size 80% --communal-storage-location s3://vertica-fleeting/selinux_vdb --shard-count 8
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;On each node, use &lt;code&gt;ps xfZ&lt;/code&gt; to make sure that neither the NMA nor vertica is running unconfined. The vertica and NMA processes should be running with SELinux context &lt;code&gt;sysadm_u:sysadm_r:vertica_t&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
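The final check above can be automated by inspecting the context column of `ps xfZ`. This sketch parses one sample output line (the process path is illustrative); on a live node you would loop over `ps xfZ` output instead.

```shell
# Sample `ps xfZ` line; the first field is the SELinux context.
line='sysadm_u:sysadm_r:vertica_t:s0 2530 ? Ssl 0:01 /opt/vertica/bin/vertica'
ctx=$(echo "$line" | awk '{print $1}')
case "$ctx" in
    *:vertica_t:*) echo "confined in vertica_t" ;;
    *unconfined*)  echo "WARNING: process is running unconfined" ;;
esac
```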

      </description>
    </item>
    
    <item>
      <title>Setup: Example Script for Installing on SELinux</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/selinux-script-example/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/selinux-script-example/</guid>
      <description>
        
        
        &lt;p&gt;The following script can be used as an example of how to automate the installation of OpenText™ Analytics Database on a SELinux server in enforcing mode.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;
/opt/vertica/selinux/gen_httpstls_json.sh

DBADMIN=$(id -un)
DBADMIN_GROUP=$(id -gn)

ROOT_DIR=/vertica

hosts=(192.168.0.50 192.168.0.51 192.168.0.52)
hostlist=
for h in &amp;#34;${hosts[@]}&amp;#34;; do
    if [ -z &amp;#34;$hostlist&amp;#34; ]; then
        hostlist=$h
    else
        hostlist=&amp;#34;$hostlist,$h&amp;#34;
    fi

    scp vertica-latest.rhel.x86_64.rpm $h:/tmp
    scp httpstls.json *.pem $h:/tmp

    ssh $h sudo rpm -Uvh /tmp/vertica-latest.rhel.x86_64.rpm
    ssh $h sudo DBADMIN=$DBADMIN DBADMIN_GROUP=$DBADMIN_GROUP ROOT_DIR=$ROOT_DIR /opt/vertica/selinux/seinstall_root.sh

    ssh $DBADMIN@$h ROOT_DIR=$ROOT_DIR /opt/vertica/selinux/seinstall.sh
done

ssh ${hosts[1]} vcluster create_db --db-name selinux_vdb --hosts $hostlist --catalog-path $ROOT_DIR --data-path $ROOT_DIR --depot-path $ROOT_DIR/depot --password pw --depot-size 80%  --communal-storage-location s3://vertica-fleeting/selinux_vdb --shard-count 8

for h in &amp;#34;${hosts[@]}&amp;#34;; do
    ssh $h ps xfZ | grep unconfined &amp;amp;&amp;amp; echo &amp;#34;warning: dbadmin processes are running unconfined&amp;#34;
done
&lt;/code&gt;&lt;/pre&gt;
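The comma-separated host list that the loop above builds incrementally can also be produced in one step. A minimal portable sketch (the addresses are the same example values used in the script):

```shell
# Join a whitespace-separated host list into the comma-separated form
# that `vcluster create_db --hosts` expects.
hosts="192.168.0.50 192.168.0.51 192.168.0.52"
hostlist=$(echo "$hosts" | tr ' ' ',')
echo "$hostlist"
```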
      </description>
    </item>
    
    <item>
      <title>Setup: CPU frequency scaling</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/cpu-frequency-scaling/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/cpu-frequency-scaling/</guid>
      <description>
        
        
        &lt;p&gt;This topic details the various CPU frequency scaling methods supported by Vertica. In general, if you do not require CPU frequency scaling, then disable it so as not to impact system performance.

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Your systems may use significantly more energy when frequency scaling is disabled.
&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;The installer allows CPU frequency scaling to be enabled when the CPU frequency scaling governor is set to &lt;code&gt;performance&lt;/code&gt;. If the governor is set to &lt;em&gt;ondemand&lt;/em&gt; and &lt;code&gt;ignore_nice_load&lt;/code&gt; is 1 (true), the installer &lt;strong&gt;fails&lt;/strong&gt; with the error &lt;strong&gt;S0140&lt;/strong&gt;. If the governor is set to &lt;em&gt;ondemand&lt;/em&gt; and &lt;code&gt;ignore_nice_load&lt;/code&gt; is 0 (false), the installer &lt;strong&gt;warns&lt;/strong&gt; with the identifier &lt;strong&gt;S0141&lt;/strong&gt;.&lt;/p&gt;
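The installer's governor checks described above can be sketched as follows. The sample values stand in for live sysfs reads; the sysfs paths in the comment are typical locations but may vary by kernel version.

```shell
# Sample values; on a live host (paths may vary by kernel):
#   governor=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)
#   ignore_nice=$(cat /sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load)
governor='ondemand'
ignore_nice=1
if [ "$governor" = "performance" ]; then
    result="OK"
elif [ "$governor" = "ondemand" ] && [ "$ignore_nice" -eq 1 ]; then
    result="FAIL S0140"
elif [ "$governor" = "ondemand" ]; then
    result="WARN S0141"
else
    result="not covered by S0140/S0141"
fi
echo "$result"
```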
&lt;p&gt;CPU frequency scaling is a hardware and software feature that helps computers conserve energy by slowing the processor when the system load is low, and speeding it up again when the system load increases. This feature can impact system performance, since raising the CPU frequency in response to higher system load does not occur instantly. Always disable this feature on the Vertica database hosts to prevent it from interfering with performance.&lt;/p&gt;
&lt;p&gt;You disable CPU scaling in your host&#39;s system BIOS. There may be multiple settings in your host&#39;s BIOS that you need to adjust in order to completely disable CPU frequency scaling. Consult your host hardware&#39;s documentation for details on entering the system BIOS and disabling CPU frequency scaling.&lt;/p&gt;
&lt;p&gt;If you cannot disable CPU scaling through the system BIOS, you can limit the impact of CPU scaling by disabling the scaling through the Linux kernel or setting the CPU frequency governor to always run the CPU at full speed.

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

This method is not reliable, as some hardware platforms may ignore the kernel settings. For more information, see &lt;a href=&#34;https://vertica.com/kb/GenericHWGuide/Content/Hardware/GenericHWGuide.htm&#34;&gt;Vertica Hardware Guide&lt;/a&gt;.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;The method you use to disable frequency scaling depends on the CPU scaling method used by the Linux kernel. See your Linux distribution&#39;s documentation for instructions on disabling scaling in the kernel or changing the CPU governor.&lt;/p&gt;

      </description>
    </item>
    
    <item>
      <title>Setup: Enabling or disabling defrag</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-or-disabling-defrag/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-or-disabling-defrag/</guid>
      <description>
        
        
        &lt;p&gt;You can modify the defrag utility to meet Vertica configuration requirements, or to optimize your system performance by workload.&lt;/p&gt;
&lt;p&gt;On all Red Hat/CentOS systems, you must disable the defrag utility to meet Vertica configuration requirements.&lt;/p&gt;
&lt;p&gt;For SUSE 15.1, Vertica recommends that you enable defrag for optimized performance.&lt;/p&gt;
&lt;h2 id=&#34;recommended-settings-by-workload-for-red-hatcentos-and-suse-151&#34;&gt;Recommended settings by workload for Red Hat/CentOS and SUSE 15.1&lt;/h2&gt;
&lt;p&gt;Vertica recommends defrag settings to optimize performance by workload. The following table contains recommendations for systems that primarily run concurrent queries (such as short-running dashboard queries), or sequential SELECT or load (COPY) queries:

&lt;table class=&#34;table table-bordered&#34; &gt;



&lt;tr&gt; 

&lt;th &gt;
Operating System&lt;/th&gt; 

&lt;th &gt;
Concurrent&lt;/th&gt; 

&lt;th &gt;
Sequential&lt;/th&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
Red Hat/CentOS&lt;/td&gt; 

&lt;td &gt;
Disable&lt;/td&gt; 

&lt;td &gt;
Disable&lt;/td&gt;&lt;/tr&gt;

&lt;tr&gt; 

&lt;td &gt;
SUSE 15.1&lt;/td&gt; 

&lt;td &gt;


Enable&lt;/td&gt; 

&lt;td &gt;
Enable&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../../en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/enabling-or-disabling-transparent-hugepages/#&#34;&gt;Enabling or disabling transparent hugepages&lt;/a&gt; for additional settings that optimize your system performance by workload.&lt;/p&gt;
&lt;h2 id=&#34;disabling-defrag-on-red-hatcentos-and-suse-151&#34;&gt;Disabling defrag on Red Hat/CentOS and SUSE 15.1&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Determine if defrag is enabled by running the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The setting returned in brackets is your current setting. If you are not using &lt;code&gt;madvise&lt;/code&gt; or &lt;code&gt;never&lt;/code&gt; as your defrag setting, then you must disable defrag.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/rc.local,&lt;/code&gt; and add the following script:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never &amp;gt; /sys/kernel/mm/transparent_hugepage/defrag
fi
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You must reboot your system for the setting to take effect, or run the following echo line to proceed with the install without rebooting:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# echo never &amp;gt; /sys/kernel/mm/transparent_hugepage/defrag
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you are using Red Hat or CentOS, run the following command as root or sudo:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chmod +x /etc/rc.d/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
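The bracketed value in the `defrag` file marks the current setting, so step 1 can be scripted. This sketch extracts it from a sample string (standing in for a live `cat /sys/kernel/mm/transparent_hugepage/defrag`) and reports whether defrag must be disabled:

```shell
# Sample sysfs content; the bracketed entry is the active setting.
state='[always] madvise never'
current=$(echo "$state" | sed -n 's/.*\[\([a-z]*\)\].*/\1/p')
if [ "$current" != "madvise" ] && [ "$current" != "never" ]; then
    echo "defrag must be disabled (current setting: $current)"
fi
```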
&lt;h2 id=&#34;enabling-defrag-on-red-hatcentos-and-suse-151&#34;&gt;Enabling defrag on Red Hat/CentOS and SUSE 15.1&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Determine if defrag is enabled by running the following command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The setting returned in brackets is your current setting. If you are not using &lt;code&gt;madvise&lt;/code&gt; or &lt;code&gt;always&lt;/code&gt; as your defrag setting, then you must enable defrag.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/rc.local,&lt;/code&gt; and add the following script:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo always &amp;gt; /sys/kernel/mm/transparent_hugepage/defrag
fi
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You must reboot your system for the setting to take effect, or run the following echo line to proceed with the install without rebooting:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# echo always &amp;gt; /sys/kernel/mm/transparent_hugepage/defrag
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you are using Red Hat or CentOS, run the following command as root or sudo:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ chmod +x /etc/rc.d/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;

      </description>
    </item>
    
    <item>
      <title>Setup: Support tools</title>
      <link>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/support-tools/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/setup/set-up-on-premises/before-you-install/manually-configured-os-settings/support-tools/</guid>
      <description>
        
        
&lt;p&gt;Vertica recommends installing the following tools so that support can assist in troubleshooting your system if any issues arise:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;pstack (or gstack) package. Identified by issue &lt;strong&gt;S0040&lt;/strong&gt; when not installed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;mcelog package. Identified by issue &lt;strong&gt;S0041&lt;/strong&gt; when not installed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;sysstat package. Identified by issue &lt;strong&gt;S0045&lt;/strong&gt; when not installed.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
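Before installing packages, you can check which of the recommended tools are already on the PATH. A minimal sketch; note that the command names differ from the package names (gstack ships in the gdb package, and sar is the command provided by sysstat):

```shell
# Report which recommended support commands are missing from PATH.
missing=""
for tool in gstack mcelog sar; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
echo "missing tools:$missing"
```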
&lt;h2 id=&#34;red-hat-and-centos-systems&#34;&gt;Red Hat and CentOS systems&lt;/h2&gt;
&lt;p&gt;To install the required tools on Red Hat and CentOS systems, run the following commands as sudo or root:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# dnf install gdb
# dnf install mcelog
# dnf install sysstat
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;ubuntu-and-debian-systems&#34;&gt;Ubuntu and Debian systems&lt;/h2&gt;
&lt;p&gt;To install the required tools on Ubuntu and Debian systems, run the following commands as sudo or root:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ apt-get install pstack
$ apt-get install mcelog
$ apt-get install sysstat
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
For Ubuntu versions 18.04 and higher, run &lt;code&gt;apt-get install rasdaemon&lt;/code&gt; instead of &lt;code&gt;apt-get install mcelog&lt;/code&gt;.
&lt;/div&gt;
&lt;h2 id=&#34;suse-systems&#34;&gt;SuSE systems&lt;/h2&gt;
&lt;p&gt;To install the required tools on SuSE systems, run the following commands as sudo or root.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# zypper install sysstat
# zypper install mcelog
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;There is no individual SuSE package for pstack/gstack. However, the gdb package contains gstack, so you could optionally install gdb instead, or build pstack/gstack from source. To install the gdb package:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# zypper install gdb
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
  </channel>
</rss>
