General operating system configuration - manual configuration

The following general operating system settings must be configured manually.

1 - Persisting operating system settings

Vertica requires that you manually configure several general operating system settings. You should configure some of these settings in the /etc/rc.local script to prevent them from reverting on reboot. This script contains commands and scripts that run each time the system boots.

Vertica uses settings in /etc/rc.local to persist settings such as disk readahead, the I/O scheduler, and transparent hugepages behavior, as described in later topics in this section.

Editing /etc/rc.local

  1. As the root user, open /etc/rc.local:

    # vi /etc/rc.local
    
  2. Enter a script or command. For example, to configure transparent hugepages to meet Vertica requirements, enter the following:

    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
    
  3. Save your changes, and close /etc/rc.local.

  4. If you use Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:

    $ chmod +x /etc/rc.d/rc.local
    

On reboot, the command runs during startup. You can also run the command manually as the root user, if you want it to take effect immediately.

Disabling tuning system service

If you use Red Hat 7.0 or CentOS 7.0 or higher, make sure the tuning system service (tuned) does not start when the system reboots. Turning off tuning prevents the service from monitoring your OS and tuning it based on that monitoring. Tuning also silently enables THP, which can cause issues in other areas such as readahead.

Run the following command as sudo or root:

$ chkconfig tuned off
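
On systemd-based systems such as Red Hat 7/CentOS 7, chkconfig typically redirects to systemctl. If you prefer to work with systemctl directly, the following commands (a sketch, assuming the service name tuned) stop the service immediately and prevent it from starting at boot:

$ systemctl stop tuned
$ systemctl disable tuned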

2 - SUSE control groups configuration

On SuSE 12, the installer checks the control group (cgroup) setting for the cgroups that Vertica may run under:

  • verticad

  • vertica_agent

  • sshd

The installer verifies that the pids.max resource is large enough for all the threads that Vertica creates. It checks the contents of the following files:

  • /sys/fs/cgroup/pids/system.slice/verticad.service/pids.max

  • /sys/fs/cgroup/pids/system.slice/vertica_agent.service/pids.max

  • /sys/fs/cgroup/pids/system.slice/sshd.service/pids.max

If these files exist but do not contain the value max, the installation stops and the installer returns a failure message (code S0340).

If these files do not exist, they are created automatically when systemd runs the verticad and vertica_agent startup scripts. However, their default values are determined by your site's cgroup configuration process; Vertica does not change the defaults.
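
To confirm the values on a running system, you can read the files directly (a minimal check, assuming the services have started at least once); each file should report max:

$ cat /sys/fs/cgroup/pids/system.slice/verticad.service/pids.max
$ cat /sys/fs/cgroup/pids/system.slice/vertica_agent.service/pids.max
$ cat /sys/fs/cgroup/pids/system.slice/sshd.service/pids.max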

Pre-installation configuration

Before installing Vertica, configure your system as follows:

# Create the following directories:
sudo mkdir /sys/fs/cgroup/pids/system.slice/verticad.service/
sudo mkdir /sys/fs/cgroup/pids/system.slice/vertica_agent.service/
# The sshd service directory should already exist, so you do not need to create it

# Set pids.max values:
sudo sh -c 'echo "max" > /sys/fs/cgroup/pids/system.slice/verticad.service/pids.max'
sudo sh -c 'echo "max" > /sys/fs/cgroup/pids/system.slice/vertica_agent.service/pids.max'
sudo sh -c 'echo "max" > /sys/fs/cgroup/pids/system.slice/sshd.service/pids.max'

Persisting configuration for restart

After installation, you can configure control groups for subsequent reboots of the Vertica database. To do so, edit the configuration file /etc/init.d/after.local and add the commands shown earlier.
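
For example, /etc/init.d/after.local might contain the following (a sketch that repeats the commands above; adjust if your site's cgroup layout differs):

# /etc/init.d/after.local
echo "max" > /sys/fs/cgroup/pids/system.slice/verticad.service/pids.max
echo "max" > /sys/fs/cgroup/pids/system.slice/vertica_agent.service/pids.max
echo "max" > /sys/fs/cgroup/pids/system.slice/sshd.service/pids.max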

3 - Cron required for scheduled jobs

Admintools uses the Linux cron package to schedule jobs that regularly rotate the database logs. Without this package, the logs are never rotated. The lack of rotation can consume significant storage; on busy clusters, Vertica can produce hundreds of gigabytes of logs per day.

cron is installed by default on most Linux distributions, but it may not be present on some SUSE 12 systems.

To install cron, run this command:

$ sudo zypper install cron
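
After installation, you can confirm that the cron daemon is present and running (a quick check; on SUSE the service is named cron):

$ systemctl status cron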

4 - Disk readahead

This topic details how to change disk readahead to a supported value. Vertica requires that disk readahead be set to at least 2048. The installer reports this issue with the identifier: S0020.

Red Hat/CentOS and SuSE-based systems

For each drive in the Vertica system, Vertica recommends that you set the readahead value to at least 2048 for most deployments. Some deployments may require a higher value, and the setting can be raised as high as 8192 under the guidance of support. In the following example, the first command immediately changes the readahead value for the specified disk, and the second command appends the setting to /etc/rc.local so that it is applied each time the system boots.

The following example sets the readahead value of the drive sda to 2048:

$ /sbin/blockdev --setra 2048 /dev/sda
$ echo '/sbin/blockdev --setra 2048 /dev/sda' >> /etc/rc.local
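
You can verify the current readahead value for a drive with the --getra option:

$ /sbin/blockdev --getra /dev/sda
2048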

If you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:

$ chmod +x /etc/rc.d/rc.local

Ubuntu and Debian systems

For each drive in the Vertica system, set the readahead value to 2048. Run the command once in your shell, then add it to /etc/rc.local so that the setting is applied each time the system boots. Note that on Ubuntu systems, the last line in /etc/rc.local must be exit 0, so you must manually add the following line before that final line.

/sbin/blockdev --setra 2048 /dev/sda
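
For example, the resulting /etc/rc.local might look like the following sketch (the drive name is illustrative; repeat the blockdev line for each drive):

#!/bin/sh -e
/sbin/blockdev --setra 2048 /dev/sda
exit 0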

5 - I/O scheduling

This topic details how to change I/O scheduling to a supported scheduler. Vertica requires that the I/O scheduler be set to deadline or noop. The installer checks which scheduler the system is using and reports an unsupported scheduler with the identifier: S0150. If the installer cannot detect the type of scheduler in use (typically when your system is using a RAID array), it reports that issue with the identifier: S0151.

If your system is not using a RAID array, then complete the following steps to change your system to a supported I/O Scheduler. If you are using a RAID array, then consult your RAID vendor documentation for the best performing scheduler for your hardware.

Configure the I/O scheduler

The Linux kernel can use several different I/O schedulers to prioritize disk input and output. Most Linux distributions use the Completely Fair Queuing (CFQ) scheme by default, which gives input and output requests equal priority. This scheduler is efficient on systems running multiple tasks that need equal access to I/O resources. However, it can create a bottleneck when used on Vertica drives containing the catalog and data directories, because it gives write requests equal priority to read requests, and its per-process I/O queues can penalize processes making more requests than other processes.

Instead of the CFQ scheduler, configure your hosts to use either the Deadline or NOOP I/O scheduler for the drives containing the catalog and data directories:

  • The Deadline scheduler gives priority to read requests over write requests. It also imposes a deadline on all requests. After reaching the deadline, such requests gain priority over all other requests. This scheduling method helps prevent processes from becoming starved for I/O access. The Deadline scheduler is best used on physical media drives (disks using spinning platters), since it attempts to group requests for adjacent sectors on a disk, lowering the time the drive spends seeking.

  • The NOOP scheduler uses a simple FIFO approach, placing all input and output requests into a single queue. This scheduler is best used on solid state drives (SSDs). Because SSDs do not have a physical read head, no performance penalty exists when accessing non-adjacent sectors.

Failure to use one of these schedulers for the Vertica drives containing the catalog and data directories can result in slower database performance. Other drives on the system (such as the drive containing swap space, log files, or the Linux system files) can still use the default CFQ scheduler (although you should always use the NOOP scheduler for SSDs).

There are two ways for you to set the scheduler used by your disk devices:

  1. Write the name of the scheduler to a file in the /sys directory.

    --or--

  2. Use a kernel boot parameter.

Configure the I/O scheduler - changing the scheduler through the /sys directory

You can view and change the scheduler Linux uses for I/O requests to a single drive using a virtual file under the /sys directory. The name of the file that controls the scheduler a block device uses is:

/sys/block/deviceName/queue/scheduler

Where deviceName is the name of the disk device, such as sda or cciss!c0d1 (the first disk on an OpenText RAID array). Viewing the contents of this file shows you all of the possible settings for the scheduler. The currently selected scheduler is surrounded by square brackets:

# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

To change the scheduler, write the name of the scheduler you want the device to use to its scheduler file. You must have root privileges to write to this file. For example, to set the sda drive to use the deadline scheduler, run the following command as root:


# echo deadline > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

Changing the scheduler immediately affects the I/O requests for the device. The Linux kernel starts using the new scheduler for all of the drive's input and output requests.

Changes to the I/O scheduler made through the /sys directory only last until the system is rebooted, so you need to add the commands that change the I/O scheduler to a startup script (such as those stored in /etc/init.d, or though a command in /etc/rc.local). You also need to use a separate command for each drive on the system whose scheduler you want to change.

For example, the following commands make the configuration take effect immediately and add it to /etc/rc.local so that it is applied on subsequent reboots:

echo deadline > /sys/block/sda/queue/scheduler
echo 'echo deadline > /sys/block/sda/queue/scheduler' >> /etc/rc.local

You may prefer to use this method of setting the I/O scheduler over using a boot parameter if your system has a mix of solid-state and physical media drives, or has many drives that do not store Vertica catalog and data directories.

If you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:

$ chmod +x /etc/rc.d/rc.local

Configure the I/O scheduler - changing the scheduler with a boot parameter

Use the elevator kernel boot parameter to change the default scheduler used by all disks on your system. This is the best method to use if most or all of the drives on your hosts are of the same type (physical media or SSD) and will contain catalog or data files. You can also use the boot parameter to change the default to the scheduler the majority of the drives on the system need, then use the /sys files to change individual drives to another I/O scheduler. The format of the elevator boot parameter is:

elevator=schedulerName

Where schedulerName is deadline, noop, or cfq. You set the boot parameter using your bootloader (grub or grub2 on most recent Linux distributions). See your distribution's documentation for details on how to add a kernel boot parameter.
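
For example, on Red Hat-based systems that use grub2, you typically append the parameter to the GRUB_CMDLINE_LINUX line in /etc/default/grub and then regenerate the boot configuration (a sketch; file locations and the regeneration command vary by distribution, so verify against your distribution's documentation):

# In /etc/default/grub, append elevator=deadline to the existing options:
#   GRUB_CMDLINE_LINUX="... elevator=deadline"
# Then regenerate the grub configuration:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg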

6 - Enabling or disabling transparent hugepages

You can modify transparent hugepages to meet Vertica configuration requirements:

  • For Red Hat 7/CentOS 7 and Amazon Linux 2.0, you must enable transparent hugepages. The installer reports this issue with the identifier: S0312.

  • For Red Hat 8/CentOS 8 and SUSE 15.1, Vertica provides recommended settings to optimize your system performance by workload.

  • For all other systems, you must disable transparent hugepages or set them to madvise. The installer reports this issue with the identifier: S0310.

Vertica recommends transparent hugepages settings to optimize performance by workload. The following table contains recommendations for systems that primarily run concurrent queries (such as short-running dashboard queries), or sequential SELECT or load (COPY) queries:

Operating System          Concurrent    Sequential
Red Hat 8.0/CentOS 8.0    Disable       Enable
SUSE 15.1                 Disable       Enable

Additionally, Vertica recommends the following khugepaged settings to optimize for each workload:

Concurrent Workloads:
Disable khugepaged with the following command:

echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag

Sequential Workloads:
Enable khugepaged with the following command:

echo 1 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
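
As with the other settings in this section, these echo commands do not persist across reboots. To make your choice permanent, you can add the appropriate line to /etc/rc.local, for example (a sketch for a concurrent workload):

if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
    echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi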

See Enabling or disabling defrag for additional settings that optimize your system performance by workload.

Enabling transparent hugepages on Red Hat 7/8, CentOS 7/8, SUSE 15.1, and Amazon Linux 2.0

To determine whether transparent hugepages are enabled, run the following command:

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

The setting returned in brackets is your current setting.

For systems that do not support /etc/rc.local, use the equivalent startup script that runs after the destination runlevel has been reached. For example, SuSE uses /etc/init.d/after.local.

You can enable transparent hugepages by editing /etc/rc.local and adding the following script:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo always > /sys/kernel/mm/transparent_hugepage/enabled
fi

You must reboot your system for the setting to take effect, or, as root, run the following echo line to proceed with the install without rebooting:

# echo always > /sys/kernel/mm/transparent_hugepage/enabled

If you are using Red Hat 7.0 or CentOS 7.0 or higher, run the following command as root or sudo:

$ chmod +x /etc/rc.d/rc.local

Disabling transparent hugepages on other systems

To determine whether transparent hugepages are enabled, run the following command:

cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

The setting returned in brackets is your current setting. Depending on your platform OS, the madvise setting may not be displayed.

You can disable transparent hugepages one of two ways:

  • Edit your boot loader (for example /etc/grub.conf). Typically, you add the following to the end of the kernel line. However, consult the documentation for your system before editing your bootloader configuration.

    transparent_hugepage=never
    
  • Edit /etc/rc.local (on systems that support rc.local) and add the following script.

    if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
       echo never > /sys/kernel/mm/transparent_hugepage/enabled
    fi
    

For systems that do not support /etc/rc.local, use the equivalent startup script that runs after the destination runlevel has been reached. For example, SuSE uses /etc/init.d/after.local.

Regardless of which approach you choose, you must reboot your system for the setting to take effect, or run the following echo line as root to proceed with the install without rebooting:

echo never > /sys/kernel/mm/transparent_hugepage/enabled

7 - Check for swappiness

The swappiness kernel parameter defines how much, and how often, the kernel copies RAM contents to swap space. Vertica recommends a value of 0. The installer reports any swappiness issues with identifier S0112.

You can check the swappiness value by running the following command:

$ cat /proc/sys/vm/swappiness

To set the swappiness value, add or update the following line in /etc/sysctl.conf:

vm.swappiness = 0

This also ensures that the value persists after a reboot.
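
To apply the setting from /etc/sysctl.conf without rebooting, you can reload the file (assuming standard sysctl behavior):

$ sudo sysctl -p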

If necessary, you can change the swappiness value at runtime by logging in as root and running the following:

$ echo 0 > /proc/sys/vm/swappiness

8 - Enabling network time protocol (NTP)

Data damage and performance issues might occur if you change host NTP settings while the database is running.

The network time protocol (NTP) daemon must be running on all of the hosts in the cluster so that their clocks are synchronized. The spread daemon relies on all of the nodes to have their clocks synchronized for timing purposes. If your nodes do not have NTP running, the installation can fail with a spread configuration error or other errors.

Verify that NTP is running

To verify that your hosts are configured to run the NTP daemon on startup, enter the following command:

$ chkconfig --list ntpd

Debian and Ubuntu do not support chkconfig, but they do offer an optional package. You can install this package with the command sudo apt-get install sysv-rc-conf. To verify that your hosts are configured to run the NTP daemon on startup with the sysv-rc-conf utility, enter the following command:

$ sysv-rc-conf --list ntpd

The chkconfig command can produce an error similar to ntpd: unknown service. If you get this error, verify that your Linux distribution refers to the NTP daemon as ntpd rather than ntp. If it does not, you need to install the NTP daemon package before you can configure it. Consult your Linux documentation for instructions on how to locate and install packages.

If the NTP daemon is installed, your output should resemble the following:

ntp 0:off 1:off 2:on 3:on 4:off 5:on 6:off

The output indicates the runlevels where the daemon runs. Verify that the current runlevel of the system (usually 3 or 5) has the NTP daemon set to on. If you do not know the current runlevel, you can find it using the runlevel command:

$ runlevel
N 3

Configure NTP for Red Hat 6/CentOS 6 and SLES

If your system is based on Red Hat 6/CentOS 6 or SUSE Linux Enterprise Server, use the service and chkconfig utilities to start NTP and have it start at startup.

$ /sbin/service ntpd restart
$ /sbin/chkconfig ntpd on

  • Red Hat 6/CentOS 6—NTP uses the default time servers at ntp.org. You can change the default NTP servers by editing /etc/ntp.conf.

  • SLES—By default, no time servers are configured. You must edit /etc/ntp.conf after the install completes and add time servers.

Configure NTP for Ubuntu and Debian

By default, the NTP daemon is not installed on some Ubuntu and Debian systems. First install NTP, and then start the NTP process, as shown below. You can change the default NTP servers by editing /etc/ntp.conf.

$ sudo apt-get install ntp
$ sudo /etc/init.d/ntp reload

Verify that NTP is operating correctly

To verify that the Network Time Protocol Daemon (NTPD) is operating correctly, issue the following command on all nodes in the cluster.

For Red Hat 6/CentOS 6 and SLES:

$ /usr/sbin/ntpq -c rv | grep stratum

For Ubuntu and Debian:

$ ntpq -c rv | grep stratum

A stratum level of 16 indicates that NTP is not synchronizing correctly.

If a stratum level of 16 is detected, wait 15 minutes and issue the command again. It may take this long for the NTP server to stabilize.

If NTP continues to detect a stratum level of 16, verify that the NTP port (UDP Port 123) is open on all firewalls between the cluster and the remote machine to which you are attempting to synchronize.


9 - Enabling chrony or ntpd for Red Hat 7/CentOS 7 systems

Before you can install Vertica, you must enable one of the following on your system for clock synchronization:

  • chrony

  • NTPD

You must enable and activate the Network Time Protocol (NTP) before installation. Otherwise, the installer reports this issue with the identifier S0030.

For information on installing and using chrony, see the information below. For information on NTPD see Enabling network time protocol (NTP). For more information about chrony, see Using chrony in the Red Hat documentation.

Install chrony

The chrony suite consists of:

  • chronyd - the daemon for clock synchronization.

  • chronyc - the command-line utility for configuring chronyd.

chrony is installed by default on some versions of Red Hat/CentOS 7. However, if chrony is not installed on your system, you must install it. To do so, run the following command as sudo or root:

# yum install chrony

Verify that chrony is running

To view the status of the chronyd daemon, run the following command:

$ systemctl status chronyd

If chrony is running, output similar to the following appears:

chronyd.service - NTP client/server
    Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
    Active: active (running) since Mon 2015-07-06 16:29:54 EDT; 15s ago
Main PID: 2530 (chronyd)
    CGroup: /system.slice/chronyd.service
            └─2530 /usr/sbin/chronyd -u chrony

If chrony is not running, execute the following commands as sudo or root. The start command runs chrony immediately, and the enable command causes chrony to run at boot time:

# systemctl start chronyd
# systemctl enable chronyd

Verify that chrony is operating correctly

To verify that the chrony daemon is operating correctly, issue the following command on all nodes in the cluster:

$ chronyc tracking

Output similar to the following appears:

Reference ID    : 198.247.63.98 (time01.website.org)
Stratum         : 3
Ref time (UTC)  : Thu Jul  9 14:58:01 2015
System time     : 0.000035685 seconds slow of NTP time
Last offset     : -0.000151098 seconds
RMS offset      : 0.000279871 seconds
Frequency       : 2.085 ppm slow
Residual freq   : -0.013 ppm
Skew            : 0.185 ppm
Root delay      : 0.042370 seconds
Root dispersion : 0.022658 seconds
Update interval : 1031.0 seconds
Leap status     : Normal

A stratum level of 16 indicates that chrony is not synchronizing correctly. If chrony continues to detect a stratum level of 16, verify that the UDP port 323 is open. This port must be open on all firewalls between the cluster and the remote machine to which you are attempting to synchronize.
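
You can also list the time sources that chronyd is using, which helps diagnose synchronization problems (chronyc sources is part of the standard chrony suite):

$ chronyc sources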

10 - SELinux configuration

Vertica does not support SELinux except when SELinux is running in permissive mode. If the installer detects that SELinux is installed and the mode cannot be determined, it reports this issue with the identifier: S0080. If the mode can be determined and is not permissive, it reports the issue with the identifier: S0081.

Red hat and SUSE systems

You can either disable SELinux or change it to use permissive mode.

To disable SELinux:

  1. Edit /etc/selinux/config and change the SELINUX setting to disabled (SELINUX=disabled). This disables SELinux at boot time.

  2. As root/sudo, type setenforce 0 to disable SELinux immediately.

To change SELinux to use permissive mode:

  1. Edit /etc/selinux/config and change the SELINUX setting to permissive (SELINUX=permissive).

  2. As root/sudo, type setenforce Permissive to switch to permissive mode immediately.
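
In either case, you can confirm the active SELinux mode with the getenforce utility, which prints Enforcing, Permissive, or Disabled:

$ getenforce
Permissive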

Ubuntu and debian systems

You can either disable SELinux or change it to use permissive mode.

To disable SELinux:

  1. Edit /selinux/config and change the SELINUX setting to disabled (SELINUX=disabled). This disables SELinux at boot time.

  2. As root/sudo, type setenforce 0 to disable SELinux immediately.

To change SELinux to use permissive mode:

  1. Edit /selinux/config and change the SELINUX setting to permissive (SELINUX=permissive).

  2. As root/sudo, type setenforce Permissive to switch to permissive mode immediately.

11 - CPU frequency scaling

This topic details the various CPU frequency scaling methods supported by Vertica. In general, if you do not require CPU frequency scaling, disable it so that it does not impact system performance.

The installer allows CPU frequency scaling to be enabled when the cpufreq scaling governor is set to performance. If the governor is set to ondemand and ignore_nice_load is 1 (true), the installer fails with the error S0140. If the governor is set to ondemand and ignore_nice_load is 0 (false), the installer warns with the identifier S0141.

CPU frequency scaling is a hardware and software feature that helps computers conserve energy by slowing the processor when the system load is low, and speeding it up again when the system load increases. This feature can impact system performance, since raising the CPU frequency in response to higher system load does not occur instantly. Always disable this feature on the Vertica database hosts to prevent it from interfering with performance.

You disable CPU scaling in your host's system BIOS. There may be multiple settings in your host's BIOS that you need to adjust in order to completely disable CPU frequency scaling. Consult your host hardware's documentation for details on entering the system BIOS and disabling CPU frequency scaling.

If you cannot disable CPU scaling through the system BIOS, you can limit the impact of CPU scaling by disabling the scaling through the Linux kernel or setting the CPU frequency governor to always run the CPU at full speed.

The method you use to disable frequency depends on the CPU scaling method being used in the Linux kernel. See your Linux distribution's documentation for instructions on disabling scaling in the kernel or changing the CPU governor.
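
For example, on kernels that expose the cpufreq interface through /sys, you can pin every CPU's governor to performance with a command like the following (a sketch, run as root; the exact path and available governors depend on your kernel and cpufreq driver):

for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > $f
done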

12 - Enabling or disabling defrag

You can modify the defrag utility to meet Vertica configuration requirements, or to optimize your system performance by workload.

On all Red Hat/CentOS systems, you must disable the defrag utility to meet Vertica configuration requirements.

For SUSE 15.1, Vertica recommends that you enable defrag for optimized performance.

Vertica recommends defrag settings to optimize performance by workload. The following table contains recommendations for systems that primarily run concurrent queries (such as short-running dashboard queries), or sequential SELECT or load (COPY) queries:

Operating System          Concurrent    Sequential
Red Hat 8.0/CentOS 8.0    Disable       Disable
SUSE 15.1                 Enable        Enable

See Enabling or disabling transparent hugepages for additional settings that optimize your system performance by workload.

Disabling defrag on Red Hat 6/CentOS 6 systems

  1. Determine if defrag is enabled by running the following command:

    cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
    [always] madvise never
    

    The setting returned in brackets is your current setting. If you are not using madvise or never as your defrag setting, then you must disable defrag.

  2. Edit /etc/rc.local, and add the following script:

    if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then
        echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
    fi
    

    You must reboot your system for the setting to take effect, or run the following echo line to proceed with the install without rebooting:

    # echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
    

Disabling defrag on Red Hat 7/CentOS 7, Red Hat 8/CentOS 8, and SUSE 15.1

  1. Determine if defrag is enabled by running the following command:

    cat /sys/kernel/mm/transparent_hugepage/defrag
    [always] madvise never
    

    The setting returned in brackets is your current setting. If you are not using madvise or never as your defrag setting, then you must disable defrag.

  2. Edit /etc/rc.local, and add the following script:

    if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
        echo never > /sys/kernel/mm/transparent_hugepage/defrag
    fi
    

    You must reboot your system for the setting to take effect, or run the following echo line to proceed with the install without rebooting:

    # echo never > /sys/kernel/mm/transparent_hugepage/defrag
    
  3. If you are using Red Hat 7.0/CentOS 7.0 or Red Hat 8.0/CentOS 8.0, run the following command as root or sudo:

    $ chmod +x /etc/rc.d/rc.local
    

Enabling defrag on Red Hat 7/8, CentOS 7/8, and SUSE 15.1

  1. Determine if defrag is enabled by running the following command:

    cat /sys/kernel/mm/transparent_hugepage/defrag
    always madvise [never]
    

    The setting returned in brackets is your current setting. If you are not using madvise or always as your defrag setting, then you must enable defrag.

  2. Edit /etc/rc.local, and add the following script:

    if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
        echo always > /sys/kernel/mm/transparent_hugepage/defrag
    fi
    

    You must reboot your system for the setting to take effect, or run the following echo line to proceed with the install without rebooting:

    # echo always > /sys/kernel/mm/transparent_hugepage/defrag
    
  3. If you are using Red Hat 7.0/CentOS 7.0 or Red Hat 8.0/CentOS 8.0, run the following command as root or sudo:

    $ chmod +x /etc/rc.d/rc.local
    

13 - Support tools

Vertica suggests that you install the following tools so that support can assist in troubleshooting your system if any issues arise:

  • pstack (or gstack) package. Identified by issue S0040 when not installed.

    • On Red Hat 7 and CentOS 7 systems, the pstack package is installed as part of the gdb package.
  • mcelog package. Identified by issue S0041 when not installed.

  • sysstat package. Identified by issue S0045 when not installed.

Red Hat 6 and CentOS 6 systems

To install the required tools on Red Hat 6 and CentOS 6 systems, run the following commands as sudo or root:

yum install pstack
yum install mcelog
yum install sysstat

Red Hat 7 and CentOS 7 systems

To install the required tools on Red Hat 7/CentOS 7 systems, run the following commands as sudo or root:

yum install gdb
yum install mcelog
yum install sysstat

Ubuntu and Debian systems

To install the required tools on Ubuntu and Debian systems, run the following commands as sudo or root:

apt-get install pstack
apt-get install mcelog
apt-get install sysstat

SuSE systems

To install the required tools on SuSE systems, run the following commands as sudo or root:

zypper install sysstat
zypper install mcelog

There is no individual SuSE package for pstack/gstack. However, the gdb package contains gstack, so you could optionally install gdb instead, or build pstack/gstack from source. To install the gdb package:

zypper install gdb
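
Once these tools are installed, support can use them to collect diagnostics. For example, gstack prints a stack trace of a running process given its PID (the PID below is illustrative):

$ gstack 12345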