Managing subclusters

Subclusters help you organize the nodes in your cluster to isolate workloads and make elastic scaling easier. For an overview of how subclusters can help you, see Subclusters.

1 - Creating subclusters

By default, a new Eon Mode database contains a single primary subcluster named default_subcluster. This subcluster contains all of the nodes that were part of the database when you created it. You will often want to create subclusters to separate and manage workloads. There are three ways to create a new subcluster in your database:

Create a subcluster using admintools

To create a new subcluster, use the admintools db_add_subcluster tool:

$ admintools -t db_add_subcluster --help
Usage: db_add_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database to be modified
  -s HOSTS, --hosts=HOSTS
                        Comma separated list of hosts to add to the subcluster
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -c SCNAME, --subcluster=SCNAME
                        Name of the new subcluster for the new node
  --is-primary          Create primary subcluster
  --is-secondary        Create secondary subcluster
  --control-set-size=CONTROLSETSIZE
                        Set the number of nodes that will run spread within
                        the subcluster
  --like=CLONESUBCLUSTER
                        Name of an existing subcluster from which to clone
                        properties for the new subcluster
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete ('never') will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.

In its simplest form, this command adds an empty subcluster. It requires the database name, the password, and a name for the new subcluster. This example adds a subcluster named analytics_cluster to a database named verticadb:

$ adminTools -t db_add_subcluster -d verticadb -p 'password' -c analytics_cluster
Creating new subcluster 'analytics_cluster'
Subcluster added to verticadb successfully.

By default, admintools creates the new subcluster as a secondary subcluster. You can have it create a primary subcluster instead by supplying the --is-primary argument.
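
For example, a minimal sketch (the subcluster name analytics_primary is a placeholder) that creates a primary subcluster by passing --is-primary:

$ admintools -t db_add_subcluster -d verticadb -p 'password' \
             -c analytics_primary --is-primary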

Adding nodes while creating a subcluster

You can also specify one or more hosts for admintools to add to the subcluster as new nodes. These hosts must be part of the cluster but not yet part of the database. For example, you can use hosts you added to the cluster via MC or admintools, or hosts that remain part of the cluster after you removed nodes from the database. This example creates a subcluster named analytics_cluster and uses the -s option to specify the available hosts in the cluster:

$ adminTools -t db_add_subcluster -c analytics_cluster -d verticadb -p 'password' -s 10.0.33.77,10.0.33.181,10.0.33.85

Use the following query, which joins the V_CATALOG.NODES and V_CATALOG.NODE_SUBSCRIPTIONS system tables, to view the subscription status of all nodes in your database:

=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

subcluster_name    |      node_name       | shard_name  | subscription_state
-------------------+----------------------+-------------+--------------------
analytics_cluster  | v_verticadb_node0004 | replica     | ACTIVE
analytics_cluster  | v_verticadb_node0004 | segment0001 | ACTIVE
analytics_cluster  | v_verticadb_node0004 | segment0003 | ACTIVE
analytics_cluster  | v_verticadb_node0005 | replica     | ACTIVE
analytics_cluster  | v_verticadb_node0005 | segment0001 | ACTIVE
analytics_cluster  | v_verticadb_node0005 | segment0002 | ACTIVE
analytics_cluster  | v_verticadb_node0006 | replica     | ACTIVE
analytics_cluster  | v_verticadb_node0006 | segment0002 | ACTIVE
analytics_cluster  | v_verticadb_node0006 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0001 | replica     | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0001 | segment0003 | ACTIVE
default_subcluster | v_verticadb_node0002 | replica     | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0001 | ACTIVE
default_subcluster | v_verticadb_node0002 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | replica     | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0002 | ACTIVE
default_subcluster | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)

If you do not include hosts when you create the subcluster, you must manually rebalance the shards in the subcluster when you add nodes later. See Updating shard subscriptions after adding nodes for more information.

Subclusters and large cluster

Vertica has a feature named large cluster that helps manage broadcast messages as the database cluster grows. It has several impacts on adding new subclusters:

  • If you create a new subcluster with 16 or more nodes, Vertica automatically enables the large cluster feature. It sets the number of control nodes to the square root of the number of nodes in your subcluster. See Planning a large cluster.

  • You can set the number of control nodes in a subcluster by using the --control-set-size option on the admintools command line (see the sketch after this list).

  • If the database cluster has 120 control nodes, Vertica returns an error if you try to add a new subcluster. Every subcluster must have at least one control node, and your database cannot have more than 120 control nodes. When your database reaches this limit, you must reduce the number of control nodes in other subclusters before you can add a new subcluster. See Changing the number of control nodes and realigning for more information.

  • If you attempt to create a subcluster whose control nodes would exceed the 120 control node limit, Vertica warns you and creates the subcluster with fewer control nodes. It adds as many control nodes as it can, namely 120 minus the current number of control nodes in the cluster. For example, suppose you create a 16-node subcluster in a database cluster that already has 118 control nodes. In this case, Vertica warns you and creates your subcluster with just 2 control nodes rather than the default 4.
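
For instance, a minimal sketch (the hosts and names reuse the earlier example and are placeholders) that pins a new subcluster to two control nodes using --control-set-size:

$ admintools -t db_add_subcluster -d verticadb -p 'password' \
             -c analytics_cluster -s 10.0.33.77,10.0.33.181,10.0.33.85 \
             --control-set-size=2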

See Large cluster for more information about the large cluster feature.

2 - Duplicating a subcluster

Subclusters have many settings you can tune to make them work just the way you want. After you have tuned a subcluster, you may want additional subclusters that are configured the same way. For example, suppose you have a subcluster you have tuned to perform analytics workloads. To improve query throughput, you can create several more subclusters configured exactly like it. Instead of creating the new subclusters and then manually configuring them from scratch, you can duplicate an existing subcluster (called the source subcluster) to a new subcluster (the target subcluster).

When you create a new subcluster based on another subcluster, Vertica copies most of the source subcluster's settings. See below for lists of the settings that Vertica copies. These settings exist at both the node level and the subcluster level.

Requirements for the target subcluster

You must have a set of hosts in your database cluster to use as the target of the subcluster duplication. Vertica forms these hosts into a target subcluster that receives most of the source subcluster's settings. The hosts for the target subcluster must meet the following requirements:

  • They must be part of the database cluster, but not part of the database. For example, you can use hosts you have removed from a subcluster, or hosts whose subcluster you have removed. Vertica returns an error if you attempt to duplicate a subcluster onto one or more nodes that currently participate in the database.

  • The number of nodes you supply for the target subcluster must equal the number of nodes in the source subcluster. When duplicating the subcluster, Vertica copies some node-level settings from each node in the source subcluster unchanged to the corresponding node in the target.

  • The hosts in the target subcluster should have at least the same RAM and disk allocation as the source nodes. Technically, your target nodes can have less RAM or disk space than the source nodes. However, you will usually see performance issues in the new subcluster because the original subcluster's settings will not be tuned for the target subcluster's resources.

You can duplicate a subcluster even when some of the nodes in the source subcluster or hosts in the target are down. If nodes in the target are down, they use the catalog that Vertica copied from the source node when they recover.

Duplicating subcluster-level settings

The following table lists the subcluster-level settings that Vertica copies from the source subcluster to the target:

Vertica does not copy the following subcluster settings:

Duplicating node-level settings

When Vertica duplicates a subcluster, it maps each node in the source subcluster to a node in the target subcluster. Then it copies the relevant node-level settings from each individual source node to the corresponding target node.

For example, suppose you have a three-node subcluster consisting of nodes named node01, node02, and node03, and the target subcluster has nodes named node04, node05, and node06. In this case, Vertica copies the settings from node01 to node04, from node02 to node05, and from node03 to node06.

The node-level settings that Vertica copies from the source nodes to the target nodes include:

Vertica does not copy the following node-level settings:

Using admintools to duplicate a subcluster

To duplicate a subcluster, use the same admintools db_add_subcluster tool that you use to create a new subcluster (see Creating subclusters). In addition to the options required to create a subcluster (the list of hosts, the name for the new subcluster, the database name, and so on), you also pass the --like option with the name of the source subcluster you want to duplicate.

The following examples demonstrate duplicating a three-node subcluster named analytics_1. The first example examines some of the settings in the analytics_1 subcluster:

  • An override of the global TM resource pool's memory size.

  • Its own resource pool, named analytics_pool.

  • Its membership in a subcluster-based load balancing group named analytics.

=> SELECT name, subcluster_name, memorysize FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;
 name | subcluster_name | memorysize
------+-----------------+------------
 tm   | analytics_1     | 0%
(1 row)

=> SELECT name, subcluster_name, memorysize, plannedconcurrency
      FROM resource_pools WHERE subcluster_name IS NOT NULL;
      name      | subcluster_name | memorysize | plannedconcurrency
----------------+-----------------+------------+--------------------
 analytics_pool | analytics_1     | 70%        | 8
(1 row)

=> SELECT * FROM LOAD_BALANCE_GROUPS;
   name    |   policy   |  filter   |    type    | object_name
-----------+------------+-----------+------------+-------------
 analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_1
(1 row)

The following example calls the admintools db_add_subcluster tool to duplicate the analytics_1 subcluster onto a group of three hosts, creating a subcluster named analytics_2:

$ admintools -t db_add_subcluster -d verticadb \
             -s 10.11.12.13,10.11.12.14,10.11.12.15 \
             -p mypassword --like=analytics_1 -c analytics_2

Creating new subcluster 'analytics_2'
Adding new hosts to 'analytics_2'
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
 Warning when creating depot location for node: v_verticadb_node0007
 WARNING: Target node v_verticadb_node0007 is down, so depot size has been
          estimated from depot location on initiator. As soon as the node comes
          up, its depot size might be altered depending on its disk size
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
 Warning when creating depot location for node: v_verticadb_node0008
 WARNING: Target node v_verticadb_node0008 is down, so depot size has been
          estimated from depot location on initiator. As soon as the node comes
          up, its depot size might be altered depending on its disk size
Eon database detected, creating new depot locations for newly added nodes
Creating depot locations for 1 nodes
 Warning when creating depot location for node: v_verticadb_node0009
 WARNING: Target node v_verticadb_node0009 is down, so depot size has been
          estimated from depot location on initiator. As soon as the node comes
          up, its depot size might be altered depending on its disk size
Cloning subcluster properties
NOTICE: Nodes in subcluster analytics_1 have network addresses, you
might need to configure network addresses for nodes in subcluster
analytics_2 in order to get load balance groups to work correctly.

    Replicating configuration to all nodes
    Generating new configuration information and reloading spread
    Starting nodes:
        v_verticadb_node0007 (10.11.12.81)
        v_verticadb_node0008 (10.11.12.209)
        v_verticadb_node0009 (10.11.12.186)
    Starting Vertica on all nodes. Please wait, databases with a large catalog
         may take a while to initialize.
    Checking database state for newly added nodes
    Node Status: v_verticadb_node0007: (DOWN) v_verticadb_node0008:
                 (DOWN) v_verticadb_node0009: (DOWN)
    Node Status: v_verticadb_node0007: (INITIALIZING) v_verticadb_node0008:
                 (INITIALIZING) v_verticadb_node0009: (INITIALIZING)
    Node Status: v_verticadb_node0007: (UP) v_verticadb_node0008:
                 (UP) v_verticadb_node0009: (UP)
Syncing catalog on verticadb with 2000 attempts.
    Multi-node DB add completed
Nodes added to subcluster analytics_2 successfully.
Subcluster added to verticadb successfully.

Rerunning the queries from the first part of the example shows that the settings in analytics_1 have been duplicated in analytics_2:

=> SELECT name, subcluster_name, memorysize FROM SUBCLUSTER_RESOURCE_POOL_OVERRIDES;
 name | subcluster_name | memorysize
------+-----------------+------------
 tm   | analytics_1     | 0%
 tm   | analytics_2     | 0%
(2 rows)

=> SELECT name, subcluster_name, memorysize, plannedconcurrency
       FROM resource_pools WHERE subcluster_name IS NOT NULL;
      name      | subcluster_name | memorysize |  plannedconcurrency
----------------+-----------------+------------+--------------------
 analytics_pool | analytics_1     | 70%        | 8
 analytics_pool | analytics_2     | 70%        | 8
(2 rows)

=> SELECT * FROM LOAD_BALANCE_GROUPS;
   name    |   policy   |  filter   |    type    | object_name
-----------+------------+-----------+------------+-------------
 analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_2
 analytics | ROUNDROBIN | 0.0.0.0/0 | Subcluster | analytics_1
(2 rows)

As noted earlier, even though the analytics_2 subcluster is part of the analytics load balancing group, its nodes do not have network addresses defined for them. Vertica cannot redirect client connections to these nodes until you define network addresses for them.
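
A sketch of defining those addresses, reusing the CREATE NETWORK ADDRESS statement demonstrated later in this document with the IP addresses from the output above (the address names node07 through node09 are arbitrary labels):

=> CREATE NETWORK ADDRESS node07 ON v_verticadb_node0007 WITH '10.11.12.81';
=> CREATE NETWORK ADDRESS node08 ON v_verticadb_node0008 WITH '10.11.12.209';
=> CREATE NETWORK ADDRESS node09 ON v_verticadb_node0009 WITH '10.11.12.186';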

3 - Adding and removing nodes in subclusters

You will often want to add new nodes to and remove existing nodes from a subcluster. This ability lets you scale your database in response to changing analytic needs. For details on how adding nodes to a subcluster affects your database's performance, see Scaling your Eon Mode database.

Adding new nodes to a subcluster

You can add nodes to a subcluster to meet additional workloads. The nodes that you add to a subcluster must already be part of your cluster. They can be hosts you have added to the cluster using MC or admintools, or hosts left in the cluster after you removed nodes from the database.

To add new nodes to a subcluster, use admintools's db_add_node command:

$ adminTools -t db_add_node -h
Usage: db_add_node [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of the database
  -s HOSTS, --hosts=HOSTS
                        Comma separated list of hosts to add to database
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -a AHOSTS, --add=AHOSTS
                        Comma separated list of hosts to add to database
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster for the new node
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete ('never') will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
  --compat21            (deprecated) Use Vertica 2.1 method using node names
                        instead of hostnames

If you do not use the -c option, Vertica adds the new nodes to the default subcluster (which is set to default_subcluster in new databases). This example adds a new node without specifying the subcluster:

$ adminTools -t db_add_node -p 'password' -d verticadb -s 10.11.12.117
Subcluster not specified, validating default subcluster
Nodes will be added to subcluster 'default_subcluster'
                Verifying database connectivity...10.11.12.10
Eon database detected, creating new depot locations for newly added nodes
Creating depots for each node
        Generating new configuration information and reloading spread
        Replicating configuration to all nodes
        Starting nodes
        Starting nodes:
                v_verticadb_node0004 (10.11.12.117)
        Starting Vertica on all nodes. Please wait, databases with a
            large catalog may take a while to initialize.
        Checking database state
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (DOWN)
        Node Status: v_verticadb_node0004: (UP)
Communal storage detected: syncing catalog

        Multi-node DB add completed
Nodes added to verticadb successfully.
You will need to redesign your schema to take advantage of the new nodes.

To add nodes to a specific existing subcluster, use the db_add_node tool's -c option:

$ adminTools -t db_add_node -s 10.11.12.178 -d verticadb -p 'password' \
             -c analytics_subcluster
Subcluster 'analytics_subcluster' specified, validating
Nodes will be added to subcluster 'analytics_subcluster'
                Verifying database connectivity...10.11.12.10
Eon database detected, creating new depot locations for newly added nodes
Creating depots for each node
        Generating new configuration information and reloading spread
        Replicating configuration to all nodes
        Starting nodes
        Starting nodes:
                v_verticadb_node0007 (10.11.12.178)
        Starting Vertica on all nodes. Please wait, databases with a
              large catalog may take a while to initialize.
        Checking database state
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (DOWN)
        Node Status: v_verticadb_node0007: (UP)
Communal storage detected: syncing catalog

        Multi-node DB add completed
Nodes added to verticadb successfully.
You will need to redesign your schema to take advantage of the new nodes.

Updating shard subscriptions after adding nodes

After you add nodes to a subcluster, they do not yet subscribe to shards. You can see the subscription status of all of the nodes in your database using the following query, which joins the V_CATALOG.NODES and V_CATALOG.NODE_SUBSCRIPTIONS system tables:

=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

   subcluster_name    |      node_name       | shard_name  | subscription_state
----------------------+----------------------+-------------+--------------------
 analytics_subcluster | v_verticadb_node0004 |             |
 analytics_subcluster | v_verticadb_node0005 |             |
 analytics_subcluster | v_verticadb_node0006 |             |
 default_subcluster   | v_verticadb_node0001 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0003 | ACTIVE
(12 rows)

You can see that none of the nodes in the newly-added analytics_subcluster have subscriptions.

To update the subscriptions for the new nodes, call the REBALANCE_SHARDS function. You can limit the rebalance to the subcluster that contains the new nodes by passing its name to the REBALANCE_SHARDS function call. The following example rebalances shards to update the subscriptions for analytics_subcluster:

=> SELECT REBALANCE_SHARDS('analytics_subcluster');
 REBALANCE_SHARDS
-------------------
 REBALANCED SHARDS
(1 row)

=> SELECT subcluster_name, n.node_name, shard_name, subscription_state FROM
   v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns ON (n.node_name
   = ns.node_name) ORDER BY 1,2,3;

   subcluster_name    |      node_name       | shard_name  | subscription_state
----------------------+----------------------+-------------+--------------------
 analytics_subcluster | v_verticadb_node0004 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0004 | segment0001 | ACTIVE
 analytics_subcluster | v_verticadb_node0004 | segment0003 | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | segment0001 | ACTIVE
 analytics_subcluster | v_verticadb_node0005 | segment0002 | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | replica     | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | segment0002 | ACTIVE
 analytics_subcluster | v_verticadb_node0006 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0001 | segment0003 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0001 | ACTIVE
 default_subcluster   | v_verticadb_node0002 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | replica     | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0002 | ACTIVE
 default_subcluster   | v_verticadb_node0003 | segment0003 | ACTIVE
(18 rows)

Removing nodes

Your database must meet the following requirements before you can remove a node from a subcluster:

  • To remove a node from a primary subcluster, all of the primary nodes in the subcluster must be up, and the database must be able to maintain quorum after the primary node is removed (see Data integrity and high availability in an Eon Mode database). These requirements are necessary because Vertica calls REBALANCE_SHARDS to redistribute shard subscriptions among the remaining nodes in the subcluster. If you attempt to remove a primary node when the database does not meet these requirements, the rebalance shards process waits until either the down nodes recover or a timeout elapses. While it waits, you periodically see the message "Rebalance shards polling iteration number [nn]", indicating that the rebalance process is waiting to complete.

    You can remove nodes from a secondary subcluster even when nodes in the subcluster are down.

  • When your database has the large cluster feature enabled, you cannot remove a node if it is the subcluster's last control node and there are nodes that depend on it. See Large cluster for more information.

    If there are other control nodes in the subcluster, you can remove a control node. Vertica reassigns the nodes that depend on the removed node to other control nodes.

To remove one or more nodes, use admintools's db_remove_node tool:

$ adminTools -t db_remove_node -p 'password' -d verticadb -s 10.11.12.117
connecting to 10.11.12.10
Waiting for rebalance shards. We will wait for at most 36000 seconds.
Rebalance shards polling iteration number [0], started at [14:56:41], time out at [00:56:41]
Attempting to drop node v_verticadb_node0004 ( 10.11.12.117 )
        Shutting down node v_verticadb_node0004
        Sending node shutdown command to '['v_verticadb_node0004', '10.11.12.117', '/vertica/data', '/vertica/data']'
        Deleting catalog and data directories
        Update admintools metadata for v_verticadb_node0004
        Eon mode detected. The node v_verticadb_node0004 has been removed from host 10.11.12.117. To remove the
        node metadata completely, please clean up the files corresponding to this node, at the communal
        location: s3://eonbucket/metadata/verticadb/nodes/v_verticadb_node0004
        Reload spread configuration
        Replicating configuration to all nodes
        Checking database state
        Node Status: v_verticadb_node0001: (UP) v_verticadb_node0002: (UP) v_verticadb_node0003: (UP)
Communal storage detected: syncing catalog

When you remove one or more nodes from a subcluster, Vertica automatically rebalances shards in the subcluster. You do not need to manually rebalance shards after removing nodes.

Moving nodes between subclusters

To move a node from one subcluster to another (a sketch follows the steps):

  1. Remove the node or nodes from the subcluster they currently belong to.

  2. Add the node to the subcluster you want to move it to.
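
For example, a minimal sketch (database name, password, and IP address are placeholders) that moves a node into a subcluster named analytics_cluster, using the db_remove_node and db_add_node tools shown above:

$ admintools -t db_remove_node -d verticadb -p 'password' -s 10.11.12.117
$ admintools -t db_add_node -d verticadb -p 'password' -s 10.11.12.117 \
             -c analytics_cluster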

4 - Managing workloads with subclusters

By default, queries are limited to executing on the nodes in the subcluster that contains the initiator node (the node the client is connected to). This example demonstrates the explain plan for a query when connected to node 4 of the cluster. Node 4 is part of a subcluster containing nodes 4 through 6. You can see that only the nodes in that subcluster will participate in the query:

=> EXPLAIN SELECT customer_name, customer_state FROM customer_dimension LIMIT 10;

                                   QUERY PLAN
--------------------------------------------------------------------------------

 QUERY PLAN DESCRIPTION:
 ------------------------------

 EXPLAIN SELECT customer_name, customer_state FROM customer_dimension LIMIT 10;

 Access Path:
 +-SELECT  LIMIT 10 [Cost: 442, Rows: 10 (NO STATISTICS)] (PATH ID: 0)
 |  Output Only: 10 tuples
 |  Execute on: Query Initiator
 | +---> STORAGE ACCESS for customer_dimension [Cost: 442, Rows: 10K (NO
           STATISTICS)] (PATH ID: 1)
 | |      Projection: public.customer_dimension_b0
 | |      Materialize: customer_dimension.customer_name,
            customer_dimension.customer_state
 | |      Output Only: 10 tuples
 | |      Execute on: v_verticadb_node0004, v_verticadb_node0005,
                      v_verticadb_node0006
     .   .   .

In Eon Mode, you can override the MEMORYSIZE, MAXMEMORYSIZE, and MAXQUERYMEMORYSIZE settings for built-in global resource pools to fine-tune workloads within a subcluster. See Managing workload resources in an Eon Mode database for more information.
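
For example, a sketch of overriding the TM pool's memory size for a single subcluster, assuming the FOR SUBCLUSTER clause of ALTER RESOURCE POOL (an override like this produces the SUBCLUSTER_RESOURCE_POOL_OVERRIDES rows shown earlier):

=> ALTER RESOURCE POOL tm FOR SUBCLUSTER analytics_cluster MEMORYSIZE '0%';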

What happens when a subcluster cannot run a query

In order to process queries, each subcluster's nodes must have full coverage of all shards in the database. If the nodes do not have full coverage (which can happen when nodes are down), the subcluster can no longer process queries. This state does not cause the subcluster to shut down. However, if you attempt to run a query on a subcluster in this state, you receive an error message telling you that not enough nodes are available to complete the query.

=> SELECT node_name, node_state FROM nodes
   WHERE subcluster_name = 'analytics_cluster';
      node_name       | node_state
----------------------+------------
 v_verticadb_node0004 | DOWN
 v_verticadb_node0005 | UP
 v_verticadb_node0006 | DOWN
(3 rows)

=> SELECT * FROM online_sales.online_sales_fact;
ERROR 9099:  Cannot find participating nodes to run the query

Once the down nodes have recovered and the subcluster has 100% shard coverage, it will be able to process queries again.
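
To check coverage, you can reuse the subscription query shown earlier in this document, filtered to the affected subcluster (a sketch using this example's subcluster name):

=> SELECT n.node_name, shard_name, subscription_state
   FROM v_catalog.nodes n LEFT JOIN v_catalog.node_subscriptions ns
   ON (n.node_name = ns.node_name)
   WHERE subcluster_name = 'analytics_cluster' ORDER BY 1,2;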

Controlling where a query runs

You can control where specific types of queries run by controlling which subcluster your clients connect to. The best way to enforce these restrictions is to create a set of connection load balancing policies that steer clients from specific IP address ranges to hosts in the correct subcluster.

For example, suppose you have the following database with two subclusters: one subcluster for performing data loading, and one for performing analytics.

The data load tasks come from a set of ETL systems in the IP address range 10.20.0.0/16. Analytics tasks can come from any other IP address. In this case, you can create a set of connection load balancing policies that ensure that the ETL systems connect to the data load subcluster while all other connections go to the analytics subcluster.

=> SELECT node_name,node_address,node_address_family,subcluster_name
   FROM v_catalog.nodes;
      node_name       | node_address | node_address_family |  subcluster_name
----------------------+--------------+---------------------+--------------------
 v_verticadb_node0001 | 10.11.12.10  | ipv4                | load_subcluster
 v_verticadb_node0002 | 10.11.12.20  | ipv4                | load_subcluster
 v_verticadb_node0003 | 10.11.12.30  | ipv4                | load_subcluster
 v_verticadb_node0004 | 10.11.12.40  | ipv4                | analytics_subcluster
 v_verticadb_node0005 | 10.11.12.50  | ipv4                | analytics_subcluster
 v_verticadb_node0006 | 10.11.12.60  | ipv4                | analytics_subcluster
(6 rows)

=> CREATE NETWORK ADDRESS node01 ON v_verticadb_node0001 WITH '10.11.12.10';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node02 ON v_verticadb_node0002 WITH '10.11.12.20';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node03 ON v_verticadb_node0003 WITH '10.11.12.30';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node04 ON v_verticadb_node0004 WITH '10.11.12.40';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node05 ON v_verticadb_node0005 WITH '10.11.12.50';
CREATE NETWORK ADDRESS
=> CREATE NETWORK ADDRESS node06 ON v_verticadb_node0006 WITH '10.11.12.60';
CREATE NETWORK ADDRESS

=> CREATE LOAD BALANCE GROUP load_subcluster WITH SUBCLUSTER load_subcluster
   FILTER '0.0.0.0/0';
CREATE LOAD BALANCE GROUP
=> CREATE LOAD BALANCE GROUP analytics_subcluster WITH SUBCLUSTER
   analytics_subcluster FILTER '0.0.0.0/0';
CREATE LOAD BALANCE GROUP

=> CREATE ROUTING RULE etl_systems ROUTE '10.20.0.0/16' TO load_subcluster;
CREATE ROUTING RULE
=> CREATE ROUTING RULE analytic_clients ROUTE '0.0.0.0/0' TO analytics_subcluster;
CREATE ROUTING RULE

After you create the load balancing policies, you can test them using the DESCRIBE_LOAD_BALANCE_DECISION function.

=> SELECT describe_load_balance_decision('192.168.1.1');

               describe_load_balance_decision
           --------------------------------
 Describing load balance decision for address [192.168.1.1]
Load balance cache internal version id (node-local): [1]
Considered rule [etl_systems] source ip filter [10.20.0.0/16]...
   input address does not match source ip filter for this rule.
Considered rule [analytic_clients] source ip filter [0.0.0.0/0]...
   input address matches this rule
Matched to load balance group [analytics_cluster] the group has
   policy [ROUNDROBIN] number of addresses [3]
(0) LB Address: [10.11.12.181]:5433
(1) LB Address: [10.11.12.205]:5433
(2) LB Address: [10.11.12.192]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.11.12.205]
    port [5433]

(1 row)

=> SELECT describe_load_balance_decision('10.20.1.1');

        describe_load_balance_decision
    --------------------------------
 Describing load balance decision for address [10.20.1.1]
Load balance cache internal version id (node-local): [1]
Considered rule [etl_systems] source ip filter [10.20.0.0/16]...
  input address matches this rule
Matched to load balance group [default_cluster] the group has policy
  [ROUNDROBIN] number of addresses [3]
(0) LB Address: [10.11.12.10]:5433
(1) LB Address: [10.11.12.20]:5433
(2) LB Address: [10.11.12.30]:5433
Chose address at position [1]
Routing table decision: Success. Load balance redirect to: [10.11.12.20]
  port [5433]

(1 row)

Usually, with these policies in place, all queries run by the ETL systems will run on the load subcluster, and all other queries will run on the analytics subcluster. There are some cases (especially if a subcluster is down or draining) where a client may connect to a node in another subcluster. For this reason, clients should always verify that they are connected to the correct subcluster. See Connection load balancing policies for more information about load balancing policies.
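
One way for a client to verify its subcluster (a sketch, assuming the session's node name is available from the V_MONITOR.CURRENT_SESSION system table):

=> SELECT subcluster_name FROM v_catalog.nodes
   WHERE node_name = (SELECT node_name FROM v_monitor.current_session);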

5 - Starting and stopping subclusters

Subclusters make it convenient to start and stop a group of nodes as needed. You start and stop them with admintools commands or Vertica meta-functions.

Starting a subcluster

To start a subcluster, use the restart_subcluster tool:

$ adminTools -t restart_subcluster -h
Usage: restart_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database whose subcluster is to be restarted
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster to be restarted
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete ('never') will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
  -F, --force           Force the nodes in the subcluster to start and auto
                        recover if necessary

This example starts the subcluster named analytics_cluster:

$ adminTools -t restart_subcluster -c analytics_cluster \
          -d verticadb -p password
*** Restarting subcluster for database verticadb ***
        Restarting host [10.11.12.192] with catalog [v_verticadb_node0006_catalog]
        Restarting host [10.11.12.181] with catalog [v_verticadb_node0004_catalog]
        Restarting host [10.11.12.205] with catalog [v_verticadb_node0005_catalog]
        Issuing multi-node restart
        Starting nodes:
                v_verticadb_node0004 (10.11.12.181)
                v_verticadb_node0005 (10.11.12.205)
                v_verticadb_node0006 (10.11.12.192)
        Starting Vertica on all nodes. Please wait, databases with a large
            catalog may take a while to initialize.
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (DOWN)
                     v_verticadb_node0005: (DOWN) v_verticadb_node0006: (DOWN)
        Node Status: v_verticadb_node0002: (UP) v_verticadb_node0004: (UP)
                     v_verticadb_node0005: (UP) v_verticadb_node0006: (UP)
Communal storage detected: syncing catalog

Restart Subcluster result:  1

Stopping a subcluster

You can stop a subcluster using the stop_subcluster admintools command, or the SHUTDOWN_WITH_DRAIN or SHUTDOWN_SUBCLUSTER functions.

admintools

To stop a subcluster, use the stop_subcluster tool:

$ adminTools -t stop_subcluster -h
Usage: stop_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database whose subcluster is to be stopped
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster to be stopped
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete ('never') will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.

This example stops the subcluster named analytics_cluster:

$ adminTools -t stop_subcluster -c analytics_cluster -d verticadb -p password
*** Forcing subcluster shutdown ***
Verifying subcluster 'analytics_cluster'
        Node 'v_verticadb_node0004' will shutdown
        Node 'v_verticadb_node0005' will shutdown
        Node 'v_verticadb_node0006' will shutdown
Shutdown subcluster command sent to the database

Graceful shutdown

If you want to drain a subcluster's client connections before shutting it down, you can gracefully shut down the subcluster using the SHUTDOWN_WITH_DRAIN function. The function first marks all nodes in the specified subcluster as draining. Work from existing user sessions continues on the draining nodes, but the nodes refuse new client connections and are excluded from load balancing operations. A dbadmin user can still connect to draining nodes. For more information about client connection draining, see Drain client connections.

To run the SHUTDOWN_WITH_DRAIN function, you must specify a timeout value. The function's behavior depends on the sign of the timeout value (a sketch follows the list):

  • Positive: The nodes drain until either all existing connections close or the function reaches the runtime limit set by the timeout value. As soon as one of these conditions is met, the function sends a shutdown message to the subcluster and returns.
  • Zero: The function immediately closes all active user sessions on the subcluster, then shuts down the subcluster and returns.
  • Negative: The function marks the subcluster's nodes as draining and waits to shut down the subcluster until all active user sessions disconnect.
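
For instance, a sketch of the negative-timeout form, which waits indefinitely for sessions to disconnect before shutting down (the subcluster name matches the example below):

=> SELECT SHUTDOWN_WITH_DRAIN('analytics', -1);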

After all nodes in a draining subcluster are down, its nodes automatically reset to a not-draining status.

The following example demonstrates how you can use a positive timeout value to give active user sessions time to finish their work before shutting down the subcluster:

=> SELECT node_name, subcluster_name, is_draining, count_client_user_sessions, oldest_session_user FROM draining_status ORDER BY 1;
      node_name       |  subcluster_name   | is_draining | count_client_user_sessions | oldest_session_user
----------------------+--------------------+-------------+----------------------------+---------------------
 v_verticadb_node0001 | default_subcluster | f           |                          0 |
 v_verticadb_node0002 | default_subcluster | f           |                          0 |
 v_verticadb_node0003 | default_subcluster | f           |                          0 |
 v_verticadb_node0004 | analytics          | f           |                          1 | analyst
 v_verticadb_node0005 | analytics          | f           |                          0 |
 v_verticadb_node0006 | analytics          | f           |                          0 |
(6 rows)

=> SELECT SHUTDOWN_WITH_DRAIN('analytics', 300);
NOTICE 0:  Draining has started on subcluster (analytics)
NOTICE 0:  Begin shutdown of subcluster (analytics)
                              SHUTDOWN_WITH_DRAIN
--------------------------------------------------------------------------------------------------------------------
Set subcluster (analytics) to draining state
Waited for 3 nodes to drain
Shutdown message sent to subcluster (analytics)

(1 row)

You can query the NODES system table to confirm that the subcluster shut down:

=> SELECT subcluster_name, node_name, node_state FROM nodes;
  subcluster_name   |      node_name       | node_state
--------------------+----------------------+------------
 default_subcluster | v_verticadb_node0001 | UP
 default_subcluster | v_verticadb_node0002 | UP
 default_subcluster | v_verticadb_node0003 | UP
 analytics          | v_verticadb_node0004 | DOWN
 analytics          | v_verticadb_node0005 | DOWN
 analytics          | v_verticadb_node0006 | DOWN
(6 rows)

If you want to see more details about the draining and shutdown events, such as whether all user sessions finished their work before the timeout, you can query the dc_draining_events table. In this case, the subcluster still had one active user session when the function reached its timeout:

=> SELECT event_type, event_type_name, event_description, event_result, event_result_name FROM dc_draining_events;
 event_type |       event_type_name        |                          event_description                          | event_result | event_result_name
------------+------------------------------+---------------------------------------------------------------------+--------------+-------------------
          0 | START_DRAIN_SUBCLUSTER       | START_DRAIN for SHUTDOWN of subcluster (analytics)                  |            0 | SUCCESS
          2 | START_WAIT_FOR_NODE_DRAIN    | Wait timeout is 300 seconds                                         |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 0 seconds                                   |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 60 seconds                                  |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 120 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 125 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 180 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 240 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 250 seconds                                 |            4 | INFORMATIONAL
          4 | INTERVAL_WAIT_FOR_NODE_DRAIN | 1 sessions remain after 300 seconds                                 |            4 | INFORMATIONAL
          3 | END_WAIT_FOR_NODE_DRAIN      | Wait for drain ended with 1 sessions remaining                      |            2 | TIMEOUT
          5 | BEGIN_SHUTDOWN_AFTER_DRAIN   | Staring shutdown of subcluster (analytics) following drain          |            4 | INFORMATIONAL
(12 rows)

After you restart the subcluster, you can query the DRAINING_STATUS system table to confirm that the nodes have reset their draining status to not draining:

=> SELECT node_name, subcluster_name, is_draining, count_client_user_sessions, oldest_session_user FROM draining_status ORDER BY 1;
      node_name       |  subcluster_name   | is_draining | count_client_user_sessions | oldest_session_user
----------------------+--------------------+-------------+----------------------------+---------------------
 v_verticadb_node0001 | default_subcluster | f           |                          0 |
 v_verticadb_node0002 | default_subcluster | f           |                          0 |
 v_verticadb_node0003 | default_subcluster | f           |                          0 |
 v_verticadb_node0004 | analytics          | f           |                          0 |
 v_verticadb_node0005 | analytics          | f           |                          0 |
 v_verticadb_node0006 | analytics          | f           |                          0 |
(6 rows)

Immediate shutdown

If you want to shut down a subcluster immediately, you can call SHUTDOWN_SUBCLUSTER. The following example shuts down the subcluster named analytics immediately, without checking for active client connections:

=> SELECT SHUTDOWN_SUBCLUSTER('analytics');
 SHUTDOWN_SUBCLUSTER
---------------------
Subcluster shutdown
(1 row)

6 - Altering subcluster settings

There are several settings you can change on a subcluster using the ALTER SUBCLUSTER statement. You can also switch a subcluster from a primary subcluster to a secondary subcluster, or from a secondary to a primary.

Renaming a subcluster

To rename an existing subcluster, use the ALTER SUBCLUSTER statement's RENAME TO clause:

=> ALTER SUBCLUSTER default_subcluster RENAME TO load_subcluster;
ALTER SUBCLUSTER

=> SELECT DISTINCT subcluster_name FROM subclusters;
  subcluster_name
-------------------
 load_subcluster
 analytics_cluster
(2 rows)

Changing the default subcluster

The default subcluster designates which subcluster Vertica adds nodes to if you do not explicitly specify a subcluster when adding nodes to the database. When you create a new database (or upgrade a database from a version prior to 9.3.0), default_subcluster is the default. You can find the current default subcluster by querying the is_default column of the SUBCLUSTERS system table.

The following example demonstrates finding the default subcluster, then changing it to the subcluster named analytics_cluster:

=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
--------------------
 default_subcluster
(1 row)

=> ALTER SUBCLUSTER analytics_cluster SET DEFAULT;
ALTER SUBCLUSTER
=> SELECT DISTINCT subcluster_name FROM SUBCLUSTERS WHERE is_default = true;
  subcluster_name
-------------------
 analytics_cluster
(1 row)

Converting a subcluster from primary to secondary, or secondary to primary

You usually choose whether a subcluster is primary or secondary when you create it (see Creating subclusters for more information). However, you can switch between the two settings after you have created the subcluster. You may want to change whether a subcluster is primary or secondary to affect your database's K-safety. For example, if you have a primary subcluster that contains down nodes you cannot easily replace, you can promote a secondary subcluster to primary so that the loss of another primary node does not cause your database to shut down. On the other hand, you may choose to convert a primary subcluster to a secondary before eventually shutting it down. This conversion can prevent the database from losing K-safety if the subcluster you are shutting down contains half or more of the total number of primary nodes in the database.

To make a secondary subcluster into a primary subcluster, use the PROMOTE_SUBCLUSTER_TO_PRIMARY function:

=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | f
 load_subcluster   | t
(2 rows)


=> SELECT PROMOTE_SUBCLUSTER_TO_PRIMARY('analytics_cluster');
 PROMOTE_SUBCLUSTER_TO_PRIMARY
-------------------------------
 PROMOTE SUBCLUSTER TO PRIMARY
(1 row)


=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | t
 load_subcluster   | t
(2 rows)

Making a primary subcluster into a secondary subcluster is similar. Unlike converting a secondary subcluster to a primary, however, several issues can prevent you from making a primary subcluster secondary. Vertica prevents you from making a primary subcluster into a secondary subcluster if any of the following is true:

  • The subcluster contains a critical node.

  • The subcluster is the only primary subcluster in the database. You must have at least one primary subcluster.

  • The initiator node is a member of the subcluster you are trying to demote. You must call DEMOTE_SUBCLUSTER_TO_SECONDARY from another subcluster.

To convert a primary subcluster to a secondary subcluster, use the DEMOTE_SUBCLUSTER_TO_SECONDARY function:

=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | t
 load_subcluster   | t
(2 rows)

=> SELECT DEMOTE_SUBCLUSTER_TO_SECONDARY('analytics_cluster');
 DEMOTE_SUBCLUSTER_TO_SECONDARY
--------------------------------
 DEMOTE SUBCLUSTER TO SECONDARY
(1 row)

=> SELECT DISTINCT subcluster_name, is_primary from subclusters;
  subcluster_name  | is_primary
-------------------+------------
 analytics_cluster | f
 load_subcluster   | t
(2 rows)

7 - Removing subclusters

Removing a subcluster from the database deletes the subcluster from the Vertica catalog. During the removal, Vertica removes all of the nodes in the subcluster from the database. These nodes remain part of the database cluster, but are no longer part of the database. If you view your cluster in MC, you will see these nodes with a status of STANDBY. You can add them back to the database by adding them to another subcluster. See Creating subclusters and Adding new nodes to a subcluster.

Vertica places a few restrictions on removing a subcluster:

  • You cannot remove the default subcluster. If you want to remove the subcluster that is set as the default, you must first make another subcluster the default. See Changing the default subcluster for details.

  • You cannot remove the last primary subcluster in the database. Your database must always have at least one primary subcluster.

To remove a subcluster, use the admintools command line db_remove_subcluster tool:

$ adminTools -t db_remove_subcluster -h
Usage: db_remove_subcluster [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database to be modified
  -c SCNAME, --subcluster=SCNAME
                        Name of subcluster to be removed
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  --timeout=NONINTERACTIVE_TIMEOUT
                        set a timeout (in seconds) to wait for actions to
                        complete ('never') will wait forever (implicitly sets
                        -i)
  -i, --noprompts       do not stop and wait for user input(default false).
                        Setting this implies a timeout of 20 min.
  --skip-directory-cleanup
                        Caution: this option will force you to do a manual
                        cleanup. This option skips directory deletion during
                        remove subcluster. This is best used in a cloud
                        environment where the hosts being removed will be
                        subsequently discarded.

This example removes the subcluster named analytics_cluster:

$ adminTools -t db_remove_subcluster -d verticadb -c analytics_cluster -p 'password'
Found node v_verticadb_node0004 in subcluster analytics_cluster
Found node v_verticadb_node0005 in subcluster analytics_cluster
Found node v_verticadb_node0006 in subcluster analytics_cluster
Found node v_verticadb_node0007 in subcluster analytics_cluster
Waiting for rebalance shards. We will wait for at most 36000 seconds.
Rebalance shards polling iteration number [0], started at [17:09:35], time
    out at [03:09:35]
Attempting to drop node v_verticadb_node0004 ( 10.11.12.40 )
    Shutting down node v_verticadb_node0004
    Sending node shutdown command to '['v_verticadb_node0004', '10.11.12.40',
        '/vertica/data', '/vertica/data']'
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0004
    Eon mode detected. The node v_verticadb_node0004 has been removed from
        host 10.11.12.40. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0004
Attempting to drop node v_verticadb_node0005 ( 10.11.12.50 )
    Shutting down node v_verticadb_node0005
    Sending node shutdown command to '['v_verticadb_node0005', '10.11.12.50',
        '/vertica/data', '/vertica/data']'
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0005
    Eon mode detected. The node v_verticadb_node0005 has been removed from
        host 10.11.12.50. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0005
Attempting to drop node v_verticadb_node0006 ( 10.11.12.60 )
    Shutting down node v_verticadb_node0006
    Sending node shutdown command to '['v_verticadb_node0006', '10.11.12.60',
        '/vertica/data', '/vertica/data']'
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0006
    Eon mode detected. The node v_verticadb_node0006 has been removed from
        host 10.11.12.60. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0006
Attempting to drop node v_verticadb_node0007 ( 10.11.12.70 )
    Shutting down node v_verticadb_node0007
    Sending node shutdown command to '['v_verticadb_node0007', '10.11.12.70',
        '/vertica/data', '/vertica/data']'
    Deleting catalog and data directories
    Update admintools metadata for v_verticadb_node0007
    Eon mode detected. The node v_verticadb_node0007 has been removed from
        host 10.11.12.70. To remove the node metadata completely, please clean
        up the files corresponding to this node, at the communal location:
        s3://eonbucket/verticadb/metadata/verticadb/nodes/v_verticadb_node0007
    Reload spread configuration
    Replicating configuration to all nodes
    Checking database state
    Node Status: v_verticadb_node0001: (UP) v_verticadb_node0002: (UP)
        v_verticadb_node0003: (UP)
Communal storage detected: syncing catalog