VerticaDB operator
The Vertica operator automates error-prone and time-consuming tasks that a Vertica on Kubernetes administrator must otherwise perform manually. The operator:
- Installs Vertica
- Creates an Eon Mode database
- Upgrades Vertica
- Revives an existing Eon Mode database
- Restarts and reschedules DOWN pods
- Scales subclusters
- Manages services for pods
- Monitors pod health
- Handles load balancing for internal and external traffic
The Vertica operator is a Go binary that uses the operator SDK framework. It runs in its own pod and is namespace-scoped to limit any failures to the objects in its namespace.
For details about installing and upgrading the operator, see Installing the Vertica DB operator.
Monitoring desired state
Each namespace is allowed one operator pod that acts as a custom controller and monitors the state of the custom resource objects within that namespace. The operator uses the control loop mechanism to reconcile state changes by investigating state change notifications from the custom resource instance, and periodically comparing the current state with the desired state.
If the operator detects a change in the desired state, it determines what change occurred and reconciles the current state with the new desired state. For example, if the user deletes a subcluster from the custom resource instance and successfully saves the changes, the operator deletes the corresponding subcluster objects in Kubernetes.
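For example, a minimal sketch of that workflow, assuming a VerticaDB object named verticadb-sample that defines a secondary subcluster named analytics (both names are hypothetical):
$ kubectl edit verticadb verticadb-sample
# In the editor, delete the subcluster entry from spec.subclusters and save:
spec:
  subclusters:
    - name: primary
      size: 3
    # - name: analytics   # removing this entry changes the desired state
    #   size: 2
After you save the change, the operator's control loop detects that the running objects no longer match the desired state and removes the Kubernetes objects that belong to the deleted subcluster.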
Validating state changes
The verticadb-operator Helm chart includes an admission controller, which uses a webhook to prevent invalid state changes to the custom resource. When you save a change to a custom resource, the admission controller webhook queries a REST endpoint that provides rules for mutable states in a custom resource. If a change violates the state rules, the admission controller prevents the change and returns an error. For example, it returns an error if you try to save a change that violates K-Safety.
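For example, assuming a database created with a k-safety of 1 and a single three-node subcluster (the names below are hypothetical), a change that shrinks the cluster below the minimum node count for that k-safety level is rejected by the webhook instead of being saved:
$ kubectl patch verticadb verticadb-sample --type=merge \
  -p '{"spec": {"subclusters": [{"name": "primary", "size": 1}]}}'
The exact error text depends on the operator version, but the change is not applied to the custom resource.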
Limitations
The operator has the following limitations:
- You must manually configure TLS. For details, see Containerized Vertica on Kubernetes.
- Vertica recommends that you do not use the Large cluster feature. If a control node fails, it might cause more than half of the database nodes to fail, which results in the database losing quorum.
- Backup and Restore is a manual process.
- Importing and exporting data between a cluster outside of Kubernetes requires that you expose the service with the NodePort or LoadBalancer service type and properly configure the network.
Important
When you configure the network to import or export data, you must assign each node a static export IP address. When pods are rescheduled to different nodes, you must update the static IP address to reflect the new node.
See Configuring the Network to Import and Export Data for more information.
1 - Installing the Vertica DB operator
The custom resource definition (CRD), VerticaDB operator, and admission controller work together to maintain the state of your environment and automate tasks:
- The CRD extends the Kubernetes API to provide custom objects. It serves as a blueprint for custom resource (CR) instances that specify the desired state of your environment.
- The VerticaDB operator is a custom controller that monitors CR instances to maintain the desired state of VerticaDB objects. You can deploy one VerticaDB operator per namespace, and the operator monitors only the VerticaDB objects within that namespace.
- The admission controller is a webhook that queries a REST endpoint to verify changes to mutable states in a CR instance.
Prerequisites
Installation options
Vertica provides two separate options to install the VerticaDB operator and admission controller: Helm charts and OperatorHub.io.
Note
Each install option has its own workflow that is incompatible with the other option. For example, you cannot install the VerticaDB operator with the Helm charts, and then deploy an operator in the same environment using OperatorHub.io.
Quickstart
You can quickly deploy the VerticaDB operator Helm chart with minimal commands. After you deploy the operator, you can further customize it with Helm chart parameters. For detailed information about Helm chart installations, see Helm charts.
The following steps deploy the VerticaDB operator in the current namespace with its default configuration:
1. Add the Vertica Helm charts to your local repository, then update your local repository to ensure that it contains the latest available version of the Vertica Helm charts.
When you add the charts, give the local chart repository a descriptive name for future reference. The following add command names the charts vertica-charts:
$ helm repo add vertica-charts https://vertica.github.io/charts
$ helm repo update
2. Install the Helm chart to deploy the VerticaDB operator into the current namespace. The following command names this chart instance vdb-op:
$ helm install vdb-op vertica-charts/verticadb-operator
For helm install options, see the Helm documentation. For example commands for additional installation scenarios, see Installing the Helm chart.
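To confirm that the deployment succeeded, you can verify that the operator pod reports a Running status. The label selector below is an assumption based on the chart's default object names; adjust it to match your release:
$ kubectl get pods -l app.kubernetes.io/name=verticadb-operator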
Helm charts
Vertica packages the VerticaDB operator and admission controller in a Helm chart. Vertica on Kubernetes allows one operator instance per namespace.
Important
Vertica recommends that you use Kubernetes 1.21.1 or later. Earlier versions require that you add the kubernetes.io/metadata.name=namespace-name label to each namespace that contains an operator.
Configuring TLS for the admission controller
Before you can install the VerticaDB Helm chart, you must configure TLS for the admission controller. The admission controller uses a webhook that requires TLS certificates for data encryption. Use the webhook.certSource Helm chart parameter to manage the TLS certificates.
By default, webhook.certSource is set to internal, which generates a self-signed certificate before starting the admission controller. To use custom certificates, set this parameter to secret and store your certificates in a Secret. You add the Secret to the Helm chart with the webhook.tlsSecret Helm chart parameter.
Defining custom certificates
Custom certificates require a TLS key that sets the Subject Alternative Name (SAN) using the admission controller webhook's fully-qualified domain name (FQDN). You can set the SAN in a configuration file with the following format:
[alt_names]
DNS.1 = verticadb-operator-webhook-service.namespace.svc
DNS.2 = verticadb-operator-webhook-service.namespace.svc.cluster.local
For more information about TLS and Vertica, see TLS protocol.
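The following commands are one way to generate a self-signed CA and a webhook certificate whose SAN matches the FQDNs above; the file names and subject values are illustrative, and namespace must be replaced with the namespace that contains the operator:
$ openssl genrsa -out ca.key 2048
$ openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=webhook-ca"
$ openssl genrsa -out tls.key 2048
$ openssl req -new -key tls.key -out tls.csr -subj "/CN=verticadb-operator-webhook-service.namespace.svc"
$ printf "subjectAltName=DNS:verticadb-operator-webhook-service.namespace.svc,DNS:verticadb-operator-webhook-service.namespace.svc.cluster.local" > san.txt
$ openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt -days 365 -extfile san.txt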
When you install the VerticaDB operator and admission controller Helm chart, you can pass parameters to customize the Helm chart. Store custom certificates in a Secret before you pass them as parameters. The following command creates a Secret that stores the TLS key, TLS certificate, and CA certificate:
$ kubectl create secret generic tls-secret --from-file=tls.key=/path/to/tls.key --from-file=tls.crt=/path/to/tls.crt --from-file=ca.crt=/path/to/ca.crt
Use tls-secret when you install the VerticaDB operator and admission controller Helm chart. For a detailed example, see Helm chart parameters.
Granting operator privileges
Optionally, you can authorize a user without cluster administrator privileges to install the operator in a specific namespace. You can grant these operator privileges with a preconfigured Kubernetes service account.
Vertica leverages Kubernetes RBAC to authorize a service account with operator privileges to perform operator actions. You can grant these privileges to a Role resource type, then define a RoleBinding resource type that associates that Role with a ServiceAccount.
After the cluster administrator binds that ServiceAccount to a namespace, any user can perform operator actions if they install the Helm chart with the ServiceAccount.
Cluster administrator setup
The cluster administrator creates a namespace and then binds to it a service account with the required operator privileges:
1. Install the CRDs from the vertica-kubernetes GitHub repository:
$ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/verticadbs.vertica.com-crd.yaml
$ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/verticaautoscalers.vertica.com-crd.yaml
2. Create a namespace:
$ kubectl create namespace namespace
3. Apply the ServiceAccount, Roles, and RoleBindings required to grant operator privileges to a service account. The following command applies operator-rbac.yaml, a sample file that defines the required operator privileges:
$ kubectl -n namespace apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/operator-rbac.yaml
4. Verify the changes with kubectl get:
- ServiceAccount:
$ kubectl get serviceaccounts -n namespace
NAME                                    SECRETS   AGE
default                                 1         71m
verticadb-operator-controller-manager   1         69m
- Roles in the correct namespace:
$ kubectl get roles -n namespace
NAME                                      CREATED AT
verticadb-operator-leader-election-role   2022-04-14T16:26:53Z
verticadb-operator-manager-role           2022-04-14T16:26:53Z
- RoleBindings in the correct namespace:
$ kubectl get rolebinding -n namespace
NAME                                             ROLE                                            AGE
verticadb-operator-leader-election-rolebinding   Role/verticadb-operator-leader-election-role    73m
verticadb-operator-manager-rolebinding           Role/verticadb-operator-manager-role            73m
Non-cluster administrator installation
Any user can perform operator actions if they install the Helm chart with the serviceAccountNameOverride parameter set to the ServiceAccount that has the required privileges.
1. Add the Vertica Helm charts to your local repository, then update your local repository to ensure that it contains the latest available version of the Vertica Helm charts.
When you add the charts, give the local chart repository a descriptive name for future reference. The following add command names the charts vertica-charts:
$ helm repo add vertica-charts https://vertica.github.io/charts
$ helm repo update
2. Install the operator:
$ helm install vdb-op -n namespace vertica-charts/verticadb-operator \
--skip-crds \
--set webhook.enable=false \
--set prometheus.createProxyRBAC=false \
--set skipRoleAndRoleBindingCreation=true \
--set serviceAccountNameOverride=verticadb-operator-controller-manager
Installing the Helm chart
Before you can install the Helm chart, you must select a method to configure TLS for the admission controller.
The following install steps use custom certificates:
1. Add the Vertica Helm charts to your local repository, then update your local repository to ensure that it contains the latest available version of the Vertica Helm charts.
When you add the charts, give the local chart repository a descriptive name for future reference. The following add command names the charts vertica-charts:
$ helm repo add vertica-charts https://vertica.github.io/charts
$ helm repo update
2. Install the operator Helm chart. The following examples demonstrate the most common Helm chart configurations. For details about the Helm chart options and parameters, see Helm chart parameters.
Note
Each of the following commands includes the --create-namespace option to create the provided namespace if it does not exist. If you do not provide the namespace during install, Helm installs the operator in the current namespace that is defined in the kubectl configuration file.
Enter one of the following commands to customize your Helm chart installation:
- Default configuration. The following command requires cluster administrator privileges:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator
- Custom certificates. Pass custom certificates with the webhook.caBundle, webhook.certSource, and webhook.tlsSecret parameters. The following command requires cluster administrator privileges, and uses the tls-secret Secret created in Defining custom certificates:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
--set webhook.certSource=secret \
--set webhook.tlsSecret=tls-secret
- Service account override. Use service accounts to allow users without cluster administrator privileges to install the operator. Pass the service account with the serviceAccountNameOverride parameter:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
--set serviceAccountNameOverride=service-account-name
For details, see Granting operator privileges.
- Do not install the admission controller webhook. Deploying the webhook requires cluster-scoped privileges that are not required to install the operator. If you use a service account that is granted the privileges required to install the operator but not the webhook, provide the service account with serviceAccountNameOverride, and set webhook.enable to false to deploy only the operator:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
--set serviceAccountNameOverride=service-account-name \
--set webhook.enable=false
Caution
Webhooks prevent invalid state changes to the custom resource. Running Vertica on Kubernetes without webhook validations might result in invalid state transitions.
For additional details about helm install, see the official documentation.
OperatorHub.io
OperatorHub.io is a registry that allows vendors to share Kubernetes operators. Each vendor must adhere to packaging guidelines to simplify user adoption.
To install the VerticaDB operator from OperatorHub.io, navigate to the Vertica operator page and follow the install instructions.
2 - Upgrading the VerticaDB operator
Vertica supports two separate options to upgrade the VerticaDB operator:
- OperatorHub.io
- Helm charts
Note
You must upgrade the operator with the same option that you selected for installation. For example, you cannot install the VerticaDB operator with Helm charts, and then upgrade the operator in the same environment using OperatorHub.io.
Prerequisites
OperatorHub.io
The Operator Lifecycle Manager (OLM) operator manages upgrades for OperatorHub.io installations. You can configure the OLM operator to upgrade the VerticaDB operator manually or automatically with the Subscription object's spec.installPlanApproval parameter.
Automatic upgrade
To configure automatic version upgrades, set spec.installPlanApproval to Automatic, or omit the setting entirely. When the OLM operator refreshes the catalog source, it installs the new VerticaDB operator automatically.
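For example, a Subscription sketch with automatic approval; the metadata and catalog source values are illustrative and mirror the Subscription example shown later for OpenShift:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: verticadb-operator
  namespace: $NAMESPACE
spec:
  channel: stable
  name: verticadb-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic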
Manual upgrade
Upgrade the VerticaDB operator manually to approve version upgrades for specific install plans. To manually upgrade, set the spec.installPlanApproval parameter to Manual and complete the following:
1. Verify if there is an install plan that requires approval to proceed with the upgrade:
$ kubectl get installplan
NAME            CSV                         APPROVAL   APPROVED
install-ftcj9   verticadb-operator.v1.7.0   Manual     false
install-pw7ph   verticadb-operator.v1.6.0   Manual     true
The command output shows that the install plan install-ftcj9 for VerticaDB operator version 1.7.0 is not approved.
2. Approve the install plan with a patch command:
$ kubectl patch installplan install-ftcj9 --type=merge --patch='{"spec": {"approved": true}}'
installplan.operators.coreos.com/install-ftcj9 patched
After you set the approval, the OLM operator silently upgrades the VerticaDB operator. To monitor its progress, inspect the STATUS column of the Subscription object:
$ kubectl describe subscription subscription-object-name
Helm charts
The CRD is included when you install the Helm chart, but the helm install command does not overwrite an existing CRD. To upgrade the operator, you must update the CRD with the manifest from the GitHub repository. Upgrading the operator with the CRD requires the following prerequisites:
Additionally, you must upgrade the VerticaAutoscaler CRD and the EventTrigger CRD, even if you do not use either in your environment. These CRDs are installed with the operator and maintained as separate YAML manifests. Upgrade the VerticaAutoscaler and EventTrigger to ensure that your operator is upgraded completely.
Use kubectl apply to upgrade the CRDs:
1. Upgrade the VerticaDB CRD:
$ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/verticadbs.vertica.com-crd.yaml
2. Upgrade the VerticaAutoscaler CRD:
$ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/verticaautoscalers.vertica.com-crd.yaml
3. Upgrade the EventTrigger CRD:
$ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/eventtriggers.vertica.com-crd.yaml
4. Upgrade the Helm chart:
$ helm upgrade operator-name --wait vertica-charts/verticadb-operator
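To confirm the upgrade, you can list the release and check the chart version that Helm reports; operator-name is the release name used in the helm upgrade command:
$ helm list --namespace namespace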
3 - Helm chart parameters
The following list describes the available settings for the VerticaDB operator and admission controller Helm chart:
affinity
- Applies rules that constrain the VerticaDB operator to specific nodes. It is more expressive than nodeSelector. If this parameter is not set, then the operator uses no affinity setting.
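For example, the following sketch uses the standard Kubernetes affinity specification to require nodes that carry a hypothetical dedicated=infra label:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: dedicated
          operator: In
          values:
          - infra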
image.name
- The name of the image that runs the operator.
Default: vertica/verticadb-operator:version
imagePullSecrets
- List of Secrets that store credentials to authenticate to the private container repository specified by image.repo and rbac_proxy_image. For details, see Specifying ImagePullSecrets in the Kubernetes documentation.
image.repo
- The server that hosts the repository that contains image.name. Use this parameter for deployments that require control over a private hosting server, such as an air-gapped operator.
Use this parameter with rbac_proxy_image.name and rbac_proxy_image.repo.
Default: docker.io
logging.filePath
- The path to a log file in the VerticaDB operator filesystem. If this value is not specified, Vertica writes logs to standard output.
Default: Empty string (' ') that indicates standard output.
logging.level
- Minimum logging level. This parameter accepts the following values:
Default: info
logging.maxFileSize
- When logging.filePath is set, the maximum size in MB of the logging file before log rotation occurs.
Default: 500
logging.maxFileAge
- When logging.filePath is set, the maximum age in days of the logging file before log rotation deletes the file.
Default: 7
logging.maxFileRotation
- When logging.filePath is set, the maximum number of files that are kept in rotation before the old ones are removed.
Default: 3
nameOverride
- Sets the prefix for the name assigned to all objects that the Helm chart creates.
If this parameter is not set, each object name begins with the name of the Helm chart, verticadb-operator.
nodeSelector
- Provides control over which nodes are used to schedule the operator pod. If this is not set, the node selector is omitted from the operator pod when it is created. To set this parameter, provide a list of key/value pairs.
The following example schedules the operator only on nodes that have the region=us-east label:
nodeSelector:
region: us-east
priorityClassName
- The PriorityClass name assigned to the operator pod. This affects where the pod is scheduled.
prometheus.createProxyRBAC
- When set to true, creates role-based access control (RBAC) rules that authorize access to the operator's /metrics endpoint for the Prometheus integration.
Default: true
prometheus.createServiceMonitor
- Deprecated
This parameter is deprecated and will be removed in a future release.
When set to true, creates the ServiceMonitor custom resource for the Prometheus operator. You must install the Prometheus operator before you set this to true and install the Helm chart.
For details, see the Prometheus operator GitHub repository.
Default: false
prometheus.expose
- Configures the operator's /metrics endpoint for the Prometheus integration. The following options are valid:
- EnableWithAuthProxy: Creates a new service object that exposes an HTTPS /metrics endpoint. The RBAC proxy controls access to the metrics.
- EnableWithoutAuth: Creates a new service object that exposes an HTTP /metrics endpoint that does not authorize connections. Any client with network access can read the metrics.
- Disable: Prometheus metrics are not exposed.
Default: EnableWithAuthProxy
prometheus.tlsSecret
- Secret that contains the TLS certificates for the Prometheus /metrics endpoint. You must create this Secret in the same namespace that you deployed the Helm chart.
The Secret requires the following values:
To ensure that the operator uses the certificates in this parameter, you must set prometheus.expose to EnableWithAuthProxy.
If prometheus.expose is not set to EnableWithAuthProxy, then this parameter is ignored, and the RBAC proxy sidecar generates its own self-signed certificate.
rbac_proxy_image.name
- The name of the Kubernetes RBAC proxy image that performs authorization. Use this parameter for deployments that require authorization by a proxy server, such as an air-gapped operator.
Use this parameter with image.repo and rbac_proxy_image.repo.
Default: kubebuilder/kube-rbac-proxy:v0.11.0
rbac_proxy_image.repo
- The server that hosts the repository that contains rbac_proxy_image.name. Use this parameter for deployments that perform authorization by a proxy server, such as an air-gapped operator.
Use this parameter with image.repo and rbac_proxy_image.name.
Default: gcr.io
resources.limits and resources.requests
- The resource requirements for the operator pod.
resources.limits is the maximum amount of CPU and memory that an operator pod can consume from its host node.
resources.requests is the amount of CPU and memory that an operator pod requests from its host node.
Defaults:
resources:
limits:
cpu: 100m
memory: 750Mi
requests:
cpu: 100m
memory: 20Mi
serviceAccountNameOverride
- Service account that identifies any pods in the cluster for apiserver access. A cluster administrator can create a service account that grants the privileges required to install the operator so that users without cluster administrator privileges can install the Helm chart.
To correctly control access, the service account's Roles and RoleBindings must exist before you add the service account to the CR. If these are not set, the Vertica Helm chart creates and uses a service account.
Vertica provides the required Roles and RoleBindings as GitHub release artifacts.
Default: Empty string ("")
skipRoleAndRoleBindingCreation
- Determines whether the Helm chart creates any Roles or RoleBindings to authorize service accounts with VerticaDB operator privileges.
When set to true, the Helm chart does not create any Roles or RoleBindings. This allows a user that cannot create Roles and RoleBindings to install the Helm chart.
Vertica provides the required Roles and RoleBindings as GitHub release artifacts.
The service account that installs the Helm chart must exist, and you must set serviceAccountNameOverride to that service account.
Default: false
tolerations
- Any taints and tolerations that influence where the operator pod is scheduled.
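For example, a sketch that lets the operator pod tolerate a hypothetical dedicated=operators:NoSchedule taint:
tolerations:
- key: dedicated
  operator: Equal
  value: operators
  effect: NoSchedule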
webhook.caBundle
- A PEM-encoded certificate authority (CA) bundle that validates the webhook's server certificate. If this is not set, the webhook uses the system trust roots on the apiserver.
Deprecated
This parameter is deprecated and will be removed in a future release. To add a CA bundle, see webhook.tlsSecret.
If webhook.caBundle is set and the webhook.tlsSecret Secret contains a ca.crt key, then the webhook.tlsSecret CA value takes precedence.
webhook.certSource
- How TLS certificates are provided for the admission controller webhook. This parameter accepts the following values:
- internal: The VerticaDB operator internally generates a self-signed, 10-year expiry certificate before starting the managing controller. When the certificate expires, you must manually restart the operator pod to create a new certificate.
- secret: You generate the custom certificates before you install the Helm chart and store them in a Secret. This option requires that you set webhook.tlsSecret.
If webhook.tlsSecret is set, then this option is implicitly selected.
Default: internal
For details, see Installing the Vertica DB operator.
webhook.enable
- Whether the Helm chart installs the admission controller webhooks for the VerticaDB custom resource and VerticaAutoscaler. If you do not have the privileges required to install the admission controller, set this value to false to deploy the operator only.
This parameter enables or disables both webhooks. You cannot enable one webhook and disable the other.
Caution
Webhooks prevent invalid state changes to the custom resource. Running Vertica on Kubernetes without webhook validations might result in invalid state transitions.
Default: true
webhook.tlsSecret
- Secret that contains a PEM-encoded certificate authority (CA) bundle and its keys.
The CA bundle validates the webhook's server certificate. If this is not set, the webhook uses the system trust roots on the apiserver.
This Secret includes the following keys for the CA bundle:
4 - Red Hat OpenShift integration
Red Hat OpenShift is a hybrid cloud platform that provides enhanced security features and greater control over the Kubernetes cluster. In addition, OpenShift provides the OperatorHub, a catalog of operators that meet OpenShift requirements.
For comprehensive instructions about the OpenShift platform, refer to the Red Hat OpenShift documentation.
Note
If your Kubernetes cluster is in the cloud or on a managed service, each Vertica node must operate in the same availability zone.
Enhanced security with security context constraints
OpenShift requires that each deployment uses a security context constraint (SCC) to enforce enhanced security measures. The SCC lets administrators control the privileges of the pods in a cluster. For example, you can restrict namespace access for specific users in a multi-user environment.
Default SCCs
OpenShift provides default SCCs that provide a range of security features without manual configuration. Vertica on Kubernetes supports the privileged SCC, the most relaxed default SCC. The privileged SCC allows Vertica to assign user and group IDs to the Kubernetes objects in the cluster. In addition, the privileged SCC has the following Linux capabilities that enable internal SSH communication between the pods:
Vertica provides anyuid-extra, a custom SCC that you can create that extends the anyuid SCC. The anyuid-extra SCC runs Vertica with more restrictions than the privileged SCC. For example, if you do not have the privileges to grant the privileged SCC, you can create the anyuid-extra SCC and add it to your Vertica workloads service account.
For installation details, see Creating a Custom SCC with anyuid-extra.
Installing the operator
The VerticaDB operator is a community operator that is maintained by Vertica. Each operator available in the OperatorHub must adhere to requirements defined by the Operator Lifecycle Manager (OLM). To meet these requirements, vendors must provide a cluster service version (CSV) manifest for each operator. Vertica provides a CSV for each version of the VerticaDB operator available in the OpenShift OperatorHub.
The VerticaDB operator supports OpenShift versions 4.8 and higher.
You must have cluster-admin privileges on your OpenShift account to install the VerticaDB operator. For detailed installation instructions, refer to the OpenShift documentation.
Installing the operator in multiple OpenShift namespaces
By default, the OpenShift user interface (UI) installs the VerticaDB operator in a single OpenShift namespace. In some circumstances, you might require that the operator watch and manage resource objects across multiple OpenShift namespaces.
Prerequisites:
The following steps add the VerticaDB operator to an additional namespace:
1. Create a YAML-formatted OperatorGroup object file. The following example creates a file named operatorgroup.yaml:
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
name: vertica-operatorgroup
namespace: $NAMESPACE
spec:
targetNamespaces:
- $NAMESPACE
In the previous example, $NAMESPACE is the namespace where you want to install the operator.
2. Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
3. Create a YAML-formatted Subscription object file to subscribe a namespace to an operator. The following example creates a file named sub.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: verticadb-operator
namespace: $NAMESPACE
spec:
channel: stable
name: verticadb-operator
source: community-operators
sourceNamespace: openshift-marketplace
4. Create the Subscription object:
$ oc apply -f sub.yaml
After you create the Subscription object, the OLM is aware of the operator.
5. Use kubectl get to view the installation progress in a separate shell:
$ kubectl get -n $NAMESPACE clusterserviceversion -w --selector operators.coreos.com/verticadb-operator.$NAMESPACE
When the installation is complete, you can manage the operator from the UI.
Creating a custom SCC with anyuid-extra
Before you can create an operator, you must create the anyuid-extra SCC and add it to your Vertica workloads service account. The Vertica anyuid-extra SCC manifest is available on the Vertica GitHub repository.
1. Create the custom SCC using the anyuid-extra YAML-formatted manifest:
$ kubectl apply -f https://github.com/vertica/vertica-kubernetes/releases/latest/download/custom-scc.yaml
For detailed instructions, refer to the OpenShift documentation.
2. Execute the following command to add the custom SCC to your Vertica workloads service account:
$ oc adm policy add-scc-to-user -n $NAMESPACE -z verticadb-operator-controller-manager anyuid-extra
In the previous command, $NAMESPACE is the namespace with the operator installation.
By default, the anyuid-extra SCC has a priority setting of 10, so it is automatically selected instead of the default privileged SCC. For additional details about the priority setting, refer to the OpenShift documentation.
Deploying Vertica on OpenShift
After you install the VerticaDB operator and add a supported SCC to your Vertica workloads service account, you can deploy Vertica on OpenShift.
For details about installing OpenShift in supported environments, see the OpenShift Container Platform installation overview.
Before you deploy Vertica on OpenShift, create the required Secrets to store sensitive information. For details about Secrets and OpenShift, see the OpenShift documentation. For guidance on deploying a Vertica custom resource, see VerticaDB CRD.
5 - Prometheus integration
Vertica on Kubernetes integrates with Prometheus to scrape time series metrics about the VerticaDB operator and Vertica server process. These metrics create a detailed model of your application over time to provide valuable performance and troubleshooting insights as well as facilitate internal and external communications and service discovery in microservice and containerized architectures.
Prometheus requires that you set up targets—metrics that you want to monitor. Each target is exposed on an endpoint, and Prometheus periodically scrapes that endpoint to collect target data. Vertica exports metrics and provides access methods for both the VerticaDB operator and server process.
Operator metrics
The VerticaDB operator supports the Operator SDK framework, which requires that an authorization proxy impose role-based access control (RBAC) to access operator metrics over HTTPS. To increase flexibility, Vertica provides the following options to access the Prometheus /metrics endpoint:
- HTTPS access: Meet operator SDK requirements and use a sidecar container as an RBAC proxy to authorize connections.
- HTTP access: Expose the /metrics endpoint to external connections without RBAC. Any client with network access can read from /metrics.
- Disable Prometheus entirely.
Vertica provides Helm chart parameters and YAML manifests to configure each option.
Note
If you installed the VerticaDB operator with OperatorHub.io, you can use the Prometheus integration with the default Helm chart settings. OperatorHub.io installations cannot configure any Helm chart parameters.
Prerequisites
HTTPS with RBAC
The operator SDK framework requires that operators use an authorization proxy for metrics access. Because the operator sends metrics to localhost only, Vertica meets these requirements with a sidecar container with localhost access that enforces RBAC.
RBAC rules are cluster-scoped, and the sidecar authorizes connections from clients associated with a service account that has the correct ClusterRole and ClusterRoleBindings. Vertica provides the following example manifests:
For additional details about ClusterRoles and ClusterRoleBindings, see the Kubernetes documentation.
Create RBAC rules
Note
This section details how to create RBAC rules for environments that require that you set up ClusterRole and ClusterRoleBinding objects outside of the Helm chart installation.
The following steps create the ClusterRole and ClusterRoleBindings objects that grant access to the /metrics endpoint to a non-Kubernetes resource such as Prometheus. Because RBAC rules are cluster-scoped, you must create or add to an existing ClusterRoleBinding:
1. Create a ClusterRoleBinding that binds the role for the RBAC sidecar proxy with a service account:
- Create a ClusterRoleBinding:
$ kubectl create clusterrolebinding verticadb-operator-proxy-rolebinding \
--clusterrole=verticadb-operator-proxy-role \
--serviceaccount=namespace:serviceaccount
- Add a service account to an existing ClusterRoleBinding:
$ kubectl patch clusterrolebinding verticadb-operator-proxy-rolebinding \
--type='json' \
-p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "serviceaccount","namespace": "namespace" } }]'
2. Create a ClusterRoleBinding that binds the role for the non-Kubernetes object to the RBAC sidecar proxy service account:
- Create a ClusterRoleBinding:
$ kubectl create clusterrolebinding verticadb-operator-metrics-reader \
--clusterrole=verticadb-operator-metrics-reader \
--serviceaccount=namespace:serviceaccount \
--group=system:authenticated
- Bind the service account to an existing ClusterRoleBinding:
$ kubectl patch clusterrolebinding verticadb-operator-metrics-reader \
--type='json' \
-p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "serviceaccount","namespace": "namespace"},{"op":"add","path":"/subjects/-","value":{"kind": "Group", "name": "system:authenticated"} }]'
$ kubectl patch clusterrolebinding verticadb-operator-metrics-reader \
--type='json' \
-p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "serviceaccount","namespace": "namespace" } }]'
When you install the Helm chart, the ClusterRole and ClusterRoleBindings are created automatically. By default, the prometheus.expose parameter is set to EnableWithAuthProxy, which creates the service object and exposes the operator's /metrics endpoint.
For details about creating a sidecar container, see VerticaDB CRD.
Service object
Vertica provides a service object verticadb-operator-metrics-service to access the Prometheus /metrics endpoint. The VerticaDB operator does not manage this service object. By default, the service object uses the ClusterIP service type to support RBAC.
Connect to the /metrics endpoint at port 8443 with the following path:
https://verticadb-operator-metrics-service.namespace.svc.cluster.local:8443/metrics
Bearer token authentication
Kubernetes authenticates requests to the API server with service account credentials. Each pod is associated with a service account and has the following credentials stored in the filesystem of each container in the pod:
Use these credentials to authenticate to the /metrics endpoint through the service object. You must use the credentials for the service account that you used to create the ClusterRoleBindings.
For example, the following cURL request accesses the /metrics endpoint. Include the --insecure option only if you do not want to verify the serving certificate:
$ curl --insecure --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://verticadb-operator-metrics-service.vertica:8443/metrics
For additional details about service account credentials, see the Kubernetes documentation.
TLS client certificate authentication
Some environments might prevent you from authenticating to the /metrics endpoint with the service account token. For example, you might run Prometheus outside of Kubernetes. To allow external client connections to the /metrics endpoint, you must supply the RBAC proxy sidecar with TLS certificates.
You must create a Secret that contains the certificates, and then use the prometheus.tlsSecret Helm chart parameter to pass the Secret to the RBAC proxy sidecar when you install the Helm chart. The following steps create the Secret and install the Helm chart:
1. Create a Secret that contains the certificates:
$ kubectl create secret generic metrics-tls --from-file=tls.key=/path/to/tls.key --from-file=tls.crt=/path/to/tls.crt --from-file=ca.crt=/path/to/ca.crt
2. Install the Helm chart with prometheus.tlsSecret set to the Secret that you just created:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
--set prometheus.tlsSecret=metrics-tls
The prometheus.tlsSecret parameter forces the RBAC proxy to use the TLS certificates stored in the Secret. Otherwise, the RBAC proxy sidecar generates its own self-signed certificate.
After you install the Helm chart, you can authenticate to the /metrics endpoint with the certificates in the Secret. For example:
$ curl --key tls.key --cert tls.crt --cacert ca.crt https://verticadb-operator-metrics-service.vertica.svc:8443/metrics
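For reference, a minimal Prometheus scrape configuration sketch that presents these client certificates to the metrics service; the job name and certificate paths are illustrative:
scrape_configs:
  - job_name: verticadb-operator
    scheme: https
    metrics_path: /metrics
    tls_config:
      ca_file: /path/to/ca.crt
      cert_file: /path/to/tls.crt
      key_file: /path/to/tls.key
    static_configs:
      - targets:
          - verticadb-operator-metrics-service.namespace.svc.cluster.local:8443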
HTTP access
You might have an environment that does not require privileged access to Prometheus metrics. For example, you might run Prometheus outside of Kubernetes.
To allow external access to the /metrics endpoint with HTTP, set prometheus.expose to EnableWithoutAuth. For example:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
--set prometheus.expose=EnableWithoutAuth
Service object
Vertica provides a service object verticadb-operator-metrics-service to access the Prometheus /metrics endpoint. The VerticaDB operator does not manage this service object. By default, the service object uses the ClusterIP service type, so you must change the serviceType for external client access. The service object's fully-qualified domain name (FQDN) is as follows:
verticadb-operator-metrics-service.namespace.svc.cluster.local
Connect to the /metrics endpoint at port 8443 with the following path:
http://verticadb-operator-metrics-service.namespace.svc.cluster.local:8443/metrics
Prometheus operator integration (optional)
Vertica on Kubernetes integrates with the Prometheus operator, which provides custom resources (CRs) that simplify targeting metrics. Vertica supports the ServiceMonitor CR that discovers the VerticaDB operator automatically, and authenticates requests with a bearer token.
The ServiceMonitor CR is available as a release artifact in our GitHub repository. See Helm chart parameters for details about the prometheus.createServiceMonitor parameter.
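If you write the manifest yourself instead of using the release artifact, a ServiceMonitor sketch might look like the following; the label selector, port name, and token path are assumptions, so prefer the artifact that Vertica publishes:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: verticadb-operator-metrics-monitor
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: verticadb-operator
  endpoints:
    - port: https
      scheme: https
      path: /metrics
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true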
Server metrics
Vertica exports server metrics on port 8443 at the following endpoint:
https://host-address:8443/api-version/metrics
Only the dbadmin user can authenticate to the HTTPS service, and the service accepts only mutual TLS (mTLS) authentication. The setup for both Vertica on Kubernetes and non-containerized Vertica environments is identical. For details, see HTTPS service.
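For example, a cURL sketch that presents client certificates for mTLS; host-address, the certificate paths, and api-version are placeholders that match the endpoint path above:
$ curl --key client.key --cert client.crt --cacert ca.crt https://host-address:8443/api-version/metrics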
Vertica on Kubernetes manages its HTTP service with the following custom resource parameters:
- httpServerMode: Controls whether the HTTP server starts. By default, the service is enabled. When you configure mTLS, HTTPS is enforced.
- subclusters[i].httpNodePort: Sets a custom port for the HTTPS service for NodePort serviceTypes.
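A sketch of these parameters in a VerticaDB manifest; the object name, subcluster layout, and port value are illustrative, and the httpServerMode value shown is an assumption, so check the CRD for the accepted values:
apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb-sample
spec:
  httpServerMode: Enabled    # assumption; consult the CRD for valid values
  subclusters:
    - name: primary
      size: 3
      serviceType: NodePort
      httpNodePort: 30443    # custom port for the HTTPS service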
For request and response examples, see the /metrics endpoint description. For a list of available metrics, see Prometheus metrics.
Disabling Prometheus
To disable Prometheus, set the prometheus.expose Helm chart parameter to Disable:
$ helm install operator-name --namespace namespace --create-namespace vertica-charts/verticadb-operator \
--set prometheus.expose=Disable
For details about Helm install commands, see Installing the Vertica DB operator.