Backup and restore containerized Vertica (vbr)
Important
Image versions 24.1.0-0 and higher do not support any of the following features:
- Backup and restore with vbr
- Hybrid Kubernetes clusters
These operations require Administration tools (admintools), which is not included in the Vertica server image beginning with version 24.1.0-0. To perform these tasks, you must use image version 23.4.0-0 or lower with one of the following API version configurations:
- v1beta1
- v1 with the vertica.com/vcluster-ops annotation set to false
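For example, the following is a minimal sketch of the metadata for a VerticaDB CR that uses API version v1 with the annotation set. The CR name is a placeholder, and the remaining spec fields are omitted:
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: verticadb
  annotations:
    vertica.com/vcluster-ops: "false"   # required with v1 when using admintools-based features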
Important
This section describes backup and restore operations with the vbr command-line utility.
Vertica provides the VerticaRestorePoints custom resource definition (CRD), a beta feature for in-database backup and restore operations in test environments. You can perform full database and object restore operations with only a VerticaRestorePoints custom resource and native SQL statements.
In Vertica on Kubernetes, backup and restore operations use the same components and tooling as non-containerized environments, including the following:
- Configuration file that specifies environment details and custom options
- Backup location that was prepared by an init script
- vbr utility
Containerized backup and restore operations require that you make these components and tools available to the Vertica server process running within a pod. The following sections describe strategies that back up and restore your VerticaDB custom resource (CR) with Kubernetes objects.
For comprehensive backup and restore documentation, see Backing up and restoring the database.
Prerequisites
- Complete Installing the VerticaDB operator
- Create a VerticaDB CR object
- The kubectl command line tool
- Review Setting up backup locations and complete the necessary steps
Sample configuration file
The vbr configuration file defines parameters that the vbr utility uses to execute backup and restore tasks. For details, see the vbr configuration file documentation.
To define a vbr configuration file in Kubernetes, you can create a ConfigMap whose data field defines vbr configuration values. After you create the ConfigMap object in your Kubernetes environment, you can run vbr commands from within a pod that has access to the ConfigMap.
The following backup-configmap.yaml
manifest creates a configuration file named backup.ini
that backs up to an S3 bucket:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-configmap
data:
  backup-host: |
    backup-pod-dns
  backup.ini: |
    [CloudStorage]
    cloud_storage_backup_path = s3://backup-bucket/database-backup-path
    cloud_storage_backup_file_system_path = [backup-pod-dns]:/opt/vertica/config/
    [Database]
    dbName = database-name
    [Misc]
    tempDir = /tmp/vbr
    restorePointLimit = 7
    objectRestoreMode = coexist
To create the ConfigMap object, apply the manifest to your Kubernetes environment:
$ kubectl apply -f backup-configmap.yaml
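Optionally, you can confirm that the ConfigMap stores the expected keys and configuration values:
$ kubectl describe configmap backup-configmap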
backup-host definition
In the sample configuration file, backup-pod-dns is a portion of the pod's fully qualified domain name (FQDN). Vertica on Kubernetes creates a headless service object that constructs the FQDN for each object. The DNS format for each pod is as follows:
podName.headlessServiceName
Note
The headless service name always matches the VerticaDB CR name.
The podName portion of the DNS is itself constructed from Kubernetes object names. For example, the following is a complete pod DNS:
vdb-main-0.vdb
In the preceding example:
- vdb: VerticaDB CR name
- main: Subcluster name
- 0: StatefulSet ordinal index
- vdb: Headless service name (always identical to the VerticaDB CR name)
To access a pod from outside the namespace, append the namespace to the pod DNS:
podName.headlessService.namespace
For additional details, see the Kubernetes documentation.
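As an optional check, you can resolve the pod DNS from inside one of the Vertica pods. This sketch assumes the vdb-main-0 pod from the preceding example and that the getent utility is available in the container image:
$ kubectl exec -it vdb-main-0 -- getent hosts vdb-main-0.vdb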
Mount the configuration file
After you define a ConfigMap with vbr configuration information, you must make it available to the Vertica pods that can execute the vbr utility. You can mount the ConfigMap as a volume in a VerticaDB CR instance. For details about mounting volumes in the VerticaDB CR, see Mounting custom volumes.
Cloud storage locations require access to information that you cannot provide in your configuration file, such as environment variables. You can set environment variables in your CR with annotations.
The following mounted-vbr-config.yaml
manifest mounts a backup-config
ConfigMap object in the Vertica container's /vbr
directory:
apiVersion: vertica.com/v1beta1
kind: VerticaDB
metadata:
  name: verticadb
spec:
  annotations:
    VBR_BACKUP_STORAGE_SECRET_ACCESS_KEY: "access-key"
    VBR_BACKUP_STORAGE_ACCESS_KEY_ID: "access-key-id"
    VBR_BACKUP_STORAGE_ENDPOINT_URL: "https://path/to/backup/storage"
    VBR_COMMUNAL_STORAGE_SECRET_ACCESS_KEY: "access-key"
    VBR_COMMUNAL_STORAGE_ACCESS_KEY_ID: "access-key-id"
    VBR_COMMUNAL_STORAGE_ENDPOINT_URL: "https://path/to/communal/storage"
  communal:
    endpoint: https://path/to/s3-endpoint
    path: "s3://bucket/database-path"
  image: vertica/vertica-k8s:version
  subclusters:
    - isPrimary: true
      name: main
  volumeMounts:
    - name: backup-configmap
      mountPath: /vbr
  volumes:
    - name: backup-configmap
      configMap:
        name: backup-configmap
Deprecated
The v1beta1 API version is deprecated and will be removed in a future release. Vertica recommends that you upgrade to API version v1. For details, see Upgrading Vertica on Kubernetes and VerticaDB custom resource definition.
To mount the ConfigMap, apply the manifest:
$ kubectl apply -f mounted-vbr-config.yaml
After you apply the manifest, each Vertica pod restarts, and the new backup volume is mounted.
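To confirm that the ConfigMap is mounted, you can list the mount path in one of the Vertica pods. The ConfigMap keys are projected as files, so you should see backup.ini and backup-host. The pod name below assumes the verticadb CR and main subcluster from the preceding manifest:
$ kubectl exec -it verticadb-main-0 -- ls /vbr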
Prepare the backup location
Before you can run a backup, you must prepare the backup location with the vbr init command. This command initializes a directory on the backup host to receive and store Vertica backup data. You need to initialize a backup location only once. For details, see Setting up backup locations.
The following backup-init.yaml
manifest creates a pod to initialize the backup-host
defined in the sample configuration file:
apiVersion: v1
kind: Pod
metadata:
  name: backup-init
spec:
  restartPolicy: OnFailure
  containers:
    - name: main
      image: vertica/vertica-k8s:version
      command:
        - bash
        - -c
        - "ssh -o 'StrictHostKeyChecking no' -i /home/dbadmin/.ssh/id_rsa dbadmin@$BACKUP_HOST '/opt/vertica/bin/vbr -t init --cloud-force-init --config-file /vbr/backup.ini'"
      env:
        - name: BACKUP_HOST
          valueFrom:
            configMapKeyRef:
              key: backup-host
              name: backup-configmap
Apply the manifest to initialize the backup location:
$ kubectl create -f backup-init.yaml
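To verify that initialization succeeded, you can check the status and logs of the init pod:
$ kubectl get pod backup-init
$ kubectl logs backup-init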
Run a backup
Your organization might run backups as needed or on a schedule. The following sections use the sample configuration file ConfigMap to demonstrate both scenarios.
On-demand backups
In some circumstances, you might need to run backup operations as needed. You can create a Kubernetes Job to run an on-demand backup. The following backup-on-demand.yaml
manifest creates a Job object that executes a backup:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: vertica-backup-
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: main
          image: vertica/vertica-k8s:version
          command:
            - bash
            - -c
            - "ssh -o 'StrictHostKeyChecking no' -i /home/dbadmin/.ssh/id_rsa dbadmin@$BACKUP_HOST '/opt/vertica/bin/vbr -t backup --config-file /vbr/backup.ini'"
          env:
            - name: BACKUP_HOST
              valueFrom:
                configMapKeyRef:
                  key: backup-host
                  name: backup-configmap
Each time that you want to create a new backup, execute the following command:
$ kubectl create -f backup-on-demand.yaml
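Because the manifest uses generateName, each kubectl create command prints a unique Job name. You can use that name to monitor the backup; the name below is a placeholder:
$ kubectl get jobs
$ kubectl logs job/vertica-backup-abcde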
Scheduled backups
You might need to schedule a backup at a fixed time or interval. You can run the backup as a Kubernetes CronJob object that schedules a Kubernetes Job as specified in Cron format.
The following backup-cronjob.yaml
manifest runs a daily backup at 2:00 AM:
apiVersion: batch/v1
kind: CronJob
metadata:
  generateName: vertica-backup-
spec:
  schedule: "00 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: main
              image: vertica/vertica-k8s:version
              command:
                - bash
                - -c
                - "ssh -o 'StrictHostKeyChecking no' -i /home/dbadmin/.ssh/id_rsa dbadmin@$BACKUP_HOST '/opt/vertica/bin/vbr -t backup --config-file /vbr/backup.ini'"
              env:
                - name: BACKUP_HOST
                  valueFrom:
                    configMapKeyRef:
                      key: backup-host
                      name: backup-configmap
To schedule the backup, create the CronJob object:
$ kubectl create -f backup-cronjob.yaml
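Optionally, you can confirm the schedule and see when the CronJob last created a backup Job:
$ kubectl get cronjobs
$ kubectl get jobs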
Restore from a backup
You can create a Kubernetes Job to restore database objects from a backup. For comprehensive documentation about the vbr restore task, see Restoring backups.
The following restore-on-demand-job.yaml
manifest creates a Job object that restores a database:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: vertica-restore-
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: main
          image: vertica/vertica-k8s:version
          command:
            - bash
            - -c
            - "ssh -o 'StrictHostKeyChecking no' -i /home/dbadmin/.ssh/id_rsa dbadmin@$BACKUP_HOST '/opt/vertica/bin/vbr -t restore --config-file /vbr/backup.ini'"
          env:
            - name: BACKUP_HOST
              valueFrom:
                configMapKeyRef:
                  key: backup-host
                  name: backup-configmap
The restore process requires that you stop the database, run the restore operation, and then restart the database. This workflow requires additional steps in a containerized environment because Kubernetes has components that monitor and maintain the desired state of the database. You must temporarily adjust some settings to provide time for the restore operation to complete. For details about the settings in this section, see Custom resource definition parameters.
The following steps change the environment for the restore process and then restore the original values:
- Update the CR to extend the livenessProbe timeout. This timeout triggers a container restart when it expires. The default livenessProbe timeout is about two and a half minutes, which does not provide enough time to restore the database. The following patch command uses the livenessProbeOverride parameter to set the timeout to about 20 minutes:
  $ kubectl patch vdb customResourceName --type=json --patch '[ { "op": "add", "path": "/spec/livenessProbeOverride", "value": {"initialDelaySeconds": 60, "periodSeconds": 30, "failureThreshold": 38}}]'
- Delete the StatefulSet for each subcluster so that the pods are restarted with the new livenessProbeOverride setting:
  $ kubectl delete statefulset customResourceName-subclusterName
- Wait until the pods restart and the new pod IPs are present in admintools.conf:
  $ kubectl wait --for=condition=Ready=True pod --selector=app.kubernetes.io/instance=customResourceName --timeout=10m
- Set autoRestartVertica to false so that the Vertica server process does not automatically restart when you stop the database:
  $ kubectl patch vdb customResourceName --type=merge --patch '{"spec": {"autoRestartVertica": false}}'
- Access a shell in a host that is running a Vertica pod, and stop the database with admintools:
  $ kubectl exec -it hostname -- admintools -t stop_db -d database-name
  After you stop the database, wait for the cluster lease to expire.
  Note
  In some scenarios, estimating when the cluster lease expires is difficult. If the restore fails, it logs when the lease expires.
  You can also experiment with the restartPolicy and backoff failure policy in the Job spec to control how many times to retry the restore. For an example, see the sketch after these steps.
- Apply the manifest to run a Job that restores the backup:
  $ kubectl create -f restore-on-demand-job.yaml
- After the Job completes, use patch to reset the livenessProbe timeout to its default setting:
  $ kubectl patch vdb customResourceName --type=json --patch '[{ "op": "remove", "path": "/spec/livenessProbeOverride" }]'
- Set autoRestartVertica back to true to reset the restart behavior to its state before the restore operation:
  $ kubectl patch vdb customResourceName --type=merge --patch '{"spec": {"autoRestartVertica": true}}'
- To speed up the restart process, delete the StatefulSet for each subcluster. The restart speed was affected when you increased the livenessProbeOverride setting:
  $ kubectl delete statefulset customResourceName-subclusterName
- Wait for the Vertica server to restart:
  $ kubectl wait --for=condition=Ready=True pod --selector=app.kubernetes.io/instance=customResourceName --timeout=10m
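The following is a minimal sketch of how you might adjust the retry behavior mentioned in the note above. It adds the standard Kubernetes backoffLimit field to the restore Job manifest so that the restore is retried a limited number of times while you wait for the cluster lease to expire; the value 3 is an arbitrary example:
apiVersion: batch/v1
kind: Job
metadata:
  generateName: vertica-restore-
spec:
  backoffLimit: 3          # retry the restore pod up to 3 times before the Job is marked failed
  template:
    spec:
      restartPolicy: Never # let the Job controller create a new pod for each retry
      containers:
        - name: main
          image: vertica/vertica-k8s:version
          command:
            - bash
            - -c
            - "ssh -o 'StrictHostKeyChecking no' -i /home/dbadmin/.ssh/id_rsa dbadmin@$BACKUP_HOST '/opt/vertica/bin/vbr -t restore --config-file /vbr/backup.ini'"
          env:
            - name: BACKUP_HOST
              valueFrom:
                configMapKeyRef:
                  key: backup-host
                  name: backup-configmap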