Client proxy for subclusters

Configure a client proxy for each subcluster so that clients communicate with the proxy instead of connecting directly to the database nodes.

A proxy between the client and the Vertica server helps manage communication. You can configure one or more client proxy pods for each subcluster; the proxy communicates with all nodes in the subcluster, so clients no longer connect directly to the database nodes. The VerticaDB operator mounts a config map as the configuration file in the proxy pods and automatically updates the config map when the state of the subcluster changes.

For each subcluster, the operator creates a client proxy deployment named <vdb-name>-<subcluster-name>-proxy and a client proxy config map named <vdb-name>-<subcluster-name>-proxy-cm. For example, a VerticaDB named vertica-db with a subcluster named sc1 gets a deployment named vertica-db-sc1-proxy and a config map named vertica-db-sc1-proxy-cm. You can verify that these objects have been created, but you must not edit them.

When a new connection request is made, the proxy redirects it to a node based on the workload specified in the request; if no workload is provided, the default workload is used. The proxy retrieves the list of available nodes for that workload and redirects the request according to the load balancing policy. To reduce the performance impact of these lookups, the proxy caches the node list for a predefined period, which minimizes server calls.

During an online upgrade, Vertica transfers active connections from a subcluster that is scheduled to shut down. The proxy detects and handles session transfer messages from the server.
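
To observe how the proxy routes connections and handles session transfers, you can raise the verbosity with the vertica.com/client-proxy-log-level annotation (for example, DEBUG) and read the proxy pod logs. A minimal sketch, assuming the deployment name vertica-db-sc1-proxy from the examples below; the exact log output depends on the proxy version:

$ kubectl logs deployment/vertica-db-sc1-proxy --tail=50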

Enabling the client proxy

To enable the client proxy for the Vertica database, set the vertica.com/use-client-proxy annotation to true:

metadata:
  annotations:
    vertica.com/use-client-proxy: "true"
    vertica.com/client-proxy-log-level: INFO
...
spec:
...
  proxy:
    image: opentext/client-proxy:latest
  ...
  subclusters:
  - affinity: {}
    name: sc1
    proxy:
      replicas: 1
      resources: {}
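
After you add the annotation and proxy settings to your VerticaDB manifest, apply it and let the operator create the proxy deployment and config map. A sketch, assuming the manifest is saved as vdb.yaml:

$ kubectl apply -f vdb.yaml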

Creating replicas of the client proxy pod

You can create more than one client proxy pod for a subcluster. To do this, set spec.subclusters[].proxy.replicas to a value greater than 1. For example, to run two proxy pods for subcluster sc1:

  ...
  subclusters:
  - affinity: {}
    name: sc1
    proxy:
      replicas: 2
      resources: {}
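
Because the operator owns the proxy deployment, change the replica count on the VerticaDB object rather than scaling the deployment directly; the operator reconciles the deployment to match. A sketch, assuming sc1 is the first entry in spec.subclusters of a VerticaDB named vertica-db:

$ kubectl patch verticadb vertica-db --type=json \
    -p '[{"op": "replace", "path": "/spec/subclusters/0/proxy/replicas", "value": 2}]'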

Verifying deployment and config map

After the client proxy is enabled, you can verify that the deployment and config map were created.

To check the deployment:

$ kubectl get deployment
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
vertica-db-sc1-proxy         1/1     1            1           5m57s
verticadb-operator-manager   1/1     1            1           3h42m

To check the config map:

$ kubectl get cm
NAME                                DATA   AGE
vertica-db-sc1-proxy-cm             1      6m10s
verticadb-operator-manager-config   24     3h42m
$ kubectl describe configmap vertica-db-sc1-proxy-cm
Name:         vertica-db-sc1-proxy-cm
Namespace:    vertica
Labels:       app.kubernetes.io/component=database
              app.kubernetes.io/instance=vertica-db
              app.kubernetes.io/managed-by=verticadb-operator
              app.kubernetes.io/name=vertica
              app.kubernetes.io/version=25.1.0-0
              vertica.com/database=vertica
Annotations:  vertica.com/operator-deployment-method: helm
              vertica.com/operator-version: 25.1.0-0
 
Data
====
config.yaml:
----
listener:
  host: ""
  port: 5433
database:
  nodes:
  - vertica-db-sc1-0.vertica-db.vertica.svc.cluster.local:5433
  - vertica-db-sc1-1.vertica-db.vertica.svc.cluster.local:5433
  - vertica-db-sc1-2.vertica-db.vertica.svc.cluster.local:5433
log:
  level: INFO
 
 
BinaryData
====
 
Events:  <none>
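
To extract just the proxy configuration file from the config map, you can read the config.yaml key directly. Note that the dot in the key name must be escaped in the jsonpath expression:

$ kubectl get configmap vertica-db-sc1-proxy-cm -o jsonpath='{.data.config\.yaml}'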

Connecting to Vertica nodes through client proxy

You can run the following command to verify that the client proxy pod is created:

$ kubectl get pods -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE                              NOMINATED NODE   READINESS GATES
vertica-db-sc1-0                              2/2     Running   0          19h   10.244.1.244   k8s-ubuntu20-05.verticacorp.com   <none>           <none>
vertica-db-sc1-1                              2/2     Running   0          19h   10.244.1.246   k8s-ubuntu20-05.verticacorp.com   <none>           <none>
vertica-db-sc1-2                              2/2     Running   0          19h   10.244.2.218   k8s-ubuntu20-06                   <none>           <none>
vertica-db-sc1-proxy-b46578c96-bhs5r          1/1     Running   0          19h   10.244.2.214   k8s-ubuntu20-06                   <none>           <none>
verticadb-operator-manager-75ddffb477-qmbpf   1/1     Running   0          23h   10.244.1.214   k8s-ubuntu20-05.verticacorp.com   <none>           <none>

In this example, the IP of the client proxy pod is 10.244.2.214.
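
The pod IP is routable only from inside the cluster network. For a quick test that bypasses the service, you can port-forward to the proxy deployment and connect locally; a sketch, assuming port 5433 is free on your machine (run the port-forward in a separate terminal):

$ kubectl port-forward deployment/vertica-db-sc1-proxy 5433:5433
$ /opt/vertica/bin/vsql -h localhost -U dbadmin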

You can still use a NodePort or load balancer to connect to the subcluster's service through the client proxy. The service now redirects connections to the client proxy instead of to the Vertica nodes. Here, the service vertica-db-sc1 has the load balancer a24fb01e0875e4adc844aa046951366f-55b4172b9dacecfb.elb.us-east-1.amazonaws.com.

$ kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                               AGE
vertica-db       ClusterIP      None            <none>                                                                          5434/TCP,4803/TCP,8443/TCP,5554/TCP   13d
vertica-db-sc1   LoadBalancer   172.30.84.160   a24fb01e0875e4adc844aa046951366f-55b4172b9dacecfb.elb.us-east-1.amazonaws.com   5433:31475/TCP,8443:30239/TCP         13d

In the following example, we use the vsql client to connect via the service:

$ /opt/vertica/bin/vsql -h a24fb01e0875e4adc844aa046951366f-55b4172b9dacecfb.elb.us-east-1.amazonaws.com -U dbadmin
Welcome to vsql, the Vertica Analytic Database interactive terminal.
 
Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit
 
vertica=> select node_name,client_hostname,client_type,client_os_hostname from current_session;
     node_name      |  client_hostname   | client_type |       client_os_hostname
--------------------+--------------------+-------------+---------------------------------
 v_vertica_node0001 | 10.244.2.214:46750 | vsql        | k8s-ubuntu20-04.verticacorp.com
(1 row)

Notice that in the server session, client_hostname shows the client proxy pod's IP address (10.244.2.214 in this case) instead of the actual client machine.
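
To confirm that every connection now arrives through the proxy, you can list all sessions on the server; proxied sessions should report proxy pod IPs in client_hostname. This queries the standard sessions system table:

vertica=> SELECT node_name, client_hostname, client_type FROM sessions;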