Configure Kafka for TLS
This page covers procedures for configuring TLS connections between Vertica, Kafka, and the scheduler.
Note that the following example configures TLS for a Kafka server where ssl.client.auth=required, which requires the following:
- kafka_SSL_Certificate
- kafka_SSL_PrivateKey_secret
- kafka_SSL_PrivateKeyPassword_secret
- A keystore for the scheduler
If your configuration uses ssl.client.auth=none or ssl.client.auth=requested, these parameters and the scheduler keystore are optional.
Creating certificates for Vertica and clients
The CA certificate in this example is self-signed. In a production environment, you should instead use a trusted CA.
This example uses the same self-signed root CA to sign all of the certificates used by the scheduler, Kafka brokers, and Vertica. If you cannot use the same CA to sign the keys for all of these systems, make sure you include the entire chain of trust in your keystores.
For more information, see Generating TLS certificates and keys.
- Generate a private key, root.key.
$ openssl genrsa -out root.key
Generating RSA private key, 2048 bit long modulus
..............................................................................
............................+++
...............+++
e is 65537 (0x10001)
- Generate a self-signed CA certificate.
$ openssl req -new -x509 -key root.key -out root.crt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:MA
Locality Name (eg, city) []:Cambridge
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.mycompany.com
Email Address []:myemail@mycompany.com
- Set file permissions so that only the owner can read and write root.key. Grant read permission for root.crt to other users and groups.
$ ls
root.crt  root.key
$ chmod 600 root.key
$ chmod 644 root.crt
- Generate the server private key, server.key.
$ openssl genrsa -out server.key
Generating RSA private key, 2048 bit long modulus
....................................................................+++
......................................+++
e is 65537 (0x10001)
- Create a certificate signing request (CSR) for your CA. Be sure to set the "Common Name" field to a wildcard (asterisk) so the certificate is accepted for all Vertica nodes in the cluster:
$ openssl req -new -key server.key -out server_reqout.txt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:MA
Locality Name (eg, city) []:Cambridge
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.mycompany.com
Email Address []:myemail@mycompany.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: server_key_password
An optional company name []:
- Sign the CSR with your CA. This creates the server certificate, server.crt.
$ openssl x509 -req -in server_reqout.txt -days 3650 -sha1 -CAcreateserial -CA root.crt \
  -CAkey root.key -out server.crt
Signature ok
subject=/C=US/ST=MA/L=Cambridge/O=My Company/CN=*.mycompany.com/emailAddress=myemail@mycompany.com
Getting CA Private Key
- Set the appropriate permissions for the key and certificate.
$ chmod 600 server.key
$ chmod 644 server.crt
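Before distributing the files, you can confirm that the signed certificate actually chains back to your CA with openssl verify. The following is a self-contained sketch, not part of the procedure above: it builds a throwaway CA and server certificate non-interactively (using -subj instead of the interactive prompts) and then checks the chain. The file names match the example above, but any names work.

```shell
# Throwaway CA key and self-signed CA certificate (non-interactive via -subj).
openssl genrsa -out root.key 2048
openssl req -new -x509 -key root.key -out root.crt -days 30 -subj "/CN=Test CA"

# Server key, CSR, and CA-signed certificate.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server_reqout.txt -subj "/CN=*.mycompany.com"
openssl x509 -req -in server_reqout.txt -CA root.crt -CAkey root.key \
    -CAcreateserial -out server.crt -days 30

# Confirm the server certificate chains back to the CA.
openssl verify -CAfile root.crt server.crt
```

If the chain is intact, the last command prints "server.crt: OK"; any other output means the certificate was not signed by the CA in root.crt.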
Create a client key and certificate (mutual mode only)
In Mutual Mode, clients and servers verify each other's certificates before establishing a connection. The following procedure creates a client key and certificate to present to Vertica. The certificate must be signed by a CA that Vertica trusts.
The steps for this are identical to those above for creating a server key and certificate for Vertica.
$ openssl genrsa -out client.key
Generating RSA private key, 2048 bit long modulus
................................................................+++
..............................+++
e is 65537 (0x10001)
$ openssl req -new -key client.key -out client_reqout.txt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:MA
Locality Name (eg, city) []:Cambridge
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.mycompany.com
Email Address []:myemail@mycompany.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: client_key_password
An optional company name []:
$ openssl x509 -req -in client_reqout.txt -days 3650 -sha1 -CAcreateserial -CA root.crt \
-CAkey root.key -out client.crt
Signature ok
subject=/C=US/ST=MA/L=Cambridge/O=My Company/CN=*.mycompany.com/emailAddress=myemail@mycompany.com
Getting CA Private Key
$ chmod 600 client.key
$ chmod 644 client.crt
Set up mutual mode client-server TLS
Configure Vertica for mutual mode
The following keys and certificates must be imported and then distributed to the nodes in your Vertica cluster with a TLS CONFIGURATION for mutual mode:
- root.key
- root.crt
- server.key
- server.crt
You can view existing keys and certificates by querying CRYPTOGRAPHIC_KEYS and CERTIFICATES.
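For instance, a quick way to check what is already defined before importing anything (these queries assume only that each system table exposes a name column; see the system table reference for the full column lists):

```sql
=> SELECT name FROM CRYPTOGRAPHIC_KEYS;
=> SELECT name FROM CERTIFICATES;
```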
- Import the server and root keys and certificates into Vertica with CREATE KEY and CREATE CERTIFICATE. See Generating TLS certificates and keys for details.
=> CREATE KEY imported_key TYPE 'RSA' AS '-----BEGIN PRIVATE KEY-----...-----END PRIVATE KEY-----';
=> CREATE CA CERTIFICATE imported_ca AS '-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----';
=> CREATE CERTIFICATE imported_cert AS '-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----';
In this example, \set is used to retrieve the contents of root.crt, server.key, and server.crt.
=> \set ca_cert ''''`cat root.crt`''''
=> \set serv_key ''''`cat server.key`''''
=> \set serv_cert ''''`cat server.crt`''''
=> CREATE CA CERTIFICATE root_ca AS :ca_cert;
CREATE CERTIFICATE
=> CREATE KEY server_key TYPE 'RSA' AS :serv_key;
CREATE KEY
=> CREATE CERTIFICATE server_cert AS :serv_cert;
CREATE CERTIFICATE
- Follow the steps for Mutual Mode in Configuring client-server TLS to set the proper TLSMODE and TLS CONFIGURATION parameters.
Configure a client for mutual mode
Clients must have their private key, certificate, and CA certificate. The certificate will be presented to Vertica when establishing a connection, and the CA certificate will be used to verify the server certificate from Vertica.
This example configures the vsql client for mutual mode.
- Create a .vsql directory in the user's home directory.
$ mkdir ~/.vsql
- Copy client.key, client.crt, and root.crt to the .vsql directory.
$ cp client.key client.crt root.crt ~/.vsql
- Log into Vertica with vsql and query the SESSIONS system table to verify that the connection is using mutual mode:
$ vsql
Password: user-password
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

SSL connection (cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256, protocol: TLSv1.2)

=> select user_name,ssl_state from sessions;
 user_name | ssl_state
-----------+-----------
 dbadmin   | Mutual
(1 row)
Configure Kafka for TLS
Configure the Kafka brokers
This procedure configures Kafka to use TLS with client connections. You can also configure Kafka to use TLS to communicate between brokers. However, inter-broker TLS has no impact on establishing an encrypted connection between Vertica and Kafka.
- Create a truststore file for all of your Kafka brokers, importing your CA certificate. This example uses the self-signed root.crt created above.
$ keytool -keystore kafka.truststore.jks -alias CARoot -import \
  -file root.crt
Enter keystore password: some_password
Re-enter new password: some_password
Owner: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Issuer: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Serial number: c3f02e87707d01aa
Valid from: Fri Mar 22 13:37:37 EDT 2019 until: Sun Apr 21 13:37:37 EDT 2019
Certificate fingerprints:
   MD5:  73:B1:87:87:7B:FE:F1:6E:94:55:FD:AF:5D:D0:C3:0C
   SHA1: C0:69:1C:93:54:21:87:C7:03:93:FE:39:45:66:DE:22:18:7E:CD:94
   SHA256: 23:03:BB:B7:10:12:50:D9:C5:D0:B7:58:97:41:1E:0F:25:A0:DB:D0:1E:7D:F9:6E:60:8F:79:A6:1C:3F:DD:D5
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 50 69 11 64 45 E9 CC C5   09 EE 26 B5 3E 71 39 7C  Pi.dE.....&.>q9.
0010: E5 3D 78 16                                        .=x.
]
]
#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]
#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 50 69 11 64 45 E9 CC C5   09 EE 26 B5 3E 71 39 7C  Pi.dE.....&.>q9.
0010: E5 3D 78 16                                        .=x.
]
]
Trust this certificate? [no]: yes
Certificate was added to keystore
- Create a keystore file for the Kafka broker named kafka01. Each broker's keystore should be unique.
The keytool command adds a Subject Alternative Name (SAN) used as a fallback when establishing a TLS connection. Use your Kafka broker's fully-qualified domain name (FQDN) as the value for both the SAN and the "What is your first and last name?" prompt.
In this example, the FQDN is kafka01.mycompany.com. The alias for keytool is set to localhost, so local connections to the broker use TLS.
$ keytool -keystore kafka01.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA \
  -ext SAN=DNS:kafka01.mycompany.com
Enter keystore password: some_password
Re-enter new password: some_password
What is your first and last name?
  [Unknown]: kafka01.mycompany.com
What is the name of your organizational unit?
  [Unknown]:
What is the name of your organization?
  [Unknown]: MyCompany
What is the name of your City or Locality?
  [Unknown]: Cambridge
What is the name of your State or Province?
  [Unknown]: MA
What is the two-letter country code for this unit?
  [Unknown]: US
Is CN=Database Admin, OU=MyCompany, O=Unknown, L=Cambridge, ST=MA, C=US correct?
  [no]: yes
Enter key password for <localhost>
  (RETURN if same as keystore password):
- Export the Kafka broker's certificate. In this example, the certificate is exported as kafka01.unsigned.crt.
$ keytool -keystore kafka01.keystore.jks -alias localhost \
  -certreq -file kafka01.unsigned.crt
Enter keystore password: some_password
- Sign the broker's certificate with the CA certificate.
$ openssl x509 -req -CA root.crt -CAkey root.key -in kafka01.unsigned.crt \
  -out kafka01.signed.crt -days 365 -CAcreateserial
Signature ok
subject=/C=US/ST=MA/L=Cambridge/O=Unknown/OU=MyCompany/CN=Database Admin
Getting CA Private Key
- Import the CA certificate into the broker's keystore.
Note
If you use different CAs to sign the certificates in your environment, you must add the entire chain of CAs you used to sign your certificate to the keystore, all the way up to the root CA. Including the entire chain of trust helps other systems verify the identity of your Kafka broker.
$ keytool -keystore kafka01.keystore.jks -alias CARoot -import -file root.crt
Enter keystore password: some_password
Owner: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Issuer: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Serial number: c3f02e87707d01aa
Valid from: Fri Mar 22 13:37:37 EDT 2019 until: Sun Apr 21 13:37:37 EDT 2019
Certificate fingerprints:
   MD5:  73:B1:87:87:7B:FE:F1:6E:94:55:FD:AF:5D:D0:C3:0C
   SHA1: C0:69:1C:93:54:21:87:C7:03:93:FE:39:45:66:DE:22:18:7E:CD:94
   SHA256: 23:03:BB:B7:10:12:50:D9:C5:D0:B7:58:97:41:1E:0F:25:A0:DB:D0:1E:7D:F9:6E:60:8F:79:A6:1C:3F:DD:D5
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 50 69 11 64 45 E9 CC C5   09 EE 26 B5 3E 71 39 7C  Pi.dE.....&.>q9.
0010: E5 3D 78 16                                        .=x.
]
]
#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]
#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 50 69 11 64 45 E9 CC C5   09 EE 26 B5 3E 71 39 7C  Pi.dE.....&.>q9.
0010: E5 3D 78 16                                        .=x.
]
]
Trust this certificate? [no]: yes
Certificate was added to keystore
- Import the signed Kafka broker certificate into the keystore.
$ keytool -keystore kafka01.keystore.jks -alias localhost \
  -import -file kafka01.signed.crt
Enter keystore password: some_password
Owner: CN=Database Admin, OU=MyCompany, O=Unknown, L=Cambridge, ST=MA, C=US
Issuer: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Serial number: b4bba9a1828ecaaf
Valid from: Tue Mar 26 12:26:34 EDT 2019 until: Wed Mar 25 12:26:34 EDT 2020
Certificate fingerprints:
   MD5:  17:EA:3E:15:B4:15:E9:93:67:EE:59:C0:4F:D1:4C:01
   SHA1: D5:35:B7:F7:44:7C:D6:B4:56:6F:38:2D:CD:3A:16:44:19:C1:06:B7
   SHA256: 25:8C:46:03:60:A7:4C:10:A8:12:8E:EA:4A:FA:42:1D:A8:C5:FB:65:81:74:CB:46:FD:B1:33:64:F2:A3:46:B0
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore
- If you are not logged into the Kafka broker for which you prepared the keystore, copy the truststore and keystore to it using scp. If you have already decided where to store the keystore and truststore files in the broker's filesystem, you can directly copy them to their final destination. This example just copies them to the root user's home directory temporarily. The next step moves them into their final location.
$ scp kafka.truststore.jks kafka01.keystore.jks root@kafka01.mycompany.com:
root@kafka01.mycompany.com's password: root_password
kafka.truststore.jks                          100% 1048     1.0KB/s   00:00
kafka01.keystore.jks                          100% 3277     3.2KB/s   00:00
- Repeat steps 2 through 7 for the remaining Kafka brokers.
Allow Kafka to read the keystore and truststore
If you did not copy the truststore and keystore to a directory where Kafka can read them in the previous step, you must copy them to their final location on the broker. You must also allow the user account you use to run Kafka to read these files. The easiest way to ensure the user's access is to give this user ownership of these files.
In this example, Kafka is run by a Linux user named kafka. If you use another user to run Kafka, be sure to set the permissions on the truststore and keystore files appropriately.
- Log into the Kafka broker as root.
- Copy the truststore and keystore to a directory where Kafka can access them. There is no set location for these files: you can choose a directory under /etc, or some other location where configuration files are usually stored. This example copies them from root's home directory to Kafka's configuration directory, /opt/kafka/config/. In your own system, this configuration directory may be in a different location depending on how you installed Kafka.
~# cd /opt/kafka/config/
/opt/kafka/config# cp /root/kafka01.keystore.jks /root/kafka.truststore.jks .
- If you aren't logged in as a user account that runs Kafka, change the ownership of the truststore and keystore files. This example changes the ownership from root (which is the user currently logged in) to the kafka user:
/opt/kafka/config# ls -l
total 80
...
-rw-r--r-- 1 kafka nogroup 1221 Feb 21  2018 consumer.properties
-rw------- 1 root  root    3277 Mar 27 08:03 kafka01.keystore.jks
-rw-r--r-- 1 root  root    1048 Mar 27 08:03 kafka.truststore.jks
-rw-r--r-- 1 kafka nogroup 4727 Feb 21  2018 log4j.properties
...
/opt/kafka/config# chown kafka kafka01.keystore.jks kafka.truststore.jks
/opt/kafka/config# ls -l
total 80
...
-rw-r--r-- 1 kafka nogroup 1221 Feb 21  2018 consumer.properties
-rw------- 1 kafka root    3277 Mar 27 08:03 kafka01.keystore.jks
-rw-r--r-- 1 kafka root    1048 Mar 27 08:03 kafka.truststore.jks
-rw-r--r-- 1 kafka nogroup 4727 Feb 21  2018 log4j.properties
...
- Repeat steps 1 through 3 for the remaining Kafka brokers.
Configure Kafka to use TLS
With the truststore and keystore in place, your next step is to edit Kafka's server.properties configuration file to tell Kafka to use TLS/SSL encryption. This file is usually stored in the Kafka config directory. The location of this directory depends on how you installed Kafka. In this example, the file is located in /opt/kafka/config.
When editing the files, be sure you do not change their ownership. The best way to ensure Linux does not change the file's ownership is to use su to become the user account that runs Kafka, assuming you are not already logged in as that user:
/opt/kafka/config# su -s /bin/bash kafka
Note
The previous command lets you start a shell as the kafka system user even if that user cannot log in.
The server.properties file contains Kafka broker settings in a property=value format. To configure the Kafka broker to use SSL, alter or add the following property settings:
listeners - Host names and ports on which the Kafka broker listens. If you are not using SSL for connections between brokers, you must supply both a PLAINTEXT and an SSL option. For example:
listeners=PLAINTEXT://hostname:9092,SSL://hostname:9093
ssl.keystore.location - Absolute path to the keystore file.
ssl.keystore.password - Password for the keystore file.
ssl.key.password - Password for the Kafka broker's key in the keystore. You can make this password different from the keystore password if you choose.
ssl.truststore.location - Location of the truststore file.
ssl.truststore.password - Password to access the truststore.
ssl.enabled.protocols - TLS/SSL protocols that Kafka allows clients to use.
ssl.client.auth - Specifies whether SSL authentication is required or optional. The most secure setting is required, which verifies the client's identity.
Important
These settings vary depending on your version of Kafka. Always consult the Apache Kafka documentation for your version of Kafka before making changes to server.properties. In particular, be aware that Kafka version 2.0 and later enables host name verification for clients and inter-broker communications by default.
This example configures Kafka to verify client identities via SSL authentication. It does not use SSL to communicate with other brokers, so the server.properties file defines both SSL and PLAINTEXT listener ports. It does not supply a host name for the listener ports, which tells Kafka to listen on the default network interface.
The lines added to the kafka01 broker's copy of server.properties for this configuration are:
listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=/opt/kafka/config/kafka01.keystore.jks
ssl.keystore.password=vertica
ssl.key.password=vertica
ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=vertica
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.client.auth=required
You must make these changes to the server.properties file on all of your brokers.
After making your changes to your brokers' server.properties files, restart Kafka. How you restart Kafka depends on your installation:
- If Kafka is running as part of a Hadoop cluster, you can usually restart it from within whatever interface you use to control Hadoop (such as Ambari).
- If you installed Kafka directly, you can restart it either by directly running the kafka-server-stop.sh and kafka-server-start.sh scripts or via the Linux system's service control commands (such as systemctl). You must run this command on each broker.
Test the configuration
If you have not configured client authentication, you can quickly test whether Kafka can access its keystore by running the command:
$ openssl s_client -debug -connect broker_host_name:9093 -tls1
If Kafka is able to access its keystore, this command will output a dump of the broker's certificate (exit with CTRL+C):
# openssl s_client -debug -connect kafka01.mycompany.com:9093 -tls1
CONNECTED(00000003)
write to 0xa4e4f0 [0xa58023] (197 bytes => 197 (0xC5))
0000 - 16 03 01 00 c0 01 00 00-bc 03 01 76 85 ed f0 fe ...........v....
0010 - 60 60 7e 78 9d d4 a8 f7-e6 aa 5c 80 b9 a7 37 61 ``~x......\...7a
0020 - 8e 04 ac 03 6d 52 86 f5-84 4b 5c 00 00 62 c0 14 ....mR...K\..b..
0030 - c0 0a 00 39 00 38 00 37-00 36 00 88 00 87 00 86 ...9.8.7.6......
0040 - 00 85 c0 0f c0 05 00 35-00 84 c0 13 c0 09 00 33 .......5.......3
0050 - 00 32 00 31 00 30 00 9a-00 99 00 98 00 97 00 45 .2.1.0.........E
0060 - 00 44 00 43 00 42 c0 0e-c0 04 00 2f 00 96 00 41 .D.C.B...../...A
0070 - c0 11 c0 07 c0 0c c0 02-00 05 00 04 c0 12 c0 08 ................
0080 - 00 16 00 13 00 10 00 0d-c0 0d c0 03 00 0a 00 ff ................
0090 - 01 00 00 31 00 0b 00 04-03 00 01 02 00 0a 00 1c ...1............
00a0 - 00 1a 00 17 00 19 00 1c-00 1b 00 18 00 1a 00 16 ................
00b0 - 00 0e 00 0d 00 0b 00 0c-00 09 00 0a 00 23 00 00 .............#..
00c0 - 00 0f 00 01 01 .....
read from 0xa4e4f0 [0xa53ad3] (5 bytes => 5 (0x5))
0000 - 16 03 01 08 fc .....
. . .
The above method is not conclusive, however; it only tells you if Kafka is able to find its keystore.
The best test of whether Kafka is able to accept TLS connections is to configure the command-line Kafka producer and consumer. In order to configure these tools, you must first create a client keystore. These steps are identical to creating a broker keystore.
Note
This example assumes that Kafka has a topic named test that you can send test messages to.
- Create the client keystore:
$ keytool -keystore client.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA \
  -ext SAN=DNS:fqdn_of_client_system
- Respond to the "What is your first and last name?" prompt with the FQDN of the system you will use to run the producer and/or consumer. Answer the rest of the prompts with the details of your organization.
- Export the client certificate so it can be signed:
$ keytool -keystore client.keystore.jks -alias localhost -certreq -file client.unsigned.cert
- Sign the client certificate with the root CA:
$ openssl x509 -req -CA root.crt -CAkey root.key -in client.unsigned.cert -out client.signed.cert \
  -days 365 -CAcreateserial
- Add the root CA to the keystore:
$ keytool -keystore client.keystore.jks -alias CARoot -import -file root.crt
- Add the signed client certificate to the keystore:
$ keytool -keystore client.keystore.jks -alias localhost -import -file client.signed.cert
- Copy the keystore to a location where you will use it. For example, you could choose to copy it to the same directory where you copied the keystore for the Kafka broker. If you choose to copy it to some other location, or intend to use some other user to run the command-line clients, be sure to add a copy of the truststore file you created for the brokers. Clients can reuse this truststore file for authenticating the Kafka brokers because the same CA is used to sign all of the certificates. Also set the file's ownership and permissions accordingly.
Next, you must create a properties file (similar to the broker's server.properties file) that configures the command-line clients to use TLS. For a client running on the Kafka broker named kafka01, your configuration file could look like this:
security.protocol=SSL
ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=truststore_password
ssl.keystore.location=/opt/kafka/config/client.keystore.jks
ssl.keystore.password=keystore_password
ssl.key.password=key_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.client.auth=required
This property file assumes the keystore file is located in the Kafka configuration directory.
Finally, you can run the command-line producer or consumer to ensure they can connect and process data. You supply these clients the properties file you just created. The following example assumes you stored the properties file in the Kafka configuration directory, and that Kafka is installed in /opt/kafka:
~# cd /opt/kafka
/opt/kafka# bin/kafka-console-producer.sh --broker-list kafka01.mycompany.com:9093 \
--topic test --producer.config config/client.properties
>test
>test again
>More testing. These messages seem to be getting through!
^D
/opt/kafka# bin/kafka-console-consumer.sh --bootstrap-server kafka01.mycompany.com:9093 --topic test \
--consumer.config config/client.properties --from-beginning
test
test again
More testing. These messages seem to be getting through!
^C
Processed a total of 3 messages
Loading data from Kafka
After you configure Kafka to accept TLS connections, verify that you can directly load data from it into Vertica. You should perform this step even if you plan to create a scheduler to automatically stream data.
You can choose to create a separate key and certificate for directly loading data from Kafka. This example re-uses the key and certificate created for the Vertica server in part 2 of this example.
You directly load data from Kafka by using the KafkaSource data source function with the COPY statement (see Manually consume data from Kafka). The KafkaSource function creates the connection to Kafka, so it needs a key, certificate, and related passwords to create an encrypted connection. You pass this information via session parameters. See Kafka user-defined session parameters for a list of these parameters.
The easiest way to get the key and certificate into the parameters is by first reading them into vsql variables. You get their contents by using back quotes to read the file contents via the Linux shell. Then you set the session parameters from the variables. Before setting the session parameters, increase the MaxSessionUDParameterSize session parameter to add enough storage space in the session variables for the key and the certificates. They can be larger than the default size limit for session variables (1000 bytes).
The following example reads the server key and certificate and the root CA from a directory named /home/dbadmin/SSL. Because the server's key password is not saved in a file, the example sets it in a Linux environment variable named KVERTICA_PASS before running vsql. The example sets MaxSessionUDParameterSize to 100000 before setting the session variables. Finally, it enables TLS for the Kafka connection and streams data from the topic named test.
$ export KVERTICA_PASS=server_key_password
$ vsql
Password:
Welcome to vsql, the Vertica Analytic Database interactive terminal.
Type: \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit
SSL connection (cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256, protocol: TLSv1.2)
=> \set cert '\''`cat /home/dbadmin/SSL/server.crt`'\''
=> \set pkey '\''`cat /home/dbadmin/SSL/server.key`'\''
=> \set ca '\''`cat /home/dbadmin/SSL/root.crt`'\''
=> \set pass '\''`echo $KVERTICA_PASS`'\''
=> alter session set MaxSessionUDParameterSize=100000;
ALTER SESSION
=> ALTER SESSION SET UDPARAMETER kafka_SSL_Certificate=:cert;
ALTER SESSION
=> ALTER SESSION SET UDPARAMETER kafka_SSL_PrivateKey_secret=:pkey;
ALTER SESSION
=> ALTER SESSION SET UDPARAMETER kafka_SSL_PrivateKeyPassword_secret=:pass;
ALTER SESSION
=> ALTER SESSION SET UDPARAMETER kafka_SSL_CA=:ca;
ALTER SESSION
=> ALTER SESSION SET UDPARAMETER kafka_Enable_SSL=1;
ALTER SESSION
=> CREATE TABLE t (a VARCHAR);
CREATE TABLE
=> COPY t SOURCE KafkaSource(brokers='kafka01.mycompany.com:9093',
stream='test|0|-2', stop_on_eof=true,
duration=interval '5 seconds')
PARSER KafkaParser();
Rows Loaded
-------------
3
(1 row)
=> SELECT * FROM t;
a
---------------------------------------------------------
test again
More testing. These messages seem to be getting through!
test
(3 rows)
Configure the scheduler
The final piece of the configuration is to set up the scheduler to use SSL when communicating with Kafka (and optionally with Vertica). When the scheduler runs a COPY command to get data from Kafka, it uses its own key and certificate to authenticate with Kafka. If you choose to have the scheduler use TLS/SSL to connect to Vertica, it can reuse the same keystore and truststore to make this connection.
Create a truststore and keystore for the scheduler
Because the scheduler is a separate component, it must have its own key and certificate. The scheduler runs in Java and uses the JDBC interface to connect to Vertica. Therefore, you must create a keystore (when ssl.client.auth=required) and truststore for it to use when making a TLS-encrypted connection to Vertica.
Keep in mind that creating a keystore is optional if your Kafka server sets ssl.client.auth to none or requested.
This process is similar to creating the truststores and keystores for Kafka brokers. The main difference is using the -dname option for keytool to set the Common Name (CN) for the key to a domain wildcard. Using this setting allows the key and certificate to match any host in the network. This option is especially useful if you run multiple schedulers on different servers to provide redundancy. The schedulers can use the same key and certificate, no matter which server they are running on in your domain.
- Create a truststore file for the scheduler. Add the CA certificate that you used to sign the keystores of the Kafka cluster and Vertica cluster. If you are using more than one CA to sign your certificates, add all of the CAs you used.
$ keytool -keystore scheduler.truststore.jks -alias CARoot -import \
  -file root.crt
Enter keystore password: some_password
Re-enter new password: some_password
Owner: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Issuer: EMAILADDRESS=myemail@mycompany.com, CN=*.mycompany.com, O=MyCompany, L=Cambridge, ST=MA, C=US
Serial number: c3f02e87707d01aa
Valid from: Fri Mar 22 13:37:37 EDT 2019 until: Sun Apr 21 13:37:37 EDT 2019
Certificate fingerprints:
   MD5:  73:B1:87:87:7B:FE:F1:6E:94:55:FD:AF:5D:D0:C3:0C
   SHA1: C0:69:1C:93:54:21:87:C7:03:93:FE:39:45:66:DE:22:18:7E:CD:94
   SHA256: 23:03:BB:B7:10:12:50:D9:C5:D0:B7:58:97:41:1E:0F:25:A0:DB:D0:1E:7D:F9:6E:60:8F:79:A6:1C:3F:DD:D5
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 50 69 11 64 45 E9 CC C5   09 EE 26 B5 3E 71 39 7C  Pi.dE.....&.>q9.
0010: E5 3D 78 16                                        .=x.
]
]
#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]
#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 50 69 11 64 45 E9 CC C5   09 EE 26 B5 3E 71 39 7C  Pi.dE.....&.>q9.
0010: E5 3D 78 16                                        .=x.
]
]
Trust this certificate? [no]: yes
Certificate was added to keystore
-
Initialize the keystore, passing it a wildcard host name as the Common Name. The alias parameter in this command is important, as you use it later to identify the key the scheduler must use when creating SSL connections:
keytool -keystore scheduler.keystore.jks -alias vsched -validity 365 -genkey \ -keyalg RSA -dname CN=*.mycompany.com
Important
If you choose to use a file format other than the standard Java Keystore (JKS) format for your keystore or truststore files, you must use the correct file extension in the filename. For example, suppose you choose to use a keystore and truststore saved in PKCS#12 format. Then your keystore and truststore files must end with the
.pfx
or .p12
extension. If the scheduler does not recognize the file's extension (or there is no extension in the file name), it assumes that the file is in JKS format. If the file is not in JKS format, you will see an error message when starting the scheduler, similar to "Failed to create an SSLSocketFactory when setting up TLS: keystore not found."
-
Export the scheduler's key so you can sign it with the root CA:
$ keytool -keystore scheduler.keystore.jks -alias vsched -certreq \ -file scheduler.unsigned.cert
-
Sign the scheduler key with the root CA:
$ openssl x509 -req -CA root.crt -CAkey root.key -in scheduler.unsigned.cert \ -out scheduler.signed.cert -days 365 -CAcreateserial
-
Re-import the signed certificate into the keystore, using the same alias that you assigned to the key (keytool attaches the signed certificate to the existing key entry only when the aliases match):
$ keytool -keystore scheduler.keystore.jks -alias vsched -import -file scheduler.signed.cert
Set environment variable VKCONFIG_JVM_OPTS
You must pass several settings to the Java Virtual Machine (JVM) that runs the scheduler. These settings tell the JDBC driver where to find the keystore and truststore, as well as the password for the scheduler's key. The easiest way to pass in these settings is to set a Linux environment variable named VKCONFIG_JVM_OPTS. As it starts, the scheduler checks this environment variable and passes any properties defined in it to the JVM.
The properties that you need to set are:
-
javax.net.ssl.keyStore: the absolute path to the keystore file to use.
-
javax.net.ssl.keyStorePassword: the password for the scheduler's key.
-
javax.net.ssl.trustStore: The absolute path to the truststore file.
The Linux command line to set the environment variable is:
export VKCONFIG_JVM_OPTS="$VKCONFIG_JVM_OPTS -Djavax.net.ssl.trustStore=/path/to/truststore \
-Djavax.net.ssl.keyStore=/path/to/keystore \
-Djavax.net.ssl.keyStorePassword=keystore_password"
Note
The previous command preserves any existing contents of the VKCONFIG_JVM_OPTS variable. If you find that the variable has duplicate settings, remove the $VKCONFIG_JVM_OPTS
from your statement so that you override the existing values in the variable.
For example, suppose the scheduler's truststore and keystore are located in the directory /home/dbadmin/SSL
. Then you could use the following command to set the VKCONFIG_JVM_OPTS variable:
$ export VKCONFIG_JVM_OPTS="$VKCONFIG_JVM_OPTS \
-Djavax.net.ssl.trustStore=/home/dbadmin/SSL/scheduler.truststore.jks \
-Djavax.net.ssl.keyStore=/home/dbadmin/SSL/scheduler.keystore.jks \
-Djavax.net.ssl.keyStorePassword=key_password"
Important
The Java property names are case sensitive. To ensure that this variable is always set, add the command to the ~/.bashrc
or other startup file of the user account that runs the scheduler.
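As a sanity check before launching, you can confirm that each TLS property appears exactly once in the variable. This sketch re-uses the example paths and placeholder password from above; a count other than 1 usually means the variable was exported twice without clearing it first:

```shell
# Set the variable as shown above (placeholder paths and password)
export VKCONFIG_JVM_OPTS="$VKCONFIG_JVM_OPTS \
-Djavax.net.ssl.trustStore=/home/dbadmin/SSL/scheduler.truststore.jks \
-Djavax.net.ssl.keyStore=/home/dbadmin/SSL/scheduler.keystore.jks \
-Djavax.net.ssl.keyStorePassword=key_password"

# Each property should be reported exactly once
for prop in trustStore keyStore keyStorePassword; do
    count=$(printf '%s\n' $VKCONFIG_JVM_OPTS | grep -c "javax.net.ssl.${prop}=")
    echo "${prop}: ${count}"
done
```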
If you require TLS on the JDBC connection to Vertica, add TLSmode=require
to the JDBC URL that the scheduler uses. The easiest way to add this is to use the scheduler's --jdbc-url
option. Assuming that you use a configuration file for your scheduler, you can add this line to it:
--jdbc-url=jdbc:vertica://VerticaHost:portNumber/databaseName?user=username&password=password&TLSmode=require
For more information about using JDBC with Vertica, see Programming JDBC client applications.
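One practical note if you ever handle this URL in a shell rather than a configuration file: the & characters in the query string must be quoted, or the shell interprets them as background operators. A quick sketch using the placeholder URL from above:

```shell
# Single quotes keep the & characters literal
JDBC_URL='jdbc:vertica://VerticaHost:portNumber/databaseName?user=username&password=password&TLSmode=require'

# Confirm the TLS parameter survived quoting
case "$JDBC_URL" in
    *'TLSmode=require'*) echo "TLSmode=require present" ;;
    *)                   echo "TLSmode missing" ;;
esac
```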
Enable TLS in the scheduler configuration
Lastly, enable TLS. Every time you run vkconfig
, you must pass it the following options:
--enable-ssl
- Set to true
to enable the scheduler to use SSL when connecting to Kafka.
--ssl-ca-alias
- Alias for the CA you used to sign your Kafka broker's keys. This must match the value you supplied to the
-alias
argument of the keytool command when you imported the CA into the truststore.Note
If you used more than one CA to sign your keys, omit this option so that the scheduler uses all of the CAs you imported into the truststore.
--ssl-key-alias
- Alias assigned to the scheduler key. This value must match the -alias argument you supplied to the keytool command when creating the scheduler's keystore.
--ssl-key-password
- Password for the scheduler key.
See Common vkconfig script options for details of these options. For convenience and security, add these options to a configuration file that you pass to vkconfig. Otherwise, you risk exposing the key password via the process list, which can be viewed by other users on the same system. See Configuration File Format for more information on setting up a configuration file.
Add the following to the scheduler configuration file to allow it to use the keystore and truststore and enable TLS when connecting to Vertica:
enable-ssl=true
ssl-ca-alias=CAroot
ssl-key-alias=vsched
ssl-key-password=vertica
jdbc-url=jdbc:vertica://VerticaHost:portNumber/databaseName?user=username&password=password&TLSmode=require
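Before starting the scheduler, a quick check that the configuration file contains every TLS setting can save a debugging round trip. This sketch recreates the example file above (with the same placeholder values) and verifies that each key is present:

```shell
# Recreate the example configuration (placeholder values from above)
cat > weblog.conf <<'EOF'
enable-ssl=true
ssl-ca-alias=CAroot
ssl-key-alias=vsched
ssl-key-password=vertica
jdbc-url=jdbc:vertica://VerticaHost:portNumber/databaseName?user=username&password=password&TLSmode=require
EOF

# Every TLS-related option should be present
for key in enable-ssl ssl-ca-alias ssl-key-alias ssl-key-password jdbc-url; do
    grep -q "^${key}=" weblog.conf && echo "${key}: ok" || echo "${key}: MISSING"
done
```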
Start the scheduler
Once you have configured the scheduler to use SSL, start it and verify that it can load data. For example, to start the scheduler with a configuration file named weblog.conf
, use the command:
$ nohup vkconfig launch --conf weblog.conf >/dev/null 2>&1 &