Hardening Cassandra Step by Step - Part 2 Hostname Verification for Internode Encryption
Overview
This article looks at hostname verification for internode encryption, which is designed to prevent man-in-the-middle attacks. This is a follow-up to the first hardening Cassandra post that explored internode encryption. If you have not already done so, take a moment and read through the earlier post before proceeding.
Hostname verification for internode encryption was added in CASSANDRA-9220 and made available in Cassandra 3.6. Clusters that solely rely on importing all node certificates into each truststore are not affected. However, clusters that use the same, common certificate authority (CA) to sign node certificates are potentially affected. When the CA signing process allows other parties to generate certificates for different purposes, those certificates can in turn be used for man-in-the-middle attacks. In the interest of hardening your Cassandra clusters, the instructions below will walk through enabling hostname verification so that we can:
- check that the node certificate is valid
- check that the certificate has been created for the node to which we are about to connect
Hostname verification is not part of TLS itself; it has to be performed by the application layer running on top of Transport Layer Security (TLS). With HTTPS, for example, the process is fully integrated: the HTTPS client verifies that communication is with the correct server by checking that the dnsName in the subjectAltName field of the certificate sent by the server matches the host part of the URL. This prevents attackers from impersonating nodes or redirecting communications between nodes in the cluster.
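The SAN matching that underpins hostname verification can be sketched locally with openssl. This is a toy example, not part of the lab automation; the paths and IP addresses are made up, and the -addext option requires OpenSSL 1.1.1 or later:

```shell
# Create a throwaway self-signed certificate whose SAN lists a single IP address.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/san-demo.key -out /tmp/san-demo.crt -days 1 \
  -subj "/CN=10.0.0.1" -addext "subjectAltName=IP:10.0.0.1"

# -checkip applies the same SAN matching rules used during hostname verification.
openssl x509 -in /tmp/san-demo.crt -noout -checkip 10.0.0.1   # does match
openssl x509 -in /tmp/san-demo.crt -noout -checkip 10.0.0.2   # does NOT match
```

The second check fails for exactly the reason hostname verification exists: a certificate that is otherwise valid does not vouch for an address it was not issued for.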
In this article we will demonstrate this setup step by step with a cluster in AWS that is equipped with internode encryption, including:
- client certificate authentication
- hostname verification
There are a lot of steps involved to set everything up. Some steps need to be performed locally, while others need to be performed on the EC2 instances running in AWS. Because doing everything manually is tedious and error prone, we will use a couple of tools to automate the whole process, namely tlp-cluster and Ansible.
The tlp-cluster tool is used to provision EC2 instances and the Cassandra cluster. Ansible is used to generate all of the certificates and to apply the necessary configuration changes to the Cassandra nodes.
The next couple of sections provide instructions on how to install the tools and how to run the automation. After running through the automation, you should have a fully operational cluster that is configured with internode encryption and hostname verification.
The remainder of the post will then highlight and explain key steps carried out by the automation.
Setup
This section outlines the prerequisites needed for the automation.
Make sure that the following are installed:
Clone the tlp-cluster and the tlp-ansible repos in the same parent directory:
$ cd ~/Development
$ git clone https://github.com/thelastpickle/tlp-cluster
$ git clone https://github.com/thelastpickle/tlp-ansible
Install tlp-cluster
tlp-cluster requires that you have an AWS access key and secret. To get started, add the tlp-cluster/bin directory to your $PATH to avoid having to always type the path to the tlp-cluster executable. For example:
$ export PATH="$PATH:/path/to/tlp-cluster/bin"
$ cd /path/to/tlp-cluster
$ ./gradlew assemble
Run setup.sh
Next we need to run the setup.sh script, which lives in the tlp-ansible repo. The script will run tlp-cluster to provision a cluster in AWS and then generate an Ansible inventory file at tlp-ansible/inventory/hosts.tlp_cluster.
Note that /tmp/cassandra-tls-lab is used as a working directory. setup.sh will create the directory if it does not exist. All work is done in this directory.
If you have not run tlp-cluster before, it will prompt you for some input, notably your AWS credentials.
setup.sh assumes that tlp-cluster and tlp-ansible live in the same parent directory. If this is true, then it can simply be run without any arguments:
$ cd tlp-ansible
$ ./playbooks/tls_lab/setup.sh
However, if tlp-cluster and tlp-ansible have different parent directories, then you will have to provide the path to tlp-cluster:
$ cd tlp-ansible
$ ./playbooks/tls_lab/setup.sh
Usage: setup.sh <path-to-tlp_cluster-repo>
After setup.sh finishes, you will have a three node cluster set up in AWS. However, Cassandra is not started on any of the machines, since we will be applying further configuration changes for internode encryption.
To access the EC2 instances do the following:
$ cd /tmp/cassandra-tls-lab
$ alias ssh="ssh -F sshConfig"
Then you can conveniently log into your EC2 instances with:
$ ssh cassandra[0|1|2] # e.g., ssh cassandra2
Run the Ansible Playbooks
Next, we will run the Ansible playbooks that configure the cluster for internode encryption. The node certificates need to have a Subject Alternative Name (SAN) in order for hostname verification to work. We will first configure the cluster using certificates that do not have a SAN. This will allow us to see what kinds of errors may occur.
Run the following from the tlp-ansible directory:
$ ansible-playbook -i inventory/hosts.tlp_cluster playbooks/tls_lab/internode_tls_no_ssl_ext.yml
The internode_tls_no_ssl_ext.yml playbook generates all of the keys, certificates, and keystores locally. It then copies the certificates and keystores to the EC2 machines. Lastly, it updates /etc/cassandra/cassandra.yaml and starts Cassandra on each machine.
It is worth reiterating that certificates and keystores are created locally. Creating them on the EC2 instances would require copying the CA private key to those machines, which we want to avoid doing. In a production environment, we would want additional measures in place that provide things like:
- lifecycle management of keys
- automatic key rotation
- audit logging
- protection of keys using a hardware security module (HSM)
- strict policy controls to prevent misuse of keys
Now log into one of the nodes and check the cluster status.
$ ssh cassandra0
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.31.42.245 114.48 KiB 256 100.0% 367efca2-664c-4e75-827c-24238af173c9 rack1
It only reports one node. You will find the same if you check the other nodes. This is because the nodes are unable to gossip with one another. This error from /var/log/cassandra/system.log reveals the problem:
ERROR [MessagingService-Outgoing-/172.31.7.158-Gossip] 2019-06-06 17:11:30,991 OutboundTcpConnection.java:538 - SSL handshake error for outbound connection to 22081e9e[SSL_NULL_WITH_NULL_NULL: Socket[addr=/172.31.7.158,port=7001,localport=40560]]
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names present
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) ~[na:1.8.0_212]
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946) ~[na:1.8.0_212]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316) ~[na:1.8.0_212]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310) ~[na:1.8.0_212]
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639) ~[na:1.8.0_212]
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223) ~[na:1.8.0_212]
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037) ~[na:1.8.0_212]
at sun.security.ssl.Handshaker.process_record(Handshaker.java:965) ~[na:1.8.0_212]
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064) ~[na:1.8.0_212]
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367) ~[na:1.8.0_212]
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:750) ~[na:1.8.0_212]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123) ~[na:1.8.0_212]
at java.nio.channels.Channels$WritableByteChannelImpl.write(Channels.java:458) ~[na:1.8.0_212]
at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.doFlush(BufferedDataOutputStreamPlus.java:323) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.flush(BufferedDataOutputStreamPlus.java:331) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:462) [apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:262) [apache-cassandra-3.11.4.jar:3.11.4]
Caused by: java.security.cert.CertificateException: No subject alternative names present
at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:145) ~[na:1.8.0_212]
at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) ~[na:1.8.0_212]
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) ~[na:1.8.0_212]
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) ~[na:1.8.0_212]
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:200) ~[na:1.8.0_212]
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124) ~[na:1.8.0_212]
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621) ~[na:1.8.0_212]
... 12 common frames omitted
Near the top of the message we see the exact cause of the error, which occurs during the SSL handshake:
java.security.cert.CertificateException: No subject alternative names present
The internode_tls_no_ssl_ext.yml playbook generated certificates that do not have a Subject Alternative Name (SAN). If we were to disable hostname verification on each node (and restart), everything would work fine. We, of course, want to leave hostname verification enabled; so, we will run a playbook that generates SAN certificates.
Run the following from the tlp-ansible directory:
$ ansible-playbook -i inventory/hosts.tlp_cluster playbooks/tls_lab/internode_tls.yml
The internode_tls.yml playbook does everything that was done previously, except that it generates SAN certificates.
Now log onto one of the machines and check the cluster status:
$ ssh cassandra1
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.31.29.239 280.9 KiB 256 67.1% 26a5b2e1-1079-4879-9e3f-31366852f095 rack1
UN 172.31.7.158 281.17 KiB 256 66.4% 0fab0bb3-0326-48f1-a49f-e4d8196e460b rack1
UN 172.31.42.245 280.36 KiB 256 66.5% 367efca2-664c-4e75-827c-24238af173c9 rack1
Nodes are now able to gossip with each other. Look for this message in the log to verify that internode encryption is in fact enabled:
INFO [main] 2019-06-06 17:25:13,962 MessagingService.java:704 - Starting Encrypted Messaging Service on SSL port 7001
We now have a functioning cluster that uses two-way (or mutual) TLS authentication with hostname verification.
Cleanup
When we are all done with the cluster, run:
$ cd /tmp/cassandra-tls-lab
$ tlp-cluster down -a
This will destroy the EC2 instances.
Review the Steps
One goal of this post is to make it easy to set up a cluster with internode encryption along with hostname verification. We accomplished this by using tlp-cluster and Ansible. We used tlp-cluster to provision EC2 instances and a Cassandra cluster. We then ran Ansible playbooks to configure the Cassandra nodes with internode encryption enabled. This involved generating keys, certificates, and keystores and then applying the necessary configuration changes to Cassandra.
Another goal of this post is to detail how to set up internode encryption and hostname verification without having to be well versed in Ansible. The following sections highlight the key steps for setting up a cluster with internode encryption that includes both client authentication and hostname verification.
Note that everything discussed in the following sections is implemented in the Ansible playbooks that we previously ran; however, not all details from the playbooks are covered.
Create Certificate Authority
Ansible generates our own CA for signing all of the node certificates. As in the earlier post, we use openssl to generate and sign certificates.
Ansible Note: Most of the work done to generate the CA is in the cassandra_ca role, which is located at tlp-ansible/roles/cassandra_ca.
First we need a working directory. The setup.sh script creates this for us, if it does not already exist, at:
/tmp/cassandra-tls-lab/cassandra
Next we need an SSL configuration file. Ansible generates that for us at /tmp/cassandra-tls-lab/cassandra/ssl.cnf. It looks something like:
[ req ]
distinguished_name = req_distinguished_name
prompt = no
output_password = cassandra
default_bits = 2048
[ req_distinguished_name ]
C = US
ST = North Carolina
L = Clayton
O = The Last Pickle
OU = tls_lab
CN = CassandraCA
emailAddress = info@thelastpickle.com
Ansible Note: Ansible uses the Jinja2 templating engine for templates.
Ansible Note: You can update variables in tlp-ansible/roles/tls_common/defaults/main.yml to change the generated output in ssl.cnf.
Next, the CA certificate and private key are created with:
$ openssl req -config /tmp/cassandra-tls-lab/cassandra/ssl.cnf -new -x509 -keyout /tmp/cassandra-tls-lab/cassandra/ca.key -out /tmp/cassandra-tls-lab/cassandra/ca.crt -days 365
We can verify the contents of the certificate with:
$ cd /tmp/cassandra-tls-lab/cassandra
$ openssl x509 -in ca.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number:
b7:b6:74:45:b1:99:26:3b
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=NC, L=Clayton, O=The Last Pickle, OU=sslverify, CN=CassandraCA/emailAddress=info@thelastpickle.com
Validity
Not Before: May 30 17:47:57 2019 GMT
Not After : May 29 17:47:57 2020 GMT
Subject: C=US, ST=NC, L=Clayton, O=The Last Pickle, OU=sslverify, CN=CassandraCA/emailAddress=info@thelastpickle.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:cf:00:4e:a0:20:07:a8:e8:d7:7e:14:a5:7d:ad:
38:cc:bd:99:a1:8b:02:ed:9f:27:52:a7:50:59:5b:
8e:e9:ee:e6:42:74:30:06:fb:f3:f9:5c:68:93:93:
35:4c:26:b1:b7:c6:9e:e3:50:25:ad:e2:43:90:12:
68:c4:05:98:e8:9d:74:18:d3:f5:09:a1:71:10:60:
aa:48:a0:7d:fe:d7:9a:0c:25:ae:16:e9:5f:ca:b0:
8d:70:be:5b:b3:80:8e:33:b8:6e:7e:9f:3d:d8:31:
7e:ca:85:cc:be:c5:50:82:99:cb:16:ab:6c:84:ec:
9c:5f:cd:ed:b1:58:b5:5e:b3:be:56:41:f8:7e:72:
17:5b:9e:78:8f:9c:be:8b:f8:56:f9:b5:90:b5:84:
b4:74:e8:da:9e:dd:fd:07:db:85:b3:f2:fd:9e:af:
4e:e1:5e:da:23:4f:ec:7b:1b:fa:87:51:86:60:9c:
af:00:79:55:8c:b1:50:e9:a8:b0:9f:e3:e4:93:82:
77:94:78:f9:6e:ea:7d:6b:41:a5:29:29:d2:1b:70:
c3:dd:6d:5d:b7:1b:a4:70:70:af:55:2f:62:b3:dc:
93:a7:f8:6c:08:24:44:de:de:67:33:dd:bf:12:73:
91:e9:b8:84:60:a5:b2:ba:1f:21:36:fa:0b:5e:dc:
d6:0d
Exponent: 65537 (0x10001)
Signature Algorithm: sha256WithRSAEncryption
57:9e:3c:46:96:92:ce:0d:d1:c5:ad:63:d0:60:25:77:83:f2:
43:78:47:8d:26:80:00:7f:b9:4c:a5:a1:4a:92:23:4c:63:fb:
ec:1d:a2:35:c7:10:65:4c:75:4f:bb:a2:4b:13:fe:7e:6a:19:
d0:9c:b2:e9:48:0d:3c:ac:94:8f:65:be:f5:e1:c1:6b:f1:ba:
d5:06:90:b1:37:4d:ef:88:57:da:3b:08:b5:72:fd:e7:db:0f:
fe:da:1e:c0:fc:76:c1:3b:00:8e:fd:b5:c2:79:c8:a0:94:93:
48:3d:94:9d:47:f6:8a:96:04:a2:53:9c:cd:2c:13:d6:e8:b3:
0d:08:cf:16:ce:5d:37:15:ca:88:4b:ea:d5:5c:5b:a2:c8:fc:
44:83:fa:7e:78:87:4f:5b:21:e0:03:c8:5f:7e:7a:01:a0:fc:
f5:22:46:1d:48:3d:e6:12:78:93:b5:74:6f:f6:0e:99:1b:f9:
44:ea:90:a3:04:cb:cd:9b:1a:36:02:fa:38:be:08:ca:fc:53:
cd:2a:0b:09:26:0e:45:d1:7d:dc:ea:3d:76:40:e6:58:3c:c1:
a1:86:b4:6e:10:9c:c9:cf:e2:3c:a2:b0:63:2d:c1:a0:9f:39:
f8:c1:36:99:a3:b4:02:78:20:05:cb:ae:a4:9b:24:9a:13:84:
22:43:b1:03
Things to note
- The line Signature Algorithm: sha256WithRSAEncryption tells us that the certificate was signed with an RSA key using a SHA-256 digest.
- The issuer and the subject will be the same for root certificates.
- This task is performed locally, not on the remote EC2 instances.
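The point about root certificates is easy to check for yourself. This sketch assumes the CA certificate created above at its lab path; for a self-signed root, the two lines printed are identical:

```shell
# A root certificate is self-signed, so its issuer equals its subject.
openssl x509 -in /tmp/cassandra-tls-lab/cassandra/ca.crt -noout -issuer -subject
```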
Create the Truststore
Each node needs a truststore to verify requests from other nodes, i.e., to perform client certificate authentication (also referred to as mutual TLS). Because all of our node certificates will be signed by the CA, our nodes can share the same truststore.
Ansible Note: The cassandra_ca role located at tlp-ansible/roles/cassandra_ca performs the tasks to generate the truststore.
The truststore file is created with the keytool command by importing the CA certificate. Ansible runs the following command:
$ keytool -keystore /tmp/cassandra-tls-lab/cassandra/truststore.p12 -alias CassandraCARoot -importcert -file /tmp/cassandra-tls-lab/cassandra/ca.crt -keypass cassandra -storepass cassandra -storetype pkcs12 -noprompt
Certificate was added to keystore
We can verify the contents of the truststore with:
$ keytool -v -list -keystore /tmp/cassandra-tls-lab/cassandra/truststore.p12 -storepass cassandra
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 1 entry
Alias name: cassandracaroot
Creation date: Jun 5, 2019
Entry type: trustedCertEntry
Owner: EMAILADDRESS=info@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Issuer: EMAILADDRESS=tls_lab@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Serial number: c733567b30c14a81
Valid from: Tue Jun 04 12:37:58 EDT 2019 until: Wed Jun 03 12:37:58 EDT 2020
Certificate fingerprints:
MD5: D3:29:03:D3:00:11:A3:C2:B0:E2:B9:5F:A1:CB:C0:F0
SHA1: 88:18:44:FC:F6:76:A0:2E:A8:D1:02:2E:E8:C5:FF:5D:EB:72:C4:9D
SHA256: 33:12:AF:A7:9B:29:90:28:E2:01:7C:9C:DA:88:4F:23:7C:1A:DC:90:99:78:FD:79:D5:CB:14:E8:5F:0E:94:67
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 1
*******************************************
*******************************************
Notice that the keystore type is PKCS12. We are using the PKCS #12 (.p12) format for our keystore files instead of the default, JKS. PKCS #12 is a standardized archive format for storing keys and certificates, so it can be used not only in Java but also in other languages and tools.
Create Node Keystores
Each node needs a keystore which will store its certificate. A keystore is needed for one-way TLS authentication while the truststore is needed to enable two-way authentication.
Ansible performs the following tasks:
- create a directory for storing keys, certificates, and keystores
- create a config file that is used when generating the key and certificate
- generate the private key
- generate the PKCS #12 file, i.e., the keystore
- generate a Certificate Signing Request (CSR)
- sign the certificate with the CA
- import the CA and the signed certificate into the keystore
Ansible Note: Most of this work is implemented in the node_keystores
role located at tlp-ansible/roles/node_keystores
.
Create Directory and Config File
Ansible will create a directory for each EC2 instance. The directory name will be the public IP address of the machine. You should see something like this:
$ ls -1 /tmp/cassandra-tls-lab/cassandra/
34.219.169.240
54.188.182.229
54.201.133.12
ca.crt
ca.key
ca.srl
ssl.cnf
truststore.p12
Inside each host subdirectory you will find an ssl.cnf that Ansible generated. It should look like:
[ req ]
distinguished_name = req_distinguished_name
prompt = no
output_password = cassandra
default_bits = 2048
req_extensions = v3_req
[ req_distinguished_name ]
C = US
ST = NC
L = Clayton
O = The Last Pickle
OU = tls_lab
CN = 34.219.169.240
emailAddress = tls_lab@thelastpickle.com
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 172.31.41.144
Things to note
- The common name field, CN, is set to the EC2 instance's public IP address.
- The certificate uses a SAN, which is specified by the subjectAltName field. This is necessary for hostname verification.
- The first alternative IP address, IP.1, is set to the private address of the EC2 instance.
- This task is performed locally, not on the remote EC2 instances.
Ansible note: The same Jinja2 template that was used to generate the ssl.cnf file for the CA is used here. The variable ssl_extensions_enabled controls whether or not the SAN is included in ssl.cnf.
Generate the RSA Key
We need to generate a private key for each host machine. The key will be used to sign node certificates. Ansible uses the following openssl
command to generate the private key for each host:
$ openssl genrsa -des3 -out <host-dir>/node.key -passout pass:cassandra 2048
where <host-dir> might be /tmp/cassandra-tls-lab/cassandra/34.219.169.240.
The -des3 option specifies that the Triple DES cipher is used to encrypt the key.
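You can confirm that a generated key is intact and that its passphrase works; the path here is hypothetical, and openssl prints "RSA key ok" on success:

```shell
# Decrypt the key in memory and run openssl's consistency checks on it.
openssl rsa -in node.key -check -passin pass:cassandra -noout
```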
Generate the Certificate Signing Request (CSR)
Next Ansible generates the CSR with:
$ openssl req -config <host-dir>/ssl.cnf -new -key <host-dir>/node.key -out <host-dir>/node.csr -passin pass:cassandra
where <host-dir> might be /tmp/cassandra-tls-lab/cassandra/34.219.169.240.
Sign the Certificate
Ansible then signs each CSR with the CA:
$ openssl x509 -req -CA /tmp/cassandra-tls-lab/cassandra/ca.crt -CAkey /tmp/cassandra-tls-lab/cassandra/ca.key -in <host-dir>/node.csr -out <host-dir>/node.crt -days 365 -CAcreateserial -extensions v3_req -extfile <host-dir>/ssl.cnf -passin pass:cassandra
where <host-dir> might be /tmp/cassandra-tls-lab/cassandra/34.219.169.240.
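Before moving on, it is worth checking that the signed certificate actually chains back to our CA; openssl verify reports OK when it does. The paths follow this lab's layout:

```shell
# Verify the node certificate against the CA that signed it.
openssl verify -CAfile /tmp/cassandra-tls-lab/cassandra/ca.crt <host-dir>/node.crt
```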
We can verify the contents of the certificate with:
$ openssl x509 -in node.crt -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
95:a6:a4:1d:64:4c:21:a2
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=NC, L=Clayton, O=The Last Pickle, OU=tls_lab, CN=CassandraCA/emailAddress=info@thelastpickle.com
Validity
Not Before: Jun 4 21:52:15 2019 GMT
Not After : Jun 3 21:52:15 2020 GMT
Subject: C=US, ST=NC, L=Clayton, O=The Last Pickle, OU=tls_lab, CN=34.217.54.220/emailAddress=info@thelastpickle.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:e3:50:56:e8:d3:e7:75:0a:fd:8a:30:14:dd:51:
b7:b0:51:c6:d1:d5:61:ed:0d:bd:ae:b6:57:a7:58:
5c:cf:50:48:a5:cc:d6:e7:5a:d3:87:19:d8:0c:0c:
5f:2b:7d:0f:1f:0d:eb:d8:48:6b:41:79:d8:2c:fe:
87:ad:da:c8:8b:54:49:94:36:1e:10:00:a9:99:bd:
7d:6e:bd:91:d6:35:70:df:36:aa:74:3b:64:09:e3:
1b:03:36:2c:55:8b:26:8b:10:ed:7f:fa:98:89:4d:
94:9c:db:3e:65:8e:2c:29:3f:c1:c1:19:1b:6a:8b:
c4:9d:29:7a:ac:9e:a8:48:93:6e:45:ba:a1:5d:7b:
a0:7c:41:8a:22:4d:1e:47:2b:a9:8d:80:bc:b4:12:
df:d5:80:9f:5b:ec:73:94:9d:b4:a5:bd:9a:b6:ff:
a0:34:5c:ad:23:b3:51:a7:45:6f:35:a7:2c:78:bc:
a4:4a:9e:1d:da:97:8f:57:4c:6f:67:70:72:f9:19:
2e:c0:9a:34:f5:23:5c:09:b6:05:c4:4d:4c:a4:e7:
59:bf:0c:76:63:92:6e:c9:bc:24:59:85:ac:24:5a:
20:54:1c:ac:ef:83:cc:2f:fa:8e:bb:fc:e6:10:6d:
5d:57:63:a2:0b:e9:3e:10:83:05:1d:1c:c0:64:fa:
df:31
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Subject Alternative Name:
IP Address:172.31.7.37
Signature Algorithm: sha256WithRSAEncryption
96:52:42:a7:1c:26:10:4b:c6:d9:6e:45:55:2c:a8:43:e7:37:
13:1e:10:fc:b6:30:f4:11:54:5f:89:db:9e:20:b8:9e:78:1f:
69:bf:78:74:10:68:c4:4b:7c:40:5d:a6:7c:e9:9f:d5:90:77:
68:b6:24:33:4d:02:95:83:00:79:43:41:02:8c:4f:ff:de:19:
16:90:b0:f0:7e:4f:ec:ea:7d:8e:a5:f3:e8:a1:91:07:0d:88:
b1:71:b6:af:a8:6e:5e:3b:9b:39:36:28:3a:3c:93:d1:bb:07:
f7:1a:b5:e1:c7:5f:68:45:28:80:f4:14:43:6c:23:f1:4f:49:
4f:d1:3d:8a:3a:5d:68:e2:13:dc:39:96:43:eb:25:dc:7f:72:
ec:54:31:a3:2f:ed:e3:70:0d:f7:31:16:54:96:e1:ce:db:c6:
29:12:d5:b4:15:3d:c6:11:8a:43:58:05:5a:1c:46:72:35:10:
04:fc:1f:89:f0:d7:82:03:93:c8:1e:9e:20:1a:74:0a:77:99:
c5:c2:ba:5f:e1:9f:3d:2f:8b:2e:41:df:56:af:cb:20:73:23:
63:76:d6:ef:c0:e6:7f:04:1a:a6:5c:6d:30:25:20:7c:1e:bd:
fd:65:e8:39:b1:59:eb:4d:c1:d3:7e:c3:4b:30:11:c1:dd:a2:
9a:8b:f3:4c
Things to note
- Make sure that you see the X509v3 extensions section and that it includes a Subject Alternative Name.
- The signing command includes -extensions v3_req. Without it, the certificate will not include the SAN.
- This task is performed locally, not on the remote EC2 instances.
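Instead of scanning the full certificate dump, newer openssl releases (1.1.1 and later) can print just the SAN extension; the path here is hypothetical:

```shell
# Show only the Subject Alternative Name extension of the node certificate.
openssl x509 -in node.crt -noout -ext subjectAltName
```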
Generate the Keystore
Ansible generates the keystore with:
$ openssl pkcs12 -export -out <host-dir>/keystore.p12 -inkey <host-dir>/node.key -in <host-dir>/node.crt -name <host> -passin pass:cassandra -passout pass:cassandra
where <host> might be 34.219.169.240 and <host-dir> would then be /tmp/cassandra-tls-lab/cassandra/34.219.169.240.
Things to note
- The .p12 file is our keystore.
- We are using the PKCS #12 file format instead of the default, JKS.
- This task is performed locally, not on the remote EC2 instances.
Import the CA Certificate
We need to import the CA certificate into the keystore in order to properly establish the trust chain. We do this with the keytool
command.
Ansible runs the following for each host machine:
$ keytool -keystore <host-dir>/keystore.p12 -alias CassandraCARoot -import -file /tmp/cassandra-tls-lab/cassandra/ca.crt -noprompt -keypass cassandra -storepass cassandra
Certificate was added to keystore
where <host-dir> might be /tmp/cassandra-tls-lab/cassandra/34.219.169.240.
We can verify the contents of the keystore:
$ keytool -list -keystore keystore.p12 -storepass cassandra
Keystore type: PKCS12
Keystore provider: SUN
Your keystore contains 2 entries
Alias name: 34.217.54.220
Creation date: Jun 5, 2019
Entry type: PrivateKeyEntry
Certificate chain length: 2
Certificate[1]:
Owner: EMAILADDRESS=info@thelastpickle.com, CN=34.217.54.220, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Issuer: EMAILADDRESS=info@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Serial number: 95a6a41d644c21a2
Valid from: Tue Jun 04 17:52:15 EDT 2019 until: Wed Jun 03 17:52:15 EDT 2020
Certificate fingerprints:
MD5: 89:FC:08:8D:15:E7:A0:84:AE:6A:3C:CF:88:B3:E4:24
SHA1: 0A:BA:A1:EF:A6:9F:A0:77:C8:89:B4:79:78:F5:2F:51:3B:8F:E9:F7
SHA256: AA:54:78:C8:13:73:65:A3:AF:05:68:8E:45:7F:8E:70:3E:0C:6C:43:3A:17:07:84:D3:88:49:56:0C:61:BC:F5
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions:
#1: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
CA:false
PathLen: undefined
]
#2: ObjectId: 2.5.29.15 Criticality=false
KeyUsage [
DigitalSignature
Non_repudiation
Key_Encipherment
]
#3: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
IPAddress: 172.31.7.37
]
Certificate[2]:
Owner: EMAILADDRESS=info@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Issuer: EMAILADDRESS=info@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Serial number: c9f85a64b75bf080
Valid from: Tue Jun 04 17:52:09 EDT 2019 until: Wed Jun 03 17:52:09 EDT 2020
Certificate fingerprints:
MD5: 78:A6:01:CA:46:FE:01:F3:A7:AC:EB:62:02:69:37:57
SHA1: F0:CE:99:21:20:9E:FF:6A:0B:88:D3:DF:62:37:54:22:73:87:D7:CD
SHA256: D8:63:B2:D7:6D:5E:A1:15:92:0A:17:41:9A:47:E5:64:40:F0:03:FF:7B:00:78:34:D6:AC:7B:F0:2C:2B:D1:65
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 1
*******************************************
*******************************************
Alias name: cassandracaroot
Creation date: Jun 5, 2019
Entry type: trustedCertEntry
Owner: EMAILADDRESS=info@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Issuer: EMAILADDRESS=info@thelastpickle.com, CN=CassandraCA, OU=tls_lab, O=The Last Pickle, L=Clayton, ST=NC, C=US
Serial number: c9f85a64b75bf080
Valid from: Tue Jun 04 17:52:09 EDT 2019 until: Wed Jun 03 17:52:09 EDT 2020
Certificate fingerprints:
MD5: 78:A6:01:CA:46:FE:01:F3:A7:AC:EB:62:02:69:37:57
SHA1: F0:CE:99:21:20:9E:FF:6A:0B:88:D3:DF:62:37:54:22:73:87:D7:CD
SHA256: D8:63:B2:D7:6D:5E:A1:15:92:0A:17:41:9A:47:E5:64:40:F0:03:FF:7B:00:78:34:D6:AC:7B:F0:2C:2B:D1:65
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 1
*******************************************
*******************************************
Things to note
- This task is performed locally, not on the remote EC2 instances.
- The keystore should have two entries. The first entry should be a PrivateKeyEntry that includes the node certificate and private key.
- The first entry should include a SAN with the EC2 instance's private IP address.
- The second keystore entry should be a trustedCertEntry that contains the CA certificate.
Copy the Keystore Files
After the keystores have been updated, Ansible copies both the keystore and truststore files to the host machines. The files are stored in /etc/cassandra.
Ansible Note: The copy tasks are defined in tlp-ansible/roles/node_keystores/tasks/copy_keystores.yml.
Configure the Cluster
The Ansible playbooks update /etc/cassandra/cassandra.yaml on each host machine. The server_encryption_options property is set to:
server_encryption_options:
internode_encryption: all
keystore: ./conf/keystore.p12
keystore_password: cassandra
truststore: ./conf/truststore.p12
truststore_password: cassandra
require_client_auth: true
require_endpoint_verification: true
store_type: PKCS12
After updating cassandra.yaml, Ansible restarts the node.
Ansible Note: The tasks for updating the configuration and restarting Cassandra are defined in the cassandra_configuration role, which is located at tlp-ansible/roles/cassandra_configuration.
Verify Cluster State
Finally, check that the cluster is in a healthy state:
$ ssh cassandra0
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.31.29.239 280.9 KiB 256 67.1% 26a5b2e1-1079-4879-9e3f-31366852f095 rack1
UN 172.31.7.158 281.17 KiB 256 66.4% 0fab0bb3-0326-48f1-a49f-e4d8196e460b rack1
UN 172.31.42.245 280.36 KiB 256 66.5% 367efca2-664c-4e75-827c-24238af173c9 rack1
Look for this message in the log to verify that internode encryption is enabled:
INFO [main] 2019-06-06 17:25:13,962 MessagingService.java:704 - Starting Encrypted Messaging Service on SSL port 7001
We now have a functioning cluster that uses two-way (or mutual) TLS authentication with hostname verification.
Ansible Note: The playbooks do not verify the cluster state. That would be a good enhancement!
Conclusion
Transport Layer Security is necessary for securing a cluster. TLS without hostname verification leaves the cluster vulnerable. This article explained how to set up mutual TLS along with hostname verification and walked through all the steps so that you can secure internode communication.
In the next security post, we will look at client-to-node encryption. Stay tuned!