Course: Apache Kafka® Security

Hands On: Setting Up Encryption

6 min

Dan Weston

Senior Curriculum Developer


In this exercise, we’ll show you how to put a lot of the things we’ve talked about so far into practice. First, we’ll set up our Kafka environment to use SSL/TLS to encrypt our data in motion by creating certificates, and then we'll configure our brokers to use SSL.

To follow along you’ll need to clone the GitHub repository for this course, so head to https://github.com/confluentinc/learn-kafka-courses and clone the repo. The files for this course are located in the fund-kafka-security folder.

Before we start Kafka, there are a few changes we need to make to get things set up.

  1. First, let's take a look at the docker-compose file, which contains the instructions for Docker as well as our server configuration parameters.

This environment has three instances of ZooKeeper as well as three brokers. Each broker currently has two listeners configured: the default PLAINTEXT listener and the internal BROKER listener. You can also see the advertised listeners configured on the next line.

Let's go ahead and comment both of those lines out, and uncomment the next two lines, which add the SSL listener. We’ll want to do that for each broker, so the listener lines end up looking like this:

Broker 1:
      #KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:19092,BROKER://0.0.0.0:9092
      #KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092,BROKER://kafka-1:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:19092,SSL://0.0.0.0:19093,BROKER://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092,SSL://localhost:19093,BROKER://kafka-1:9092
Broker 2:
      #KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,BROKER://0.0.0.0:9092
      #KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092,BROKER://kafka-2:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,SSL://0.0.0.0:29093,BROKER://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092,SSL://localhost:29093,BROKER://kafka-2:9092
Broker 3:
      #KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:39092,BROKER://0.0.0.0:9092
      #KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092,BROKER://kafka-3:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:39092,SSL://0.0.0.0:39093,BROKER://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092,SSL://localhost:39093,BROKER://kafka-3:9092

Notice that we also have KAFKA_LISTENER_SECURITY_PROTOCOL_MAP set to accept SSL connections. This is the property that maps each listener name to the communication protocol it uses.
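For reference, that mapping lives in the same environment block as the listener settings. Here is a rough sketch of what it might look like for one broker; the exact values in the course's docker-compose.yml may differ, so treat this as illustrative only:

```yaml
# Illustrative fragment -- check the course's docker-compose.yml for exact values.
# Each listener name on the left maps to the security protocol it speaks.
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL,BROKER:PLAINTEXT
# The named listener the brokers use to talk to each other
KAFKA_INTER_BROKER_LISTENER_NAME: BROKER
```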

  2. Next, we'll create the certificate authority (CA) key and certificate by running the following command in the terminal. (In this exercise we are using a self-signed certificate; as mentioned in the previous modules, in a production environment we recommend using a trusted certificate authority instead.)
openssl req -new -nodes \
   -x509 \
   -days 365 \
   -newkey rsa:2048 \
   -keyout /home/training/learn-kafka-courses/fund-kafka-security/ca.key \
   -out /home/training/learn-kafka-courses/fund-kafka-security/ca.crt \
   -config /home/training/learn-kafka-courses/fund-kafka-security/ca.cnf

We are creating a new key and certificate that is valid for 365 days, uses a 2048-bit RSA key, and takes its values from the ca.cnf file on the machine. Feel free to take a look at that file if you are interested in the parameters used to create the certificate.

You should see an output similar to the one on-screen, as well as the two new files (ca.key and ca.crt) that appear in the fund-kafka-security directory.
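If you'd like to double-check a certificate you've generated, openssl x509 can print its fields. The sketch below runs the same kind of req command against a throwaway CA in a temporary directory and then inspects it; the demo-ca name and temp paths are illustrative, not the course's files:

```shell
# Generate a throwaway self-signed CA in a temp directory (illustrative
# names only), mirroring the ca.key/ca.crt created in this step
tmpdir=$(mktemp -d)
openssl req -new -nodes -x509 -days 365 -newkey rsa:2048 \
    -subj "/CN=demo-ca" \
    -keyout "$tmpdir/ca.key" \
    -out "$tmpdir/ca.crt" 2>/dev/null
# Print the subject and validity window of the new certificate
openssl x509 -in "$tmpdir/ca.crt" -noout -subject -dates
```

On the real ca.crt, you'd point -in at the file created in this step instead.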

  3. We now need to combine those two files into a single .pem file:
cat /home/training/learn-kafka-courses/fund-kafka-security/ca.crt \
    /home/training/learn-kafka-courses/fund-kafka-security/ca.key \
    > /home/training/learn-kafka-courses/fund-kafka-security/ca.pem
  4. Create the server key and certificate signing request (CSR) by running the following command:
openssl req -new \
    -newkey rsa:2048 \
    -keyout /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.key \
    -out /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.csr \
    -config /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.cnf \
    -nodes
  5. Then sign the CSR with the certificate authority:
openssl x509 -req \
    -days 3650 \
    -in /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.csr \
    -CA /home/training/learn-kafka-courses/fund-kafka-security/ca.crt \
    -CAkey /home/training/learn-kafka-courses/fund-kafka-security/ca.key \
    -CAcreateserial \
    -out /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.crt \
    -extfile /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.cnf \
    -extensions v3_req
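To convince yourself that a signed certificate actually chains back to its CA, openssl verify is handy. Here is a self-contained sketch of the same CSR-and-sign flow in a temporary directory; every name in it (demo-ca, kafka-demo, server.*) is made up for illustration:

```shell
# Recreate the CSR-and-sign flow end to end in a temp directory
# (all file names here are illustrative, not the course's real paths)
tmpdir=$(mktemp -d)
# Throwaway CA
openssl req -new -nodes -x509 -days 365 -newkey rsa:2048 \
    -subj "/CN=demo-ca" -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
# Server key and certificate signing request
openssl req -new -nodes -newkey rsa:2048 \
    -subj "/CN=kafka-demo" -keyout "$tmpdir/server.key" -out "$tmpdir/server.csr" 2>/dev/null
# Sign the CSR with the CA
openssl x509 -req -days 365 -in "$tmpdir/server.csr" \
    -CA "$tmpdir/ca.crt" -CAkey "$tmpdir/ca.key" -CAcreateserial \
    -out "$tmpdir/server.crt" 2>/dev/null
# The signed certificate should chain back to the CA
openssl verify -CAfile "$tmpdir/ca.crt" "$tmpdir/server.crt"
```

The final command prints the certificate path followed by OK when the chain is valid.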
  6. Next, we'll convert the signed server certificate to the PKCS12 format:
openssl pkcs12 -export \
    -in /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.crt \
    -inkey /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.key \
    -chain \
    -CAfile /home/training/learn-kafka-courses/fund-kafka-security/ca.pem \
    -name kafka-1 \
    -out /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.p12 \
    -password pass:confluent
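If you want to peek inside a .p12 bundle, openssl pkcs12 can list its certificate entries. Here is a self-contained sketch using throwaway names (on the real bundle you'd point -in at kafka-1.p12 instead):

```shell
# Bundle a throwaway key and certificate into PKCS#12, then list the bundle
# (demo names are illustrative, not the course's real files)
tmpdir=$(mktemp -d)
openssl req -new -nodes -x509 -days 365 -newkey rsa:2048 \
    -subj "/CN=demo" -keyout "$tmpdir/demo.key" -out "$tmpdir/demo.crt" 2>/dev/null
openssl pkcs12 -export \
    -in "$tmpdir/demo.crt" -inkey "$tmpdir/demo.key" \
    -name demo \
    -out "$tmpdir/demo.p12" \
    -password pass:confluent
# Print the certificate entries without exposing private key material
openssl pkcs12 -in "$tmpdir/demo.p12" -passin pass:confluent -nokeys 2>/dev/null
```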
  7. Now, we need to create the broker keystore and import the certificate:
keytool -importkeystore \
    -deststorepass confluent \
    -destkeystore /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka.kafka-1.keystore.pkcs12 \
    -srckeystore /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1.p12 \
    -deststoretype PKCS12  \
    -srcstoretype PKCS12 \
    -noprompt \
    -srcstorepass confluent
  8. Verify the kafka-1 broker keystore:
keytool -list -v \
    -keystore /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka.kafka-1.keystore.pkcs12 \
    -storepass confluent
  9. Next, we'll need to save the credentials:
sudo tee /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1_sslkey_creds << EOF >/dev/null
confluent
EOF

sudo tee /home/training/learn-kafka-courses/fund-kafka-security/kafka-1-creds/kafka-1_keystore_creds << EOF >/dev/null
confluent
EOF
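The tee ... << EOF pattern above just writes a one-line file containing the password. Here is a minimal sketch of the same pattern against a temporary directory (paths illustrative), with a quick check that the file holds exactly what the broker will read:

```shell
# Write a one-line credentials file the way the step above does,
# using a temp directory instead of the course's real path
tmpdir=$(mktemp -d)
tee "$tmpdir/kafka-1_keystore_creds" << EOF >/dev/null
confluent
EOF
# The broker reads this file verbatim as the keystore password
cat "$tmpdir/kafka-1_keystore_creds"
```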
  10. While we could repeat these steps for each of our brokers, we've simplified things by scripting everything we just did for the two remaining brokers. Feel free to take a look at the file if you're curious; you can redo the same steps for brokers 2 and 3 by hand, or just run the script:
sudo /home/training/learn-kafka-courses/fund-kafka-security/scripts/keystore-create-kafka-2-3.sh
  11. These saved credentials are needed for the ssl.keystore.credentials and ssl.key.credentials broker configuration parameters, which are set for our lab environment brokers in the environment section of the docker-compose.yml file:
KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka-1_keystore_creds
KAFKA_SSL_KEY_CREDENTIALS: kafka-1_sslkey_creds
  12. Now we’ll change to the fund-kafka-security directory and start our Docker instance:
cd fund-kafka-security

docker-compose up -d

And make sure that everything is up and running:

docker-compose ps
  13. Then we’ll open an SSL connection to our kafka-1 broker to verify that things are working:
openssl s_client -connect localhost:19093 -tls1_3 -showcerts

Close the connection with Ctrl+C.

  14. We'll also open connections to brokers 2 and 3 just to make sure that everything is running as expected:

openssl s_client -connect localhost:29093 -tls1_3 -showcerts

openssl s_client -connect localhost:39093 -tls1_3 -showcerts

Congratulations! You've successfully added an SSL listener to your brokers, created a CA, created broker keystores, imported the CA into your broker keystore, and configured the SSL properties.

In the next exercise, we'll create the Kafka client truststore and import the CA, configure the Kafka client to encrypt data in transit, and require SSL for client-to-broker traffic.



Hands On: Setting Up Encryption

In this hands-on, I'll show you how to put a lot of things that we've talked about so far into practice. First, we'll set up our Kafka environment to use SSL/TLS to encrypt our data in motion, creating certificates and then configuring our brokers to also use SSL. We'll then configure the Kafka client to encrypt the data in transit using SSL and then require SSL for our broker traffic. The first thing we need to do is get our environment set up. If you remember from the first exercise, I've already cloned the learn-kafka-courses repo and made it my working directory. Don't forget to go into the fund-kafka-security folder as well. I've also set the same working directory inside of Visual Studio Code. Before we start our Kafka instance, there are a few changes that we need to make to get things set up. First, let's take a look at the Docker Compose file that has the instructions for Docker, but also our server configuration parameters. We'll go into Visual Studio Code and click on the Docker Compose file. This environment has three instances of ZooKeeper as well as three brokers. If I scroll down to look at my brokers, you'll notice that I have two listeners configured, the default plaintext listener and the internal broker listener. Remember, both of these send data as plaintext. You can also see that the advertised listeners have been configured on the next line. Let's go ahead and comment both of these lines out and remove the comment from the next two lines, adding in the SSL listener. We'll want to do that for each broker. You'll also notice that we have the Kafka listener security protocol map, which already has SSL as a connection as well. This is the property that determines the communication protocol used by the listeners. We'll go ahead and save that file and go over to our command line to start creating the certificates. Now we'll create the certificate authority key and the certificate by running the following command in the terminal.
In this exercise, we're using a certificate that is self-signed. As mentioned in the previous modules, if you're running a production environment, we recommend using a trusted certificate authority rather than a self-signed certificate. I'll paste in the command. Notice I've created a configuration file with a lot of the parameters, so that I don't have to manually enter them. Here we're creating a new key and certificate that is valid for 365 days. It uses a 2048-bit RSA key and uses the values that we've stored in the configuration file on the machine. Feel free to take a look at that file if you're interested in the parameters used as it's created. You should see an output similar to the one on screen. Now, we need to convert those files over to a .pem file. We'll go ahead and create a new server key and certificate by running the following command, and then sign the certificate with the certificate authority. Then we'll need to convert the server certificate over to the PKCS12 format. Next we'll create the broker keystore and import the certificate. Okay, we see that it was successfully imported. Now we need to verify the broker keystore. Your output should look similar to what you see on the screen. Last but not least, we'll save the credentials so that we can use them later. Now, as you may have noticed, we've only done this for our kafka-1 broker. We could go through and do it for the other two, or we can use a handy script I've created that'll go ahead and finish building it out for both kafka-2 and kafka-3. So we'll run that command and wait for it to finish. There we go. These saved credentials are needed for the broker ssl.keystore.credentials and ssl.key.credentials configuration parameters. These parameters are set for our lab environment brokers in the environment section of the Docker Compose file. So let's go ahead and take a look at that.
This is the kafka-3 broker, but you can see both of the keystore credentials saved right there. Now let's go back and start up our Docker instance by running docker-compose up. We'll make sure everything is running by running docker-compose ps. We can see our instances are running. Now I'll open an SSL connection with our kafka-1 broker to verify that things are working. And you can see I'm connected. I can do the same thing with kafka-2, and last but not least, kafka-3. Congratulations, we've successfully added an SSL listener to our brokers, created broker keystores, imported the CA into the broker keystores, and configured the SSL properties. In the next video, we'll create the Kafka client truststore and import the CA, configure the Kafka client to encrypt the data in transit, and then require SSL for client-to-broker traffic.