course: Apache Kafka® Security

Kafka Authentication with SSL and SASL_SSL

7 min

Dan Weston

Senior Curriculum Developer


This module covers the specifics of securing Kafka with SSL and SASL_SSL.

Enabling SSL for Kafka

You're likely familiar with SSL from HTTPS websites. When SSL is enabled for a Kafka listener, all traffic for that channel will be encrypted with TLS, which employs digital certificates for identity verification.

Client-Broker Authentication

When a client opens a connection to a broker over SSL, it verifies the broker's certificate to confirm the broker's identity. If the certificate checks out, the client is satisfied, but the broker may also wish to verify the client by certificate, ensuring that the KafkaPrincipal associated with the connection represents the client's identity. To make the broker check the client's certificate, set ssl.client.auth=required. (You can also set ssl.client.auth=requested, but this isn't recommended: it only authenticates clients that present a certificate, and assigns all others the previously mentioned, problematic ANONYMOUS KafkaPrincipal.)
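As a sketch, a broker listener configured this way might look like the following (the hostname, file paths, and passwords are placeholders, not recommendations):

```properties
# server.properties (broker) -- SSL listener with mandatory client authentication
listeners=SSL://broker1.example.com:9093
ssl.keystore.location=/var/private/ssl/broker.keystore.jks
ssl.keystore.password=keystore-password
ssl.key.password=key-password
ssl.truststore.location=/var/private/ssl/broker.truststore.jks
ssl.truststore.password=truststore-password
ssl.client.auth=required
```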


Since TLS uses certificates, you'll need to generate them and configure your clients and brokers with the appropriate keys and certificates. You will also need to periodically update the certificates before they expire, in order to avoid TLS handshake failures. See the Kafka documentation for more details on creating keys and certificates.

Note that by default Kafka clients verify that the hostname in the broker's URL and the hostname in the broker's certificate match. This can be disabled by setting ssl.endpoint.identification.algorithm to an empty string on the client, which can be useful in test or dev environments that use self-signed certificates. In production, however, you should always allow hostname verification in order to prevent man-in-the-middle attacks.
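For illustration, a client configuration for a test environment with self-signed certificates might look like this (paths and passwords are placeholders; in production, leave the last property at its default):

```properties
# client.properties -- SSL with hostname verification disabled (dev/test only)
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=truststore-password
# An empty value disables hostname verification -- never do this in production
ssl.endpoint.identification.algorithm=
```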

Inter-Broker Authentication

Everything in the previous client-broker authentication section applies to authentication between brokers using SSL. Essentially, the broker initiating the connection functions similarly to the client in the client-broker approach. Use the inter.broker.listener.name or security.inter.broker.protocol settings to configure listeners for inter-broker communication.
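For example, a broker might dedicate an internal listener to inter-broker traffic. The listener names, hostname, and ports below are illustrative, and note that inter.broker.listener.name and security.inter.broker.protocol are mutually exclusive, so set only one:

```properties
# server.properties (broker) -- dedicated SSL listener for inter-broker traffic
listeners=INTERNAL://broker1.example.com:9093,EXTERNAL://broker1.example.com:9094
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
```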


Enabling SASL_SSL for Kafka

SASL_SSL (SASL stands for Simple Authentication and Security Layer) uses TLS encryption just like SSL but differs in how it authenticates. To use the protocol, you must specify one of the four authentication mechanisms supported by Apache Kafka: GSSAPI, PLAIN, SCRAM-SHA-256/512, or OAUTHBEARER. One of the main reasons to choose SASL_SSL over SSL is to integrate Kafka with existing authentication infrastructure in your organization, for example a Kerberos server such as Active Directory or OpenLDAP.


GSSAPI provides authentication using Kerberos, PLAIN and SCRAM-SHA-256/512 are username/password mechanisms, and OAUTHBEARER is the machine-to-machine equivalent of single sign-on. Keep in mind that, apart from SCRAM-SHA-256/512, each of these mechanisms requires integration with a third-party server for production workloads (Kerberos for GSSAPI, a password server for PLAIN, and a trusted OAuth server for OAUTHBEARER). Each will also require time to configure.
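As a sketch of the SCRAM case, the broker and client configurations might look like the following. The hostname, port, username, and password are placeholders, the TLS keystore/truststore settings are omitted for brevity, and the SCRAM credential must first be created on the cluster (for example with the kafka-configs tool):

```properties
# server.properties (broker) -- SASL_SSL listener using SCRAM-SHA-512
listeners=SASL_SSL://broker1.example.com:9094
sasl.enabled.mechanisms=SCRAM-SHA-512
# ...plus the usual ssl.keystore.* / ssl.truststore.* settings

# client.properties -- matching client configuration
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" password="alice-secret";
```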

Considerations

  • As with security strategies in general, environmental and organizational factors will dictate your choice of protocol and authentication mechanism. SSL is ideally suited to cloud environments, while SASL_SSL makes sense for enterprises that already have an authentication server.

  • Pay careful attention to your infrastructure, which likely comprises many different components across different environments. Use filesystem permissions to restrict access to files containing security information, and avoid storing plaintext passwords, including in ZooKeeper (use disk encryption or a secure external credential store instead). Also keep in mind that while SASL lets you integrate with many external security infrastructure options, these options add to your attack surface, so they must also be locked down.

  • You should also consider connection management. Because opening a secure connection is an expensive operation, a malicious actor can mount a denial-of-service attack by rapidly opening many connections. Use Kafka quotas together with the connection.failed.authentication.delay.ms setting to protect against this.

  • Finally, be sure to apply rigorous change control procedures when promoting configurations through environments. Loose security settings in test or development environments, such as anonymous users or a lack of certificate hostname verification, can easily be overlooked and inherited by production. Once there, the loose settings can make it hard to diagnose connection and access control issues, or even worse can potentially allow malicious clients into your system.
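For the connection-management point above, a minimal broker-side sketch (the values are illustrative, not recommendations):

```properties
# server.properties (broker) -- throttle failed authentications and cap connections
connection.failed.authentication.delay.ms=1000
max.connections.per.ip=100
```

The delay setting slows down clients that repeatedly fail authentication, while the per-IP connection cap limits how many connections any single host can hold open.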



Transcript

In the previous video, we took some time to talk about some of the basics of authentication. Now let's talk about some of the specifics of securing Kafka with SSL and SASL_SSL.

We're likely all familiar with SSL, if only through the use of secure HTTPS websites. When SSL is enabled for a Kafka listener, all traffic for that channel will be encrypted using the TLS cryptographic protocol. This prevents malicious eavesdroppers from intercepting the traffic. TLS uses digital certificates to verify identities. When a client opens a connection to a broker, it verifies the broker's certificate in order to confirm the broker's identity.

So far, so good. The client knows it's speaking to a legitimate broker and that all traffic is being encrypted, but how does SSL provide for authenticating each client to the broker? How can we configure the broker to ensure the KafkaPrincipal associated with the connection represents the client's identity? You can configure the SSL security protocol to require client authentication by setting ssl.client.auth=required in the broker configuration. Besides the client verifying the broker's identity, the broker will now verify the client's certificate in order to confirm the client's identity.

As an aside, you can also set ssl.client.auth=requested. With this setting, clients with a certificate will be identified using that certificate, while clients that don't have a certificate will be assigned the ANONYMOUS user principal. We don't recommend using the requested setting because it introduces a false sense of security: misconfigured clients will still authenticate successfully to a broker and may be able to perform actions that have been wittingly or unwittingly granted to the anonymous user.

As we've seen, TLS uses certificates to identify brokers to clients and clients to brokers. Therefore, if you use the SSL security protocol, you'll need to generate certificates and configure the clients and brokers with the appropriate keys and certificates.
You should then ensure that this information is periodically updated before certificates expire, to prevent TLS handshake failures. See the Kafka documentation for more details on creating SSL keys and certificates.

One final comment regarding certificates: by default, Kafka clients verify that the hostname in the broker URL and the hostname in the broker certificate match. You can disable this hostname verification by setting ssl.endpoint.identification.algorithm to an empty string on the client. This is sometimes useful in development and test environments that use self-signed certificates. However, in production environments, you should always allow hostname verification so as to prevent man-in-the-middle attacks.

Everything we've said here about client-broker authentication using the SSL security protocol applies equally to inter-broker communication secured with SSL. In these situations, the broker initiating the connection acts as the client in the client-broker relationship. Use the inter.broker.listener.name or security.inter.broker.protocol setting to configure listeners for inter-broker communication.

SSL is one of two security protocols that Kafka supports. The other is SASL_SSL. SASL stands for Simple Authentication and Security Layer. When you enable the SASL_SSL security protocol for a listener, the traffic for that channel is encrypted using TLS, just as with SSL. TLS client authentication, however, is disabled. Instead, you must specify a SASL authentication mechanism. The main reason you might choose SASL_SSL over the SSL security protocol is because you want to integrate Kafka with an existing Kerberos server, such as Active Directory or OpenLDAP, in your organization. Kafka supports four different SASL authentication mechanisms, of which the most commonly employed is GSSAPI, which provides for authentication using Kerberos.
The other mechanisms include two username-and-password mechanisms, PLAIN and SCRAM-SHA-256/512, and OAUTHBEARER, which is the machine-to-machine equivalent of single sign-on. Each of these mechanisms has some configuration overhead, and for all of them apart from SCRAM-SHA-256/512, production workloads require integration with third-party servers: Kerberos in the case of GSSAPI, a password server for the PLAIN mechanism, and a trusted OAuth server for OAUTHBEARER.

Your choice of security protocol and authentication mechanism will depend on many environmental and organizational factors. SSL client authentication using client certificates issued by a well-known and trusted certificate authority is ideally suited to cloud environments, while for enterprise environments that already use a Kerberos server, such as Active Directory, SASL GSSAPI is the obvious choice, but don't underestimate the configuration effort involved.

We'll wrap up this section with some words of caution. Kafka gives you all of the tools you need to implement strong authentication procedures, but your system is only as strong as its weakest part. You may very well have correctly configured the necessary security protocols, listeners, certificates, and so on for authenticating clients and brokers, but configuration is just one part of the solution.

Pay careful attention to your infrastructure. Your system comprises many different components (servers, networks, file systems) arrayed across many different environments. Make sure you're using file system permissions to restrict access to files containing security information, such as certificate keystores and keytab files, and avoid storing passwords in plaintext anywhere on the system, including ZooKeeper.
Use disk encryption or a secure external credential store instead. And while SASL gives you lots of options for integrating with trusted external security infrastructure, these integrations in turn present an additional attack surface that must be locked down.

Think too about connection management. Opening a secure connection is an expensive operation, and malicious clients can exploit this to perform denial-of-service attacks on a broker by attempting to open lots of connections. You can protect the availability of a cluster's brokers using Kafka quotas together with the connection.failed.authentication.delay.ms setting, which can reduce the rate at which clients retry failed connection attempts.

Finally, apply rigorous change control procedures when promoting configurations through environments. We've seen how you can configure Kafka clients to turn off hostname verification, and how brokers can be configured to map clients without certificates to the ANONYMOUS user principal. These changes can sometimes make it easier to work in development and test environments, but they pose a severe risk for production deployments. Left unchecked, you may end up facing difficult-to-diagnose connection and access control issues, or worse, allowing malicious clients into your system.