Frequently asked questions and answers about Kafka security: encryption, authentication, authorization, and SSL/TLS.
Apache Kafka does not support end-to-end encryption natively, but you can approximate it by combining TLS for data in transit with disk encryption on the brokers for data at rest.
If you are looking for a more seamless end-to-end encryption solution, keep an eye on KIP-317, as it aims to provide that functionality.
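As a sketch of the in-transit half, a Kafka client can be pointed at a TLS listener with a few standard configuration properties. The truststore path and password below are placeholders you would replace with your own:

```properties
# Connect to brokers over TLS (encryption in transit).
security.protocol=SSL
# Truststore containing the CA that signed the brokers' certificates
# (example path and password; substitute your own).
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=changeit
```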
By default, Apache Kafka data is not encrypted at rest. Encryption can be provided at the OS or disk level using third-party tools.
In Confluent Cloud, data is encrypted at rest. More details can be found here.
Apache Kafka supports several SASL mechanisms out of the box: GSSAPI (Kerberos), OAUTHBEARER, SCRAM-SHA-256/512, and PLAIN; Confluent Platform additionally supports LDAP-based authentication. The configuration details depend on the mechanism you are using. Detailed instructions for each type can be found here.
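As one illustration, a client using SASL/SCRAM over TLS needs roughly the following properties (the username and password are placeholders for credentials you would have created with `kafka-configs.sh` beforehand):

```properties
# Authenticate with SASL/SCRAM over a TLS connection.
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
# Example credentials; substitute a user created on your cluster.
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" \
  password="alice-secret";
```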
To configure ACLs in Apache Kafka, you must first set the authorizer.class.name property in the broker configuration (server.properties) to authorizer.class.name=kafka.security.authorizer.AclAuthorizer. This enables the out-of-the-box authorizer. You can then add and remove ACLs using the kafka-acls.sh script. Here's an example of adding an ACL:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:Tim --allow-host 198.51.100.0 \
  --operation Read --operation Write --topic my-topic
More details and different use cases can be found here.
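The same script can list and remove ACLs. A minimal sketch, assuming the same ZooKeeper-backed authorizer, topic, and principal as the example above:

```shell
# List the ACLs currently attached to the topic.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --list --topic my-topic

# Remove the Read permission granted to User:Tim above.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --remove --allow-principal User:Tim \
  --operation Read --topic my-topic
```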
Apache Kafka does not support Role-Based Access Control (RBAC) by default.
Confluent adds RBAC support to Kafka, allowing you to define group policies for accessing services (reading/writing to topics, accessing Schema Registry, etc.) and environments (dev/staging/prod, etc.) across all of your clusters. You can learn more in the blog post Introducing Cluster Authorization Using RBAC, Audit Logs, and BYOK and in the reference documentation, Authorization Using Role-Based Access Control.
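For a rough sense of what an RBAC policy looks like in practice, role bindings are managed with the confluent CLI. The principal, topic, and cluster ID below are placeholders, and the exact flags vary between CLI versions, so treat this as a hedged sketch rather than a copy-paste recipe:

```shell
# Grant a principal read access to one topic on one cluster
# (example names; flags may differ in your CLI version).
confluent iam rbac role-binding create \
  --principal User:alice \
  --role DeveloperRead \
  --resource Topic:orders \
  --kafka-cluster-id lkc-123456
```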