Authorization determines what an entity can do on a system once it has been authenticated. Consider an ATM: once you successfully authenticate with your card and PIN, you can access only your own accounts, not every account the machine can reach.
Kafka is similar: once a broker has authenticated a client's identity, it determines the actions that the client is able to execute—whether creating a topic or producing or consuming a message.
Kafka uses access control lists (ACLs) to specify which users are allowed to perform which operations on specific resources or groups of resources. Recall that each connection is assigned a principal when it is first opened; the principal is what carries the client's identity.
Each ACL contains a principal, a permission type, an operation, a resource type (e.g., cluster, topic, or group), and a resource name.
By default, resource names are treated as literals but you can also specify that they are treated as prefixes (to specify a subset of resources), or you can use a wildcard character (*) to match all resources of a specific type (which works for principals too). Finally, you can specify a host value to limit permissions to a specific IP address.
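The matching rules above can be sketched in a few lines of Python. This is a toy model for illustration only, not Kafka's actual matching code; the function name and the `LITERAL`/`PREFIXED` strings mirror the pattern types Kafka's ACL APIs expose.

```python
# Toy model of how an ACL's resource name can match a requested resource.
def resource_matches(pattern: str, pattern_type: str, requested: str) -> bool:
    """Return True if an ACL's resource pattern covers the requested resource."""
    if pattern == "*":                 # wildcard: matches every resource of the type
        return True
    if pattern_type == "LITERAL":      # the default: exact name match
        return pattern == requested
    if pattern_type == "PREFIXED":     # prefix: matches a subset of resources
        return requested.startswith(pattern)
    raise ValueError(f"unknown pattern type: {pattern_type}")

# A literal ACL on "finance" matches only that topic...
assert resource_matches("finance", "LITERAL", "finance")
assert not resource_matches("finance", "LITERAL", "finance-eu")
# ...while a prefixed ACL on "finance" also covers "finance-eu".
assert resource_matches("finance", "PREFIXED", "finance-eu")
assert resource_matches("*", "LITERAL", "anything")
```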
Use the kafka-acls command-line tool to create ACLs. For example, perhaps you'd like to allow users named Alice and Fred to read from and write to the finance topic:
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:alice \
--allow-principal User:fred \
--operation Read \
--operation Write \
--topic finance
In a real use case, however, your principal names here are likely to be more complex. If you use SSL authentication for clients, for example, the principal name will use the SSL certificate's subject name. If you use SASL_SSL with Kerberos, the Kerberos principal format will be adopted. You can also configure the way that the principal is derived from the identity by configuring ssl.principal.mapping.rules for SSL and sasl.kerberos.principal.to.local.rules for Kerberos. Refer to the documentation for more details.
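As a rough illustration of what such mapping configuration can look like, here is a hypothetical broker configuration fragment. The patterns below are examples only (the `CN=` structure and the `EXAMPLE.COM` realm are assumptions); adapt them to your own certificate subjects and Kerberos realm.

```properties
# Example: extract the CN from an SSL certificate subject such as
# "CN=alice,OU=Eng,O=Example" so the principal becomes User:alice.
ssl.principal.mapping.rules=RULE:^CN=(.*?),.*$/$1/,DEFAULT

# Example: strip the realm from a Kerberos principal such as
# alice@EXAMPLE.COM so the principal becomes User:alice.
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//,DEFAULT
```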
ACLs created with kafka-acls are stored in ZooKeeper, then cached in memory by every broker to enable fast lookups when authorizing requests.
Kafka uses a server plugin known as an authorizer to apply ACLs to requests, and this can take multiple forms, even custom ones. An authorizer allows a requested action if there is at least one “Allow” ACL that matches the action and no “Deny” ACL forbidding it (“Deny” always trumps “Allow”).
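The decision rule can be sketched as follows. This is a simplified model, not Kafka's Authorizer interface: the tuple layout is illustrative, and it only handles exact resource matches and the `User:*` wildcard principal.

```python
from typing import NamedTuple

class Acl(NamedTuple):
    principal: str   # e.g. "User:alice", or "User:*" for all principals
    permission: str  # "ALLOW" or "DENY"
    operation: str   # e.g. "READ", "WRITE"
    resource: str    # e.g. "topic:finance"

def authorize(acls, principal, operation, resource) -> bool:
    """Allow only if no DENY ACL matches and at least one ALLOW ACL does."""
    matching = [a for a in acls
                if a.principal in (principal, "User:*")
                and a.operation == operation
                and a.resource == resource]
    if any(a.permission == "DENY" for a in matching):
        return False                    # DENY always trumps ALLOW
    return any(a.permission == "ALLOW" for a in matching)

acls = [
    Acl("User:*", "ALLOW", "READ", "topic:finance"),
    Acl("User:mallory", "DENY", "READ", "topic:finance"),
]
assert authorize(acls, "User:alice", "READ", "topic:finance")
assert not authorize(acls, "User:mallory", "READ", "topic:finance")   # denied
assert not authorize(acls, "User:alice", "WRITE", "topic:finance")    # no ALLOW
```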
The default authorizer (for ZooKeeper-based Kafka) is AclAuthorizer, which you specify in each broker's configuration: authorizer.class.name=kafka.security.authorizer.AclAuthorizer. However, if you are using Kafka's native consensus implementation based on KRaft, then you'll use a new built-in StandardAuthorizer that doesn't depend on ZooKeeper. StandardAuthorizer accomplishes all of the same things that AclAuthorizer does for ZooKeeper-dependent clusters, and it stores its ACLs in the __cluster_metadata topic.
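In broker configuration terms, the choice looks like this; set only the line matching your cluster mode:

```properties
# ZooKeeper-based cluster: the default ACL authorizer.
authorizer.class.name=kafka.security.authorizer.AclAuthorizer

# KRaft cluster: the built-in authorizer that does not depend on ZooKeeper.
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
```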
If you work in a large organization or have a large cluster topology, you might find it inefficient to specify ACLs for each individual user principal. Instead, you might wish to assign users to groups or differentiate them based on roles. You can accomplish this with Kafka, but you will need to do several things: you'll need an external system that allows you to associate individuals with roles and/or groups, something like an LDAP store; you'll need to apply ACLs to resources based not only on users but also on roles and groups; finally, you'll need to implement a custom authorizer that can call your external system to find the roles and groups for a given principal.
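The flow of such a custom authorizer can be sketched as follows. Everything here is an assumption for illustration: the `lookup_directory` function stands in for an LDAP query, and the `Group:`/`Role:` principal forms are invented for the example; a real implementation would plug into Kafka's Authorizer plugin interface and an LDAP client.

```python
def lookup_directory(user: str) -> set:
    """Stand-in for an external LDAP lookup; hard-coded for the example."""
    directory = {"User:alice": {"Group:finance", "Role:admin"}}
    return directory.get(user, set())

def authorize(acls: dict, user: str, operation: str, resource: str) -> bool:
    """Expand the user principal into groups/roles, then match ACLs."""
    principals = {user} | lookup_directory(user)
    allowed = acls.get((operation, resource), set())
    return bool(principals & allowed)   # any expanded principal may match

# ACLs now name groups and roles, not just individual users.
acls = {("WRITE", "topic:finance"): {"Group:finance", "Role:admin"}}
assert authorize(acls, "User:alice", "WRITE", "topic:finance")  # via her group
assert not authorize(acls, "User:bob", "WRITE", "topic:finance")
```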
No matter which type of authorizer you use, whether default or custom, you should clearly distinguish between user and service accounts. You might be inclined to reuse a person's user details to authenticate a service or application to Kafka, but you shouldn't: people often change teams or roles (or even leave companies altogether).
Keep in mind that ACLs require careful management. If you are working in a development or test environment, it may be tempting to use the ANONYMOUS principal, Kafka's notion of super users, or its allow.everyone.if.no.acl.found setting. But it's easy to accidentally promote these settings to a production environment, so it's far better to develop good habits from the outset: automatically assign proper credentials and set ACLs strictly according to need.
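For reference, these are the broker settings in question, shown with safe production values (the super user name is an example):

```properties
# Grant nothing when no ACL matches; false is the default and the safe choice.
allow.everyone.if.no.acl.found=false

# Super users bypass ACL checks entirely; keep this list minimal.
super.users=User:admin
```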
If a user is compromised, you will need to remove that user from the system as soon as possible, but you will also need to check for any existing connections associated with that user, since a principal is assigned to a connection when it is first opened. (You can set how often clients need to re-authenticate with connections.max.reauth.ms, but you should be careful not to frustrate your users by forcing reconnection too frequently.) Note that if a connection persists for a long time, removing a user will only take effect when the connection is closed and reconnection is attempted. Thus the best way to block a compromised user is to create a “Deny” ACL to prevent actions on any existing connections, since ACLs get propagated quickly and are checked with every request.
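A hypothetical broker configuration fragment bounding how long a SASL session can run before the client must re-authenticate; the value shown is an example, not a recommendation:

```properties
# Force SASL clients to re-authenticate every 15 minutes; revoked or
# expired credentials are then rejected at the next re-authentication.
connections.max.reauth.ms=900000
```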
As we learned in the previous module, authentication identifies clients to brokers, brokers to other brokers, and brokers to ZooKeeper. Authorization, in contrast, determines what an identity is permitted to do on the system once it has been authenticated. When I authenticate to an ATM using my card number and PIN, which is roughly speaking the equivalent of a username and password, I don't then get unfettered access to all of the bank's accounts. Rather, I'm permitted only to access and manage the funds belonging to me. Authorization in Kafka works the same way. When a client opens a connection to a broker and issues a request, the broker first authenticates the client's identity, and then authorization kicks in to determine whether the requested action (to create a topic or produce or consume a message, for example) is permitted for that identity. How then do you ensure your users and applications have access to the resources in Kafka they need to do their job, but are denied access to the resources they're not permitted to use? You secure Kafka resources using Access Control Lists, or ACLs. An Access Control List describes which users are permitted to perform certain operations on specific resources or groups of resources, and which users are denied permissions. Remember, Kafka represents client identities in the form of a Kafka principal, and associates a principal with a connection when that connection is first opened. An ACL binding specifies a principal; a resource type, such as a cluster, topic, or group, together with a resource name; an operation; and a permission type, which determines whether the operation is being allowed or denied on behalf of the principal. Examples of operations include the ability to create or delete a topic or group, or to write to or read from a topic.
Resource names are by default treated as literals, but you can specify that they should be treated as prefixes, which allows the ACL binding to match a subset of resources whose names begin with the specified prefix. You can also use a wildcard character, which allows you to match all resources of a particular resource type. Likewise, you can use the wildcard character to match all principals. You can also specify a host value that further limits the permission based on the source IP address of the connection with which the principal is associated. You can create and manage ACLs using the kafka-acls command-line tool. Here, you can see an example of using kafka-acls to create ACLs that allow Alice and Fred to write to and read from the finance topic. The example in the previous slide showed very simple principal names, such as User:Alice and User:Fred, but in real-world deployments these principal names can be more complex. If you use the SSL security protocol to authenticate clients, the principal name will be in the form of the SSL certificate subject name. If you use the SASL_SSL security protocol with Kerberos authentication, as described in the authentication module, the principal will adopt the Kerberos principal format. If you want, you can configure the way the principal is derived from the identity; there's no custom code, just configuration. To map SSL subject names to a principal, you can configure ssl.principal.mapping.rules. To map Kerberos identities to your preferred principal format, you can use sasl.kerberos.principal.to.local.rules. See the documentation for more details on using these rules to map SSL and Kerberos identities to a principal. Now let's take a look at how ACLs are used by Kafka when handling requests. When you create ACLs using the kafka-acls tool, the ACLs are stored in ZooKeeper and then cached in memory by every broker, so as to enable fast lookups when authorizing requests.
Kafka uses a server plugin called an authorizer to actually apply ACLs to requests. The default authorizer is called AclAuthorizer, which you specify in each broker's configuration: authorizer.class.name=kafka.security.authorizer.AclAuthorizer. If you are using Kafka's native consensus implementation based on KRaft rather than ZooKeeper, then Kafka uses a new built-in authorizer, StandardAuthorizer, that does not depend on ZooKeeper. This means that you can now run a Kafka cluster without needing ZooKeeper for consensus or security. StandardAuthorizer stores its ACLs in the __cluster_metadata topic, and it is used by default in KRaft clusters. StandardAuthorizer does all of the same things that AclAuthorizer does for ZooKeeper-dependent clusters. When a broker receives a request, the authorizer authorizes the requested action if there are no deny ACLs that match the action and there is at least one allow ACL that matches it. In other words, deny ACLs always trump allow ACLs. Out of the box, then, Kafka gives you the tools you need to apply fine-grained access controls to all your Kafka resources, with permissions specified on behalf of individual user principals. But the fine-grained nature of the default authorization mechanisms in Kafka can be a double-edged sword, particularly in large organizations or with large cluster topologies. If you work in a large organization that comprises many different business units and teams, and which perhaps spans multiple geographies, you're likely familiar with access control mechanisms that can assign users to groups, or differentiate them based on roles, and which then allow you to manage permissions based on groups and roles. These additional constructs help to chunk up the work and reduce the operational overhead of managing identity and access control. Now, for sure, you can implement authorization controls across your entire organization using only user principals.
But the complexity of doing so can quickly become a bottleneck when trying to roll out a new cluster and streaming applications. How then might you better manage access controls for large organizations with many business units, teams, and clusters? Fortunately, Apache Kafka has you covered, but there is some additional configuration and even development overhead involved. To implement group- or role-based access control, you'll need to do several things. First, you'll need an external system that allows you to associate users with roles and/or groups. Typically, this would be something like an LDAP store. You configure your group, role, and user hierarchies in this system. Next, you apply ACLs to resources based not only on users but also on roles and groups. So the finance topic, for example, might have ACLs that allow anyone in the admin role, or anyone belonging to the finance group, to write to it, but which deny write permissions to Bob. Finally, you need to implement a custom authorizer that, given the principal of an authenticated user, can go to the external system and find all of the roles and groups associated with that user. The custom authorizer can then use the resulting list of users, groups, and roles to match the ACLs on the resource and determine whether the user principal is permitted or denied a specific action. So, in summary, you continue to store ACLs in ZooKeeper, but now with groups and roles as well as users accorded or denied permissions for specific resources. The user, group, and role hierarchy (the information that tells you which group a user belongs to, for example) is stored in an external system such as LDAP. And a custom authorizer ties it all together, fetching an expanded list of users, groups, and roles for a given user principal and resolving this list against the resource ACLs.
Irrespective of whether you use AclAuthorizer or a custom authorizer to implement more complex access control hierarchies, you should clearly distinguish between user accounts and application or service accounts. When developing streaming applications, and following the path of least resistance, you might be inclined to reuse a person's details to authenticate an application or service to Kafka. Don't do this. People change teams or roles, or sometimes even leave the company. If you have a robust security hygiene practice (and you should) that removes ACLs when users change roles or leave, applications that one moment were working fine may soon start failing. Instead, create separate service credentials for each application or service and secure access with ACLs for each service or application principal. This wouldn't be a security course if we didn't temper our guidance with a healthy note of caution. ACLs provide a powerful, fine-grained means of securing Kafka resources, but they do require careful management. In development and test environments, it's often tempting to grant access to the ANONYMOUS principal, or to use Kafka's notion of super users or its allow.everyone.if.no.acl.found setting, to grant broad access to resources and simplify ACL management. But used carelessly, these shortcuts, if promoted to production environments, can leave resources and sensitive data unprotected. Super users in particular cannot be denied access using deny ACLs. It's worthwhile developing good habits from the outset and automating the process of creating user credentials and assigning ACLs that grant the minimum permissions necessary for the user to do their job, permissions that can then quickly be revoked if necessary, even for dev and test environments. Which brings us finally to the dreaded question: "What should I do if a user is compromised?" If this happens, you'll want to remove that user from the system as quickly as possible.
Doing so, however, won't have any impact on existing connections associated with that user, and these connections will continue to pose a security risk. Remember, a principal is assigned to a connection only when the connection is first opened. You can adjust how frequently users and applications have to re-authenticate by setting connections.max.reauth.ms. Just remember: if this timeout is set too low, you risk annoying users by forcing them to reconnect too frequently. If the connection persists for a long time, removing a user from the system will only take effect when the connection is closed and the client attempts to establish a new connection. The solution here is to use a deny ACL to prevent actions on any existing connections. Kafka propagates ACLs very quickly, and given that ACLs are checked with every request, a deny ACL is the fastest way to block unwanted access.