In this lecture, you will learn how to complete simple Kafka cluster administrative tasks using the Python AdminClient class. Follow along as Dave Klein (Senior Developer Advocate, Confluent) covers all of this in detail.
https://github.com/confluentinc/confluent-kafka-python/blob/master/examples/adminapi.py
Hi, Dave Klein here again, with the Apache Kafka for Python Developers course. In this module, we'll learn how we can use Python to manage Kafka topics and configurations using the AdminClient class. Let's get started.

The AdminClient class, along with some helper classes, allows us to manage resources in a Kafka cluster. We can create, delete, and list topics; create partitions; and list or alter configurations for brokers, topics, or consumer groups. While many Kafka client applications won't need these features, there are situations where they come in handy: for example, checking for the existence of specific topics when our application starts up and creating them if they don't already exist, or ensuring that a certain topic configuration is set to an appropriate value.

The AdminClient constructor, like the producer and consumer constructors, takes a dictionary of configuration details. The minimum required configuration is bootstrap.servers, which points to the location of your Kafka cluster. If you're running Kafka locally, your config can be as simple as that single entry. If you're using Confluent Cloud, your configuration will also include security settings, with the username and password being your API key and secret.

One of the most common uses for the AdminClient, as we alluded to earlier, is working with Kafka topics. We can create new topics using the AdminClient's create_topics method. Before we create topics, let's introduce a helper class from the admin package: NewTopic. A NewTopic instance holds the information needed to create a topic, including the topic name, number of partitions, and replication factor. We can also include a config dictionary if we want to set any topic-level configurations.

The create_topics method takes, at a minimum, a list of NewTopic objects and returns a dictionary of futures keyed by the topic names in the list. The futures themselves resolve to None, but the keys in the result dictionary show which topics were created.

The list_topics method doesn't require any arguments, though we can pass in the name of a single topic to get the details about that topic, or to see whether it exists. The more common use is to retrieve a list of all the topics in the cluster. Either way, the return value is a ClusterMetadata object, so let's take a look at the ClusterMetadata class before we continue our discussion of list_topics.

The ClusterMetadata class holds quite a bit of information about our Kafka cluster, including all of the topics as a dictionary of TopicMetadata instances and all of the brokers as a dictionary of BrokerMetadata instances. Going a bit deeper, each TopicMetadata instance holds a dictionary of PartitionMetadata instances, which contain information about the partitions in the topic, such as the current leader and the list of in-sync replicas. There's a great deal of information in this object, most of which we don't need right now, but it's good to know where to find it if the need arises.

Once we've retrieved a ClusterMetadata instance, we can iterate over its topics dictionary and get each topic's name and partition information. In this case, we're just printing them out, but this is where we might check for the existence of a required topic to determine whether we need to create it.
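Putting those pieces together, here's a minimal sketch of configuring an AdminClient, creating a topic with NewTopic and create_topics, and iterating over the ClusterMetadata returned by list_topics. The topic name "orders" and the connection placeholders are illustrative; the full, official version lives in the adminapi.py example linked above.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Local cluster; for Confluent Cloud, use the (placeholder) settings below instead.
config = {"bootstrap.servers": "localhost:9092"}
# config = {
#     "bootstrap.servers": "<your-cluster>.confluent.cloud:9092",
#     "security.protocol": "SASL_SSL",
#     "sasl.mechanisms": "PLAIN",
#     "sasl.username": "<API key>",
#     "sasl.password": "<API secret>",
# }
admin = AdminClient(config)

# Describe the topic we want: name, partitions, replication factor,
# plus an optional dictionary of topic-level configs.
new_topic = NewTopic("orders", num_partitions=3, replication_factor=1,
                     config={"retention.ms": "86400000"})

# create_topics returns a dict of futures keyed by topic name;
# each future resolves to None on success or raises on failure.
for topic, future in admin.create_topics([new_topic]).items():
    try:
        future.result()
        print(f"Created topic {topic}")
    except Exception as err:
        print(f"Failed to create topic {topic}: {err}")

# list_topics returns a ClusterMetadata object; its topics attribute is a
# dict of TopicMetadata keyed by topic name, each holding PartitionMetadata.
metadata = admin.list_topics(timeout=10)
for name, topic_meta in metadata.topics.items():
    print(f"{name}: {len(topic_meta.partitions)} partition(s)")
```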
The delete_topics method takes a list of topic names and returns a dictionary of futures keyed by topic name, just like the create_topics method. Again, the futures themselves resolve to None, but the keys in the result dictionary show which topics were marked for deletion. It's important to note that the actual deletion happens asynchronously, so it might not always be possible to recreate a topic immediately after deleting it.

Though working with topics is the most common use case for the AdminClient in application development, we can also use it to work with configurations. Specifically, the describe_configs and alter_configs methods can be used to query and change configurations for brokers, topics, and consumer groups. Both of these methods take a list of ConfigResource objects, so let's take a look at the key fields of the ConfigResource class.

The first field is restype, an enumeration representing the type of resource whose configuration we're interested in; we can use the string or integer value of the resource type. The next field is the name of the resource: for a topic, this is the topic name; for a broker or consumer group, it is the ID. The set_config field is a dictionary of configuration names and values. We'll talk more about that one when we discuss the alter_configs method.

We can use the describe_configs method to retrieve the configuration settings for a single resource or several of them at once. The result of the call is a dictionary of futures keyed on the ConfigResource instances. The result of each future is a dictionary of ConfigEntry instances keyed by configuration name. A ConfigEntry holds the details of a configuration setting; the two fields we'll need from it are name and value. In this example, we are retrieving the value of the retention.ms property.

To change the value of a configuration setting, we can use the alter_configs method. We call it much the same way we'd call describe_configs, except that we pass a third argument to the ConfigResource constructor: the set_config field, a dictionary of configuration names and values. In this example, we're changing the topic's retention period to one day. The result of the alter_configs call is a dictionary of futures keyed by the ConfigResource, and the result value of each future is None.

One word of caution when using alter_configs: any configuration on the affected resource that is not included in the set_config dictionary will be reset to its default value.

There's much more that the AdminClient can do, but this is what you're most likely to need when building Kafka applications in Python. And now let's head to the next module, where we'll get some hands-on experience with the Python AdminClient class. If you are not already on Confluent Developer, head there now using the link in the video description to access the rest of this course and its hands-on exercises.
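As a quick recap before the hands-on module, here is a minimal sketch of describe_configs and alter_configs as discussed above, again assuming a topic named "orders" and a local cluster. Note the caution in the comments about set_config resetting unlisted values.

```python
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Query the current configuration of the "orders" topic and read retention.ms.
resource = ConfigResource(ConfigResource.Type.TOPIC, "orders")
for res, future in admin.describe_configs([resource]).items():
    configs = future.result()  # dict of config name -> ConfigEntry
    entry = configs["retention.ms"]
    print(f"{res}: {entry.name} = {entry.value}")

# Change the retention period to one day (86,400,000 ms). Caution: any
# non-default setting not listed in set_config will revert to its default.
resource = ConfigResource(ConfigResource.Type.TOPIC, "orders",
                          set_config={"retention.ms": "86400000"})
for res, future in admin.alter_configs([resource]).items():
    future.result()  # None on success; raises on failure
    print(f"Updated configuration for {res}")
```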