So far we have talked about events, topics, and partitions, but we have not yet been explicit about the actual computers in the picture. From a physical infrastructure standpoint, Apache Kafka is composed of a network of machines called brokers. In a contemporary deployment, these may not be separate physical servers but containers running on pods, running on virtualized servers, running on actual processors in a physical datacenter somewhere. However they are deployed, they are independent machines, each running the Kafka broker process. Each broker hosts some set of partitions and handles incoming requests to write new events to those partitions or read events from them. Brokers also handle replication of partitions between each other.
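Because every broker can answer cluster metadata requests, you can see the brokers in a cluster directly from a client. The following is a minimal sketch using the Kafka Java AdminClient; the bootstrap address `localhost:9092` is an assumption for a locally running broker, and the class name is illustrative.

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ListBrokers {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed address of one reachable broker; any broker can serve cluster metadata.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Ask the cluster for its current set of broker nodes.
            for (Node broker : admin.describeCluster().nodes().get()) {
                System.out.printf("Broker %d at %s:%d%n",
                        broker.id(), broker.host(), broker.port());
            }
        }
    }
}
```

Each node returned here is one broker process, whether it runs on bare metal, a VM, or a container.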
Hey, Tim Berglund here with a few words on Kafka brokers. Now, when I was talking about partitioning, I did mention that Kafka is a distributed system, but let me be explicit about that for a moment. We've talked about events, topics, and partitions, but I have not formally introduced you to the actual computers that are doing this work.

From a physical infrastructure standpoint, Kafka is composed of a network of machines called brokers. In a contemporary deployment, these may well not be separate physical servers; they could be cloud instances, or containers running on pods, running on virtualized servers, running on actual processors in a physical data center somewhere. We all know how that goes in the cloud. But however they're deployed, whether they're that or physical pieces of sheet metal whose blinky lights you can see and whose fans you can hear and feel, these are independent machines, each running the Kafka broker process.

Each broker hosts some set of Kafka partitions and handles incoming requests to write new events to those partitions or read events from them. Brokers also handle replication of partitions between each other. Other than that, brokers don't do a lot. They are intentionally kept very simple, and that's a key design priority: you want to keep them simple so that they scale easily and are easy to understand, modify, and extend as Kafka evolves into the future.
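To see how partitions end up hosted and replicated across brokers, here is a hedged sketch using the Java AdminClient: it creates a topic and then prints which broker leads each partition and which brokers hold its replicas. The topic name "orders", the partition count, the replication factor, and the bootstrap address are all assumptions for illustration, not values from the lesson.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class ReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed address of a reachable broker in the cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Create a hypothetical "orders" topic: 6 partitions, each replicated to 3 brokers.
            admin.createTopics(Collections.singletonList(
                    new NewTopic("orders", 6, (short) 3))).all().get();

            // Each partition reports the broker that leads it and the brokers holding replicas.
            TopicDescription desc = admin.describeTopics(Collections.singletonList("orders"))
                                         .allTopicNames().get().get("orders");
            for (TopicPartitionInfo p : desc.partitions()) {
                System.out.printf("partition %d: leader=broker %d, replicas=%s%n",
                        p.partition(), p.leader().id(), p.replicas());
            }
        }
    }
}
```

The brokers do the work of accepting those writes, serving reads, and copying each partition to its replicas; the client only needs to know where to connect.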