Frequently asked questions and answers about restarting, stopping, and checking the status of your Kafka deployment.
The most common way to monitor Kafka is by enabling JMX. JMX can be enabled by setting the
JMX_PORT environment variable, for example:
JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties
Once JMX is enabled, standard Java tooling such as jconsole can be used to observe Kafka status.
The Kafka documentation provides more details on monitoring, including the available metrics. Confluent also offers Confluent Control Center as an out-of-the-box Kafka cluster monitoring system.
If you want to inspect the health of a broker that is already running, and you have access to the server, you can check that the process is running:
jps | grep Kafka
And you can also check that it is listening for client connections (port 9092 by default):
nc -vz localhost 9092
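If nc is not available, the same reachability check can be scripted. This is a minimal Python sketch, equivalent to nc -vz: it only verifies that something is accepting TCP connections on the port, not that the broker is healthy.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the default Kafka client port on this machine.
print(is_port_open("localhost", 9092))
```

For a stronger check, follow this up with a real client request (for example, bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092), which confirms the broker is actually responding to the Kafka protocol.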
The default ports used for Kafka and for services in the Kafka ecosystem are as follows:

|Service|Default Port|
|---|---|
|Kafka Clients|9092|
|Kafka Control Plane|9093|
|ZooKeeper|2181|
|Kafka Connect|8083|
|Schema Registry|8081|
|REST Proxy|8082|
|ksqlDB|8088|
By default, Kafka listens for client connections on port 9092. The listeners configuration is used to configure different or additional client ports. For more details on configuring Kafka listeners for access across networks, see this blog post.
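As a sketch, a broker reachable both inside and outside its network might configure two listeners in server.properties. The hostnames (kafka.internal, kafka.example.com) and the external port 19092 are hypothetical; substitute your own values.

```properties
# Bind an internal and an external listener (hostnames are placeholders).
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
# Addresses that clients should use to reach each listener.
advertised.listeners=INTERNAL://kafka.internal:9092,EXTERNAL://kafka.example.com:19092
# Map each listener name to a security protocol.
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
# Brokers talk to each other over the internal listener.
inter.broker.listener.name=INTERNAL
```

The key point is that advertised.listeners must contain addresses that clients can actually resolve and reach from their own network, which is why internal and external clients often need separate listeners.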
If you have terminal access to the broker machine, you can pass the
--version flag to many of the Kafka commands to see the version. For example:
bin/kafka-topics.sh --version
If your Kafka broker has remote JMX enabled, you can obtain the version with a JMX query, for example:
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name kafka.server:type=app-info \
  --attributes version --one-time true
Trying to connect to JMX url: service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi.
"time","kafka.server:type=app-info:version"
1638974783597,3.0.0
If you need to do software upgrades, broker configuration updates, or cluster maintenance, then you will need to restart all of the brokers in your Kafka cluster. To do this, you can do a rolling restart. Restarting the brokers one at a time provides high availability since it avoids downtime for your end users.
See the rolling restart documentation for a detailed workflow, including considerations and tips for other cluster maintenance tasks.
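The rolling restart described above can be sketched as a loop over brokers. The hostnames (broker1..broker3) are hypothetical, and the actual stop/start and health-check commands are left as comments to adapt to your installation; the essential discipline is restarting one broker at a time and waiting for the cluster to recover before moving on.

```shell
# Restart brokers one at a time (hostnames are placeholders).
for broker in broker1 broker2 broker3; do
  echo "Restarting ${broker}..."
  # ssh "${broker}" 'bin/kafka-server-stop.sh'
  # ssh "${broker}" 'bin/kafka-server-start.sh -daemon config/server.properties'
  # Before continuing, wait for under-replicated partitions to drop to zero:
  # bin/kafka-topics.sh --bootstrap-server "${broker}:9092" --describe --under-replicated-partitions
done
```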
You can stop a running Kafka broker with the
kafka-server-stop.sh script located in the installation path's bin directory:
bin/kafka-server-stop.sh
This will work if you've installed Kafka using the Confluent Platform or Apache Kafka tarball installations.
For more details and other installation options such as RPM and Debian see the documentation and Confluent Developer.
Apache ZooKeeper™ is a service for coordinating configuration, naming, and other synchronization tasks for distributed systems.
Currently, ZooKeeper provides the authoritative store of metadata holding the system’s most important facts: broker information, partition locations, replica leadership, and so on.
Once KRaft mode is production-ready, ZooKeeper will no longer be required for Kafka.
Generally, production environments can start with a small cluster of three nodes and scale as necessary. Specifically, ZooKeeper should be deployed in 2n + 1 nodes, where n is any number greater than 0. The odd number of servers is required in order to allow ZooKeeper to perform majority elections for leadership.
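The 2n + 1 sizing rule can be made concrete with a little quorum arithmetic (a sketch of the math, not ZooKeeper code): a majority of the ensemble must agree, so an ensemble of 2n + 1 nodes tolerates n failures.

```python
# Quorum math for a ZooKeeper ensemble.

def quorum_size(ensemble_size: int) -> int:
    """Smallest majority of the ensemble."""
    return ensemble_size // 2 + 1

def tolerated_failures(ensemble_size: int) -> int:
    """Nodes that can fail while a majority survives."""
    return ensemble_size - quorum_size(ensemble_size)

for nodes in (3, 5, 7):
    print(f"{nodes} nodes: quorum of {quorum_size(nodes)}, "
          f"tolerates {tolerated_failures(nodes)} failure(s)")
```

This also shows why odd sizes are preferred: a 4-node ensemble needs a quorum of 3 and so tolerates only 1 failure, no better than 3 nodes but with more machines to run.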
As of version 3.0, Kafka still needs Apache ZooKeeper when deployed in production.
Kafka version 2.8 and onwards includes a preview of Kafka Raft metadata mode, known as KRaft. With KRaft, there is no need for ZooKeeper, since Kafka itself is responsible for metadata management using a new "Event-Driven Consensus" mechanism.
Learn more about KRaft here.
As of Kafka 3.0, an early release of Kafka's KRaft mode is available to preview, but it is not ready for production workloads. Development is underway to fully support all of the features currently provided by ZooKeeper.
You can learn more about KRaft mode here.