
Kafka Operations FAQs

Frequently asked questions and answers about restarting, stopping, and checking the status of your Kafka deployment.

How to check Kafka broker status?

The most common way to monitor Kafka is by enabling JMX. JMX can be enabled by setting the JMX_PORT environment variable, for example:

JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties

Once JMX is enabled, standard Java tooling such as jconsole can be used to observe Kafka status.

The documentation provides more detail on monitoring Kafka, including the available metrics, and Confluent provides Confluent Control Center as an out-of-the-box Kafka cluster monitoring system.


If you want to inspect the health of a broker that is already running, and you have access to the server, you can check that the process is running:

jps | grep Kafka

And you can also check that it is listening for client connections (port 9092 by default):

nc -vz localhost 9092
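If nc isn't available, bash can perform the same TCP check with its built-in /dev/tcp pseudo-device. This is a minimal sketch, assuming a bash shell:

```shell
# Report "open" or "closed" for a host/port pair using bash's
# /dev/tcp feature (no external tools required). The connection is
# opened and immediately closed inside a subshell.
check_port() {
  ( exec 3<>"/dev/tcp/$1/$2" ) 2>/dev/null && echo open || echo closed
}

# Prints "open" if a broker is listening on 9092, otherwise "closed".
check_port localhost 9092
```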

What ports does Kafka use?

The default ports used for Kafka and for services in the Kafka ecosystem are as follows:

Service               Default Port
Kafka Clients         9092
Kafka Control Plane   9093
ZooKeeper             2181
Kafka Connect         8083
Schema Registry       8081
REST Proxy            8082
ksqlDB                8088

By default, Kafka listens for client connections on port 9092. The listeners configuration is used to set different or additional client ports. For more details on configuring Kafka listeners for access across networks, see this blog about advertised.listeners.
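As an illustration, a broker that must be reachable both from other brokers on an internal network and from external clients on the host might define two listeners. The listener names, hostnames, and the external port below are illustrative values, not defaults:

```properties
# server.properties — illustrative listener setup
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092
advertised.listeners=INTERNAL://broker:9092,EXTERNAL://localhost:29092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

Clients connecting from outside resolve the EXTERNAL advertised address, while brokers talk to each other over the INTERNAL listener.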

How do I find out the Kafka version?

If you have terminal access to the broker machine, you can pass the --version flag to many of the Kafka commands to see the version. For example:

bin/kafka-topics.sh --version
3.0.0 (Commit:8cb0a5e9d3441962)

If your Kafka broker has remote JMX enabled, you can obtain the version with a JMX query, for example:

bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name kafka.server:type=app-info \
  --attributes version --one-time true
Trying to connect to JMX url: service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi.
"time","kafka.server:type=app-info:version"
1638974783597,3.0.0

How to restart Kafka

If you need to do software upgrades, broker configuration updates, or cluster maintenance, you will need to restart all of the brokers in your Kafka cluster. The recommended approach is a rolling restart: restarting the brokers one at a time preserves high availability by avoiding downtime for your end users.

See the rolling restart documentation for a detailed workflow, including considerations and tips for other cluster maintenance tasks.
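The shape of a rolling restart can be sketched as a loop over the brokers. The hostnames below are hypothetical, and a real workflow would also wait for under-replicated partitions to drain to zero before moving on to the next broker:

```shell
# Hypothetical broker hostnames; replace with your own inventory.
BROKERS="kafka1 kafka2 kafka3"

for b in $BROKERS; do
  echo "restarting $b"
  # ssh "$b" 'bin/kafka-server-stop.sh'
  # ssh "$b" 'bin/kafka-server-start.sh -daemon config/server.properties'
  # In a real run, poll here until the cluster reports zero
  # under-replicated partitions before restarting the next broker.
done
```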

How to stop Kafka

Use the kafka-server-stop.sh script located in the installation path's bin directory:

bin/kafka-server-stop.sh

This works if you've installed Kafka from the Confluent Platform or Apache Kafka tarballs.

For more details and other installation options such as RPM and Debian see the documentation and Confluent Developer.

Where are Kafka's data files?

The directory where Kafka stores data is set by the configuration log.dir on the broker.

By default this is /tmp/kafka-logs. If you are using the RPM or Debian installation of Confluent Platform, the default data directory is /var/lib/kafka.
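For example, to move data off /tmp you could set the data directory explicitly. The path below is illustrative; note that log.dirs, if set, takes precedence over log.dir and accepts a comma-separated list of directories:

```properties
# server.properties — illustrative data directory
log.dirs=/data/kafka-logs
```

Inside the data directory, each topic partition gets its own subdirectory named `<topic>-<partition>` containing the log segment files.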

What is ZooKeeper?

Apache ZooKeeper™ is a service for coordinating configuration, naming, and other synchronization tasks for distributed systems.

What is ZooKeeper used for in Kafka?

ZooKeeper provides the authoritative store of metadata holding the system’s most important facts: broker information, partition locations, replica leadership, and so on.

As of Apache Kafka 3.3, Kafka can also be run in KRaft mode for new production clusters. With KRaft mode, Kafka handles metadata management instead of ZooKeeper.

How many ZooKeeper nodes for Kafka?

Generally, production environments can start with a small cluster of three nodes and scale as necessary. Specifically, ZooKeeper should be deployed on 2n + 1 nodes, where n is any number greater than 0. The odd number of servers is required so that ZooKeeper can form a majority quorum for leader election.
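A three-node ensemble is typically configured with a zoo.cfg along these lines. The hostnames are hypothetical; each server line names a peer together with its quorum and leader-election ports:

```properties
# zoo.cfg — three-node ensemble (hypothetical hostnames)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```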

Did you know you may not even need ZooKeeper?

  • KRaft mode, where Kafka itself is used for metadata management instead of ZooKeeper, is production ready for new clusters as of Apache Kafka 3.3. See this KRaft explanation for details.
  • Confluent Cloud provides a fully managed Kafka service so you don't have to be concerned with either ZooKeeper or KRaft.

Can we use Kafka without ZooKeeper?

Kafka versions 2.8 and onward include a preview of Kafka Raft metadata mode, known as KRaft. With KRaft, there is no need for ZooKeeper, since Kafka itself is responsible for metadata management using a new "Event-Driven Consensus" mechanism.

With the 3.5 release, KRaft is production ready and you can migrate Kafka clusters from ZooKeeper to KRaft mode, although migration is in preview and intended only for testing and non-production clusters.

In a future release, 4.0, Kafka will ship only in KRaft mode; this is tentatively scheduled for April 2024 but subject to change.

Learn more about KRaft here.
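As an illustration, a single-node KRaft cluster running the broker and controller roles in one process might use a configuration along these lines (a sketch with illustrative values, similar in shape to the sample KRaft properties shipped with Kafka):

```properties
# server.properties — KRaft combined mode, single node (illustrative)
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
inter.broker.listener.name=PLAINTEXT
log.dirs=/tmp/kraft-combined-logs
```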

Is KRaft ready for production use?

As of the Kafka 3.3 release KRaft is considered production ready for new clusters. With the 3.5 release you can now migrate Kafka clusters from ZK to KRaft mode (Preview only, for testing or non-production clusters).

In a future release, 4.0, Kafka will ship only in KRaft mode; this is tentatively scheduled for April 2024 but subject to change.

There are a few features left to implement in KRaft, and development is underway to reach feature parity with ZooKeeper mode. KIP-833 details the features to come and the major milestones on the path to a ZooKeeper-free Kafka.

You can learn more about KRaft mode here.

Learn more with these free training courses

Apache Kafka® 101

Learn how Kafka works, how to use it, and how to get started.

Spring Framework and Apache Kafka®

This hands-on course will show you how to build event-driven applications with Spring Boot and Kafka Streams.

Building Data Pipelines with Apache Kafka® and Confluent

Build a scalable, streaming data pipeline in under 20 minutes using Kafka and Confluent.

Confluent Cloud is a fully managed Apache Kafka service available on all three major clouds. Try it for free today.
