
Apache Kafka® Quick Start

The guide below demonstrates how to quickly get started with Apache Kafka. You'll connect to a broker, create a topic, produce some messages, and consume them. Be sure to also check out the client code examples to learn more.

Confluent Cloud

1. Sign up for Confluent Cloud

First sign up for a free Confluent Cloud account.

Next, install the Confluent CLI. Navigate to the directory where you want to install it and run:

curl -sL --http1.1 https://cnfl.io/cli | sh -s -- latest
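
By default, the install script places the CLI in a bin subdirectory of the current directory. If that location is not on your PATH, you can add it; a minimal sketch, assuming a Unix-like shell and the default install location:

# Assumes the default install directory ./bin; adjust the path if you installed elsewhere.
export PATH="$(pwd)/bin:$PATH"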

After installing the CLI, log in to your Confluent Cloud account by running the following:

confluent login --prompt --save
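
If you want to confirm that the login succeeded before continuing, listing your environments is one quick check (the exact output depends on your account):

# Prints at least the default environment if you are logged in.
confluent environment list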

2. Create a Kafka cluster

Create a Basic Kafka cluster by entering the following command, where <provider> is one of aws, azure, or gcp, and <region> is a region ID available in the cloud provider you choose. You can view the available regions for a given cloud provider by running confluent kafka region list --cloud <provider>.

confluent kafka cluster create quickstart --cloud <provider> --region <region>

For example:

confluent kafka cluster create quickstart --cloud aws --region us-east-1
confluent kafka cluster create quickstart --cloud azure --region eastus
confluent kafka cluster create quickstart --cloud gcp --region us-east1
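
If you are unsure which region IDs are valid, the region listing mentioned above is the quickest check. For example, to see the region IDs available on AWS:

# Lists the region IDs you can pass to --region for AWS.
confluent kafka region list --cloud aws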

3. Wait for cluster to be running

It may take a few minutes for the cluster to be created. Validate that the cluster is running by ensuring that its Status is Up when you run the following command:

confluent kafka cluster list

For example:

confluent kafka cluster list
       Id      |    Name    | Type  | Provider |   Region    | Availability | Status
---------------+------------+-------+----------+-------------+--------------+---------
    lkc-123456 | quickstart | BASIC | gcp      | us-east1    | single-zone  | UP
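
You can also inspect a single cluster in more detail, including its bootstrap endpoint, with describe (a sketch using the example cluster ID above):

# Shows details such as the endpoint for the example cluster.
confluent kafka cluster describe lkc-123456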

4. Set active cluster

Make your cluster active in the CLI so that you don't need to specify it in later commands:

confluent kafka cluster use <cluster ID>

For example:

confluent kafka cluster use lkc-123456
Set Kafka cluster "lkc-123456" as the active cluster for environment "env-123456".

5. Create a topic

Create a topic named quickstart that has 1 partition:

confluent kafka topic create quickstart --partitions 1
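
To verify that the topic exists, you can list your topics or describe the new one; this is a quick sanity check, not required for the rest of the guide:

# Either command confirms the topic was created.
confluent kafka topic list
confluent kafka topic describe quickstart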

6. Create an API key

Create an API key for the cluster that you will use to produce and consume messages:

confluent api-key create --resource <cluster ID>

Then set it as the active API key so that you don't need to specify the API key and secret on the command line when producing and consuming messages:

confluent api-key use <API key> --resource <cluster ID>
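
To double-check which keys exist for the cluster, you can list them (a sketch; <cluster ID> is the same ID used above):

# Lists the API keys scoped to the cluster.
confluent api-key list --resource <cluster ID>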

7. Produce a message to the topic

Produce a message to the quickstart topic:

confluent kafka topic produce quickstart

Type the message hello world, press Enter, and then press Ctrl-C or Ctrl-D to exit:

Starting Kafka Producer. Use Ctrl-C or Ctrl-D to exit.
hello world
^C
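
The producer can also send keyed messages: the --parse-key and --delimiter flags split each input line into a record key and value. A sketch, assuming those flags are available in your CLI version (confirm with confluent kafka topic produce --help):

# Splits each input line on ":" into a key and a value, e.g. "1:hello world".
confluent kafka topic produce quickstart --parse-key --delimiter ":"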

8. Consume the message from the topic

Now consume the message that you just produced:

confluent kafka topic consume quickstart --from-beginning
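
The consumer should print the hello world message you produced and then keep waiting for new messages until you press Ctrl-C. If you run the consumer repeatedly and want offsets tracked across runs, the --group flag assigns a named consumer group (a sketch; my-group is an arbitrary example name):

# Consumes as part of a consumer group so that offsets are committed for later runs.
confluent kafka topic consume quickstart --group my-group --from-beginning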

What's Next

  • Build Apps
  • Build Pipelines
  • Operate

Tutorials with Full Code Examples

Learn the basics

Step through the basics of the CLI, Kafka topics, and building applications.

Explore top use cases

Run pre-built ksqlDB recipes that tackle the highest-impact use cases for stream processing.

Master advanced concepts

Learn how to route events, manipulate streams, aggregate data, and more.

Get Started with Kafka Clients

Write your first application using these full code examples in Java, Python, Go, .NET, Node.js, C/C++, REST, Spring Boot, and other languages and CLIs.
