How do you get started building your first Kafka consumer application?
This tutorial requires access to an Apache Kafka cluster, and the quickest way to get started free is on Confluent Cloud, which provides Kafka as a fully managed service.
After you log in to Confluent Cloud, click Environments in the lefthand navigation, click on Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud (details). To avoid having to enter a credit card, also add the promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.
Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.
Make a local directory anywhere you’d like for this project:
mkdir kafka-consumer-application && cd kafka-consumer-application
Next, create a directory for configuration data:
mkdir configuration
From the Confluent Cloud Console, navigate to your Kafka cluster and then select Clients in the lefthand navigation. From the Clients view, create a new client and click Java to get the connection information customized to your cluster.
Create new credentials for your Kafka cluster and Schema Registry, writing in appropriate descriptions so that the keys are easy to find and delete later. The Confluent Cloud Console will show a configuration similar to below with your new credentials automatically populated (make sure Show API keys is checked).
Copy and paste it into a configuration/ccloud.properties file on your machine.
# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BOOTSTRAP_SERVERS }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips
# Best practice for Kafka producer to prevent data loss
acks=all
# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url={{ SR_URL }}
basic.auth.credentials.source=USER_INFO
basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}
Do not directly copy and paste the above configuration. You must copy it from the Confluent Cloud Console so that it includes your Confluent Cloud information and credentials.
This tutorial has some steps for Kafka topic management and for producing and consuming events, for which you can use the Confluent Cloud Console or the Confluent CLI. Follow the instructions here to install the Confluent CLI, and then follow these steps to connect the CLI to your Confluent Cloud cluster.
In this step we’re going to create a topic for use during this tutorial. Use the following command to create the topic:
confluent kafka topic create input-topic
Create the following Gradle build file, named build.gradle, for the project:
buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath "gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0"
  }
}

plugins {
  id "java"
  id "idea"
  id "eclipse"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"

repositories {
  mavenCentral()
  maven {
    url "https://packages.confluent.io/maven"
  }
}

apply plugin: "com.github.johnrengelman.shadow"

dependencies {
  implementation "org.slf4j:slf4j-simple:2.0.7"
  implementation "org.apache.kafka:kafka-clients:3.4.0"
  testImplementation "junit:junit:4.13.2"
  testImplementation 'org.hamcrest:hamcrest:2.2'
}

test {
  testLogging {
    outputs.upToDateWhen { false }
    showStandardStreams = true
    exceptionFormat = "full"
  }
}

jar {
  manifest {
    attributes(
      "Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
      "Main-Class": "io.confluent.developer.KafkaConsumerApplication"
    )
  }
}

shadowJar {
  archiveBaseName = "kafka-consumer-application-standalone"
  archiveClassifier = ''
}
And be sure to run the following command to obtain the Gradle wrapper:
gradle wrapper
Then create a development configuration file at configuration/dev.properties:
# Consumer properties
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
max.poll.interval.ms=300000
enable.auto.commit=true
auto.offset.reset=earliest
group.id=consumer-application
# Application specific properties
file.path=consumer-records.out
input.topic.name=input-topic
Let’s do a quick overview of some of the more important properties here:
The key.deserializer and value.deserializer properties provide a class implementing the Deserializer interface for converting byte arrays into the expected object type of the key and value, respectively.

max.poll.interval.ms is the maximum amount of time a consumer may take between calls to Consumer.poll(). If a consumer instance takes longer than the specified time, it’s considered non-responsive and removed from the consumer group, triggering a rebalance.

Setting the enable.auto.commit configuration to true enables the Kafka consumer to handle committing offsets automatically for you. The default setting is true, but it’s included here to make it explicit. When you enable auto commit, you need to ensure you’ve processed all records before the consumer calls poll again. Once there is a subsequent call to poll, all the records returned from the previous call are considered processed and the consumer commits the offsets.

auto.offset.reset - Since this tutorial sets the value to earliest, a consumer instance that can’t locate any offsets for its topic-partition assignment(s) will resume processing from the earliest available offset.

group.id - Kafka uses the concept of a consumer group to represent a single logical group of consumers. A consumer group can be made up of multiple members, all sharing the same group.id configuration. As members leave or join the consumer group, the group coordinator triggers a rebalance, which causes topic-partition reassignment among the active members of the group.
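To make these settings concrete, here is a minimal sketch of how the same consumer configuration could be built in code using the ConsumerConfig constants from the kafka-clients library. The ConsumerConfigExample class name is hypothetical and is not part of this tutorial, which loads these settings from configuration/dev.properties instead.
package io.confluent.developer;

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical helper, shown only to illustrate the properties discussed above.
public class ConsumerConfigExample {

  public static Properties consumerProperties() {
    final Properties props = new Properties();
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000"); // five minutes between poll() calls
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");     // commit offsets automatically
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");  // start from the earliest offset when none exists
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer-application");
    return props;
  }
}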
Using the command below, append the contents of configuration/ccloud.properties (with your Confluent Cloud configuration) to configuration/dev.properties (with the application properties).
cat configuration/ccloud.properties >> configuration/dev.properties
Create a directory for the Java files in this project:
mkdir -p src/main/java/io/confluent/developer
To complete this tutorial, you’ll build a main application class and a helper class. First, you’ll create the main application, KafkaConsumerApplication, which is the focal point of this tutorial: consuming records from a Kafka topic.
Let’s go over some of the key parts of the KafkaConsumerApplication, starting with the constructor:
public KafkaConsumerApplication(final Consumer<String, String> consumer,
                                final ConsumerRecordsHandler<String, String> recordsHandler) { (1)
  this.consumer = consumer;
  this.recordsHandler = recordsHandler;
}
1. Here you’re supplying instances of the Consumer and ConsumerRecordsHandler via constructor parameters.
By using interfaces vs. concrete implementations, you can more easily test the KafkaConsumerApplication class by swapping in a MockConsumer for the test. We’ll cover testing in an upcoming section. Also, interfaces make it simple to change ConsumerRecord handling at run-time.
In this tutorial you’ll inject the dependencies in the KafkaConsumerApplication.main() method, but in practice you may want to use a dependency injection framework library, such as the Spring Framework.
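For illustration only, here is a minimal sketch of what that wiring might look like with Spring. The ConsumerWiring class is hypothetical, is not part of this tutorial, and assumes the spring-context dependency is on the classpath.
package io.confluent.developer;

import java.nio.file.Paths;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical Spring configuration; the tutorial itself wires dependencies by hand in main().
@Configuration
public class ConsumerWiring {

  @Bean
  public Consumer<String, String> consumer() throws Exception {
    final Properties props = KafkaConsumerApplication.loadProperties("configuration/dev.properties");
    return new KafkaConsumer<>(props);
  }

  @Bean
  public ConsumerRecordsHandler<String, String> recordsHandler() {
    return new FileWritingRecordsHandler(Paths.get("consumer-records.out"));
  }

  @Bean
  public KafkaConsumerApplication consumerApplication(final Consumer<String, String> consumer,
                                                      final ConsumerRecordsHandler<String, String> recordsHandler) {
    return new KafkaConsumerApplication(consumer, recordsHandler);
  }
}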
Next, let’s review the KafkaConsumerApplication.runConsume() method, which provides the core functionality of this tutorial.
public void runConsume(final Properties consumerProps) {
  try {
    consumer.subscribe(Collections.singletonList(consumerProps.getProperty("input.topic.name"))); (1)
    while (keepConsuming) { (2)
      final ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofSeconds(1)); (3)
      recordsHandler.process(consumerRecords); (4)
    }
  } finally {
    consumer.close(); (5)
  }
}
1. Subscribing to the Kafka topic.
2. Using an instance variable keepConsuming to run the Kafka consumer indefinitely. The KafkaConsumerApplication.shutdown() method sets keepConsuming to false.
3. Polling for new records, waiting at most one second for them. The Consumer.poll() method may return zero results. The consumer is expected to call poll() again within five minutes, per the max.poll.interval.ms config described in step three, "Configure the project".
4. Handing off the polled ConsumerRecords to the ConsumerRecordsHandler interface.
5. Closing the consumer is essential to prevent resource leaks, hence the finally block.
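As a side note on the enable.auto.commit discussion above: if you want explicit control over when offsets are committed, one common variation (not used in this tutorial) is to disable auto commit and call commitSync() after each batch has been processed. A minimal sketch of such a variant, assuming enable.auto.commit=false in the consumer configuration:
// Hypothetical variant of runConsume() with manual offset commits; not part of the tutorial code.
public void runConsumeWithManualCommits(final Properties consumerProps) {
  try {
    consumer.subscribe(Collections.singletonList(consumerProps.getProperty("input.topic.name")));
    while (keepConsuming) {
      final ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofSeconds(1));
      recordsHandler.process(consumerRecords);
      if (!consumerRecords.isEmpty()) {
        consumer.commitSync(); // commit offsets only after the batch has been fully processed
      }
    }
  } finally {
    consumer.close();
  }
}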
Now go ahead and create the src/main/java/io/confluent/developer/KafkaConsumerApplication.java file:
package io.confluent.developer;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Paths;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
public class KafkaConsumerApplication {

  private volatile boolean keepConsuming = true;
  private ConsumerRecordsHandler<String, String> recordsHandler;
  private Consumer<String, String> consumer;

  public KafkaConsumerApplication(final Consumer<String, String> consumer,
                                  final ConsumerRecordsHandler<String, String> recordsHandler) {
    this.consumer = consumer;
    this.recordsHandler = recordsHandler;
  }

  public void runConsume(final Properties consumerProps) {
    try {
      consumer.subscribe(Collections.singletonList(consumerProps.getProperty("input.topic.name")));
      while (keepConsuming) {
        final ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofSeconds(1));
        recordsHandler.process(consumerRecords);
      }
    } finally {
      consumer.close();
    }
  }

  public void shutdown() {
    keepConsuming = false;
  }
  public static Properties loadProperties(String fileName) throws IOException {
    final Properties props = new Properties();
    // try-with-resources ensures the stream is closed even if loading the file fails
    try (FileInputStream input = new FileInputStream(fileName)) {
      props.load(input);
    }
    return props;
  }
  public static void main(String[] args) throws Exception {
    if (args.length < 1) {
      throw new IllegalArgumentException(
          "This program takes one argument: the path to an environment configuration file.");
    }

    final Properties consumerAppProps = KafkaConsumerApplication.loadProperties(args[0]);
    final String filePath = consumerAppProps.getProperty("file.path");
    final Consumer<String, String> consumer = new KafkaConsumer<>(consumerAppProps);
    final ConsumerRecordsHandler<String, String> recordsHandler = new FileWritingRecordsHandler(Paths.get(filePath));
    final KafkaConsumerApplication consumerApplication = new KafkaConsumerApplication(consumer, recordsHandler);

    Runtime.getRuntime().addShutdownHook(new Thread(consumerApplication::shutdown));
    consumerApplication.runConsume(consumerAppProps);
  }
}
To complete this tutorial, you’ll also need to create an interface for a helper class. First, create the interface at src/main/java/io/confluent/developer/ConsumerRecordsHandler.java:
package io.confluent.developer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
public interface ConsumerRecordsHandler<K, V> {
  void process(ConsumerRecords<K, V> consumerRecords);
}
Using an interface will make it easier to change how you want to work with ConsumerRecords without having to modify all of your existing code.
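For example, a hypothetical handler that simply prints each value to standard output (not part of this tutorial) could be swapped in without touching KafkaConsumerApplication:
package io.confluent.developer;

import org.apache.kafka.clients.consumer.ConsumerRecords;

// Hypothetical alternative implementation, shown only to illustrate swapping handlers.
public class PrintingRecordsHandler implements ConsumerRecordsHandler<String, String> {

  @Override
  public void process(final ConsumerRecords<String, String> consumerRecords) {
    consumerRecords.forEach(record -> System.out.println(record.value()));
  }
}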
Next, you’ll create an implementation of the ConsumerRecordsHandler interface named FileWritingRecordsHandler, but before you do that, let’s take a peek under the hood to understand how the helper class works. The FileWritingRecordsHandler is a simple class that writes the values of consumed records to a file, so it’s worth a quick review of the process method:
@Override
public void process(final ConsumerRecords<String, String> consumerRecords) {
  final List<String> valueList = new ArrayList<>();
  consumerRecords.forEach(record -> valueList.add(record.value())); (1)
  if (!valueList.isEmpty()) { (2)
    try {
      Files.write(path, valueList, StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND); (3)
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
1. Iterate over all of the records and store each record’s value in a List.
2. If the List isn’t empty, let’s do something!
3. Pass the List<String> of records to the Files.write() method.
In practice, you’re likely to perform a more realistic workload here.
Now go ahead and create the src/main/java/io/confluent/developer/FileWritingRecordsHandler.java file:
package io.confluent.developer;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecords;
public class FileWritingRecordsHandler implements ConsumerRecordsHandler<String, String> {

  private final Path path;

  public FileWritingRecordsHandler(final Path path) {
    this.path = path;
  }

  @Override
  public void process(final ConsumerRecords<String, String> consumerRecords) {
    final List<String> valueList = new ArrayList<>();
    consumerRecords.forEach(record -> valueList.add(record.value()));
    if (!valueList.isEmpty()) {
      try {
        Files.write(path, valueList, StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
}
In your terminal, run:
./gradlew shadowJar
Now that you have an uberjar for the KafkaConsumerApplication, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it. There is always another message to process, so streaming applications don’t exit until you force them.
java -jar build/libs/kafka-consumer-application-standalone-0.0.1.jar configuration/dev.properties
Using a terminal window, run the following command to start a Confluent CLI producer:
confluent kafka topic produce input-topic
Each line represents input data for the KafkaConsumer application. To send all of the events below, paste the following into the prompt and press enter:
the quick brown fox
jumped over
the lazy dog
Go to Kafka Summit
All streams lead
to Kafka
Enter Ctrl-C to exit.
Your consumer application should have consumed all the records sent and written them out to a file.
In a new terminal, run this command to print the results to the console:
cat consumer-records.out
You should see something like this:
the quick brown fox
jumped over
the lazy dog
Go to Kafka Summit
All streams lead
to Kafka
You may try another tutorial, but if you don’t plan on doing other tutorials, use the Confluent Cloud Console or CLI to destroy all of the resources you created. Verify they are destroyed to avoid unexpected charges.
First, create a test file at configuration/test.properties:
input.topic.name=input-topic
input.topic.partitions=1
input.topic.replication.factor=1
Create a directory for the tests to live in:
mkdir -p src/test/java/io/confluent/developer
Testing a Kafka consumer application is not too complicated thanks to MockConsumer. Since the KafkaConsumer is well tested, we don’t need to use a live consumer and Kafka broker. We can simply use a MockConsumer to process some data you’ll feed into it.
There is only one method in KafkaConsumerApplicationTest annotated with @Test, and that is consumerTest(). This method actually runs your KafkaConsumerApplication with the mock consumer.
Now create the following file at src/test/java/io/confluent/developer/KafkaConsumerApplicationTest.java:
package io.confluent.developer;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;
import org.junit.Test;
public class KafkaConsumerApplicationTest {

  private final static String TEST_CONFIG_FILE = "configuration/test.properties";

  @Test
  public void consumerTest() throws Exception {
    final Path tempFilePath = Files.createTempFile("test-consumer-output", ".out");
    final ConsumerRecordsHandler<String, String> recordsHandler = new FileWritingRecordsHandler(tempFilePath);
    final Properties testConsumerProps = KafkaConsumerApplication.loadProperties(TEST_CONFIG_FILE);
    final String topic = testConsumerProps.getProperty("input.topic.name");
    final TopicPartition topicPartition = new TopicPartition(topic, 0);
    final MockConsumer<String, String> mockConsumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
    final KafkaConsumerApplication consumerApplication = new KafkaConsumerApplication(mockConsumer, recordsHandler);

    mockConsumer.schedulePollTask(() -> addTopicPartitionsAssignmentAndAddConsumerRecords(topic, mockConsumer, topicPartition));
    mockConsumer.schedulePollTask(consumerApplication::shutdown);
    consumerApplication.runConsume(testConsumerProps);

    final List<String> expectedWords = Arrays.asList("foo", "bar", "baz");
    List<String> actualRecords = Files.readAllLines(tempFilePath);
    assertThat(actualRecords, equalTo(expectedWords));
  }

  private void addTopicPartitionsAssignmentAndAddConsumerRecords(final String topic,
                                                                 final MockConsumer<String, String> mockConsumer,
                                                                 final TopicPartition topicPartition) {
    final Map<TopicPartition, Long> beginningOffsets = new HashMap<>();
    beginningOffsets.put(topicPartition, 0L);
    mockConsumer.rebalance(Collections.singletonList(topicPartition));
    mockConsumer.updateBeginningOffsets(beginningOffsets);
    addConsumerRecords(mockConsumer, topic);
  }

  private void addConsumerRecords(final MockConsumer<String, String> mockConsumer, final String topic) {
    mockConsumer.addRecord(new ConsumerRecord<>(topic, 0, 0, null, "foo"));
    mockConsumer.addRecord(new ConsumerRecord<>(topic, 0, 1, null, "bar"));
    mockConsumer.addRecord(new ConsumerRecord<>(topic, 0, 2, null, "baz"));
  }
}
Now let’s build a test for the ConsumerRecordsHandler implementation used in your application. Even though we have a test for the KafkaConsumerApplication, it’s important that you can test this helper class in isolation.
Create the following file at src/test/java/io/confluent/developer/FileWritingRecordsHandlerTest.java:
package io.confluent.developer;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;
import org.junit.Test;
public class FileWritingRecordsHandlerTest {

  @Test
  public void testProcess() throws IOException {
    final Path tempFilePath = Files.createTempFile("test-handler", ".out");
    try {
      final ConsumerRecordsHandler<String, String> recordsHandler = new FileWritingRecordsHandler(tempFilePath);
      recordsHandler.process(createConsumerRecords());
      final List<String> expectedWords = Arrays.asList("it's but", "a flesh wound", "come back");
      List<String> actualRecords = Files.readAllLines(tempFilePath);
      assertThat(actualRecords, equalTo(expectedWords));
    } finally {
      Files.deleteIfExists(tempFilePath);
    }
  }

  private ConsumerRecords<String, String> createConsumerRecords() {
    final String topic = "test";
    final int partition = 0;
    final TopicPartition topicPartition = new TopicPartition(topic, partition);
    final List<ConsumerRecord<String, String>> consumerRecordsList = new ArrayList<>();
    consumerRecordsList.add(new ConsumerRecord<>(topic, partition, 0, null, "it's but"));
    consumerRecordsList.add(new ConsumerRecord<>(topic, partition, 0, null, "a flesh wound"));
    consumerRecordsList.add(new ConsumerRecord<>(topic, partition, 0, null, "come back"));
    final Map<TopicPartition, List<ConsumerRecord<String, String>>> recordsMap = new HashMap<>();
    recordsMap.put(topicPartition, consumerRecordsList);
    return new ConsumerRecords<>(recordsMap);
  }
}
Now run the test, which is as simple as:
./gradlew test