How do you filter messages in a Kafka topic to contain only those that you're interested in?
Use the .filter() method, as seen below. It takes a boolean function of each record’s key and value, and that function determines whether each event passes through to the next stage of the topology.
builder.stream(inputTopic, Consumed.with(Serdes.String(), publicationSerde))
    .filter((name, publication) -> "George R. R. Martin".equals(publication.getName()))
    .to(outputTopic, Produced.with(Serdes.String(), publicationSerde));
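If you want the opposite behavior, dropping the matching records and keeping everything else, KStream also provides a filterNot method that takes the same kind of predicate. A minimal sketch using the same topics and serdes:
builder.stream(inputTopic, Consumed.with(Serdes.String(), publicationSerde))
    // keep every publication except those authored by George R. R. Martin
    .filterNot((name, publication) -> "George R. R. Martin".equals(publication.getName()))
    .to(outputTopic, Produced.with(Serdes.String(), publicationSerde));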
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it.
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl.
• Verify that Docker is set up properly by ensuring that no errors are output when you run docker info and docker compose version on the command line.
Make a local directory anywhere you’d like for this project:
mkdir filter-events && cd filter-events
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk

  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN
And launch it by running:
docker compose up -d
Create the following Gradle build file, named build.gradle, for the project:
buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath "gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0"
  }
}

plugins {
  id "java"
  id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"

repositories {
  mavenCentral()

  maven {
    url "https://packages.confluent.io/maven"
  }
}

apply plugin: "com.github.johnrengelman.shadow"

dependencies {
  implementation "org.apache.avro:avro:1.11.1"
  implementation "org.slf4j:slf4j-simple:2.0.7"
  implementation 'org.apache.kafka:kafka-streams:3.4.0'
  implementation ('org.apache.kafka:kafka-clients') {
    version {
      strictly '3.4.0'
    }
  }
  implementation "io.confluent:kafka-streams-avro-serde:7.3.0"
  testImplementation "org.apache.kafka:kafka-streams-test-utils:3.4.0"
  testImplementation "junit:junit:4.13.2"
}

test {
  testLogging {
    outputs.upToDateWhen { false }
    showStandardStreams = true
    exceptionFormat = "full"
  }
}

jar {
  manifest {
    attributes(
        "Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
        "Main-Class": "io.confluent.developer.FilterEvents"
    )
  }
}

shadowJar {
  archiveBaseName = "kstreams-filter-standalone"
  archiveClassifier = ''
}
And be sure to run the following command to obtain the Gradle wrapper:
gradle wrapper
Next, create a directory for configuration data:
mkdir configuration
Then create a development file at configuration/dev.properties:
application.id=filtering-app
bootstrap.servers=127.0.0.1:29092
schema.registry.url=http://127.0.0.1:8081
input.topic.name=publications
input.topic.partitions=1
input.topic.replication.factor=1
output.topic.name=filtered-publications
output.topic.partitions=1
output.topic.replication.factor=1
Create a directory for the schemas that represent the events in the stream:
mkdir -p src/main/avro
Then create the following Avro schema file at src/main/avro/publication.avsc for the publication events:
{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "Publication",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "title", "type": "string"}
  ]
}
Because this Avro schema is used in the Java code, it needs to be compiled. Run the following:
./gradlew build
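The build runs the Avro Gradle plugin, which generates a Publication class in the io.confluent.developer.avro package from this schema. As a quick illustration (the values here are just sample data), the generated class can be created either with its all-arguments constructor or with its builder:
Publication fireAndBlood = new Publication("George R. R. Martin", "Fire & Blood");
Publication iceDragon = Publication.newBuilder()
    .setName("George R. R. Martin")
    .setTitle("The Ice Dragon")
    .build();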
Create a directory for the Java files in this project:
mkdir -p src/main/java/io/confluent/developer
Then create the following file at src/main/java/io/confluent/developer/FilterEvents.java. Notice the buildTopology method, which uses the Kafka Streams DSL. The filter method takes a boolean function of each record’s key and value. The function you give it determines whether to pass each event through to the next stage of the topology. In this case, we’re only interested in books authored by George R. R. Martin.
package io.confluent.developer;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.time.Duration;

import io.confluent.common.utils.TestUtils;
import io.confluent.developer.avro.Publication;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

import static io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG;

public class FilterEvents {

  private SpecificAvroSerde<Publication> publicationSerde(final Properties allProps) {
    final SpecificAvroSerde<Publication> serde = new SpecificAvroSerde<>();
    Map<String, String> config = (Map)allProps;
    serde.configure(config, false);
    return serde;
  }

  public Topology buildTopology(Properties allProps,
                                final SpecificAvroSerde<Publication> publicationSerde) {
    final StreamsBuilder builder = new StreamsBuilder();
    final String inputTopic = allProps.getProperty("input.topic.name");
    final String outputTopic = allProps.getProperty("output.topic.name");

    builder.stream(inputTopic, Consumed.with(Serdes.String(), publicationSerde))
        .filter((name, publication) -> "George R. R. Martin".equals(publication.getName()))
        .to(outputTopic, Produced.with(Serdes.String(), publicationSerde));

    return builder.build();
  }

  public void createTopics(Properties allProps) {
    AdminClient client = AdminClient.create(allProps);
    List<NewTopic> topics = new ArrayList<>();
    topics.add(new NewTopic(
        allProps.getProperty("input.topic.name"),
        Integer.parseInt(allProps.getProperty("input.topic.partitions")),
        Short.parseShort(allProps.getProperty("input.topic.replication.factor"))));
    topics.add(new NewTopic(
        allProps.getProperty("output.topic.name"),
        Integer.parseInt(allProps.getProperty("output.topic.partitions")),
        Short.parseShort(allProps.getProperty("output.topic.replication.factor"))));
    client.createTopics(topics);
    client.close();
  }

  public Properties loadEnvProperties(String fileName) throws IOException {
    Properties allProps = new Properties();
    FileInputStream input = new FileInputStream(fileName);
    allProps.load(input);
    input.close();
    return allProps;
  }

  public static void main(String[] args) throws IOException {
    if (args.length < 1) {
      throw new IllegalArgumentException(
          "This program takes one argument: the path to an environment configuration file.");
    }
    new FilterEvents().runRecipe(args[0]);
  }

  private void runRecipe(final String configPath) throws IOException {
    final Properties allProps = new Properties();
    try (InputStream inputStream = new FileInputStream(configPath)) {
      allProps.load(inputStream);
    }
    allProps.put(StreamsConfig.APPLICATION_ID_CONFIG, allProps.getProperty("application.id"));
    allProps.put(StreamsConfig.STATE_DIR_CONFIG, TestUtils.tempDirectory().getPath());

    Topology topology = this.buildTopology(allProps, this.publicationSerde(allProps));
    this.createTopics(allProps);

    final KafkaStreams streams = new KafkaStreams(topology, allProps);
    final CountDownLatch latch = new CountDownLatch(1);

    // Attach shutdown handler to catch Control-C.
    Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
      @Override
      public void run() {
        streams.close(Duration.ofSeconds(5));
        latch.countDown();
      }
    });

    try {
      streams.start();
      latch.await();
    } catch (Throwable e) {
      System.exit(1);
    }
    System.exit(0);
  }
}
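If you’d like to inspect the processor topology that the DSL builds here, Topology exposes a describe() method. As a quick sketch, you could temporarily add the following line inside runRecipe, right after buildTopology is called:
// Prints the source, filter, and sink nodes of the topology to stdout.
System.out.println(topology.describe());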
In your terminal, run:
./gradlew shadowJar
Now that an uberjar for the Kafka Streams application has been built, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it:
java -jar build/libs/kstreams-filter-standalone-0.0.1.jar configuration/dev.properties
In a new terminal, run:
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic publications --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/publication.avsc)"
When the console producer starts, it will log some messages and hang, waiting for your input. Type in one line at a time and press enter to send it. Each line represents an event. To send all of the events below, paste the following into the prompt and press enter:
{"name": "George R. R. Martin", "title": "A Song of Ice and Fire"}
{"name": "C.S. Lewis", "title": "The Silver Chair"}
{"name": "C.S. Lewis", "title": "Perelandra"}
{"name": "George R. R. Martin", "title": "Fire & Blood"}
{"name": "J. R. R. Tolkien", "title": "The Hobbit"}
{"name": "J. R. R. Tolkien", "title": "The Lord of the Rings"}
{"name": "George R. R. Martin", "title": "A Dream of Spring"}
{"name": "J. R. R. Tolkien", "title": "The Fellowship of the Ring"}
{"name": "George R. R. Martin", "title": "The Ice Dragon"}
Leaving your original terminal running, open another to consume the events that have been filtered by your application:
docker exec -it schema-registry /usr/bin/kafka-avro-console-consumer --topic filtered-publications --bootstrap-server broker:9092 --from-beginning
After the consumer starts, you should see the following messages. The prompt will hang, waiting for more events to arrive. To continue studying the example, send more events through the input terminal prompt. Otherwise, you can press Control-C to exit the process.
{"name":"George R. R. Martin","title":"A Song of Ice and Fire"}
{"name":"George R. R. Martin","title":"Fire & Blood"}
{"name":"George R. R. Martin","title":"A Dream of Spring"}
{"name":"George R. R. Martin","title":"The Ice Dragon"}
^CProcessed a total of 4 messages
First, create a test file at configuration/test.properties (the mock:// Schema Registry URL tells the Avro serde to use an in-memory mock registry, so the tests don’t need a running Schema Registry):
application.id=filtering-app
bootstrap.servers=127.0.0.1:29092
schema.registry.url=mock://SR_CLOUD_DUMMY_URL:8081
input.topic.name=publications
input.topic.partitions=1
input.topic.replication.factor=1
output.topic.name=filtered-publications
output.topic.partitions=1
output.topic.replication.factor=1
Then, create a directory for the tests to live in:
mkdir -p src/test/java/io/confluent/developer
Create the following test file at src/test/java/io/confluent/developer/FilterEventsTest.java:
package io.confluent.developer;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;

import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.stream.Collectors;

import io.confluent.developer.avro.Publication;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

import static java.util.Arrays.asList;

public class FilterEventsTest {

  private final static String TEST_CONFIG_FILE = "configuration/test.properties";
  private TopologyTestDriver testDriver;

  private SpecificAvroSerde<Publication> makeSerializer(Properties allProps) {
    SpecificAvroSerde<Publication> serde = new SpecificAvroSerde<>();

    Map<String, String> config = new HashMap<>();
    config.put("schema.registry.url", allProps.getProperty("schema.registry.url"));
    serde.configure(config, false);

    return serde;
  }

  @Test
  public void shouldFilterGRRMartinsBooks() throws IOException {
    FilterEvents fe = new FilterEvents();
    Properties allProps = fe.loadEnvProperties(TEST_CONFIG_FILE);

    String inputTopic = allProps.getProperty("input.topic.name");
    String outputTopic = allProps.getProperty("output.topic.name");

    final SpecificAvroSerde<Publication> publicationSpecificAvroSerde = makeSerializer(allProps);

    Topology topology = fe.buildTopology(allProps, publicationSpecificAvroSerde);
    testDriver = new TopologyTestDriver(topology, allProps);

    Serializer<String> keySerializer = Serdes.String().serializer();
    Deserializer<String> keyDeserializer = Serdes.String().deserializer();

    // Fixture
    Publication iceAndFire = new Publication("George R. R. Martin", "A Song of Ice and Fire");
    Publication silverChair = new Publication("C.S. Lewis", "The Silver Chair");
    Publication perelandra = new Publication("C.S. Lewis", "Perelandra");
    Publication fireAndBlood = new Publication("George R. R. Martin", "Fire & Blood");
    Publication theHobbit = new Publication("J. R. R. Tolkien", "The Hobbit");
    Publication lotr = new Publication("J. R. R. Tolkien", "The Lord of the Rings");
    Publication dreamOfSpring = new Publication("George R. R. Martin", "A Dream of Spring");
    Publication fellowship = new Publication("J. R. R. Tolkien", "The Fellowship of the Ring");
    Publication iceDragon = new Publication("George R. R. Martin", "The Ice Dragon");
    // end Fixture

    final List<Publication>
        input = asList(iceAndFire, silverChair, perelandra, fireAndBlood, theHobbit, lotr, dreamOfSpring, fellowship,
                       iceDragon);

    final List<Publication> expectedOutput = asList(iceAndFire, fireAndBlood, dreamOfSpring, iceDragon);

    testDriver.createInputTopic(inputTopic, keySerializer, publicationSpecificAvroSerde.serializer())
        .pipeValueList(input);

    List<Publication> actualOutput =
        testDriver
            .createOutputTopic(outputTopic, keyDeserializer, publicationSpecificAvroSerde.deserializer())
            .readValuesToList()
            .stream()
            .filter(Objects::nonNull)
            .collect(Collectors.toList());

    Assert.assertEquals(expectedOutput, actualOutput);
  }

  @After
  public void cleanup() {
    testDriver.close();
  }
}
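Note that pipeValueList sends every record with a null key, which is fine here because the filter predicate only looks at the value. If you want to exercise the key argument as well, TestInputTopic (in org.apache.kafka.streams) also accepts keyed records; a minimal sketch with illustrative keys:
TestInputTopic<String, Publication> testInput =
    testDriver.createInputTopic(inputTopic, keySerializer, publicationSpecificAvroSerde.serializer());
testInput.pipeInput("martin", iceAndFire);   // record with an explicit String key
testInput.pipeInput("lewis", silverChair);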
Now run the test, which is as simple as:
./gradlew test
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service. Sign up for Confluent Cloud if you don’t already have an account.
After you log in to Confluent Cloud Console, click Environments in the lefthand navigation, click on Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 free usage on Confluent Cloud (details). To avoid having to enter a credit card, add an additional promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.
Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.
Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
In the case of this tutorial, add the following properties to the client application’s input properties file, substituting all curly braces with your Confluent Cloud values.
# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips
# Best practice for Kafka producer to prevent data loss
acks=all
# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}
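In this application the Schema Registry settings flow through automatically, because publicationSerde() hands the full properties object to SpecificAvroSerde.configure(). If you ever need to configure the serde by hand, the equivalent explicit configuration looks roughly like this (a sketch with placeholder values standing in for your endpoint and credentials):
Map<String, String> srConfig = new HashMap<>();
srConfig.put("schema.registry.url", "https://SR_ENDPOINT");
srConfig.put("basic.auth.credentials.source", "USER_INFO");
srConfig.put("schema.registry.basic.auth.user.info", "SR_API_KEY:SR_API_SECRET");

SpecificAvroSerde<Publication> serde = new SpecificAvroSerde<>();
serde.configure(srConfig, false);   // false means configure for record values, not keys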
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.