If you have many Kafka topics with events, how do you merge them all into a single topic?
The input streams are combined using the merge function, which creates a new stream that represents all of the events of its inputs. The merged stream is forwarded to a combined topic via the to method, which accepts the topic name as a parameter.
KStream<String, SongEvent> rockSongs = builder.stream(rockTopic);
KStream<String, SongEvent> classicalSongs = builder.stream(classicalTopic);
KStream<String, SongEvent> allSongs = rockSongs.merge(classicalSongs);
allSongs.to(allGenresTopic);
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it.
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl.
• Verify that Docker is set up properly by ensuring that no errors are output when you run docker info and docker compose version on the command line.
To get started, make a new directory anywhere you’d like for this project:
mkdir merge-streams && cd merge-streams
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk

  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN
And launch it by running:
docker compose up -d
Create the following Gradle build file, named build.gradle, for the project:
buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath "gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0"
  }
}

plugins {
  id "java"
  id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"

repositories {
  mavenCentral()

  maven {
    url "https://packages.confluent.io/maven"
  }
}

apply plugin: "com.github.johnrengelman.shadow"

dependencies {
  implementation "org.apache.avro:avro:1.11.1"
  implementation "org.slf4j:slf4j-simple:2.0.7"
  implementation 'org.apache.kafka:kafka-streams:3.4.0'
  implementation ('org.apache.kafka:kafka-clients') {
    version {
      strictly '3.4.0'
    }
  }
  implementation "io.confluent:kafka-streams-avro-serde:7.3.0"
  testImplementation "org.apache.kafka:kafka-streams-test-utils:3.4.0"
  testImplementation "junit:junit:4.13.2"
}

test {
  testLogging {
    outputs.upToDateWhen { false }
    showStandardStreams = true
    exceptionFormat = "full"
  }
}

jar {
  manifest {
    attributes(
        "Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
        "Main-Class": "io.confluent.developer.MergeStreams"
    )
  }
}

shadowJar {
  archiveBaseName = "kstreams-merge-standalone"
  archiveClassifier = ''
}
And be sure to run the following command to obtain the Gradle wrapper:
gradle wrapper
Next, create a directory for configuration data:
mkdir configuration
Then create a development file at configuration/dev.properties:
application.id=merging-app
bootstrap.servers=127.0.0.1:29092
schema.registry.url=http://127.0.0.1:8081
input.rock.topic.name=rock-song-events
input.rock.topic.partitions=1
input.rock.topic.replication.factor=1
input.classical.topic.name=classical-song-events
input.classical.topic.partitions=1
input.classical.topic.replication.factor=1
output.topic.name=all-song-events
output.topic.partitions=1
output.topic.replication.factor=1
Create a directory for the schemas that represent the events in the stream:
mkdir -p src/main/avro
Then create the following Avro schema file at src/main/avro/song_event.avsc for the events representing a song being played:
{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "SongEvent",
  "fields": [
    {"name": "artist", "type": "string"},
    {"name": "title", "type": "string"}
  ]
}
Because we will use this Avro schema in our Java code, we’ll need to compile it. Run the following:
./gradlew build
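The build step runs the Avro Gradle plugin, which generates a SongEvent Java class from the schema above. As a rough sketch of how the generated class is used later in this tutorial (the values here are just sample data), each schema field gets a builder setter and a getter:

SongEvent song = SongEvent.newBuilder()
    .setArtist("Metallica")      // "artist" field from the schema
    .setTitle("Fade to Black")   // "title" field from the schema
    .build();

String artist = song.getArtist(); // accessors are generated as well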
Create a directory for the Java files in this project:
mkdir -p src/main/java/io/confluent/developer
Then create the following file at src/main/java/io/confluent/developer/MergeStreams.java. Notice the buildTopology method, which uses the Kafka Streams DSL. A stream is opened up for each input topic. The input streams are then combined using the merge function, which creates a new stream that represents all of the events of its inputs. Note that you can chain merge to combine as many streams as needed. The merged stream is then connected to the to method, which takes the name of a Kafka topic to send the events to.
package io.confluent.developer;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.time.Duration;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

import io.confluent.developer.avro.SongEvent;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

import static io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG;

public class MergeStreams {

    public Topology buildTopology(Properties allProps) {
        final StreamsBuilder builder = new StreamsBuilder();

        final String rockTopic = allProps.getProperty("input.rock.topic.name");
        final String classicalTopic = allProps.getProperty("input.classical.topic.name");
        final String allGenresTopic = allProps.getProperty("output.topic.name");

        KStream<String, SongEvent> rockSongs = builder.stream(rockTopic);
        KStream<String, SongEvent> classicalSongs = builder.stream(classicalTopic);
        KStream<String, SongEvent> allSongs = rockSongs.merge(classicalSongs);

        allSongs.to(allGenresTopic);
        return builder.build();
    }

    public void createTopics(Properties allProps) {
        AdminClient client = AdminClient.create(allProps);

        List<NewTopic> topics = new ArrayList<>();
        topics.add(new NewTopic(
                allProps.getProperty("input.rock.topic.name"),
                Integer.parseInt(allProps.getProperty("input.rock.topic.partitions")),
                Short.parseShort(allProps.getProperty("input.rock.topic.replication.factor"))));
        topics.add(new NewTopic(
                allProps.getProperty("input.classical.topic.name"),
                Integer.parseInt(allProps.getProperty("input.classical.topic.partitions")),
                Short.parseShort(allProps.getProperty("input.classical.topic.replication.factor"))));
        topics.add(new NewTopic(
                allProps.getProperty("output.topic.name"),
                Integer.parseInt(allProps.getProperty("output.topic.partitions")),
                Short.parseShort(allProps.getProperty("output.topic.replication.factor"))));

        client.createTopics(topics);
        client.close();
    }

    public Properties loadEnvProperties(String fileName) throws IOException {
        Properties allProps = new Properties();
        FileInputStream input = new FileInputStream(fileName);
        allProps.load(input);
        input.close();

        return allProps;
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            throw new IllegalArgumentException("This program takes one argument: the path to an environment configuration file.");
        }

        MergeStreams ms = new MergeStreams();
        Properties allProps = ms.loadEnvProperties(args[0]);
        allProps.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        allProps.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
        allProps.put(SCHEMA_REGISTRY_URL_CONFIG, allProps.getProperty("schema.registry.url"));

        Topology topology = ms.buildTopology(allProps);
        ms.createTopics(allProps);

        final KafkaStreams streams = new KafkaStreams(topology, allProps);
        final CountDownLatch latch = new CountDownLatch(1);

        // Attach shutdown handler to catch Control-C.
        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
            @Override
            public void run() {
                streams.close(Duration.ofSeconds(5));
                latch.countDown();
            }
        });

        try {
            streams.start();
            latch.await();
        } catch (Throwable e) {
            System.exit(1);
        }
        System.exit(0);
    }
}
Note that when using the merge operator, the keys and values of the two KStream objects you’re merging must be of the same type. If you have two KStream instances with different key and/or value types, you’ll have to use the KStream.map (or KStream.mapValues) operation first to get the types to line up before merging.
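As a rough illustration (not part of the tutorial project), the sketch below shows both points inside a topology-building method like buildTopology above: chaining merge to combine more than two streams, and using mapValues to convert a stream with a different value type into the common SongEvent type before merging. The jazz-song-events and title-only-events topics and the "unknown" artist placeholder are hypothetical; Consumed comes from org.apache.kafka.streams.kstream and Serdes from org.apache.kafka.common.serialization:

// Hypothetical third input with the same key/value types: merge can simply be chained.
KStream<String, SongEvent> jazzSongs = builder.stream("jazz-song-events");
KStream<String, SongEvent> allSongs = rockSongs
    .merge(classicalSongs)
    .merge(jazzSongs);

// Hypothetical input whose values are plain title Strings: align the value type
// with mapValues before merging it into the combined stream.
KStream<String, String> titleOnly =
    builder.stream("title-only-events", Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, SongEvent> converted = titleOnly.mapValues(title ->
    SongEvent.newBuilder().setArtist("unknown").setTitle(title).build());

allSongs.merge(converted).to(allGenresTopic);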
In your terminal, run:
./gradlew shadowJar
Now that an uberjar for the Kafka Streams application has been built, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it:
java -jar build/libs/kstreams-merge-standalone-0.0.1.jar configuration/dev.properties
To produce the input events to their respective topics, you’ll want two terminals running. To send the rock songs to their topic, open up a terminal and run the following:
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic rock-song-events --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/song_event.avsc)"
When the console producer starts, it will log some messages and hang, waiting for your input. Type in one line at a time and press enter to send it. Each line represents an event. To send all of the events below, paste the following into the prompt and press enter:
{"artist": "Metallica", "title": "Fade to Black"}
{"artist": "Smashing Pumpkins", "title": "Today"}
{"artist": "Pink Floyd", "title": "Another Brick in the Wall"}
{"artist": "Van Halen", "title": "Jump"}
{"artist": "Led Zeppelin", "title": "Kashmir"}
To produce the classical songs, open up another terminal and run:
docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic classical-song-events --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/song_event.avsc)"
Then paste in the following events:
{"artist": "Wolfgang Amadeus Mozart", "title": "The Magic Flute"}
{"artist": "Johann Pachelbel", "title": "Canon"}
{"artist": "Ludwig van Beethoven", "title": "Symphony No. 5"}
{"artist": "Edward Elgar", "title": "Pomp and Circumstance"}
Leaving your original terminals running, open another to consume the events that have been merged:
docker exec -it schema-registry /usr/bin/kafka-avro-console-consumer --topic all-song-events --bootstrap-server broker:9092 --from-beginning
After the consumer starts, you should see the following messages. The order might vary depending on when the input events are sent to each topic and processed by the app; Kafka Streams coalesces the respective input topics in an indeterminate manner. To continue studying the example, send more events through the producer prompts. Otherwise, press Control-C to exit the process.
{"artist":"Metallica","title":"Fade to Black"}
{"artist":"Smashing Pumpkins","title":"Today"}
{"artist":"Pink Floyd","title":"Another Brick in the Wall"}
{"artist":"Van Halen","title":"Jump"}
{"artist":"Led Zeppelin","title":"Kashmir"}
{"artist":"Wolfgang Amadeus Mozart","title":"The Magic Flute"}
{"artist":"Johann Pachelbel","title":"Canon"}
{"artist":"Ludwig van Beethoven","title":"Symphony No. 5"}
{"artist":"Edward Elgar","title":"Pomp and Circumstance"}
First, create a test file at configuration/test.properties:
application.id=merging-app
bootstrap.servers=127.0.0.1:29092
schema.registry.url=mock://merging-app:8081
input.rock.topic.name=rock-song-events
input.rock.topic.partitions=1
input.rock.topic.replication.factor=1
input.classical.topic.name=classical-song-events
input.classical.topic.partitions=1
input.classical.topic.replication.factor=1
output.topic.name=all-song-events
output.topic.partitions=1
output.topic.replication.factor=1
Then, create a directory for the tests to live in:
mkdir -p src/test/java/io/confluent/developer
Create the following test file at src/test/java/io/confluent/developer/MergeStreamsTest.java:
package io.confluent.developer;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.StreamsConfig;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

import io.confluent.developer.avro.SongEvent;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroDeserializer;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer;

public class MergeStreamsTest {

    private final static String TEST_CONFIG_FILE = "configuration/test.properties";
    private TopologyTestDriver testDriver;

    public SpecificAvroSerializer<SongEvent> makeSerializer(Properties allProps) {
        SpecificAvroSerializer<SongEvent> serializer = new SpecificAvroSerializer<>();

        Map<String, String> config = new HashMap<>();
        config.put("schema.registry.url", allProps.getProperty("schema.registry.url"));
        serializer.configure(config, false);

        return serializer;
    }

    public SpecificAvroDeserializer<SongEvent> makeDeserializer(Properties allProps) {
        SpecificAvroDeserializer<SongEvent> deserializer = new SpecificAvroDeserializer<>();

        Map<String, String> config = new HashMap<>();
        config.put("schema.registry.url", allProps.getProperty("schema.registry.url"));
        deserializer.configure(config, false);

        return deserializer;
    }

    @Test
    public void testMergeStreams() throws IOException {
        MergeStreams ms = new MergeStreams();
        Properties allProps = ms.loadEnvProperties(TEST_CONFIG_FILE);
        allProps.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        allProps.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        String rockTopic = allProps.getProperty("input.rock.topic.name");
        String classicalTopic = allProps.getProperty("input.classical.topic.name");
        String allGenresTopic = allProps.getProperty("output.topic.name");

        Topology topology = ms.buildTopology(allProps);
        testDriver = new TopologyTestDriver(topology, allProps);

        Serializer<String> keySerializer = Serdes.String().serializer();
        SpecificAvroSerializer<SongEvent> valueSerializer = makeSerializer(allProps);
        Deserializer<String> keyDeserializer = Serdes.String().deserializer();
        SpecificAvroDeserializer<SongEvent> valueDeserializer = makeDeserializer(allProps);

        List<SongEvent> rockSongs = new ArrayList<>();
        List<SongEvent> classicalSongs = new ArrayList<>();

        rockSongs.add(SongEvent.newBuilder().setArtist("Metallica").setTitle("Fade to Black").build());
        rockSongs.add(SongEvent.newBuilder().setArtist("Smashing Pumpkins").setTitle("Today").build());
        rockSongs.add(SongEvent.newBuilder().setArtist("Pink Floyd").setTitle("Another Brick in the Wall").build());
        rockSongs.add(SongEvent.newBuilder().setArtist("Van Halen").setTitle("Jump").build());
        rockSongs.add(SongEvent.newBuilder().setArtist("Led Zeppelin").setTitle("Kashmir").build());

        classicalSongs.add(SongEvent.newBuilder().setArtist("Wolfgang Amadeus Mozart").setTitle("The Magic Flute").build());
        classicalSongs.add(SongEvent.newBuilder().setArtist("Johann Pachelbel").setTitle("Canon").build());
        classicalSongs.add(SongEvent.newBuilder().setArtist("Ludwig van Beethoven").setTitle("Symphony No. 5").build());
        classicalSongs.add(SongEvent.newBuilder().setArtist("Edward Elgar").setTitle("Pomp and Circumstance").build());

        final TestInputTopic<String, SongEvent> rockSongsTestDriverTopic =
            testDriver.createInputTopic(rockTopic, keySerializer, valueSerializer);

        final TestInputTopic<String, SongEvent> classicRockSongsTestDriverTopic =
            testDriver.createInputTopic(classicalTopic, keySerializer, valueSerializer);

        for (SongEvent song : rockSongs) {
            rockSongsTestDriverTopic.pipeInput(song.getArtist(), song);
        }

        for (SongEvent song : classicalSongs) {
            classicRockSongsTestDriverTopic.pipeInput(song.getArtist(), song);
        }

        List<SongEvent> actualOutput =
            testDriver
                .createOutputTopic(allGenresTopic, keyDeserializer, valueDeserializer)
                .readKeyValuesToList()
                .stream()
                .filter(record -> record.value != null)
                .map(record -> record.value)
                .collect(Collectors.toList());

        List<SongEvent> expectedOutput = new ArrayList<>();
        expectedOutput.addAll(rockSongs);
        expectedOutput.addAll(classicalSongs);

        Assert.assertEquals(expectedOutput, actualOutput);
    }

    @After
    public void cleanup() {
        testDriver.close();
    }
}
Now run the test, which is as simple as:
./gradlew test
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service. Sign up for Confluent Cloud if you don’t already have an account.
After you log in to the Confluent Cloud Console, click Environments in the left-hand navigation, click Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud (details). To avoid having to enter a credit card, add the additional promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.
Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.
Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application. For this tutorial, add the following properties to the client application’s input properties file, substituting all curly braces with your Confluent Cloud values.
# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips
# Best practice for Kafka producer to prevent data loss
acks=all
# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.