How do you combine aggregate values, like `count`, from multiple streams into a single result?
In the Kafka Streams application, use a combination of the cogroup and aggregate methods, as shown below. (You can also run your application with Confluent Cloud, which is covered at the end of this tutorial.)
final Aggregator<String, LoginEvent, LoginRollup> loginAggregator = new LoginAggregator();
final KGroupedStream<String, LoginEvent> appOneGrouped = appOneStream.groupByKey();
final KGroupedStream<String, LoginEvent> appTwoGrouped = appTwoStream.groupByKey();
final KGroupedStream<String, LoginEvent> appThreeGrouped = appThreeStream.groupByKey();
appOneGrouped.cogroup(loginAggregator)
.cogroup(appTwoGrouped, loginAggregator)
.cogroup(appThreeGrouped, loginAggregator)
.aggregate(() -> new LoginRollup(new HashMap<>()), Materialized.with(Serdes.String(), loginRollupSerde))
.toStream().to(totalResultOutputTopic, Produced.with(stringSerde, loginRollupSerde));
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl
• Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line (see below)
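For example, both commands should complete without errors:
docker info
docker compose version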
To get started, make a new directory anywhere you’d like for this project:
mkdir cogrouping-streams && cd cogrouping-streams
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN
And launch it by running:
docker compose up -d
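You can optionally confirm that the broker and schema-registry containers are up:
docker compose ps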
Create the following Gradle build file, named build.gradle, for the project:
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath "gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0"
}
}
plugins {
id "java"
id "com.google.cloud.tools.jib" version "3.3.1"
id "idea"
id "eclipse"
id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}
sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"
repositories {
mavenCentral()
maven {
url "https://packages.confluent.io/maven"
}
}
apply plugin: "com.github.johnrengelman.shadow"
dependencies {
implementation "org.apache.avro:avro:1.11.1"
implementation "org.slf4j:slf4j-simple:2.0.7"
implementation 'org.apache.kafka:kafka-streams:3.4.0'
implementation ('org.apache.kafka:kafka-clients') {
version {
strictly '3.4.0'
}
}
implementation "io.confluent:kafka-streams-avro-serde:7.3.0"
testImplementation "org.apache.kafka:kafka-streams-test-utils:3.4.0"
testImplementation "junit:junit:4.13.2"
testImplementation 'org.hamcrest:hamcrest:2.2'
}
test {
testLogging {
outputs.upToDateWhen { false }
showStandardStreams = true
exceptionFormat = "full"
}
}
jar {
manifest {
attributes(
"Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
"Main-Class": "io.confluent.developer.CogroupingStreams"
)
}
}
shadowJar {
archiveBaseName = "cogrouping-streams-standalone"
archiveClassifier = ''
}
And be sure to run the following command to obtain the Gradle wrapper:
gradle wrapper
Next, create a directory for configuration data:
mkdir configuration
Then create a development file at configuration/dev.properties:
application.id=cogrouping-streams
bootstrap.servers=localhost:29092
schema.registry.url=http://localhost:8081
app-one.topic.name=app-one-topic
app-one.topic.partitions=1
app-one.topic.replication.factor=1
app-two.topic.name=app-two-topic
app-two.topic.partitions=1
app-two.topic.replication.factor=1
app-three.topic.name=app-three-topic
app-three.topic.partitions=1
app-three.topic.replication.factor=1
output.topic.name=output-topic
output.topic.partitions=1
output.topic.replication.factor=1
This tutorial uses four streams. The three input streams have a record type of LoginEvent used to represent a user logging into an application. The fourth stream is an output stream that writes a LoginRollup object out to a topic. In the next steps you’ll create the Avro schemas for these objects.
Create a directory for the schemas that represent the events in the stream:
mkdir -p src/main/avro
Then create the following Avro schema file at src/main/avro/login-event.avsc to define the LoginEvent event:
{
"namespace": "io.confluent.developer.avro",
"type": "record",
"name": "LoginEvent",
"fields": [
{"name": "app_id", "type": "string"},
{"name": "user_id", "type": "string"},
{"name": "time", "type": "long"}
]
}
Next, create another schema file, src/main/avro/login-rollup.avsc, to define the LoginRollup for the cogrouping result:
{
"namespace": "io.confluent.developer.avro",
"type": "record",
"name": "LoginRollup",
"fields": [
{"name": "login_by_app_and_user", "type": {
"type": "map",
"values": {
"type": "map",
"values": {"type": "long"}
}
}
}
]
}
Because we will use these Avro schemas in our Java code, we’ll need to compile them. The Gradle Avro plugin is a part of the build, so it will see your new Avro files, generate Java code for them, and compile those and all other Java sources. Run this command to get it all done:
./gradlew build
Create a directory for the Java files in this project:
mkdir -p src/main/java/io/confluent/developer
Before you create the Java class to run the Cogrouping example, let’s dive into the main point of this tutorial: how to use cogrouping.
final Aggregator<String, LoginEvent, LoginRollup> loginAggregator = new LoginAggregator();
final KGroupedStream<String, LoginEvent> appOneGrouped = appOneStream.groupByKey();
final KGroupedStream<String, LoginEvent> appTwoGrouped = appTwoStream.groupByKey();
final KGroupedStream<String, LoginEvent> appThreeGrouped = appThreeStream.groupByKey();
appOneGrouped.cogroup(loginAggregator)
.cogroup(appTwoGrouped, loginAggregator)
.cogroup(appThreeGrouped, loginAggregator)
.aggregate(() -> new LoginRollup(new HashMap<>()), Materialized.with(Serdes.String(), loginRollupSerde))
.toStream().to(totalResultOutputTopic, Produced.with(stringSerde, loginRollupSerde));
You’re using the cogrouping functionality here to get an overall grouping of logins per application. Kafka Streams creates this total grouping by using an Aggregator that knows how to extract records from each grouped stream. Your Aggregator instance here knows how to correctly combine each LoginEvent into the larger LoginRollup object. You’ll learn more about Aggregator in the next step.
Next, you have three input streams: appOneStream, appTwoStream, and appThreeStream. You need the intermediate object KGroupedStream, so you execute the groupByKey() method on each stream. For this tutorial, we have assumed the incoming records already have keys. In cases where records lack keys, you need to use a key-selecting method (selectKey(), map(), or groupBy()) to successfully group by key.
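For illustration only (the input topics in this tutorial are already keyed, so the tutorial code doesn’t need this), a hypothetical re-keying step could look like the following sketch. It assumes you want to key each record by the LoginEvent app_id field and that Grouped is imported from org.apache.kafka.streams.kstream:
// Hypothetical sketch: derive a key from the record value before grouping.
final KGroupedStream<String, LoginEvent> appOneGrouped =
    appOneStream
        .selectKey((ignoredKey, loginEvent) -> loginEvent.getAppId())   // re-key by application id
        .groupByKey(Grouped.with(Serdes.String(), loginEventSerde));    // group with explicit serdes
Keep in mind that re-keying like this causes Kafka Streams to repartition the stream before the aggregation runs.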
Now with your KGroupedStream objects, you start creating your larger aggregate by calling KGroupedStream.cogroup() on the first stream, using your Aggregator. This first step returns a CogroupedKStream instance. Then for each remaining KGroupedStream, you execute CogroupedKStream.cogroup() using one of the KGroupedStream instances and the Aggregator you created previously. You repeat this sequence of calls for all of the KGroupedStream objects you want to combine into an overall aggregate.
For more background on the cogrouping functionality in Kafka Streams, you can read the KIP-150 proposal.
Now go ahead and create the Java file at src/main/java/io/confluent/developer/CogroupingStreams.java.
package io.confluent.developer;
import io.confluent.common.utils.TestUtils;
import io.confluent.developer.avro.LoginEvent;
import io.confluent.developer.avro.LoginRollup;
import io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import java.io.FileInputStream;
import java.io.IOException;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import org.apache.avro.specific.SpecificRecord;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Aggregator;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
public class CogroupingStreams {
public Topology buildTopology(Properties allProps) {
final StreamsBuilder builder = new StreamsBuilder();
final String appOneInputTopic = allProps.getProperty("app-one.topic.name");
final String appTwoInputTopic = allProps.getProperty("app-two.topic.name");
final String appThreeInputTopic = allProps.getProperty("app-three.topic.name");
final String totalResultOutputTopic = allProps.getProperty("output.topic.name");
final Serde<String> stringSerde = Serdes.String();
final Serde<LoginEvent> loginEventSerde = getSpecificAvroSerde(allProps);
final Serde<LoginRollup> loginRollupSerde = getSpecificAvroSerde(allProps);
final KStream<String, LoginEvent> appOneStream = builder.stream(appOneInputTopic, Consumed.with(stringSerde, loginEventSerde));
final KStream<String, LoginEvent> appTwoStream = builder.stream(appTwoInputTopic, Consumed.with(stringSerde, loginEventSerde));
final KStream<String, LoginEvent> appThreeStream = builder.stream(appThreeInputTopic, Consumed.with(stringSerde, loginEventSerde));
final Aggregator<String, LoginEvent, LoginRollup> loginAggregator = new LoginAggregator();
final KGroupedStream<String, LoginEvent> appOneGrouped = appOneStream.groupByKey();
final KGroupedStream<String, LoginEvent> appTwoGrouped = appTwoStream.groupByKey();
final KGroupedStream<String, LoginEvent> appThreeGrouped = appThreeStream.groupByKey();
appOneGrouped.cogroup(loginAggregator)
.cogroup(appTwoGrouped, loginAggregator)
.cogroup(appThreeGrouped, loginAggregator)
.aggregate(() -> new LoginRollup(new HashMap<>()), Materialized.with(Serdes.String(), loginRollupSerde))
.toStream().to(totalResultOutputTopic, Produced.with(stringSerde, loginRollupSerde));
return builder.build();
}
static <T extends SpecificRecord> SpecificAvroSerde<T> getSpecificAvroSerde(final Properties allProps) {
final SpecificAvroSerde<T> specificAvroSerde = new SpecificAvroSerde<>();
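// Configure the serde from the application properties (notably schema.registry.url); 'false' means the serde is used for record values, not keys.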
specificAvroSerde.configure((Map)allProps, false);
return specificAvroSerde;
}
public void createTopics(final Properties allProps) {
try (final AdminClient client = AdminClient.create(allProps)) {
final List<NewTopic> topics = new ArrayList<>();
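// Create the three input topics and the output topic, using the partition and replication settings from the configuration file.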
topics.add(new NewTopic(
allProps.getProperty("app-one.topic.name"),
Integer.parseInt(allProps.getProperty("app-one.topic.partitions")),
Short.parseShort(allProps.getProperty("app-one.topic.replication.factor"))));
topics.add(new NewTopic(
allProps.getProperty("app-two.topic.name"),
Integer.parseInt(allProps.getProperty("app-two.topic.partitions")),
Short.parseShort(allProps.getProperty("app-two.topic.replication.factor"))));
topics.add(new NewTopic(
allProps.getProperty("app-three.topic.name"),
Integer.parseInt(allProps.getProperty("app-three.topic.partitions")),
Short.parseShort(allProps.getProperty("app-three.topic.replication.factor"))));
topics.add(new NewTopic(
allProps.getProperty("output.topic.name"),
Integer.parseInt(allProps.getProperty("output.topic.partitions")),
Short.parseShort(allProps.getProperty("output.topic.replication.factor"))));
client.createTopics(topics);
}
}
public Properties loadEnvProperties(String fileName) throws IOException {
final Properties allProps = new Properties();
final FileInputStream input = new FileInputStream(fileName);
allProps.load(input);
input.close();
return allProps;
}
public static void main(String[] args) throws Exception {
if (args.length < 1) {
throw new IllegalArgumentException("This program takes one argument: the path to an environment configuration file.");
}
final CogroupingStreams instance = new CogroupingStreams();
final Properties allProps = instance.loadEnvProperties(args[0]);
final Topology topology = instance.buildTopology(allProps);
instance.createTopics(allProps);
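// Populate the three input topics with sample login events before starting the Streams application.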
TutorialDataGenerator dataGenerator = new TutorialDataGenerator(allProps);
dataGenerator.generate();
final KafkaStreams streams = new KafkaStreams(topology, allProps);
final CountDownLatch latch = new CountDownLatch(1);
// Attach shutdown handler to catch Control-C.
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@Override
public void run() {
streams.close(Duration.ofSeconds(5));
latch.countDown();
}
});
try {
streams.start();
latch.await();
} catch (Throwable e) {
System.exit(1);
}
System.exit(0);
}
static class TutorialDataGenerator {
final Properties properties;
public TutorialDataGenerator(final Properties properties) {
this.properties = properties;
}
public void generate() {
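// Reuse the application properties for the producer, adding the key and value serializers needed to write Avro-encoded LoginEvent records.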
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
try (Producer<String, LoginEvent> producer = new KafkaProducer<String, LoginEvent>(properties)) {
HashMap<String, List<LoginEvent>> entryData = new HashMap<>();
List<LoginEvent> messages1 = Arrays.asList(new LoginEvent("one", "Ted", 12456L),
new LoginEvent("one", "Ted", 12457L),
new LoginEvent("one", "Carol", 12458L),
new LoginEvent("one", "Carol", 12458L),
new LoginEvent("one", "Alice", 12458L),
new LoginEvent("one", "Carol", 12458L));
final String topic1 = properties.getProperty("app-one.topic.name");
entryData.put(topic1, messages1);
List<LoginEvent> messages2 = Arrays.asList(new LoginEvent("two", "Bob", 12456L),
new LoginEvent("two", "Carol", 12457L),
new LoginEvent("two", "Ted", 12458L),
new LoginEvent("two", "Carol", 12459L));
final String topic2 = properties.getProperty("app-two.topic.name");
entryData.put(topic2, messages2);
List<LoginEvent> messages3 = Arrays.asList(new LoginEvent("three", "Bob", 12456L),
new LoginEvent("three", "Alice", 12457L),
new LoginEvent("three", "Alice", 12458L),
new LoginEvent("three", "Carol", 12459L));
final String topic3 = properties.getProperty("app-three.topic.name");
entryData.put(topic3, messages3);
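// Each record is keyed by its application id, so the topology's groupByKey() calls have keys to group on.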
entryData.forEach((topic, list) ->
list.forEach(message ->
producer.send(new ProducerRecord<String, LoginEvent>(topic, message.getAppId(), message), (metadata, exception) -> {
if (exception != null) {
exception.printStackTrace(System.out);
} else {
System.out.printf("Produced record at offset %d to topic %s %n", metadata.offset(), metadata.topic());
}
})
)
);
}
}
}
}
The Aggregator you saw in the previous step constructs a map of maps: the count of logins per user, per application. Below is the core logic of the LoginAggregator.
Each call to Aggregator.apply retrieves the user login map for the given application id (or creates one if it doesn’t exist). From there, the Aggregator increments the login count for the given user.
final String userId = loginEvent.getUserId();
final Map<String, Map<String, Long>> allLogins = loginRollup.getLoginByAppAndUser();
final Map<String, Long> userLogins = allLogins.computeIfAbsent(appId, key -> new HashMap<>());
userLogins.compute(userId, (k, v) -> v == null ? 1L : v + 1L);
While you could add the Aggregator instance as an in-line lambda to the topology, creating a separate class allows you to test the aggregator in isolation.
Next, create the following file at src/main/java/io/confluent/developer/LoginAggregator.java.
package io.confluent.developer;
import io.confluent.developer.avro.LoginEvent;
import io.confluent.developer.avro.LoginRollup;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.streams.kstream.Aggregator;
public class LoginAggregator implements Aggregator<String, LoginEvent, LoginRollup> {
@Override
public LoginRollup apply(final String appId,
final LoginEvent loginEvent,
final LoginRollup loginRollup) {
final String userId = loginEvent.getUserId();
final Map<String, Map<String, Long>> allLogins = loginRollup.getLoginByAppAndUser();
final Map<String, Long> userLogins = allLogins.computeIfAbsent(appId, key -> new HashMap<>());
userLogins.compute(userId, (k, v) -> v == null ? 1L : v + 1L);
return loginRollup;
}
}
In your terminal, run:
./gradlew shadowJar
Now that you have an uberjar for the Kafka Streams application, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it. There is always another message to process, so streaming applications don’t exit until you force them.
The application for this tutorial includes a record generator to populate three topics with data.
java -jar build/libs/cogrouping-streams-standalone-0.0.1.jar configuration/dev.properties
Now that you have sent the login events, let’s run a consumer to read the cogrouped output from your streams application:
docker exec -it schema-registry /usr/bin/kafka-avro-console-consumer --topic output-topic --bootstrap-server broker:9092 --from-beginning
You should see something like this:
{"login_by_app_and_user":{"one":{"Carol":3,"Alice":1,"Ted":2}}}
{"login_by_app_and_user":{"two":{"Carol":2,"Bob":1,"Ted":1}}}
{"login_by_app_and_user":{"three":{"Carol":1,"Bob":1,"Alice":2}}}
First, create a test file at configuration/test.properties:
application.id=cogrouping-streams
bootstrap.servers=localhost:29092
schema.registry.url=mock://cogrouping-streams-test
state.dir=cogrouping-test-state
app-one.topic.name=app-one-topic
app-one.topic.partitions=1
app-one.topic.replication.factor=1
app-two.topic.name=app-two-topic
app-two.topic.partitions=1
app-two.topic.replication.factor=1
app-three.topic.name=app-three-topic
app-three.topic.partitions=1
app-three.topic.replication.factor=1
output.topic.name=output-topic
output.topic.partitions=1
output.topic.replication.factor=1
Create a directory for the tests to live in:
mkdir -p src/test/java/io/confluent/developer
Create the following file at src/test/java/io/confluent/developer/LoginAggregatorTest.java.
This tests the Aggregator the Cogrouping operation uses. As mentioned previously, you can easily include an instance of the Aggregator in-line as a lambda in the original topology, but having it as a standalone class lets you easily test the Aggregator in a unit test.
package io.confluent.developer;
import org.junit.Test;
import java.util.HashMap;
import io.confluent.developer.avro.LoginEvent;
import io.confluent.developer.avro.LoginRollup;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
public class LoginAggregatorTest {
@Test
public void shouldAggregateValues() {
final LoginAggregator loginAggregator = new LoginAggregator();
final LoginRollup loginRollup = new LoginRollup();
loginRollup.setLoginByAppAndUser(new HashMap<>());
final String appOne = "app-one";
final String appTwo = "app-two";
final String appThree = "app-three";
final String user1 = "user1";
final String user2 = "user2";
loginAggregator.apply(appOne, login(appOne, user1), loginRollup);
loginAggregator.apply(appTwo, login(appTwo, user1), loginRollup);
loginAggregator.apply(appThree, login(appThree, user1), loginRollup);
assertThat(loginRollup.getLoginByAppAndUser().get(appOne).get(user1), is(1L));
assertThat(loginRollup.getLoginByAppAndUser().get(appTwo).get(user1), is(1L));
assertThat(loginRollup.getLoginByAppAndUser().get(appThree).get(user1), is(1L));
loginAggregator.apply(appOne, login(appOne, user1), loginRollup);
loginAggregator.apply(appTwo, login(appTwo, user1), loginRollup);
assertThat(loginRollup.getLoginByAppAndUser().get(appOne).get(user1), is(2L));
assertThat(loginRollup.getLoginByAppAndUser().get(appTwo).get(user1), is(2L));
assertThat(loginRollup.getLoginByAppAndUser().get(appThree).get(user1), is(1L));
loginAggregator.apply(appOne, login(appOne, user2), loginRollup);
loginAggregator.apply(appTwo, login(appTwo, user2), loginRollup);
loginAggregator.apply(appThree, login(appThree, user2), loginRollup);
loginAggregator.apply(appOne, login(appOne, user1), loginRollup);
loginAggregator.apply(appTwo, login(appTwo, user1), loginRollup);
loginAggregator.apply(appThree, login(appThree, user1), loginRollup);
assertThat(loginRollup.getLoginByAppAndUser().get(appOne).get(user1), is(3L));
assertThat(loginRollup.getLoginByAppAndUser().get(appTwo).get(user1), is(3L));
assertThat(loginRollup.getLoginByAppAndUser().get(appThree).get(user1), is(2L));
assertThat(loginRollup.getLoginByAppAndUser().get(appOne).get(user2), is(1L));
assertThat(loginRollup.getLoginByAppAndUser().get(appTwo).get(user2), is(1L));
assertThat(loginRollup.getLoginByAppAndUser().get(appThree).get(user2), is(1L));
}
private LoginEvent login(String appId, String userId) {
return new LoginEvent(appId, userId, System.currentTimeMillis());
}
}
Now create the following file at src/test/java/io/confluent/developer/CogroupingStreamsTest.java. Testing a Kafka Streams application requires a bit of test harness code, but happily the org.apache.kafka.streams.TopologyTestDriver class makes this much more pleasant than it would otherwise be.
There is only one method in CogroupingStreamsTest annotated with @Test, and that is cogroupingTest(). This method actually runs our Streams topology using the TopologyTestDriver and some mocked data that is set up inside the test method.
package io.confluent.developer;
import static org.junit.Assert.assertEquals;
import io.confluent.developer.avro.LoginEvent;
import io.confluent.developer.avro.LoginRollup;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.junit.Test;
public class CogroupingStreamsTest {
private final static String TEST_CONFIG_FILE = "configuration/test.properties";
@Test
public void cogroupingTest() throws IOException {
final CogroupingStreams instance = new CogroupingStreams();
final Properties allProps = instance.loadEnvProperties(TEST_CONFIG_FILE);
final String appOneInputTopicName = allProps.getProperty("app-one.topic.name");
final String appTwoInputTopicName = allProps.getProperty("app-two.topic.name");
final String appThreeInputTopicName = allProps.getProperty("app-three.topic.name");
final String totalResultOutputTopicName = allProps.getProperty("output.topic.name");
final Topology topology = instance.buildTopology(allProps);
try (final TopologyTestDriver testDriver = new TopologyTestDriver(topology, allProps)) {
final Serde<String> stringAvroSerde = Serdes.String();
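// Because test.properties sets schema.registry.url to a mock:// URL, these serdes use an in-memory mock Schema Registry instead of a live one.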
final SpecificAvroSerde<LoginEvent> loginEventSerde = CogroupingStreams.getSpecificAvroSerde(allProps);
final SpecificAvroSerde<LoginRollup> rollupSerde = CogroupingStreams.getSpecificAvroSerde(allProps);
final Serializer<String> keySerializer = stringAvroSerde.serializer();
final Deserializer<String> keyDeserializer = stringAvroSerde.deserializer();
final Serializer<LoginEvent> loginEventSerializer = loginEventSerde.serializer();
final TestInputTopic<String, LoginEvent> appOneInputTopic = testDriver.createInputTopic(appOneInputTopicName, keySerializer, loginEventSerializer);
final TestInputTopic<String, LoginEvent> appTwoInputTopic = testDriver.createInputTopic(appTwoInputTopicName, keySerializer, loginEventSerializer);
final TestInputTopic<String, LoginEvent> appThreeInputTopic = testDriver.createInputTopic(appThreeInputTopicName, keySerializer, loginEventSerializer);
final TestOutputTopic<String, LoginRollup> outputTopic = testDriver.createOutputTopic(totalResultOutputTopicName, keyDeserializer, rollupSerde.deserializer());
final List<LoginEvent> appOneEvents = new ArrayList<>();
appOneEvents.add(LoginEvent.newBuilder().setAppId("one").setUserId("foo").setTime(5L).build());
appOneEvents.add(LoginEvent.newBuilder().setAppId("one").setUserId("bar").setTime(6l).build());
appOneEvents.add(LoginEvent.newBuilder().setAppId("one").setUserId("bar").setTime(7L).build());
final List<LoginEvent> appTwoEvents = new ArrayList<>();
appTwoEvents.add(LoginEvent.newBuilder().setAppId("two").setUserId("foo").setTime(5L).build());
appTwoEvents.add(LoginEvent.newBuilder().setAppId("two").setUserId("foo").setTime(6l).build());
appTwoEvents.add(LoginEvent.newBuilder().setAppId("two").setUserId("bar").setTime(7L).build());
final List<LoginEvent> appThreeEvents = new ArrayList<>();
appThreeEvents.add(LoginEvent.newBuilder().setAppId("three").setUserId("foo").setTime(5L).build());
appThreeEvents.add(LoginEvent.newBuilder().setAppId("three").setUserId("foo").setTime(6l).build());
appThreeEvents.add(LoginEvent.newBuilder().setAppId("three").setUserId("bar").setTime(7L).build());
appThreeEvents.add(LoginEvent.newBuilder().setAppId("three").setUserId("bar").setTime(9L).build());
final Map<String, Map<String, Long>> expectedEventRollups = new TreeMap<>();
final Map<String, Long> expectedAppOneRollup = new HashMap<>();
final LoginRollup expectedLoginRollup = new LoginRollup(expectedEventRollups);
expectedAppOneRollup.put("foo", 1L);
expectedAppOneRollup.put("bar", 2L);
expectedEventRollups.put("one", expectedAppOneRollup);
final Map<String, Long> expectedAppTwoRollup = new HashMap<>();
expectedAppTwoRollup.put("foo", 2L);
expectedAppTwoRollup.put("bar", 1L);
expectedEventRollups.put("two", expectedAppTwoRollup);
final Map<String, Long> expectedAppThreeRollup = new HashMap<>();
expectedAppThreeRollup.put("foo", 2L);
expectedAppThreeRollup.put("bar", 2L);
expectedEventRollups.put("three", expectedAppThreeRollup);
sendEvents(appOneEvents, appOneInputTopic);
sendEvents(appTwoEvents, appTwoInputTopic);
sendEvents(appThreeEvents, appThreeInputTopic);
final List<LoginRollup> actualLoginEventResults = outputTopic.readValuesToList();
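// The topology can emit multiple rollup updates per application; merging them keeps the latest value for each app id.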
final Map<String, Map<String, Long>> actualRollupMap = new HashMap<>();
for (LoginRollup actualLoginEventResult : actualLoginEventResults) {
actualRollupMap.putAll(actualLoginEventResult.getLoginByAppAndUser());
}
final LoginRollup actualLoginRollup = new LoginRollup(actualRollupMap);
assertEquals(expectedLoginRollup, actualLoginRollup);
}
}
private void sendEvents(List<LoginEvent> events, TestInputTopic<String, LoginEvent> testInputTopic) {
for (LoginEvent event : events) {
testInputTopic.pipeInput(event.getAppId(), event);
}
}
}
Now run the test, which is as simple as:
./gradlew test
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.
Sign up for Confluent Cloud, a fully managed Apache Kafka service.
After you log in to Confluent Cloud Console, click on Add cloud environment and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment
section in the Menu, apply the promo code CC100KTS
to receive an additional $100 free usage on Confluent Cloud (details).
Click on LEARN and follow the instructions to launch a Kafka cluster and to enable Schema Registry.
Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g. Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
In the case of this tutorial, add the following properties to the client application’s input properties file, substituting all curly braces with your Confluent Cloud values.
# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips
# Best practice for Kafka producer to prevent data loss
acks=all
# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.