How to transform a stream of events

Question:

How do you transform a field in a stream of events in a Kafka topic?

Example use case:

Consider a topic with events that represent movies. Each event has a single attribute that combines its title and its release year into a string. In this tutorial, we'll write a program that creates a new topic with the title and release year turned into their own attributes.
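
For example, using one of the events produced later in this tutorial, the raw event

{"id": 294, "title": "Die Hard::1988", "genre": "action"}

would be transformed into

{"id": 294, "title": "Die Hard", "release_year": 1988, "genre": "action"}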

Hands-on code example:

Run it

Prerequisites

This tutorial installs Confluent Platform using Docker. Before proceeding:

  • Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it

  • Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.

  • Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl

  • Verify that Docker is set up properly by ensuring that no errors are output when you run the following on the command line:
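
docker info
docker compose version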

Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir transforming-events && cd transforming-events

Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):

version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
    - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.4.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
    - broker
    ports:
    - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN

And launch it by running:

docker compose up -d
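
To confirm that the broker and Schema Registry containers are up, you can check their status:

docker compose ps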

Configure the project

Then create the following Gradle build file, named build.gradle, for the project:

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath 'gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0'
  }
}

plugins {
  id 'java'
  id 'idea'
  id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = '0.0.1'

repositories {
  mavenCentral()

  maven {
    url 'https://packages.confluent.io/maven'
  }
}

apply plugin: 'com.github.johnrengelman.shadow'

dependencies {
  implementation 'org.apache.avro:avro:1.11.1'
  implementation 'org.slf4j:slf4j-simple:2.0.7'
  implementation 'io.confluent:kafka-streams-avro-serde:7.3.0'
  testImplementation 'junit:junit:4.13.2'
  testImplementation "org.testcontainers:kafka:1.18.0"
}

test {
  testLogging {
    outputs.upToDateWhen { false }
    showStandardStreams = true
    exceptionFormat = 'full'
  }
}

task run(type: JavaExec) {
  mainClass = 'io.confluent.developer.TransformEvents'
  classpath = sourceSets.main.runtimeClasspath
  args = ['configuration/dev.properties']
}

jar {
  manifest {
    attributes(
        'Class-Path': configurations.compileClasspath.collect { it.getName() }.join(' '),
        'Main-Class': 'io.confluent.developer.TransformEvents'
    )
  }
}

shadowJar {
  archiveBaseName = "kafka-transforming-standalone"
  archiveClassifier = ''
}

And be sure to run the following command to obtain the Gradle wrapper:

gradle wrapper

Next, create a directory for configuration data:

mkdir configuration

Then create a development file at configuration/dev.properties:

bootstrap.servers=localhost:29092
schema.registry.url=http://localhost:8081

input.topic.name=raw-movies
input.topic.partitions=1
input.topic.replication.factor=1

output.topic.name=movies
output.topic.partitions=1
output.topic.replication.factor=1

Create schemas for the events

Create a directory for the schemas that represent the stream of events:

mkdir -p src/main/avro

Then create the following Avro schema file at src/main/avro/input-movie-event.avsc to define the structure of a movie and its basic fields. In this tutorial, we’re going to refer to this as RawMovie. This is the version of the movie before any transformation.

{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "RawMovie",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "title", "type": "string"},
    {"name": "genre", "type": "string"}
  ]
}

Create another Avro schema file at src/main/avro/parsed-movies.avsc to define the structure of the movies after the transformation. The goal of this tutorial is to take the raw movies and transform them into parsed movies by splitting the title field into separate title and release_year fields.

{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "Movie",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "title", "type": "string"},
    {"name": "release_year", "type": "int"},
    {"name": "genre", "type": "string"}
  ]
}

Because these Avro schemas are going to be used by other Java classes, we need to run the build to turn the .avsc files into Java code. Run the following:

./gradlew build
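
The build generates a Java class for each record type, complete with a builder; the same builder calls appear later in the test code. For instance:

RawMovie rawMovie = RawMovie.newBuilder()
    .setId(294)
    .setTitle("Die Hard::1988")
    .setGenre("action")
    .build();

Movie movie = Movie.newBuilder()
    .setId(294)
    .setTitle("Die Hard")
    .setReleaseYear(1988)
    .setGenre("action")
    .build();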

Create the code that does the transformation

Create a directory for the code that will perform the transformation:

mkdir -p src/main/java/io/confluent/developer

Create a Java file at src/main/java/io/confluent/developer/TransformationEngine.java to implement the transformation. This code uses the Apache Kafka client API to create a consumer that reads the raw movies from the input topic and a producer that writes the transformed movies to the output topic, applying the transformation to each record in between.

package io.confluent.developer;

import io.confluent.developer.avro.Movie;
import io.confluent.developer.avro.RawMovie;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.WakeupException;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;

import static java.util.Collections.singletonList;

public class TransformationEngine implements Runnable {

    private String inputTopic;
    private String outputTopic;
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private KafkaConsumer<String, RawMovie> rawConsumer;
    private KafkaProducer<String, Movie> producer;

    public TransformationEngine(String inputTopic, String outputTopic,
        KafkaConsumer<String, RawMovie> rawConsumer,
        KafkaProducer<String, Movie> producer) {

        this.inputTopic = inputTopic;
        this.outputTopic = outputTopic;
        this.rawConsumer = rawConsumer;
        this.producer = producer;

    }

    public void run() {

        try {

            rawConsumer.subscribe(singletonList(inputTopic));

            while (!closed.get()) {

                ConsumerRecords<String, RawMovie> records = rawConsumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, RawMovie> record : records) {

                    Movie movie = convertRawMovie(record.value());
                    ProducerRecord<String, Movie> transformedRecord =
                        new ProducerRecord<String, Movie>(outputTopic, movie);

                    producer.send(transformedRecord);

                }

            }

        } catch (WakeupException wue) {

            if (!closed.get()) throw wue;

        } finally {

            rawConsumer.close();
            producer.close();

        }

    }

    public void shutdown() {

        closed.set(true);
        rawConsumer.wakeup();

    }

    private Movie convertRawMovie(RawMovie rawMovie) {

        String[] titleParts = rawMovie.getTitle().split("::");
        String title = titleParts[0];
        int releaseYear = Integer.parseInt(titleParts[1]);

        return new Movie(rawMovie.getId(), title,
            releaseYear, rawMovie.getGenre());

    }

}
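
Note that convertRawMovie assumes the title contains exactly one "::" separator. If a title could itself contain "::", a slightly more defensive version (a sketch, assuming the release year is always the last segment) would split on the final separator instead:

private Movie convertRawMovie(RawMovie rawMovie) {

    String rawTitle = rawMovie.getTitle();
    // Split on the last "::" so that titles containing "::" still parse correctly
    int separator = rawTitle.lastIndexOf("::");
    String title = rawTitle.substring(0, separator);
    int releaseYear = Integer.parseInt(rawTitle.substring(separator + 2));

    return new Movie(rawMovie.getId(), title, releaseYear, rawMovie.getGenre());

}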

Next, create another Java file at src/main/java/io/confluent/developer/TransformEvents.java for the main program. The main program builds the configuration properties that every producer and consumer will use, creates and deletes the topics the tutorial needs, and spawns a thread to execute the transformation logic.

package io.confluent.developer;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

import io.confluent.developer.avro.Movie;
import io.confluent.developer.avro.RawMovie;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroDeserializer;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer;

import static io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG;

public class TransformEvents {

    public Properties loadEnvProperties(String fileName) {

        Properties envProps = new Properties();
        try (FileInputStream input = new FileInputStream(fileName)) {
            envProps.load(input);
        } catch (IOException ex) {
            ex.printStackTrace();
        }

        return envProps;

    }

    public Properties buildProducerProperties(Properties envProps) {

        Properties props = new Properties();

        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, envProps.getProperty("bootstrap.servers"));
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, SpecificAvroSerializer.class.getName());
        props.put(SCHEMA_REGISTRY_URL_CONFIG, envProps.getProperty("schema.registry.url"));

        return props;

    }

    public Properties buildConsumerProperties(String groupId, Properties envProps) {

        Properties props = new Properties();

        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, envProps.getProperty("bootstrap.servers"));
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, SpecificAvroDeserializer.class.getName());
        props.put(SCHEMA_REGISTRY_URL_CONFIG, envProps.getProperty("schema.registry.url"));

        return props;

    }

    public KafkaConsumer<String, RawMovie> createRawMovieConsumer(Properties consumerProps) {
        return new KafkaConsumer<>(consumerProps);
    }

    public KafkaConsumer<String, Movie> createMovieConsumer(Properties consumerProps) {
        return new KafkaConsumer<>(consumerProps);
    }

    public KafkaProducer<String, Movie> createMovieProducer(Properties producerProps) {
        return new KafkaProducer<>(producerProps);
    }

    public KafkaProducer<String, RawMovie> createRawMovieProducer(Properties producerProps) {
        return new KafkaProducer<>(producerProps);
    }

    public void createTopics(Properties envProps) {

        Map<String, Object> config = new HashMap<>();
        config.put("bootstrap.servers", envProps.getProperty("bootstrap.servers"));

        try (AdminClient adminClient = AdminClient.create(config)) {

            List<NewTopic> topics = new ArrayList<>();
            topics.add(new NewTopic(
                    envProps.getProperty("input.topic.name"),
                    Integer.parseInt(envProps.getProperty("input.topic.partitions")),
                    Short.parseShort(envProps.getProperty("input.topic.replication.factor"))));
            topics.add(new NewTopic(
                    envProps.getProperty("output.topic.name"),
                    Integer.parseInt(envProps.getProperty("output.topic.partitions")),
                    Short.parseShort(envProps.getProperty("output.topic.replication.factor"))));

            adminClient.createTopics(topics);

        }

    }

    public void deleteTopics(Properties envProps) {

        Map<String, Object> config = new HashMap<>();
        config.put("bootstrap.servers", envProps.getProperty("bootstrap.servers"));

        try (AdminClient adminClient = AdminClient.create(config)) {

            List<String> topics = new ArrayList<>();
            topics.add(envProps.getProperty("input.topic.name"));
            topics.add(envProps.getProperty("output.topic.name"));

            adminClient.deleteTopics(topics);

        }

    }

    public static void main(String[] args) {

        if (args.length < 1) {
            throw new IllegalArgumentException("This program takes one argument: the path to an environment configuration file.");
        }

        TransformEvents te = new TransformEvents();
        Properties envProps = te.loadEnvProperties(args[0]);
        te.deleteTopics(envProps);
        te.createTopics(envProps);

        String inputTopic = envProps.getProperty("input.topic.name");
        String outputTopic = envProps.getProperty("output.topic.name");

        Properties consumerProps = te.buildConsumerProperties("inputGroup", envProps);
        KafkaConsumer<String, RawMovie> rawConsumer = te.createRawMovieConsumer(consumerProps);
        Properties producerProps = te.buildProducerProperties(envProps);
        KafkaProducer<String, Movie> producer = te.createMovieProducer(producerProps);

        final TransformationEngine transEngine = new TransformationEngine(inputTopic,
            outputTopic, rawConsumer, producer);
        final Thread transEngineThread = new Thread(transEngine);
        final CountDownLatch latch = new CountDownLatch(1);

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            transEngine.shutdown();
            latch.countDown();
        }));

        transEngineThread.start();

        try {
            // Block until the shutdown hook releases the latch
            latch.await();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }

    }

}

Compile and run the Apache Kafka application

In your terminal, run:

./gradlew shadowJar

Now that you have an uberjar for the Apache Kafka application, you can launch it locally. When you run the following, the prompt won’t return, because the application will keep running until you exit it: in streaming applications, records keep arriving, so the application must run continuously to process future records as well. Press Ctrl+C when you want to stop it; the shutdown hook stops the transformation engine, which closes the consumer and producer.

java -jar build/libs/kafka-transforming-standalone-0.0.1.jar configuration/dev.properties

Produce events to the input topic

Let’s put this tutorial to the test by producing some raw movies to the input topic. In a new terminal, run:

docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic raw-movies --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/input-movie-event.avsc)"

When the console producer starts, it will log some messages and hang, waiting for your input. Type one line at a time and press Enter to send it. Each line represents a raw movie. To send all of the raw movies below, paste the following into the prompt and press Enter:

{"id": 294, "title": "Die Hard::1988", "genre": "action"}
{"id": 354, "title": "Tree of Life::2011", "genre": "drama"}
{"id": 782, "title": "A Walk in the Clouds::1995", "genre": "romance"}
{"id": 128, "title": "The Big Lebowski::1998", "genre": "comedy"}

Consume transformed events from the output topic

Now that we’ve produced raw movies to the input topic, the Apache Kafka application running in the background should have picked them up and processed them. If everything worked, you should see the transformed movies in the output topic. Open another console to consume the records that your application has produced:

docker exec -it schema-registry /usr/bin/kafka-avro-console-consumer --topic movies --bootstrap-server broker:9092 --from-beginning --property schema.registry.url=http://schema-registry:8081

After the consumer starts, you should see the following messages. The prompt will hang, waiting for more events to arrive. To continue exploring this tutorial, send more events through the input terminal prompt. Otherwise, press Ctrl+C to exit the process.

{"id":294,"title":"Die Hard","release_year":1988,"genre":"action"}
{"id":354,"title":"Tree of Life","release_year":2011,"genre":"drama"}
{"id":782,"title":"A Walk in the Clouds","release_year":1995,"genre":"romance"}
{"id":128,"title":"The Big Lebowski","release_year":1998,"genre":"comedy"}

Test it

Create a test configuration file

First, create a test configuration file at configuration/test.properties with the following content:

confluent.version=7.3.0
bootstrap.servers=localhost:29092
schema.registry.url=mock://localhost:8081

input.topic.name=raw-movies
input.topic.partitions=1
input.topic.replication.factor=1

output.topic.name=movies
output.topic.partitions=1
output.topic.replication.factor=1

Write a test

Then, create a directory for the tests to live in:

mkdir -p src/test/java/io/confluent/developer

In this test, we’ll use Testcontainers to isolate Kafka from our development environment. Although Testcontainers already provides a Kafka container, we need one for Schema Registry, too. Implement a utility Java class at src/test/java/io/confluent/developer/SchemaRegistryContainer.java for the Schema Registry container using the following code:

package io.confluent.developer;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;

class SchemaRegistryContainer extends GenericContainer<SchemaRegistryContainer> {

    SchemaRegistryContainer(String confluentVersion) {
        super("confluentinc/cp-schema-registry:" + confluentVersion);
        withExposedPorts(8081);
    }

    SchemaRegistryContainer withKafka(KafkaContainer kafka) {
        return withKafka(kafka.getNetwork(), kafka.getNetworkAliases().get(0) + ":9092");
    }

    private SchemaRegistryContainer withKafka(Network network, String bootstrapServers) {
        withNetwork(network);
        withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry");
        withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081");
        withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "PLAINTEXT://" + bootstrapServers);
        return self();
    }

    String getTarget() {
        StringBuilder sb = new StringBuilder();
        sb.append("http://").append(getHost());
        sb.append(":").append(getMappedPort(8081));
        return sb.toString();
    }

}
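
Note that the test configuration above points schema.registry.url at a mock:// URL, so the Avro serializers use an in-memory mock Schema Registry and the test never needs to start this container. If you wanted to run the test against a real Schema Registry instead, a minimal sketch of the wiring (using this class’s withKafka and getTarget methods) would look like:

SchemaRegistryContainer schemaRegistry =
    new SchemaRegistryContainer(ENVIRONMENT_PROPERTIES.getProperty("confluent.version"))
        .withKafka(kafkaContainer);
schemaRegistry.start();
ENVIRONMENT_PROPERTIES.put("schema.registry.url", schemaRegistry.getTarget());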

And finally, write the test file, src/test/java/io/confluent/developer/TransformEventsTest.java, using the following code:

package io.confluent.developer;

import io.confluent.developer.avro.Movie;
import io.confluent.developer.avro.RawMovie;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import static java.time.Duration.ofMillis;

public class TransformEventsTest {

    private final static String TEST_CONFIG_FILE = "configuration/test.properties";
    private final static Properties ENVIRONMENT_PROPERTIES = loadEnvironmentProperties();

    @ClassRule
    public static KafkaContainer kafkaContainer = new KafkaContainer(
            DockerImageName.parse("confluentinc/cp-kafka:" +
                    ENVIRONMENT_PROPERTIES.getProperty("confluent.version")));

    private String inputTopic, outputTopic;
    private TransformationEngine transEngine;
    private KafkaProducer<String, Movie> movieProducer;
    private KafkaProducer<String, RawMovie> rawMovieProducer;
    private KafkaConsumer<String, RawMovie> rawMovieConsumer;
    private KafkaConsumer<String, Movie> outputConsumer;

    @Before
    public void initialize() {

        TransformEvents transformEvents = new TransformEvents();
        ENVIRONMENT_PROPERTIES.put("bootstrap.servers", kafkaContainer.getBootstrapServers());
        transformEvents.createTopics(ENVIRONMENT_PROPERTIES);

        inputTopic = ENVIRONMENT_PROPERTIES.getProperty("input.topic.name");
        outputTopic = ENVIRONMENT_PROPERTIES.getProperty("output.topic.name");
        Properties producerProps = transformEvents.buildProducerProperties(ENVIRONMENT_PROPERTIES);
        Properties inputConsumerProps = transformEvents.buildConsumerProperties("inputGroup", ENVIRONMENT_PROPERTIES);
        Properties outputConsumerProps = transformEvents.buildConsumerProperties("outputGroup", ENVIRONMENT_PROPERTIES);

        rawMovieProducer = transformEvents.createRawMovieProducer(producerProps);
        movieProducer = transformEvents.createMovieProducer(producerProps);
        rawMovieConsumer = transformEvents.createRawMovieConsumer(inputConsumerProps);
        outputConsumer = transformEvents.createMovieConsumer(outputConsumerProps);

    }

    @After
    public void tearDown() {
        transEngine.shutdown();
    }

    @Test
    public void checkIfYearFieldEndsUpSplitted() {

        List<RawMovie> input = new ArrayList<>();
        input.add(RawMovie.newBuilder().setId(294).setTitle("Die Hard::1988").setGenre("action").build());
        input.add(RawMovie.newBuilder().setId(354).setTitle("Tree of Life::2011").setGenre("drama").build());
        input.add(RawMovie.newBuilder().setId(782).setTitle("A Walk in the Clouds::1995").setGenre("romance").build());
        input.add(RawMovie.newBuilder().setId(128).setTitle("The Big Lebowski::1998").setGenre("comedy").build());

        List<Movie> expectedOutput = new ArrayList<>();
        expectedOutput.add(Movie.newBuilder().setTitle("Die Hard").setId(294).setReleaseYear(1988).setGenre("action").build());
        expectedOutput.add(Movie.newBuilder().setTitle("Tree of Life").setId(354).setReleaseYear(2011).setGenre("drama").build());
        expectedOutput.add(Movie.newBuilder().setTitle("A Walk in the Clouds").setId(782).setReleaseYear(1995).setGenre("romance").build());
        expectedOutput.add(Movie.newBuilder().setTitle("The Big Lebowski").setId(128).setReleaseYear(1998).setGenre("comedy").build());

        transEngine = new TransformationEngine(inputTopic, outputTopic,
            rawMovieConsumer, movieProducer);

        Thread transEngineThread = new Thread(transEngine);
        List<Movie> actualOutput = null;

        try {
            transEngineThread.start();
            // Produce the raw movies for the testing process...
            produceRawMovies(inputTopic, input, rawMovieProducer);
            // Read the transformed records from the output topic,
            // where the transformation engine has written them.
            actualOutput = consumeMovies(outputTopic, outputConsumer);
        } finally {
            transEngine.shutdown();
        }

        Assert.assertEquals(expectedOutput, actualOutput);

    }

    private List<Movie> consumeMovies(String outputTopic,
                                        KafkaConsumer<String, Movie> consumer) {

        // Wait five seconds so that all the records get persisted, to
        // avoid a race condition between the producer and the consumer...
        try { Thread.sleep(5000); } catch (InterruptedException ex) { Thread.currentThread().interrupt(); }

        List<Movie> output = new ArrayList<Movie>();
        consumer.subscribe(Arrays.asList(outputTopic));
        ConsumerRecords<String, Movie> records = consumer.poll(ofMillis(1000));

        for (ConsumerRecord<String, Movie> record : records) {
            output.add(record.value());
        }

        return output;

    }

    private void produceRawMovies(String inputTopic, List<RawMovie> rawMovies,
                                 KafkaProducer<String, RawMovie> producer) {

        ProducerRecord<String, RawMovie> record = null;
        for (RawMovie movie : rawMovies) {
            record = new ProducerRecord<String, RawMovie>(inputTopic, movie);
            producer.send(record);
        }

    }

    private static Properties loadEnvironmentProperties() {

        Properties environmentProps = new Properties();
        try (FileInputStream input = new FileInputStream(TEST_CONFIG_FILE)) {
            environmentProps.load(input);
        } catch (IOException ex) {
            ex.printStackTrace();
        }

        return environmentProps;

    }

}

Invoke the tests

Now run the test, which is as simple as:

./gradlew test

Deploy on Confluent Cloud

Run your app with Confluent Cloud

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

  1. Sign up for Confluent Cloud, a fully managed Apache Kafka service.

  2. After you log in to Confluent Cloud Console, click Environments in the left-hand navigation, click Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  3. From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 free usage on Confluent Cloud (details). To avoid having to enter a credit card, add an additional promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.

  4. Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.

Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations (e.g., Kafka cluster bootstrap servers and credentials, and Confluent Cloud Schema Registry endpoint and credentials) and set the appropriate parameters in your client application. For this tutorial, add the following properties to the client application’s input properties file, substituting the curly-brace placeholders with your Confluent Cloud values.

# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips

# Best practice for Kafka producer to prevent data loss
acks=all

# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.