If you have time series events in a Kafka topic, sliding windows let you group and aggregate them in small, fixed-size, contiguous time intervals. Semantically, this is the same idea as hopping windows with a very small advance interval; however, for performance reasons, hopping windows aren't the best solution for small time increments, because each record would land in many overlapping windows that all have to be maintained.
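For a sense of the difference, here is roughly what the hopping-window analogue would look like (a sketch for comparison only, not part of this tutorial): with 500 ms windows advancing every 100 ms, each record belongs to five overlapping windows, and shrinking the advance interval further multiplies the number of windows Kafka Streams must maintain.
// Hopping-window analogue: 500 ms windows that advance every 100 ms.
// Each record falls into size / advance = 5 overlapping windows.
builder.stream(INPUT_TOPIC, Consumed.with(Serdes.String(), tempReadingSerde))
        .groupByKey()
        .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMillis(500), Duration.ofMillis(100))
                .advanceBy(Duration.ofMillis(100)));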
For example, suppose you have a topic with events that represent temperature readings from a sensor. The following topology definition computes the average temperature for a given sensor over 0.5-second sliding windows.
builder.stream(INPUT_TOPIC, Consumed.with(Serdes.String(), tempReadingSerde))
        .groupByKey()
        .windowedBy(SlidingWindows.ofTimeDifferenceAndGrace(Duration.ofMillis(500), Duration.ofMillis(100)))
        .aggregate(() -> new TempAverage(0, 0),
                (key, value, agg) -> new TempAverage(agg.total() + value.temp(), agg.num_readings() + 1),
                Materialized.with(Serdes.String(), tempAverageSerde))
        .toStream()
        .map((Windowed<String> key, TempAverage tempAverage) -> {
            double aveNoFormat = tempAverage.total() / (double) tempAverage.num_readings();
            double formattedAve = Double.parseDouble(String.format("%.2f", aveNoFormat));
            return new KeyValue<>(key.key(), formattedAve);
        })
        .to(OUTPUT_TOPIC, Produced.with(Serdes.String(), Serdes.Double()));
Let's review the key points in this example:
.groupByKey()
Aggregations must group records by key, so grouping by key is the first step in the topology.
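In this tutorial the records are already keyed by device ID. If they weren't, you could rekey with groupBy instead of groupByKey; a minimal sketch, assuming the value type exposes a device_id() accessor as the sample JSON later in this tutorial suggests:
// Rekey by the device ID carried in the value; this triggers a repartition.
.groupBy((key, reading) -> reading.device_id(),
        Grouped.with(Serdes.String(), tempReadingSerde))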
.windowedBy(SlidingWindows.ofTimeDifferenceAndGrace(Duration.ofMillis(500), Duration.ofMillis(100)))
This creates a new TimeWindowedKStream that we can aggregate. The sliding windows are 500 ms long, and we allow data to arrive late by as much as 100 ms.
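If late-arriving data isn't a concern, SlidingWindows also offers a no-grace variant:
// Windows close as soon as stream time passes their end; late records are dropped.
.windowedBy(SlidingWindows.ofTimeDifferenceWithNoGrace(Duration.ofMillis(500)))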
.aggregate(() -> new TempAverage(0, 0),
        (key, value, agg) -> new TempAverage(agg.total() + value.temp(), agg.num_readings() + 1),
        Materialized.with(Serdes.String(), tempAverageSerde))
Here we update the sum of temperature readings and the number of readings processed. These values are used to calculate the average temperature downstream in the topology.
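TempAverage itself is just an accumulator. Given the total() and num_readings() accessors used above, it could be a Java record along these lines (a sketch; the actual class lives in the tutorial repository):
// Running sum of temperatures and the count of readings seen so far.
public record TempAverage(double total, int num_readings) {}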
.toStream()
.map((Windowed<String> key, TempAverage tempAverage) -> {
    double aveNoFormat = tempAverage.total() / (double) tempAverage.num_readings();
    double formattedAve = Double.parseDouble(String.format("%.2f", aveNoFormat));
    return new KeyValue<>(key.key(), formattedAve);
})
.to(OUTPUT_TOPIC, Produced.with(Serdes.String(), Serdes.Double()));
Windowed aggregations in Kafka Streams return a KTable, so we first convert it to a KStream. Then map computes the formatted average temperature from the running total and count before we finally emit the result to the output topic.
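Note that sliding windows are driven by event time. The sample records produced later in this tutorial embed a timestamp field, so the application presumably derives event time from the payload rather than using the record's ingestion time. A minimal sketch of such a timestamp extractor, with hypothetical class names and a TempReading record inferred from the sample JSON:
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Inferred from the sample JSON: {"temp":80.0,"timestamp":...,"device_id":"..."}
record TempReading(double temp, long timestamp, String device_id) {}

public class TempReadingTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        // Prefer the event time embedded in the reading; fall back to partition time.
        if (record.value() instanceof TempReading reading) {
            return reading.timestamp();
        }
        return partitionTime;
    }
}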
The following steps use Confluent Cloud. To run the tutorial locally with Docker, skip to the Docker instructions section at the bottom.
Clone the tutorials repository:
git clone git@github.com:confluentinc/tutorials.git
cd tutorials
Log in to your Confluent Cloud account:
confluent login --prompt --save
Install a CLI plugin that will streamline the creation of resources in Confluent Cloud:
confluent plugin install confluent-quickstart
Run the plugin from the top-level directory of the tutorials repository to create the Confluent Cloud resources needed for this tutorial. Note that you may specify a different cloud provider (gcp or azure) or region. You can find supported regions in a given cloud provider by running confluent kafka region list --cloud <CLOUD>.
confluent quickstart \
--environment-name kafka-streams-sliding-windows-env \
--kafka-cluster-name kafka-streams-sliding-windows-cluster \
--create-kafka-key \
--kafka-java-properties-file ./sliding-windows/kstreams/src/main/resources/cloud.properties
The plugin should complete in under a minute.
Create the input and output topics for the application:
confluent kafka topic create temp-readings
confluent kafka topic create output-topic
Start a console producer:
confluent kafka topic produce temp-readings --parse-key --delimiter :
Enter a few JSON-formatted temperature readings:
device-1:{"temp":80.0,"timestamp":1757703142,"device_id":"device-1"}
device-1:{"temp":90.0,"timestamp":1757703142,"device_id":"device-1"}
device-1:{"temp":95.0,"timestamp":1757703142,"device_id":"device-1"}
device-1:{"temp":100.0,"timestamp":1757703142,"device_id":"device-1"}
Enter Ctrl+C to exit the console producer.
Compile the application from the top-level tutorials repository directory:
./gradlew sliding-windows:kstreams:shadowJar
Navigate into the application's home directory:
cd sliding-windows/kstreams
Run the application, passing the Kafka client configuration file generated when you created Confluent Cloud resources:
java -cp ./build/libs/sliding-windows-standalone.jar \
io.confluent.developer.SlidingWindow \
./src/main/resources/cloud.properties
Validate that you see the correct temperature averages in the output-topic topic:
confluent kafka topic consume output-topic -b \
--print-key --delimiter : --value-format double
You should see the average updated within the same sliding window:
device-1:80.0
device-1:85.0
device-1:88.33
device-1:91.25
When you are finished, delete the kafka-streams-sliding-windows-env environment. Start by finding its environment ID, which has the form env-123456:
confluent environment list
Delete the environment, including all resources created for this tutorial:
confluent environment delete <ENVIRONMENT ID>
Docker instructions
Clone the tutorials repository if you haven't already:
git clone git@github.com:confluentinc/tutorials.git
cd tutorials
Start Kafka with the following command run from the top-level tutorials repository directory:
docker compose -f ./docker/docker-compose-kafka.yml up -d
Open a shell in the broker container:
docker exec -it broker /bin/bash
Create the input and output topics for the application:
kafka-topics --bootstrap-server localhost:9092 --create --topic temp-readings
kafka-topics --bootstrap-server localhost:9092 --create --topic output-topic
Start a console producer:
kafka-console-producer --bootstrap-server localhost:9092 --topic temp-readings \
--property "parse.key=true" --property "key.separator=:"
Enter a few JSON-formatted temperature readings:
device-1:{"temp":80.0,"timestamp":1757703142,"device_id":"device-1"}
device-1:{"temp":90.0,"timestamp":1757703143,"device_id":"device-1"}
device-1:{"temp":95.0,"timestamp":1757703144,"device_id":"device-1"}
device-1:{"temp":100.0,"timestamp":1757703145,"device_id":"device-1"}
Enter Ctrl+C to exit the console producer.
On your local machine, compile the app:
./gradlew sliding-windows:kstreams:shadowJar
Navigate into the application's home directory:
cd sliding-windows/kstreams
Run the application, passing the local.properties Kafka client configuration file that points to the broker's bootstrap servers endpoint at localhost:9092:
java -cp ./build/libs/sliding-windows-standalone.jar \
io.confluent.developer.SlidingWindow \
./src/main/resources/local.properties
Validate that you see the correct temperature averages in the output-topic topic:
kafka-console-consumer --bootstrap-server localhost:9092 --topic output-topic --from-beginning \
--property "print.key=true" --property "key.separator=:" \
--property "value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer"
You should see the average updated within the same sliding window:
device-1:80.0
device-1:85.0
device-1:88.33
device-1:91.25
From your local machine, stop the broker container:
docker compose -f ./docker/docker-compose-kafka.yml down