If you have time series events in a Kafka topic, how can you group them into fixed-size, non-overlapping, contiguous time intervals?
Create a TABLE with the WINDOW TUMBLING syntax, and specify the window duration with SIZE within the parentheses.
CREATE TABLE rating_count
    WITH (kafka_topic='rating_count') AS
    SELECT title,
           COUNT(*) AS rating_count,
           WINDOWSTART AS window_start,
           WINDOWEND AS window_end
    FROM ratings
    WINDOW TUMBLING (SIZE 6 HOURS)
    GROUP BY title;
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl
• Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line, as shown below
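For example, both of the following should print information rather than errors:

docker info
docker compose version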
To get started, make a new directory anywhere you’d like for this project:
mkdir tumbling-windows && cd tumbling-windows
Then make the following directories to set up its structure:
mkdir src test
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.28.2
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8088:8088
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
      KSQL_LOG4J_OPTS: -Dlog4j.configuration=file:/etc/ksqldb/log4j.properties
      KSQL_BOOTSTRAP_SERVERS: broker:9092
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.28.2
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
    tty: true
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test
And launch it by running:
docker compose up -d
To begin developing interactively, open up the ksqlDB CLI:
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
The first thing we’ll need to start modeling this scenario is a stream that represents ratings of movies. One important attribute of these events is their timestamp since we’ll be modeling the number of ratings that each movie receives over time.
CREATE STREAM ratings (title VARCHAR, release_year INT, rating DOUBLE, timestamp VARCHAR)
    WITH (kafka_topic='ratings',
          timestamp='timestamp',
          timestamp_format='yyyy-MM-dd HH:mm:ss',
          partitions=1,
          value_format='avro');
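If you want to confirm that the stream was registered with the expected columns, including the VARCHAR timestamp column used for event time, a quick DESCRIBE prints its schema:

DESCRIBE ratings;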
Produce events that represent ratings of each movie over time. Note how the timestamps vary across different hours of the day.
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('Die Hard', 1998, 8.2, '2019-07-09 01:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('Die Hard', 1998, 4.5, '2019-07-09 05:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('Die Hard', 1998, 5.1, '2019-07-09 07:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('Tree of Life', 2011, 4.9, '2019-07-09 09:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('Tree of Life', 2011, 5.6, '2019-07-09 08:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('A Walk in the Clouds', 1995, 3.6, '2019-07-09 12:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('A Walk in the Clouds', 1995, 6.0, '2019-07-09 15:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('A Walk in the Clouds', 1995, 4.6, '2019-07-09 22:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('The Big Lebowski', 1998, 9.9, '2019-07-09 05:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('The Big Lebowski', 1998, 4.2, '2019-07-09 02:00:00');
INSERT INTO ratings (title, release_year, rating, timestamp) VALUES ('Super Mario Bros.', 1993, 3.5, '2019-07-09 18:00:00');
Now that you have a stream with some events in it, let’s start to leverage them. The first thing to do is set the following property to ensure that you’re reading from the beginning of the stream:
SET 'auto.offset.reset' = 'earliest';
Let’s figure out how many ratings were given to each movie in tumbling, 6-hour intervals. To do that, we issue the following transient push query to aggregate the ratings, grouped by the movie’s title. This tells ksqlDB to count the ratings on a per-movie basis. The query also selects WINDOWSTART and WINDOWEND, functions that describe the boundaries of each 6-hour interval. The query will block and continue to return results until its limit is reached or you tell it to stop.
SELECT title,
       COUNT(*) AS rating_count,
       WINDOWSTART AS window_start,
       WINDOWEND AS window_end
FROM ratings
WINDOW TUMBLING (SIZE 6 HOURS)
GROUP BY title
EMIT CHANGES
LIMIT 11;
This should yield the following output:
+--------------------+--------------------+--------------------+--------------------+
|TITLE |RATING_COUNT |WINDOW_START |WINDOW_END |
+--------------------+--------------------+--------------------+--------------------+
|Die Hard |1 |1562630400000 |1562652000000 |
|Die Hard |2 |1562630400000 |1562652000000 |
|Die Hard |1 |1562652000000 |1562673600000 |
|Tree of Life |1 |1562652000000 |1562673600000 |
|Tree of Life |2 |1562652000000 |1562673600000 |
|A Walk in the Clouds|1 |1562673600000 |1562695200000 |
|A Walk in the Clouds|2 |1562673600000 |1562695200000 |
|A Walk in the Clouds|1 |1562695200000 |1562716800000 |
|The Big Lebowski |1 |1562630400000 |1562652000000 |
|The Big Lebowski |2 |1562630400000 |1562652000000 |
|Super Mario Bros. |1 |1562695200000 |1562716800000 |
Limit Reached
Query terminated
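The window boundaries are reported as Unix timestamps in milliseconds, and tumbling windows are aligned to the Unix epoch, so each window start is simply the event time rounded down to a multiple of the window size, here 6 hours, or 21,600,000 ms. As a quick sanity check outside of ksqlDB (assuming a Bash-like shell), the first Die Hard rating at 2019-07-09 01:00:00 UTC, or 1562634000000 ms, rounds down to the window start shown above:

# 21600000 ms = 6 hours; round down to the nearest window boundary
echo $(( 1562634000000 - 1562634000000 % 21600000 ))
# prints 1562630400000, which is 2019-07-09 00:00:00 UTC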
That’s a fine snapshot, but we want this windowed count of ratings to update continuously. The following creates a new table that is continuously populated by its query:
CREATE TABLE rating_count
    WITH (kafka_topic='rating_count') AS
    SELECT title,
           COUNT(*) AS rating_count,
           WINDOWSTART AS window_start,
           WINDOWEND AS window_end
    FROM ratings
    WINDOW TUMBLING (SIZE 6 HOURS)
    GROUP BY title;
As a bonus, we can prove to ourselves that the window boundaries are in fact 6-hour intervals. Run the following transient push query, which uses the TIMESTAMPTOSTRING function to convert the Unix timestamps into something that we can read:
SELECT title,
       rating_count,
       TIMESTAMPTOSTRING(window_start, 'yyyy-MM-dd HH:mm:ss', 'UTC') AS window_start,
       TIMESTAMPTOSTRING(window_end, 'yyyy-MM-dd HH:mm:ss', 'UTC') AS window_end
FROM rating_count
EMIT CHANGES
LIMIT 11;
The output should look similar to:
+--------------------+--------------------+--------------------+--------------------+
|TITLE |RATING_COUNT |WINDOW_START |WINDOW_END |
+--------------------+--------------------+--------------------+--------------------+
|Die Hard |1 |2019-07-09 00:00:00 |2019-07-09 06:00:00 |
|Die Hard |2 |2019-07-09 00:00:00 |2019-07-09 06:00:00 |
|Die Hard |1 |2019-07-09 06:00:00 |2019-07-09 12:00:00 |
|Tree of Life |1 |2019-07-09 06:00:00 |2019-07-09 12:00:00 |
|Tree of Life |2 |2019-07-09 06:00:00 |2019-07-09 12:00:00 |
|A Walk in the Clouds|1 |2019-07-09 12:00:00 |2019-07-09 18:00:00 |
|A Walk in the Clouds|2 |2019-07-09 12:00:00 |2019-07-09 18:00:00 |
|A Walk in the Clouds|1 |2019-07-09 18:00:00 |2019-07-10 00:00:00 |
|The Big Lebowski |1 |2019-07-09 00:00:00 |2019-07-09 06:00:00 |
|The Big Lebowski |2 |2019-07-09 00:00:00 |2019-07-09 06:00:00 |
|Super Mario Bros. |1 |2019-07-09 18:00:00 |2019-07-10 00:00:00 |
Limit Reached
Query terminated
Finally, let’s see what’s available on the underlying Kafka topic for the table. We can print that out easily.
PRINT rating_count FROM BEGINNING LIMIT 11;
Notice that the key for each message includes not just the movie title, but also the start time of the window. It should look something like this:
Key format: HOPPING(KAFKA_STRING) or TUMBLING(KAFKA_STRING)
Value format: AVRO
rowtime: 2019/07/09 01:00:00.000 Z, key: [Die Hard@1562630400000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562630400000, "WINDOW_END": 1562652000000}, partition: 0
rowtime: 2019/07/09 05:00:00.000 Z, key: [Die Hard@1562630400000/-], value: {"RATING_COUNT": 2, "WINDOW_START": 1562630400000, "WINDOW_END": 1562652000000}, partition: 0
rowtime: 2019/07/09 07:00:00.000 Z, key: [Die Hard@1562652000000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562652000000, "WINDOW_END": 1562673600000}, partition: 0
rowtime: 2019/07/09 09:00:00.000 Z, key: [Tree of Life@1562652000000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562652000000, "WINDOW_END": 1562673600000}, partition: 0
rowtime: 2019/07/09 09:00:00.000 Z, key: [Tree of Life@1562652000000/-], value: {"RATING_COUNT": 2, "WINDOW_START": 1562652000000, "WINDOW_END": 1562673600000}, partition: 0
rowtime: 2019/07/09 12:00:00.000 Z, key: [A Walk in the Clouds@1562673600000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562673600000, "WINDOW_END": 1562695200000}, partition: 0
rowtime: 2019/07/09 15:00:00.000 Z, key: [A Walk in the Clouds@1562673600000/-], value: {"RATING_COUNT": 2, "WINDOW_START": 1562673600000, "WINDOW_END": 1562695200000}, partition: 0
rowtime: 2019/07/09 22:00:00.000 Z, key: [A Walk in the Clouds@1562695200000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562695200000, "WINDOW_END": 1562716800000}, partition: 0
rowtime: 2019/07/09 05:00:00.000 Z, key: [The Big Lebowski@1562630400000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562630400000, "WINDOW_END": 1562652000000}, partition: 0
rowtime: 2019/07/09 05:00:00.000 Z, key: [The Big Lebowski@1562630400000/-], value: {"RATING_COUNT": 2, "WINDOW_START": 1562630400000, "WINDOW_END": 1562652000000}, partition: 0
rowtime: 2019/07/09 18:00:00.000 Z, key: [Super Mario Bros.@1562695200000/-], value: {"RATING_COUNT": 1, "WINDOW_START": 1562695200000, "WINDOW_END": 1562716800000}, partition: 0
Topic printing ceased
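As a side note, because rating_count is materialized by ksqlDB, you can also fetch its current contents for a single movie with a pull query. This is a sketch; unlike the push queries above, a pull query returns the table’s current state and terminates immediately:

SELECT * FROM rating_count WHERE title = 'Die Hard';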
Now that you have a series of statements that’s doing the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content:
CREATE STREAM ratings (title VARCHAR, release_year INT, rating DOUBLE, timestamp VARCHAR)
    WITH (kafka_topic='ratings',
          timestamp='timestamp',
          timestamp_format='yyyy-MM-dd HH:mm:ss',
          partitions=1,
          value_format='avro');

CREATE TABLE rating_count
    WITH (kafka_topic='rating_count') AS
    SELECT title,
           COUNT(*) AS rating_count,
           WINDOWSTART AS window_start,
           WINDOWEND AS window_end
    FROM ratings
    WINDOW TUMBLING (SIZE 6 HOURS)
    GROUP BY title;
Create a file at test/input.json with the inputs for testing:
{
  "inputs": [
    {
      "topic": "ratings",
      "value": {
        "title": "Die Hard",
        "release_year": 1998,
        "rating": 8.2,
        "timestamp": "2019-07-09 01:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "Die Hard",
        "release_year": 1998,
        "rating": 4.5,
        "timestamp": "2019-07-09 05:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "Die Hard",
        "release_year": 1998,
        "rating": 5.1,
        "timestamp": "2019-07-09 07:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "Tree of Life",
        "release_year": 2011,
        "rating": 4.9,
        "timestamp": "2019-07-09 09:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "Tree of Life",
        "release_year": 2011,
        "rating": 5.6,
        "timestamp": "2019-07-09 08:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "A Walk in the Clouds",
        "release_year": 1995,
        "rating": 3.6,
        "timestamp": "2019-07-09 12:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "A Walk in the Clouds",
        "release_year": 1995,
        "rating": 6.0,
        "timestamp": "2019-07-09 15:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "A Walk in the Clouds",
        "release_year": 1995,
        "rating": 4.6,
        "timestamp": "2019-07-09 22:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "The Big Lebowski",
        "release_year": 1998,
        "rating": 9.9,
        "timestamp": "2019-07-09 05:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "The Big Lebowski",
        "release_year": 1998,
        "rating": 4.2,
        "timestamp": "2019-07-09 02:00:00"
      }
    },
    {
      "topic": "ratings",
      "value": {
        "title": "Super Mario Bros.",
        "release_year": 1993,
        "rating": 3.5,
        "timestamp": "2019-07-09 18:00:00"
      }
    }
  ]
}
Similarly, create a file at test/output.json with the expected outputs. Notice that because ksqlDB combines its grouping key with the window boundaries, we need a bit of extra notation to describe what to expect. We use the window key to describe the start and end boundaries that each record’s key represents.
{
  "outputs": [
    {
      "topic": "rating_count",
      "key": "Die Hard",
      "window": {
        "start": 1562630400000,
        "end": 1562652000000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562630400000,
        "WINDOW_END": 1562652000000
      },
      "timestamp": 1562634000000
    },
    {
      "topic": "rating_count",
      "key": "Die Hard",
      "window": {
        "start": 1562630400000,
        "end": 1562652000000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 2,
        "WINDOW_START": 1562630400000,
        "WINDOW_END": 1562652000000
      },
      "timestamp": 1562648400000
    },
    {
      "topic": "rating_count",
      "key": "Die Hard",
      "window": {
        "start": 1562652000000,
        "end": 1562673600000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562652000000,
        "WINDOW_END": 1562673600000
      },
      "timestamp": 1562655600000
    },
    {
      "topic": "rating_count",
      "key": "Tree of Life",
      "window": {
        "start": 1562652000000,
        "end": 1562673600000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562652000000,
        "WINDOW_END": 1562673600000
      },
      "timestamp": 1562662800000
    },
    {
      "topic": "rating_count",
      "key": "Tree of Life",
      "window": {
        "start": 1562652000000,
        "end": 1562673600000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 2,
        "WINDOW_START": 1562652000000,
        "WINDOW_END": 1562673600000
      },
      "timestamp": 1562662800000
    },
    {
      "topic": "rating_count",
      "key": "A Walk in the Clouds",
      "window": {
        "start": 1562673600000,
        "end": 1562695200000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562673600000,
        "WINDOW_END": 1562695200000
      },
      "timestamp": 1562673600000
    },
    {
      "topic": "rating_count",
      "key": "A Walk in the Clouds",
      "window": {
        "start": 1562673600000,
        "end": 1562695200000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 2,
        "WINDOW_START": 1562673600000,
        "WINDOW_END": 1562695200000
      },
      "timestamp": 1562684400000
    },
    {
      "topic": "rating_count",
      "key": "A Walk in the Clouds",
      "window": {
        "start": 1562695200000,
        "end": 1562716800000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562695200000,
        "WINDOW_END": 1562716800000
      },
      "timestamp": 1562709600000
    },
    {
      "topic": "rating_count",
      "key": "The Big Lebowski",
      "window": {
        "start": 1562630400000,
        "end": 1562652000000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562630400000,
        "WINDOW_END": 1562652000000
      },
      "timestamp": 1562648400000
    },
    {
      "topic": "rating_count",
      "key": "The Big Lebowski",
      "window": {
        "start": 1562630400000,
        "end": 1562652000000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 2,
        "WINDOW_START": 1562630400000,
        "WINDOW_END": 1562652000000
      },
      "timestamp": 1562648400000
    },
    {
      "topic": "rating_count",
      "key": "Super Mario Bros.",
      "window": {
        "start": 1562695200000,
        "end": 1562716800000,
        "type": "time"
      },
      "value": {
        "RATING_COUNT": 1,
        "WINDOW_START": 1562695200000,
        "WINDOW_END": 1562716800000
      },
      "timestamp": 1562695200000
    }
  ]
}
Lastly, invoke the tests using the test runner and the statements file that you created earlier:
docker exec ksqldb-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json
The test should pass:
>>> Test passed!
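Once the test passes, you can exit any open CLI session and tear down the Docker environment:

docker compose down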
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service. First, sign up for Confluent Cloud.
After you log in to the Confluent Cloud Console, click Environments in the lefthand navigation, click Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud. To avoid having to enter a credit card, also add the promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.
Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.
Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.
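For instance, pointing the ksqldb-server service in the compose file at Confluent Cloud instead of the local broker amounts to swapping a few environment variables. The following is only a sketch with placeholder values, not a definitive configuration; the exact bootstrap server, API keys, and Schema Registry endpoint come from your cluster’s Clients page:

ksqldb-server:
  environment:
    # Placeholders below; substitute the values from your Confluent Cloud cluster
    KSQL_BOOTSTRAP_SERVERS: <CLOUD_BOOTSTRAP_SERVER>:9092
    KSQL_SECURITY_PROTOCOL: SASL_SSL
    KSQL_SASL_MECHANISM: PLAIN
    KSQL_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="<CLUSTER_API_KEY>" password="<CLUSTER_API_SECRET>";
    KSQL_KSQL_SCHEMA_REGISTRY_URL: <SCHEMA_REGISTRY_URL>
    KSQL_KSQL_SCHEMA_REGISTRY_BASIC_AUTH_CREDENTIALS_SOURCE: USER_INFO
    KSQL_KSQL_SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO: <SR_API_KEY>:<SR_API_SECRET>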