How can you calculate the sum of one or more fields from all records in a Kafka topic?
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it.
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl.
• Verify that Docker is set up properly by ensuring that no errors are output when you run docker info and docker compose version on the command line (see the check below).
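One way to run both checks at once is the following one-liner, which prints ok only if neither command errors out:
docker info > /dev/null && docker compose version > /dev/null && echo ok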
To get started, make a new directory anywhere you’d like for this project:
mkdir aggregate-sum && cd aggregate-sum
Then make the following directories to set up its structure:
mkdir src test
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk

  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092

  ksqldb-server:
    image: confluentinc/ksqldb-server:0.28.2
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8088:8088
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
      KSQL_LOG4J_OPTS: -Dlog4j.configuration=file:/etc/ksqldb/log4j.properties
      KSQL_BOOTSTRAP_SERVERS: broker:9092
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081

  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.28.2
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
    tty: true
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test
And launch it by running:
docker compose up -d
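Optionally, you can confirm that all four containers started before continuing:
docker compose ps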
To begin developing interactively, open up the ksqlDB CLI:
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
First, you’ll need to create a Kafka topic and stream to represent the ticket sales. The statement below creates both at the same time. This stream contains the name of the movie and the price of the ticket to watch it. We model the price as an integer to make the example simple.
Another important characteristic of the data is the timestamp column, sale_ts. Every message in Kafka is timestamped, and unless you specify otherwise, ksqlDB will use that existing timestamp for any time-related processing. In this example, we’re telling it to use a field in the message for the timestamp instead; this is known as event time, as opposed to ingestion time.
CREATE STREAM MOVIE_TICKET_SALES (title VARCHAR, sale_ts VARCHAR, ticket_total_value INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro',
          TIMESTAMP='sale_ts',
          TIMESTAMP_FORMAT='yyyy-MM-dd''T''HH:mm:ssX');
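If you’d like to double-check the schema that this statement created, you can describe the stream from the same CLI session:
DESCRIBE MOVIE_TICKET_SALES;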
With the stream in place, we can now produce the following events to it:
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Aliens', '2019-07-18T10:00:00Z', 10);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Die Hard', '2019-07-18T10:00:00Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Die Hard', '2019-07-18T10:01:00Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T10:01:31Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Die Hard', '2019-07-18T10:01:36Z', 24);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T10:02:00Z', 18);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Big Lebowski', '2019-07-18T11:03:21Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Big Lebowski', '2019-07-18T11:03:50Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T11:40:00Z', 36);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T11:40:09Z', 18);
Before we continue, let’s make sure that each ksqlDB query we execute will begin its processing from the beginning of the stream:
SET 'auto.offset.reset' = 'earliest';
For the purposes of this example only, we are also going to configure ksqlDB to buffer the aggregates as it builds them. This makes the query feel like it responds more slowly, but it means that you get just one row per movie, which makes the results easier to follow:
SET 'ksql.streams.cache.max.bytes.buffering' = '10000000';
Let’s calculate the total sales per movie using a SUM aggregation on the TICKET_TOTAL_VALUE field. This query will block and continue to return results until its limit is reached or until you tell it to stop.
SELECT TITLE,
       SUM(TICKET_TOTAL_VALUE) AS TOTAL_VALUE
FROM MOVIE_TICKET_SALES
GROUP BY TITLE
EMIT CHANGES
LIMIT 3;
This should yield the following output:
+--------------------+--------------------+
|TITLE |TOTAL_VALUE |
+--------------------+--------------------+
|Aliens |10 |
|Die Hard |48 |
|The Big Lebowski |24 |
Limit Reached
Query terminated
Since the output looks right, the next step is to make the query persistent. We do this with the CREATE TABLE AS statement. This statement creates a stream processor that runs continuously, always consuming events from the source stream (MOVIE_TICKET_SALES) and creating and updating entries in the resulting table (MOVIE_REVENUE).
It should not escape your notice that we are turning a stream into a table. A table is always the result of using the GROUP BY clause on a stream. As we noted in the previous step, we are also computing an aggregate over the grouped values with SUM(TICKET_TOTAL_VALUE). This function creates a new column in the resulting table, which we give a readable name using the AS TOTAL_VALUE clause.
Issue the following to create the new table:
CREATE TABLE MOVIE_REVENUE AS
    SELECT TITLE,
           SUM(TICKET_TOTAL_VALUE) AS TOTAL_VALUE
    FROM MOVIE_TICKET_SALES
    GROUP BY TITLE;
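To see the persistent query that this statement started, you can optionally list the running queries (the query ID will vary):
SHOW QUERIES;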
To check that it’s working, print out the contents of the output table’s underlying topic:
PRINT MOVIE_REVENUE FROM BEGINNING LIMIT 3;
This should yield the following output:
Key format: KAFKA_STRING
Value format: AVRO
rowtime: 2019/07/18 10:00:00.000 Z, key: Aliens, value: {"TOTAL_VALUE": 10}, partition: 0
rowtime: 2019/07/18 10:01:36.000 Z, key: Die Hard, value: {"TOTAL_VALUE": 48}, partition: 0
rowtime: 2019/07/18 11:03:50.000 Z, key: The Big Lebowski, value: {"TOTAL_VALUE": 24}, partition: 0
Topic printing ceased
Notice that ksqlDB is storing the TITLE in the key of the Kafka message. It does this because TITLE is the primary key of the MOVIE_REVENUE table.
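Because TITLE is the primary key and the table is materialized, you can also look up a single row with a pull query. As an optional sanity check (the exact output format may vary by ksqlDB version):
SELECT * FROM MOVIE_REVENUE WHERE TITLE = 'Die Hard';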
If needed, a copy of TITLE can also be stored in the value by adding AS_VALUE(TITLE) to the projection.
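For example, a variant of the statement above might look like the following; MOVIE_REVENUE_BY_TITLE and TITLE_VALUE are hypothetical names chosen for illustration:
CREATE TABLE MOVIE_REVENUE_BY_TITLE AS
    SELECT TITLE,
           AS_VALUE(TITLE) AS TITLE_VALUE,
           SUM(TICKET_TOTAL_VALUE) AS TOTAL_VALUE
    FROM MOVIE_TICKET_SALES
    GROUP BY TITLE;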
Now that you have a series of statements that’s doing the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content:
CREATE STREAM MOVIE_TICKET_SALES (title VARCHAR, sale_ts VARCHAR, ticket_total_value INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro',
          TIMESTAMP='sale_ts',
          TIMESTAMP_FORMAT='yyyy-MM-dd''T''HH:mm:ssX');

CREATE TABLE MOVIE_REVENUE AS
    SELECT TITLE,
           SUM(TICKET_TOTAL_VALUE) AS TOTAL_VALUE
    FROM MOVIE_TICKET_SALES
    GROUP BY TITLE;
Create a file at test/input.json with the inputs for testing:
{
"inputs": [
{"topic": "movie-ticket-sales", "value": {"TITLE": "Aliens", "SALE_TS": "2019-07-18T10:00:00Z", "TICKET_TOTAL_VALUE": 10}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "Die Hard", "SALE_TS": "2019-07-18T10:00:00Z", "TICKET_TOTAL_VALUE": 12}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "Die Hard", "SALE_TS": "2019-07-18T10:01:00Z", "TICKET_TOTAL_VALUE": 12}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T10:01:31Z", "TICKET_TOTAL_VALUE": 12}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "Die Hard", "SALE_TS": "2019-07-18T10:01:36Z", "TICKET_TOTAL_VALUE": 24}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T10:02:00Z", "TICKET_TOTAL_VALUE": 18}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "The Big Lebowski", "SALE_TS": "2019-07-18T11:03:21Z", "TICKET_TOTAL_VALUE": 12}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "The Big Lebowski", "SALE_TS": "2019-07-18T11:03:50Z", "TICKET_TOTAL_VALUE": 12}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T11:40:00Z", "TICKET_TOTAL_VALUE": 36}},
{"topic": "movie-ticket-sales", "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T11:40:09Z", "TICKET_TOTAL_VALUE": 18}}
]
}
Similarly, create a file at test/output.json with the expected outputs:
{
"outputs": [
{"topic": "MOVIE_REVENUE", "key": "Aliens", "value": {"TOTAL_VALUE": 10}, "timestamp": 1563444000000},
{"topic": "MOVIE_REVENUE", "key": "Die Hard", "value": {"TOTAL_VALUE": 12}, "timestamp": 1563444000000},
{"topic": "MOVIE_REVENUE", "key": "Die Hard", "value": {"TOTAL_VALUE": 24}, "timestamp": 1563444060000},
{"topic": "MOVIE_REVENUE", "key": "The Godfather", "value": {"TOTAL_VALUE": 12}, "timestamp": 1563444091000},
{"topic": "MOVIE_REVENUE", "key": "Die Hard", "value": {"TOTAL_VALUE": 48}, "timestamp": 1563444096000},
{"topic": "MOVIE_REVENUE", "key": "The Godfather", "value": {"TOTAL_VALUE": 30}, "timestamp": 1563444120000},
{"topic": "MOVIE_REVENUE", "key": "The Big Lebowski", "value": {"TOTAL_VALUE": 12}, "timestamp": 1563447801000},
{"topic": "MOVIE_REVENUE", "key": "The Big Lebowski", "value": {"TOTAL_VALUE": 24}, "timestamp": 1563447830000},
{"topic": "MOVIE_REVENUE", "key": "The Godfather", "value": {"TOTAL_VALUE": 66}, "timestamp": 1563450000000},
{"topic": "MOVIE_REVENUE", "key": "The Godfather", "value": {"TOTAL_VALUE": 84}, "timestamp": 1563450009000}
]
}
Lastly, invoke the tests using the test runner and the statements file that you created earlier:
docker exec ksqldb-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json
The tests should pass:
>>> Test passed!
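When you’re finished, you can exit the CLI session with exit and, if you no longer need the local environment, tear it down:
docker compose down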
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.
First, sign up for Confluent Cloud. After you log in to the Confluent Cloud Console, click on Add cloud environment and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud (details).
Click on LEARN and follow the instructions to launch a Kafka cluster and to enable Schema Registry.
Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g. Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
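As a rough sketch, the generated client configuration looks something like the following; every value in angle brackets is a placeholder that you must replace with the values from your own cluster:
bootstrap.servers=<BOOTSTRAP_SERVERS>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<CLUSTER_API_KEY>' password='<CLUSTER_API_SECRET>';
schema.registry.url=<SCHEMA_REGISTRY_URL>
basic.auth.credentials.source=USER_INFO
basic.auth.user.info=<SR_API_KEY>:<SR_API_SECRET>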
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.