How can you get the minimum or maximum value of a field from all records in a Kafka topic?
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl
• Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line, as shown below
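For example, the verification step amounts to running these two commands; both should complete without errors on a healthy setup (the exact output varies by platform):

# Both commands should succeed if Docker and Compose are installed and running
docker info
docker compose version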
To get started, make a new directory anywhere you’d like for this project:
mkdir aggregate-minmax && cd aggregate-minmax
Then make the following directories to set up the project structure:
mkdir src test
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
version: '2'

services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk

  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092

  ksqldb-server:
    image: confluentinc/ksqldb-server:0.28.2
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8088:8088
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
      KSQL_LOG4J_OPTS: -Dlog4j.configuration=file:/etc/ksqldb/log4j.properties
      KSQL_BOOTSTRAP_SERVERS: broker:9092
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081

  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.28.2
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
    tty: true
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test
And launch it by running:
docker compose up -d
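Once the containers come up, you can optionally confirm that all four services are running (the exact status text varies by Docker version):

docker compose ps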
The best way to interact with ksqlDB when you’re learning how things work is with the ksqlDB CLI. Fire it up as follows:
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
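The ksqlDB server can take a few seconds to become available, so the CLI may need a retry if it fails to connect at first. Once you see the ksql> prompt, a simple command confirms the connection is working, for example:

SHOW STREAMS;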
Our tutorial computes the highest- and lowest-grossing films per year in our data set. To keep things simple, we’re going to create a source Kafka topic and ksqlDB stream with annual sales data in it. In a real-world data pipeline, this would probably be the output of another ksqlDB query that takes a stream of individual sales events and aggregates them into annual totals, but we’ll save ourselves that trouble and just create the annual sales data directly.
This line of ksqlDB DDL creates a stream and its underlying Kafka topic to represent the annual sales totals. Note that we are defining the schema for the stream, which includes three fields: title, release_year, and total_sales. We are also specifying that the underlying Kafka topic, which ksqlDB will auto-create, be called movie-ticket-sales and have just one partition, and that its messages will be in Avro format.
CREATE STREAM MOVIE_SALES (title VARCHAR, release_year INT, total_sales INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro');
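To double-check the result, you can inspect the new stream’s schema, for example:

DESCRIBE MOVIE_SALES;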
Let’s add a small amount of data to our stream so we can see our query work. You can copy and paste all of these lines into the CLI at once, or, if you prefer, open a second ksqlDB CLI session and copy them in one at a time after you have completed all the subsequent steps, so you can see the results produced in real time.
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Avengers: Endgame', 2019, 856980506);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Captain Marvel', 2019, 426829839);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Toy Story 4', 2019, 401486230);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('The Lion King', 2019, 385082142);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Black Panther', 2018, 700059566);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Avengers: Infinity War', 2018, 678815482);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Deadpool 2', 2018, 324512774);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Beauty and the Beast', 2017, 517218368);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Wonder Woman', 2017, 412563408);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Star Wars Ep. VIII: The Last Jedi', 2017, 517218368);
Before we get too far, let’s set the auto.offset.reset configuration parameter to earliest. This means all new ksqlDB queries will automatically compute their results from the beginning of a stream, rather than the end. This isn’t always what you’ll want to do in production, but it makes query results much easier to see in examples like this.
SET 'auto.offset.reset' = 'earliest';
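If you’d like to sanity-check the rows you just inserted, a quick transient query works here; the LIMIT clause makes it return promptly rather than running indefinitely:

SELECT TITLE, RELEASE_YEAR, TOTAL_SALES FROM MOVIE_SALES EMIT CHANGES LIMIT 3;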
To continue optimizing the configuration for our tutorial, let’s tell ksqlDB to buffer the aggregates as it builds them. This makes the query feel like it responds more slowly, but means that you get just one row of output per year, which is more intuitive.
SET 'ksql.streams.cache.max.bytes.buffering' = '10000000';
With our test data in place, let’s try a query to compute the min and max. A SELECT statement with an EMIT CHANGES clause in ksqlDB is called a transient push query, meaning that after we stop it, it is gone and will not keep processing the input stream. We’ll create its counterpart, a persistent query, a few steps from now.
If you’re familiar with SQL, the text of the query itself is fairly self-explanatory. We are calculating the highest- and lowest-grossing movie figures by year using MIN and MAX aggregations on the TOTAL_SALES column. This query will keep running, continuing to return results until you use Ctrl-C. Most ksqlDB queries are continuous queries that run forever in this way; there is always potentially more input available in the source stream, so the query never finishes on its own.
SELECT RELEASE_YEAR,
       MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
       MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
FROM MOVIE_SALES
GROUP BY RELEASE_YEAR
EMIT CHANGES
LIMIT 2;
This should yield the following output:
+--------------------+--------------------+--------------------+
|RELEASE_YEAR |MIN__TOTAL_SALES |MAX__TOTAL_SALES |
+--------------------+--------------------+--------------------+
|2019 |385082142 |856980506 |
|2018 |324512774 |700059566 |
Limit Reached
Query terminated
Since the output looks right, the next step is to make the query persistent. This looks exactly like the push query, except we have added a CREATE TABLE AS statement to the beginning of it. This statement returns to the CLI prompt right away, having created a persistent stream processing program running in the ksqlDB engine, continuously processing input records and updating the resulting MOVIE_FIGURES_BY_YEAR table.
Moreover, we don’t see the results of the query displayed in the CLI, because they are updating the newly created table itself. That table is available to other ksqlDB queries for further processing, and by default all its records are produced to a topic having the same name (MOVIE_FIGURES_BY_YEAR).
CREATE TABLE MOVIE_FIGURES_BY_YEAR AS
    SELECT RELEASE_YEAR,
           MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
           MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
    FROM MOVIE_SALES
    GROUP BY RELEASE_YEAR
    EMIT CHANGES;
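You can verify that the persistent query is up and running with the following command (the query ID it prints will differ from machine to machine):

SHOW QUERIES;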
Seeing is believing, so let’s directly inspect that output topic using the print ksqlDB CLI command. We could also SELECT * FROM MOVIE_FIGURES_BY_YEAR, but here we opt for a more direct approach.
PRINT MOVIE_FIGURES_BY_YEAR FROM BEGINNING LIMIT 2;
This should yield the following output:
Key format: KAFKA_INT
Value format: AVRO
rowtime: 2020/05/04 21:27:50.630 Z, key: 2019, value: {"MIN__TOTAL_SALES": 385082142, "MAX__TOTAL_SALES": 856980506}, partition: 0
rowtime: 2020/05/04 21:27:50.946 Z, key: 2018, value: {"MIN__TOTAL_SALES": 324512774, "MAX__TOTAL_SALES": 700059566}, partition: 0
Topic printing ceased
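As an aside, the SELECT-based alternative mentioned above would look something like this: a push query against the table, limited to two rows so it returns promptly:

SELECT * FROM MOVIE_FIGURES_BY_YEAR EMIT CHANGES LIMIT 2;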
Notice that ksqlDB is storing the RELEASE_YEAR in the key of the Kafka message. It does this because RELEASE_YEAR is the primary key of the MOVIE_FIGURES_BY_YEAR table. If needed, a copy of RELEASE_YEAR can also be stored in the value by adding AS_VALUE(RELEASE_YEAR) to the projection.
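As a sketch of what that would look like (the table name MOVIE_FIGURES_BY_YEAR2 here is just a placeholder for illustration):

-- MOVIE_FIGURES_BY_YEAR2 is a hypothetical name for this example
CREATE TABLE MOVIE_FIGURES_BY_YEAR2 AS
    SELECT RELEASE_YEAR,
           AS_VALUE(RELEASE_YEAR) AS RELEASE_YEAR_COPY,
           MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
           MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
    FROM MOVIE_SALES
    GROUP BY RELEASE_YEAR
    EMIT CHANGES;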
Now that we have a good ksqlDB pipeline set up, let’s take our CLI experimentation and save it to a file that we can use outside of this session. Create a file at src/statements.sql with the following content:
CREATE STREAM MOVIE_SALES (title VARCHAR, release_year INT, total_sales INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro');

CREATE TABLE MOVIE_FIGURES_BY_YEAR AS
    SELECT RELEASE_YEAR,
           MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
           MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
    FROM MOVIE_SALES
    GROUP BY RELEASE_YEAR
    EMIT CHANGES;
The Confluent ksqlDB CLI Docker image contains a program called ksql-test-runner. We can pass this program a JSON file describing our desired input data, a JSON file containing the intended output results, and a file of ksqlDB queries to run, and it will tell us whether our queries successfully turn the input into the output. To get started, create a file at test/input.json with the inputs for testing:
{
  "inputs": [
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Avengers: Endgame", "RELEASE_YEAR": 2019, "TOTAL_SALES": 856980506}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Captain Marvel", "RELEASE_YEAR": 2019, "TOTAL_SALES": 426829839}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Toy Story 4", "RELEASE_YEAR": 2019, "TOTAL_SALES": 401486230}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "The Lion King", "RELEASE_YEAR": 2019, "TOTAL_SALES": 385082142}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Black Panther", "RELEASE_YEAR": 2018, "TOTAL_SALES": 700059566}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Avengers: Infinity War", "RELEASE_YEAR": 2018, "TOTAL_SALES": 678815482}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Deadpool 2", "RELEASE_YEAR": 2018, "TOTAL_SALES": 324512774}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Beauty and the Beast", "RELEASE_YEAR": 2017, "TOTAL_SALES": 517218368}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Wonder Woman", "RELEASE_YEAR": 2017, "TOTAL_SALES": 412563408}},
    {"topic": "movie-ticket-sales", "value": {"TITLE": "Star Wars Ep. VIII: The Last Jedi", "RELEASE_YEAR": 2017, "TOTAL_SALES": 517218368}}
  ]
}
Next, create a file at test/output.json with the expected outputs:
{
  "outputs": [
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2019, "value": {"MIN__TOTAL_SALES": 856980506, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2019, "value": {"MIN__TOTAL_SALES": 426829839, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2019, "value": {"MIN__TOTAL_SALES": 401486230, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2019, "value": {"MIN__TOTAL_SALES": 385082142, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2018, "value": {"MIN__TOTAL_SALES": 700059566, "MAX__TOTAL_SALES": 700059566}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2018, "value": {"MIN__TOTAL_SALES": 678815482, "MAX__TOTAL_SALES": 700059566}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2018, "value": {"MIN__TOTAL_SALES": 324512774, "MAX__TOTAL_SALES": 700059566}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2017, "value": {"MIN__TOTAL_SALES": 517218368, "MAX__TOTAL_SALES": 517218368}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2017, "value": {"MIN__TOTAL_SALES": 412563408, "MAX__TOTAL_SALES": 517218368}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": 2017, "value": {"MIN__TOTAL_SALES": 412563408, "MAX__TOTAL_SALES": 517218368}, "timestamp": 0}
  ]
}
Finally, invoke the tests using the test runner and the statements file that you created earlier:
docker exec ksqldb-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json
If it passes (and it should), you will see this output:
>>> Test passed!
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service. First, sign up for a Confluent Cloud account.
After you log in to the Confluent Cloud Console, click Environments in the lefthand navigation, click Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud (details). To avoid having to enter a credit card, add the additional promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.
Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.
Next, from the Confluent Cloud Console, click Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
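As a rough sketch, the relevant client properties typically look like the following; every angle-bracketed value is a placeholder to fill in from the Clients page, not a real credential:

# Kafka cluster connection (values come from your Confluent Cloud cluster settings)
bootstrap.servers=<BOOTSTRAP_SERVERS>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<CLUSTER_API_KEY>' password='<CLUSTER_API_SECRET>';

# Confluent Cloud Schema Registry connection
schema.registry.url=<SCHEMA_REGISTRY_URL>
basic.auth.credentials.source=USER_INFO
basic.auth.user.info=<SR_API_KEY>:<SR_API_SECRET>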
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.