Multi-join expressions

Question:

How can you join multiple streams or tables together using a single expression in ksqlDB?


Example use case:

Suppose you have two tables, one for customers and one for items, and a stream of orders placed at an online store. In this tutorial, we'll build a stream of all orders, enriched with details of the customer who placed each order and the item purchased.

Hands-on code example:

Short Answer

Multi-way joins:

CREATE STREAM orders_enriched AS
  SELECT customers.customerid AS customerid, customers.customername AS customername,
         orders.orderid, orders.purchasedate,
         items.itemid, items.itemname
  FROM orders
  LEFT JOIN customers ON orders.customerid = customers.customerid
  LEFT JOIN items ON orders.itemid = items.itemid;

Run it

Prerequisites

This tutorial installs Confluent Platform using Docker. Before proceeding:

  • Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it.

  • Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.

  • Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl.

  • Verify that Docker is set up properly by ensuring that no errors are output when you run docker info and docker compose version on the command line, as shown below.
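For reference, these are the two verification commands; on a healthy setup, each completes without errors:

docker info
docker compose version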

Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir multi-joins && cd multi-joins

Then make the following directories to set up its structure:

mkdir src test

Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):

version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
    - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
    - broker
    ports:
    - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.28.2
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
    - broker
    - schema-registry
    ports:
    - 8088:8088
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
      KSQL_LOG4J_OPTS: -Dlog4j.configuration=file:/etc/ksqldb/log4j.properties
      KSQL_BOOTSTRAP_SERVERS: broker:9092
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.28.2
    container_name: ksqldb-cli
    depends_on:
    - broker
    - ksqldb-server
    entrypoint: /bin/sh
    environment:
      KSQL_CONFIG_DIR: /etc/ksqldb
    tty: true
    volumes:
    - ./src:/opt/app/src
    - ./test:/opt/app/test

And launch it by running:

docker compose up -d
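Before moving on, you can confirm that the broker, schema-registry, ksqldb-server, and ksqldb-cli containers are all up:

docker compose ps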

Create input tables and streams

To create our application, we’ll first model some input data that mimics an online store. We’ll then use ksqlDB’s multi-join support to create a stream of orders enriched with data from those inputs.

To begin developing interactively, open up the ksqlDB CLI:

docker exec -it ksqldb-cli ksql http://ksqldb-server:8088

First, let’s create an input table of customer data, which will hold its data in JSON format.

CREATE TABLE customers (customerid STRING PRIMARY KEY, customername STRING)
    WITH (KAFKA_TOPIC='customers',
          VALUE_FORMAT='json',
          PARTITIONS=1);

Similarly, we create a second table containing items available in our online store:

CREATE TABLE items (itemid STRING PRIMARY KEY, itemname STRING)
    WITH (KAFKA_TOPIC='items',
          VALUE_FORMAT='json',
          PARTITIONS=1);

Next we create a stream containing orders submitted to our online store, also formatted in JSON.

CREATE STREAM orders (orderid STRING KEY, customerid STRING, itemid STRING, purchasedate STRING)
    WITH (KAFKA_TOPIC='orders',
          VALUE_FORMAT='json',
          PARTITIONS=1);
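At this point, you can verify that all three inputs are registered with ksqlDB:

SHOW TABLES;
SHOW STREAMS;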

Now we will populate our inputs with some sample data.

First some customer data:

INSERT INTO customers VALUES ('1', 'Adrian Garcia');
INSERT INTO customers VALUES ('2', 'Robert Miller');
INSERT INTO customers VALUES ('3', 'Brian Smith');

And some items available in our store:

INSERT INTO items VALUES ('101', 'Television 60-in');
INSERT INTO items VALUES ('102', 'Laptop 15-in');
INSERT INTO items VALUES ('103', 'Speakers');

Then we insert some orders. Each order contains a unique order id, a customer id, an item id, and a purchase date:

INSERT INTO orders VALUES ('abc123', '1', '101', '2020-05-01');
INSERT INTO orders VALUES ('abc345', '1', '102', '2020-05-01');
INSERT INTO orders VALUES ('abc678', '2', '101', '2020-05-01');
INSERT INTO orders VALUES ('abc987', '3', '101', '2020-05-03');
INSERT INTO orders VALUES ('xyz123', '2', '103', '2020-05-03');
INSERT INTO orders VALUES ('xyz987', '2', '102', '2020-05-05');

Create the multi-way join stream

Now that you have input data, let’s create a stream that produces orders enriched with data from the customers and items tables.

The first thing to do is set the following property to ensure that you’re reading from the beginning of the stream:

SET 'auto.offset.reset' = 'earliest';
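Optionally, you can sanity-check the raw input before joining. With the offset reset in place, this push query should return the six orders inserted earlier:

SELECT * FROM orders EMIT CHANGES LIMIT 6;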

Creating the multi-way joined stream uses familiar SQL join syntax.

You define the fields you want to materialize in the stream with the SELECT keyword, followed by source.field identifiers. The FROM clause identifies the stream that drives the output events, and each JOIN clause identifies a joined table and the field relationship to join on.

Joining N sources is equivalent to performing N-1 two-way joins consecutively, and the order of the joins follows the order in which they are written. The multi-way join is subject to the limitations and restrictions of each intermediate two-way join. See the ksqlDB documentation for the full details on joins.

CREATE STREAM orders_enriched AS
  SELECT customers.customerid AS customerid, customers.customername AS customername,
         orders.orderid, orders.purchasedate,
         items.itemid, items.itemname
  FROM orders
  LEFT JOIN customers ON orders.customerid = customers.customerid
  LEFT JOIN items ON orders.itemid = items.itemid;

This should yield the following output:

 Message
----------------------------------------------
 Created query with ID CSAS_ORDERS_ENRICHED_0
----------------------------------------------

Let’s view the result by selecting the values from our new enriched orders stream:

SELECT * FROM ORDERS_ENRICHED EMIT CHANGES LIMIT 6;

The output should look similar to:

+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
|ITEMS_ITEMID     |CUSTOMERID       |CUSTOMERNAME     |ORDERID          |PURCHASEDATE     |ITEMNAME         |
+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
|101              |1                |Adrian Garcia    |abc123           |2020-05-01       |Television 60-in |
|102              |1                |Adrian Garcia    |abc345           |2020-05-01       |Laptop 15-in     |
|101              |2                |Robert Miller    |abc678           |2020-05-01       |Television 60-in |
|101              |3                |Brian Smith      |abc987           |2020-05-03       |Television 60-in |
|103              |2                |Robert Miller    |xyz123           |2020-05-03       |Speakers         |
|102              |2                |Robert Miller    |xyz987           |2020-05-05       |Laptop 15-in     |
Limit Reached
Query terminated

Finally, let’s see what’s available on the underlying Kafka topic for the new stream. We can print that out easily.

PRINT ORDERS_ENRICHED FROM BEGINNING LIMIT 6;
Key format: JSON or KAFKA_STRING
Value format: JSON or KAFKA_STRING
rowtime: 2020/12/08 21:05:41.271 Z, key: 101, value: {"CUSTOMERID":"1","CUSTOMERNAME":"Adrian Garcia","ORDERID":"abc123","PURCHASEDATE":"2020-05-01","ITEMNAME":"Television 60-in"}, partition: 0
rowtime: 2020/12/08 21:05:41.300 Z, key: 102, value: {"CUSTOMERID":"1","CUSTOMERNAME":"Adrian Garcia","ORDERID":"abc345","PURCHASEDATE":"2020-05-01","ITEMNAME":"Laptop 15-in"}, partition: 0
rowtime: 2020/12/08 21:05:41.329 Z, key: 101, value: {"CUSTOMERID":"2","CUSTOMERNAME":"Robert Miller","ORDERID":"abc678","PURCHASEDATE":"2020-05-01","ITEMNAME":"Television 60-in"}, partition: 0
rowtime: 2020/12/08 21:05:41.357 Z, key: 101, value: {"CUSTOMERID":"3","CUSTOMERNAME":"Brian Smith","ORDERID":"abc987","PURCHASEDATE":"2020-05-03","ITEMNAME":"Television 60-in"}, partition: 0
rowtime: 2020/12/08 21:05:41.386 Z, key: 103, value: {"CUSTOMERID":"2","CUSTOMERNAME":"Robert Miller","ORDERID":"xyz123","PURCHASEDATE":"2020-05-03","ITEMNAME":"Speakers"}, partition: 0
rowtime: 2020/12/08 21:05:41.414 Z, key: 102, value: {"CUSTOMERID":"2","CUSTOMERNAME":"Robert Miller","ORDERID":"xyz987","PURCHASEDATE":"2020-05-05","ITEMNAME":"Laptop 15-in"}, partition: 0
Topic printing ceased

Notice that the key of each message is the item ID of the order. This is because the join with the items table is the last join in our CREATE STREAM statement, and the key of the last join becomes the key of the records in the underlying topic.
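If you want a different key, say the order ID, you can rekey the result with a PARTITION BY clause. Here is a sketch of that variant (the stream name orders_enriched_by_order is hypothetical):

CREATE STREAM orders_enriched_by_order AS
  SELECT customers.customerid AS customerid, customers.customername AS customername,
         orders.orderid, orders.purchasedate,
         items.itemid, items.itemname
  FROM orders
  LEFT JOIN customers ON orders.customerid = customers.customerid
  LEFT JOIN items ON orders.itemid = items.itemid
  PARTITION BY orders.orderid;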

Exit the ksqlDB CLI with the exit command.

Write your statements to a file

Now that you have a series of statements that’s doing the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content:

CREATE TABLE customers (customerid STRING PRIMARY KEY, customername STRING)
    WITH (KAFKA_TOPIC='customers',
          VALUE_FORMAT='json',
          PARTITIONS=1);

CREATE TABLE items (itemid STRING PRIMARY KEY, itemname STRING)
    WITH (KAFKA_TOPIC='items',
          VALUE_FORMAT='json',
          PARTITIONS=1);

CREATE STREAM orders (orderid STRING KEY, customerid STRING, itemid STRING, purchasedate STRING)
    WITH (KAFKA_TOPIC='orders',
          VALUE_FORMAT='json',
          PARTITIONS=1);

CREATE STREAM orders_enriched AS
  SELECT customers.customerid AS customerid, customers.customername AS customername,
         orders.orderid, orders.purchasedate,
         items.itemid, items.itemname
  FROM orders
  LEFT JOIN customers ON orders.customerid = customers.customerid
  LEFT JOIN items ON orders.itemid = items.itemid;

Test it

Create the test data

Create a file at test/input.json with the inputs for testing:

{
  "inputs": [
    {
      "topic": "customers",
      "key": "1",
      "value": {
        "customerid": "1",
        "customername": "Adrian Garcia"
      }
    },
    {
      "topic": "customers",
      "key": "2",
      "value": {
        "customerid": "2",
        "customername": "Robert Miller"
      }
    },
    {
      "topic": "customers",
      "key": "3",
      "value": {
        "customerid": "3",
        "customername": "Brian Smith"
      }
    },
    {
      "topic": "items",
      "key": "1",
      "value": {
        "itemid": "1",
        "itemname": "Television 60-in"
      }
    },
    {
      "topic": "items",
      "key": "2",
      "value": {
        "itemid": "2",
        "itemname": "Laptop 15-in"
      }
    },
    {
      "topic": "items",
      "key": "3",
      "value": {
        "itemid": "3",
        "itemname": "Speakers"
      }
    },
    {
      "topic": "orders",
      "key": "abc123",
      "value": {
        "orderid": "abc123",
        "customerid": "1",
        "itemid": "1",
        "purchasedate": "2020-05-01"
      }
    },
    {
      "topic": "orders",
      "key": "abc345",
      "value": {
        "orderid": "abc345",
        "customerid": "1",
        "itemid": "2",
        "purchasedate": "2020-05-01"
      }
    },
    {
      "topic": "orders",
      "key": "abc678",
      "value": {
        "orderid": "abc678",
        "customerid": "2",
        "itemid": "1",
        "purchasedate": "2020-05-01"
      }
    },
    {
      "topic": "orders",
      "key": "abc987",
      "value": {
        "orderid": "abc987",
        "customerid": "3",
        "itemid": "1",
        "purchasedate": "2020-05-03"
      }
    },
    {
      "topic": "orders",
      "key": "xyz123",
      "value": {
        "orderid": "xyz123",
        "customerid": "2",
        "itemid": "3",
        "purchasedate": "2020-05-03"
      }
    },
    {
      "topic": "orders",
      "key": "xyz987",
      "value": {
        "orderid": "xyz987",
        "customerid": "2",
        "itemid": "2",
        "purchasedate": "2020-05-05"
      }
    }
  ]
}

Similarly, create a file at test/output.json with the expected outputs.

{
  "outputs": [
    {
      "topic": "ORDERS_ENRICHED",
      "key": "1",
      "value": {
        "CUSTOMERID": "1",
        "CUSTOMERNAME": "Adrian Garcia",
        "ORDERID": "abc123",
        "PURCHASEDATE": "2020-05-01",
        "ITEMNAME": "Television 60-in"
      }
    },
    {
      "topic": "ORDERS_ENRICHED",
      "key": "2",
      "value": {
        "CUSTOMERID": "1",
        "CUSTOMERNAME": "Adrian Garcia",
        "ORDERID": "abc345",
        "PURCHASEDATE": "2020-05-01",
        "ITEMNAME": "Laptop 15-in"
      }
    },
    {
      "topic": "ORDERS_ENRICHED",
      "key": "1",
      "value": {
        "CUSTOMERID": "2",
        "CUSTOMERNAME": "Robert Miller",
        "ORDERID": "abc678",
        "PURCHASEDATE": "2020-05-01",
        "ITEMNAME": "Television 60-in"
      }
    },
    {
      "topic": "ORDERS_ENRICHED",
      "key": "1",
      "value": {
        "CUSTOMERID": "3",
        "CUSTOMERNAME": "Brian Smith",
        "ORDERID": "abc987",
        "PURCHASEDATE": "2020-05-03",
        "ITEMNAME": "Television 60-in"
      }
    },
    {
      "topic": "ORDERS_ENRICHED",
      "key": "3",
      "value": {
        "CUSTOMERID": "2",
        "CUSTOMERNAME": "Robert Miller",
        "ORDERID": "xyz123",
        "PURCHASEDATE": "2020-05-03",
        "ITEMNAME": "Speakers"
      }
    },
    {
      "topic": "ORDERS_ENRICHED",
      "key": "2",
      "value": {
        "CUSTOMERID": "2",
        "CUSTOMERNAME": "Robert Miller",
        "ORDERID": "xyz987",
        "PURCHASEDATE": "2020-05-05",
        "ITEMNAME": "Laptop 15-in"
      }
    }
  ]
}

Invoke the tests

Lastly, invoke the tests using the test runner and the statements file that you created earlier:

docker exec ksqldb-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json

The tests should pass:

	 >>> Test passed!

Deploy on Confluent Cloud

Run your app with Confluent Cloud

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

  1. Sign up for Confluent Cloud.

  2. After you log in to Confluent Cloud Console, click Environments in the lefthand navigation, click on Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  3. From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud. To avoid having to enter a credit card, add the additional promo code CONFLUENTDEV1; with this promo code, you will not have to enter a credit card for 30 days or until your credits run out.

  4. Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.


Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations (Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry URL and credentials, and so on), and set the appropriate parameters in your client application.
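The parameter names are standard across clients; only the values are specific to your cluster. For a Java client, the properties typically look like this sketch, with angle-bracket placeholders standing in for your own endpoints and credentials:

bootstrap.servers=<BOOTSTRAP_SERVERS>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<CLUSTER_API_KEY>' password='<CLUSTER_API_SECRET>';
schema.registry.url=<SCHEMA_REGISTRY_URL>
basic.auth.credentials.source=USER_INFO
basic.auth.user.info=<SR_API_KEY>:<SR_API_SECRET>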

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.