How can you calculate the distance between two latitude and longitude points?
Use the geo_distance ksqlDB function:
SELECT iev_customer_name, iev_state,
geo_distance(iev_lat, iev_long, rct_lat, rct_long, 'km') AS dist_to_repairer_km
FROM insurance_event_with_repair_info
EMIT CHANGES
LIMIT 2;
This tutorial installs Confluent Platform using Docker. Before proceeding:
• Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it
• Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop since it includes Docker Compose.
• Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl
• Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line
To get started, make a new directory anywhere you’d like for this project:
mkdir geo-distance && cd geo-distance
Then make the following directories to set up its structure:
mkdir src test
Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:7.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:9092'
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.28.2
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksqldb"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksqldb/log4j.properties"
      KSQL_BOOTSTRAP_SERVERS: "broker:9092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.28.2
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
    environment:
      KSQL_CONFIG_DIR: "/etc/ksqldb"
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test
And launch it by running:
docker compose up -d
To begin developing interactively, open up the ksqlDB CLI:
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
We are going to start out by creating a ksqlDB table and a ksqlDB stream. Our table will hold reference data about repair centers. The stream will contain insurance-related events.
Let’s start with the repair shop table. We want to be able to direct customers to their closest repair center. To accomplish that, we need to load the locations of the repair shops into a ksqlDB table. Create the ksqlDB repair_center_tab table:
CREATE TABLE repair_center_tab (repair_state VARCHAR PRIMARY KEY, long DOUBLE, lat DOUBLE)
WITH (kafka_topic='repair_center', value_format='avro', partitions=1);
Insert repair shop data into the repair_center_tab table:
INSERT INTO repair_center_tab (repair_state, long, lat) VALUES ('NSW', 151.1169, -33.863);
INSERT INTO repair_center_tab (repair_state, long, lat) VALUES ('VIC', 145.1549, -37.9389);
Lastly, imagine we have a stream of insurance claim events for people who have lost their insured mobile phone. We know the customer name, phone model, and the state, longitude (long), and latitude (lat) where the loss of the mobile phone occurred. The following ksqlDB statement will create a new topic phone_event_raw and a stream insurance_event_stream:
CREATE STREAM insurance_event_stream (customer_name VARCHAR, phone_model VARCHAR, event VARCHAR,
state VARCHAR, long DOUBLE, lat DOUBLE)
WITH (kafka_topic='phone_event_raw', value_format='avro', partitions=1);
Now populate the stream with sample events:
INSERT INTO insurance_event_stream (customer_name, phone_model, event, state, long, lat)
VALUES ('Lindsey', 'iPhone 11 Pro', 'dropped', 'NSW', 151.25664, -33.85995);
INSERT INTO insurance_event_stream (customer_name, phone_model, event, state, long, lat)
VALUES ('Debbie', 'Samsung Note 20', 'water', 'NSW', 151.24504, -33.89640);
Before we move forward, we need to set the auto.offset.reset property to ensure that we’re reading from the beginning of the stream:
SET 'auto.offset.reset' = 'earliest';
In order to calculate how far away the repair center is from the insurance event, we will need to create a stream that joins the insurance events with our repair center reference data. For this use case, let’s assume there is only one repair center in each STATE and that the repair center in an event’s STATE is the closest one.
CREATE STREAM insurance_event_with_repair_info AS
SELECT * FROM insurance_event_stream iev
INNER JOIN repair_center_tab rct ON iev.state = rct.repair_state;
Let’s query our newly created stream, insurance_event_with_repair_info, to view the insurance events with their location information using the ksqlDB statement below:
SELECT IEV_CUSTOMER_NAME, IEV_LONG, IEV_LAT, RCT_LONG, RCT_LAT
FROM insurance_event_with_repair_info
EMIT CHANGES
LIMIT 2;
The query will produce something like this:
+--------------------+--------------------+--------------------+--------------------+--------------------+
|IEV_CUSTOMER_NAME |IEV_LONG |IEV_LAT |RCT_LONG |RCT_LAT |
+--------------------+--------------------+--------------------+--------------------+--------------------+
|Lindsey |151.25664 |-33.85995 |151.1169 |-33.863 |
|Debbie |151.24504 |-33.8964 |151.1169 |-33.863 |
Limit Reached
Query terminated
The last thing for us to do is calculate the distance between the repair center lat-long and the insurance event lat-long. We can do that with the geo_distance ksqlDB function.
SELECT iev_customer_name, iev_state,
geo_distance(iev_lat, iev_long, rct_lat, rct_long, 'km') AS dist_to_repairer_km
FROM insurance_event_with_repair_info
EMIT CHANGES
LIMIT 2;
geo_distance calculates the great-circle distance between two lat-long points, both specified in decimal degrees. An optional final parameter specifies km (the default) or miles.
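If you want to sanity-check the result outside ksqlDB, a common way to compute great-circle distance is the haversine formula. The sketch below is a minimal Python version, assuming a mean Earth radius of roughly 6371 km; it is an independent check rather than ksqlDB’s exact implementation, but plugging in Lindsey’s event and the NSW repair center coordinates from the sample data reproduces the ~12.9 km figure to within a few metres.

import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two points given in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lam = math.radians(lon2 - lon1)
    a = math.sin(d_phi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Lindsey's event vs. the NSW repair center from the sample data
print(haversine_km(-33.85995, 151.25664, -33.863, 151.1169))  # ~12.9 km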
The output should resemble:
+--------------------+--------------------+--------------------+
|IEV_CUSTOMER_NAME |IEV_STATE |DIST_TO_REPAIRER_KM |
+--------------------+--------------------+--------------------+
|Lindsey |NSW |12.907325150628191 |
|Debbie |NSW |12.398568134716221 |
Limit Reached
Query terminated
Now that our query reporting the distance to the nearest repair center is working, let’s update it to create a continuous query.
CREATE STREAM insurance_event_dist AS
SELECT iev_customer_name, iev_state,
geo_distance(iev_lat, iev_long, rct_lat, rct_long, 'km') AS dist_to_repairer_km
FROM insurance_event_with_repair_info;
Now that you have a series of statements that’s doing the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content:
CREATE TABLE repair_center_tab (repair_state VARCHAR PRIMARY KEY, long DOUBLE, lat DOUBLE)
WITH (kafka_topic='repair_center', value_format='avro', partitions=1);
CREATE STREAM insurance_event_stream (customer_name VARCHAR, phone_model VARCHAR, event VARCHAR,
state VARCHAR, long DOUBLE, lat DOUBLE)
WITH (kafka_topic='phone_event_raw', value_format='avro', partitions=1);
CREATE STREAM insurance_event_with_repair_info AS
SELECT * FROM insurance_event_stream iev
INNER JOIN repair_center_tab rct ON iev.state = rct.repair_state;
CREATE STREAM insurance_event_dist AS
SELECT iev_customer_name, iev_state,
geo_distance(iev_lat, iev_long, rct_lat, rct_long, 'km') AS dist_to_repairer_km
FROM insurance_event_with_repair_info;
Create a file at test/input.json with the inputs for testing:
{
  "inputs": [
    {
      "topic": "repair_center",
      "key": "NSW",
      "value": {
        "LONG": 151.1169,
        "LAT": -33.863
      }
    },
    {
      "topic": "repair_center",
      "key": "VIC",
      "value": {
        "LONG": 145.1549,
        "LAT": -37.9389
      }
    },
    {
      "topic": "phone_event_raw",
      "value": {
        "CUSTOMER_NAME": "Lindsey",
        "PHONE_MODEL": "iPhone 11 Pro",
        "EVENT": "dropped",
        "STATE": "NSW",
        "LONG": 151.25664,
        "LAT": -33.85995
      }
    },
    {
      "topic": "phone_event_raw",
      "value": {
        "CUSTOMER_NAME": "Debbie",
        "PHONE_MODEL": "Samsung Note 20",
        "EVENT": "water",
        "STATE": "NSW",
        "LONG": 151.24504,
        "LAT": -33.89640
      }
    }
  ]
}
Similarly, create a file at test/output.json with the expected outputs:
{
  "outputs": [
    {
      "topic": "INSURANCE_EVENT_WITH_REPAIR_INFO",
      "key": "NSW",
      "value": {
        "IEV_CUSTOMER_NAME": "Lindsey",
        "IEV_PHONE_MODEL": "iPhone 11 Pro",
        "IEV_EVENT": "dropped",
        "IEV_LONG": 151.25664,
        "IEV_LAT": -33.85995,
        "RCT_REPAIR_STATE": "NSW",
        "RCT_LONG": 151.1169,
        "RCT_LAT": -33.863
      }
    },
    {
      "topic": "INSURANCE_EVENT_WITH_REPAIR_INFO",
      "key": "NSW",
      "value": {
        "IEV_CUSTOMER_NAME": "Debbie",
        "IEV_PHONE_MODEL": "Samsung Note 20",
        "IEV_EVENT": "water",
        "IEV_LONG": 151.24504,
        "IEV_LAT": -33.8964,
        "RCT_REPAIR_STATE": "NSW",
        "RCT_LONG": 151.1169,
        "RCT_LAT": -33.863
      }
    },
    {
      "topic": "INSURANCE_EVENT_DIST",
      "key": "NSW",
      "value": {
        "IEV_CUSTOMER_NAME": "Lindsey",
        "DIST_TO_REPAIRER_KM": 12.907325150628191
      }
    },
    {
      "topic": "INSURANCE_EVENT_DIST",
      "key": "NSW",
      "value": {
        "IEV_CUSTOMER_NAME": "Debbie",
        "DIST_TO_REPAIRER_KM": 12.398568134716221
      }
    }
  ]
}
Lastly, invoke the tests using the test runner and the statements file that you created earlier:
docker exec ksqldb-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json
Which should pass:
>>> Test passed!
Launch your statements into production by sending them to the REST API with the following command:
tr '\n' ' ' < src/statements.sql | \
sed 's/;/;\'$'\n''/g' | \
while read stmt; do
echo '{"ksql":"'$stmt'", "streamsProperties": {}}' | \
curl -s -X "POST" "http://localhost:8088/ksql" \
-H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
-d @- | \
jq
done
Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully-managed Apache Kafka service.
Sign up for Confluent Cloud.
After you log in to Confluent Cloud Console, click on Add cloud environment and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.
From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud (details).
Click on LEARN and follow the instructions to launch a Kafka cluster and to enable Schema Registry.
Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.