As this course draws to a close, we have one final piece to put into our pipeline. We've got the data in, we've processed and enriched it—now let's do something with it: write it out to another cloud service or external system to drive the operational dashboard we imagined at the start of our example.
Just as we used Kafka Connect to get the customer data in from the database earlier, we're going to use Kafka Connect again here to stream the data to the target system. There are connectors for most places you'd want to stream data to nowadays—object stores, cloud data warehouses, NoSQL stores, and so on.
We're going to use Elasticsearch, with the managed connector in Confluent Cloud. You can also run the connector yourself in a self-managed Kafka Connect cluster. To learn more about Kafka Connect itself, check out the Kafka Connect course.
The Elasticsearch connector is fully managed and just requires a few details to configure:
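As an illustrative sketch of what those details look like (the connector name, topic, endpoint, and credentials here are all hypothetical), a self-managed Elasticsearch sink connector configuration would be along these lines—the fully managed connector in Confluent Cloud asks for the same information through its UI:

```json
{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "ratings-enriched",
    "connection.url": "https://elastic.example.com:9200",
    "connection.username": "elastic",
    "connection.password": "<your-password>",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
```

The `key.ignore` and `schema.ignore` settings shown here are common choices when streaming schemaless JSON into Elasticsearch; whether you need them depends on how your data is serialized.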
With the data flowing into the target system, we can now build out the operational dashboard that we had in mind at the beginning of this exercise.
Let's recap what we've built. We started with a stream of events, each one with review rating information submitted by a customer. The data for these customers is held in a relational database table, which we also streamed into Apache Kafka:
We then used stream processing to enrich each rating event as it arrives with information about the customer who left the review. The resulting enriched data was written back into a new stream and then streamed to Elasticsearch, and from there we built a dashboard for the data.
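As a sketch of what that enrichment step can look like in ksqlDB (the stream, table, and column names here are hypothetical—yours will differ), a stream-table join keeps each rating event and attaches the matching customer record as it arrives:

```sql
-- Enrich each incoming rating with details of the customer who left it
CREATE STREAM ratings_enriched AS
  SELECT r.rating_id,
         r.stars,
         c.first_name,
         c.last_name,
         c.email
  FROM ratings r
    LEFT JOIN customers c
      ON r.customer_id = c.customer_id
  EMIT CHANGES;
```

The enriched stream is backed by a Kafka topic, which is what the Elasticsearch sink connector then reads from.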