Confluent Cloud is a fully managed platform for Apache Kafka®, designed to simplify real-time data streaming and processing. It integrates Kafka for data ingestion, Apache Flink® for stream processing, and Tableflow for converting streaming data into analytics-ready Apache Iceberg® tables. DuckDB, a lightweight analytical database, supports querying these Iceberg tables, making it an ideal tool for the workshop’s analytics component. The workshop is designed for developers with basic programming knowledge, potentially new to Kafka, Flink, or Tableflow, and aims to provide hands-on experience within a condensed time frame.
This 2.5-hour hands-on workshop introduces developers to building real-time data pipelines on Confluent Cloud. You’ll learn to stream data with Apache Kafka, process it in real time with Apache Flink, and convert it into Apache Iceberg tables using Tableflow. The workshop assumes basic familiarity with programming and provides step-by-step guidance.
To get the most out of the hands-on portions of this workshop, please ensure the following are installed on your system:
| Segment | Duration | Features Covered | Objective |
|---|---|---|---|
| Introduction | 15 min | Kafka, Flink, Tableflow overview | Understand event-driven architecture |
| Setting Up Confluent Cloud | 15 min | Kafka cluster creation | Set up a managed Kafka cluster |
| Kafka Hands-On | 30 min | Kafka topics, producers, consumers | Stream data with Kafka |
| Flink Hands-On | 45 min | Flink stream processing | Process data in real time |
| Tableflow Hands-On | 30 min | Tableflow, Iceberg, DuckDB | Materialize and query analytics-ready data |
| Wrap-Up and Q&A | 15 min | All features | Summarize and address questions |