Apache Flink® 101

About This Course

This course is an introduction to Apache Flink, focusing on its core concepts and architecture. Learn what makes Flink tick, and how it handles some common use cases.

Today's consumers have come to expect timely and accurate information from the companies they do business with. Whether it's being alerted that someone just used your credit card to rent a car in Prague, or checking on the balance of your mobile data plan, it's not good enough to learn about yesterday's information today. We all expect the companies managing our data to provide fully up-to-the-moment reporting.

Apache Flink is a battle-hardened stream processor widely used for demanding applications like these. Its performance and robustness are the result of a handful of core design principles, including a shared-nothing architecture with local state, event-time processing, and state snapshots (for recovery). Through a combination of videos and hands-on exercises, this course brings these core principles to life.

Common use cases include data analytics, fraud detection, billing, business process monitoring, and rule-based alerting.

Flink is a powerful system with many components, but once you understand the fundamentals presented in this course, and how they fit together – streams, state, time, and snapshots – learning the details will become much easier.

The hands-on exercises in this course use Flink SQL to illustrate and clarify how Flink works. The focus is on learning about Flink, using the SQL you already know.
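To give a flavor of what that looks like, here is a sketch of a windowed aggregation in Flink SQL, the kind of query the exercises build up to. The table and column names (`Orders`, `order_time`) are hypothetical, not taken from the course materials:

```sql
-- Count orders per one-minute tumbling window, keyed by event time.
-- Assumes a table named Orders with an event-time column order_time
-- (both names are illustrative only).
SELECT window_start, window_end, COUNT(*) AS order_count
FROM TABLE(
  TUMBLE(TABLE Orders, DESCRIPTOR(order_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

Aside from the `TUMBLE` windowing function, this is ordinary SQL — which is exactly why the course uses Flink SQL to explain streams, state, time, and watermarks.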

What You’ll Learn in This Course

  • What Apache Flink is, and why you might use it
  • What stream processing is, and how it differs from batch processing
  • Flink’s runtime architecture
  • How to use Flink and Kafka together
  • How to use Flink SQL: tables, windows, event time, watermarks, and more
  • Stateful stream processing
  • How watermarks support event time operations
  • How Flink uses snapshots (checkpoints) for fault tolerance

Intended Audience

Anyone who knows the basics of Kafka and SQL and wants to understand what Flink is and how it works.


  • Required knowledge:
    • This course assumes some basic familiarity with Kafka and SQL. If you understand what producers and consumers are, and can explain what GROUP BY does, that’s good enough.
  • Required setup:
    • A local Docker installation.


To learn more about Kafka, see Kafka 101.

Building Flink Applications in Java is a companion course to this one, and a great way to learn more about the practical side of Flink application development.


Duration

  • Approximately 2-3 hours


David Anderson (Course Author)

David has been working as a data engineer since long before that job title was invented. He has worked on recommender systems, search engines, machine learning pipelines, and BI tools, and has been helping companies adopt stream processing and Apache Flink since 2016. David is an Apache Flink committer, and works at Confluent as a Software Practice Lead.


Use the promo code FLINK101 to get $25 of free Confluent Cloud usage
