

December 5, 2024

Current Bengaluru 2025: Call for Papers

Newsletter from the Desk of Confluent Developer,

🏆 Build with Confluent Cloud & Win!

Develop a data streaming or Gen AI app with Confluent Cloud and stand a chance to win a gaming laptop (up to 5 winners)! Submit your project by Dec 31, 2024.

🎮 Start Building Today!

Current Bengaluru 2025: Call for Papers

Kafka Summit Bengaluru 2024 was massive, and in 2025 something even bigger is coming: Current Bengaluru. It’s the biggest data streaming event, with a full day of keynotes, breakout sessions, and an amazing expo hall. It’s the can’t-miss event for the best minds in the data streaming world!


The Call for Papers for Current Bengaluru 2025 is now open! This is your chance to take the stage at the premier data streaming industry event, happening on March 19 in Bengaluru, India. Do you have a compelling technical story, an innovative application, or a visionary idea in data streaming? Now’s the time to share it with the world. Submit your talk by December 19.

Know Your Developer [KYD] - Timo Walther

In this edition’s KYD section, we chat with Timo Walther, Principal Software Developer I at Confluent. Timo is known around the world as a name synonymous with the Apache Flink® project.


1. Hi Timo! Welcome to the “Know Your Developer” section of the Confluent Newsletter. Would you like to introduce yourself?

I grew up in a small village in southern Germany. During my teenage years, social media was emerging, and I wanted to create my own social network. This is how I taught myself programming and databases. My passion for software engineering eventually led me to the Technical University of Berlin, where I joined the database research group — the birthplace of Apache Flink. This project developed into one of the most popular open-source initiatives within the Apache Software Foundation. I began contributing to the project as a part-time student, worked for five different employers, and experienced two exits along the way. Yet, the project remains as exciting as ever.

2. Tell us a little bit about your background and your journey with Confluent, so far.

I was a co-founder at Immerok, which Confluent acquired in 2023. Our goal was, and still is, to provide the best cloud-native experience for Flink. My team and I successfully launched it as generally available on all major clouds earlier this year. Stream processing should be as easy as using a database. I worked on the integration of Flink SQL with Confluent Cloud's Apache Kafka® and Schema Registry products. Today, I'm shaping the future by evolving Flink SQL and its ecosystem, both in open source and at Confluent. I'm confident that FLIP-440 will be a game changer: it proposes a new kind of user-defined function (UDF) that enables implementing user-defined SQL operators, the ProcessTableFunction (PTF).

3. Processing streaming data has always been a hard problem to solve. What do you think are the elements that make stream processing a tough use case to solve?

Almost every stream processing application involves working with time and state in one way or another. Requirements such as 'There should be a timeout after 5 minutes' or 'The second event might be delayed, but I still want to show intermediate results' lead to trade-offs between waiting for data and making progress. Intermediate results must be stored for incremental computation. Event time determines when it is safe to flush or discard buffered events. Processing streaming data also often involves working with Change Data Capture (CDC) logs.
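The interplay of event time, state, and watermarks that Timo describes can be sketched in plain Python (this is an illustrative toy, not Flink's API; the class and parameter names are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Event:
    key: str
    value: int
    event_time: int  # seconds since epoch

class WindowedSum:
    """Toy per-key sum over 60-second event-time windows.

    Intermediate sums are buffered as state; a window is only emitted
    once the watermark (max event time seen, minus allowed lateness)
    passes the window's end. This is the waiting-vs-progress trade-off:
    more lateness tolerates delayed events but delays results.
    """

    def __init__(self, window_size=60, allowed_lateness=5):
        self.window_size = window_size
        self.allowed_lateness = allowed_lateness
        self.state = {}       # (key, window_start) -> running sum
        self.watermark = 0

    def on_event(self, e: Event):
        window_start = e.event_time - (e.event_time % self.window_size)
        k = (e.key, window_start)
        self.state[k] = self.state.get(k, 0) + e.value
        self.watermark = max(self.watermark,
                             e.event_time - self.allowed_lateness)
        return self._flush()

    def _flush(self):
        # Emit and discard every window the watermark has passed --
        # event time tells us it is now safe to drop that buffered state.
        closed = [k for k in self.state
                  if k[1] + self.window_size <= self.watermark]
        return [(key, start, self.state.pop((key, start)))
                for key, start in closed]
```

For example, two events at times 10 and 30 stay buffered until an event at time 70 advances the watermark past the first window's end, at which point the window [0, 60) is emitted and its state discarded.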

4. You have worked on Flink for a long time! How do you think Flink is placed to succeed in modern data stacks?

Flink is not just a tool for stream processing; it is a toolbox. Its flexibility in getting the job done is why it is used in some of the largest real-time platforms on the planet. Flink can be placed at various points in the modern data stack. It can serve as a hub between systems for deduplication, stream enrichment, denormalized views, and pre-aggregation. By placing state and computation close to each other, it can replace a chain of microservices with a stream processing pipeline.
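One of the "hub" roles mentioned above, deduplication, can be sketched as a keyed stream transformation. This is a hypothetical plain-Python illustration, not Flink code; in a real Flink job the `seen` set would be managed, fault-tolerant keyed state:

```python
def deduplicate(records, key_fn):
    """Yield each record whose key has not been seen before."""
    seen = set()  # stands in for Flink's managed keyed state
    for record in records:
        k = key_fn(record)
        if k not in seen:
            seen.add(k)
            yield record

# Example: drop duplicate click deliveries between a source and a sink.
clicks = [
    {"user": "u1", "page": "/home"},
    {"user": "u2", "page": "/docs"},
    {"user": "u1", "page": "/home"},   # duplicate delivery
]
unique = list(deduplicate(clicks, key_fn=lambda r: (r["user"], r["page"])))
```

Placing this logic inside the pipeline, next to the state it needs, is what lets a single stream processing job replace a chain of microservices.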

5. How do you see adoption of stream processing use cases by industries?

In the early days, stream processing was mostly adopted by young startups that could design their infrastructure from scratch. Today, event-driven applications are key to succeeding in what I usually call 'the instant world.' Reacting to viral trends and major outages is crucial, while dynamic pricing generates profit. On the other hand, consumers expect to be fed information at every phase of a product's lifecycle. Stream processing and mature infrastructure (e.g., large OLTP databases) now coexist.

6. Can you tell us a little bit about the future of Flink and stream data processing in general?

Flink already supports both stream and batch processing, but internally, it uses two completely different stacks to power those use cases. Similarly, the storage systems are divided along these two categories. The de facto industry standards are Apache Kafka® for streaming and Apache Iceberg for a table format supporting fast reads. In the future, the lines between storage and processing should become more blurred. A batch query might also be able to adjust data computed by a streaming query. Flink's efforts around materialized tables are a good start.

Data Streaming Resources:

Links From Around the Web:

  • Want to build a truly scalable data lake? Iceberg is the key. Read Adi Polak’s blog on why Apache Iceberg is a game-changer for cloud-native data lakes—and how it fits alongside Kafka and Flink to shape the future of data engineering.
  • Take a sneak peek into the future of Apache Iceberg and the top priorities for the Iceberg community in 2025 from Yingjin Wu’s blog.

Catalyst Insight:

In our brand-new ‘Catalyst Insight’ section, we ask catalysts from the data streaming community to share their experiences.

In this edition, we asked Dave Klein, Senior Developer Advocate at Imply (AZ, USA), to share his insights.

Dave is a developer, mentor, author, presenter, community organizer, father of 14, and all-around fun guy. He has been working in software development since the last century, focusing on streaming data for the past five years.


How would you describe your role in the data world? Not necessarily as in your title, but what unique perspective and experiences do you bring?

“Helping people to take advantage of the best tools in the data space and have fun doing it!”

What advice would you offer a burgeoning data streaming engineer?

“Get involved in the community, either online (Slack, X, LinkedIn) or in person at meetups and conferences. There are so many amazing people out there willing to help.”

Want to learn more about our Confluent Community Catalyst Program? Visit the page here to get all the details!

Upcoming Events:

In-Person:

Virtual:

Virtual Workshops:

Stay up to date with all Confluent-run meetup events by copying the following link into your personal calendar platform:

https://airtable.com/app8KVpxxlmhTbfcL/shrNiipDJkCa2GBW7/iCal?timeZone=America%2FChicago&userLocale=en

(Instructions for GCal, iCal, Outlook, etc.)

By the way…

We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io!

If you’d like to view previous editions of the newsletter, visit our archive.

If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form on the left-hand side.

P.S. If you want to learn more about Kafka, Flink, or Confluent Cloud, visit our developer site at Confluent Developer.

Subscribe Now

We will only share developer content and updates, including notifications when new content is added. We will never send you sales emails. 🙂 By subscribing, you understand we will process your personal information in accordance with our Privacy Statement.
