Did you know that UDFs are now generally available with Confluent Cloud for Apache Flink® on AWS and Azure? A user-defined function (UDF) extends the capabilities of Confluent Cloud for Apache Flink® and lets you implement logic beyond what SQL supports, such as custom deserialization (e.g., CSV or XML). For example, you can implement functions that encode and decode strings, perform geospatial calculations, encrypt and decrypt fields, or reuse an existing library or code from a third-party supplier.
Confluent Cloud for Apache Flink® supports UDFs written in Java. Package your custom function and its dependencies into a JAR file and upload it as an artifact to Confluent Cloud. Register the function in an Apache Flink® database by using the CREATE FUNCTION statement and invoke your UDF in Flink SQL or the Table API. Confluent Cloud provides the infrastructure to run your code.
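As a sketch of what this looks like in practice, here is a minimal Java scalar UDF. The class name and the CSV-field-extraction behavior are illustrative assumptions, not taken from Confluent's examples; only the `ScalarFunction` base class and the reflective `eval` convention come from the Flink Table API.

```java
// Hypothetical example: a scalar UDF that extracts one field from a CSV line.
import org.apache.flink.table.functions.ScalarFunction;

public class CsvFieldFunction extends ScalarFunction {

    // Flink discovers eval(...) by reflection; its signature defines the
    // SQL argument and return types (STRING, INT) -> STRING.
    public String eval(String csvLine, Integer index) {
        if (csvLine == null || index == null || index < 0) {
            return null; // propagate NULL for missing input
        }
        // limit = -1 keeps trailing empty fields, e.g. "a,b," -> ["a","b",""]
        String[] fields = csvLine.split(",", -1);
        return index < fields.length ? fields[index] : null;
    }
}
```

After packaging a class like this (with its dependencies) into a JAR and uploading it as an artifact, you would register it with a statement along the lines of `CREATE FUNCTION csv_field AS 'CsvFieldFunction';` and then call it as `csv_field(payload, 1)` in Flink SQL; see the Confluent Cloud documentation for the exact syntax for referencing an uploaded artifact.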
Get started with Java UDFs here.
View the GitHub repository of examples.
Current 2025 London
The countdown to Current 2025 London is on! Last year’s Kafka Summit has been rebranded to Current—the same great event, but bigger and better. It’s the event that every data streaming enthusiast looks forward to each year, with 100+ sessions and lightning talks on data streaming. If you haven’t signed up yet, there’s still time!
Get the details ⤵️
📅 May 20–21, 2025
📍 ExCeL London, UK
Use code L-PRM-DEVREL for 40% off the standard ticket price.
🔗 Register now ➡️ Begin Registration
In this edition’s KYD section, we chat with Bill Bejeck, Staff Software Engineer at Confluent.
Bill is an Apache Kafka® committer, a Project Management Committee (PMC) member, and a Kafka Streams contributor. He is also the author of the book “Kafka Streams in Action, Second Edition,” published by Manning Publications.
1. Hi, Bill! Welcome to the KYD section of the Confluent DevX Newsletter. Would you like to introduce yourself?
Hi! My name is Bill Bejeck. I’m a software engineer on the Kafka Streams team, an Apache Kafka committer, and a PMC member.
2. Tell our developers a bit about your background and your journey with Confluent so far.
I’ve been with Confluent for almost eight years now. I spent the first three on the Kafka Streams team as an engineer. I wanted to give developer relations a shot, so I moved there for a bit. But I missed full-time engineering, so I moved back last year.
3. Kafka Streams is still going strong and is very popular with data streaming engineers. Share with our developers your experience with Kafka Streams and how you picked it up.
My experience with Kafka Streams has centered on my work as an engineer on the Kafka Streams team. I also contributed regularly to the Kafka Streams project before joining Confluent. And I’ve written a book on it, Kafka Streams in Action, Second Edition.
4. Processing streaming data is a complex tech domain. What would be your advice for new developers entering this field? How should they prepare?
My advice would be to get a good understanding of what streaming is and get comfortable with the basics first. From there, they could expand their knowledge by comparing different technologies. For preparation, I think nothing beats hands-on experience. Build some simple (or not so simple) applications to see how things work firsthand.
5. With the data and AI space brimming with options, how do you see the stream processing space evolving?
I see a significant increase in the demand for applying/integrating real-time events with AI and ML training. This integration will work closely with stream processing to help businesses and organizations make decisions more quickly, based on their data.
More information, including an upgrade guide, is available in the Confluent Cloud consumer documentation.
Stay up to date with all Confluent-run meetup events by copying the following link into your personal calendar platform:
https://airtable.com/app8KVpxxlmhTbfcL/shrNiipDJkCa2GBW7/iCal
(Instructions for GCal, iCal, Outlook, etc.)
We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io!
If you’d like to view previous editions of the newsletter, visit our archive.
If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form on the left-hand side.
P.S. If you want to learn more about Kafka, Flink, or Confluent Cloud, visit our developer site at Confluent Developer.
We will only share developer content and updates, including notifications when new content is added. We will never send you sales emails. 🙂 By subscribing, you understand we will process your personal information in accordance with our Privacy Statement.