For a data streaming engineer, this is the best of times! With AI agents and agentic workflows reinventing data engineering tasks, the worth of quality data has skyrocketed. Data streaming is the most direct way to enforce stream governance and quality checks on data closer to the source. With Apache Kafka® and Apache Flink® leading the way with mature frameworks for implementing "Shift-Left" patterns on streaming data, AI systems downstream are assured of getting data that is trustworthy, fresh, and relevant.
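To make the "Shift-Left" idea concrete, here is a minimal sketch of a producer-side quality gate in plain Python. The event fields, thresholds, and function name are illustrative assumptions, not part of any Confluent API; in a real pipeline this kind of check would typically be backed by Schema Registry validation or a Flink job, but the principle is the same: reject bad or stale records before they enter the stream, rather than cleaning them up downstream.

```python
import json
import time

# Hypothetical schema for an "order" event; field names and the
# freshness threshold are illustrative assumptions.
REQUIRED_FIELDS = {"order_id": str, "amount": float, "ts": float}
MAX_AGE_SECONDS = 300  # reject events older than 5 minutes (freshness check)

def validate_event(raw, now=None):
    """Shift-left gate: validate a raw record before it is produced,
    so downstream consumers only ever see clean, fresh data."""
    now = time.time() if now is None else now
    event = json.loads(raw)  # raises ValueError on malformed JSON
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in event:
            raise ValueError("missing field: " + field)
        if not isinstance(event[field], ftype):
            raise ValueError("bad type for field: " + field)
    if now - event["ts"] > MAX_AGE_SECONDS:
        raise ValueError("stale event")
    return event
```

In practice this gate would sit in the producer path (or as a Flink filter near the source), so schema and freshness violations are caught at the point of origin instead of surfacing in downstream AI workloads.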
Let's start 2026 with technology news and updates around data streaming and AI, and stay tuned throughout the year for more exciting updates from the community newsletter team!
Confluent announced the official launch of Confluent Marketplace (formerly Confluent Hub), a centralized resource designed to accelerate innovation, drive connectivity, and dramatically simplify the developer experience within the data streaming landscape.
For years, integration engineers have been the quiet force behind the modern digital world. They connect systems that never communicated before, design architectures that keep businesses in motion, and make real-time intelligence possible—all while navigating an environment that evolves faster every quarter.
Until now, the work of these builders—the connectors, the sample code, the documentation—lived in silos: GitHub repos, internal projects, or proofs of concept shared at meetups. That's what Confluent Marketplace is here to change.
Confluent Hub is now Confluent Marketplace, a centralized destination for Confluent partners and community developers to share and soon monetize their contributions to the Confluent Cloud ecosystem. Learn how this launch will help organizations find the curated, validated solutions they need to succeed with the data streaming platform.
Read the official blog for more insights around Confluent Marketplace!
The Kafka Plugin for IntelliJ from JetBrains is now officially released on the IntelliJ Marketplace under Confluent.
What you can do today:
What’s coming next:
Check out the Kafka Plugin on the IntelliJ Marketplace today: Kafka - IntelliJ IDEs Plugin | Marketplace
Over the past year, the batch and streaming engineering teams at Uber re-architected ingestion for some of Uber's largest datasets, moving from Spark-based batch pipelines to an Apache Flink®–powered streaming platform called IngestionNext. Their goal was to turn hours-to-days of lag into minutes-level freshness at petabyte scale, while also driving meaningful compute savings. Read the insightful blog from the Uber engineering team, which talks about:
We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io.
If you’d like to view previous editions of the newsletter, visit our archive.
If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form below.
P.S. If you want to learn more about Kafka, Flink, or Confluent Cloud, visit our developer site at Confluent Developer.
We will only share developer content and updates, including notifications when new content is added. We will never send you sales emails. 🙂 By subscribing, you understand we will process your personal information in accordance with our Privacy Statement.