Event-driven AI agents

August 28, 2025

Watch Adi Polak, Director of Advocacy and Developer Experience Engineering at Confluent, explain what event-driven AI agents are in this YouTube video.

While they may sound futuristic, especially when paired with LLMs or autonomous decision-making, AI agents can and should follow the same architectural principles that have long governed scalable software systems.

Each agent is a self-contained unit that performs a specific task (planning, reasoning, data retrieval, or execution) and communicates with other agents or systems. This is precisely how microservices work: modular, decoupled components that collaborate.

Framing AI agents as microservices not only demystifies their architecture but also unlocks proven patterns for production readiness, including scalability, observability, and fault isolation.

In short, if you want your agents to evolve beyond prototypes and thrive in the real world, you are better off treating them like microservices. This framing also helps assign ownership of each component to the right team.

Streaming Agents in Confluent Cloud for Apache Flink® launched!

Today, turning agents from demos into production systems can be challenging – disjointed systems need to be stitched together, ready-to-use data is missing, frameworks are often not production-ready, and there's a brittle separation between data processing and AI.

This is where Streaming Agents come in. You can now build, deploy, and orchestrate event-driven agents natively on Apache Flink. Embedded in data streams, Streaming Agents can access fresh context and continuously monitor and act on what's happening in the moment.

Using familiar Flink APIs, you can unify data processing and agentic AI workflows, with built-in support for:

  • Model Inference: Work directly with AI models in Flink queries (see the sketch after this list).
  • Real-time Embeddings: Use any embedding model (e.g., OpenAI, Amazon, Google Gemini) with any vector database (e.g., MongoDB Atlas, Pinecone, Elastic, Couchbase) to turn data into vector embeddings for RAG.
  • Built-In ML Functions: Simplify complex data science tasks by using out-of-the-box Forecasting and Anomaly Detection Flink SQL functions on streaming data.
  • Tool Calling with MCP: Enable contextual tool invocation, with tools defined in an MCP server or as UDFs.
  • Connections: Securely integrate with external systems, safeguarding and centrally managing credentials while keeping connections reusable.
  • External Tables & Search: Enrich real-time streams for AI decision-making by joining them with data from non-Apache Kafka® sources (e.g., RDBMS, vector databases, REST APIs), using Flink SQL for both vector search for RAG and fast external table lookups without complex data synchronization.
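
To make the Model Inference piece concrete, here is a minimal Flink SQL sketch of registering a model and invoking it on a stream with ML_PREDICT. The table, model, and connection names are placeholders, and the exact WITH options depend on your provider, so treat this as an illustration of the shape rather than copy-paste-ready code.

```sql
-- Register a model as a Flink resource (names and options are illustrative).
CREATE MODEL support_reply_model
INPUT (ticket_text STRING)
OUTPUT (suggested_reply STRING)
WITH (
  'provider' = 'openai',
  'task' = 'text_generation',
  'openai.connection' = 'my-openai-connection'  -- a Connection that holds the credentials
);

-- Invoke the model on each event in a stream of support tickets.
INSERT INTO enriched_tickets
SELECT ticket_id, ticket_text, suggested_reply
FROM support_tickets,
     LATERAL TABLE(ML_PREDICT('support_reply_model', ticket_text));
```

The same query-level approach extends to the built-in ML functions and vector search mentioned above, which is what lets you keep data processing and agentic logic in one place.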

Visit the Quickstart to try it yourself →

Watch the demo →

Docs can be found here.

Registration is LIVE for Current New Orleans 2025!

Registration is open for the biggest event in data streaming: #Current25 New Orleans!

If you're building real-time applications or modernizing your data architecture, this is the event to be at.

✅ 60+ technical sessions

✅ Insights from Kafka, Flink & Iceberg experts

✅ Networking with the global data streaming community

Join us October 29–30 to explore the future of data streaming. Register by August 15 to take $500 off the standard ticket price → Current New Orleans

Data Streaming Resources

  • Workspaces is a new WarpStream feature that lets you group clusters within an account and manage access controls per Workspace instead of relying on account-wide RBAC. Workspaces can also help with chargebacks and cost management because invoices are grouped by Workspace. Read more about Workspaces in the docs.
  • Curious about the VARIANT data type, binary deletion vectors, and row-level lineage? Read this blog by Alex Merced on the new features of Apache Iceberg format version 3. While V1 and V2 focused on stability and row-level operations, V3 expands the format to accommodate more complex use cases and data types, with an eye toward flexibility, performance, and expressiveness in data modeling (a quick sketch of the VARIANT type follows this list).
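
For a flavor of what VARIANT looks like in practice, here is a hypothetical Spark SQL sketch of a format-version 3 Iceberg table with a VARIANT column. The table and column names are invented, and engine support for V3 features is still maturing, so check your engine's documentation before relying on this pattern.

```sql
-- Create an Iceberg table on format version 3 with a VARIANT column
-- (illustrative; requires an engine and catalog that support Iceberg V3 and VARIANT).
CREATE TABLE demo.events (
  event_id BIGINT,
  payload  VARIANT
) USING iceberg
TBLPROPERTIES ('format-version' = '3');

-- Land semi-structured JSON without committing to a rigid schema up front.
INSERT INTO demo.events
SELECT 1, parse_json('{"type": "click", "props": {"page": "/pricing"}}');

-- Pull typed fields back out of the variant value by path.
SELECT event_id,
       variant_get(payload, '$.type', 'string')       AS event_type,
       variant_get(payload, '$.props.page', 'string') AS page
FROM demo.events;
```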

Links From Around the Web:

  • Read this blog to learn how AWS helps customers deliver production-ready AI agents at scale with Amazon Bedrock AgentCore. AgentCore provides a secure, serverless runtime with complete session isolation and the longest-running workload support available today; tools and capabilities that help agents execute workflows with the right permissions and context; and controls for operating trustworthy agents. Its capabilities can be used together or independently, work with popular open-source frameworks such as CrewAI, LangGraph, and LlamaIndex, and support any model, including those in (or outside of) Amazon Bedrock, so developers can stay agile as technology shifts.
  • Do you want to learn just enough SQL to be dangerous with AI and master the skills required to analyze your data? Read this fantastic blog by Jacob Matson, Dev Advocate at MotherDuck, and Alex Monahan, which explains "just enough SQL" using DuckDB. It ends with an important postscript: P.S. Always verify AI-generated SQL before trusting the results!

In-Person Meetups:

Online Meetups:

Stay up to date with all Confluent-run meetup events by copying the following link into your personal calendar platform:

https://airtable.com/app8KVpxxlmhTbfcL/shrNiipDJkCa2GBW7/iCal

(Instructions for GCal, iCal, Outlook, etc.)

By the way…

We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io!

If you’d like to view previous editions of the newsletter, visit our archive.

If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form on the left-hand side.

P.S. If you want to learn more about Kafka, Flink, or Confluent Cloud, visit our developer site at Confluent Developer.
