March 31, 2021 | Episode 150

Building Real-Time Data Pipelines with Microsoft Azure, Databricks, and Confluent

Processing data in real time is a journey, as some might say. Angela Chu (Solution Architect, Databricks) and Caio Moreno (Senior Cloud Solution Architect, Microsoft) explain how to integrate Azure, Databricks, and Confluent to build real-time data pipelines that let you ingest data, perform analytics, and extract insights as the data arrives. They also share where to start within the Apache Kafka® ecosystem and how to get the most out of its tools and components using fully managed services like Confluent Cloud for data in motion.

Continue Listening

Episode 151 | April 7, 2021 | 24 min

Resurrecting In-Sync Replicas with Automatic Observer Promotion ft. Anna McDonald

As most developers and architects know, data needs to stay accessible no matter what happens outside the system. This week, Tim Berglund virtually sits down with Anna McDonald (Principal Customer Success Technical Architect, Confluent) to discuss how Automatic Observer Promotion (AOP) can help solve the 2.5 datacenter dilemma in Apache Kafka deployments, a feature now available in Confluent Platform 6.1 and above.

Episode 152 | April 12, 2021 | 24 min

Automated Cluster Operations in the Cloud ft. Rashmi Prabhu

Running operations in the cloud at a scaling organization can be time consuming, error prone, and tedious. This episode addresses manual upgrades and rolling restarts of Confluent Cloud clusters during releases, fixes, experiments, and the like, and more importantly, the progress that's been made to move from manual operations to an almost fully automated process. Rashmi Prabhu, a software engineer on the Control Plane team at Confluent, helps govern the data plane that comprises all these clusters and enables API-driven operations on them.

Episode 153 | April 14, 2021 | 31 min

Connecting Azure Cosmos DB with Apache Kafka - Better Together ft. Ryan CrawCour

When building solutions for customers in Microsoft Azure, it is not uncommon to come across customers who are deeply entrenched in the Kafka ecosystem and want to continue expanding within it. Thus, figuring out how to connect Azure first-party services to this ecosystem is of the utmost importance. Ryan CrawCour (Engineer, Microsoft) explains how you can use a connector to feed events from your Kafka infrastructure into Azure Cosmos DB, as well as how to get changes from your database back into your Kafka topics.
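As a rough illustration of the kind of integration discussed in this episode, a Kafka Connect sink that writes events from a topic into Azure Cosmos DB might be configured along these lines. This is a sketch, not the episode's exact setup: the database name `ordersdb`, the topic `orders`, and the placeholder endpoint and key are all illustrative, and the exact property names should be verified against the Cosmos DB connector's own documentation.

```json
{
  "name": "cosmosdb-sink-sketch",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "connect.cosmos.connection.endpoint": "https://<your-account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<your-key>",
    "connect.cosmos.databasename": "ordersdb",
    "connect.cosmos.containers.topicmap": "orders#orders"
  }
}
```

A matching source connector can run in the other direction, reading the container's change feed and producing the changes back onto a Kafka topic, which is the round trip the episode describes.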

Got questions?

If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we'll aim to answer it on the next episode of Ask Confluent.

Email Us
