December 7, 2021 | Episode 189

Using Apache Kafka as a Cloud-Native Data System ft. Gwen Shapira

What does cloud native mean, and what are some design considerations when implementing cloud-native data services? Gwen Shapira (Apache Kafka® Committer and Principal Engineer II, Confluent) addresses these questions in today’s episode. She shares her learnings by discussing a series of technical papers published by her team, which explains what they’ve done to expand Kafka’s cloud-native capabilities on Confluent Cloud. 

Gwen leads the Cloud-Native Kafka team, which focuses on developing new features to evolve Kafka to its next stage as a fully managed cloud data platform. Turning Kafka into a self-service platform is not entirely straightforward; however, Kafka's early investment in elasticity, scalability, and multi-tenancy to run at company-wide scale served as the North Star for taking Kafka to its next stage: a fully managed cloud service where users simply send their workloads and everything else just works. By examining modern cloud-native data services such as Aurora, Amazon S3, Snowflake, Amazon DynamoDB, and BigQuery, Gwen and her team identified seven capabilities you can expect to see in modern cloud data systems: 

  1. Elasticity: Adapt to workload changes, scaling up and down with a click or an API call; cloud-native Kafka removes the need to install REST Proxy just to use Kafka's APIs (API-driven scaling is sketched in the code example after this list)
  2. Infinite scale: Kafka can scale elastically, with capacity planning handled behind the scenes
  3. Resiliency: Ensures high availability, minimizing downtime and enabling disaster recovery
  4. Multi-tenancy: Cloud-native infrastructure needs isolation of data, namespaces, and performance, which Kafka is designed to support
  5. Pay per use: Pay for resources based on usage
  6. Cost-effectiveness: A cloud deployment costs notably less than a self-managed service and also shortens adoption time 
  7. Global: Connect to Kafka from around the globe and consume data locally
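
To make the elasticity point concrete, here is a minimal sketch, not code from the episode or the papers, that uses Kafka's standard Java AdminClient to grow a topic's partition count programmatically: the kind of operation a self-service platform can expose behind a click or an API call. The bootstrap address and the "orders" topic name are placeholders, and a managed cluster would additionally require security settings (such as an API key supplied over SASL/SSL).

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class ScaleTopicExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection config; a managed cluster would also need
        // SASL/SSL credentials configured here.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow the (hypothetical) "orders" topic to 12 partitions to absorb
            // a workload spike. Partition counts can only increase, so scaling
            // back down is a separate problem the platform has to solve.
            admin.createPartitions(Map.of("orders", NewPartitions.increaseTo(12)))
                 .all()
                 .get(30, TimeUnit.SECONDS);
        }
    }
}
```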

Building around these key requirements, a fully managed Kafka service provides an enhanced user experience that is scalable and flexible, with reduced infrastructure management costs. Based on their experience building cloud-native Kafka, Gwen and her team published a four-part thesis that shares insights into what users expect from modern cloud data services, as well as technical implementation considerations to help you develop your own cloud-native data system. 

Continue Listening

Episode 190 | December 14, 2021 | 28 min

Lessons Learned From Designing Serverless Apache Kafka ft. Prachetaa Raghavan

You might call building and operating Apache Kafka as a cloud-native data service synonymous with delivering a serverless experience. Prachetaa Raghavan (Staff Software Developer I, Confluent) spends his days focused on this very thing. In this podcast, he shares his learnings from implementing a serverless architecture on Confluent Cloud using the Kubernetes operator.

Episode 191 | December 21, 2021 | 31 min

Running Hundreds of Stream Processing Applications with Apache Kafka at Wise

What’s it like building a stream processing platform with around 300 stateful stream processing applications based on Kafka Streams? Levani Kokhreidze (Principal Engineer, Wise) shares his experience building such a platform that the business depends on for multi-currency movements across the globe. He explains how his team uses Kafka Streams for real-time money transfers at Wise, a fintech organization that facilitates international currency transfers for 11 million customers.

Episode 192 | December 28, 2021 | 34 min

Modernizing Banking Architectures with Apache Kafka ft. Fotios Filacouris

Financial services firms have been early Apache Kafka adopters. With strong delivery guarantees and scalability, Kafka is a streaming platform that fills architectural gaps for banks. Drawing on his experience designing architectural solutions for financial services, Fotios Filacouris (Senior Solutions Engineer, Enterprise Solutions Engineering, Confluent) joins Tim to discuss how Kafka and Confluent help banks build modern architectures, highlighting key emerging use cases from the sector.

Got questions?

If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we hope to answer it on the next episode of Ask Confluent.

Email Us
