August 26, 2021 | Episode 173

Using Apache Kafka and ksqlDB for Data Replication at Bolt


What does a ride-hailing app that offers micromobility and food delivery services have to do with data in motion? In this episode, Ruslan Gibaiev (Data Architect, Bolt) shares Bolt’s road to adopting Apache Kafka® and ksqlDB for stream processing to replicate data from transactional databases to analytical warehouses.

Rome wasn't built in a day, and neither was the adoption of Kafka and ksqlDB at Bolt. Initially, Bolt recognized the need to standardize its systems and to replace an unreliable query-based change data capture (CDC) process. As an experienced Kafka developer, Ruslan believed that Kafka was the right foundation for change data capture and for a company-wide event streaming solution. Persuading the team at Bolt to buy in was hard at first, but Ruslan made it happen. Eventually, the team replaced query-based CDC with log-based CDC from Debezium, built on top of Kafka. Shortly after the implementation, developers at Bolt began to see precise, correct, real-time data.
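As a rough illustration of what log-based CDC looks like from the consuming side, here is a minimal sketch (not Bolt's actual code) that reads Debezium change events from a Kafka topic with the plain Java consumer. The broker address, consumer group, and topic name `bolt_db.public.orders` are hypothetical; Debezium's change envelope carries `before` and `after` images of each modified row.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DebeziumChangeEventReader {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address and consumer group.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cdc-demo");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Debezium writes one topic per captured table; this topic name is made up for the example.
            consumer.subscribe(List.of("bolt_db.public.orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a Debezium change envelope (JSON here) with "before"/"after" row images.
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```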

As Bolt continued to grow, they saw the need for a data lake or a data warehouse fed by replicated data from their OLTP systems, along with stream processing on top of it. After carefully evaluating several solutions and frameworks, such as ksqlDB, Apache Flink®, Apache Spark™, and Kafka Streams, ksqlDB stood out as the best fit for their business requirements.

Bolt adopted ksqlDB because it is native to the Kafka ecosystem and fits their use case well. They found it a particularly good fit for replicating all of their data to a data warehouse for a number of reasons (a rough sketch follows the list), including:

  • Easy to deploy and manage
  • Linearly scalable
  • Natively integrates with Confluent Schema Registry 
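For a flavor of what this can look like in practice, here is a hedged sketch (not taken from the episode) that uses the ksqlDB Java client (`io.confluent.ksql.api.client`) to register a stream over a hypothetical Debezium topic and derive a flattened stream whose output a warehouse sink connector could consume. The server address, topic, stream, and column names are invented; with Avro and Confluent Schema Registry, ksqlDB can infer the columns from the registered schema.

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

public class KsqlDbReplicationSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical ksqlDB server address.
        ClientOptions options = ClientOptions.create()
                .setHost("localhost")
                .setPort(8088);
        Client client = Client.create(options);

        // Register a stream over a (hypothetical) Debezium change-event topic.
        // With VALUE_FORMAT='AVRO', ksqlDB pulls the column definitions from Schema Registry.
        client.executeStatement(
                "CREATE STREAM orders_cdc WITH ("
                        + " KAFKA_TOPIC='bolt_db.public.orders',"
                        + " VALUE_FORMAT='AVRO');"
        ).get();

        // Derive a flattened stream; a warehouse sink connector can read its output topic.
        client.executeStatement(
                "CREATE STREAM orders_for_warehouse AS"
                        + " SELECT after->id AS order_id,"
                        + "        after->status AS status,"
                        + "        after->amount AS amount"
                        + " FROM orders_cdc"
                        + " EMIT CHANGES;"
        ).get();

        client.close();
    }
}
```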

Tune in to find out more about Bolt’s adoption journey with Kafka and ksqlDB.

Continue Listening

Episode 174 | August 31, 2021 | 31 min

Multi-Cluster Apache Kafka with Cluster Linking ft. Nikhil Bhatia

Infrastructure needs to react in real time to support globally distributed events, such as cloud migration, IoT, edge data collection, and disaster recovery. To provide a seamless, cloud-native, cross-cluster topic replication experience, Nikhil Bhatia (Principal Engineer I, Product Infrastructure, Confluent) and the team engineered a solution called Cluster Linking. Available on Confluent Cloud, Cluster Linking is an API that enables Apache Kafka to work across multiple data centers, making it possible to design globally available distributed systems.

Episode 175 | September 9, 2021 | 34 min

What Is Data Mesh, and How Does it Work? ft. Zhamak Dehghani

The data mesh architectural paradigm shift is all about moving analytical data away from a monolithic data warehouse or data lake into a distributed architecture—allowing data to be shared for analytical purposes in real time, right at the point of origin. The idea of data mesh was introduced by Zhamak Dehghani (Director of Emerging Technologies, Thoughtworks) in 2019. Here, she provides an introduction to data mesh and the fundamental problems that it’s trying to solve.

Episode 176 | September 14, 2021 | 35 min

How to Build a Strong Developer Community with Global Engagement ft. Robin Moffatt and Ale Murray

A developer community brings people with shared interests and purpose together. Fundamentally, a community exists to gather, learn, support, and create opportunities for collaboration. A developer community is also an effective and efficient instrument for exploring and solving problems together.
