June 22, 2021 | Episode 164

Chaos Engineering with Apache Kafka and Gremlin

The most secure clusters aren’t built on the hope that they’ll never break. They are the clusters that are broken on purpose, with a specific goal. When organizations want to avoid systemic weaknesses, chaos engineering with Apache Kafka® is the way to go.

Your system is only as reliable as its most vulnerable point. Patrick Brennan (Principal Architect) and Tammy Butow (Principal SRE) from Gremlin discuss how they do their own chaos engineering to manage and resolve high-severity incidents across the company. But why would an engineer break things they would then have to fix? Brennan explains that finding weaknesses in the cloud environment helps Gremlin to:

  • Avoid lengthy downtime when there is an issue (not if, but when)
  • Avoid the lost revenue that results from service interruptions
  • Maintain customer satisfaction with their stream processing services
  • Steer clear of burnout for the SRE team 

Chaos engineering is all about experimenting by injecting failure directly into clusters running in the cloud and observing what happens. The key is to start with a small blast radius and then scale as needed. It is critical that SREs have a plan for failure and follow a disciplined communication process with the development team. That plan has to be detailed, including precise diagrams, so that nothing in the chaos engineering process comes as a surprise. Once the process is confirmed, SREs can automate it, and nothing about it is random.
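
To make the small-blast-radius idea concrete, here is a minimal, hypothetical sketch in Python. It is not Gremlin’s API or anything discussed verbatim in the episode; it assumes a local three-broker Kafka cluster running in Docker containers named kafka-1 through kafka-3, a replicated topic called orders, and the confluent-kafka client. The experiment pauses a single broker (the blast radius), verifies that produce and consume still succeed, and then restores the broker.

```python
# Hypothetical small-blast-radius chaos experiment against a local Kafka cluster.
# Assumptions (not from the episode): brokers run in Docker containers named
# kafka-1..kafka-3, topic "orders" is replicated across them, and the
# confluent-kafka Python client is installed. This is NOT the Gremlin API.
import subprocess
import time

from confluent_kafka import Producer, Consumer

BOOTSTRAP = "localhost:9092"
TOPIC = "orders"
TARGET_BROKER = "kafka-2"  # blast radius: one broker only


def pause_broker(container: str) -> None:
    """Simulate a broker outage by freezing its container."""
    subprocess.run(["docker", "pause", container], check=True)


def resume_broker(container: str) -> None:
    """End the experiment and restore the broker."""
    subprocess.run(["docker", "unpause", container], check=True)


def verify_round_trip() -> bool:
    """Produce one probe message and confirm a consumer can read it back."""
    producer = Producer({"bootstrap.servers": BOOTSTRAP})
    payload = f"chaos-probe-{time.time()}".encode()
    producer.produce(TOPIC, value=payload)
    producer.flush(10)

    consumer = Consumer({
        "bootstrap.servers": BOOTSTRAP,
        "group.id": "chaos-probe",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    deadline = time.time() + 30
    try:
        while time.time() < deadline:
            msg = consumer.poll(1.0)
            if msg is not None and not msg.error() and msg.value() == payload:
                return True
        return False
    finally:
        consumer.close()


if __name__ == "__main__":
    pause_broker(TARGET_BROKER)
    try:
        ok = verify_round_trip()
        print("cluster survived broker outage" if ok else "experiment found a weakness")
    finally:
        resume_broker(TARGET_BROKER)  # always restore the broker, even on failure
```

Once an experiment like this runs reliably by hand, it can be scheduled and automated, and the blast radius can be widened step by step, which mirrors the progression described in the episode.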

When something breaks or a vulnerability is found, it only helps the overall system become stronger, and it gives engineering teams a way to problem-solve collaboratively. Chaos engineering makes it easier for SRE and development teams to do their jobs, and it helps the organization deliver security and reliability to its customers. With Kafka, companies don’t have to wait for an issue to happen. They can create their own disorder within cloud microservices and fix vulnerabilities before anything catastrophic happens.

Continue Listening

Episode 165 | June 29, 2021 | 27 min

Data-Driven Digitalization with Apache Kafka in the Food Industry at BAADER

Coming out of university, Patrick Neff (Data Scientist, BAADER) was used to “perfect” examples of datasets. However, he soon realized that in the real world, data is often either unavailable or unstructured. This compelled him to learn more about collecting data, analyzing it in a smart and automatic way, and exploring Apache Kafka as a core ecosystem while at BAADER, a global provider of food processing machines. After Patrick began working with Apache Kafka in 2019, he developed several microservices with Kafka Streams and used Kafka Connect for various data analytics projects. Focused on the food value chain, Patrick’s mission is to optimize processes specifically around transportation and processing.

Episode 166 | July 8, 2021 | 29 min

Automated Event-Driven Architectures and Microservices with Apache Kafka and SmartBear

Automated behavior-driven testing of your event-driven microservices—sounds great, right? But how do you do it? SmartBear's Alianna Inzana talks about just that, along with some of the tooling SmartBear makes to support efforts like this. In fact, SmartBear is responsible for many products that you're probably already familiar with.

Episode 167 | July 15, 2021 | 25 min

Powering Real-Time Analytics with Apache Kafka and Rockset

Using large amounts of streaming data increasingly requires interactive, real-time analytics and dashboards—and this applies to any industry, including tech. Dhruba Borthakur, CTO and co-founder of Rockset, shares how his company uses Apache Kafka to perform complex joins, search, and aggregations on streaming data with low latencies. Rockset's Kafka integration allows his team to build a cloud-native analytics database that is a fundamental piece of enterprise infrastructure.

Got questions?

If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we hope to answer it on the next episode of Ask Confluent.

Email Us
