Coming out of university, Patrick Neff (Data Scientist, BAADER) was used to working with “perfect” example datasets. He soon realized, however, that in the real world, data is often either unavailable or unstructured. This compelled him to learn more about collecting data, analyzing it in a smart and automated way, and exploring Apache Kafka as a core ecosystem while at BAADER, a global provider of food processing machines. After starting to work with Apache Kafka in 2019, Patrick developed several microservices with Kafka Streams and used Kafka Connect for various data analytics projects. Focused on the food value chain, his mission is to optimize processes, specifically around transportation and processing.
The most secure clusters aren’t built on the hope that they’ll never break. They are the clusters that are broken on purpose, with a specific goal. When organizations want to avoid systemic weaknesses, chaos engineering with Apache Kafka® is the way to go. Patrick Brennan (Principal Architect) and Tammy Butow (Principal SRE) from Gremlin discuss how they do their own chaos engineering to manage and resolve high-severity incidents across the company.
Enabling private links in the cloud is increasingly important for network security and even the reliability of stream processing. Staff Software Engineer II Dan LaMotte and his team focus on enabling customers to connect securely to Confluent Cloud. With the option of private links, you can now also build microservices that use functionality that wasn’t available in the past. Thanks to completely secure connections between teams that are otherwise disconnected from one another, you no longer need to segment your workflow.
Based on Apache Kafka® 2.8, Confluent Platform 6.2 introduces Health+, which offers intelligent alerting, cloud-based monitoring tools, and accelerated support so that you can get notified of potential issues before they manifest as critical problems that lead to downtime and business disruption.
Collecting internal, operational telemetry from Confluent Cloud services and thousands of clusters is no small feat. Traditionally, this data needs to be collected in multiple ways to satisfy all the different requirements. However, this sometimes leads to discrepancies between various systems. With OpenTelemetry, we can collect data in a vendor-agnostic way. Many vendors already integrate with OpenTelemetry, which gives us the flexibility to try out different observability solutions with minimal effort, without the need to rewrite applications or deploy new agents.
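To make the idea concrete, here is a minimal sketch of vendor-agnostic metric collection using the OpenTelemetry Java SDK. It assumes an OTLP-compatible collector listening on localhost:4317, and the meter and metric names are purely illustrative rather than Confluent’s actual internal telemetry; the point is that swapping observability backends means reconfiguring the collector, not rewriting the application.

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

public class TelemetryExample {
    public static void main(String[] args) {
        // Export metrics over OTLP to any OpenTelemetry-compatible collector.
        // The collector then fans data out to whichever backend you choose,
        // so the application code stays vendor-neutral.
        OtlpGrpcMetricExporter exporter = OtlpGrpcMetricExporter.builder()
                .setEndpoint("http://localhost:4317") // hypothetical collector endpoint
                .build();

        // A periodic reader pushes accumulated metrics to the exporter on an interval.
        SdkMeterProvider meterProvider = SdkMeterProvider.builder()
                .registerMetricReader(PeriodicMetricReader.builder(exporter).build())
                .build();

        OpenTelemetry otel = OpenTelemetrySdk.builder()
                .setMeterProvider(meterProvider)
                .build();

        // Record a simple counter; meter and metric names are illustrative.
        Meter meter = otel.getMeter("cluster-telemetry");
        LongCounter recordsProcessed = meter.counterBuilder("records.processed")
                .setDescription("Records processed by this service")
                .setUnit("1")
                .build();
        recordsProcessed.add(42);

        // Flush pending metrics and shut down cleanly before exit.
        meterProvider.shutdown();
    }
}
```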
If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we hope to answer it on the next episode of Ask Confluent.