July 8, 2021 | Episode 166

Automated Event-Driven Architectures and Microservices with Apache Kafka and SmartBear


Is it possible to automate the adoption of event-driven architectures and microservices? The answer is yes! Alianna Inzana, product leader for API testing and virtualization at SmartBear, uses an evolutionary model to make event services reusable, functional, and strategic for both in-house needs and clients. SmartBear relies on Apache Kafka® to drive its automated microservices solutions forward through scaled architecture and adaptive workflows.

Although the path to adoption differs across use cases and client requirements, it is ultimately a matter of maturity and API lifecycle management. As your services mature and grow, so should your event streaming architecture. The data your organization collects no longer lives in a silo; it has to be accessible across many services and events, and a good architecture handles those fluctuations. Alianna explains that although the end result is an architectural pattern, it doesn't start that way: these automation processes begin as coding patterns on isolated platforms.

Development cannot be rushed at the coding stage, because you never truly know how the code will behave in the end system. Testing must happen at each step of the implementation to ensure the event-driven architecture works for every variation of the service, and the code is altered as needed throughout this trial phase. Next, the product and development teams compare the architecture you have today with where you'd like it to be; it all comes down to the product and how you'd like to scale it. The tricky part is the trial and error of bringing on each product and service one by one, but once your offerings and architecture are in sync, you save the time and effort of building something new for every microservice.
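To make step-by-step testing concrete, here is a minimal sketch of verifying one stage of an event flow in isolation using Kafka Streams' TopologyTestDriver, which requires no running broker. The topic names, the uppercase transformation, and the class name are hypothetical stand-ins; the episode does not describe SmartBear's actual pipeline.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;

public class EventStepTest {
    public static void main(String[] args) {
        // Build only the step under test: normalizing raw order events (hypothetical).
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("orders-raw")
               .mapValues(v -> v.toUpperCase())
               .to("orders-normalized");
        Topology topology = builder.build();

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-step-test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:9092"); // never contacted by the test driver
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
            TestInputTopic<String, String> in =
                driver.createInputTopic("orders-raw", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> out =
                driver.createOutputTopic("orders-normalized", new StringDeserializer(), new StringDeserializer());

            in.pipeInput("order-1", "created");
            String value = out.readValue();

            // Check the event contract before wiring this step into the wider system.
            if (!"CREATED".equals(value)) {
                throw new AssertionError("unexpected event payload: " + value);
            }
            System.out.println("event transformed as expected: " + value);
        }
    }
}
```

Because each stage can be verified like this in isolation, the trial-and-error phase stays contained to one service at a time rather than spilling across the whole system.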

With event-driven architectures in place, you can minimize duplicated effort and adapt your business offerings as needs arise. This is a strategic step for any organization looking to build a cohesive product offering. Architecture automation allows for flexible features that scale with your event services; once these are in place, a company can use and grow them as needed. With innovative, adaptable event-driven architectures, organizations can grow with their clients and scale the backend as required.

Continue Listening

Episode 167 | July 15, 2021 | 25 min

Powering Real-Time Analytics with Apache Kafka and Rockset

Using large amounts of streaming data increasingly requires interactive, real-time analytics and dashboards, and this applies to any industry, including tech. CTO and Co-Founder of Rockset Dhruba Borthakur shares how his company uses Apache Kafka to perform complex joins, search, and aggregations on streaming data with low latency. The Kafka integrations allow his team to build a cloud-native analytics database that serves as a fundamental piece of enterprise infrastructure.

Episode 168 | July 22, 2021 | 29 min

Consistent, Complete Distributed Stream Processing ft. Guozhang Wang

Stream processing has become an important part of the big data landscape as a new programming paradigm for implementing real-time, data-driven applications. One of the biggest challenges for streaming systems is providing correctness guarantees for data processing in a distributed environment. Guozhang Wang (Distributed Systems Engineer, Confluent) co-authored a paper with other leaders in the Apache Kafka community on consistency and completeness in stream processing in Apache Kafka, shedding light on what a reimagined, modern streaming infrastructure looks like.

Episode 169 | July 27, 2021 | 25 min

Collecting Data with a Custom SIEM System Built on Apache Kafka and Kafka Connect ft. Vitalii Rudenskyi

The best-informed business insights, and the better decisions they support, begin with data collection, ahead of data processing and analytics. Enterprises today are engulfed by floods of data, with sources ranging from cloud services and applications to thousands of internal servers, and the sheer volume that organizations must ingest presents real challenges for many large companies. In this episode, data security engineer Vitalii Rudenskyi discusses the decision to replace a vendor security information and event management (SIEM) system with a custom solution built on Apache Kafka and Kafka Connect for a better data collection strategy.
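For a sense of what such a custom collector involves, here is a minimal sketch of a Kafka Connect source task, the kind of building block a custom collection pipeline would implement. The class name, the "siem-events" topic, and the event-fetching placeholder are hypothetical; the episode does not detail Rudenskyi's actual code.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Hypothetical source task that pulls security events into Kafka via Connect.
public class SecurityEventSourceTask extends SourceTask {
    private String topic;

    @Override
    public void start(Map<String, String> props) {
        // Connector configuration decides where collected events land.
        topic = props.getOrDefault("topic", "siem-events");
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // In a real task, read from the security appliance, syslog feed, or API here.
        String event = fetchNextEvent();
        if (event == null) {
            Thread.sleep(500); // back off while the source is idle
            return Collections.emptyList();
        }
        // Source partition/offset maps let Connect resume from where it left off after restarts.
        Map<String, ?> partition = Collections.singletonMap("source", "demo");
        Map<String, ?> offset = Collections.singletonMap("position", System.currentTimeMillis());
        return Collections.singletonList(
            new SourceRecord(partition, offset, topic, Schema.STRING_SCHEMA, event));
    }

    private String fetchNextEvent() {
        return null; // placeholder: wire up the real data source here
    }

    @Override
    public void stop() {}

    @Override
    public String version() {
        return "0.1.0";
    }
}
```

Packaging collectors as Connect tasks like this lets the Connect framework handle offsets, scaling, and fault tolerance, which is part of the appeal over a vendor SIEM ingestion layer.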

Got questions?

If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we hope to answer it on the next episode of Ask Confluent.

