July 27, 2021 | Episode 169

Collecting Data with a Custom SIEM System Built on Apache Kafka and Kafka Connect ft. Vitalii Rudenskyi


The best-informed business insights that support better decision-making begin with data collection, ahead of data processing and analytics. Enterprises today are engulfed by floods of data, with sources ranging from cloud services and applications to thousands of internal servers. The massive volume of data that organizations must process presents ingestion challenges for many large companies. In this episode, data security engineer Vitalii Rudenskyi discusses the decision to replace a vendor security information and event management (SIEM) system with a custom solution built on Apache Kafka® and Kafka Connect for a better data collection strategy.

Having a data collection infrastructure layer is mission critical for Vitalii and his team in helping enterprises protect data and detect security events. Built on Kafka, their custom SIEM infrastructure is configurable and designed to ingest and analyze huge amounts of data, including personally identifiable information (PII) and healthcare data.

When it comes to collecting data, there are two fundamental choices: push or pull. But how about both? Vitalii shares that Kafka Connect API extensions are integral to data ingestion in Kafka. Three key components allow their SIEM system to collect and record data daily, by both pushing and pulling:

  1. NettySource Connector: A connector developed to receive data from different network devices into Apache Kafka. It receives data over both the TCP and UDP transport protocols and can be adapted to receive anything from Syslog to SNMP and NetFlow (a minimal sketch of this style of source task appears after this list).
  2. PollableAPI Connector: A connector made to receive data from remote systems by pulling from different remote APIs and services.
  3. Transformations Library: Useful extensions to the existing out-of-the-box transformations, built around a “tag and apply” approach that moves collected data to the right place in the right format (a rough SMT sketch follows after the next paragraph).
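
To make the push model concrete, below is a minimal sketch of a Kafka Connect source task in that spirit: a background thread accepts TCP connections and buffers incoming lines, and poll() hands the buffered lines to Connect as records. This is an illustrative assumption, not the actual NettySource code discussed in the episode; the class name, the listener.port setting, and the netlog topic default are invented here, and UDP support, offset tracking, and robust error handling are omitted.

```java
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Hypothetical push-style source task: network devices push lines over TCP,
 * a listener thread buffers them, and poll() converts them to SourceRecords.
 */
public class TcpLineSourceTask extends SourceTask {

    private final ConcurrentLinkedQueue<String> buffer = new ConcurrentLinkedQueue<>();
    private volatile boolean running = true;
    private String topic;

    @Override
    public void start(Map<String, String> props) {
        topic = props.getOrDefault("topic", "netlog");
        int port = Integer.parseInt(props.getOrDefault("listener.port", "1514"));
        Thread listener = new Thread(() -> {
            try (ServerSocket server = new ServerSocket(port)) {
                while (running) {
                    // One connection at a time; a real connector would multiplex.
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()))) {
                        String line;
                        while ((line = in.readLine()) != null) {
                            buffer.add(line);
                        }
                    }
                }
            } catch (Exception e) {
                // A real task would log this and report failure to the framework.
            }
        });
        listener.setDaemon(true);
        listener.start();
    }

    @Override
    public List<SourceRecord> poll() {
        // Drain whatever the listener thread buffered since the last call.
        List<SourceRecord> records = new ArrayList<>();
        String line;
        while ((line = buffer.poll()) != null) {
            records.add(new SourceRecord(null, null, topic,
                    Schema.STRING_SCHEMA, line));
        }
        return records;
    }

    @Override
    public void stop() {
        // Stops the accept loop; a real task would also close the server socket.
        running = false;
    }

    @Override
    public String version() {
        return "0.1.0";
    }
}
```

Decoupling the listener thread from poll() matters because Connect calls poll() on its own schedule; the task must buffer pushed data rather than block waiting on the network.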

Listen to learn more as Vitalii shares the importance of data collection and how the team built a custom solution to address multi-source data management requirements.
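
The “tag and apply” approach maps naturally onto Connect’s single message transform (SMT) interface. As a rough sketch of the “tag” half, with invented names (TagHeader, the tag.value setting, the siem.tag header) rather than the library’s actual classes, a transformation that stamps each record with a routing tag might look like this:

```java
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

import java.util.Map;

/**
 * Hypothetical "tag" transformation: stamps every record with a header that
 * later "apply" transformations or downstream consumers can route on.
 */
public class TagHeader<R extends ConnectRecord<R>> implements Transformation<R> {

    private static final String TAG_CONFIG = "tag.value";

    private static final ConfigDef CONFIG_DEF = new ConfigDef()
            .define(TAG_CONFIG, ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                    "Tag written into each record's headers.");

    private String tag;

    @Override
    public void configure(Map<String, ?> configs) {
        // Assumes the value arrives as a plain string from the connector config.
        tag = String.valueOf(configs.get(TAG_CONFIG));
    }

    @Override
    public R apply(R record) {
        // Headers travel with the record, so downstream steps can match on the tag.
        record.headers().addString("siem.tag", tag);
        return record;
    }

    @Override
    public ConfigDef config() {
        return CONFIG_DEF;
    }

    @Override
    public void close() {
        // Nothing to release.
    }
}
```

A transform like this would be wired into a connector through the standard transforms chain (for example, transforms=tag with transforms.tag.type set to the class above), keeping the tagging step declarative and reusable across connectors.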

Continue Listening

Episode 170 | August 5, 2021 | 31 min

Minimizing Software Speciation with ksqlDB and Kafka Streams ft. Mitch Seymour

Building a large, stateful Kafka Streams application that tracks the state of each outgoing email is crucial to marketing automation tools like Mailchimp. In this episode, Mitch Seymour, staff engineer at Mailchimp, shares how ksqlDB and Kafka Streams handle the company’s largest source of streaming data.

Episode 171 | August 11, 2021 | 28 min

Advanced Stream Processing with ksqlDB ft. Michael Drogalis

ksqlDB makes it easy to read, write, process, and transform data on Apache Kafka, the de facto event streaming platform. With simple SQL syntax, pre-built connectors, and materialized views, ksqlDB’s powerful stream processing capabilities enable you to quickly start processing real-time data at scale. But how does ksqlDB work? In this episode, Michael Drogalis (Principal Product Manager, Product Management, Confluent) previews an all-new Confluent Developer course, Inside ksqlDB, where he provides a full overview of ksqlDB’s internal architecture and delves into advanced ksqlDB features.

Episode 172 | August 19, 2021 | 28 min

Placing Apache Kafka at the Heart of a Data Revolution at Saxo Bank

Monolithic applications present challenges for organizations like Saxo Bank, including difficulties in transitioning to the cloud, achieving data efficiency, and managing data in a regulated environment. Graham Stirling, head of data platforms at Saxo Bank and a self-proclaimed recovering architect on the pathway to delivery, shares his experience over the last 2.5 years as Saxo Bank placed Apache Kafka at the heart of the company, a shift they call a data revolution.

Got questions?

If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question, and we hope to answer it on the next episode of Ask Confluent.

