Gwen Shapira outlines KIP-402, which aims to improve fairness in how Apache Kafka® processes connections and how network threads pick up requests and new data. She also shares her team's efforts to make user-facing Kafka improvements.
When it comes to data modeling, Dani Traphagen covers important business requirements, including the need for a domain model, domain-driven design principles, and bounded contexts. She also discusses the attributes of data modeling: time, source, key, header, metadata, and payload, and explores the significance of data governance, lineage, and performing joins.
Joy Gao chats with Tim Berglund about all things related to streaming ETL—how it works, its benefits, and the implementation and operational challenges involved. She describes the streaming ETL architecture at WePay from MySQL/Cassandra to BigQuery using Apache Kafka®, Kafka Connect, and Debezium.
Todd Palino talks about the start of Apache Kafka® at LinkedIn, what learning to use Kafka was like, how Kafka has changed, and what he and others in the community hope for in the future of Kafka.
Neil Buesing (Director of Real-Time Data, Object Partners) discusses what a day in his life looks like and how Kafka Streams helps analyze flight data.
If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we'll try to answer it on the next episode of Ask Confluent.
Email Us