The dual-write problem is a classic snag in microservice architecture. You might be writing an event to both Kafka and a database… what if it makes it to the database but not to Kafka? There are ways to avoid this scenario… in today’s resources, Wade Waldron discusses it in his blog post and shares other microservices tips in video form.
We have a big announcement: confluent-kafka-javascript is in early access! Lucia Cerchie offers us a write-up on Medium.
In addition, we’re featuring a blog post from Michelin on designing topologies, a YouTube short on the difference between idempotent and transactional producers, and a blog post comparing Apache Iceberg to Delta Lake. Dig in!
Using confluent-kafka-javascript: Notes for Beginners. Learn about Confluent’s new JavaScript client for Kafka (early access) and get some practical tips on getting started with it, in this Medium post by Lucia Cerchie.
Will Apache Iceberg win over Delta Lake? Gilles Philippart holds forth in this Medium post.
A new case study from our microservices course. Wade Waldron walks through a concrete example of one of the initial steps of decomposing a monolith: defining an API.
Ever found yourself scrolling through dozens of Kafka topics? The Data Portal and Apache Flink in Confluent Cloud can help with that! Watch Gilles Philippart’s video to get a tutorial.
What’s the difference between idempotent and transactional Kafka producers? It’s subtle, but Justine Olshan nails it in a YouTube short.
Solving the dual-write problem: Wade Waldron guides us through a solution in this blog post.
Turning the data from a REST API into a data stream: get a tutorial in this video from Lucia Cerchie or read and clone the original demo in this repository.
Designing Kafka Streams Applications: Get a peek into the topology design process at Michelin.
Tips for decomposing a monolith from Wade Waldron: the latest installment of the Designing Event-Driven Microservices course.
Want to learn more about the difference between watermark delay and allowed lateness for Flink windows? Read here.
Got your own favorite Stack Overflow answer related to Flink or Kafka? Send it in to devx_newsletter@confluent.io!
Let’s learn the art of increasing the replication factor of an existing Kafka topic using the Kafka CLI! For Kafka administrators, adding replicas can come in as a late but urgent request from development teams. The following sequence of CLI commands quickly adds replicas to partitions that currently have only one.
Let’s create a leaders topic with a replication factor of 1 and a single partition, on a Kafka cluster with 3 brokers (IDs 1, 2, and 3).
bin/kafka-topics.sh --create --bootstrap-server <HOST:PORT> --topic leaders --partitions 1 --replication-factor 1
Let’s check the partition assignment by running kafka-topics with --describe:
bin/kafka-topics.sh --bootstrap-server <HOST:PORT> --topic leaders --describe
Output:
Topic: leaders  TopicId: _ikEsj89QcG18YEeMVX-Ag  PartitionCount: 1  ReplicationFactor: 1  Configs: segment.bytes=1073741824
    Topic: leaders  Partition: 0  Leader: 2  Replicas: 2  Isr: 2
Partition 0 is assigned to broker 2, and its replica list is just [2]. Now, let’s create a custom partition-reassignment JSON file that adds brokers 1 and 3 as replicas, and save it as increase-replication-factor.json:
{"version":1,
"partitions":[{"topic":"leaders","partition":0,"replicas":[1,2,3]}]}
This reassignment keeps broker 2 and adds brokers 1 and 3 as replicas of partition 0. Let’s run kafka-reassign-partitions.sh with the --execute option to apply the custom reassignment plan:
bin/kafka-reassign-partitions.sh --bootstrap-server <HOST:PORT> --reassignment-json-file increase-replication-factor.json --execute
Output:
Successfully started partition reassignment for leaders-0
Now, let’s check whether the partition reassignment was successful by running the same --describe command again:
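bin/kafka-topics.sh --bootstrap-server <HOST:PORT> --topic leaders --describe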
Output:
Topic: leaders  TopicId: _ikEsj89QcG18YEeMVX-Ag  PartitionCount: 1  ReplicationFactor: 3  Configs: segment.bytes=1073741824
    Topic: leaders  Partition: 0  Leader: 2  Replicas: 1,2,3  Isr: 2,3,1
The replica set for partition 0 has been increased from [2] to [1,2,3]. This tool comes in very handy for Kafka administrators when a partition is short of replicas and needs more added. That said, for large topics with lots of partition replicas it should be used with caution: adding replicas means copying partition data between brokers, which can consume significant network and disk bandwidth. A couple of options that help are sketched below.
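A minimal sketch, assuming a reasonably recent Kafka distribution (check kafka-reassign-partitions.sh --help for the options available in your version): the --throttle flag caps the bandwidth (in bytes/sec) used to move replicas, and --verify reports whether the reassignment has completed and removes any throttle that was applied.
bin/kafka-reassign-partitions.sh --bootstrap-server <HOST:PORT> --reassignment-json-file increase-replication-factor.json --execute --throttle 50000000
bin/kafka-reassign-partitions.sh --bootstrap-server <HOST:PORT> --reassignment-json-file increase-replication-factor.json --verify
Here 50000000 throttles replica movement to roughly 50 MB/s; pick a value that fits your cluster.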
A blog post on the physics of naval architecture with interactive diagrams
Ambient noise, configurable to your tastes
What part of speech is “really,” really? A StackExchange Q&A
Hybrid
In-person
Ho Chi Minh City, Vietnam (Jun 15): Come to Ho Chi Minh City for a presentation on the fundamentals of Apache Kafka
Berlin, Germany (Jun 17): Learn how Apache Kafka, Apache Flink, and Apache Druid power real-time plane spotting, as well as how to integrate Kafka with OpenTelemetry
Munich, Germany (Jun 18): Learn about Apache Kafka and OpenTelemetry, as well as how Apache Flink and Apache Druid power real-time plane spotting
London, UK (Jun 18): A GenAI bootcamp on mastering AI with Kafka and Flink
Krakow, Poland (Jun 18): A meetup focusing on Apache Kafka and Pinot
Bucharest, Romania (Jun 19): Apache Kafka Use Cases: Data Consumption and Data Integration
Belfast, Northern Ireland (Jun 20): Integrating Apache Kafka with OpenTelemetry
Mumbai, India (Jun 22): Get a walkthrough of both ksqlDB and FlinkSQL, and learn about the library that Dream11 created to abstract the low-level details of the Kafka consumer
San Francisco Bay Area, United States (Jun 27): Get updates on the world of Kafka Streams, and learn about common Kafka Connect pitfalls
We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io!
If you’d like to view previous editions of the newsletter, visit our archive.
If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form on the left-hand side.
We will only share developer content and updates, including notifications when new content is added. We will never send you sales emails. 🙂 By subscribing, you understand we will process your personal information in accordance with our Privacy Statement.