We are happy to say that the first Kafka Summit Bangalore was a rousing success! One of our Confluent Developer Advocates, Diptiman Raichaudhuri, reports from the scene:
What was the energy like? The first-ever Kafka Summit in Bangalore, India drew in over 6000 participants. The ambiance was nothing short of electrifying, igniting enthusiasm and momentum throughout the event!
What was your favorite talk? While there were outstanding sessions to choose from, it was amazing to learn about the scale at which PhonePe, a leading fintech company in India, operates with their Apache Kafka® clusters. Nitish Goyal, software architect, shared life-saving tips and experiences of managing ~400 billion messages per day with their Kafka clusters and how automation had a fundamental impact on their Kafka system design. He also shared valuable insights on cluster sizing and observability.
What was it like being at the Confluent booth? The Confluent booth had both experienced and fresh Kafka enthusiasts inquiring about the benefits of Confluent Cloud and asking how managed stream processing engines like Apache Flink® SQL remove many of the infrastructure bottlenecks around deploying such applications. The DevX section of the booth also garnered a lot of interest in the Confluent Developer portal.
Wow, thanks for relating the vibes and the buzz, Diptiman! If you’re feeling the FOMO, don’t worry, there’s a blog post that summarizes the news that went out, including the arrival of AI Model Inference in Confluent Cloud for Apache Flink, auto-scaling Freight clusters, Tableflow, and Confluent Platform for Apache Flink.
We’ve also got you covered with other data streaming resources to satisfy your curiosity, including: a terminal tip that teaches you how to reset Kafka topic offsets, a video that explains watermarking in Flink, and a list of international meetups.
How would you design a Flink job for topics with heterogeneous processing needs? For example, some topics need session windowing, some need tumbling, etc. Read the answer here.
Got your own favorite Stack Overflow answer related to Flink or Kafka? Send it in to devx_newsletter@confluent.io!
Let’s learn how to reset consumer offsets with the kafka-consumer-groups CLI using its shift-by option. shift-by resets a consumer group’s offsets by shifting each partition’s current offset by n. The shift-by value can be positive or negative—negative to go back and re-read earlier messages, positive to advance past messages.
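Conceptually, the new offset is just the current offset plus the shift, clamped to the partition’s valid range (Kafka won’t reset past the earliest or latest offset). Here’s a minimal shell sketch of that arithmetic—an illustration, not Kafka’s actual implementation:

```shell
#!/bin/sh
# Sketch of shift-by semantics: new = current + shift, clamped to [earliest, latest].
# Arguments: current shift earliest latest
shift_offset() {
  current=$1; shift_by=$2; earliest=$3; latest=$4
  new=$((current + shift_by))
  [ "$new" -lt "$earliest" ] && new=$earliest   # can't rewind before the log start
  [ "$new" -gt "$latest" ] && new=$latest       # can't advance past the log end
  echo "$new"
}

shift_offset 20 -5 0 25    # prints 15: a negative shift rewinds
shift_offset 18 10 0 25    # prints 25: clamped at the log end offset
shift_offset 3 -10 0 25    # prints 0: clamped at the log start offset
```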
Create a consumer group first for reading messages sent by sensor devices with multiple consumers by running the following command in multiple terminals:
kafka-console-consumer.sh --topic sensor.readings --bootstrap-server localhost:9092 --group sensor-con-grp
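If you don’t already have data in the topic, you could generate a few keyed test readings with the console producer. This is one way to do it (the key format and payload here are just illustrative):

```shell
# Pipe sample key:value records into the console producer.
# parse.key/key.separator split each line into a message key and value.
for i in 1 2 3 4 5; do
  echo "sensor-$i:{\"temperature\": $((20 + i))}"
done | kafka-console-producer.sh \
  --topic sensor.readings \
  --bootstrap-server localhost:9092 \
  --property parse.key=true \
  --property key.separator=:
```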
Then, produce sensor readings and check that each consumer is receiving messages. Next, run a --describe of the consumer group:
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group sensor-con-grp
Check the offsets:
GROUP TOPIC PARTITION CURRENT-OFFSET
sensor-con-grp sensor.readings 2 20
sensor-con-grp sensor.readings 1 18
sensor-con-grp sensor.readings 3 17
Stop all consumers, then apply shift-by with a value of -5:
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group sensor-con-grp --reset-offsets --shift-by -5 --execute --topic sensor.readings
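If you’d rather preview the result before committing it, swap --execute for --dry-run, which prints the offsets the reset would produce without applying them:

```shell
# Preview the reset plan; no offsets are changed until you re-run with --execute.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group sensor-con-grp \
  --reset-offsets --shift-by -5 \
  --dry-run --topic sensor.readings
```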
Describe the consumer group again, and look at the new offsets:
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group sensor-con-grp
Notice how shift-by has decreased each current offset by 5:
GROUP TOPIC PARTITION CURRENT-OFFSET
sensor-con-grp sensor.readings 2 15
sensor-con-grp sensor.readings 1 13
sensor-con-grp sensor.readings 3 12
Each offset decreased by 5, so when the consumers restart, they will re-read the last 5 messages from each partition.
We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io!
If you’d like to view previous editions of the newsletter, visit our archive.
If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form on the left-hand side.
P.S. If you want to learn more about Kafka, Flink, or Confluent Cloud, visit our developer site at Confluent Developer.
We will only share developer content and updates, including notifications when new content is added. We will never send you sales emails. 🙂 By subscribing, you understand we will process your personal information in accordance with our Privacy Statement.