Apache Kafka 3.8 is generally available, with 17 new KIPs!

August 8, 2024

Apache Kafka 3.8 is officially here!

This release includes 17 new KIPs, adding new features and functionality across Kafka Core, Kafka Streams, and Kafka Connect:

  • 13 KIPs related to Kafka Core and clients
  • 3 for Kafka Streams
  • 1 for Kafka Connect

Highlights include:

  • Two new Docker images, the next generation of the Consumer Rebalance Protocol (Preview), the ability to set compression levels for some codecs, and an easier way to check if a metric is measurable
  • The ability to set task assignors in Kafka Streams
  • The ability to enforce the tasks.max configuration in Kafka Connect

In a new video, Danica Fine provides a summary of the latest updates and new features in this Apache Kafka release! Watch it here ➡️ YouTube.

Data Streaming Resources:

  • Want to learn more about KIP-405 and how Kafka records can be stored in infinite cloud storage? Read Stanislav Kozlovski’s LinkedIn post here
  • In the last newsletter, we shared a Medium blog by Adam Bellemare, Staff Technologist at Confluent, about bad events in streams. In Part 2 of the blog, you’ll learn how to use schemas, tests, and data quality constraints to ensure your systems produce well-defined data in the first place.
  • Explore how to update data streams without downtime by following a banking case study. In a recently released Case Study Module from our Microservices 101 course by Wade Waldron, you’ll see how Tributary Bank tackles fraud detection with innovative techniques while facing the challenges of evolving schemas, and learn how to evolve message protocols and replace encryption algorithms in a live system without impacting end users. Check it out here
  • Apache Paimon is gaining interest among data platform practitioners as the newest star in the open-source table format space. Read a blog by Jack Vanlightly, Principal Technologist at Confluent, to understand Apache Paimon’s consistency models.

A Droplet From Stack Overflow:

You may have gotten an error like this while using the Apache Flink Table API:

Invalid primary key 'PK_id'. Column 'id' is nullable.

Martijn Visser has you covered with the syntax to create a non-nullable column!
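
The fix boils down to declaring the primary-key column `NOT NULL` in your table DDL, since Flink won’t accept a nullable column as a primary key. Here’s a minimal sketch of that DDL pattern (the table and column names are hypothetical, and the `datagen` connector is just a stand-in); with a `TableEnvironment` in hand you would run it via `t_env.execute_sql(ddl)`:

```python
# A sketch of the Flink SQL DDL pattern the answer points to: the
# primary-key column is declared NOT NULL so it can back a primary key.
ddl = """
CREATE TABLE readings (
    id INT NOT NULL,               -- nullable columns cannot be primary keys
    reading DOUBLE,
    PRIMARY KEY (id) NOT ENFORCED  -- Flink primary keys must be NOT ENFORCED
) WITH (
    'connector' = 'datagen'
)
"""
```

Note the `NOT ENFORCED` clause: Flink doesn’t own the data, so it trusts you on uniqueness rather than validating the constraint itself.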

Got your own favorite Stack Overflow answer related to Flink or Kafka? Send it in to devx_newsletter@confluent.io!

Terminal Tip of the Week:

Today’s terminal tip is actually a code tip! Adding delivery reports to the producers you create with the confluent-kafka Python client makes it much easier to see what’s happening when you run code from your terminal.

First, define a delivery report function:

def delivery_report(err, event):
    if err is not None:
        print(f'Delivery failed on reading for {event.key().decode("utf8")}: {err}')
    else:
        print(
            f'Device reading for {event.key().decode("utf8")} produced to {event.topic()}'
        )
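
If you’d like to see both branches of the callback without a running cluster, you can exercise it with a stand-in event object. This is just a sketch: `FakeEvent` is hypothetical and only mimics the two `Message` methods the callback actually uses (`key()` and `topic()`).

```python
class FakeEvent:
    """Stand-in for a confluent-kafka Message, exposing only key() and topic()."""

    def __init__(self, key, topic):
        self._key = key
        self._topic = topic

    def key(self):
        return self._key

    def topic(self):
        return self._topic


def delivery_report(err, event):
    if err is not None:
        print(f'Delivery failed on reading for {event.key().decode("utf8")}: {err}')
    else:
        print(
            f'Device reading for {event.key().decode("utf8")} produced to {event.topic()}'
        )


# Success path:
delivery_report(None, FakeEvent(b"09", "messages_5"))
# prints: Device reading for 09 produced to messages_5

# Failure path (a real callback would receive a KafkaError here):
delivery_report("simulated error", FakeEvent(b"89", "messages_5"))
# prints: Delivery failed on reading for 89: simulated error
```

One thing to keep in mind: with the real client, delivery callbacks only fire when the producer is serviced, e.g. via `producer.poll()` or `producer.flush()`.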

Then, pass the function as a callback argument when you produce the message:

producer.produce(
    topic=topic,
    key=string_serializer(key),
    value=avro_serializer(
        message_data, SerializationContext(topic, MessageField.VALUE)
    ),
    on_delivery=delivery_report,
)

This ensures that confirmation is printed to your terminal when things are going well:

Device reading for 09 produced to messages_5

And that errors are printed when things are not going well:

Delivery failed on reading for 89: KafkaError{code=UNKNOWN_TOPIC_OR_PART,val=3,str="Broker: Unknown topic or partition"}

In the case above, this is the error you would get when producing to a topic that is not known to the cluster.

By the way…

We hope you enjoyed our curated assortment of resources! If you’d like to provide feedback, suggest ideas for content you’d like to see, or you want to submit your own resource for consideration, email us at devx_newsletter@confluent.io!

If you’d like to view previous editions of the newsletter, visit our archive.

If you’re viewing this newsletter online, know that we appreciate your readership and that you can get this newsletter delivered directly to your inbox by filling out the sign-up form on the left-hand side.

P.S. If you want to learn more about Kafka, Flink, or Confluent Cloud, visit our developer site at Confluent Developer.

Subscribe Now

We will only share developer content and updates, including notifications when new content is added. We will never send you sales emails. 🙂 By subscribing, you understand we will process your personal information in accordance with our Privacy Statement.
