Course: Kafka Connect 101

Errors and Dead Letter Queues

Danica Fine
Senior Developer Advocate (Presenter)

Error Handling in Kafka Connect


Kafka Connect supports several error-handling patterns: fail fast, silently ignore, and dead letter queues.

Each of these patterns is suited to different scenarios, so let’s examine some cases where they come into play.

Serialization Challenges – Wrong Converter


As you set up and try various options with Connect, you might find that you have accidentally configured Connect to use an incorrect converter. For example, if topic messages were serialized as JSON data and Connect attempts to deserialize them using the Avro converter, an “Unknown magic byte!” exception would be triggered.

In this situation, the Avro deserializer is trying to process a message from a Kafka topic, and the message either is not Avro, or it is Avro but wasn’t created with the Confluent Schema Registry serializer. Either way, the message doesn’t match the expected wire format, which prefixes each Avro payload with a “magic byte” and a four-byte schema ID; hence the reference to “magic bytes.” The error sounds cryptic, but it simply means that the data is in a format other than the one the Avro deserializer expects.

The simple fix in this case would be to update the connector instance configuration to use the correct JSON converter.
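
As a rough sketch, the relevant part of the sink connector configuration would look something like the following (the connector class and topic name are hypothetical placeholders; only the converter settings matter here):

{
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "topics": "orders",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false"
}

Setting value.converter.schemas.enable to false tells the JsonConverter to expect plain JSON payloads rather than the envelope format that embeds a schema alongside each record.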

Serialization Challenges – Multiple Formats


Another scenario you might come across is a topic that contains messages in multiple serialization formats. This can happen when the serialization method for a topic is changed, for example from JSON to Avro. If the producer clients aren’t all updated at the same time, multiple producers end up writing to the topic in different formats, and the topic’s messages may alternate between them. Since a connector instance can be configured with only a single converter, exceptions will occur whenever the converter encounters a message in the other format.

You can address both of these scenarios with configuration for error tolerances and dead letter queues. Let’s take a look at these now.

Error Tolerances – Fail Fast (Default)


By default, if Connect receives a serialization error like the ones just covered, the corresponding connector task stops, and you have to resolve the issue and then restart the task. This is safe behavior: if there’s an invalid message, Connect won’t process it.
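
This fail-fast behavior is governed by the errors.tolerance property, whose default is none; in other words, the following setting is implied unless you override it:

{
  "errors.tolerance": "none"
}

Setting it to all instead tells Connect to tolerate errors that occur during conversion and transformation and keep the task running.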

Error Tolerances – Dead Letter Queue


A dead letter queue in Kafka is just another Kafka topic to which messages can be routed by Kafka Connect if they fail to process in some way. The term is employed for its familiarity; a dead letter queue as traditionally conceived is part of an enterprise messaging system and is a place where messages are sent based on some routing logic that classifies them as having nowhere to go, and as potentially needing to be processed at a later time.

In Kafka Connect, a dead letter queue isn’t configured automatically, since not every connector needs one. When it is enabled, failed messages are routed there instead of being silently dropped. Once the messages are in the queue, you can inspect their headers, which record the reasons for their rejection, as well as their keys and values.
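
A minimal sketch of the settings that enable a dead letter queue (the topic name is just an example, and a replication factor of 1 is only appropriate for development):

{
  "errors.tolerance": "all",
  "errors.deadletterqueue.topic.name": "dlq-orders",
  "errors.deadletterqueue.topic.replication.factor": "1",
  "errors.deadletterqueue.context.headers.enable": "true"
}

Note that dead letter queues are available for sink connectors only. Enabling the context headers is what records the failure reason with each message: Connect adds headers prefixed with __connect.errors., including the exception class and message.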

Reprocessing the Dead Letter Queue


The scenario above, where an Avro producer and a JSON producer are writing to the same topic, can be solved with a dead letter queue. You configure the dead letter queue to receive the failing messages, then reprocess them from the dead letter queue with the appropriate converter and send them on to the sink. For example, if Avro messages are flowing correctly to the sink but JSON messages are erroring into the dead letter queue, you can add a second connector with a JSON converter to read them out of the dead letter queue and deliver them to the sink. This allows you to process the source topic completely, which isn’t possible with a single connector.
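
As a sketch, the second connector could be a copy of the first that reads from the dead letter queue topic with the JSON converter instead (the connector class and topic names are hypothetical, matching the earlier examples):

{
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "topics": "dlq-orders",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false"
}

Both connectors write to the same sink; the only differences are the source topic and the converter.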

Manual DLQ Processing

The dead letter queue reprocessing solution is straightforward to reason about, and it’s simple to configure and get running. However, if that option doesn’t exist for some reason, or messages are failing for reasons that are hard to identify, you will have to try something else. If you know how to deal with a particular failure mode, you may be able to write a consumer that converts the messages; often in these situations, though, you will end up handling the erroring messages manually, so you should have a process for a person to review them.

This is ultimately why dead letter queues aren’t enabled by default in Kafka Connect: you need a way of dealing with the messages that land there, otherwise you are just producing them somewhere for no reason. Make sure you have thought through how you will process the contents of a dead letter queue before you set one up.

We hope you enjoyed following along in these Kafka Connect modules and exercises. We’ve learned so much together, and we can’t wait to see what you build!

