Senior Developer Advocate (Presenter)
There are several tools that you can use to troubleshoot managed connectors. Depending upon the type of problem, one of these tools may work better than others. Let’s walk through a scenario and see how each of these tools can be used to troubleshoot a problem.
One problem that you may experience is a sink connector that is unable to process the messages from the Kafka topic it is configured to consume from. It could be that it is just a subset of these messages that it is unable to process, or it could be all messages from the topic, as shown in this example. To troubleshoot this, you can click the dead letter queue tile in the connector overview window. This will navigate the UI to the associated Kafka topic where the dead letter queue messages are being written.
In the dead letter queue topic view, select the messages tab, drill into one of the messages, and select the header tab. Then scroll down in the header information to identify the possible cause for the message ending up in the dead letter queue. In this example, we see that the connector wasn’t configured to auto-create the destination table if it didn’t already exist. To correct this, you would simply update the connector configuration, setting auto-create table to true. You could do so using the Confluent Cloud UI, the Connect API, or the Confluent CLI.
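As a sketch of what this header inspection looks like programmatically: Kafka Connect’s dead letter queue reporter attaches error-context headers (names like `__connect.errors.exception.message`) to each record it routes to the dead letter queue. The snippet below pulls those headers out of a record; the sample record and its values are hypothetical.

```python
# Sketch: extracting the error cause from a dead letter queue record's headers.
# Header names come from Kafka Connect's DLQ reporter (written when
# errors.deadletterqueue.context.headers.enable is true); the sample
# record below is hypothetical.

def error_summary(headers):
    """Return the connector, exception class, and message from DLQ headers."""
    h = {key: value for key, value in headers}
    return {
        "connector": h.get("__connect.errors.connector.name"),
        "exception": h.get("__connect.errors.exception.class.name"),
        "message": h.get("__connect.errors.exception.message"),
    }

# Hypothetical headers as they might appear on a DLQ record:
sample_headers = [
    ("__connect.errors.topic", "orders"),
    ("__connect.errors.connector.name", "mysql-sink"),
    ("__connect.errors.exception.class.name",
     "org.apache.kafka.connect.errors.ConnectException"),
    ("__connect.errors.exception.message",
     "Table 'orders' is missing and auto-creation is disabled"),
]

summary = error_summary(sample_headers)
print(summary["message"])
```

In the UI you read these same headers by eye; scripting the extraction is mainly useful when many messages land in the dead letter queue and you want to group them by cause.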
Let’s continue with the use case from the previous slide. The connector has been configured to auto-create the destination table in the MySQL database, and the Confluent Cloud UI now indicates the connector failed. You can investigate this using several tools. Let’s now look at each of these.
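Conceptually, the corrected connector configuration now includes the auto-create setting. A minimal sketch is shown below; the connector name, class, and topic are hypothetical, and the exact property key can vary by connector type.

```json
{
  "name": "mysql-sink-orders",
  "config": {
    "connector.class": "MySqlSink",
    "topics": "orders",
    "auto.create": "true"
  }
}
```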
The Confluent CLI is one of the tools that can be used to investigate connector failures. Its describe command provides detail similar to what the Confluent Cloud UI shows.
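For a sense of what that detail contains: a connector status payload follows the familiar Kafka Connect shape of an overall connector state plus per-task states, with a stack trace attached to any failed task. This sketch extracts failed-task traces from a hypothetical response (the connector name and trace text are illustrative).

```python
# Sketch: pulling failure details out of a connector status payload.
# The payload shape (connector state plus per-task state/trace) follows
# the Kafka Connect status format; the values below are hypothetical.

def failed_traces(status):
    """Return (task_id, trace) pairs for every task in a FAILED state."""
    return [
        (task["id"], task.get("trace", ""))
        for task in status.get("tasks", [])
        if task.get("state") == "FAILED"
    ]

# Hypothetical status response for the failed MySQL sink connector:
status = {
    "name": "mysql-sink-orders",
    "connector": {"state": "RUNNING"},
    "tasks": [
        {"id": 0, "state": "FAILED",
         "trace": "org.apache.kafka.connect.errors.ConnectException: "
                  "Input events must be a flat struct of primitive fields"},
    ],
}

for task_id, trace in failed_traces(status):
    print(f"task {task_id}: {trace.splitlines()[0]}")
```

The first line of the trace usually names the exception and is often enough to identify the failure; the rest of the stack trace helps pin down where in the connector it occurred.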
Confluent Cloud Connect log events are available on the connector events tab. They may provide additional detail regarding connector problems. In this example, the log event adds detail to the previous stack trace, noting that input events must be a flat struct of primitive fields.
Connect log events can also be accessed using the Confluent CLI consume command. Detailed information regarding how to accomplish this can be found by clicking the triple bar icon in the upper right corner of the Confluent Cloud UI and choosing the Connect log events menu.
Hi again. Danica Fine here. If you're using Kafka Connect, even a fully managed service like Confluent, you probably know that there's a lot that can happen under the hood. Let's dive into Kafka Connect troubleshooting and see how to debug Confluent-managed connectors. Rest assured that there are a number of tools that you can use to troubleshoot your managed connectors, but depending on the type of problem, one of these tools may be better suited than another. Let's walk through a scenario and see how each of these tools might be used to troubleshoot the problem.

Let's suppose that your sink connector is unable to process the messages from the Kafka topic it's configured to consume from. It could be that it's just a subset of these messages that it's unable to process, or it could be all of the messages from the topic, as shown in this example. To troubleshoot this, we'll start off by checking the dead letter queue tile in the connector overview window. This will navigate to the associated Kafka topic where the dead letter queue messages are being written.

In the dead letter queue topic view, select the messages tab, then drill into one of the messages and select the header tab. We can check the header information to identify the possible cause for the message ending up in the dead letter queue. In this example, we see that the connector wasn't configured to auto-create the destination table if it didn't already exist. To correct this, you would simply update the connector configuration, setting auto-create table to true. You could do so using the Confluent Cloud UI, the Connect API, or the Confluent CLI.

Now suppose we've resolved this error. We've updated the connector configuration to auto-create the destination table in the MySQL database. But when we run the connector, the Confluent Cloud UI overview now indicates that the connector failed. You can investigate this using several tools. Let's look into a few of them.
The Confluent CLI is always a great way to investigate connector failures. The describe command provides a similar level of detail to what you might find in the Confluent Cloud UI. Depending on where you're doing your troubleshooting, you may want to use REST. To get the same details, we could also use the Confluent Connect API status request, as shown here.

And finally, Confluent Cloud Connect log events are available on the connector events tab. They may provide additional detail regarding connector problems. In this example, the log event adds to the previous stack trace, noting that input events must be a flat struct of primitive fields. Connect log events can also be accessed using the Confluent CLI consume command. Detailed information on how to accomplish this can be found by clicking the triple bar icon in the upper right corner of the Confluent Cloud UI and choosing the Connect log events menu.

To better familiarize yourself with all of the troubleshooting tools available to you and your Confluent-managed connectors, I encourage you to take a look at all of them the next time you encounter an issue with Connect. I'll see you in the next module, where we'll review troubleshooting methods for self-managed Connect clusters.