Dead Letter Queue support for JDBC Connector exceptions

Hello Kafkateers!

It’s great to finally see the specialized community platform out there! Congrats everybody!

I wanted to raise the topic of Dead Letter Queue (DLQ) support for the standard JDBC Connector, especially when it operates in sink mode.

For those who are not familiar with DLQ concepts, you can catch up here: Kafka Connect Deep Dive – Error Handling and Dead Letter Queues | Confluent

So, before Kafka 2.6, only internal Kafka Connect exceptions (for example, errors during conversion or transformation of records) could be handled and rerouted to a DLQ topic.
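By the way, this framework-level DLQ is enabled purely through the connector's error-handling configuration. A minimal sketch in .properties form (the topic name and replication factor are just example values):

```properties
# Keep the task running on record-level errors in the converter/transform
# stages, instead of failing the whole connector task.
errors.tolerance=all

# Route failed records to this dead letter queue topic (example name).
errors.deadletterqueue.topic.name=dlq-jdbc-sink

# Replication factor 1 is only appropriate for dev/test clusters.
errors.deadletterqueue.topic.replication.factor=1

# Attach error context (exception class, message, stack trace) as record headers.
errors.deadletterqueue.context.headers.enable=true
```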

But Kafka 2.6 (which, as far as I know, has been included with Confluent Platform since version 6.x) added a brand-new API (KIP-610) for connectors to use this DLQ functionality themselves. It should enable connectors to re-route bad records (for example, ones that cause a constraint violation in the target database on a put operation) to a DLQ topic, so that the pipeline can continue to operate.
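To make it more concrete: as far as I understand KIP-610, a sink task obtains an errant-record reporter from its context and hands bad records to it. A minimal sketch (ExampleSinkTask and writeToDatabase are placeholders I made up; errantRecordReporter() and report() are the actual Kafka Connect API):

```java
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ExampleSinkTask extends SinkTask {

    private ErrantRecordReporter reporter;

    @Override
    public void start(Map<String, String> props) {
        try {
            // Returns null if the user has not enabled error reporting;
            // the catch below covers workers older than Kafka 2.6.
            reporter = context.errantRecordReporter();
        } catch (NoSuchMethodError | NoClassDefFoundError e) {
            reporter = null; // running on a pre-2.6 Connect runtime
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            try {
                writeToDatabase(record); // e.g. may throw on a constraint violation
            } catch (Exception e) {
                if (reporter != null) {
                    // Hand the bad record to the framework, which routes it to the
                    // configured DLQ topic, and keep processing the remaining records.
                    reporter.report(record, e);
                } else {
                    throw new ConnectException(e); // old behavior: fail the task
                }
            }
        }
    }

    // Hypothetical placeholder for the connector's real JDBC insert/upsert logic.
    private void writeToDatabase(SinkRecord record) {
    }

    @Override
    public void stop() {
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}
```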

I have already checked the latest version of the JDBC Connector (10.0.1) together with Confluent Platform 6.0.1, and unfortunately this does not seem to be implemented yet.

There is an open discussion on the topic on GitHub, and there was even pull request #890, which seemed to implement it. It was merged once, but then, for some reason, it was reverted (in pull request #966) :frowning:

Sorry for mentioning the pull requests by number without links - unfortunately, there is a 2-link limit per post.

Does anybody know why it was reverted? And is there any time frame for when we could expect this to be added to the JDBC Sink Connector's capabilities?

Thanks!


Welcome @whatsupbros! The link limit is just for your first post, sorry about that.

Let me see what I can find out for you about that issue & PR…

Thank you, @rmoff!

As it happens, the issue which I mentioned was in fact created by you on GitHub.

Just for convenience, here are the links to the PRs: the original one, which seemed to add the functionality to the connector, and the one that reverted the changes.


@rmoff, did you have a chance to check the issue and the PRs in the meantime?
It seems the post has raised interest in other people as well.

Hey @whatsupbros! There's work in progress on this; you can track the latest PR at https://github.com/confluentinc/kafka-connect-jdbc/pull/999.


Hey @whatsupbros. You are correct: there was an initial implementation that was merged and later reverted. The reasoning behind this was that we wished to provide more robust testing, as well as to expand the various scenarios/circumstances in which the error-reporting functionality can report errant records. The PR mentioned by @rmoff above is the new PR for this functionality, and once it is merged, it should be included in the next release of the connector. Let me know if you have any other questions!


Thanks @aakash for the clarifications!
Please keep us updated :wink:

Hi, @aakash!

Can you tell me whether the mentioned pull request was included in the latest release of the connector, version 10.1.0?

Unfortunately, there is no changelog for this version on the official page.

However, when you compare the two latest releases of the connector on GitHub, it seems that the commit for CCDB-192 was included, doesn't it?