It’s great to finally see the specialized community platform out there! Congrats everybody!
I wanted to raise the topic of Dead Letter Queue (DLQ) support for the Standard JDBC Connector, especially when it operates in sink mode.
For those who are not familiar with the DLQ concept, you can catch up here: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/
So, before Kafka 2.6, only errors occurring inside the Connect framework itself (e.g. converter/deserialization and SMT failures) could be handled and rerouted to a DLQ topic; failures inside the connector's own put logic could not.
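For reference, this framework-level DLQ behavior is enabled via the error-handling settings in the sink connector configuration. A minimal sketch (the topic name here is just an example):

```properties
# Tolerate bad records instead of failing the task
errors.tolerance=all
# Route failed records to this DLQ topic (example name)
errors.deadletterqueue.topic.name=dlq-jdbc-sink
# Use 1 only for single-broker dev clusters
errors.deadletterqueue.topic.replication.factor=1
# Attach error context (exception, source topic/partition/offset) as headers
errors.deadletterqueue.context.headers.enable=true
```

With only these settings, records that fail during conversion or transformation are rerouted, but a database-side failure during the sink's put still kills the task.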
But Kafka 2.6 (included in Confluent Platform since version 6.x, as far as I know) added a brand-new API for this, the errant record reporter from KIP-610, which connectors can use to report bad records to the DLQ themselves. This should enable connectors to reroute bad messages (e.g. ones that cause a constraint violation in the target database during a put operation) to a DLQ topic, so the pipeline can continue to operate.
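To illustrate what this would look like, here is a rough sketch of how a sink task can use the KIP-610 `ErrantRecordReporter` (this is not the JDBC connector's actual code; `writeToDatabase` is a hypothetical helper, and the snippet needs the Kafka Connect API on the classpath):

```java
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ExampleJdbcSinkTask extends SinkTask {

    private ErrantRecordReporter reporter;

    @Override
    public void start(Map<String, String> props) {
        try {
            // Returns null if the user has not configured a DLQ
            reporter = context.errantRecordReporter();
        } catch (NoSuchMethodError | NoClassDefFoundError e) {
            // Running on a pre-2.6 Connect runtime without the new API
            reporter = null;
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            try {
                writeToDatabase(record); // hypothetical DB write
            } catch (Exception e) {
                if (reporter != null) {
                    // Route the bad record to the DLQ and keep going
                    reporter.report(record, e);
                } else {
                    // No reporter available: fail the task as before
                    throw new ConnectException(e);
                }
            }
        }
    }

    private void writeToDatabase(SinkRecord record) {
        // placeholder for the real JDBC insert/upsert
    }

    @Override
    public void stop() {}

    @Override
    public String version() {
        return "0.0.1";
    }
}
```

The point is that the connector itself has to opt in and call `report()`; the framework cannot catch database-side errors on its own, which is why this needs explicit support in the JDBC connector.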
I already checked the latest version of the JDBC Connector (10.0.1) together with Confluent Platform 6.0.1, and unfortunately this does not seem to be implemented yet.
There is an open discussion on this topic on GitHub: pull request #890 appeared to implement it and was even merged once, but it was later reverted for some reason (in pull request #966).
Sorry for referring to the pull requests by number without links; unfortunately posts here are limited to two links.
Does anybody know why it was reverted? And is there a time frame for when we could expect this capability to be added to the JDBC Sink Connector?