On a sink connector, is it possible to avoid having messages go to the DLQ when target systems are unreachable?

Hi all,
I was going to reply in the topic below because it's related, but since that thread already seems closed I decided to start a new one.

Dead Letter Queue support for JDBC Connector Exceptions - Kafka Connect - Confluent Community

I'm experiencing some issues with the way records end up in the DLQ.
As far as I can tell, the only values Kafka Connect offers for handling connector errors are `errors.tolerance=none` and `errors.tolerance=all`, as described in the Confluent documentation (Kafka Connect Sink Configuration Properties for Confluent Platform).
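For reference, the error-handling part of my sink config looks roughly like this (a minimal sketch; the connector name, topics and DLQ topic are placeholders, and connection settings are trimmed):

```json
{
  "name": "my-jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:postgresql://db.internal:5432/mydb",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "orders-dlq",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "errors.deadletterqueue.context.headers.enable": "true",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true"
  }
}
```

With `none` (the default) nothing goes to the DLQ and any failure stops the task; with `all` every tolerated failure is skipped or routed to the DLQ. There doesn't seem to be anything in between.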

So my problem, or what I want to achieve, is that only failures during deserialization (the value converter) end up in the DLQ. Any error in the sink stage should make the connector fail.
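If I understand correctly, part of the reason this is hard is that since KIP-610 (Kafka 2.6) a sink connector can forward failures from its own put() stage to the DLQ through the errant record reporter, in addition to the converter and transform errors the framework handles itself. A rough sketch of that mechanism (the task class and writeToTarget are hypothetical, just to illustrate the API):

```java
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

import java.util.Collection;
import java.util.Map;

// Hypothetical sink task, only to illustrate the KIP-610 errant-record reporter.
public class ExampleSinkTask extends SinkTask {
    private ErrantRecordReporter reporter;

    @Override
    public void start(Map<String, String> props) {
        // May be null on workers older than 2.6.
        reporter = context.errantRecordReporter();
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            try {
                writeToTarget(record); // hypothetical write to the external system
            } catch (Exception e) {
                if (reporter != null) {
                    // With errors.tolerance=all this routes the failed record to
                    // the DLQ and the task keeps running; this is the behaviour
                    // I want to avoid for sink-stage (write) errors.
                    reporter.report(record, e);
                } else {
                    // Without a reporter the failure propagates and the task dies.
                    throw new ConnectException("write failed", e);
                }
            }
        }
    }

    private void writeToTarget(SinkRecord record) {
        // placeholder for the real write logic
    }

    @Override
    public void stop() {
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}
```

So if a connector reports write failures this way and `errors.tolerance=all` is set, those records land in the DLQ rather than failing the task, and I haven't found a framework-level switch to tolerate only converter errors.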

What I experience so far: if, for example, my JDBC sink writes to a target where there are problems with the tablespace size, all the messages end up discarded into the DLQ. I wanted to avoid this and ensure that in those cases the connector would fail.
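At least for diagnosing it, `errors.deadletterqueue.context.headers.enable=true` makes Connect stamp `__connect.errors.*` headers (stage, exception class, exception message, etc.) on every DLQ record. Something like this shows them (assuming a console consumer recent enough to support `print.headers`):

```sh
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic orders-dlq --from-beginning \
  --property print.headers=true
```

Records rejected by the converter and records rejected during the write should show different values in the `__connect.errors.stage` header, so at least I can tell the two cases apart.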

I also experience a similar problem with the S3 sink when it is connected to a target that uses MinIO or some other on-premises object storage solution; a sketch of that config is below.
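For context, roughly how that connector points at the on-prem endpoint (a minimal fragment; bucket name and endpoint are placeholders, and format/storage settings are omitted):

```json
{
  "name": "my-s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "orders",
    "s3.bucket.name": "my-bucket",
    "store.url": "http://minio.internal:9000",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "orders-dlq"
  }
}
```

When that endpoint misbehaves, records again end up in the DLQ instead of the task failing.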

Is there a way to achieve this?
I could not find any reference to one. The linked discussion mentions `behavior.on.malformed.documents` for the Elasticsearch sink, but I was wondering what the equivalent is for the JDBC sink, for example (see the fragment below).
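For comparison, this is what that connector-level switch looks like on the Elasticsearch sink, where (per its docs) the valid values are `ignore`, `warn` and `fail` (a minimal fragment, other required settings omitted):

```json
{
  "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
  "behavior.on.malformed.documents": "fail"
}
```

An equivalent switch on the JDBC sink, failing the task on write errors while still letting converter errors go to the DLQ, is exactly what I'm after.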

I was also reading this post to better understand error tolerance in Kafka Connect: Error tolerance in Kafka Connect (I) – El Javi. But I think it only describes which kinds of errors are tolerated and which make the connector fail, without tying that to DLQ management, and I don't see what I want as a configurable option there either.
