I am using Confluent Community version 5.5.2 with the JDBC sink connector to push data from topic 'A' to an Oracle DB.
If there is a primary key constraint violation, the connector task fails after 'n' retries. It doesn't send the message to the configured DLQ topic and doesn't commit the offset, which would let the task restart and resume consumption from the next event.
Is this supported? If not, how can I achieve this functionality?
Messages are forwarded to the DLQ only when the error occurs during the transform or convert stage. In your case, because the connector failed while pushing the data to the database (the put stage), the record is not sent to the DLQ. You can read more on this here: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/
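For reference, DLQ routing only kicks in when error tolerance is enabled, and as noted above it only covers convert/transform failures. A minimal sketch of the relevant connector properties (the topic name and replication factor are placeholders for your environment):

```properties
# Route records that fail during convert/transform to a DLQ topic.
# NOTE: these settings do NOT catch SQL errors raised in the put stage.
errors.tolerance=all
errors.deadletterqueue.topic.name=dlq-topic-a        # placeholder topic name
errors.deadletterqueue.topic.replication.factor=1    # 1 is enough for a test cluster
errors.deadletterqueue.context.headers.enable=true   # attach failure context as headers
```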
Regarding your issue, you could try setting insert.mode=upsert in the JDBC sink connector config, so that the existing row is updated instead of the insert failing with a primary key constraint violation.
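As a sketch, the upsert configuration might look like the following. Upsert requires the connector to know the primary key, so pk.mode and pk.fields must be set too; the key field name `id` here is a hypothetical example:

```properties
# Update the existing row on key collision instead of failing the insert.
insert.mode=upsert
pk.mode=record_key    # derive the primary key from the Kafka record key
pk.fields=id          # hypothetical key column name
```

Note that with upsert the conflicting record silently overwrites the existing row, which may or may not be the behavior you want for genuine duplicates.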
In general, is there a plan to send records that fail with a SQL exception to the DLQ, maybe in a future release? If a SQLException occurs, the connector task fails and doesn't commit the offset, so even a retry doesn't work. This is tricky in production scenarios.