Hi,
I am using Confluent Community edition 5.5.2 with the JDBC sink connector to write data from topic ‘A’ to an Oracle DB.
If there is a primary key constraint violation, the connector task fails after ‘n’ retries. It doesn’t send the failed message to the configured DLQ topic, and it doesn’t commit the offset, which would allow restarting the task and resuming consumption from the next event.
Any pointers on whether this is supported, and if not, how I can achieve this functionality?
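For reference, my error-handling config looks roughly like this (topic name and timings are illustrative, using the standard Kafka Connect error-handling properties):

```
# Tolerate errors and route failed records to a DLQ topic
errors.tolerance=all
errors.deadletterqueue.topic.name=dlq-topic-A
errors.deadletterqueue.context.headers.enable=true
# Retry behaviour before the task gives up
errors.retry.timeout=60000
errors.retry.delay.max.ms=10000
```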
Thanks
Dapinder
Hi Dapinder,
Messages are forwarded to the DLQ only when the error occurs during the convert/transform stage. In your case, since the connector failed while pushing data to the database (the put stage), the message is not sent to the DLQ. You can read more about this here: Kafka Connect Deep Dive – Error Handling and Dead Letter Queues | Confluent
Regarding your issue, you could try setting insert.mode=upsert in the JDBC Sink Connector config, so that the existing row is updated instead of raising a primary key constraint violation.
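A minimal sketch of the relevant sink connector properties, assuming the primary key is a column named ID carried in the record value (connection URL and names are illustrative):

```
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
connection.url=jdbc:oracle:thin:@//db-host:1521/ORCL
topics=A
# Issue an upsert (a MERGE on Oracle) instead of a plain INSERT
insert.mode=upsert
# upsert requires a key definition; here the key columns come from the record value
pk.mode=record_value
pk.fields=ID
```

Note that insert.mode=upsert needs pk.mode set to something other than none, so the connector knows which columns form the key.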
Thanks
Ravi
Thanks Ravi,
In general, is there a plan to send SQL exception records to the DLQ, maybe in a future release? Because if there is a SQL exception, the connector task fails and doesn’t commit the offset, so even a retry doesn’t work. This is tricky in production scenarios.