Dead Letter Queue

Scenario: I am using the Kafka HTTP connector to update addresses in a SQL Server database. I have configured it so that after 3 retries, a message is sent to the dead letter queue.
After 'Address1' landed in the dead letter queue, another payload, 'Address2', updated the same record successfully. So 'Address1' is now stale. There is no timestamp in the source or sink to identify whether a message is stale and should be discarded.
Question: What mechanism is in place to identify stale messages in the dead letter queue when I re-process all of its messages in a batch at the end of the day?
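For illustration, one common approach (not something Kafka Connect does out of the box) is to attach an ordering attribute to every message — the Kafka record timestamp, a source sequence number, or a last-modified column in the sink — and filter the DLQ batch against the last successfully applied value per key before replaying. A minimal sketch, assuming each DLQ message carries its original timestamp and that the sink exposes a hypothetical per-key `last_applied` lookup:

```python
# Hypothetical sketch: discard DLQ messages that are older than the last
# successfully applied update for the same key. The message shape, the
# "timestamp" field, and the last_applied lookup are all assumptions for
# illustration, not part of the Kafka Connect DLQ contract.

def filter_stale(dlq_messages, last_applied):
    """Keep only DLQ messages newer than the last applied update per key."""
    fresh = []
    for msg in dlq_messages:
        if msg["timestamp"] > last_applied.get(msg["key"], float("-inf")):
            fresh.append(msg)
    return fresh

# Example mirroring the scenario: Address1 failed at t=100,
# Address2 later updated the same record successfully at t=200.
dlq = [{"key": "record-42", "payload": "Address1", "timestamp": 100}]
applied = {"record-42": 200}  # e.g. read from a last_modified column in the sink
print(filter_stale(dlq, applied))  # Address1 is stale, so nothing is replayed
```

Without such an attribute on either side, there is no general mechanism to tell a stale DLQ message from a fresh one; the DLQ only records that delivery failed, not whether a newer write has since superseded it.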