Trigger Event after JDBC Sink Connector Record Completes

Hey, I am trying to create a fork of confluentinc/kafka-connect-jdbc to add a feature. The feature would let the JDBC sink connector publish a follow-up event to another topic after each record finishes sinking, and that topic would carry both success and failure outcomes.

I saw that Kafka Connect has a semi-similar feature, the dead letter queue; however, that doesn’t work in my case since I also need to trigger the next event on success. I would just need to know whether the record succeeded, plus the Kafka record key.
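
To make that concrete, the downstream side would only need to read a key plus a success/failure flag. Here is a rough sketch of such a consumer (the topic name jdbc-sink-completions and the plain-string SUCCESS/FAILURE values are just assumptions on my part, not anything the connector provides today):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch of a downstream consumer of the completion topic: each event carries
// the original Kafka record key and a SUCCESS/FAILURE flag, nothing more.
public class CompletionListener {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "completion-listener");
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(List.of("jdbc-sink-completions"));
      while (true) {
        for (ConsumerRecord<String, String> event : consumer.poll(Duration.ofSeconds(1))) {
          if ("SUCCESS".equals(event.value())) {
            // trigger the next step for the record identified by event.key()
          } else {
            // handle the failed record identified by event.key()
          }
        }
      }
    }
  }
}
```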

Would this be the right approach?

Another idea I had would be to create a source connector on the database that the sink connector writes to. The source connector would pick up rows whose last-modified timestamp has changed or whose primary key is new, since I am using timestamp+incrementing mode. The only issue with that approach is that I would prefer to hit the database as little as possible, and it seems more logical to have the JDBC sink connector just report on itself.
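
For comparison, that polling alternative would amount to a second JDBC source connector pointed at the table the sink writes into. A rough sketch of that config (the connection URL, table, column names, and topic prefix are placeholders, not my real setup):

```java
import java.util.Map;

// Rough sketch of the polling alternative: a JDBC source connector in
// timestamp+incrementing mode watching the table the sink writes to.
// Connection URL, table, columns, and topic prefix are placeholders.
public class PollingAlternative {
  static final Map<String, String> SOURCE_CONFIG = Map.of(
      "connector.class", "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url", "jdbc:postgresql://db:5432/mydb",
      "mode", "timestamp+incrementing",
      "timestamp.column.name", "last_modified",
      "incrementing.column.name", "id",
      "table.whitelist", "orders",
      "topic.prefix", "orders-changed-",
      "poll.interval.ms", "5000");

  public static void main(String[] args) {
    SOURCE_CONFIG.forEach((key, value) -> System.out.println(key + "=" + value));
  }
}
```

Every poll still runs a query against that table, though, which is exactly the extra database load I would rather avoid.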

I might be getting ahead of myself, but going back to modifying the JDBC connector, all I would have to do is…

  • modify the connector config to accept the target topic for completion events
  • modify the connector to start a Kafka producer
  • modify the connector task to use that producer to write a record to the target topic for each sunk record (rough sketch below)
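
To show the shape of the change, here is a minimal sketch, not working code from my fork: it subclasses the stock JdbcSinkTask instead of editing it in place, the config keys (completion.topic, completion.bootstrap.servers) are made up, and the real change would also need the connector class to declare those options in its ConfigDef.

```java
import java.util.Collection;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.connect.sink.SinkRecord;

import io.confluent.connect.jdbc.sink.JdbcSinkTask;

/** Sketch of a JDBC sink task that reports the outcome of every record to a completion topic. */
public class ReportingJdbcSinkTask extends JdbcSinkTask {

  // Hypothetical new config keys the connector would accept.
  public static final String COMPLETION_TOPIC_CONFIG = "completion.topic";
  public static final String COMPLETION_BOOTSTRAP_CONFIG = "completion.bootstrap.servers";

  private KafkaProducer<String, String> producer;
  private String completionTopic;

  @Override
  public void start(Map<String, String> props) {
    super.start(props);
    completionTopic = props.get(COMPLETION_TOPIC_CONFIG);

    // Plain producer, separate from the clients the Connect framework manages.
    Properties producerProps = new Properties();
    producerProps.put("bootstrap.servers", props.get(COMPLETION_BOOTSTRAP_CONFIG));
    producerProps.put("key.serializer", StringSerializer.class.getName());
    producerProps.put("value.serializer", StringSerializer.class.getName());
    producer = new KafkaProducer<>(producerProps);
  }

  @Override
  public void put(Collection<SinkRecord> records) {
    try {
      super.put(records);            // let the real JDBC sink write the batch
      report(records, "SUCCESS");
    } catch (RuntimeException e) {
      report(records, "FAILURE");
      throw e;                       // keep the normal retry/error handling
    }
  }

  // One completion event per record: original key as the key, outcome as the value.
  private void report(Collection<SinkRecord> records, String outcome) {
    for (SinkRecord record : records) {
      String key = record.key() == null ? null : record.key().toString();
      producer.send(new ProducerRecord<>(completionTopic, key, outcome));
    }
  }

  @Override
  public void stop() {
    if (producer != null) {
      producer.close();
    }
    super.stop();
  }
}
```

One caveat I can already see: put() handles a whole batch, so on failure this marks every record in the batch as FAILURE even if a single row caused it, and Connect’s retries could produce duplicate completion events.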

At a high level, it doesn’t seem that hard. I am just having difficulty setting up the development environment, but I’ll start another topic about that after I confirm whether this is a sound approach.

I finally figured out how to compile the Confluent JDBC connector and made some modifications to implement this, so if anyone stumbles across this and is interested, here is what I did:
