Sink from tables with no primary key

I am sinking data from a table with no primary key, and I am getting the error below.
Any ideas on how to resolve it?

Caused by: org.apache.kafka.connect.errors.ConnectException: Sink connector 'jdbc-debe-azure-sink-itemdata' is configured with 'delete.enabled=false' and 'pk.mode=record_value' and therefore requires records with a non-null Struct value and non-null Struct schema, but found record at (topic='itemdata',partition=0,offset=280,timestamp=1679535912437) with a null value and null value schema.


Does anyone know what I need to do to fix this? I am still getting the same error:

Caused by: org.apache.kafka.connect.errors.ConnectException: Sink connector 'jdbc-debe-azure-sink-wflog-1' is configured with 'delete.enabled=true' and 'pk.mode=record_key' and therefore requires records with a non-null key and non-null Struct or primitive key schema, but found record at (topic='wflog',partition=5,offset=32,timestamp=1679664389234) with a null key and null key schema.

It looks like one of your errors uses record_value and the other record_key, but both will run into the same problem, since you presumably don't have a key to begin with on a table without a PK.

I suspect your problem is tombstone records, so you will need to either drop them or change the tombstone handling to ignore with an SMT. Here is how you would do it with Confluent's TombstoneHandler SMT:

"transforms": "ignoreTS",
"transforms.ignoreTS.type": "io.confluent.connect.transforms.TombstoneHandler",
"transforms.ignoreTS.behavior": "ignore"
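For context, here is a sketch of how those SMT properties could sit inside a full JDBC sink connector config. The connector name and topic are taken from the first error in the thread; the connection URL and the other settings (pk.mode=none for a table with no PK, delete.enabled=false) are assumptions for illustration, not a definitive setup:

```json
{
  "name": "jdbc-debe-azure-sink-itemdata",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "itemdata",
    "connection.url": "jdbc:sqlserver://...",
    "insert.mode": "insert",
    "pk.mode": "none",
    "delete.enabled": "false",
    "transforms": "ignoreTS",
    "transforms.ignoreTS.type": "io.confluent.connect.transforms.TombstoneHandler",
    "transforms.ignoreTS.behavior": "ignore"
  }
}
```

With "behavior": "ignore", the SMT silently skips null-value (tombstone) records instead of passing them to the sink, which is what triggers the "null value and null value schema" error when the sink cannot handle deletes.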
