We are running a JDBC sink connector in standalone mode. It consumes from 2 topics with 12 partitions each and has tasks.max = 12.
But consumption from one particular partition appears stuck: its current offset is not increasing, and as a result the lag for that partition keeps growing.
The task assigned to that partition then leaves the consumer group due to a poll timeout. After rebalancing, the new task assigned to that partition leaves as well, and after some time no tasks remain to consume from those topics.
I have tried setting errors.tolerance to all and have also increased the batch size and the poll timeout, but nothing works; the same problem persists.
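For context, here is a rough sketch of the kind of configuration I mean. The connector class assumes the Confluent JDBC sink, and the topic names, connection URL, and values are placeholders rather than my exact settings:

```properties
# Connector properties passed to connect-standalone alongside the worker
# properties file; names and values here are illustrative only.
name=jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=12

# Two 12-partition topics (placeholder names)
topics=topic-a,topic-b

# Placeholder connection details
connection.url=jdbc:oracle:thin:@//db-host:1521/SERVICE
connection.user=connect_user
connection.password=********

insert.mode=upsert
pk.mode=record_key
auto.create=false

# Things already tried (values illustrative):
errors.tolerance=all
batch.size=6000
consumer.override.max.poll.interval.ms=600000
```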
What version is the Connect cluster, and what version is the connector?
Typically I have found that increasing the batch size for a sink is the opposite of what you want to do, in that you are adding to the burden of the connection…
decrease the batch size to 1 and see if you can troubleshoot the error (the config sketch after these steps shows the settings together)
set errors.tolerance to “none” and configure the dead letter queue topic settings
set logging level to trace
look at the “upsert” queries in the database and get an explain plan; you want to make sure a full table scan is not happening. There was an issue with the connector and Oracle (I think in the 10.0.3 timeframe) where the PK index wasn’t being used: because of a fix for CLOBs the connector used setNString instead of setString, and setNString on the PK column prevented Oracle from using the index.
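Roughly what those settings look like as connector config. This is just a sketch; the DLQ topic name, replication factor, and log4j logger name are assumptions you would adapt to your environment:

```properties
# Troubleshooting settings at the connector level
batch.size=1
errors.tolerance=none

# Dead letter queue settings (example topic name and replication factor)
errors.deadletterqueue.topic.name=dlq-jdbc-sink
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true

# Also log the failing records and their context
errors.log.enable=true
errors.log.include.messages=true

# Trace logging for the sink, e.g. in connect-log4j.properties
# (logger name assumes the Confluent JDBC connector):
# log4j.logger.io.confluent.connect.jdbc=TRACE
```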