Kafka JDBC Sink connector (Oracle) slowness after certain load

Hello,

We’re using the Confluent Kafka JDBC sink connector to insert records into an Oracle DB.
The source topic has 15 million records. Once the task has inserted about 12 million of them, the connector slows down dramatically, to roughly 500 records per minute, and the remaining records are taking far too long to insert.

Configuration below:

{
    "name": "jdbc-connect-oracle-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "5",
        "topics": "new_titles_details",
        "connection.url": "jdbc:oracle:thin:@//xxxxxxx:xxxxxxx/xxxxxxx",
        "connection.user": "xxxxxx",
        "connection.password": "xxxxxxx",
        "table.name.format": "TITLE_DETAILS",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.storage.StringConverter",
        "errors.tolerance": "all",
        "errors.log.enable": "true",
        "errors.retry.timeout": "0",
        "value.converter.schemas.enable": "false",
        "pk.mode": "none",
        "insert.mode": "insert",
        "auto.create": "false",
        "auto.evolve": "false",
        "transforms": "t1",
        "transforms.t1.type": "org.apache.kafka.connect.transforms.HoistField$Value",
        "transforms.t1.field": "TITLE_PING"
    }
}

The source topic has 20 partitions, and we tried increasing tasks.max to 20, but we still hit the same performance issue after 12 million records.

The Connect worker Docker image version we’re using is 5.5.0
Kafka version: 2.0.0-cp1
And the worker is connecting to Confluent Community version 5.5.2

Any help would be appreciated.
Thanks!!!

Hey @Jay, this looks like SQL insert degradation to me: as the table grows, each insert gets more expensive if the database has extra work to do per row.
Do you have indexes or foreign key constraints on the connector's target table?

If there are foreign keys, make sure you have all the indexes needed for efficient lookups.
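As a starting point, you can ask Oracle's data dictionary which foreign key columns on the target table are not covered by any index. This is a rough sketch, assuming the table name from the connector config (`TITLE_DETAILS`) and that you have access to the `ALL_*` dictionary views; adjust owner filters for your schema:

```sql
-- Hypothetical check: foreign key columns on TITLE_DETAILS
-- that no index covers at the same column position.
SELECT c.constraint_name,
       cc.column_name,
       cc.position
FROM   all_constraints  c
JOIN   all_cons_columns cc
       ON  cc.owner           = c.owner
       AND cc.constraint_name = c.constraint_name
WHERE  c.table_name      = 'TITLE_DETAILS'
AND    c.constraint_type = 'R'   -- 'R' = referential (foreign key)
AND NOT EXISTS (
        SELECT 1
        FROM   all_ind_columns ic
        WHERE  ic.table_owner     = cc.owner
        AND    ic.table_name      = cc.table_name
        AND    ic.column_name     = cc.column_name
        AND    ic.column_position = cc.position
);
```

Any rows returned are candidates for a supporting index. It's also worth checking `insert.mode`: with `"insert"` and no primary key handling, every row is a plain INSERT, so index maintenance and constraint checks are the usual suspects when throughput degrades as the table grows.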

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.