Debezium MySQL task failed

I started the Docker container (cp-kafka-connect-base:7.0.1) and installed the self-managed connector (debezium-connector-mysql:1.7.1).

Docker container started fine.
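For context, the worker setup looks roughly like this (the container name is taken from the worker_id in the status output further down; the group id, internal topic names and the worker's own Confluent Cloud SASL settings are placeholders or omitted here):

docker run -d --name 3es-cconnect-mysql -p 8083:8083 \
  -e CONNECT_BOOTSTRAP_SERVERS=pkc-4n66v.australiaeast.azure.confluent.cloud:9092 \
  -e CONNECT_GROUP_ID=connect-cluster \
  -e CONNECT_CONFIG_STORAGE_TOPIC=connect-configs \
  -e CONNECT_OFFSET_STORAGE_TOPIC=connect-offsets \
  -e CONNECT_STATUS_STORAGE_TOPIC=connect-status \
  -e CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e CONNECT_REST_ADVERTISED_HOST_NAME=3es-cconnect-mysql \
  -e CONNECT_PLUGIN_PATH=/usr/share/java,/usr/share/confluent-hub-components \
  confluentinc/cp-kafka-connect-base:7.0.1

# install the Debezium MySQL plugin and restart the worker to load it
docker exec 3es-cconnect-mysql \
  confluent-hub install --no-prompt debezium/debezium-connector-mysql:1.7.1
docker restart 3es-cconnect-mysql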

When I try to create the connector with the configuration below:

{
  "name": "family-mysql-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "3es-mysql",
    "database.port": "3306",
    "database.user": "cconnect",
    "database.password": "xxxxxxx",
    "database.server.id": "5500",
    "database.allowPublicKeyRetrieval":"true",
    "database.server.name": "dev",
    "database.include.list": "family",
    "database.history.kafka.bootstrap.servers": "pkc-4n66v.australiaeast.azure.confluent.cloud:9092",
    "database.history.kafka.topic": "dev.dbhistory.family" ,
    "include.schema.changes": "true",
    "tasks.max": "1"
  }
}
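For reference, I register the connector by POSTing that JSON to the Connect REST API (the file name here is just an example, and localhost:8083 assumes the command runs on the worker host):

curl -s -X POST -H "Content-Type: application/json" \
  --data @family-mysql-connector.json \
  http://localhost:8083/connectors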

Container logs:

[2022-02-28 06:09:52,608] WARN [family-mysql-connector|task-0] [Consumer clientId=dev-dbhistory, groupId=dev-dbhistory] Bootstrap broker pkc-4n66v.australiaeast.azure.confluent.cloud:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1050)


[2022-02-28 06:10:10,849] INFO [family-mysql-connector|task-0|offsets] WorkerSourceTask{id=family-mysql-connector-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:484)


[2022-02-28 06:10:52,539] ERROR [family-mysql-connector|task-0] WorkerSourceTask{id=family-mysql-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:195)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

When I check the connector status, I get:

{
  "name": "family-mysql-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "3es-cconnect-mysql:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "3es-cconnect-mysql:8083",
      "trace": "org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata\n"
    }
  ],
  "type": "source"
}
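That status comes from the usual REST endpoint, for example:

curl -s http://localhost:8083/connectors/family-mysql-connector/status

So the connector itself is RUNNING, but task 0 fails with the same TimeoutException that shows up in the worker log.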

Can someone help me find the root cause?

I debugged this myself and found the fix. The root cause is that the database history topic uses its own Kafka producer and consumer inside the connector, and those clients do not inherit the worker's security settings, so they were connecting to the Confluent Cloud bootstrap server without SASL_SSL and timing out while fetching topic metadata.

Fix: add the entries below to the connector configuration (credentials redacted):

"database.history.kafka.topic": "xx",
"database.history.kafka.bootstrap.servers": "xxx",
"database.history.consumer.security.protocol": "SASL_SSL",
"database.history.consumer.ssl.endpoint.identification.algorithm": "https",
"database.history.consumer.sasl.mechanism": "PLAIN",
"database.history.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"xx\" password=\"xx/vF+xx/x+x\";",
"database.history.producer.security.protocol": "SASL_SSL",
"database.history.producer.ssl.endpoint.identification.algorithm": "https",
"database.history.producer.sasl.mechanism": "PLAIN",
"database.history.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"632HZGOZTJVRTFNU\" password=\"xx/vF+xx/x+x\";"
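One way to apply these without deleting and recreating the connector is to PUT the complete config (the original settings plus the entries above) to the config endpoint. The file name is illustrative, and note that this endpoint expects only the contents of the "config" object, not the outer {"name": ..., "config": ...} wrapper:

curl -s -X PUT -H "Content-Type: application/json" \
  --data @family-mysql-connector-config.json \
  http://localhost:8083/connectors/family-mysql-connector/config

Updating the config triggers a reconfiguration; the failed task can also be restarted explicitly with POST /connectors/family-mysql-connector/tasks/0/restart.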
