Debezium: Lock request time out period exceeded

 INFO [sql-server-connection|task-0] Snapshot - Final stage (io.debezium.pipeline.source.AbstractSnapshotChangeEventSource:88)
[2022-03-08 07:15:47,766] INFO [sql-server-connection|task-0] Removing locking timeout (io.debezium.connector.sqlserver.SqlServerSnapshotChangeEventSource:244)
[2022-03-08 07:15:47,770] ERROR [sql-server-connection|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:31)
io.debezium.DebeziumException: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:85)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.doSnapshot(ChangeEventSourceCoordinator.java:153)
        at io.debezium.connector.sqlserver.SqlServerChangeEventSourceCoordinator.executeChangeEventSources(SqlServerChangeEventSourceCoordinator.java:69)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:108)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1621)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:868)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:768)
        at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2935)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:248)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:223)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:693)
        at io.debezium.connector.sqlserver.SqlServerSnapshotChangeEventSource.lockTablesForSchemaSnapshot(SqlServerSnapshotChangeEventSource.java:142)
        at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:106)
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:76)
        ... 8 more
[2022-03-08 07:15:47,776] INFO [sql-server-connection|task-0] Connected metrics set to 'false' (io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics:70)
[2022-03-08 07:15:48,017] INFO [sql-server-connection|task-0] WorkerSourceTask{id=sql-server-connection-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:484)
[2022-03-08 07:15:48,019] ERROR [sql-server-connection|task-0] WorkerSourceTask{id=sql-server-connection-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:195)
org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
        at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:115)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: io.debezium.DebeziumException: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:85)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.doSnapshot(ChangeEventSourceCoordinator.java:153)
        at io.debezium.connector.sqlserver.SqlServerChangeEventSourceCoordinator.executeChangeEventSources(SqlServerChangeEventSourceCoordinator.java:69)
        at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:108)
        ... 5 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1621)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:868)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:768)
        at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2935)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:248)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:223)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:693)
        at io.debezium.connector.sqlserver.SqlServerSnapshotChangeEventSource.lockTablesForSchemaSnapshot(SqlServerSnapshotChangeEventSource.java:142)
        at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:106)
        at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:76)
        ... 8 more
[2022-03-08 07:15:48,022] INFO [sql-server-connection|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:238)

Hi @nickbob. Please also mention the problem that you’re facing and what you’re trying to do.

I am getting “Lock request time out period exceeded” while running connect-standalone with the worker and connector files. How can I resolve this issue?

Hi @nickbob, just like yesterday, please provide full details to help people answer your question :slight_smile: Thanks

All the connector and worker properties are the same as yesterday.

connector.properties

name=sql-server-connection
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname=POCDB
database.port=51523
database.user=poc_dev_user
database.password=db_user_poc
database.dbname=user_poc
database.server.name=poc
table.whitelist=dbo.mbr,dbo.mbr_enc
database.history.kafka.bootstrap.servers=zoo1:9092,zoo2:9092,zoo3:9092
database.history.kafka.topic=dbhistory.sqlsever
decimal.handling.mode=string

worker.properties

offset.storage.file.filename=/tmp/connect.offsets
bootstrap.servers=zoo1:9092,zoo2:9092,zoo3:9092
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
auto.offset.reset=latest
offset.flush.interval.ms=10000
rest.port=10082
rest.host.name=zoo2
rest.advertised.port=10082
rest.advertised.host.name=zoo2
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
key.converter.schemas.enable=true
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
plugin.path=/home/gokafka/
group.id=sre1

I ran connect-standalone with the worker and connector files and I am getting the lock timeout exceeded error. How can I resolve this issue?

When you run the connector for the first time it will take a snapshot of all of the tables. To do this it takes locks on the tables being snapshotted, and it looks like this is failing:

SQLServerException: Lock request time out period exceeded

By default it uses repeatable_read, but you can change this in the config via snapshot.isolation.mode. I would speak to your DBA to establish what other work is happening on the database at the same time that could be preventing the lock from being taken, and whether a different isolation mode is more appropriate to use.
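For reference, a sketch of the relevant connector properties (names and values per the Debezium SQL Server connector docs; check against your connector version):

# Valid values: exclusive, repeatable_read (default), snapshot,
# read_committed, read_uncommitted
snapshot.isolation.mode=snapshot
# Max time in ms to wait for table locks during the snapshot (default 10000)
snapshot.lock.timeout.ms=30000

The snapshot mode relies on SQL Server’s SNAPSHOT isolation and avoids holding table locks during the initial snapshot, but snapshot isolation must first be enabled on the source database.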

In that case, link to your post from yesterday and don’t rely on people being telepathic :stuck_out_tongue:

Hi @rmoff, as you mentioned earlier, I have implemented a different config for taking the snapshot: I changed snapshot.isolation.mode to read_committed and increased "snapshot.lock.timeout.ms" to "30000".
But I am still sometimes facing the SQL lock timeout period exceeded exception.

I am getting this exception while trying to deploy the connector in both standalone and distributed mode. As a result, the connector’s logs show that it needs to be restarted manually, and the connector state becomes FAILED.
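A quick way to see what was holding the conflicting lock while the snapshot hung is to query SQL Server’s blocking DMVs; a sketch using standard system views (requires VIEW SERVER STATE permission):

-- List blocked sessions and who is blocking them, with the running SQL
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;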

Worker.properties file

bootstrap.servers=zoo1:9092,zoo2:9092,zoo3:9092
group.id=Test-Distributed
offset.storage.topic=connect-offsets-worker2
config.storage.topic=connect-configs-worker2
status.storage.topic=connect-status-worker2
rest.port=8085
rest.advertised.host.name=zoo2
rest.advertised.port=8085
listeners=http://zoo2:8085
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter.schemas.enable=false
offset.flush.interval.ms=10000
status.storage.flush.interval.ms=10001
plugin.path=/home/gokafka/apache_kafka/,/home/gokafka/apache_kafka/kafka_2.13-3.2.1/libs

connector.json file

{
  "name": "sql-connect",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "<DB HOSTNAME>",
    "database.port": "<DB PORT>",
    "database.dbname": "<DB NAME>",
    "database.user": "<DB USERNAME>",
    "database.password": "<DB PASSWORD>",
    "table.include.list": "dbo.user,dbo.emp",
    "database.server.name": "POC.dev",
    "database.history.kafka.bootstrap.servers": "zoo1:9092,zoo2:9092,zoo3:9092",
    "database.history.kafka.topic": "dbhistory.demo",
    "schema.ignore": "true",
    "decimal.handling.mode": "string",
    "transforms":"unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "time.precision.mode": "connect",
    "transforms.unwrap.drop.tombstones": "false",
    "snapshot.isolation.mode": "read_committed",
    "poll.interval.ms": "10000",
    "snapshot.lock.timeout.ms": "30000",
  }
}

I have observed that sometimes in standalone mode it works seamlessly in one shot and is able to take a proper snapshot of the desired table. But if I try the same thing in distributed mode it gives the **Lock request time out period exceeded** error.

I am facing this issue in the dev DB; I have not yet gone to the production DB.
What is the recommendation from the Debezium side for a production environment to avoid this ‘Lock request time out period exceeded’ issue? In a production DB, getting downtime to take a snapshot of the tables is hard.
Please let us know what configuration we can use to avoid this in a production environment, so that it can take a successful snapshot of the tables without DB downtime.
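For reference, the lock-free option mentioned above is the connector’s snapshot isolation mode, which relies on SQL Server’s SNAPSHOT isolation being enabled on the source database. A sketch, using the dev database name from earlier in this thread (run once per database; requires ALTER DATABASE permission):

-- Enable SNAPSHOT isolation so the snapshot can read without table locks
ALTER DATABASE user_poc SET ALLOW_SNAPSHOT_ISOLATION ON;

With that in place, setting "snapshot.isolation.mode": "snapshot" in the connector config should avoid the table-locking phase of the initial snapshot, though whether that is appropriate for your workload is worth discussing with your DBA.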