Here is a slightly longer log excerpt. Yes, it is taken from Connect.log:
[2022-02-09 11:23:59,527] INFO [SqlServer-SQL-TT|task-0] For table 'Dev01_Tessa.dbo.DynamicRoles' using select statement: 'SELECT [ID], [Name], [SqlText], [SchedulingTypeID], [CronScheduling], [PeriodScheduling], [LastErrorDate], [LastErrorText], [LastSuccessfulRecalcDate], [ScheduleAtLaunch] FROM [Dev01_Tessa].[dbo].[DynamicRoles]' (io.debezium.relational.RelationalSnapshotChangeEventSource:348)
[2022-02-09 11:23:59,539] INFO [SqlServer-SQL-TT|task-0] Finished exporting 8 records for table 'Dev01_Tessa.dbo.DynamicRoles'; total duration '00:00:00.044' (io.debezium.relational.RelationalSnapshotChangeEventSource:393)
[2022-02-09 11:23:59,539] INFO [SqlServer-SQL-TT|task-0] Exporting data from table 'Dev01_Tessa.dbo.Errors' (66 of 428 tables) (io.debezium.relational.RelationalSnapshotChangeEventSource:340)
[2022-02-09 11:23:59,541] INFO [SqlServer-SQL-TT|task-0] For table 'Dev01_Tessa.dbo.Errors' using select statement: 'SELECT [ID], [ActionID], [TypeID], [TypeCaption], [CardID], [CardDigest], [Request], [Category], [Text], [Modified], [ModifiedByID], [ModifiedByName] FROM [Dev01_Tessa].[dbo].[Errors]' (io.debezium.relational.RelationalSnapshotChangeEventSource:348)
[2022-02-09 11:23:59,543] INFO [SqlServer-SQL-TT|task-0] Finished exporting 0 records for table 'Dev01_Tessa.dbo.Errors'; total duration '00:00:00.004' (io.debezium.relational.RelationalSnapshotChangeEventSource:393)
[2022-02-09 11:23:59,543] INFO [SqlServer-SQL-TT|task-0] Exporting data from table 'Dev01_Tessa.dbo.FileCategories' (67 of 428 tables) (io.debezium.relational.RelationalSnapshotChangeEventSource:340)
[2022-02-09 11:23:59,544] INFO [SqlServer-SQL-TT|task-0] For table 'Dev01_Tessa.dbo.FileCategories' using select statement: 'SELECT [ID], [Name] FROM [Dev01_Tessa].[dbo].[FileCategories]' (io.debezium.relational.RelationalSnapshotChangeEventSource:348)
[2022-02-09 11:23:59,545] INFO [SqlServer-SQL-TT|task-0] Finished exporting 0 records for table 'Dev01_Tessa.dbo.FileCategories'; total duration '00:00:00.002' (io.debezium.relational.RelationalSnapshotChangeEventSource:393)
[2022-02-09 11:23:59,546] INFO [SqlServer-SQL-TT|task-0] Exporting data from table 'Dev01_Tessa.dbo.FileContent' (68 of 428 tables) (io.debezium.relational.RelationalSnapshotChangeEventSource:340)
[2022-02-09 11:23:59,549] INFO [SqlServer-SQL-TT|task-0] For table 'Dev01_Tessa.dbo.FileContent' using select statement: 'SELECT [VersionRowID], [Content], [Ext] FROM [Dev01_Tessa].[dbo].[FileContent]' (io.debezium.relational.RelationalSnapshotChangeEventSource:348)
[2022-02-09 11:24:01,789] ERROR [SqlServer-SQL-TT|task-0] WorkerSourceTask{id=SqlServer-SQL-TT-0} failed to send record to TESSA35-SQL-TT.dbo.FileContent: (org.apache.kafka.connect.runtime.WorkerSourceTask:384)
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1371709 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
[2022-02-09 11:24:04,485] INFO [SqlServer-SQL-TT|task-0|offsets] WorkerSourceTask{id=SqlServer-SQL-TT-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:24:06,883] INFO [SqlServer-SQL-TT|task-0] WorkerSourceTask{id=SqlServer-SQL-TT-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:24:06,933] ERROR [SqlServer-SQL-TT|task-0] WorkerSourceTask{id=SqlServer-SQL-TT-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:206)
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:294)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:355)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:272)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:199)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:254)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1371709 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
[2022-02-09 11:24:06,987] INFO [SqlServer-SQL-TT|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:241)
[2022-02-09 11:24:51,954] INFO [SqlServerConnectorConnector_1|task-0|offsets] WorkerSourceTask{id=SqlServerConnectorConnector_1-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:25:04,625] INFO [SqlServer-SQL-TT|task-0|offsets] WorkerSourceTask{id=SqlServer-SQL-TT-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:25:04,625] WARN [SqlServer-SQL-TT|task-0|offsets] Couldn't commit processed log positions with the source database due to a concurrent connector shutdown or restart (io.debezium.connector.common.BaseSourceTask:292)
[2022-02-09 11:25:37,014] WARN [SqlServer-SQL-TT|task-0] Coordinator didn't stop in the expected time, shutting down executor now (io.debezium.pipeline.ChangeEventSourceCoordinator:189)
[2022-02-09 11:25:51,965] INFO [SqlServerConnectorConnector_1|task-0|offsets] WorkerSourceTask{id=SqlServerConnectorConnector_1-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:26:04,629] INFO [SqlServer-SQL-TT|task-0|offsets] WorkerSourceTask{id=SqlServer-SQL-TT-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:26:04,629] WARN [SqlServer-SQL-TT|task-0|offsets] Couldn't commit processed log positions with the source database due to a concurrent connector shutdown or restart (io.debezium.connector.common.BaseSourceTask:292)
[2022-02-09 11:26:51,965] INFO [SqlServerConnectorConnector_1|task-0|offsets] WorkerSourceTask{id=SqlServerConnectorConnector_1-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:27:04,629] INFO [SqlServer-SQL-TT|task-0|offsets] WorkerSourceTask{id=SqlServer-SQL-TT-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
[2022-02-09 11:27:04,630] WARN [SqlServer-SQL-TT|task-0|offsets] Couldn't commit processed log positions with the source database due to a concurrent connector shutdown or restart (io.debezium.connector.common.BaseSourceTask:292)
[2022-02-09 11:27:07,057] INFO [SqlServer-SQL-TT|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:965)
[2022-02-09 11:27:07,069] INFO [SqlServer-SQL-TT|task-0] [Producer clientId=TESSA35-SQL-TT-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1208)
[2022-02-09 11:27:07,102] INFO [SqlServer-SQL-TT|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:676)
[2022-02-09 11:27:07,103] INFO [SqlServer-SQL-TT|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:680)
[2022-02-09 11:27:07,103] INFO [SqlServer-SQL-TT|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:686)
[2022-02-09 11:27:07,112] INFO [SqlServer-SQL-TT|task-0] App info kafka.producer for TESSA35-SQL-TT-dbhistory unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-02-09 11:27:07,117] INFO [SqlServer-SQL-TT|task-0] [Producer clientId=connector-producer-SqlServer-SQL-TT-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1208)
[2022-02-09 11:27:07,123] INFO [SqlServer-SQL-TT|task-0] Publish thread interrupted for client_id=connector-producer-SqlServer-SQL-TT-0 client_type=PRODUCER session= cluster=EfR3zRWUSWGKxQMocOo-7w (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor:285)
[2022-02-09 11:27:07,124] INFO [SqlServer-SQL-TT|task-0] Publishing Monitoring Metrics stopped for client_id=connector-producer-SqlServer-SQL-TT-0 client_type=PRODUCER session= cluster=EfR3zRWUSWGKxQMocOo-7w (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor:297)
[2022-02-09 11:27:07,124] INFO [SqlServer-SQL-TT|task-0] [Producer clientId=confluent.monitoring.interceptor.connector-producer-SqlServer-SQL-TT-0] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1208)
[2022-02-09 11:27:07,130] INFO [SqlServer-SQL-TT|task-0] Snapshot - Final stage (io.debezium.pipeline.source.AbstractSnapshotChangeEventSource:82)
[2022-02-09 11:27:07,144] INFO [SqlServer-SQL-TT|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:676)
[2022-02-09 11:27:07,145] INFO [SqlServer-SQL-TT|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:680)
[2022-02-09 11:27:07,145] INFO [SqlServer-SQL-TT|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:686)
[2022-02-09 11:27:07,146] INFO [SqlServer-SQL-TT|task-0] App info kafka.producer for confluent.monitoring.interceptor.connector-producer-SqlServer-SQL-TT-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-02-09 11:27:07,146] INFO [SqlServer-SQL-TT|task-0] Closed monitoring interceptor for client_id=connector-producer-SqlServer-SQL-TT-0 client_type=PRODUCER session= cluster=EfR3zRWUSWGKxQMocOo-7w (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor:320)
[2022-02-09 11:27:07,146] INFO [SqlServer-SQL-TT|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:676)
[2022-02-09 11:27:07,146] INFO [SqlServer-SQL-TT|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:680)
[2022-02-09 11:27:07,146] INFO [SqlServer-SQL-TT|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:686)
[2022-02-09 11:27:07,146] INFO [SqlServer-SQL-TT|task-0] App info kafka.producer for connector-producer-SqlServer-SQL-TT-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-02-09 11:27:07,718] INFO [SqlServer-SQL-TT|task-0] Removing locking timeout (io.debezium.connector.sqlserver.SqlServerSnapshotChangeEventSource:244)
[2022-02-09 11:27:07,721] ERROR [SqlServer-SQL-TT|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:31)
io.debezium.DebeziumException: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:79)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:118)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at io.debezium.relational.RelationalSnapshotChangeEventSource.rollbackTransaction(RelationalSnapshotChangeEventSource.java:526)
at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:149)
at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:70)
... 6 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:1088)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(SQLServerConnection.java:3153)
at io.debezium.relational.RelationalSnapshotChangeEventSource.rollbackTransaction(RelationalSnapshotChangeEventSource.java:523)
... 8 more
[2022-02-09 11:27:07,732] INFO [SqlServer-SQL-TT|task-0] Connected metrics set to 'false' (io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics:70)
[2022-02-09 11:27:51,967] INFO [SqlServerConnectorConnector_1|task-0|offsets] WorkerSourceTask{id=SqlServerConnectorConnector_1-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask:503)
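The actual failure is the RecordTooLargeException at 11:24:01: the snapshot of Dev01_Tessa.dbo.FileContent produced a record that serializes to 1371709 bytes, which is above the producer's default max.request.size of 1048576 bytes, so the task was killed and the rest of the log is just the shutdown of the snapshot. As a rough sketch of the kind of change this points to (the Connect URL is an assumption, the connector name is taken from the log, and the producer.override.* setting only applies if the worker's connector.client.config.override.policy allows producer overrides):

# Hypothetical sketch: raise the per-connector producer request size via the
# Kafka Connect REST API. URL and new size are assumptions, not values from the log.
import requests

CONNECT_URL = "http://localhost:8083"   # assumed Connect REST endpoint
CONNECTOR = "SqlServer-SQL-TT"          # connector name as it appears in the log

# Fetch the connector's current configuration as a flat key/value map.
config = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/config").json()

# Raise the producer limit above the failing record size (1371709 bytes); 5 MiB here.
config["producer.override.max.request.size"] = "5242880"

# PUT the full config back; Connect applies it and restarts the connector task.
resp = requests.put(f"{CONNECT_URL}/connectors/{CONNECTOR}/config", json=config)
resp.raise_for_status()
print(resp.json())

Note that this only covers the producer side; if records really are larger than 1 MB, the target topic (max.message.bytes) and possibly the broker (message.max.bytes) would presumably need matching increases as well.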