Connector Debezium SQL - Failed

The topic was created manually; its parameters are listed below.

General settings

name
testtopic

partitions
1

compression.type
producer

confluent.value.schema.validation
false

leader.replication.throttled.replicas

confluent.key.subject.name.strategy
io.confluent.kafka.serializers.subject.TopicNameStrategy

message.downconversion.enable
true

min.insync.replicas
1

segment.jitter.ms
0

cleanup.policy
delete

flush.ms
9223372036854775807

confluent.tier.local.hotset.ms
86400000

follower.replication.throttled.replicas

confluent.tier.local.hotset.bytes
-1

confluent.value.subject.name.strategy
io.confluent.kafka.serializers.subject.TopicNameStrategy

segment.bytes
1073741824

retention.ms
604800000

flush.messages
9223372036854775807

confluent.tier.enable
false

confluent.tier.segment.hotset.roll.min.bytes
104857600

confluent.segment.speculative.prefetch.enable
false

message.format.version
3.0-IV1

max.compaction.lag.ms
9223372036854775807

file.delete.delay.ms
60000

max.message.bytes
20971520

min.compaction.lag.ms
0

message.timestamp.type
CreateTime

preallocate
false

confluent.placement.constraints

min.cleanable.dirty.ratio
0.5

index.interval.bytes
4096

unclean.leader.election.enable
false

retention.bytes
-1

delete.retention.ms
86400000

confluent.prefer.tier.fetch.ms
-1

confluent.key.schema.validation
false

segment.ms
604800000

message.timestamp.difference.max.ms
9223372036854775807

segment.index.bytes
10485760

But here is the automatically created topic. Note that its max.message.bytes is 20000000, while the manually created topic has 20971520.
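The two limits are close but not equal: the manually created topic uses exactly 20 MiB, while the auto-created one uses a decimal 20 MB, which is slightly smaller. Plain arithmetic (no Kafka required) makes the gap explicit — any message whose size falls inside it would be accepted by testtopic but rejected on the auto-created topic:

```python
# Compare the max.message.bytes values from the two topic listings.
manual_topic_limit = 20971520   # "testtopic" (manually created)
auto_topic_limit = 20000000     # "TESSA35-SQL-TT.dbo.FileContent" (auto-created)

# The manual value is exactly 20 MiB; the auto-created one is a decimal 20 MB.
assert manual_topic_limit == 20 * 1024 * 1024
assert auto_topic_limit == 20 * 1000 * 1000

# Messages in this gap fit the manual topic but exceed the auto-created one.
gap = manual_topic_limit - auto_topic_limit
print(gap)  # 971520 bytes, i.e. just under 1 MiB
```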

General settings

name
TESSA35-SQL-TT.dbo.FileContent

partitions
1

compression.type
producer

confluent.value.schema.validation
false

leader.replication.throttled.replicas

confluent.key.subject.name.strategy
io.confluent.kafka.serializers.subject.TopicNameStrategy

message.downconversion.enable
true

min.insync.replicas
1

segment.jitter.ms
0

cleanup.policy
delete

flush.ms
9223372036854775807

confluent.tier.local.hotset.ms
86400000

follower.replication.throttled.replicas

confluent.tier.local.hotset.bytes
-1

confluent.value.subject.name.strategy
io.confluent.kafka.serializers.subject.TopicNameStrategy

segment.bytes
1073741824

retention.ms
604800000

flush.messages
9223372036854775807

confluent.tier.enable
false

confluent.tier.segment.hotset.roll.min.bytes
104857600

confluent.segment.speculative.prefetch.enable
false

message.format.version
3.0-IV1

max.compaction.lag.ms
9223372036854775807

file.delete.delay.ms
60000

max.message.bytes
20000000

min.compaction.lag.ms
0

message.timestamp.type
CreateTime

preallocate
false

confluent.placement.constraints

min.cleanable.dirty.ratio
0.5

index.interval.bytes
4096

unclean.leader.election.enable
false

retention.bytes
-1

delete.retention.ms
86400000

confluent.prefer.tier.fetch.ms
-1

confluent.key.schema.validation
false

segment.ms
604800000

message.timestamp.difference.max.ms
9223372036854775807

segment.index.bytes
10485760
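If the connector's topics are auto-created by Kafka Connect (version 2.6 or later, with topic.creation.enable left at its default of true on the worker), the topic-level settings of auto-created topics can be pinned in the connector configuration via the topic.creation.default.* properties (KIP-158). A sketch of such a fragment — the replication factor and partition count here are illustrative, and the max.message.bytes value assumes you want the auto-created topic to match testtopic:

```properties
# Illustrative connector-config fragment (assumes Kafka Connect 2.6+).
# topic.creation.default.* applies to every topic this connector auto-creates.
topic.creation.default.replication.factor=1
topic.creation.default.partitions=1
topic.creation.default.max.message.bytes=20971520
```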