Data loss in Kafka broker

Please, what configuration should I use to make sure I don't lose data when replicating?
I have a source table with about 7 million rows, but only a little over 6 million of them make it to the destination table. I suspect the consumer just idles while it waits for data. I have been trying to throttle the producer and consumer so that they produce and consume roughly equal amounts of data and reduce the idle time.
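
For example, would something like this sink error-handling sketch help me see whether the missing records are actually failing in the sink (conversion errors, constraint violations) rather than never being consumed? The dead letter queue topic name is just a placeholder I made up:

# sink connector - route failing records to a dead letter queue instead of losing track of them
errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
errors.deadletterqueue.topic.name=dlq-adc-sink
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true
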
My producer (Debezium source connector) config:
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
tasks.max=1
include.schema.changes=true
tombstones.on.delete=true
topic.prefix=adc-
decimal.handling.mode=double
schema.history.internal.kafka.topic=schema-changes.adc
include.schema.comments=false
poll.interval.ms=10
value.converter=io.confluent.connect.avro.AvroConverter
key.converter=io.confluent.connect.avro.AvroConverter
snapshot.lock.timeout.ms=5000
database.encrypt=false
database.user=${file:/etc/data/xxxxxxxx:username}
database.names=xxxxxxx
max.message.bytes=10485880
acks=all
time.precision.mode=connect
schema.history.internal.kafka.bootstrap.servers=kafka:9092
database.port=1433
database.hostname=${file:/etc/data/xxxxxx:hostname}
database.password=${file:/etc/data/xxxxxx:password}
table.include.list=xxxxxx
snapshot.mode=initial
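
A side question about the config above: I'm not sure the producer-level keys (acks, max.message.bytes) actually reach the underlying producer, since as far as I know Kafka Connect only passes client settings through to the connector's producer when they carry the producer.override. prefix and the worker allows client overrides. Is a sketch like this closer to what I should be doing? I used max.request.size as what I believe is the producer-side counterpart of the topic-level max.message.bytes:

# worker config (e.g. connect-distributed.properties) - only needed if the worker does not already allow overrides
connector.client.config.override.policy=All

# source connector - producer-level settings passed through with the producer.override. prefix
producer.override.acks=all
producer.override.max.request.size=10485880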

And my consumer (JDBC sink connector) config:
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
connection.password=${file:/etc/data/xxxxxx:password}
tasks.max=5
decimal.handling.mode=double
enable.auto.commit=true
auto.evolve=true
value.converter=io.confluent.connect.avro.AvroConverter
session.timeout.ms=60000
insert.mode=upsert
key.converter=io.confluent.connect.avro.AvroConverter
max.poll.records=10000
topics=xxxxx
heartbeat.interval.ms=3000
delete.enabled=true
connection.user=${file:/etc/data/xxxxxx:username}
fetch.min.bytes=16384
fetch.max.bytes=52428800
auto.create=true
connection.url=${file:/etc/data/xxxxxx:connectionurl}
max.poll.interval.ms=100
pk.mode=record_key
pk.fields=id
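
Same doubt here: I believe consumer-level keys like max.poll.records, fetch.min.bytes, fetch.max.bytes, session.timeout.ms, heartbeat.interval.ms and max.poll.interval.ms only take effect on the sink's underlying consumer when they are prefixed with consumer.override., so I was thinking of something like this (values carried over from above just as an illustration):

# worker config - same override policy as on the source side
connector.client.config.override.policy=All

# sink connector - consumer-level settings passed through with the consumer.override. prefix
consumer.override.max.poll.records=10000
consumer.override.fetch.min.bytes=16384
consumer.override.fetch.max.bytes=52428800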

Please, what else do I need to configure to make sure that no data is lost?