ERROR [sql-server-connection|task-0] WorkerSourceTask{id=sql-server-connection-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:195)
io.debezium.DebeziumException: The db history topic or its content is fully or partially missing. Please check database history topic configuration and re-execute the snapshot.
at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:47)
at io.debezium.connector.sqlserver.SqlServerConnectorTask.start(SqlServerConnectorTask.java:87)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:101)
at org.apache.kafka.connect.runtime.WorkerSourceTask.initializeAndStart(WorkerSourceTask.java:225)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor
Hi @nickbob, welcome to the forum!
Can you please provide some more details? An error message alone is not a lot to go on.
worker.properties file configuration:
offset.storage.file.filename=/tmp/connect.offsets
bootstrap.servers=zoo1:9092,zoo2:9092,zoo3:9092
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
auto.offset.reset = latest
offset.flush.interval.ms=10000
rest.port=10082
rest.host.name=zoo2
rest.advertised.port=10082
rest.advertised.host.name=zoo2
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
key.converter.schemas.enable=true
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
plugin.path=/home/gokafka/
group.id=sre1
#If kafka is TLS authenticated, uncomment below lines.
#security.protocol=SSL
#ssl.truststore.location=/tmp/kafka.client.truststore.jks
#producer.security.protocol=SSL
#producer.ssl.truststore.location=/tmp/kafka.client.truststore.jks
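One thing worth noting: the worker file above mixes standalone-mode settings (offset.storage.file.filename) with distributed-only ones (offset.storage.topic, config.storage.topic, status.storage.topic, group.id); each mode simply ignores the other's keys. If the worker is meant to run with connect-standalone, a minimal sketch (values copied straight from the file above) would be:

```properties
# minimal standalone worker config; distributed-only keys removed
bootstrap.servers=zoo1:9092,zoo2:9092,zoo3:9092
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
plugin.path=/home/gokafka/
rest.port=10082
```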
connector.properties file configuration:
name=sql-server-connection
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname=pocdb
database.port=51523
database.user=db_user
database.password=dvxxx
database.dbname=dbname
database.server.name=poc
table.whitelist=dbo.mbr,dbo.mbr_enc,dbo.cert_auth_master
database.history.kafka.bootstrap.servers=zoo1:9092,zoo2:9092,zoo3:9092
database.history.kafka.topic=dbhistory.history
decimal.handling.mode=string
time.precision.mode=connect
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.UnwrapFromEnvelope
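Since the exception says the history topic's content is "fully or partially missing", a first check is whether the topic named in database.history.kafka.topic actually exists on the brokers and still holds its records. Debezium expects this topic to have a single partition and infinite retention (retention.ms=-1); if a broker-side retention policy expired its messages, you get exactly this error. A rough check with the stock Kafka CLI, assuming the broker addresses from the config above (the `command -v` guard just makes the sketch a no-op on machines without the CLI on the PATH):

```shell
# Describe the Debezium schema-history topic; broker list and topic name
# are taken from the connector.properties posted above.
check_history_topic() {
  if command -v kafka-topics.sh >/dev/null 2>&1; then
    kafka-topics.sh --bootstrap-server zoo1:9092 \
      --describe --topic dbhistory.history
  else
    # Keeps the sketch runnable on machines without the Kafka CLI.
    echo "kafka-topics.sh not found; run this on a broker host"
  fi
}
result=$(check_history_topic)
echo "$result"
```

If the topic is missing, or its retention settings allow records to expire, that alone explains the error on restart.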
When I run these files with connect-standalone or connect-distributed, I get the error: "The db history topic or its content is fully or partially missing".
We get this error every now and then too. We think it has something to do with recreating a connector with the same name: the new connector tries to reuse the previous history topic, but its stored offsets no longer line up with the topic's contents, so it throws this error.
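The usual way out, as the exception message itself suggests, is to re-execute the snapshot. A rough sketch for the standalone setup above, with the offsets path and topic name taken from the posted configs; treat it as a starting point rather than a recipe, and stop the worker before running it:

```shell
# 1. With the worker stopped, drop the stored offsets so Debezium starts
#    from a fresh snapshot (path from offset.storage.file.filename above).
rm -f /tmp/connect.offsets

# 2. Recreate the history topic with a single partition and infinite
#    retention, as Debezium requires.  Guarded so the sketch is a no-op
#    on machines without the Kafka CLI on the PATH.
if command -v kafka-topics.sh >/dev/null 2>&1; then
  kafka-topics.sh --bootstrap-server zoo1:9092 \
    --delete --topic dbhistory.history
  kafka-topics.sh --bootstrap-server zoo1:9092 \
    --create --topic dbhistory.history \
    --partitions 1 --replication-factor 3 --config retention.ms=-1
fi

# 3. Restart connect-standalone; the connector re-snapshots the tables
#    and rewrites the schema history from scratch.
```

In distributed mode the equivalent of step 1 is clearing the connector's entries from the connect-offsets topic (or simply registering the connector under a new name), since there is no offsets file to delete.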