Elasticsearch Sink Connector, unrecoverable exception

Hi,

I am new to Kafka, but I managed to set up an instance of Kafka on my local machine. I have one process that adds a message to a topic once per second. I have also set up confluentinc-kafka-connect-elasticsearch-11.0.4, which forwards messages to Elasticsearch.

I tried to see what happens if I disconnect from the Internet. I see the following error:

Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: Bulk request failed.

After I reconnect, no more messages are sent to Elasticsearch, and the connector has to be terminated (Ctrl+C) and started again. Then it runs fine. My connector configuration is below:

name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=quickstart-events
key.ignore=true
connection.url=https://abcd.eu-west-1.es.amazonaws.com:443
connection.username=user
connection.password=pass
type.name=kafka-connect
transforms=TimestampRouter
transforms.TimestampRouter.type=org.apache.kafka.connect.transforms.TimestampRouter
transforms.TimestampRouter.topic.format=logstash-test-${timestamp}
transforms.TimestampRouter.timestamp.format=YYYY.MM.dd
drop.invalid.message=true
behavior.on.malformed.documents=WARN
max.retries=1000
retry.backoff.ms=100
connection.compression=false
max.connection.idle.time.ms=60000
connection.timeout.ms=1000
read.timeout.ms=15000

Is there some recovery mechanism that could be used? I would like to avoid having to kill and restart the connector whenever there is a temporary connection problem.

Full log: JustPaste.it

Thank you, Martin.

@MPeli you should be able to adjust max.retries and retry.backoff.ms to increase the connector's outage tolerance:

https://docs.confluent.io/kafka-connect-elasticsearch/current/index.html#automatic-retries
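As a rough way to size those two settings, here is a minimal Python sketch. It assumes each retry waits up to retry.backoff.ms * 2^i, matching the doubling shape of the exponential backoff with jitter described in the docs above; the cap_ms value is a hypothetical per-wait cap, not a documented constant, so treat the results as order-of-magnitude estimates only.

# Rough worst-case outage the connector can ride out before the task fails.
# Assumption: retry i waits up to retry.backoff.ms * 2**i (exponential
# backoff with jitter, per the docs); cap_ms is a hypothetical per-wait cap.

def max_outage_ms(max_retries: int, retry_backoff_ms: int, cap_ms: int = 60_000) -> int:
    """Sum the per-retry upper bounds across all allowed retries."""
    return sum(min(retry_backoff_ms * 2**i, cap_ms) for i in range(max_retries))

# Settings from the question: max.retries=1000, retry.backoff.ms=100
print(max_outage_ms(1000, 100) / 1000, "s")  # ~59,500 s under these assumptions

# Fewer retries with a longer initial backoff, e.g. max.retries=12 and
# retry.backoff.ms=1000, still covers an outage of roughly seven minutes:
print(max_outage_ms(12, 1000) / 1000, "s")  # ~423 s

Since jitter randomizes each actual wait below its upper bound, the real tolerance will be somewhat lower. Also note that these settings only help for failures the connector treats as retriable; an exception it considers unrecoverable will still kill the task immediately, which may be why your task died quickly in this test.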