JDBC Connectors disappearing after cluster restart

Hello,

After a restart of our Confluent cluster, all previously created and working JDBC connectors disappeared. My question is: how can I prevent this situation in the future? Where exactly are the configurations of JDBC connectors stored? For the following topics:

docker-connect-configs
docker-connect-offsets
docker-connect-status

I have already set the following settings:

cleanup.policy=compact
retention.ms=-1
retention.bytes=-1
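For reference, those settings can be applied to the three Connect topics with the kafka-configs CLI. This is just a sketch; the bootstrap address is an assumption and should be replaced with your broker's:

```shell
# Apply the compaction/retention settings from the post above to each
# Kafka Connect internal topic (broker address is an example).
for topic in docker-connect-configs docker-connect-offsets docker-connect-status; do
  kafka-configs --bootstrap-server localhost:9092 \
    --entity-type topics --entity-name "$topic" \
    --alter --add-config cleanup.policy=compact,retention.ms=-1,retention.bytes=-1
done
```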

Thanks in advance for reply!

Hi,

Sorry to hear about that!

I could use a little more information, but if I'm understanding correctly: when you say "disappeared", do you mean you can't find the connector instances, or that they just stopped running? I think what may have happened is that the JDBC connectors shut down during the restart. Where is the cluster running? I noticed the topic names include "docker"; is this a local deployment?

-Bill

How did you restart the cluster?
Just guessing, but if you're using Docker Compose and you did docker-compose down followed by docker-compose up, that would have removed your Kafka broker container, and the connector config is stored in topics on that broker.
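If that's the case, persisting the broker's data directory in a named volume would survive docker-compose down (as long as you don't pass -v). A minimal sketch, assuming the Confluent cp-kafka image, whose default data directory is /var/lib/kafka/data; the service and volume names are just examples:

```yaml
services:
  broker:
    image: confluentinc/cp-kafka:7.5.0
    volumes:
      # Named volume keeps topic data across container removal
      - kafka-data:/var/lib/kafka/data

volumes:
  kafka-data:
```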

I forgot to mention earlier: I've got the Confluent cluster running on OpenShift. The Kafka broker has an OpenShift persistent volume mounted, so I was a little surprised that the docker-connect-* topics were cleared after the cluster restart, since I was expecting this data to be persisted.

@rmoff @bbejeck any suggestions? Thanks in advance.

I've not used OpenShift, sorry.

This topic can be closed since I've found the reason for this situation. I'll write it up here in case someone faces a similar situation :slightly_smiling_face:

  • It turned out that I was not persisting the directory on the Kafka broker that holds the data for all topics. When I restarted the cluster, all data in these topics (especially the ones needed by Connect) was lost, so the previously declared JDBC connectors appeared to be lost too.

I managed to solve it by adding a persistent volume claim to the OpenShift deployment config of the Kafka broker.
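For anyone else on OpenShift, the fix looked roughly like this; the claim name, storage size, and mount path are illustrative, not my exact config:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-broker-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
# Then, in the broker's deployment config, mount the claim at the
# broker's data directory (log.dirs), e.g.:
#   volumeMounts:
#     - name: kafka-data
#       mountPath: /var/lib/kafka/data
#   volumes:
#     - name: kafka-data
#       persistentVolumeClaim:
#         claimName: kafka-broker-data
```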

Thanks for all your replies and suggestions!


Hey @kubeusz, glad you got it sorted - and thanks for taking the time to close the loop here, it’ll help others in the future! :slight_smile: