Kafka Connect/Kubernetes Pod fails when switching bootstrap server


I am having an issue starting up a Kubernetes pod that runs Kafka Connect. The pod starts and runs fine in our Dev environment, but when we change only the bootstrap server and the cert paths in the config, the pod repeatedly fails and restarts with exit code 1.

There are no errors in the logs, even after raising the log level. To troubleshoot, we overrode the Docker command with a "do-nothing" command so that we could run the original command by hand, line by line. We determined that the command below is what is causing the pod to fail:
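The override looked roughly like this (container name and image are placeholders, not our real values):

```yaml
# Sketch: replace the image's entrypoint so the container just idles,
# letting us `kubectl exec` into the pod and run the original command by hand.
containers:
  - name: kafka-connect              # hypothetical container name
    image: our-kafka-connect-image   # hypothetical image reference
    command: ["sleep", "infinity"]
```

With that in place, `kubectl exec -it <pod> -- bash` gives a shell inside the running container.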

exec java -Xms1G -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 \
  -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 \
  -Djava.awt.headless=true -Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false \
  -Dkafka.logs.dir=/tmp/ -Dlog4j.configuration=file:///logging-config/connect-log4j.properties \
  -cp /etc/kafka-connect/jars/* \
  org.apache.kafka.connect.cli.ConnectDistributed /etc/kafka-connect/kafka-connect.properties
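Since this command reads the PROD properties file, one quick sanity check before it runs is whether the cert files that file references actually exist and are readable inside the container. The paths below are hypothetical stand-ins for the ones in our config:

```shell
# Hypothetical cert paths; substitute the ones from kafka-connect.properties.
missing=0
for f in /certs/prod/truststore.jks /certs/prod/keystore.jks; do
  if [ -r "$f" ]; then
    echo "OK: $f"
  else
    echo "MISSING or unreadable: $f"
    missing=$((missing+1))
  fi
done
echo "$missing cert file(s) missing"
```

A missing or unreadable store is worth ruling out, since a volume mount that differs between environments would explain a failure that only shows up in PROD.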

The Docker image we are using is the same in both DEV and PROD, and again, the only settings we change in our config file are the bootstrap server and the paths to the certs. Using the Kafka CLI, we are able to consume from and produce to the topics on the prod server. The pod also fails before it even attempts to start our connectors. Our Kafka team believes this is an issue with our config file; we have verified that the topics and our certs are set up correctly, but we cannot figure out what is causing this.
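For context, the lines that differ between the Dev and PROD worker properties are of this shape (hosts, paths, and passwords are placeholders, not our real values):

```properties
# Placeholder values; only these lines differ between environments.
bootstrap.servers=prod-kafka-1.example.com:9093,prod-kafka-2.example.com:9093
security.protocol=SSL
ssl.truststore.location=/certs/prod/truststore.jks
ssl.truststore.password=<redacted>
ssl.keystore.location=/certs/prod/keystore.jks
ssl.keystore.password=<redacted>
ssl.key.password=<redacted>
```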
