I have a self-managed distributed Kafka Connect cluster running locally on Docker. I am following the Kafka Connect 101 training.
On container startup I:
- Install the Splunk sink connector from Confluent Hub.
- Start Kafka Connect.
- Wait in a loop until the Connect server returns 200, to make sure everything is good.
- Create the Splunk sink connector instance.
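For reference, the wait step (step 3) can be sketched as a standalone script with a retry cap so it can't spin forever; this is a minimal sketch with a hypothetical `wait_for_connect` helper, written as plain bash (so single `$` rather than the `$$` escaping docker-compose requires):

```shell
#!/usr/bin/env bash
# Poll the Connect REST API until it answers 200, giving up after max_tries.
wait_for_connect() {
  local url=$1 max_tries=$2 tries=0 code
  while code=$(curl -s -o /dev/null -w '%{http_code}' "$url"); [ "$code" -ne 200 ]; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      echo "gave up after $max_tries attempts (last HTTP state: $code)"
      return 1
    fi
    sleep 1
  done
}

# Example: wait_for_connect http://localhost:8083/connectors 60
```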
The problem: the loop never ends, because http://localhost:8083/connectors never returns 200, so the script never reaches the point where the connector instance is created.
When I check the cloud cluster where the Connect topics live, I can see that the internal Connect topics for config, offsets, and status have been created. I can also see the producer and consumer clients running.
What am I missing?
Here is the docker-compose file (note that some of the connector properties are masked for privacy):
version: '2'
services:
  connect-1:
    image: confluentinc/cp-kafka-connect:7.5.0
    restart: always
    hostname: connect-1
    container_name: connect-1
    ports:
      - "8083:8083"
    volumes:
      - ./data:/data
    environment:
      CONNECT_BOOTSTRAP_SERVERS: $BOOTSTRAP_SERVERS
      # Workers that share the same group ID form a single distributed Connect cluster
      CONNECT_GROUP_ID: "lil_kc101-connect"
      CONNECT_CONFIG_STORAGE_TOPIC: "lil_kc101-connect-configs"
      CONNECT_OFFSET_STORAGE_TOPIC: "lil_kc101-connect-offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "lil_kc101-connect-status"
      CONNECT_REPLICATION_FACTOR: 3
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      # Confluent Schema Registry for Kafka Connect
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: $SCHEMA_REGISTRY_URL
      CONNECT_VALUE_CONVERTER_BASIC_AUTH_CREDENTIALS_SOURCE: $BASIC_AUTH_CREDENTIALS_SOURCE
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO: $SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect-1"
      CONNECT_LISTENERS: http://connect-1:8083
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: 'org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR'
      # Confluent Cloud config
      CONNECT_REQUEST_TIMEOUT_MS: "20000"
      CONNECT_RETRY_BACKOFF_MS: "500"
      CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: "https"
      # Connect worker
      CONNECT_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_SASL_JAAS_CONFIG: $SASL_JAAS_CONFIG
      CONNECT_SASL_MECHANISM: PLAIN
      # Connect producer
      CONNECT_PRODUCER_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_PRODUCER_SASL_JAAS_CONFIG: $SASL_JAAS_CONFIG
      CONNECT_PRODUCER_SASL_MECHANISM: PLAIN
      # Connect consumer
      CONNECT_CONSUMER_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_CONSUMER_SASL_JAAS_CONFIG: $SASL_JAAS_CONFIG
      CONNECT_CONSUMER_SASL_MECHANISM: PLAIN
    command:
      - bash
      - -c
      - |
        # Install the Confluent Splunk sink connector
        confluent-hub install --no-prompt splunk/kafka-connect-splunk:2.1.1
        echo "---- LAUNCHING KAFKA-CONNECT -----"
        /etc/confluent/docker/run &
        echo "---- WAITING FOR KAFKA-CONNECT TO START LISTENING ON http://localhost:8083 ------"
        while [ $$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors) -ne 200 ] ; do
          echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors) " (waiting for 200)"
          sleep 5
        done
        echo "----------------- Creating Splunk sink connector -------------"
        # PUT /connectors/{name}/config takes the flat config map as the body
        curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/connectors/splunk-sink-1/config \
          -d '{
            "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
            "splunk.hec.token": "************************************",
            "splunk.hec.uri": "****************************************",
            "splunk.header.index": "************************************",
            "tasks.max": 3,
            "topics": "confluent-audit-log-events",
            "key.converter": "org.apache.kafka.connect.storage.StringConverter",
            "max.interval": 750
          }'
        sleep infinity
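In case it helps anyone reproduce this, the Connect REST API rejects bodies that are not valid JSON, so I sanity-check payloads before sending them. A quick sketch, assuming `python3` is on the PATH (the payload below is an illustrative fragment, not my full masked config):

```shell
# Validate a connector payload before sending it to the Connect REST API.
payload='{
  "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
  "topics": "confluent-audit-log-events",
  "tasks.max": 3
}'

if echo "$payload" | python3 -m json.tool > /dev/null 2>&1; then
  echo "payload is valid JSON"
else
  echo "payload is NOT valid JSON"
fi
```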
Here is what the container log is showing:
2023-10-31 12:02:44 Running in a "--no-prompt" mode
2023-10-31 12:02:46 javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
2023-10-31 12:02:46
2023-10-31 12:02:46 Error: Unknown error
2023-10-31 12:02:46 ---- LAUNCHING KAFKA-CONNECT -----
2023-10-31 12:02:46 ---- WAITING FOR KAFKA-CONNECT TO START LISTENING ON http://localhost:8083 ------
2023-10-31 12:02:46 ===> User
2023-10-31 12:02:46 uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
2023-10-31 12:02:46 ===> Configuring …
… a lot of info on connect start up then at the end…
2023-10-31 12:04:00 [2023-10-31 16:04:00,139] INFO [Worker clientId=connect-1, groupId=lil_kc101-connect] Session key updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2255)
2023-10-31 12:04:02 Tue Oct 31 16:04:02 UTC 2023 Kafka Connect listener HTTP state: 000 (waiting for 200)
2023-10-31 12:04:07 Tue Oct 31 16:04:07 UTC 2023 Kafka Connect listener HTTP state: 000 (waiting for 200)
2023-10-31 12:04:12 Tue Oct 31 16:04:12 UTC 2023 Kafka Connect listener HTTP state: 000 (waiting for 200)
2023-10-31 12:04:17 Tue Oct 31 16:04:17 UTC 2023 Kafka Connect listener HTTP state: 000 (waiting for 200)
2023-10-31 12:04:22 Tue Oct 31 16:04:22 UTC 2023 Kafka Connect listener HTTP state: 000 (waiting for 200)
and the loop never ends
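As far as I understand, curl's `%{http_code}` prints `000` when it never received an HTTP response at all (e.g. the TCP connection was refused), not an actual HTTP status, so the worker isn't even answering on that address. The `000` can be reproduced against any closed port:

```shell
# curl writes 000 for %{http_code} when it cannot complete an HTTP exchange,
# e.g. when the connection to the port is refused.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:1/ || true
# prints: 000
```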