When I try to configure a connector on one specific cluster, the request fails with a 500 after a long delay. I've built three Kafka Connect clusters and I don't know why this one is different. The two working ones are running in Strimzi on Anthos, while the failing one is running in EKS. I can't imagine that the flavor of k8s would impact the error I'm seeing.
Although this example uses the SplunkSinkConnector, every connector I've tried has the same problem. All of the usual GET calls work great. If I PUT a bad config (missing connector.class, for instance), I get back an error immediately.
[kafka@kafka-connect-connect-0 kafka]$ curl -v http://127.0.0.1:8083/connectors/my-connector/config \
    -H "Content-type: application/json" \
    -X PUT \
    --data '{"consumer.override.auto.offset.reset":"latest","splunk.hec.ssl.validate.certs":"false","splunk.hec.token":"REDACTED","splunk.hec.uri":"REDACTED","splunk.indexes":"one,two,three","topics":"one,two,three","tasks.max":20,"connector.class":"com.splunk.kafka.connect.SplunkSinkConnector"}'
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8083 (#0)
> PUT /connectors/my-connector/config HTTP/1.1
> Host: 127.0.0.1:8083
> User-Agent: curl/7.61.1
> Accept: */*
> Content-type: application/json
> Content-Length: 458
* upload completely sent off: 458 out of 458 bytes
< HTTP/1.1 500 Internal Server Error
< Date: Tue, 19 Aug 2025 05:04:23 GMT
< Content-Type: application/json
< Content-Length: 169
< Server: Jetty(9.4.53.v20231009)
<
* Connection #0 to host 127.0.0.1 left intact
{"error_code":500,"message":"Request timed out. The worker is currently performing multi-property validation for the connector, which began at 2025-08-19T05:04:23.060Z"}
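In case it helps narrow things down, here's roughly how I've been trying to isolate the validation step. The per-plugin validate endpoint (PUT /connector-plugins/{class}/config/validate) runs the same multi-property validation without creating or updating a connector. This is just a sketch with a trimmed-down config; CONNECT_URL and the abbreviated CONFIG are placeholders, not my real setup:

```shell
#!/bin/sh
# Sketch only: CONNECT_URL and this trimmed-down CONFIG are placeholders.
CONNECT_URL="${CONNECT_URL:-http://127.0.0.1:8083}"
CLASS="com.splunk.kafka.connect.SplunkSinkConnector"

CONFIG='{
  "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
  "topics": "one,two,three",
  "tasks.max": "20"
}'

# Sanity-check the JSON locally before blaming the worker.
echo "$CONFIG" | python3 -m json.tool > /dev/null && echo "JSON OK"

# Run the same multi-property validation without touching any connector.
curl -s --max-time 10 -X PUT \
  -H "Content-type: application/json" \
  --data "$CONFIG" \
  "$CONNECT_URL/connector-plugins/$CLASS/config/validate" \
  || echo "worker not reachable"
```

On the healthy clusters this comes back quickly with a per-field error list; on the broken one I'd expect it to hang the same way the PUT does, which would at least confirm it's the validation itself that's stuck.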
I've googled this error message, and other than the commit that created it, I've found very little. I gotta be doing something weird.
Thanks for the help,
Steve