Confluent For Kubernetes: Connect pod going in perpetual restart mode

Hi,

We have a broker running on an EC2 instance, exposed over plaintext at 192.168.54.226:9092.

I deployed connect in two different ways:

  1. Deployed a Connect pod via the Helm chart: works well, and topics were created on the broker (a sketch of the Helm command follows this list).
  2. Deployed a Connect pod via CR, using the Confluent for Kubernetes operator.
    We are following the confluent-kubernetes-examples/quickstart-deploy guide (confluentinc/confluent-kubernetes-examples on GitHub) to do so.
    The operator's CRDs are installed correctly.
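For reference, the Helm deployment in (1) was roughly the following. This is only a sketch: the repo alias and the kafka.bootstrapServers value name are from the cp-helm-charts chart as I recall them, so adjust to your setup.

helm install connect confluentinc/cp-kafka-connect \
  --namespace confluent \
  --set kafka.bootstrapServers="PLAINTEXT://192.168.54.226:9092"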

However, when I apply the following CR for approach (2):

apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-server-connect-operator:6.1.0.0
    init: confluentinc/cp-init-container-operator:6.1.0.0
  dependencies:
    kafka:
      bootstrapEndpoint: "plaintext://192.168.54.226:9092"
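Applied with (the file name is just what I saved the CR as):

kubectl apply -f connect-cr.yaml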

I get the following events when I describe the pod:

Events:
  Type     Reason     Age                   From     Message
  ----     ------     ----                  ----     -------
  Normal   Pulling    6m27s                 kubelet  Pulling image "confluentinc/cp-init-container-operator:6.1.0.0"
  Normal   Pulled     5m54s                 kubelet  Successfully pulled image "confluentinc/cp-init-container-operator:6.1.0.0" in 33.117379287s
  Normal   Created    5m52s                 kubelet  Created container config-init-container
  Normal   Started    5m52s                 kubelet  Started container config-init-container
  Normal   Pulling    5m50s                 kubelet  Pulling image "confluentinc/cp-server-connect-operator:6.1.0.0"
  Normal   Pulled     3m28s                 kubelet  Successfully pulled image "confluentinc/cp-server-connect-operator:6.1.0.0" in 2m21.592593165s
  Normal   Created    3m28s                 kubelet  Created container connect
  Normal   Started    3m27s                 kubelet  Started container connect
  Warning  Unhealthy  36s (x12 over 2m26s)  kubelet  Readiness probe failed: Get "http://192.168.120.242:8083/v1/metadata/id": dial tcp 192.168.120.242:8083: connect: connection refused
  Warning  Unhealthy  34s (x6 over 84s)     kubelet  Liveness probe failed: Get "http://192.168.120.242:8083/v1/metadata/id": dial tcp 192.168.120.242:8083: connect: connection refused

Notice that the Kafka bootstrap endpoint in the CR and the address in the error are different. It looks like the CR is not being applied correctly to the Connect CRD.
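One way to check whether the spec actually landed is to dump the applied resource and compare (this assumes kubectl can resolve the Connect kind from the CFK CRDs):

kubectl get connect connect -n confluent -o yaml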

Both the broker and Connect are in the same VPC and can talk to each other; I verified this with curl from the operator pod's shell to the broker.
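The check was along these lines; the operator pod name is a placeholder, and curl's telnet mode is only used here as a raw TCP reachability probe:

kubectl exec -it confluent-operator-xxxxx -n confluent -- curl -v telnet://192.168.54.226:9092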

Any pointers… am I making some obvious mistake?

Hmm, it seems the Connect container does a self-check
on its REST API port (the Connect REST API defaults to port 8083).
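You could check from inside the pod whether that REST endpoint ever comes up, using the same URL the probe hits. The pod name assumes the default CFK StatefulSet naming, and this also assumes curl is available in the image:

kubectl exec connect-0 -n confluent -- curl -s http://localhost:8083/v1/metadata/id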

Any errors in the logs?
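For example (again assuming the default pod name):

kubectl logs connect-0 -n confluent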

Any update on this issue?