Kafka Connect in Docker doesn't work properly (container unhealthy)

Hi,
I'm trying to run kafka-connect with Docker. The weird thing is that with just one Kafka broker it works perfectly fine, but as soon as I spin up two or more Kafka brokers, the status of kafka-connect becomes unhealthy. The host machine I use is Debian 10.
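To see which healthcheck is failing and why, the container's health state can be inspected like this (a sketch; `kafka-connect` is the container name from my compose file below):

```sh
# Dump the healthcheck status and the output of the most recent probes
docker inspect --format '{{json .State.Health}}' kafka-connect | python3 -m json.tool
```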

Here are the kafka-connect logs:

I did see an ERROR saying:

> Topic 'docker-connect-offsets' supplied via the 'offset.storage.topic' property is required to have 'cleanup.policy=compact' to guarantee consistency and durability of source connector offsets, but found the topic currently has 'cleanup.policy=delete'. Continuing would likely result in eventually losing source connector offsets and problems restarting this Connect cluster in the future. Change the 'offset.storage.topic' property in the Connect worker configurations to use a topic with 'cleanup.policy=compact'
But I'm not sure if that is what is causing kafka-connect to be unhealthy.
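If that error is indeed the culprit, I guess one way to fix it (assuming the `kafka1` container name and the internal listener `kafka1:9092` from my compose file below) would be to switch the existing topic to compaction:

```sh
# Change the auto-created offsets topic to use log compaction,
# as Kafka Connect requires for its offset.storage.topic
docker exec kafka1 kafka-configs --bootstrap-server kafka1:9092 \
  --entity-type topics --entity-name docker-connect-offsets \
  --alter --add-config cleanup.policy=compact
```

Alternatively, I could delete the topic and let Connect recreate it with the right settings.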

Here’s my docker-compose file:

```yaml
version: "3"

services:

  zookeeper1:
    # image: zookeeper
    # image: wurstmeister/zookeeper
    image: confluentinc/cp-zookeeper
    restart: always
    container_name: zookeeper1
    hostname: zookeeper1
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVERS: "zookeeper1:2888:3888;zookeeper2:2888:3888"
    volumes:
      - ./zoo1/data:/var/lib/zookeeper/data
      - ./zoo1/datalog:/var/lib/zookeeper/log

  zookeeper2:
    # image: zookeeper
    # image: wurstmeister/zookeeper
    image: confluentinc/cp-zookeeper
    restart: always
    container_name: zookeeper2
    hostname: zookeeper2
    ports:
      - 2182:2182
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2182
      ZOOKEEPER_SERVERS: "zookeeper1:2888:3888;zookeeper2:2888:3888"
    volumes:
      - ./zoo2/data:/var/lib/zookeeper/data
      - ./zoo2/datalog:/var/lib/zookeeper/log

  kafka1:
    # image: wurstmeister/kafka
    image: confluentinc/cp-kafka
    restart: always
    container_name: kafka1
    ports:
      - 9092:9092
      - 9093:9093
      - 9094:9094
    depends_on:
      - zookeeper1
      - zookeeper2
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,HOST://0.0.0.0:9093,OUTSIDE://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka1:9092,HOST://localhost:9093,OUTSIDE://${HOST_IP}:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,HOST:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper1:2181,zookeeper2:2182"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - ./kafka1/data:/var/lib/kafka/data

  kafka2:
    # image: wurstmeister/kafka
    image: confluentinc/cp-kafka
    restart: always
    container_name: kafka2
    ports:
      - 9095:9095
      - 9096:9096
      - 9097:9097
    depends_on:
      - zookeeper1
      - zookeeper2
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9095,HOST://0.0.0.0:9096,OUTSIDE://0.0.0.0:9097
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka2:9095,HOST://localhost:9096,OUTSIDE://${HOST_IP}:9097
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,HOST:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper1:2181,zookeeper2:2182"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - ./kafka2/data:/var/lib/kafka/data

  kafka_manager:
    image: hlebalbau/kafka-manager:stable
    container_name: kafka-manager
    restart: always
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: "zookeeper1:2181,zookeeper2:2182"
      APPLICATION_SECRET: "random-secret"
    command: -Dpidfile.path=/dev/null

  kafka-connect:
    image: confluentinc/cp-kafka-connect-base
    container_name: kafka-connect
    restart: always
    depends_on:
      - zookeeper1
      - zookeeper2
      - kafka1
      - kafka2
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka1:9092,kafka2:9095"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components/"
    command:
      - bash
      - -c
      - |
        # Install connector plugins
        # This will by default install into /usr/share/confluent-hub-components/ so make
        # sure that this path is added to the plugin.path in the environment variables
        confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:11.0.0
        # Launch the Kafka Connect worker
        /etc/confluent/docker/run &
        # Don't exit
        sleep infinity
    volumes:
      - $PWD/data:/data

  kafka-connect-ui:
    image: landoop/kafka-connect-ui
    container_name: kafka-connect-ui
    ports:
      - 8000:8000
    environment:
      CONNECT_URL: http://kafka-connect:8083
```
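By the way, I don't think I've defined a healthcheck for the kafka-connect service in this file, so the unhealthy status presumably comes from the image or tooling. For reference, this is the kind of healthcheck I could add to the kafka-connect service (a sketch; it assumes curl is available in the image and that the REST port is 8083 as configured above):

```yaml
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:8083/ || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
```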

Any idea if I’m missing something? Thanks

Maybe you could start at the beginning: what are you trying to do? Why do you run two ZooKeepers and two Kafkas on one node? One is enough for testing; three on different nodes is kind of the minimum to ensure you don't lose data. And maybe more importantly, how much memory is available for Docker?

Hi gklijs, thanks for your reply.
I did test it with just one zookeeper and one kafka broker, and it works fine.
The reason I ran two ZooKeepers is that I saw an example docker-compose file for a Kafka cluster somewhere else that uses 3 ZooKeepers for 3 Kafka brokers on one node, so I thought they were paired, i.e. the number of ZooKeepers should match the number of brokers. (Maybe I'm wrong?)

Also, I do want to run 3 brokers on one node to share the workload and ensure I don't lose data, like you said. Since I only have one physical machine, is it pointless to spin up two or more Kafka brokers on the same node?

As for the memory concern: how do I check how much memory is available for Docker? I'm using Linux, so as I understand it, Docker can use all of the host's memory unless you deliberately limit it. I ran `docker info`, and it gave the following result (it seems I can use 16 GB of memory).
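The commands I used to check were along these lines (a sketch):

```sh
# Total memory visible to the Docker daemon (in bytes)
docker info --format '{{.MemTotal}}'

# Snapshot of per-container CPU and memory usage
docker stats --no-stream
```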

I would for now just stick to one ZooKeeper and one Kafka to keep things simple. As long as you only have one physical machine (it is not a VM?), there is a big risk of losing data anyway. And running fewer things also means you need fewer resources.

I guess memory should be fine, unless a lot of other things are also running.

Yes, the machine I'm currently using is a physical one, not a VM, and there are no other services consuming its resources for now.
But if the memory is enough, what would cause kafka-connect to become unhealthy when I run multiple Kafka instances at the same time?
Also, what would you suggest if I only have one machine but still want to run a FAKE Kafka cluster (because all brokers are on the same node) to share the workload? Is the docker-compose file I'm using now a viable way to go?

The problem is that you kind of need three or one, not two, at least for ZooKeeper. But I still don't understand why. If you have more load than one broker can handle, you also don't have enough memory and CPU.

You mean 1 zookeeper for 3 brokers?

Well, that's an option. I like to help, but it's hard when I don't know what you are trying to do.

Thanks man, I really appreciate it. What I'm trying to do is set up a data pipeline that collects Windows event logs from multiple servers using winlogbeat, then uses Kafka as a buffer since there may be tons of data, and finally uses kafka-connect to pull the data out of Kafka and send it to Elasticsearch.
The data flow would go like this:

winlogbeat -> Kafka -> kafka-connect -> Elasticsearch

Also, do you think it makes no difference whether I run 1 broker or 3 brokers on the same node, because they would have the same total data-ingestion capacity?

In that case I would request at least three machines for ZooKeeper + Kafka, and another three for kafka-connect. The configuration for rolling out on multiple machines is a lot different than on a single one, because of the ports and such.

In that case, that's six machines in total. LOL. What if I only have one machine available for Kafka and kafka-connect?

Hi @LeamonLee, if you only have one machine, the question would be: why bother using multiple nodes? :slight_smile:
Having multiple nodes protects you from the risk of losing everything when one machine has an issue. You could run multiple nodes on one physical machine by using virtualisation, but it would only make sense if that machine is built to be resilient to failures (multiple disks, multiple power supplies, and so on).
If you are running on a simple machine, then using a simple configuration, as @gklijs correctly advised, is probably your best bet. You can assign more resources to the single Kafka and Connect nodes, and that should do. If assigning more resources is not going to cut it, then you will need to reconsider your requirement to run on a single machine. Let us know how it goes!
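To make this concrete, a minimal single-node setup could look something like the sketch below (service names, listener names, and ports are just examples; your existing kafka-connect service would then point its bootstrap servers at `kafka:29092`):

```yaml
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # One listener for containers on the compose network, one for the host
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      # Single broker, so internal topics can only have replication factor 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```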


Hi @gianlucanatali, thanks so much for your suggestion. I think I know what I'm supposed to do now.
One more question: you mentioned that I could still run multiple nodes on one physical machine by using virtualization. Is there any difference, when it comes to setting up a cluster, if I don't use VMs but instead run multiple nodes on one physical machine directly with Docker? Like the right-hand side in the following diagram, which is the solution I wanted to go with; I suppose the left one is what you're talking about.

Hi @LeamonLee, of course, just go ahead with Docker only; I include Docker in the "virtualization" bucket :slight_smile:. The diagram on the left would be a bad design, as Docker itself does the virtualization on top of the OS/physical machine, as you can see represented in the Docker docs: What is a Container? | App Containerization | Docker.


You can find some great examples of docker-compose files in this GitHub repository.
If you still want to go with multiple nodes in docker-compose, have a look at the cp-demo GitHub repository.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.