Troubleshoot Deployment of Kafka Docker Image

I am trying to deploy a docker-compose.yaml file on a public cloud VM. This is the only file I need to deploy.

My compose file (see below) defines three services: a Kafka broker, ZooKeeper, and a Python producer (which just produces random numbers periodically and writes them to a Kafka topic; this image is published publicly).

I have meticulously followed two comprehensive tutorials on deploying a container on a VM using GCP and AWS EC2, but I am getting the following errors:

  • On AWS EC2 on Linux → I am unable to install docker-compose engine
  • On GCP VM on Ubuntu → I am able to run docker-compose up command but it throws an error stating the broker cannot be started

Link to AWS Deploy tutorial: https://www.youtube.com/watch?v=gRgdnHHuvoI&t=142s

Link to GCP Deploy Tutorial: https://www.youtube.com/watch?v=nt7fpz4JXzY

version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:latest
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0

  producer:
    image: daftpunkapi/flink_v1:latest
    depends_on:
      - broker
    restart: "on-failure"

Any help and guidance will be much appreciated - TIA!

Are you getting a specific error you’d like to share?

  1. A "docker-compose engine" does not need to be installed; docker compose is a command built into Docker Engine itself. (Notice the lack of a hyphen.)
  2. The Kafka broker does not start immediately. You need to make your Python code wait and retry until the broker is reachable (see the sketch after this list).
  3. If you only want to connect to Kafka from Python inside the Compose network, you can remove the localhost:9092 listener and all ports defined on the Kafka and ZooKeeper containers.
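For point 2, something like the following retry loop does the job. This is a minimal sketch assuming the kafka-python client; the topic name random-numbers is a placeholder, and broker:29092 is the internal listener from the compose file above.

import json
import random
import time

from kafka import KafkaProducer
from kafka.errors import NoBrokersAvailable


def connect_with_retry(bootstrap_servers, retries=10, delay=5):
    # Keep trying until the broker has finished starting up.
    for attempt in range(1, retries + 1):
        try:
            return KafkaProducer(
                bootstrap_servers=bootstrap_servers,
                value_serializer=lambda v: json.dumps(v).encode("utf-8"),
            )
        except NoBrokersAvailable:
            print(f"Broker not ready ({attempt}/{retries}), retrying in {delay}s")
            time.sleep(delay)
    raise RuntimeError("Kafka broker never became available")


producer = connect_with_retry("broker:29092")
while True:
    # Placeholder topic name; use whatever your producer image writes to.
    producer.send("random-numbers", {"value": random.random()})
    time.sleep(1)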

Hi

Thank you, I did not realize that the standalone docker-compose had been deprecated.
Now I am finally able to use the docker compose command to spin up containers from my YAML file.

I edited the YAML file to remove the Python image, so it now consists of only the ZooKeeper and Kafka broker services. However, the Kafka broker container keeps exiting, while the ZooKeeper container runs smoothly. I also edited the compose file to change the PLAINTEXT_HOST listener to :9092.

The log for the container looks like this:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Not enough space' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
An error report file with more information is saved as:
/home/appuser/hs_err_pid1.log

Is this related to the EBS volume attached to the instance?

(I am using a free tier EC2 instance with 1 GB EBS volume)

EBS is storage, not memory. Your error is saying that the RAM was exhausted. You will need a minimum of 4GB of RAM to run both Zookeeper and Kafka together on the same machine with Docker Compose.

A t3.micro only has 1 GB of memory available.

Is there a specific reason you cannot install Docker locally? Do you really need AWS/GCP to run services? And if so, you could use Confluent Cloud, rather than Docker.

I just wanted to test out AWS EC2, and since I was working on this project, I thought I would deploy it there.

I provisioned a t3.large instance and it worked perfectly.

Thank you for the help!!

EC2-based Kafka clients can also connect to Confluent Cloud with VPC peering or PrivateLink.
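For example, a producer running on EC2 only needs the cluster's bootstrap endpoint and an API key. Here is a minimal sketch assuming the confluent-kafka Python client; the endpoint, credentials, and topic name are placeholders you would take from your own Confluent Cloud cluster.

from confluent_kafka import Producer

# Placeholder endpoint and credentials; the real values come from the
# Confluent Cloud cluster settings and an API key created there.
conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
}

producer = Producer(conf)
producer.produce("random-numbers", value=b"42")  # placeholder topic name
producer.flush()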
