I am quite new to containers and Kafka, so excuse my very basic question(s).
I am trying to set up a new 3-node cluster using the CP Community edition.
I am struggling with the (supposedly easy) task of generating the CLUSTER_ID for KRaft mode.
The ID is supposed to be generated by
bin/kafka-storage.sh random-uuid
However, as I understand it, that is a command inside the Kafka Docker container.
Whenever I start the container it exits immediately since the CLUSTER_ID is not set, so I don't have a chance to create the ID and format the storage.
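(For anyone landing here later: as far as I can tell, the tool can be run in a throwaway container without ever starting the broker — the `--entrypoint` override skips the startup script that exits. Image name and tag here are assumptions, and the local fallback just produces an ID of the same shape, 16 random bytes base64url-encoded without padding, i.e. 22 characters — it is a stand-in, not the official tool.)

```shell
# Run the tool in a one-off container, without starting the broker
# (image name/tag are assumptions):
#   docker run --rm --entrypoint kafka-storage confluentinc/cp-kafka:7.7.1 random-uuid
#
# Without any Kafka binary at hand, an ID of the same shape
# (16 random bytes, base64url, no padding -> 22 chars) can be made locally:
CLUSTER_ID=$(head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=')
echo "$CLUSTER_ID"
```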
I am sure I must be missing something very basic, but I can’t seem to find it.
Thanks, I’ve seen that.
But I cannot execute
/bin/kafka-storage random-uuid
inside the Docker container, since it dies right away, and I don't have the binary on the host OS because Kafka isn't installed there.
I can of course use a random cluster ID picked up from the internet to get past that, but then the container still dies because the storage is not formatted properly.
I assume I am missing something obvious, but what?
Is there an option I can pass to prevent the container from dying when Kafka cannot start properly?
So that tells me that my custom startup config must be the culprit; that's very helpful.
I'll simplify mine or rebuild it based on the example.
Thanks a lot for that pointer.
Edit:
But anyhow, what is the envisioned workflow here?
Use a dummy (example) CLUSTER_ID to start, then run the commands to create a new one?
This was somehow missing from the documentation, or I didn't understand it.
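(The workflow, as I understand it now — treat this as my reading, not official guidance: with the Confluent images you never exec into the container to format storage; the image's startup script formats the data directory itself when CLUSTER_ID is set in the environment, and the same ID just has to be used on every node. A minimal compose sketch, where the service name, tag, and example ID are all assumptions:)

```yaml
# Sketch only; values are examples, not a complete working cluster config
services:
  kafka:
    image: confluentinc/cp-kafka:7.7.1
    environment:
      # any valid 22-char base64 ID; must be identical on every node
      CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qk"
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: "broker,controller"
```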
So, my first attempt to change things directly causes the next issue:
I basically replaced kafka-kraft with the host/server FQDN, and promptly the container crashes with
[2024-10-14 12:21:32,151] INFO [SocketServer listenerType=CONTROLLER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
[2024-10-14 12:21:32,154] ERROR Unable to start acceptor for ListenerName(CONTROLLER) (kafka.network.DataPlaneAcceptor)
org.apache.kafka.common.KafkaException: Socket server failed to bind to <fqdn>:29093: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
at kafka.network.Acceptor.start(SocketServer.scala:632)
I've turned off SELinux and firewalld to make sure that's not it. It works fine with the internal hostname, but I don't see why it won't with the actual hostname.
I found Kafka Listeners – Explained | Confluent | DE, but that didn't help much.
In our previous (non-Confluent, ZooKeeper-based) deployment we used the FQDN for the listeners just fine…
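(For context, the bind error above looks like the classic `listeners` vs. `advertised.listeners` distinction: inside the container's network namespace the host FQDN's address does not exist, so the socket cannot bind to it. The usual pattern, if I read the linked article correctly, is to bind to 0.0.0.0 and advertise the FQDN. Env-var names follow the Confluent image convention; hostnames and ports are examples:)

```yaml
# Bind inside the container, advertise the externally resolvable name (examples):
KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://broker1.example.com:9092"
```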
Hmm, which config did you use?
the one I’ve shared or something different?
Hi,
Yes, the one you shared: I put it in a file, replaced kafka-kraft with the FQDN, and then it does not start any more
(and also replaced 7.7.1 with latest, to be precise).
I tried changing the -h parameter and also leaving it out to use the OS hostname.
Do you suggest I need to provide a separate (internal) hostname to the container? How would I address that from the other Kafka containers if it is not known to DNS?
I looked at the production settings assuming that it might be explained in more detail there, but it's not mentioned there at all.
Maybe I am missing some basic understanding of how the communication works, sorry.
What would you like to achieve?
A multi-node Docker-based setup with CP and KRaft?
Well, things are a bit more complicated. In the end I need a 3-host setup based on Podman to replace our old 3-host ZK-based cluster, so the compose file probably won't work as-is. However, it should give me an idea of how the options should look, so I'll have a look.
Maybe it can shed light on why the (single, test) container won't start with the hostname. I will also need to get SSL encryption running again.
A separate question: I see that for a production setup it is suggested to separate broker and controller.
I could split those into two containers on the same host, but I'm not sure that's any better than having them in the same container.
If this is really necessary I could place them with the application (assuming either's resource use is small enough), but that would be a last-resort option.
We didn't have any issues with the 3-Kafka / 3-ZK container-based setup we had before; availability matters were handled by VMware.
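(On the broker/controller split: as far as I can tell the distinction is just the `process.roles` setting — combined mode is one process carrying both roles, split mode is two processes each carrying one. A properties sketch of the two variants, not a recommendation either way:)

```properties
# Combined mode: one process/container acts as both broker and controller
process.roles=broker,controller

# Split mode would instead be two separate processes, e.g.
#   process.roles=broker        (in the broker's config)
#   process.roles=controller    (in the controller's config)
```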
I tried both, and neither works (the container dies), while it does work when I don't use the actual hostname. But then I don't see how this would work in a multi-host cluster, unless I make kafka-kraft, or kafka and controller 1-3 from the Docker Compose generator, valid (resolvable) hostnames.
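(That matches my — possibly wrong — understanding: in a multi-host setup the controller quorum has to be listed with names every node can resolve, i.e. real FQDNs rather than compose-internal service names. A sketch with assumed hostnames:)

```yaml
# node-id@host:controller-port for each controller; hostnames are examples
KAFKA_CONTROLLER_QUORUM_VOTERS: "1@host1.example.com:29093,2@host2.example.com:29093,3@host3.example.com:29093"
```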
And while I am asking stupid questions: labels can be adjusted, right?
So I can replace PLAINTEXT_HOST with PLAINTEXT_dolphin to make things clearer?
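(If I read the docs right, listener names are arbitrary labels as long as each one is mapped to a security protocol in the protocol map — though I believe Kafka normalizes them to upper case, so PLAINTEXT_DOLPHIN rather than PLAINTEXT_dolphin. A sketch with assumed hostnames and ports:)

```yaml
# Custom listener label, mapped to a protocol (values are examples):
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "PLAINTEXT_DOLPHIN:PLAINTEXT,CONTROLLER:PLAINTEXT"
KAFKA_LISTENERS: "PLAINTEXT_DOLPHIN://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT_DOLPHIN://dolphin.example.com:9092"
```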