I did some tests locally and added my examples here.
Thanks a lot for this
I have not tested it yet; I need to convert it to a shell script since we don't use docker/compose (we use podman).
I assume it is advisable to separate controller and broker into two containers, but that they can run on the same host with sufficient resources (in production)?
I see that there are still references to internal names (KAFKA_LISTENERS: CONTROLLER://controller-1:19091) - I assume this works because it's inside that container itself?
And the kafka-1 healthcheck (presumably performed by docker) can identify the container via its label?
Assuming a yes on the above, that only leaves
KAFKA_JMX_HOSTNAME: controller-1
and
(kafka-1): KAFKA_JMX_HOSTNAME: localhost
unclear - I assume that's just a label for JMX and controller-1 through controller-3 are fine, but why localhost for kafka-1?
I assume I could put vm01_controller-1 and vm01_kafka-1 instead?
Thanks very much,
cheers
I assume it is advisable to separate controller and broker into two containers, but that they can run on the same host with sufficient resources (in production)?
It should be possible, yes. It's not generally recommended, though I see a lot of setups out there with such a config.
I see that there are still references to internal names (KAFKA_LISTENERS: CONTROLLER://controller-1:19091) - I assume this works because it's inside that container itself?
Correct, that's docker-internal.
And the kafka-1 healthcheck (presumably performed by docker) can identify the container via its label?
Is the question how the connection works? The connection is made docker-internally, so it works as designed.
KAFKA_JMX_HOSTNAME: controller-1
and
(kafka-1): KAFKA_JMX_HOSTNAME: localhost
unclear - I assume that's just a label for JMX and controller-1 through controller-3 are fine, but why localhost for kafka-1?
I assume I could put vm01_controller-1 and vm01_kafka-1 instead?
You're right, I missed that; it should be kafka-1.
Though you could also try it with the configs you mentioned above.
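For reference, a hedged sketch of how these two env vars are typically set in the cp-kafka images (the port 9101 is an assumed example): KAFKA_JMX_HOSTNAME ends up as `-Djava.rmi.server.hostname`, i.e. the address a remote JMX client is told to connect back to. It therefore needs to be resolvable from wherever the JMX client runs, so per-host names like vm01_kafka-1 should work if they resolve.

```shell
# Assumed example (not from the thread's compose file): enable remote JMX
# on an example port and advertise a name the JMX client can resolve.
-e KAFKA_JMX_PORT=9101 \
-e KAFKA_JMX_HOSTNAME='kafka-1' \
```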
best,
michael
This is not working as hoped yet…
The controllers are up and appear to be able to communicate, but something still seems to be wrong with my kafka containers.
They all start up and then die since they cannot assign the requested address.
Socket server failed to bind to vm80:9091: Cannot assign requested address.
[2024-11-05 13:05:45,900] INFO [BrokerServer id=4] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-11-05 13:05:45,941] INFO [BrokerLifecycleManager id=4] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-11-05 13:05:45,941] INFO [BrokerServer id=4] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-11-05 13:05:45,943] INFO authorizerStart completed for endpoint PLAINTEXT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-05 13:05:45,943] INFO authorizerStart completed for endpoint EXTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-05 13:05:45,944] INFO [SocketServer listenerType=BROKER, nodeId=4] Enabling request processing. (kafka.network.SocketServer)
[2024-11-05 13:05:45,946] ERROR Unable to start acceptor for ListenerName(EXTERNAL) (kafka.network.DataPlaneAcceptor)
org.apache.kafka.common.KafkaException: Socket server failed to bind to vm80:9091: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
at kafka.network.SocketServer.chainAcceptorFuture$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$5(SocketServer.scala:229)
at java.base/java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780)
at kafka.network.SocketServer.enableRequestProcessing(SocketServer.scala:229)
at kafka.server.BrokerServer.startup(BrokerServer.scala:536)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:730)
… 17 more
[2024-11-05 13:05:45,948] ERROR Unable to start acceptor for ListenerName(PLAINTEXT) (kafka.network.DataPlaneAcceptor)
org.apache.kafka.common.KafkaException: Socket server failed to bind to vm80:19094: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
:
I assume it must be a general error since all 3 kafka instances have the same issue.
The only idea I came up with was name resolution, but that works fine within the controller container, so why wouldn't it in the kafka one…
Kafka ports are also not in use…
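Since name resolution is the main suspect, it can be checked directly: a bind() only succeeds for an address assigned to a local interface (or 0.0.0.0). A diagnostic sketch (hostnames from this thread; `getent` and `ip` are assumed to be present in the image):

```shell
# Check what the listener hostname resolves to, then compare against the
# addresses actually available in this network namespace. Run inside the
# failing kafka container with host="vm80"; localhost is a stand-in here.
host="localhost"
resolved=$(getent hosts "$host" | awk '{print $1}' | head -n1)
echo "resolves to: $resolved"
# Addresses the kernel will accept in bind(); the resolved IP must appear
# here, otherwise you get "Cannot assign requested address".
ip -4 -o addr show | awk '{print $4}'
```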
kafka-1 startup script
podman run -d \
  --name kafka-1 \
  -h vm80 \
  -p 9091:9091 \
  -p 19094:19094 \
  -e KAFKA_LISTENERS='PLAINTEXT://vm80:19094,EXTERNAL://vm80:9091' \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT' \
  -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://vm80:19094,EXTERNAL://vm80::9091' \
  -e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
  -e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
  -e KAFKA_BROKER_RACK='rack-0' \
  -e KAFKA_MIN_INSYNC_REPLICAS=2 \
  -e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR=2 \
  -e KAFKA_CONFLUENT_CLUSTER_LINK_ENABLE='true' \
  -e KAFKA_CONFLUENT_REPORTERS_TELEMETRY_AUTO_ENABLE='false' \
  -e KAFKA_NODE_ID=4 \
  -e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS='1@vm80:19091,2@vm82:19092,3@vm83:19093' \
  -e KAFKA_PROCESS_ROLES='broker' \
  -e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
  confluentinc/cp-kafka:latest
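One detail that stands out in the script above: EXTERNAL://vm80::9091 in KAFKA_ADVERTISED_LISTENERS has a double colon. Separately, a hypothetical, untested variant of the two listener lines: binding on 0.0.0.0 while still advertising vm80 would decouple the bind from what vm80 resolves to inside the container's network namespace.

```shell
# Hypothetical variant (untested): bind every interface in the container,
# advertise the externally resolvable hostname to clients and brokers.
-e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:19094,EXTERNAL://0.0.0.0:9091' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://vm80:19094,EXTERNAL://vm80:9091' \
```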
Thanks
Is there any policy on the OS which may prevent the usage of these ports?
check this with
netstat -na | grep :9091
try to open port via nc
ls | nc -l -p 9091
There is no other process blocking the ports; we disabled SELinux and checked security settings. Nothing.
Also, this does not seem to prevent the controller from opening arbitrarily high ports, so why would this happen to kafka if this were a system problem…
I also tried running only kafka, allowing all ports (-P), and different ways to set the names; nothing worked.
At this point I think my config is to blame, but I can't see the difference from the one you provided (which I assume you tested).
Let me attach the full log and startup script here; maybe you can spot an error…
Thank you
startup script
podman run -d \
  --name kafka-2 \
  -h vm81 \
  -p 9092:9092 \
  -p 19095:19095 \
  -e KAFKA_LISTENERS='PLAINTEXT://vm81:19095, EXTERNAL://vm81:9092' \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT, PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT' \
  -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://vm81:19095, EXTERNAL://vm81:9092' \
  -e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
  -e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
  -e KAFKA_BROKER_RACK='rack-0' \
  -e KAFKA_MIN_INSYNC_REPLICAS=2 \
  -e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR=2 \
  -e KAFKA_CONFLUENT_CLUSTER_LINK_ENABLE='true' \
  -e KAFKA_CONFLUENT_REPORTERS_TELEMETRY_AUTO_ENABLE='false' \
  -e KAFKA_NODE_ID=5 \
  -e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS='1@vm80:19091,2@vm81:19092,3@vm82:19093' \
  -e KAFKA_PROCESS_ROLES='broker' \
  -e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
  confluentinc/cp-kafka:latest
Log startup - part 1
===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring …
Running in KRaft mode…
===> Running preflight checks …
===> Check if /var/lib/kafka/data is writable …
===> Running in KRaft mode, skipping Zookeeper health check…
===> Using provided cluster id MkU3OEVBNTcwNTJENDM2Qk …
===> Launching …
===> Launching kafka …
[2024-11-06 10:00:07,457] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-11-06 10:00:07,702] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-11-06 10:00:07,818] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-11-06 10:00:07,822] INFO [BrokerServer id=5] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2024-11-06 10:00:07,823] INFO [SharedServer id=5] Starting SharedServer (kafka.server.SharedServer)
[2024-11-06 10:00:07,885] INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-11-06 10:00:07,885] INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-11-06 10:00:07,886] INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Producer state recovery took 0ms for snapshot load and 1ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-11-06 10:00:07,918] INFO Initialized snapshots with IDs SortedSet() from /var/lib/kafka/data/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2024-11-06 10:00:07,945] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-11-06 10:00:08,099] INFO [RaftManager id=5] Completed transition to Unattached(epoch=0, voters=[1, 2, 3], electionTimeoutMs=1813) from null (org.apache.kafka.raft.QuorumState)
[2024-11-06 10:00:08,113] INFO [kafka-5-raft-outbound-request-thread]: Starting (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-11-06 10:00:08,113] INFO [kafka-5-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-11-06 10:00:08,144] INFO [MetadataLoader id=5] initializeNewPublishers: the loader is still catching up because we still don’t know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,146] INFO [BrokerServer id=5] Starting broker (kafka.server.BrokerServer)
[2024-11-06 10:00:08,177] INFO [broker-5-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:08,181] INFO [broker-5-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:08,182] INFO [broker-5-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:08,183] INFO [broker-5-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:08,225] INFO [BrokerServer id=5] Waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-11-06 10:00:08,226] INFO [BrokerServer id=5] Finished waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-11-06 10:00:08,234] INFO [broker-5-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,245] INFO [MetadataLoader id=5] initializeNewPublishers: the loader is still catching up because we still don’t know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,296] INFO [RaftManager id=5] Registered the listener org.apache.kafka.image.loader.MetadataLoader@1862264113 (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-06 10:00:08,350] INFO [MetadataLoader id=5] initializeNewPublishers: the loader is still catching up because we still don’t know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,414] INFO [RaftManager id=5] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=6, leaderId=1, voters=[1, 2, 3], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from Unattached(epoch=0, voters=[1, 2, 3], electionTimeoutMs=1813) (org.apache.kafka.raft.QuorumState)
[2024-11-06 10:00:08,447] INFO [broker-5-to-controller-forwarding-channel-manager]: Recorded new KRaft controller, from now on will use node vm80:19091 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,448] INFO [RaftManager id=5] Fetching snapshot OffsetAndEpoch(offset=7235, epoch=6) from Fetch response from leader 1 (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-06 10:00:08,462] INFO [MetadataLoader id=5] initializeNewPublishers: the loader is still catching up because we still don’t know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,490] INFO [LocalLog partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Deleting segments as part of log truncation: LogSegment(baseOffset=0, size=0, lastModifiedTime=1730887207874, largestRecordTimestamp=-1) (kafka.log.LocalLog)
[2024-11-06 10:00:08,499] INFO [UnifiedLog partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Loading producer state till offset 7235 with message format version 2 (kafka.log.UnifiedLog$)
[2024-11-06 10:00:08,499] INFO [UnifiedLog partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 7235 (kafka.log.UnifiedLog$)
[2024-11-06 10:00:08,499] INFO [UnifiedLog partition=__cluster_metadata-0, dir=/var/lib/kafka/data] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 7235 (kafka.log.UnifiedLog$)
[2024-11-06 10:00:08,501] INFO [RaftManager id=5] Fully truncated the log at (LogOffsetMetadata(offset=7235, metadata=Optional[(segmentBaseOffset=7235,relativePositionInSegment=0)]), 6) after downloading snapshot OffsetAndEpoch(offset=7235, epoch=6) from leader 1 (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-06 10:00:08,503] INFO [RaftManager id=5] High watermark set to Optional[LogOffsetMetadata(offset=7235, metadata=Optional.empty)] for the first time for epoch 6 (org.apache.kafka.raft.FollowerState)
[2024-11-06 10:00:08,514] INFO [MetadataLoader id=5] handleLoadSnapshot(00000000000000007235-0000000006): incrementing HandleLoadSnapshotCount to 1. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,528] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-11-06 10:00:08,543] INFO [MetadataLoader id=5] handleLoadSnapshot(00000000000000007235-0000000006): generated a metadata delta between offset -1 and this snapshot in 28829 us. (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,543] INFO [MetadataLoader id=5] maybePublishMetadata(SNAPSHOT): The loader is still catching up because we have loaded up to offset 7234, but the high water mark is 7278 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,545] INFO [SocketServer listenerType=BROKER, nodeId=5] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2024-11-06 10:00:08,546] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-11-06 10:00:08,549] INFO [SocketServer listenerType=BROKER, nodeId=5] Created data-plane acceptor and processors for endpoint : ListenerName(EXTERNAL) (kafka.network.SocketServer)
[2024-11-06 10:00:08,555] INFO [broker-5-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,555] INFO [broker-5-to-controller-alter-partition-channel-manager]: Recorded new KRaft controller, from now on will use node vm80:19091 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,566] INFO [MetadataLoader id=5] maybePublishMetadata(LOG_DELTA): The loader finished catching up to the current high water mark of 7278 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,566] INFO [broker-5-to-controller-directory-assignments-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,567] INFO [broker-5-to-controller-directory-assignments-channel-manager]: Recorded new KRaft controller, from now on will use node vm80:19091 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,573] INFO [MetadataLoader id=5] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 7277 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,585] INFO [ExpirationReaper-5-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,587] INFO [ExpirationReaper-5-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,588] INFO [ExpirationReaper-5-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,589] INFO [ExpirationReaper-5-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,590] INFO [ExpirationReaper-5-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,604] INFO [ExpirationReaper-5-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,604] INFO [ExpirationReaper-5-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,632] INFO Unable to read the broker epoch in /var/lib/kafka/data. (kafka.log.LogManager)
[2024-11-06 10:00:08,633] INFO [broker-5-to-controller-heartbeat-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,633] INFO [broker-5-to-controller-heartbeat-channel-manager]: Recorded new KRaft controller, from now on will use node vm80:19091 (id: 1 rack: null) (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,641] INFO [BrokerLifecycleManager id=5] Incarnation tQ6GNsqARwGJo0_0zc2xSQ of broker 5 in cluster MkU3OEVBNTcwNTJENDM2Qk is now STARTING. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,658] INFO [ExpirationReaper-5-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,688] INFO [BrokerLifecycleManager id=5] Successfully registered broker 5 with broker epoch 7278 (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,690] INFO [BrokerServer id=5] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-11-06 10:00:08,690] INFO [BrokerServer id=5] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-11-06 10:00:08,690] INFO [BrokerServer id=5] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-11-06 10:00:08,691] INFO [MetadataLoader id=5] InitializeNewPublishers: initializing MetadataVersionPublisher(id=5) with a snapshot at offset 7277 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,691] INFO [MetadataLoader id=5] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 7277 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,691] INFO [BrokerMetadataPublisher id=5] Publishing initial metadata at offset OffsetAndEpoch(offset=7277, epoch=6) with metadata.version 3.7-IV4. (kafka.server.metadata.BrokerMetadataPublisher)
[2024-11-06 10:00:08,692] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
[2024-11-06 10:00:08,695] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
[2024-11-06 10:00:08,705] INFO Loaded 0 logs in 12ms (kafka.log.LogManager)
[2024-11-06 10:00:08,705] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-11-06 10:00:08,706] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-11-06 10:00:08,715] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2024-11-06 10:00:08,781] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
[2024-11-06 10:00:08,795] INFO [GroupCoordinator 5]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-11-06 10:00:08,796] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-11-06 10:00:08,796] INFO [AddPartitionsToTxnSenderThread-5]: Starting (kafka.server.AddPartitionsToTxnManager)
[2024-11-06 10:00:08,797] INFO [GroupCoordinator 5]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-11-06 10:00:08,798] INFO [TransactionCoordinator id=5] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-11-06 10:00:08,808] INFO [TransactionCoordinator id=5] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-11-06 10:00:08,809] INFO [TxnMarkerSenderThread-5]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-11-06 10:00:08,819] INFO [MetadataLoader id=5] InitializeNewPublishers: initializing BrokerRegistrationTracker(id=5) with a snapshot at offset 7277 (org.apache.kafka.image.loader.MetadataLoader)
[2024-11-06 10:00:08,827] INFO [BrokerLifecycleManager id=5] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,828] INFO [BrokerServer id=5] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-11-06 10:00:08,828] INFO [BrokerServer id=5] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-11-06 10:00:08,828] INFO [BrokerServer id=5] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer)
[2024-11-06 10:00:08,829] INFO KafkaConfig values:
advertised.listeners = PLAINTEXT://vm81:19095, EXTERNAL://vm81:9092
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.include.jmx.reporter = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 5
broker.id.generation.enable = true
broker.rack = rack-0
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = CONTROLLER
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = [1@vm80:19091, 2@vm81:19092, 3@vm82:19093]
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
early.start.listeners = null
eligible.leader.replicas.enable = false
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
group.consumer.heartbeat.interval.ms = 5000
group.consumer.max.heartbeat.interval.ms = 15000
group.consumer.max.session.timeout.ms = 60000
group.consumer.max.size = 2147483647
group.consumer.min.heartbeat.interval.ms = 5000
group.consumer.min.session.timeout.ms = 45000
group.consumer.session.timeout.ms = 45000
group.coordinator.new.enable = false
group.coordinator.rebalance.protocols = [classic]
group.coordinator.threads = 1
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = PLAINTEXT
inter.broker.protocol.version = 3.7-IV4
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters =
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = CONTROLLER:PLAINTEXT, PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
listeners = PLAINTEXT://vm81:19095, EXTERNAL://vm81:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /var/lib/kafka/data
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.local.retention.bytes = -2
log.local.retention.ms = -2
log.message.downconversion.enable = true
log.message.format.version = 3.0-IV1
log.message.timestamp.after.max.ms = 9223372036854775807
log.message.timestamp.before.max.ms = 9223372036854775807
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metadata.log.max.record.bytes.between.snapshots = 20971520
metadata.log.max.snapshot.interval.ms = 3600000
metadata.log.segment.bytes = 1073741824
metadata.log.segment.min.bytes = 8388608
metadata.log.segment.ms = 604800000
metadata.max.idle.interval.ms = 500
metadata.max.retention.bytes = 104857600
metadata.max.retention.ms = 604800000
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 2
node.id = 5
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
process.roles = [broker]
producer.id.expiration.check.interval.ms = 600000
producer.id.expiration.ms = 86400000
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.window.num = 11
quota.window.size.seconds = 1
remote.log.index.file.cache.total.size.bytes = 1073741824
remote.log.manager.task.interval.ms = 30000
remote.log.manager.task.retry.backoff.max.ms = 30000
remote.log.manager.task.retry.backoff.ms = 500
remote.log.manager.task.retry.jitter = 0.2
remote.log.manager.thread.pool.size = 10
remote.log.metadata.custom.metadata.max.bytes = 128
remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.class.path = null
remote.log.metadata.manager.impl.prefix = rlmm.config.
remote.log.metadata.manager.listener.name = null
remote.log.reader.max.pending.tasks = 100
remote.log.reader.threads = 10
remote.log.storage.manager.class.name = null
remote.log.storage.manager.class.path = null
remote.log.storage.manager.impl.prefix = rsm.config.
remote.log.storage.system.enable = false
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288
security.inter.broker.protocol = PLAINTEXT
security.providers = null
server.max.startup.time.ms = 9223372036854775807
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.listen.backlog.size = 50
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.allow.dn.changes = false
ssl.allow.san.changes = false
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
telemetry.max.bytes = 1048576
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.partition.verification.enable = true
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
unstable.api.versions.enable = false
unstable.metadata.versions.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect =
zookeeper.connection.timeout.ms = null
zookeeper.max.in.flight.requests = 10
zookeeper.metadata.migration.enable = false
zookeeper.metadata.migration.min.batch.size = 200
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
(kafka.server.KafkaConfig)
And the actual errors:
Log errors (part 2)
[2024-11-06 10:00:08,839] INFO [BrokerLifecycleManager id=5] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,847] INFO [BrokerServer id=5] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-11-06 10:00:08,884] INFO [BrokerLifecycleManager id=5] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,885] INFO [BrokerServer id=5] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
[2024-11-06 10:00:08,889] INFO authorizerStart completed for endpoint PLAINTEXT. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-06 10:00:08,889] INFO authorizerStart completed for endpoint EXTERNAL. Endpoint is now READY. (org.apache.kafka.server.network.EndpointReadyFutures)
[2024-11-06 10:00:08,889] INFO [SocketServer listenerType=BROKER, nodeId=5] Enabling request processing. (kafka.network.SocketServer)
[2024-11-06 10:00:08,892] ERROR Unable to start acceptor for ListenerName(PLAINTEXT) (kafka.network.DataPlaneAcceptor)
org.apache.kafka.common.KafkaException: Socket server failed to bind to vm81:19095: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
at kafka.network.SocketServer.chainAcceptorFuture$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$5(SocketServer.scala:229)
at java.base/java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780)
at kafka.network.SocketServer.enableRequestProcessing(SocketServer.scala:229)
at kafka.server.BrokerServer.startup(BrokerServer.scala:536)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:730)
… 17 more
[2024-11-06 10:00:08,896] ERROR Unable to start acceptor for ListenerName(EXTERNAL) (kafka.network.DataPlaneAcceptor)
org.apache.kafka.common.KafkaException: Socket server failed to bind to vm81:9092: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
at kafka.network.SocketServer.chainAcceptorFuture$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$5(SocketServer.scala:229)
at java.base/java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780)
at kafka.network.SocketServer.enableRequestProcessing(SocketServer.scala:229)
at kafka.server.BrokerServer.startup(BrokerServer.scala:536)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:730)
… 17 more
[2024-11-06 10:00:08,898] INFO [BrokerServer id=5] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-11-06 10:00:08,899] INFO [BrokerServer id=5] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2024-11-06 10:00:08,899] INFO [BrokerServer id=5] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2024-11-06 10:00:08,899] ERROR [BrokerServer id=5] Received a fatal error while waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
java.lang.RuntimeException: Unable to start acceptor for ListenerName(PLAINTEXT)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:652)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
at kafka.network.SocketServer.chainAcceptorFuture$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$5(SocketServer.scala:229)
at java.base/java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780)
at kafka.network.SocketServer.enableRequestProcessing(SocketServer.scala:229)
at kafka.server.BrokerServer.startup(BrokerServer.scala:536)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.KafkaException: Socket server failed to bind to vm81:19095: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
… 16 more
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:730)
… 17 more
[2024-11-06 10:00:08,899] INFO [BrokerServer id=5] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2024-11-06 10:00:08,901] ERROR [BrokerServer id=5] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
java.lang.RuntimeException: Received a fatal error while waiting for all of the SocketServer Acceptors to be started
at org.apache.kafka.server.util.FutureUtils.waitWithLogging(FutureUtils.java:68)
at kafka.server.BrokerServer.startup(BrokerServer.scala:546)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.RuntimeException: Unable to start acceptor for ListenerName(PLAINTEXT)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:652)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
at kafka.network.SocketServer.chainAcceptorFuture$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$5(SocketServer.scala:229)
at java.base/java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780)
at kafka.network.SocketServer.enableRequestProcessing(SocketServer.scala:229)
at kafka.server.BrokerServer.startup(BrokerServer.scala:536)
… 6 more
Caused by: org.apache.kafka.common.KafkaException: Socket server failed to bind to vm81:19095: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
… 16 more
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:730)
… 17 more
[2024-11-06 10:00:08,902] INFO [BrokerServer id=5] Transition from STARTED to SHUTTING_DOWN (kafka.server.BrokerServer)
[2024-11-06 10:00:08,902] INFO [BrokerServer id=5] shutting down (kafka.server.BrokerServer)
[2024-11-06 10:00:08,903] INFO [BrokerLifecycleManager id=5] Beginning controlled shutdown. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,940] INFO [BrokerLifecycleManager id=5] The controller has asked us to exit controlled shutdown. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,941] INFO [BrokerLifecycleManager id=5] beginShutdown: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:08,942] INFO [BrokerLifecycleManager id=5] Transitioning from PENDING_CONTROLLED_SHUTDOWN to SHUTTING_DOWN. (kafka.server.BrokerLifecycleManager)
[2024-11-06 10:00:08,942] INFO [broker-5-to-controller-heartbeat-channel-manager]: Shutting down (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,942] INFO [broker-5-to-controller-heartbeat-channel-manager]: Stopped (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,944] INFO [broker-5-to-controller-heartbeat-channel-manager]: Shutdown completed (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,954] INFO Node to controller channel manager for heartbeat shutdown (kafka.server.NodeToControllerChannelManagerImpl)
[2024-11-06 10:00:08,955] INFO [SocketServer listenerType=BROKER, nodeId=5] Stopping socket server request processors (kafka.network.SocketServer)
[2024-11-06 10:00:08,959] INFO [SocketServer listenerType=BROKER, nodeId=5] Stopped socket server request processors (kafka.network.SocketServer)
[2024-11-06 10:00:08,960] INFO [data-plane Kafka Request Handler on Broker 5], shutting down (kafka.server.KafkaRequestHandlerPool)
[2024-11-06 10:00:08,965] INFO [data-plane Kafka Request Handler on Broker 5], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2024-11-06 10:00:08,966] INFO [ExpirationReaper-5-AlterAcls]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,967] INFO [ExpirationReaper-5-AlterAcls]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,967] INFO [ExpirationReaper-5-AlterAcls]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,968] INFO [KafkaApi-5] Shutdown complete. (kafka.server.KafkaApis)
[2024-11-06 10:00:08,970] INFO [TransactionCoordinator id=5] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-11-06 10:00:08,971] INFO [Transaction State Manager 5]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager)
[2024-11-06 10:00:08,971] INFO [TxnMarkerSenderThread-5]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-11-06 10:00:08,971] INFO [TxnMarkerSenderThread-5]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-11-06 10:00:08,971] INFO [TxnMarkerSenderThread-5]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-11-06 10:00:08,975] INFO [TransactionCoordinator id=5] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-11-06 10:00:08,976] INFO [GroupCoordinator 5]: Shutting down. (kafka.coordinator.group.GroupCoordinator)
[2024-11-06 10:00:08,982] INFO [ExpirationReaper-5-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,982] INFO [ExpirationReaper-5-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,982] INFO [ExpirationReaper-5-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,983] INFO [ExpirationReaper-5-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,983] INFO [ExpirationReaper-5-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,983] INFO [ExpirationReaper-5-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,983] INFO [GroupCoordinator 5]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator)
[2024-11-06 10:00:08,984] INFO [AssignmentsManager id=5]KafkaEventQueue#close: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:08,984] INFO [broker-5-to-controller-directory-assignments-channel-manager]: Shutting down (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,984] INFO [broker-5-to-controller-directory-assignments-channel-manager]: Stopped (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,984] INFO [broker-5-to-controller-directory-assignments-channel-manager]: Shutdown completed (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,985] INFO Node to controller channel manager for directory-assignments shutdown (kafka.server.NodeToControllerChannelManagerImpl)
[2024-11-06 10:00:08,985] INFO [AssignmentsManager id=5]closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:08,985] INFO [ReplicaManager broker=5] Shutting down (kafka.server.ReplicaManager)
[2024-11-06 10:00:08,986] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-11-06 10:00:08,986] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-11-06 10:00:08,986] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-11-06 10:00:08,986] INFO [ReplicaFetcherManager on broker 5] shutting down (kafka.server.ReplicaFetcherManager)
[2024-11-06 10:00:08,987] INFO [ReplicaFetcherManager on broker 5] shutdown completed (kafka.server.ReplicaFetcherManager)
[2024-11-06 10:00:08,987] INFO [ReplicaAlterLogDirsManager on broker 5] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2024-11-06 10:00:08,988] INFO [ReplicaAlterLogDirsManager on broker 5] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2024-11-06 10:00:08,988] INFO [ExpirationReaper-5-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,988] INFO [ExpirationReaper-5-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,988] INFO [ExpirationReaper-5-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,988] INFO [ExpirationReaper-5-RemoteFetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,990] INFO [ExpirationReaper-5-RemoteFetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,990] INFO [ExpirationReaper-5-RemoteFetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,990] INFO [ExpirationReaper-5-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,990] INFO [ExpirationReaper-5-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,990] INFO [ExpirationReaper-5-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,991] INFO [ExpirationReaper-5-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,992] INFO [ExpirationReaper-5-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,992] INFO [ExpirationReaper-5-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,992] INFO [ExpirationReaper-5-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,992] INFO [ExpirationReaper-5-ElectLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,992] INFO [ExpirationReaper-5-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-11-06 10:00:08,993] INFO [AddPartitionsToTxnSenderThread-5]: Shutting down (kafka.server.AddPartitionsToTxnManager)
[2024-11-06 10:00:08,995] INFO [AddPartitionsToTxnSenderThread-5]: Stopped (kafka.server.AddPartitionsToTxnManager)
[2024-11-06 10:00:08,995] INFO [AddPartitionsToTxnSenderThread-5]: Shutdown completed (kafka.server.AddPartitionsToTxnManager)
[2024-11-06 10:00:08,995] INFO [ReplicaManager broker=5] Shut down completely (kafka.server.ReplicaManager)
[2024-11-06 10:00:08,996] INFO [broker-5-to-controller-alter-partition-channel-manager]: Shutting down (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,996] INFO [broker-5-to-controller-alter-partition-channel-manager]: Stopped (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,996] INFO [broker-5-to-controller-alter-partition-channel-manager]: Shutdown completed (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,996] INFO Node to controller channel manager for alter-partition shutdown (kafka.server.NodeToControllerChannelManagerImpl)
[2024-11-06 10:00:08,996] INFO [broker-5-to-controller-forwarding-channel-manager]: Shutting down (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,998] INFO [broker-5-to-controller-forwarding-channel-manager]: Stopped (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,998] INFO [broker-5-to-controller-forwarding-channel-manager]: Shutdown completed (kafka.server.NodeToControllerRequestThread)
[2024-11-06 10:00:08,999] INFO Node to controller channel manager for forwarding shutdown (kafka.server.NodeToControllerChannelManagerImpl)
[2024-11-06 10:00:09,000] INFO Shutting down. (kafka.log.LogManager)
[2024-11-06 10:00:09,002] INFO Shutting down the log cleaner. (kafka.log.LogCleaner)
[2024-11-06 10:00:09,003] INFO [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner$CleanerThread)
[2024-11-06 10:00:09,009] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner$CleanerThread)
[2024-11-06 10:00:09,009] INFO [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner$CleanerThread)
[2024-11-06 10:00:09,058] INFO Shutdown complete. (kafka.log.LogManager)
[2024-11-06 10:00:09,059] INFO [broker-5-ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,060] INFO [broker-5-ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,060] INFO [broker-5-ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,060] INFO [broker-5-ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,060] INFO [broker-5-ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,060] INFO [broker-5-ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,060] INFO [broker-5-ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,062] INFO [broker-5-ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,062] INFO [broker-5-ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,062] INFO [broker-5-ThrottledChannelReaper-ControllerMutation]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,062] INFO [broker-5-ThrottledChannelReaper-ControllerMutation]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,062] INFO [broker-5-ThrottledChannelReaper-ControllerMutation]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-11-06 10:00:09,063] INFO [SocketServer listenerType=BROKER, nodeId=5] Shutting down socket server (kafka.network.SocketServer)
[2024-11-06 10:00:09,078] INFO [SocketServer listenerType=BROKER, nodeId=5] Shutdown completed (kafka.network.SocketServer)
[2024-11-06 10:00:09,080] INFO Broker and topic stats closed (kafka.server.BrokerTopicStats)
[2024-11-06 10:00:09,080] INFO [BrokerLifecycleManager id=5] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:09,081] INFO [SharedServer id=5] Stopping SharedServer (kafka.server.SharedServer)
[2024-11-06 10:00:09,081] INFO [MetadataLoader id=5] beginShutdown: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:09,081] INFO [SnapshotGenerator id=5] close: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:09,089] INFO [SnapshotGenerator id=5] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:09,089] INFO [MetadataLoader id=5] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:09,091] INFO [SnapshotGenerator id=5] closed event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2024-11-06 10:00:09,091] INFO [raft-expiration-reaper]: Shutting down (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-11-06 10:00:09,154] INFO [raft-expiration-reaper]: Stopped (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-11-06 10:00:09,154] INFO [raft-expiration-reaper]: Shutdown completed (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-11-06 10:00:09,155] INFO [kafka-5-raft-io-thread]: Shutting down (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-11-06 10:00:09,155] INFO [RaftManager id=5] Beginning graceful shutdown (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-06 10:00:09,156] INFO [RaftManager id=5] Graceful shutdown completed (org.apache.kafka.raft.KafkaRaftClient)
[2024-11-06 10:00:09,157] INFO [kafka-5-raft-io-thread]: Completed graceful shutdown of RaftClient (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-11-06 10:00:09,157] INFO [kafka-5-raft-io-thread]: Stopped (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-11-06 10:00:09,157] INFO [kafka-5-raft-io-thread]: Shutdown completed (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-11-06 10:00:09,164] INFO [kafka-5-raft-outbound-request-thread]: Shutting down (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-11-06 10:00:09,164] INFO [kafka-5-raft-outbound-request-thread]: Stopped (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-11-06 10:00:09,164] INFO [kafka-5-raft-outbound-request-thread]: Shutdown completed (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-11-06 10:00:09,167] INFO [ProducerStateManager partition=__cluster_metadata-0] Wrote producer snapshot at offset 7282 with 0 producer ids in 1 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager)
[2024-11-06 10:00:09,174] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
[2024-11-06 10:00:09,174] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
[2024-11-06 10:00:09,174] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
[2024-11-06 10:00:09,175] INFO App info kafka.server for 5 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-06 10:00:09,175] INFO [BrokerServer id=5] shut down completed (kafka.server.BrokerServer)
[2024-11-06 10:00:09,175] INFO [BrokerServer id=5] Transition from SHUTTING_DOWN to SHUTDOWN (kafka.server.BrokerServer)
[2024-11-06 10:00:09,175] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
java.lang.RuntimeException: Received a fatal error while waiting for all of the SocketServer Acceptors to be started
at org.apache.kafka.server.util.FutureUtils.waitWithLogging(FutureUtils.java:68)
at kafka.server.BrokerServer.startup(BrokerServer.scala:546)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.RuntimeException: Unable to start acceptor for ListenerName(PLAINTEXT)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:652)
at kafka.network.Acceptor.start(SocketServer.scala:632)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$2(SocketServer.scala:222)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:887)
at java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2325)
at kafka.network.SocketServer.chainAcceptorFuture$1(SocketServer.scala:215)
at kafka.network.SocketServer.$anonfun$enableRequestProcessing$5(SocketServer.scala:229)
at java.base/java.util.concurrent.ConcurrentHashMap$ValuesView.forEach(ConcurrentHashMap.java:4780)
at kafka.network.SocketServer.enableRequestProcessing(SocketServer.scala:229)
at kafka.server.BrokerServer.startup(BrokerServer.scala:536)
… 6 more
Caused by: org.apache.kafka.common.KafkaException: Socket server failed to bind to vm81:19095: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:734)
at kafka.network.Acceptor.liftedTree1$1(SocketServer.scala:637)
… 16 more
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:730)
… 17 more
[2024-11-06 10:00:09,177] INFO App info kafka.server for 5 unregistered (org.apache.kafka.common.utils.AppInfoParser)
/etc/hosts is fine?
e.g. vm81 resolvable?
Yes.
Also working perfectly fine with the controller on the same box…
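Worth noting: "resolvable" alone isn't enough for the bind to succeed. The broker calls bind() on whatever address the listener hostname resolves to, and that only works if the address is actually assigned to an interface inside the container. A small Python sketch of the failure mode (the addresses used here are illustrative, not from the setup above):

```python
import socket

def can_bind(host: str) -> bool:
    """True if `host` resolves to an address this machine owns and can bind to."""
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        return False  # the name does not resolve at all
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((addr, 0))  # port 0: let the OS pick any free port
        return True
    except OSError:
        # The name resolves, but the address is not assigned to a local
        # interface: this is exactly "Cannot assign requested address".
        return False
    finally:
        sock.close()

print(can_bind("localhost"))    # True: loopback is always local
print(can_bind("203.0.113.5"))  # False: TEST-NET address, resolves but not local
```

Inside the failing container, the equivalent check for `vm81` would return False if `vm81` resolves to the VM's external IP, which the container does not own; that is why the controller script passes `-h=vm81`, so the container hostname matches the listener address.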
I see
how did you start the controller?
could you share this as well?
Certainly:
controller startup script
podman run -d \
--name controller-2 \
-h=vm81 \
-p 19092:19092 \
-e KAFKA_NODE_ID=2 \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
-e KAFKA_PROCESS_ROLES='controller' \
-e KAFKA_LISTENERS='CONTROLLER://controller-2:19092' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='CONTROLLER' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@vm80:19091,2@vm81:19092,3@vm82:19093' \
-e KAFKA_BROKER_RACK='rack-0' \
-e KAFKA_DEFAULT_REPLICATION_FACTOR=3 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=3 \
-e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=3 \
-e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR=3 \
-e KAFKA_CONFLUENT_METADATA_TOPIC_REPLICATION_FACTOR=3 \
-e KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR=3 \
confluentinc/cp-kafka:latest
did some tests locally; tbh it works for me, at least with a single-node setup,
one broker and one controller at the moment, and I'm able to bind to the ports
controller
podman run -d \
-h=vm80 \
--name controller-1 \
-p 19091:19091 \
-e KAFKA_NODE_ID=1 \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
-e KAFKA_PROCESS_ROLES='controller' \
-e KAFKA_LISTENERS='CONTROLLER://controller-1:19091' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='CONTROLLER' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@vm80:19091' \
-e KAFKA_BROKER_RACK='rack-0' \
-e KAFKA_DEFAULT_REPLICATION_FACTOR=1 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1 \
-e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONFLUENT_METADATA_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:7.7.0
broker
podman run -d \
--name kafka-1 \
-h=vm80 \
-p 9091:9091 \
-p 19094:19094 \
-e KAFKA_LISTENERS='PLAINTEXT://vm80:19094, EXTERNAL://vm80:9091' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT, PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://vm80:19094, EXTERNAL://vm80:9091' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
-e KAFKA_BROKER_RACK='rack-0' \
-e KAFKA_MIN_INSYNC_REPLICAS=1 \
-e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_CONFLUENT_CLUSTER_LINK_ENABLE='true' \
-e KAFKA_CONFLUENT_REPORTERS_TELEMETRY_AUTO_ENABLE='false' \
-e KAFKA_NODE_ID=2 \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@vm80:19091' \
-e KAFKA_PROCESS_ROLES='broker' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
confluentinc/cp-kafka:latest
Very very weird …
The vm80 startup scripts work for me as well on a single host. But the containers can't really communicate, since I did not set up /etc/hosts.
However, as soon as I change vm80 to the actual hostname, I get the “Cannot assign requested address” error and the container dies.
This is similar to before: the sample single-container setup worked, and when I tried to adjust it (several changes incl. the hostname) it wouldn't any more, which prompted me to ask for help in the first place 'cause I thought I had an invalid config.
So what is it that behaves differently when it starts up in controller mode vs broker mode…?
what does your /etc/hosts look like?
so a single broker works on vm80 when binding to the vm80 address, right?
is vm80 pointing to localhost?
what does the actual hostname point to?
another idea:
what about using cp-ansible / a non-container way of configuring the stack?
So what is it that behaves differently when it starts up in controller mode vs broker mode…?
a lot, as there are more listeners and so on, as you may have seen.
vm80 is not in /etc/hosts at all (it is just a placeholder, since I cannot post the real hostname), but when using it in the config file it works. Of course it cannot connect to the controller then, but the broker container stays up.
The actual hostname has the format
ip shortname fqdn
I used the fqdn for the tests
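i.e. one entry per host, roughly like this (the address and names are made-up placeholders following the format stated above):

```text
192.0.2.81   vm81   vm81.example.com
```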
The question re the differences was: is there any reason why the controller is able to bind to host:port while the broker is not able to do the same on the same box?
At this point I cannot imagine this being an OS config problem, since we clearly see that it works for the controller container. So why would the broker container not be able to do the same?
Theoretically I could deploy cp-kafka on the bare vm, but i have not looked into it.
If we cannot get it to run I can also look into using Apache Kafka instead; in the end I don't mind. I just thought that cp-kafka might be beneficial if we ever need to get paid support. This was supposed to be a simple replacement of a vendor-supplied Kafka container with an open-source one, but it is way more difficult than I envisioned ;)
ok I see
the basic difference is that the broker tries to advertise 2 listeners while the controller advertises only 1.
according to the logs the issue seems to be related to the plaintext listener
is there anything in the OS logs?
did you try to switch ports?
best,
michael
Whatever ports I start the broker with, it dies.
External only, plaintext only, different ports, high ports, same port as the controller (which then gets a different one), starting the broker with controllers on other boxes (i.e. no local controller), whatever - always the same.
Single container with controller+broker and hostname → dead
For some reason the broker does not start in this environment.
Nothing in debug logs, nothing in system logs
I have an older kafka (the vendor supplied one, ZK based config) on the same box on different ports, with Plaintext and SSL listeners, that is working perfectly fine.
(O/c tried turning that off before starting broker too, just in case).
ok pretty strange
one last thing to try is to check what’s happening when you start
without binding to the host vm81 but binding to localhost?
something like this
EXTERNAL://localhost:9092
Thank you soo much !
I tried :9093 at some point but that didn't do it, so I don't think I would have found that one…
-e KAFKA_LISTENERS='PLAINTEXT://localhost:19096,EXTERNAL://localhost:9093' \
[2024-11-08 06:32:17,163] INFO Awaiting socket connections on localhost:9093. (kafka.network.DataPlaneAcceptor)
[2024-11-08 06:32:17,164] INFO Awaiting socket connections on localhost:19096. (kafka.network.DataPlaneAcceptor)
Now at least Kafka stays up and I can start looking at doing some actual configurations
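For the record: binding to localhost works, but then only connections from inside the container's own namespace reach the listener. The usual Kafka pattern for this situation (a sketch, not tested in this thread; same placeholder names as above) is to bind the listeners to 0.0.0.0 so the bind never depends on hostname resolution, and keep the real hostname only in KAFKA_ADVERTISED_LISTENERS, which is what clients and the controller are told to connect to:

```shell
# Sketch of the broker start with wildcard binds: KAFKA_LISTENERS controls
# what the sockets bind to, KAFKA_ADVERTISED_LISTENERS controls what gets
# handed out to clients. vm80 and the ports are the thread's placeholders.
podman run -d \
--name kafka-1 \
-h=vm80 \
-p 9091:9091 \
-p 19094:19094 \
-e KAFKA_NODE_ID=2 \
-e CLUSTER_ID='MkU3OEVBNTcwNTJENDM2Qk' \
-e KAFKA_PROCESS_ROLES='broker' \
-e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:19094,EXTERNAL://0.0.0.0:9091' \
-e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://vm80:19094,EXTERNAL://vm80:9091' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='PLAINTEXT' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@vm80:19091' \
confluentinc/cp-kafka:latest
```

Note that 0.0.0.0 is only valid in `listeners`, not in `advertised.listeners`, which must carry an address other machines can actually reach.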