Issues Adding ACLs in KRaft Mode Kafka Cluster - No Authorizer Configured Error

Hi everyone,

I’m encountering an issue while working with a KRaft-mode Kafka cluster. I can list ACLs without any issues, but when I try to add a new ACL, I get the following error:

Error while executing ACL command: org.apache.kafka.common.errors.SecurityDisabledException: No Authorizer is configured.

Here’s the command I’m running to add the ACL:

kafka-acls --bootstrap-server broker01.abc.net:9094 --command-config adminclient.properties --add --allow-principal User:testing --operation Read --operation Write --topic '*'

I expect the authorizer to be configured, since I’ve set KAFKA_AUTHORIZER_CLASS_NAME to org.apache.kafka.metadata.authorizer.StandardAuthorizer in the container environment variables.
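For reference, the adminclient.properties I pass via --command-config looks roughly like the sketch below (credentials and paths here are placeholders):

```properties
# Sketch of a client config for the SASL_SSL listener on :9094.
# Principal, passwords, and paths are placeholders.
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="XXXXXX";
ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
ssl.truststore.password=YYYYYY
```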

Cluster Setup:

  • 3-node KRaft cluster, with each node running both broker and controller containers in isolated mode.
  • 6 containers in total: 3 for brokers and 3 for controllers.
  • I’m using KAFKA_KRAFT_MODE="true" in all the relevant containers.

Error Details:

When running the kafka-acls command, I get the following error message:

org.apache.kafka.common.errors.SecurityDisabledException: No Authorizer is configured.

Playbook: Here is the relevant portion of my Ansible playbook used to configure Kafka:

- name: Setup Kafka with KRaft and SSL
  hosts: kafka
  become: yes
  vars:
    kafka_version: "7.8.0"
    kafka_container_name: "kafka"
    kafka_volume_broker: "kafka-data-broker"
    kafka_volume_controller: "kafka-data-controller"
    kafka_data_dir_broker: "/var/lib/kafka/data"
    kafka_data_dir_controller: "/var/lib/kafka/data"
    kafka_env:
      KAFKA_KRAFT_MODE: "true"
      KAFKA_AUTHORIZER_CLASS_NAME: "org.apache.kafka.metadata.authorizer.StandardAuthorizer"
      KAFKA_SASL_ENABLED_MECHANISMS: "PLAIN"
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: "SASL_SSL"
      KAFKA_SUPER_USERS: "User:admin"
      KAFKA_LISTENERS: "SASL_SSL://:9094,SASL_SSL01://:19094,SASL_SSL02://:29094"

Troubleshooting Attempts:

  • I have verified the KAFKA_AUTHORIZER_CLASS_NAME is correctly set to StandardAuthorizer for both brokers and controllers.
  • I also tried passing a different properties file (kafka.properties, shown below) to --command-config, but then I encountered a TimeoutException:

Timed out waiting for a node assignment. Call: createAcls

Broker kafka.properties:


replica.fetch.max.bytes=1152921504
ssl.keystore.filename=kafka.keystore.jks
super.users=User:admin
default.replication.factor=1
transaction.state.log.min.isr=1
ssl.key.credentials=kafka_ssl_key_creds
process.roles=broker
security.inter.broker.protocol=SASL_SSL
controller.listener.names=CONTROLLER
controller.quorum.voters=1@broker01.abc.net:29092,2@broker02.abc.net:29092,3@broker03.abc.net:29092
message.max.bytes=1152921504
auto.create.topics.enable=false
node.id=6
ssl.key.password=XXXXXX
ssl.truststore.password=YYYYYY
ssl.keystore.type=JKS
log.retention.ms=604800000
metadata.load.timeout.ms=60000
advertised.listeners=SASL_SSL://broker03.abc.net:9094,SASL_SSL01://broker02.abc.net:19094,SASL_SSL02://broker01.abc.net:29094
sasl.enabled.mechanisms=PLAIN
kraft.mode=true
listener.security.protocol.map=SASL_SSL01:SASL_SSL,SASL_SSL02:SASL_SSL,CONTROLLER:PLAINTEXT,SASL_SSL:SASL_SSL,SSL:SSL
ssl.truststore.filename=kafka.truststore.jks
fetch.message.max.bytes=1152921504
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
ssl.truststore.credentials=kafka_truststore_creds
log.retention.hours=168
broker.rack=Test123
ssl.keystore.password=XXXXX
min.insync.replicas=1
transaction.state.log.replication.factor=1
listeners=SASL_SSL://:9094,SASL_SSL01://:19094,SASL_SSL02://:29094
ssl.keystore.location=/etc/kafka/secrets/kafka.keystore.jks
zookeeper.connect=
sasl.mechanism.inter.broker.protocol=PLAIN
ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
ssl.endpoint.identification.algorithm=
ssl.truststore.type=JKS
log.dirs=/var/lib/kafka/data
offsets.topic.replication.factor=1
allow.everyone.if.no.acl.found=true
ssl.client.auth=required
ssl.keystore.credentials=kafka_keystore_creds

Controller kafka.properties:

inter.broker.listener.name=SASL_SSL
transaction.state.log.min.isr=1
process.roles=controller
controller.listener.names=CONTROLLER
group.initial.rebalance.delay.ms=0
controller.quorum.voters=1@broker01.abc.net:29092,2@broker02.abc.net:29092,3@broker03.abc.net:29092
node.id=3
kraft.mode=true
transaction.state.log.replication.factor=1
listeners=CONTROLLER://:29092
zookeeper.connect=
log.dirs=/var/lib/kafka/data
offsets.topic.replication.factor=1

Has anyone faced a similar issue or has any suggestions on what might be misconfigured in my setup? Any help or insights would be greatly appreciated!

Thanks in advance!

Hello, sgangu!
You can try adding the following options to your controller configuration:

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=true

Don’t forget to set allow.everyone.if.no.acl.found to false in production.
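Concretely, since your CONTROLLER listener is PLAINTEXT, the raft peers will authenticate as User:ANONYMOUS, and once StandardAuthorizer is enabled on the controllers those peers still need to be authorized. A sketch of the relevant controller settings (the commented super.users line is only needed if you later turn the allow-everyone flag off):

```properties
# Sketch: authorizer settings for the controller's kafka.properties.
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
# Peers on the PLAINTEXT CONTROLLER listener authenticate as User:ANONYMOUS.
allow.everyone.if.no.acl.found=true
# If you set the flag above to false, the controller principals must be
# super users instead, e.g.:
# super.users=User:admin;User:ANONYMOUS
```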

Thank you. I updated it, and the controller container is now stable; however, the broker keeps restarting with the following error logs.

Controller Logs:

, clientId=raft-client-4, correlationId=5943, headerVersion=2) -- FetchRequestData(clusterId='abc-7890', replicaId=-1, replicaState=ReplicaState(replicaId=4, replicaEpoch=-1), maxWaitMs=500, minBytes=0, maxBytes=8388608, isolationLevel=0, sessionId=0, sessionEpoch=-1, topics=[FetchTopic(topic='', topicId=AAAAAAAAAAAAAAAAAAAAAQ, partitions=[FetchPartition(partition=0, currentLeaderEpoch=164250, fetchOffset=8744265, lastFetchedEpoch=164250, logStartOffset=-1, partitionMaxBytes=0)])], forgottenTopicsData=[], rackId='') with context RequestContext(header=RequestHeader(apiKey=FETCH, apiVersion=16, clientId=raft-client-4, correlationId=5943, headerVersion=2), connectionId='1.2.3.82:29092-1.2.3.80:57460-2', clientAddress=/1.2.3.80, principal=User:ANONYMOUS, listenerName=ListenerName(CONTROLLER), securityProtocol=PLAINTEXT, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=7.8.0-ccs), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@69bca88]) (kafka.server.ControllerApis)
org.apache.kafka.common.errors.AuthorizerNotReadyException
[2025-02-17 16:03:00,050] ERROR [RaftManager id=3] Unexpected error UNKNOWN_SERVER_ERROR in VOTE response: InboundResponse(correlationId=3032, data=VoteResponseData(errorCode=-1, topics=[]), source=broker02.abc.net:29092 (id: 2 rack: null)) (org.apache.kafka.raft.KafkaRaftClient)
[2025-02-17 16:03:00,061] ERROR [RaftManager id=3] Unexpected error UNKNOWN_SERVER_ERROR in VOTE response: InboundResponse(correlationId=3033, data=VoteResponseData(errorCode=-1, topics=[]), source=broker01.abc.net:29092 (id: 1 rack: null)) (org.apache.kafka.raft.KafkaRaftClient)
[2025-02-17 16:03:00,069] ERROR [ControllerApis nodeId=3] Unexpected error handling request RequestHeader(apiKey=FETCH, apiVersion=16, clientId=raft-client-4, correlationId=5946, headerVersion=2) -- FetchRequestData(clusterId='abc-7890', replicaId=-1, replicaState=ReplicaState(replicaId=4, replicaEpoch=-1), maxWaitMs=500, minBytes=0, maxBytes=8388608, isolationLevel=0, sessionId=0, sessionEpoch=-1, topics=[FetchTopic(topic='', topicId=AAAAAAAAAAAAAAAAAAAAAQ, partitions=[FetchPartition(partition=0, currentLeaderEpoch=164250, fetchOffset=8744265, lastFetchedEpoch=164250, logStartOffset=-1, partitionMaxBytes=0)])], forgottenTopicsData=[], rackId='') with context RequestContext(header=RequestHeader(apiKey=FETCH, apiVersion=16, clientId=raft-client-4, correlationId=5946, headerVersion=2), connectionId='1.2.3.82:29092-1.2.3..80:57460-2', clientAddress=/1.2.3..80, principal=User:ANONYMOUS, listenerName=ListenerName(CONTROLLER), securityProtocol=PLAINTEXT, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=7.8.0-ccs), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@]) (kafka.server.ControllerApis)
context canceled

Broker Logs:

[2025-02-17 16:04:04,867] INFO [broker-5-to-controller-heartbeat-channel-manager]: Recorded new KRaft controller, from now on will use node broker02.abc.net:29092 (id: 2 rack: null) (kafka.server.NodeToControllerRequestThread)
[2025-02-17 16:04:04,868] WARN [NodeToControllerChannelManager id=5 name=heartbeat] Error connecting to node broker02.abc.net:29092 (id: 2 rack: null) (org.apache.kafka.clients.NetworkClient)
java.io.IOException: Channel could not be created for socket java.nio.channels.SocketChannel[closed]
        at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:352)
        at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
        at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
        at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:1072)
        at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:320)
        at org.apache.kafka.server.util.InterBrokerSendThread.sendRequests(InterBrokerSendThread.java:145)
        at org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:108)
        at kafka.server.NodeToControllerRequestThread.doWork(NodeToControllerChannelManager.scala:375)
        at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:135)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
        at org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:243)
        at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:340)
        ... 8 more
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
        at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.firstPrincipal(SaslClientAuthenticator.java:633)
        at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.<init>(SaslClientAuthenticator.java:203)
        at org.apache.kafka.common.network.SaslChannelBuilder.buildClientAuthenticator(SaslChannelBuilder.java:289)
        at org.apache.kafka.common.network.SaslChannelBuilder.lambda$buildChannel$1(SaslChannelBuilder.java:229)
        at org.apache.kafka.common.network.KafkaChannel.<init>(KafkaChannel.java:143)
        at org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:237)
        at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:340)
        at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
        at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
        at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:1072)
        at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:320)
        at org.apache.kafka.server.util.InterBrokerSendThread.sendRequests(InterBrokerSendThread.java:145)
        at org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:108)
        at kafka.server.NodeToControllerRequestThread.doWork(NodeToControllerChannelManager.scala:375)
        at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:135)
[2025-02-17 16:04:04,888] INFO [MetadataLoader id=5] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2025-02-17 16:04:04,919] INFO [broker-5-to-controller-heartbeat-channel-manager]: Recorded new KRaft controller, from now on will use node broker02.abc.net:29092 (id: 2 rack: null) (kafka.server.NodeToControllerRequestThread)

In my opinion, you need to check your SASL setup in this case. I don’t use SASL myself, so I can’t say much more with certainty.
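For what it’s worth, though, the "Principal could not be determined from Subject" error usually points at a missing JAAS configuration on the broker’s SASL client side. With SASL/PLAIN, broker configs typically carry something like the sketch below (the usernames and passwords are placeholders, and the lowercase listener prefix must match your listener name):

```properties
# Sketch: SASL/PLAIN JAAS config for the SASL_SSL listener. The
# username/password pair is what the broker itself presents as a client
# (e.g. for inter-broker requests); the user_<name> entries define the
# accounts this listener accepts. All values are placeholders.
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret" \
  user_admin="admin-secret" \
  user_testing="testing-secret";
```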