ClusterAuthorizationException with User:ANONYMOUS when setting up SASL with KRaft and ACLs

Hello,

I am trying to set up SASL with KRaft mode in Kafka. I have a Docker Compose setup that works fine without ACLs, and I can connect to Kafka using producers. However, when I try to implement ACLs, I run into issues.

Working Docker Compose without ACLs:


---
version: '2'
services:

  broker:
    image: confluentinc/cp-kafka:7.5.0
    hostname: broker
    container_name: broker
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'

      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      # Replace CLUSTER_ID with a unique base64 UUID using "bin/kafka-storage.sh random-uuid" 
      # See https://docs.confluent.io/kafka/operations-tools/kafka-tools.html#kafka-storage-sh
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      #KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker_jaas.conf
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT'
      KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092,SASL_PLAINTEXT://broker:9093'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092,SASL_PLAINTEXT://broker:9093'
    volumes:
      - ./broker_jaas.conf:/etc/kafka/broker_jaas.conf

Docker Compose with ACLs (causing issues):

version: '2'
services:

  broker:
    image: confluentinc/cp-kafka:7.5.0
    hostname: broker
    container_name: broker
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'

      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      # Replace CLUSTER_ID with a unique base64 UUID using "bin/kafka-storage.sh random-uuid" 
      # See https://docs.confluent.io/kafka/operations-tools/kafka-tools.html#kafka-storage-sh
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      #KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT'
      KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092,SASL_PLAINTEXT://broker:9093'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092,SASL_PLAINTEXT://broker:9093'
      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
      KAFKA_SUPER_USERS: User:broker
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker_jaas.conf
      KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
    volumes:
      - ./broker_jaas.conf:/etc/kafka/broker_jaas.conf

JAAS Configuration (broker_jaas.conf):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="broker"
    password="broker-password"
    user_broker="broker-password";
};

When I try to bring up the Docker container with the ACL configuration, I get the following error:

broker  | org.apache.kafka.common.errors.ClusterAuthorizationException: Request Request(processor=0, connectionId=172.25.0.2:29093-172.25.0.2:42614-0, session=Session(User:ANONYMOUS,/172.25.0.2), listenerName=ListenerName(CONTROLLER), securityProtocol=PLAINTEXT, buffer=null, envelope=None) is not authorized.
broker  | [2023-10-12 20:22:29,692] INFO [BrokerLifecycleManager id=1] Unable to register broker 1 because the controller returned error CLUSTER_AUTHORIZATION_FAILED (kafka.server.BrokerLifecycleManager)
broker  | [2023-10-12 20:22:38,721] ERROR [ControllerApis nodeId=1] Unexpected error handling request RequestHeader(apiKey=BROKER_REGISTRATION, apiVersion=1, clientId=1, correlationId=13, headerVersion=2) -- BrokerRegistrationRequestData(brokerId=1, clusterId='MkU3OEVBNTcwNTJENDM2Qg', incarnationId=bext2mlpSe22Jmw2YvcQ6w, listeners=[Listener(name='PLAINTEXT', host='broker', port=29092, securityProtocol=0), Listener(name='PLAINTEXT_HOST', host='localhost', port=9092, securityProtocol=0), Listener(name='SASL_PLAINTEXT', host='broker', port=9093, securityProtocol=2)], features=[Feature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, isMigratingZkBroker=false) with context RequestContext(header=RequestHeader(apiKey=BROKER_REGISTRATION, apiVersion=1, clientId=1, correlationId=13, headerVersion=2), connectionId='172.25.0.2:29093-172.25.0.2:42614-0', clientAddress=/172.25.0.2, principal=User:ANONYMOUS, listenerName=ListenerName(CONTROLLER), securityProtocol=PLAINTEXT, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=7.5.0-ccs), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@4eaf9846]) (kafka.server.ControllerApis)

From the error, it seems like my broker is still trying to register as User:ANONYMOUS. I’m not sure why this is happening. I’m new to Kafka and would appreciate any guidance on this issue.

Thank you!

UPDATE -
I got around this by changing
CONTROLLER:PLAINTEXT to CONTROLLER:SASL_PLAINTEXT.
However, I am now getting new errors:
org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config

This is a bit surprising since I am trying to set up SASL/PLAIN, not Kerberos.

I might not have mounted my config properly; here is the updated Docker Compose file:

---
version: '2'
services:

  broker:
    image: confluentinc/cp-kafka:7.5.0
    hostname: broker
    container_name: broker
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker_jaas.conf
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://broker:9092,CONTROLLER://broker:29093
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://broker:9092
      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
      KAFKA_SUPER_USERS: User:admin    
    volumes:
      - ./broker_jaas.conf:/etc/kafka/broker_jaas.conf

@devve I actually had a similar problem, and that’s how I found your question. I’m also still rather new to security and Kafka, but here is what I have found and what works for me.

I tried the same with CONTROLLER:SASL_PLAINTEXT but was not able to make it work. I think I got further than you: the problem you face is that the default sasl.mechanism is GSSAPI. In my case I hit the same problem mentioned here: Self-hosted Kafka with KRaft, SSL and SASL (scram-sha-256). As you can see in my answer there, I switched to PLAINTEXT on the CONTROLLER, the same setting you had initially. With that, I got the same problem you had on your first try.

I guess the reason the ANONYMOUS user is used is that with PLAINTEXT on the controller no authentication takes place, so no authenticated principal is available. I managed to get around the error by adding the ANONYMOUS user to the list of super users:

      KAFKA_SUPER_USERS: User:broker;User:ANONYMOUS
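Regarding the GSSAPI default mentioned above: as far as I can tell, the controller-side SASL mechanism maps to sasl.mechanism.controller.protocol and defaults to GSSAPI (Kerberos), which is what produces the "No serviceName defined" error. Setting it explicitly should avoid that (the env-var mapping is my assumption based on the cp-kafka image conventions):

```yaml
# sasl.mechanism.controller.protocol defaults to GSSAPI (Kerberos),
# which requires a serviceName; pin it to PLAIN for SASL/PLAIN setups.
KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: PLAIN
```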

@gschmutz Thanks for the reply, I knew about

KAFKA_SUPER_USERS: User:broker;User:ANONYMOUS

The problem is that it is insecure, and I did not want anonymous clients to connect to my Kafka cluster.

I have now implemented SASL successfully. Here is the Docker Compose file if you want it:

version: '2'
services:
  broker:
    volumes:
      - ./broker_jaas.conf:/etc/kafka/broker_jaas.conf
    image: confluentinc/cp-kafka:7.5.0
    hostname: broker
    container_name: broker
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker_jaas.conf
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://broker:9092,CONTROLLER://broker:29093
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://broker:9092
      KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: PLAIN
      KAFKA_LISTENER_NAME_BROKER_PLAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";
      KAFKA_LISTENER_NAME_CONTROLLER_PLAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";
      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
      KAFKA_SUPER_USERS: User:admin

Here is the JAAS config (broker_jaas.conf):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_broker="admin-secret";
};
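With this in place, a client can authenticate over the SASL_PLAINTEXT listener using a properties file along these lines (a sketch; the file name client.properties and the topic name below are just examples, while the credentials come from the JAAS config above):

```properties
# client.properties (example name): SASL/PLAIN client settings
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret";
```

For example: kafka-console-producer --bootstrap-server broker:9092 --producer.config client.properties --topic test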

Now I am working on implementing SASL/SCRAM-SHA-256 (no SSL for now). I am getting authorization errors here too. Here is the new Docker Compose file I am currently working on:

version: '2'
services:
  broker:
    volumes:
      - ./broker_jaas.conf:/etc/kafka/broker_jaas.conf
    image: confluentinc/cp-kafka:7.5.0
    hostname: broker
    container_name: broker
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_LISTENER_NAME_BROKER_SASL_PLAINTEXT_SASL_JAAS_CONFIG: |
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="broker"
        password="broker";
      KAFKA_LISTENER_NAME_CONTROLLER_SASL_PLAINTEXT_SASL_JAAS_CONFIG: |
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="broker"
        password="broker";
      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
      KAFKA_NODE_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-256
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker_jaas.conf
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://broker:9092,CONTROLLER://broker:29093
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://broker:9092
      KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: SCRAM-SHA-256
      KAFKA_SUPER_USERS: User:broker

Here is the updated JAAS config:

KafkaServer {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="broker"
   password="broker";
};
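One note on the SCRAM attempt (my understanding, so take it with a grain of salt): in KRaft mode, SCRAM credentials are stored in the cluster metadata rather than read from the JAAS file, so the ScramLoginModule entry above is not enough by itself. Since Kafka 3.5 (the version shipped in cp-kafka 7.5.0), the kafka-storage tool can seed SCRAM credentials when formatting the log directories, roughly like this (the paths and properties file name are illustrative placeholders):

```shell
# Seed the 'broker' SCRAM-SHA-256 user while formatting the KRaft storage
# (Kafka 3.5+, KIP-900). Paths below are illustrative placeholders.
bin/kafka-storage.sh format \
  --config /etc/kafka/kraft/server.properties \
  --cluster-id MkU3OEVBNTcwNTJENDM2Qk \
  --add-scram 'SCRAM-SHA-256=[name=broker,password=broker]'
```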

Let me know if you end up implementing this and making it work.

Thanks.

I have it working with SASL/SCRAM on BROKER and SASL/PLAIN on the CONTROLLER, as I was not able to make it work with SCRAM on the CONTROLLER listener.

You can find the complete docker-compose.yml and the config files here: https://github.com/gschmutz/various-platys-platforms/tree/main/kafka-sasl-scram-kraft. The stack is completely generated by a tool I maintain: TrivadisPF/platys-modern-data-platform on GitHub, which supports generating modern platforms dynamically with services such as Kafka, Spark, StreamSets, HDFS, and others. Support for a secure Kafka cluster is work in progress and not yet released. Here is the definition for kafka-1:

  kafka-1:
    image: confluentinc/cp-kafka:7.5.0
    container_name: kafka-1
    hostname: kafka-1
    labels:
      com.platys.name: kafka
    ports:
      - 9092:9092
      - 19092:19092
      - 29092:29092
      - 39092:39092
      - 9992:9992
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_BROKER_RACK: rack1
      KAFKA_INTER_BROKER_LISTENER_NAME: BROKER
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-1:49092,2@kafka-2:49093,3@kafka-3:49094
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      CLUSTER_ID: y4vRIwfDT0SkZ65tD7Ey2A
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:SASL_PLAINTEXT,BROKER:SASL_PLAINTEXT,LOCAL:SASL_PLAINTEXT,DOCKERHOST:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
      KAFKA_LISTENERS: CONTROLLER://kafka-1:49092,BROKER://kafka-1:19092,LOCAL://kafka-1:39092,DOCKERHOST://kafka-1:29092,EXTERNAL://kafka-1:9092
      KAFKA_ADVERTISED_LISTENERS: BROKER://kafka-1:19092,LOCAL://localhost:39092,DOCKERHOST://${DOCKER_HOST_IP:-127.0.0.1}:29092,EXTERNAL://${PUBLIC_IP:-127.0.0.1}:9092
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-256
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN,SCRAM-SHA-256
      KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: PLAIN
      KAFKA_LISTENER_NAME_CONTROLLER_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_CONTROLLER_PLAIN_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";
      KAFKA_LISTENER_NAME_BROKER_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256
      KAFKA_LISTENER_NAME_BROKER_SCRAM-SHA-256_SASL_JAAS_CONFIG: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" user_connect="connect-secret" user_schemaregistry="schemaregistry-secret" user_ksqldb="ksqldb-secret" user_tool="tool-secret" ;
      KAFKA_LISTENER_NAME_LOCAL_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256
      KAFKA_LISTENER_NAME_LOCAL_SCRAM-SHA-256_SASL_JAAS_CONFIG: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" user_connect="connect-secret" user_schemaregistry="schemaregistry-secret" user_ksqldb="ksqldb-secret" user_tool="tool-secret" ;
      KAFKA_LISTENER_NAME_DOCKERHOST_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256
      KAFKA_LISTENER_NAME_DOCKERHOST_SCRAM-SHA-256_SASL_JAAS_CONFIG: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" user_connect="connect-secret" user_schemaregistry="schemaregistry-secret" user_ksqldb="ksqldb-secret" user_tool="tool-secret" ;
      KAFKA_LISTENER_NAME_EXTERNAL_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256
      KAFKA_LISTENER_NAME_EXTERNAL_SCRAM-SHA-256_SASL_JAAS_CONFIG: org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" user_connect="connect-secret" user_schemaregistry="schemaregistry-secret" user_ksqldb="ksqldb-secret" user_tool="tool-secret" ;
      KAFKA_SASL_SERVER_CALLBACK_HANDLER_CLASS:
      CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: SCRAM-SHA-256
      CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_PLAINTEXT
      CONFLUENT_METRICS_REPORTER_SASL_JAAS_CONFIG: org.apache.kafka.common.security.scram.ScramLoginModule required username="tool" password="tool-secret";
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'true'
      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
      KAFKA_SUPER_USERS: User:admin;User:client;User:tool
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 3
      KAFKA_MESSAGE_TIMESTAMP_TYPE: CreateTime
      KAFKA_MIN_INSYNC_REPLICAS: 1
      KAFKA_DELETE_TOPIC_ENABLE: 'True'
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'False'
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
      KAFKA_JMX_PORT: 9992
      KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=9992
      KAFKA_JMX_HOSTNAME: ${PUBLIC_IP:-127.0.0.1}
      KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
      KAFKA_TOOLS_LOG4J_LOGLEVEL: INFO
    volumes:
      - ./data-transfer:/data-transfer
      - ./security/kafka/sasl-scram/kafka.jaas.conf:/etc/kafka/kafka_server_jaas.conf
      - ./security/kafka/sasl-scram/client.properties:/tmp/client.properties
      - ./scripts/kafka/kraft/update_storage.sh:/tmp/kraft/update_storage.sh
    command: bash -c '/tmp/kraft/update_storage.sh y4vRIwfDT0SkZ65tD7Ey2A admin:admin-secret,connect:connect-secret,schemaregistry:schemaregistry-secret,ksqldb:ksqldb-secret,tool:tool-secret SCRAM-SHA-256 false && /etc/confluent/docker/run ;'
    restart: unless-stopped

By the way, with this configuration I don’t really need a JAAS file. It is still mounted and referenced in KAFKA_OPTS, but that is no longer necessary. I have kept it for the non-KRaft setup (with ZooKeeper), but the idea is to exclude it dynamically in the generator when KRaft mode is selected.

Hi @gschmutz

This is my Docker Compose setup. I have Kafka running in KRaft mode with SCRAM-SHA-256, but I am getting authentication errors when trying to connect to the broker, even though I have the credentials in the JAAS conf. Since you are able to connect to the broker using SCRAM-SHA-256, did you add the credentials using the kafka-storage tool?

Here is my setup with one controller and one broker:

version: '2'
services:
  # Controller Service Configuration
  controller:
    # Docker settings
    image: confluentinc/cp-kafka:7.5.0
    hostname: controller
    container_name: controller
    ports:
      - "29093:29093" # Controller listener port
      - "9102:9102"   # JMX port for monitoring

    # Volume for JAAS configuration
    volumes:
      - ./controller_jass.conf:/etc/kafka/controller_jass.conf

    # Environment variables
    environment:
      # Basic Kafka settings
      KAFKA_NODE_ID: 2
      KAFKA_LOG_DIRS: '/tmp/kraft-controller-logs'
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      KAFKA_PROCESS_ROLES: 'controller'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '2@controller:29093'
      # Listener
      KAFKA_LISTENERS: CONTROLLER://controller:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT, CONTROLLER:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_JMX_PORT: 9102
      KAFKA_JMX_HOSTNAME: localhost

      # Security settings
      KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: PLAIN
      
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/controller_jass.conf

  # Broker Service Configuration
  broker:
    # Docker settings
    image: confluentinc/cp-kafka:7.5.0
    hostname: broker
    container_name: broker
    depends_on:
      - controller
    ports:
      - "9092:9092"   # Broker listener port
      - "9101:9101"   # JMX port for monitoring

    # Volume for JAAS configuration
    volumes:
      - ./broker_jaas.conf:/etc/kafka/broker_jaas.conf
      - ./client.properties:/tmp/client.properties

    # Environment variables
    environment:
      # Basic Kafka settings
      KAFKA_NODE_ID: 1
      KAFKA_LOG_DIRS: '/tmp/kraft-broker-logs'
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
      KAFKA_PROCESS_ROLES: 'broker'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '2@controller:29093'
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      

      # Topic settings
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0

      # Listener settings
      KAFKA_LISTENERS: SASL_PLAINTEXT://broker:9092,
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://broker:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      #KAFKA_SUPER_USERS: User:admin

      # Security settings
      KAFKA_AUTHORIZER_CLASS_NAME: org.apache.kafka.metadata.authorizer.StandardAuthorizer
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN, SCRAM-SHA-256
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/broker_jaas.conf
      

This is my jaas.conf

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_hyperkafka="hyperkafka-secret";

    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret"
    user_test="test-secret";
};

Hi @devve, sorry for not answering earlier. You can find my version in the GitHub project I mentioned before: https://github.com/gschmutz/various-platys-platforms/tree/main/kafka-sasl-scram-kraft.

I include a call to the kafka-storage tool before starting the broker. But as I mentioned before, it won’t work if I set the controller to use SCRAM as the sasl.mechanism; that’s why I have used PLAIN.


I will investigate further why SCRAM does not work for the controller, once I have time.