KRaft - Apache Kafka Without ZooKeeper: SASL_SSL failed due to authentication error

I am trying to start a new Kafka KRaft cluster (version 3.7.1) with SASL_SSL. During startup I got the following errors:

[2024-12-06 10:49:38,577] ERROR [kafka-1-raft-outbound-request-thread]: Failed to send the following request due to authentication error: ClientRequest(expectResponse=true, callback=org.apache.kafka.raft.KafkaNetworkChannel$$Lambda$625/0x00007fbc0840fda8@47cbf7a1, destination=3, correlationId=596, clientId=raft-client-1, createdTimeMs=1733482178255, requestBuilder=VoteRequestData(clusterId='2qORNERpRzSlWBo4YPnhjQ', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1968, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])])) (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-12-06 10:49:38,577] ERROR Request OutboundRequest(correlationId=596, data=VoteRequestData(clusterId='2qORNERpRzSlWBo4YPnhjQ', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1968, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])]), createdTimeMs=1733482178255, destinationId=3) failed due to authentication error (org.apache.kafka.raft.KafkaNetworkChannel)
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
[2024-12-06 10:49:38,579] ERROR [kafka-1-raft-outbound-request-thread]: Failed to send the following request due to authentication error: ClientRequest(expectResponse=true, callback=org.apache.kafka.raft.KafkaNetworkChannel$$Lambda$625/0x00007fbc0840fda8@9729234, destination=2, correlationId=595, clientId=raft-client-1, createdTimeMs=1733482178255, requestBuilder=VoteRequestData(clusterId='2qORNERpRzSlWBo4YPnhjQ', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1968, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])])) (org.apache.kafka.raft.KafkaNetworkChannel$SendThread)
[2024-12-06 10:49:38,579] ERROR Request OutboundRequest(correlationId=595, data=VoteRequestData(clusterId='2qORNERpRzSlWBo4YPnhjQ', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=1968, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])]), createdTimeMs=1733482178255, destinationId=2) failed due to authentication error (org.apache.kafka.raft.KafkaNetworkChannel)
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
[2024-12-06 10:49:38,579] ERROR [RaftManager id=1] Unexpected error NETWORK_EXCEPTION in VOTE response: InboundResponse(correlationId=596, data=VoteResponseData(errorCode=13, topics=[]), sourceId=3) (org.apache.kafka.raft.KafkaRaftClient)

I have no idea what is wrong. My configuration looks like this:

cat /opt/kafka/config/kraft/server.properties | grep '[a-z]'
process.roles=broker,controller
node.id=1
broker.rack=sr1
controller.quorum.voters=1@sr1-infra-kafka-n1-srv:9093,2@sr1-infra-kafka-n1-srv:9093,3@sr1-infra-kafka-n1-srv:9093
listeners=BROKER://:9092,CONTROLLER://:9093
advertised.listeners=BROKER://:9092
inter.broker.listener.name=BROKER
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:SASL_SSL,BROKER:SASL_SSL,CLIENT:SASL_SSL
listener.name.controller.ssl.client.auth=required
listener.name.broker.ssl.client.auth=required
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
ssl.truststore.location=/etc/pki/tls/certs/truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/etc/pki/tls/certs/keystore.jks
ssl.keystore.password=visiona
ssl.client.auth=required
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:admin
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.mechanism.controller.protocol=SCRAM-SHA-512
listener.name.controller.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="qwerty123456";
listener.name.broker.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="qwerty123456";
log.dirs=/opt/kafka/data/
num.partitions=3
default.replication.factor=3
delete.topic.enable=true
auto.create.topics.enable=false
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

Do you have any idea what is wrong in my configuration?

did you follow this one?

best,
michael

Of course. The SCRAM credentials were created as follows:

KCLUSTER_ID="2qORNERpRzSlWBo4YPnhjQ"

/opt/kafka/bin/kafka-storage.sh format -t $KCLUSTER_ID -c /opt/kafka/config/kraft/server.properties --add-scram 'SCRAM-SHA-512=[name='admin',password='qwerty123456']'
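We ran this on every node with the same cluster ID; as far as I understand, `--add-scram` only seeds that node's local bootstrap metadata. As a side note, the nested single quotes in the command are consumed by the shell before kafka-storage.sh sees them, which is harmless here; a quick way to check the argument it actually receives:

```shell
# The inner single quotes below are stripped by the shell, not passed through;
# printf shows the exact value kafka-storage.sh gets as its --add-scram argument.
printf '%s\n' 'SCRAM-SHA-512=[name='admin',password='qwerty123456']'
# -> SCRAM-SHA-512=[name=admin,password=qwerty123456]
```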

Regards,
Dan

ok thanks
was thinking about low-hanging fruit :wink:

KRaft in a PLAINTEXT configuration started without any problems.

I saw a similar problem here:
https://issues.apache.org/jira/browse/KAFKA-15513
the status is still “Unresolved”

did you check

Yes, I saw that issue, but there is no solution there for SASL_SSL working with SCRAM-SHA-256/512.
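For client-facing listeners, at least, SCRAM users can still be managed at runtime with kafka-configs.sh once the quorum is up. A sketch, assuming a running cluster; the host, port, user name, and the admin.properties file (holding the admin client's SASL_SSL settings) are placeholders:

```shell
# Hypothetical host/user; admin.properties must contain working SASL_SSL
# client settings for a super user.
/opt/kafka/bin/kafka-configs.sh --bootstrap-server kafka-n1-srv:9095 \
  --command-config admin.properties \
  --alter --entity-type users --entity-name app-user \
  --add-config 'SCRAM-SHA-512=[password=app-secret]'
```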

just in the process of hacking on this in Docker, will keep you posted


Finally I stopped my tests; the following configuration is enough for us, i.e.:

  • communication between controller/broker nodes over SSL
  • communication from producers/consumers to the cluster over SASL_SSL with SCRAM-SHA-512
/opt/kafka/config/kraft/server.properties
process.roles=broker,controller
broker.rack=rac1

# The node id associated with this instance's roles. Just increment this for each node
node.id=1

# The connect string for the controller quorum: <node.id>@<host>:<port>
controller.quorum.voters=1@kafka-n1-srv:9093,2@kafka-n2-srv:9093,3@kafka-n3-srv:9093

# The address the socket server listens on.
listeners=SSL://kafka-n1-srv:9092,CONTROLLER://kafka-n1-srv:9093,CLIENTS://kafka-n1-srv:9095

sasl.enabled.mechanisms=

listener.name.clients.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required ;
listener.name.clients.sasl.enabled.mechanisms=SCRAM-SHA-512

# Name of listener used for communication between brokers.
inter.broker.listener.name=SSL

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=SSL://kafka-n1-srv:9092,CLIENTS://kafka-n1-srv:9095

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping is set in `listener.security.protocol.map`, PLAINTEXT is used by default.
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:SSL,SSL:SSL,CLIENTS:SASL_SSL

ssl.keystore.location=/etc/pki/tls/certs/keystore.jks
ssl.keystore.password=some_password
ssl.truststore.location=/etc/pki/tls/certs/truststore.jks
ssl.truststore.password=some_password

ssl.client.auth=required

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:CN=kafka-n1-srv;User:CN=kafka-n2-srv;User:CN=kafka-n3-srv

allow.everyone.if.no.acl.found=false

# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/data
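For completeness, a producer/consumer pointing at the CLIENTS listener then uses something like the sketch below; hostnames, the user, the passwords, and the keystore path are placeholders:

```properties
# client.properties (sketch; values are placeholders)
bootstrap.servers=kafka-n1-srv:9095,kafka-n2-srv:9095,kafka-n3-srv:9095
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="app-user" \
    password="app-secret";
ssl.truststore.location=/etc/pki/tls/certs/truststore.jks
ssl.truststore.password=changeit
# ssl.client.auth=required in server.properties applies to every SSL/SASL_SSL
# listener unless overridden per listener, so clients also need a certificate:
ssl.keystore.location=/etc/pki/tls/certs/client-keystore.jks
ssl.keystore.password=changeit
```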

Thanks for your engagement.