I know I’ve already asked everyone’s opinion/experience with KRaft. Seems like not too many people have experimented yet… Let’s change that!
Here’s a great write-up of KRaft: how it works in place of ZooKeeper, and an easy guide to get started on your own!
Apache Kafka Raft (KRaft) simplifies Kafka architecture by consolidating metadata into Kafka itself, removing the ZooKeeper dependency. Learn how it works, its benefits, and what this means for Kafka's scalability.
gklijs
30 September 2021 05:13
Why should I try it out? I know it’s a bit of a chicken and egg problem but as long as production is run with Zookeeper I don’t know why I should give it a try.
At some point, Kafka will be an entirely ZooKeeper-less system, so this will eventually catch up with you. The time will come when ZooKeeper in production simply won't exist anymore, but of course that doesn't mean you need to be playing with preview versions. We're always excited about trying out new things, but you're the one responsible for a production deployment, so your schedule wins over our enthusiasm.
I can run the all-in-one demo in KRaft mode, but then I can’t connect to it from localhost with kafka-topics or any other client.
https://docs.confluent.io/platform/current/tutorials/build-your-own-demos.html?#build-your-own-demos-onprem-kraft
gklijs
19 October 2021 05:06
Are you using Docker for Mac, maybe? The `KAFKA_LISTENERS` and `KAFKA_ADVERTISED_LISTENERS` settings expect Docker to be reachable via `localhost`. Depending on how you set things up, that might need to be changed.
Yes, I am using a Mac. I have no problem running the Dockerfiles for the other cp-all-in-one variants that include ZooKeeper; I just can't figure out what's wrong with the KRaft configuration. I tried adding ports 29092 and 29093 to the `ports` list, similar to the config in the non-KRaft version that works. That didn't help.
Here's my current config for the `broker` service. Please let me know if you can spot anything that might need tweaking.
```
broker:
  image: confluentinc/cp-kafka:7.0.0
  hostname: broker
  container_name: broker
  ports:
    - "29093:29093"
    - "29092:29092"
    - "9092:9092"
    - "9101:9101"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
    KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_PROCESS_ROLES: 'broker,controller'
    KAFKA_NODE_ID: 1
    KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
    KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://localhost:9092'
    KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
    KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
    KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
  volumes:
    - ./update_run.sh:/tmp/update_run.sh
  command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
```
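In case it helps narrow this down, here's a rough two-step connectivity check (assuming the broker container above is running and the Kafka CLI tools are installed on the host):

```
# Inside the container (exercises the internal PLAINTEXT://broker:29092 listener):
docker exec broker kafka-topics --bootstrap-server broker:29092 --list

# From the host (exercises the PLAINTEXT_HOST listener published on 9092):
kafka-topics --bootstrap-server localhost:9092 --list
```

If the first command works but the second hangs, the problem is isolated to how the PLAINTEXT_HOST listener is bound and advertised, rather than to KRaft itself.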
Hello Danica,
I’ve actually gone quite deep on trying to work with KRaft clusters over the standard Zookeeper setup. Everything works great except I can’t figure out how to configure the SCRAM mechanism. I have been able to successfully configure channels with the SASL_PLAINTEXT protocol but I am at a loss for how to set this up using anything other than the PLAIN SASL mechanism.
I am using the Kafka images published by Bitnami and actually started with their community to see if there was perhaps some issue there, but it would appear that even they are somewhat in the dark as to the current state of SCRAM compatibility when using KRaft. I did notice in the KIP-500 release notes that there is still a gap in that SCRAM users can't be configured using the administrative API; however, it is unclear whether that implies SCRAM simply isn't there yet, or whether that note only applies to the external management API that ships with the tool itself.
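For context, on a ZooKeeper-backed cluster SCRAM credentials are normally created with kafka-configs, which writes them into ZooKeeper, exactly the store that KRaft removes. A sketch of that classic path, using a hypothetical `zk:2181` host and the user/password from the compose file below:

```
kafka-configs.sh --zookeeper zk:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=localPassword]' \
  --entity-type users --entity-name local
```

If your Kafka version offers no equivalent bootstrap path for a KRaft quorum, the JAAS file alone won't help: the broker still has no stored SCRAM credentials to verify clients against.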
I’m including the GitHub issue that I opened with the Bitnami team so as to reduce the content of this post (the setup is a little thick). Is this something that you or someone else on the team can lend some guidance towards?
Issue opened 22 Sep 2022, 04:16 PM UTC · labels: tech-issues, triage, kafka
### Name and Version
bitnami/kafka:3.2
### What steps will reproduce the bug?
…
1. Modify libkafka.sh to remove the step that tries to create SASL users in ZooKeeper. You also need to fix a bug in how the KafkaClient JAAS element is configured when using SCRAM (the ScramLoginModule does not live in the plain package).
./kafka/Dockerfile
```
FROM docker.io/bitnami/kafka:3.2
USER 0
RUN apt-get update && \
apt-get install -y jq openssl dos2unix netcat dnsutils && \
apt-get clean && \
sed -i.bak '/\[\[ "\$KAFKA_CFG_SASL_ENABLED_MECHANISMS" =~ "SCRAM" \]\] && kafka_create_sasl_scram_zookeeper_users/c\export KAFKA_OPTS="-Djava.security.auth.login.config=\${KAFKA_CONF_DIR}/kafka_jaas.conf"' /opt/bitnami/scripts/libkafka.sh && \
sed -i.bak '/org.apache.kafka.common.security.plain.ScramLoginModule required/c\ org.apache.kafka.common.security.scram.ScramLoginModule required' /opt/bitnami/scripts/libkafka.sh
COPY ./scripts /kafka-scripts
RUN chmod 777 -R /kafka-scripts && \
find /kafka-scripts -name '*.sh' | xargs dos2unix
USER 1001
WORKDIR /kafka-scripts
ENTRYPOINT ["./entrypoint.sh"]
```
./kafka/kafka-scripts/entrypoint.sh
```
#!/bin/bash
. /opt/bitnami/scripts/libvalidations.sh
[[ -z "$KAFKA_INTER_BROKER_USER" ]] && export KAFKA_INTER_BROKER_USER=broker
[[ -z "$KAFKA_INTER_BROKER_PASSWORD" ]] && export KAFKA_INTER_BROKER_PASSWORD=brokerPassword
[[ -z "$KAFKA_CFG_SASL_ENABLED_MECHANISMS" ]] && export KAFKA_CFG_SASL_ENABLED_MECHANISMS=${APP_KAFKA_CLIENT_SASL_MECHANISM:-SCRAM-SHA-512}
[[ -z "$KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL" ]] && export KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=${KAFKA_CFG_SASL_ENABLED_MECHANISMS}
export NODE_COUNT=${NODE_COUNT:-1}
export CLIENT_PORT=${CLIENT_PORT:-9092}
export INTERNAL_PORT=${INTERNAL_PORT:-9093}
export KAFKA_CFG_BROKER_ID=${KAFKA_CFG_BROKER_ID:-1}
export KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
export KAFKA_CLIENT_LISTENER_NAME=CLIENT
export KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=${KAFKA_INTER_BROKER_LISTENER_NAME}:${KAFKA_INTER_BROKER_LISTENER_PROTOCOL:-SASL_PLAINTEXT},${KAFKA_CLIENT_LISTENER_NAME}:${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
export KAFKA_CFG_LISTENERS="${KAFKA_INTER_BROKER_LISTENER_NAME}://:${INTERNAL_PORT},${KAFKA_CLIENT_LISTENER_NAME}://:${CLIENT_PORT}"
export KAFKA_CFG_ADVERTISED_LISTENERS="${KAFKA_INTER_BROKER_LISTENER_NAME}://kubernetes.docker.internal:${INTERNAL_PORT},${KAFKA_CLIENT_LISTENER_NAME}://kubernetes.docker.internal:${CLIENT_PORT}"
export KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
export KAFKA_CFG_DEFAULT_REPLICATION_FACTOR=$NODE_COUNT
export KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=$NODE_COUNT
export KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=$NODE_COUNT
export KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=${KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR:-2}
export KAFKA_CFG_NUM_PARTITIONS=${KAFKA_CFG_NUM_PARTITIONS:-10}
if is_boolean_yes "$KAFKA_ENABLE_KRAFT"; then
  [[ -z "$KAFKA_CFG_NODE_ID" ]] && export KAFKA_CFG_NODE_ID="${KAFKA_CFG_BROKER_ID}"
  export KAFKA_CFG_PROCESS_ROLES=broker,controller
  export CONTROLLER_PORT=${CONTROLLER_PORT:-2181}
  export KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
  export KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=${KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP},${KAFKA_CFG_CONTROLLER_LISTENER_NAMES}:PLAINTEXT
  export KAFKA_CFG_LISTENERS="${KAFKA_CFG_LISTENERS},${KAFKA_CFG_CONTROLLER_LISTENER_NAMES}://:${CONTROLLER_PORT}"
fi
if [[ -z "$KAFKA_CLIENT_USERS" && -z "$KAFKA_CLIENT_PASSWORDS" ]]; then
  export KAFKA_CLIENT_USERS=$KAFKA_INTER_BROKER_USER
  export KAFKA_CLIENT_PASSWORDS=$KAFKA_INTER_BROKER_PASSWORD
  for user_key in $(printenv | grep APP_KAFKA.*USER | grep -o -P '(?<=APP_KAFKA).*(?=USER)')
  do
    userVar='$'"APP_KAFKA"$user_key"USER"
    passwordVar='$'"APP_KAFKA"$user_key"PASSWORD"
    export KAFKA_CLIENT_USERS=$KAFKA_CLIENT_USERS,$(eval echo $userVar)
    export KAFKA_CLIENT_PASSWORDS=$KAFKA_CLIENT_PASSWORDS,$(eval echo $passwordVar)
  done
fi
echo "users: ${KAFKA_CLIENT_USERS}"
if [[ -n "$NODESET_FQDN" ]]; then
  current_node_id="${HOSTNAME: -1}"
  export NODE_FQDN="$HOSTNAME.$NODESET_FQDN"
  export KAFKA_CFG_ADVERTISED_LISTENERS="${KAFKA_INTER_BROKER_LISTENER_NAME}://${NODE_FQDN}:${INTERNAL_PORT},${KAFKA_CLIENT_LISTENER_NAME}://${NODE_FQDN}:${CLIENT_PORT}"
  export KAFKA_CFG_BROKER_ID="${current_node_id}"
  if is_boolean_yes "$KAFKA_ENABLE_KRAFT"; then
    export KAFKA_CFG_NODE_ID="${current_node_id}"
    for node_server_id in $(seq $NODE_COUNT);
    do
      node_id=$((node_server_id-1))
      # Shell assignments must not have spaces around '='
      [[ -n "${KAFKA_CFG_CONTROLLER_QUORUM_VOTERS}" ]] && export KAFKA_CFG_CONTROLLER_QUORUM_VOTERS="${KAFKA_CFG_CONTROLLER_QUORUM_VOTERS},"
      export KAFKA_CFG_CONTROLLER_QUORUM_VOTERS="${KAFKA_CFG_CONTROLLER_QUORUM_VOTERS}${node_id}@${HOSTNAME::-1}$node_id.$NODESET_FQDN:$CONTROLLER_PORT"
    done
  fi
fi
exec /opt/bitnami/scripts/kafka/entrypoint.sh /opt/bitnami/scripts/kafka/run.sh
```
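The quorum-voter loop in that entrypoint is easy to get subtly wrong in bash (one common mistake is putting spaces around `=` in an `export`, which bash parses as a command instead of an assignment). Here is a self-contained sketch of the string it is meant to produce, with hypothetical stand-in values for HOSTNAME and NODESET_FQDN:

```
#!/usr/bin/env bash
# Build a controller.quorum.voters string of the form
# "0@kafka-0.<fqdn>:2181,1@kafka-1.<fqdn>:2181,..." for NODE_COUNT nodes.
# All values below are hypothetical stand-ins for a StatefulSet-style env.
NODE_COUNT=3
HOSTNAME=kafka-0                 # pod-style hostname ending in its index
NODESET_FQDN=svc.cluster.local
CONTROLLER_PORT=2181
VOTERS=""
for node_server_id in $(seq $NODE_COUNT); do
  node_id=$((node_server_id - 1))
  # Append a comma separator before every entry except the first;
  # note there must be no spaces around '='.
  [[ -n "$VOTERS" ]] && VOTERS="${VOTERS},"
  # "${HOSTNAME::-1}" strips the trailing index from e.g. "kafka-0".
  VOTERS="${VOTERS}${node_id}@${HOSTNAME::-1}${node_id}.${NODESET_FQDN}:${CONTROLLER_PORT}"
done
echo "$VOTERS"
```

Running it prints `0@kafka-0.svc.cluster.local:2181,1@kafka-1.svc.cluster.local:2181,2@kafka-2.svc.cluster.local:2181`, one `id@host:port` entry per controller.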
3. Configure the Kafka KRaft cluster
./docker-compose.yaml
```
version: '3.9'
networks:
  infra:
    driver: bridge
services:
  kafka-1:
    build: ./kafka
    networks:
      - infra
    ports:
      - 9092:9092
      - 9093:9093
      - 2181:2181
    extra_hosts:
      - "kubernetes.docker.internal:host-gateway"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_KRAFT_CLUSTER_ID=Z21R2idcSLiO8yU0KKOTxA
      - KAFKA_CLIENT_USERS=local
      - KAFKA_CLIENT_PASSWORDS=localPassword
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kubernetes.docker.internal:2181,2@kubernetes.docker.internal:2182,3@kubernetes.docker.internal:2183
      - KAFKA_CFG_BROKER_ID=1
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_LOG_RETENTION_HOURS=1
      - NODE_COUNT=3
      - CLIENT_PORT=9092
      - INTERNAL_PORT=9093
      - CONTROLLER_PORT=2181
      - APP_KAFKA_CLIENT_PROTOCOL=${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
    healthcheck:
      test: /bin/bash -c 'nc -z localhost 9092'
      interval: 10s
      timeout: 5s
      retries: 9
  kafka-2:
    build: ./kafka
    networks:
      - infra
    ports:
      - 9094:9094
      - 9095:9095
      - 2182:2182
    extra_hosts:
      - "kubernetes.docker.internal:host-gateway"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_KRAFT_CLUSTER_ID=Z21R2idcSLiO8yU0KKOTxA
      - KAFKA_CLIENT_USERS=local
      - KAFKA_CLIENT_PASSWORDS=localPassword
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kubernetes.docker.internal:2181,2@kubernetes.docker.internal:2182,3@kubernetes.docker.internal:2183
      - KAFKA_CFG_BROKER_ID=2
      - KAFKA_CFG_NODE_ID=2
      - KAFKA_CFG_LOG_RETENTION_HOURS=1
      - NODE_COUNT=3
      - CLIENT_PORT=9094
      - INTERNAL_PORT=9095
      - CONTROLLER_PORT=2182
      - APP_KAFKA_CLIENT_PROTOCOL=${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
    healthcheck:
      test: /bin/bash -c 'nc -z localhost 9094'
      interval: 10s
      timeout: 5s
      retries: 9
  kafka-3:
    build: ./kafka
    networks:
      - infra
    ports:
      - 9096:9096
      - 9097:9097
      - 2183:2183
    extra_hosts:
      - "kubernetes.docker.internal:host-gateway"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_KRAFT_CLUSTER_ID=Z21R2idcSLiO8yU0KKOTxA
      - KAFKA_CLIENT_USERS=local
      - KAFKA_CLIENT_PASSWORDS=localPassword
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kubernetes.docker.internal:2181,2@kubernetes.docker.internal:2182,3@kubernetes.docker.internal:2183
      - KAFKA_CFG_BROKER_ID=3
      - KAFKA_CFG_NODE_ID=3
      - KAFKA_CFG_LOG_RETENTION_HOURS=1
      - NODE_COUNT=3
      - CLIENT_PORT=9096
      - INTERNAL_PORT=9097
      - CONTROLLER_PORT=2183
      - APP_KAFKA_CLIENT_PROTOCOL=${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
    healthcheck:
      test: /bin/bash -c 'nc -z localhost 9096'
      interval: 10s
      timeout: 5s
      retries: 9
```
5. The cluster will start, but all attempts to log in to it will fail. Changing KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL to PLAIN and configuring the client to use the PLAIN SASL mechanism works just fine; however, login fails if either of those is configured with a SCRAM SASL mechanism.
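For anyone reproducing this, the client side boils down to a properties file of roughly this shape (hypothetical file name, credentials taken from the compose file above); swapping the mechanism to PLAIN and the login module to PlainLoginModule is the variant that succeeds:

```
# client-scram.properties (fails against these brokers)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="local" \
    password="localPassword";
```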
```
ERROR:kafka.conn:<BrokerConnection node_id=bootstrap-1 host=kubernetes.docker.internal:9092 <authenticating> [IPv4 ('192.168.65.2', 9092)]>: Error receiving reply from server
service-alerts-cli-1 | Traceback (most recent call last):
service-alerts-cli-1 | File "/usr/src/app/packages/kafka/conn.py", line 645, in _try_authenticate_plain
service-alerts-cli-1 | data = self._recv_bytes_blocking(4)
service-alerts-cli-1 | File "/usr/src/app/packages/kafka/conn.py", line 616, in _recv_bytes_blocking
service-alerts-cli-1 | raise ConnectionError('Connection reset during recv')
service-alerts-cli-1 | ConnectionError: Connection reset during recv
```
### What is the expected behavior?
That the server would support SCRAM authentication.
### What do you see instead?
Secure authentication only works when using the PLAIN SASL mechanism.
### Additional information
I admit that this may be a case of KRaft not being ready for SCRAM just yet; I am simply unable to determine whether that is the case. The Kafka release notes indicate that the only pending SCRAM item is the ability to create users via the administrative API, but to my understanding that doesn't necessarily preclude configuring SCRAM through the JAAS file.
I am completely fine if this is a known limitation of KRaft at this point. I simply couldn't find anything that says SCRAM flat-out doesn't work at all just yet.
Thanks,
Ryan