KRaft: Apache Kafka without Zookeeper

I know I’ve already asked everyone’s opinion/experience with KRaft. Seems like not too many people have experimented yet… Let’s change that!

Here’s a great write-up of KRaft, how it works in place of Zookeeper, and an easy guide to get started on your own!


Why should I try it out? I know it’s a bit of a chicken-and-egg problem, but as long as production runs with Zookeeper, I don’t see why I should give it a try.

At some point, Kafka will be an entirely Zookeeper-less system, so this will eventually catch up with you. The time will come when Zookeeper no longer exists in production, but of course that doesn’t mean you need to be playing with preview versions. We are always excited about trying out new things, but you’re the one responsible for a production deployment, so your schedule wins over our enthusiasm. :stuck_out_tongue_winking_eye:


I can run the all-in-one demo in KRaft mode, but then I can’t connect to it from localhost with kafka-topics or any other client.

https://docs.confluent.io/platform/current/tutorials/build-your-own-demos.html?#build-your-own-demos-onprem-kraft

Are you using Docker for Mac, maybe? The KAFKA_LISTENERS and KAFKA_ADVERTISED_LISTENERS settings expect Docker to be reachable via localhost. Depending on how you set things up, that might need to be changed.
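To illustrate the split (a sketch, assuming the container name `broker` and the port mappings from the standard cp-all-in-one compose file): clients inside the Docker network and clients on the host have to use different listeners, because each listener advertises a different address.

```shell
# Inside the Docker network, clients resolve the broker by its container
# hostname and use the internal listener on 29092:
docker exec broker kafka-topics --bootstrap-server broker:29092 --list

# From the host (your Mac), clients use the listener advertised as
# localhost:9092, which Docker maps into the container:
kafka-topics --bootstrap-server localhost:9092 --list
```

If the second command hangs or fails while the first succeeds, the problem is almost always in what the host-facing listener advertises or binds to.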

Yes, I am using a Mac. I have no problem running the Docker Compose files for the other cp-all-in-one variants that include Zookeeper; I just can’t figure out what’s wrong with the KRaft configuration. I tried adding ports 29092 and 29093 to the ports list, similar to the config in the non-KRaft version that works, but that didn’t help.

Here’s my current config for the broker. Please let me know if you can spot anything that might need tweaking.

  broker:
    image: confluentinc/cp-kafka:7.0.0
    hostname: broker
    container_name: broker
    ports:
      - "29093:29093"
      - "29092:29092"
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://localhost:9092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
    volumes:
      - ./update_run.sh:/tmp/update_run.sh
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
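One thing worth checking against the config above (an observation, not a confirmed fix): the official cp-all-in-one KRaft compose binds the host-facing listener to 0.0.0.0 rather than localhost, because inside the container, localhost means the container’s own loopback interface, which Docker’s port mapping can’t reach. A hedged diagnostic sequence, assuming the container name `broker` from the compose file above:

```shell
# 1. Confirm the broker actually came up in KRaft mode:
docker logs broker 2>&1 | grep -i "Kafka Server started"

# 2. Check what the listeners resolved to inside the container; a host
#    listener bound to localhost/127.0.0.1 is unreachable via the port mapping:
docker exec broker bash -c "grep -i listeners /etc/kafka/kafka.properties"

# 3. If step 2 shows PLAINTEXT_HOST://localhost:9092 under "listeners",
#    try binding that listener to all interfaces instead, i.e.
#    KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'
#    then retest from the host:
kafka-topics --bootstrap-server localhost:9092 --list
```

The advertised listeners (KAFKA_ADVERTISED_LISTENERS) look right as posted; it’s the bind addresses in KAFKA_LISTENERS that differ from the reference compose file.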

Hello Danica,

I’ve actually gone quite deep into working with KRaft clusters instead of the standard Zookeeper setup. Everything works great, except that I can’t figure out how to configure the SCRAM mechanism. I have been able to successfully configure channels with the SASL_PLAINTEXT protocol, but I am at a loss for how to set this up using anything other than the PLAIN SASL mechanism.

I am using the Kafka images published by Bitnami and actually started with their community to see if there was perhaps some issue there, but it would appear that even they are somewhat in the dark as to the current state of SCRAM compatibility when using KRaft. I did notice in the KIP-500 release notes that there is still a gap in that SCRAM users can’t be configured using the administrative API; however, it is unclear whether that implies SCRAM simply isn’t there yet, or whether that note only applies to the external management API that ships with the tool itself.
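For context on why this gap bites (my understanding of the tooling, so treat the host names and ports here as assumptions): under Zookeeper, SCRAM credentials are typically bootstrapped by writing them directly to Zookeeper before any broker has to authenticate anyone, e.g.:

```shell
# Classic Zookeeper-based bootstrap: the credential is written straight into
# Zookeeper, so it exists before the first SASL handshake ever happens.
kafka-configs --zookeeper zookeeper:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=alice-secret]' \
  --entity-type users --entity-name alice

# The broker-based equivalent requires an already-authenticated connection
# to the cluster, which is exactly the chicken-and-egg problem in KRaft:
kafka-configs --bootstrap-server broker:9092 --alter \
  --add-config 'SCRAM-SHA-512=[password=alice-secret]' \
  --entity-type users --entity-name alice
```

With no Zookeeper to pre-seed, only the second path exists in KRaft, and it can’t run until a SASL listener is already usable, which matches the gap noted in the release notes.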

I’m including the GitHub issue that I opened with the Bitnami team to keep this post short (the setup is a little involved). Is this something that you or someone else on the team could offer some guidance on?

Thanks,
Ryan