Setting up SSL woes

Hi,

the next step in my cluster setup is SSL.

I modified our old cluster's files to use the new one and everything seemed to work just fine, except that the client application could not access the new Kafka cluster, failing with the less than helpful message

[AdminClient clientId=adminclient-1] Connection to node -3 (host/ip:19093) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.

Of course the network is fine…
Anyhow, I played around a bit trying to connect to the broker directly to create a topic, when I realised that that didn't work either - even locally inside the Kafka broker container (same message).

I then found that apparently I did not activate SSL properly, so all SSL-enabled connection attempts fail.

I then started trying to find out why enabling SSL didn't work, and I found that evidently the listener names are not only names but do have a meaning, so

this enables SSL

-e KAFKA_LISTENERS='SSL://localhost:29094,EXTERNAL://localhost:19091' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:SSL,SSL:SSL,EXTERNAL:SSL' \

but

-e KAFKA_LISTENERS='SSL_LISTENER://localhost:29094,EXTERNAL://localhost:19091' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:SSL,SSL_LISTENER:SSL,EXTERNAL:SSL' \

does not.

Weird, but ok.
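My best guess at why: the image's startup script seems to decide whether SSL is on by substring-matching the listener string itself rather than by looking at the protocol map. Paraphrasing from the log output below (this is my reconstruction, not the actual script), the check would be something like:

if [[ $KAFKA_ADVERTISED_LISTENERS == *"SSL://"* ]]
then
    # "SSL://" matches, "SSL_LISTENER://" does not
    echo "SSL is enabled."
fi

which would explain why the SSL_LISTENER variant never activates SSL.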
Now I see that SSL is enabled, but I cannot get it to work: instead of using the key at the keystore location I provided, it keeps asking about KAFKA_SSL_KEYSTORE_FILENAME

podman logs kafka-1
===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
Running in KRaft mode...
SSL is enabled. KAFKA_SSL_KEYSTORE_FILENAME is required.
Command [/usr/local/bin/dub ensure KAFKA_SSL_KEYSTORE_FILENAME] FAILED !
If I provide that, it complains it cannot find the file, since it's searching in the wrong path (not the KAFKA_SSL_KEYSTORE_LOCATION I provided).

podman logs kafka-1
===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
Running in KRaft mode...
SSL is enabled.
Command [/usr/local/bin/dub path /etc/kafka/secrets/host1.pfx exists] FAILED !
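So it apparently derives the path as /etc/kafka/secrets/<KAFKA_SSL_KEYSTORE_FILENAME> and ignores the KAFKA_SSL_KEYSTORE_LOCATION I set. Judging by the failed command, the check it runs is effectively:

/usr/local/bin/dub path "/etc/kafka/secrets/${KAFKA_SSL_KEYSTORE_FILENAME}" exists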

I can trick it into finding the file by pointing to the correct path inside the container, but then it just asks for KAFKA_SSL_KEY_CREDENTIALS next.

It's supposed to use the key in the keystore, and I think I provided everything it needs, but it does not seem to take it :frowning:

podman run -d \
--name kafka-1 \
-h=host1 \
-p 19091:19091 \
-p 29094:29094 \
-v $humio_working_dir/data/cpkafka-broker:/data/cpkafka-data:Z \
-v $humio_working_dir/keystore:/keystore:z \
--secret=KAFKA_ssl_keystore_password,type=env,target=KAFKA_ssl_keystore_password \
--secret=KAFKA_ssl_truststore_password,type=env,target=KAFKA_ssl_truststore_password \
-e KAFKA_LISTENERS='SSL://localhost:29094,EXTERNAL://localhost:19091' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:SSL,SSL:SSL,EXTERNAL:SSL' \
-e KAFKA_ADVERTISED_LISTENERS='SSL://host1:29094,EXTERNAL://host1:19091' \
-e KAFKA_INTER_BROKER_LISTENER_NAME='SSL' \
-e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
-e KAFKA_BROKER_RACK='rack-0' \
-e KAFKA_LOG_DIRS='/data/cpkafka-data' \
-e KAFKA_MIN_INSYNC_REPLICAS=2 \
-e KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR=2 \
-e KAFKA_CONFLUENT_CLUSTER_LINK_ENABLE='true' \
-e KAFKA_CONFLUENT_REPORTERS_TELEMETRY_AUTO_ENABLE='false' \
-e KAFKA_NODE_ID=4 \
-e CLUSTER_ID='<clusterid>' \
-e KAFKA_CONTROLLER_QUORUM_VOTERS='1@host1:29091,2@host2:29092,3@host3:29093' \
-e KAFKA_PROCESS_ROLES='broker' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e KAFKA_SSL_KEYSTORE_LOCATION='/keystore/host1.pfx' \
-e KAFKA_ssl_keyStore_type='PKCS12' \
-e KAFKA_ssl_truststore_location='/keystore/truststore.pfx' \
-e KAFKA_ssl_trustStore_type='PKCS12' \
-e KAFKA_ssl_client_auth='requested' \
-e KAFKA_LOG4J_ROOT_LOGLEVEL="DEBUG" \
-e KAFKA_LOG4J_TOOLS_LOGLEVEL=ERROR \
-e KAFKA_LOG4J_LOGGERS="kafka=DEBUG,kafka.controller=WARN,kafka.log.LogCleaner=WARN,state.change.logger=WARN,kafka.producer.async.DefaultEventHandler=WARN" \
confluentinc/cp-kafka:latest

(Keystore/Truststore passwords are passed in via secrets)
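Judging by the dub errors above, my guess is that the image wants all SSL material under /etc/kafka/secrets, referenced by filename, with the passwords read from small credential files instead of env vars. A sketch of the shape it seems to expect (names illustrative, not verified):

-v $humio_working_dir/keystore:/etc/kafka/secrets:z \
-e KAFKA_SSL_KEYSTORE_FILENAME='host1.pfx' \
-e KAFKA_SSL_KEYSTORE_CREDENTIALS='keystore_creds' \
-e KAFKA_SSL_KEY_CREDENTIALS='key_creds' \
-e KAFKA_SSL_TRUSTSTORE_FILENAME='truststore.pfx' \
-e KAFKA_SSL_TRUSTSTORE_CREDENTIALS='truststore_creds' \

where keystore_creds, key_creds and truststore_creds would be plain-text files in the mounted directory, each containing just the respective password.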

So, a couple of questions:

  1. What are the required parameters to turn on SSL? Am I missing any?
  2. If not, why does it not pick them up?
  3. I tried getting more info via debug logging, but it's not helping at all (no change in the output). What is the proper way to get debug messages for this? The default is rather useless.

Thanks :slight_smile:

@mmuehlbeyer Are you back from vacation? :slight_smile:

yes @Rand

there you go with a docker-compose example

secrets are here for ref

Explicitly setting the key(store) credential file instead of passing in the keystore password seemed to have helped.
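In other words: putting the passwords into plain-text credential files next to the keystore and referencing those, along these lines ($keystore_password and $key_password are placeholders for however you hold the passwords):

echo -n "$keystore_password" > $humio_working_dir/keystore/keystore_creds
echo -n "$key_password" > $humio_working_dir/keystore/key_creds

and then pointing the broker at them with -e KAFKA_SSL_KEYSTORE_CREDENTIALS='keystore_creds' and -e KAFKA_SSL_KEY_CREDENTIALS='key_creds' instead of passing the passwords in as secrets.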

The next step was to SSL-enable the controllers, and since you provided examples I was looking at kafka/docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml at 88f0440066771202b9d6c979d6c45e806971d77d · confluentinc/kafka · GitHub

Unfortunately it did not work.

Therefore a couple of questions:

  1. Why is there a KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT' in the controller config when it is not used anywhere?
  2. Why is there no SSL in the SSL controller example file at all? Is communication between broker and controller not secured?
  3. When trying to run the exact example that is provided, my brokers don't run; they die with:
===> Configuring ...
Running in KRaft mode...
SSL is enabled.
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
===> Running in KRaft mode, skipping Zookeeper health check...
===> Using provided cluster id <id>...
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: The advertised.listeners config must not contain KRaft controller listeners from controller.listener.names when process.roles contains the broker role because Kafka clients that send requests via advertised listeners do not send requests to KRaft controllers -- they only send requests to KRaft brokers.
    at scala.Predef$.require(Predef.scala:337)
    at kafka.server.KafkaConfig.validateAdvertisedListenersDoesNotContainControllerListenersForKRaftBroker$1(KafkaConfig.scala:2352)
    at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:2376)
    at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:2290)
    at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1638)
    at kafka.tools.StorageTool$.$anonfun$execute$1(StorageTool.scala:71)
    at scala.Option.flatMap(Option.scala:283)
    at kafka.tools.StorageTool$.execute(StorageTool.scala:71)
    at kafka.tools.StorageTool$.main(StorageTool.scala:52)
    at kafka.tools.StorageTool.main(StorageTool.scala)

Why would that happen?
Controller has
KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'

The broker's advertised listeners are
-e KAFKA_ADVERTISED_LISTENERS='SSL-INTERNAL://broker-3-hostname:29096,SSL://broker3-hostname:19093' \

I assume the error message is not accurate and there is another issue, but as I said, I recreated the exact example (adjusted, of course, but I triple-checked…)
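For reference, my understanding of how the listener settings have to line up on a KRaft broker, going by that error message (hostnames and ports from my setup, illustrative only):

-e KAFKA_LISTENERS='SSL-INTERNAL://0.0.0.0:29096,SSL://0.0.0.0:19093' \
-e KAFKA_ADVERTISED_LISTENERS='SSL-INTERNAL://broker-3-hostname:29096,SSL://broker-3-hostname:19093' \
-e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:SSL,SSL-INTERNAL:SSL,SSL:SSL' \

i.e. CONTROLLER may appear in controller.listener.names and in the protocol map, but never in advertised.listeners - which, as far as I can tell, is already the case in my config, hence my confusion.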

Thanks,
regards

@mmuehlbeyer Any idea? Thanks

hey @Rand

let me double-check.
which Kafka version did you use?

best,
michael

"build-date": "2024-08-20T18:30:35",
"release": "7.7.1-29",

is the one I am currently running, but I can update to the latest if that helps?