SSL Woes II (repost)

Hi,

This is a repost of my previous thread, which accidentally got deleted.

I am trying to set up a simple SSL-enabled cluster and it's proving way more complicated than I would have thought.
I want client-to-Kafka, Kafka-to-controller, and inter-cluster communication to be secured (ideally; at this point I might settle for just having client/server communication encrypted :()

We have 3 nodes, we run podman, and I have a pre-existing Kafka cluster (ZooKeeper-based) that I want to replace; it is running just fine with SSL.

I learned that it's better to have separate containers for the Kafka broker and the KRaft controller, so that's what I am trying to do.
I have the cluster up and running fine without SSL (no clients connected, since those are SSL-enabled). The problem is that

  1. For me there is no clear way of turning SSL on/off - there is no flag. There are only config options, which don't work for unknown reasons up until they suddenly work.
  2. I then tried recreating the distributed SSL example at kafka/docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml at 88f0440066771202b9d6c979d6c45e806971d77d · confluentinc/kafka · GitHub (converted to podman and script-based startup), but that also failed, potentially due to a version incompatibility.

@mmuehlbeyer You wanted to verify whether that example runs with the currently publicly available Kafka 3.8 builds (Confluent build 7.8.0 or latest), since you remembered there was potentially a bug pre-3.9.

Thanks

Are you aiming to use TLS authentication, or a different kind of authentication over SSL?

Are you able to share the converted script / other assets so that we can repro with our own keys and try to tweak things to get it working?

Hi,

yes, TLS.

So I gave recreating the example another try (this time staying as close to it as possible, without my previous adjustments).

I can't get Kafka to run yet -
when following your example directly I get:

javax.net.ssl.SSLHandshakeException: PKIX path building failed

[2025-01-06 15:08:11,555] ERROR [BrokerServer id=4] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target for configuration A client SSLEngine created with the provided settings can’t connect to a server SSLEngine created with those settings.
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:103)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)

When I add KAFKA_SSL_KEYSTORE_LOCATION/KAFKA_SSL_TRUSTSTORE_LOCATION I get another error:

java.nio.file.NoSuchFileException

[2025-01-06 12:44:19,315] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Failed to load SSL keystore truststore_int.pfx of type PKCS12
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:382)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.(DefaultSslEngineFactory.java:354)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createTruststore(DefaultSslEngineFactory.java:327)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.configure(DefaultSslEngineFactory.java:171)
at org.apache.kafka.common.security.ssl.SslFactory.instantiateSslEngineFactory(SslFactory.java:141)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:98)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
at kafka.network.Processor.(SocketServer.scala:977)
at kafka.network.Acceptor.newProcessor(SocketServer.scala:882)
at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:852)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
at kafka.network.Acceptor.addProcessors(SocketServer.scala:851)
at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:525)
at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:253)
at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:177)
at kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:177)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:619)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:617)
at scala.collection.AbstractIterable.foreach(Iterable.scala:935)
at kafka.network.SocketServer.(SocketServer.scala:177)
at kafka.server.BrokerServer.startup(BrokerServer.scala:253)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:97)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:97)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:97)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.nio.file.NoSuchFileException: truststore_int.pfx
at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(Unknown Source)
at java.base/java.nio.file.Files.newByteChannel(Unknown Source)
at java.base/java.nio.file.Files.newByteChannel(Unknown Source)
at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(Unknown Source)
at java.base/java.nio.file.Files.newInputStream(Unknown Source)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:375)
… 28 more
[2025-01-06 12:44:19,317] INFO App info kafka.server for 4 unregistered (org.apache.kafka.common.utils.AppInfoParser)

despite the file being there:

podman exec -it kafka-1 ls -l /etc/kafka/secrets | grep truststore_int.pfx
-rwxr-xr-x. 1 root root 23130 Jul  3  2023 truststore_int.pfx

This is the config:

Kafka-1 Config

podman unshare chown 1000:1000 -R /hostname0/data/cpkafka-broker
podman run -d \
  --name kafka-1 \
  -h=hostname0.domain \
  -p 19093:19093 \
  -p 39093:39093 \
  -v /hostname0/data/cpkafka-broker:/data/cpkafka-data:Z \
  -v /hostname0/keystore:/etc/kafka/secrets:z \
  -e KAFKA_NODE_ID=4 \
  -e KAFKA_PROCESS_ROLES='broker' \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='SSL:SSL,CONTROLLER:PLAINTEXT,SSL-INTERNAL:SSL' \
  -e KAFKA_LISTENERS='SSL-INTERNAL://:19093,SSL://:39093' \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS='1@hostname0.domain:29092,2@hostname1.domain:29092,3@hostname2.domain:29092' \
  -e KAFKA_INTER_BROKER_LISTENER_NAME='SSL-INTERNAL' \
  -e KAFKA_SECURITY_PROTOCOL='SSL' \
  -e KAFKA_ADVERTISED_LISTENERS='SSL-INTERNAL://hostname0.domain:19093,SSL://:39093' \
  -e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
  -e CLUSTER_ID='clusterid' \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
  -e KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1 \
  -e KAFKA_LOG_DIRS='/data/cpkafka-data' \
  -e KAFKA_SSL_KEYSTORE_LOCATION='hostname0.domain.pfx' \
  -e KAFKA_SSL_KEYSTORE_FILENAME='hostname0.domain.pfx' \
  -e KAFKA_SSL_KEYSTORE_CREDENTIALS='keystore.crd' \
  -e KAFKA_SSL_KEY_CREDENTIALS='key.crd' \
  -e KAFKA_SSL_TRUSTSTORE_LOCATION='truststore_int.pfx' \
  -e KAFKA_SSL_TRUSTSTORE_FILENAME='truststore_int.pfx' \
  -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS='truststore.crd' \
  -e KAFKA_ssl_client_auth='required' \
  -e KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM="" \
  -e KAFKA_ssl_keyStore_type='PKCS12' \
  -e KAFKA_ssl_trustStore_type='PKCS12' \
  -e KAFKA_LOG4J_ROOT_LOGLEVEL="DEBUG" \
  -e KAFKA_LOG4J_TOOLS_LOGLEVEL=ERROR \
  -e KAFKA_LOG4J_LOGGERS="kafka=DEBUG,kafka.controller=WARN,kafka.log.LogCleaner=WARN,state.change.logger=WARN,kafka.producer.async.DefaultEventHandler=WARN" \
  confluentinc/cp-kafka:latest

Of course, all the files in use work fine with the current ZooKeeper-based Kafka cluster in SSL mode. They are in PKCS12 format, which I explicitly set for the keystore and truststore.

Secondary question:
I noticed that there is no mention of SSL at all in the controller config you provide - is that intentional? Is broker-to-controller, or inter-controller, communication not supposed to be encrypted?

Thanks,
cheers

Where did you get KAFKA_SSL_KEYSTORE_FILENAME, KAFKA_SSL_KEYSTORE_CREDENTIALS, KAFKA_SSL_TRUSTSTORE_FILENAME, and KAFKA_SSL_TRUSTSTORE_CREDENTIALS from? I’m not seeing those mentioned in the docs here or here. (See here for how Docker environment variables are converted to properties, e.g., KAFKA_SSL_KEYSTORE_LOCATION corresponds to ssl.keystore.location.)
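For illustration, the documented conversion rule is: drop the KAFKA_ prefix, lowercase the rest, and replace underscores with dots. A few examples (illustrative only; the exact behavior is in the linked conversion logic):

KAFKA_SSL_KEYSTORE_LOCATION      ->  ssl.keystore.location
KAFKA_SSL_TRUSTSTORE_LOCATION    ->  ssl.truststore.location
KAFKA_INTER_BROKER_LISTENER_NAME ->  inter.broker.listener.name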

So, what I think is happening here is that KAFKA_SSL_TRUSTSTORE_LOCATION is set to truststore_int.pfx, and that would only work if /etc/kafka/secrets were the working directory of the broker process. Try setting KAFKA_SSL_TRUSTSTORE_LOCATION to the absolute path /etc/kafka/secrets/truststore_int.pfx and similarly use the absolute path for the keystore location.

It can be - you’d want to set the protocol to CONTROLLER:SSL rather than CONTROLLER:PLAINTEXT in the security protocol map config.
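For example, adapting the map from your broker command (the listener names are yours, so adjust as needed):

-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='SSL:SSL,CONTROLLER:SSL,SSL-INTERNAL:SSL' \

The controller container would then also need its own keystore/truststore settings so that its CONTROLLER listener can actually serve TLS.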

Hi,

The filename and credentials variables are set in the example config you provided, which I used (kafka/docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml at 88f0440066771202b9d6c979d6c45e806971d77d · confluentinc/kafka · GitHub).

This very discrepancy between examples and documentation is what makes this so confusing and hard to follow :frowning:

I explicitly set my keystore to be located at /etc/kafka/secrets/ because that's what the example did.

I had relative paths before, but happy to try again with absolute ones. :slight_smile:

Re SSL - I will give that a try too, but then I have to wonder why that is not set in the first place in an SSL example?

Thanks

Edit 1:
When I provide the location with
-e KAFKA_SSL_TRUSTSTORE_LOCATION='/etc/kafka/secrets/truststore_int.pfx' \
it basically behaves like when I don't provide the location at all (since it's now looking at /etc/kafka/secrets//etc/kafka/secrets/truststore_int.pfx, which does not exist).

The error I am getting then is:

the trustAnchors parameter must be non-empty for configuration

[2025-01-07 07:55:23,477] INFO [BrokerServer id=4] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2025-01-07 07:55:23,479] ERROR [BrokerServer id=4] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
org.apache.kafka.common.config.ConfigException: Invalid value java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty for configuration A client SSLEngine created with the provided settings can’t connect to a server SSLEngine created with those settings.
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:103)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
at kafka.network.Processor.(SocketServer.scala:977)
at kafka.network.Acceptor.newProcessor(SocketServer.scala:882)
at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:852)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
at kafka.network.Acceptor.addProcessors(SocketServer.scala:851)
at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:525)
at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:253)
at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:177)
at kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:177)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:619)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:617)
at scala.collection.AbstractIterable.foreach(Iterable.scala:935)
at kafka.network.SocketServer.(SocketServer.scala:177)
at kafka.server.BrokerServer.startup(BrokerServer.scala:253)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:97)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:97)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:97)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
[2025-01-07 07:55:23,484] INFO [BrokerServer id=4] Transition from STARTED to SHUTTING_DOWN (kafka.server.BrokerServer)

which I read as "can't find truststore" - which is why I added the location in the first place.

Now, when following the linked general documentation on how to set up SSL:

ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234

It errors out right away with

===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
Running in KRaft mode...
SSL is enabled.
KAFKA_SSL_KEYSTORE_FILENAME is required.
Command [/usr/local/bin/dub ensure KAFKA_SSL_KEYSTORE_FILENAME] FAILED !

I then add the filename, only to get

===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
Running in KRaft mode...
SSL is enabled.
KAFKA_SSL_KEY_CREDENTIALS is required.
Command [/usr/local/bin/dub ensure KAFKA_SSL_KEY_CREDENTIALS] FAILED !

And that goes on to KAFKA_SSL_KEYSTORE_CREDENTIALS, which then brings us back to
the same issue as before:

java.nio.file.NoSuchFileException: truststore_int.pfx

[2025-01-07 08:10:38,781] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Failed to load SSL keystore truststore_int.pfx of type PKCS12
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:382)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.(DefaultSslEngineFactory.java:354)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createTruststore(DefaultSslEngineFactory.java:327)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.configure(DefaultSslEngineFactory.java:171)
at org.apache.kafka.common.security.ssl.SslFactory.instantiateSslEngineFactory(SslFactory.java:141)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:98)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:70)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
at kafka.network.Processor.(SocketServer.scala:977)
at kafka.network.Acceptor.newProcessor(SocketServer.scala:882)
at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:852)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
at kafka.network.Acceptor.addProcessors(SocketServer.scala:851)
at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:525)
at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:253)
at kafka.network.SocketServer.$anonfun$new$31(SocketServer.scala:177)
at kafka.network.SocketServer.$anonfun$new$31$adapted(SocketServer.scala:177)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:619)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:617)
at scala.collection.AbstractIterable.foreach(Iterable.scala:935)
at kafka.network.SocketServer.(SocketServer.scala:177)
at kafka.server.BrokerServer.startup(BrokerServer.scala:253)
at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:97)
at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:97)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:97)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.nio.file.NoSuchFileException: truststore_int.pfx
at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(Unknown Source)
at java.base/java.nio.file.Files.newByteChannel(Unknown Source)
at java.base/java.nio.file.Files.newByteChannel(Unknown Source)
at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(Unknown Source)
at java.base/java.nio.file.Files.newInputStream(Unknown Source)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:375)
… 28 more

Edit 2:
Wrt controller SSL -

Now the example does not provide a KAFKA_LISTENER_SECURITY_PROTOCOL_MAP for the controller at all.

Do you want me to add that variable there, or adjust the one in the Kafka config?
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:SSL,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
But how would the controller then know it's supposed to accept SSL?

Oh my mistake – I see that these environment variables are Docker-specific and the Kafka config ssl.keystore.location gets constructed here. I’ll have to play with this example from scratch to give better guidance.

Ok, a look at the source code at least tells me that I should not set location and filename at the same time.

Actually it's unclear how LOCATION (and CREDENTIALS) are used (is the user-provided value overriding the constructed value? Or is the user not really supposed to set these?)
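
From what I can tell, the construction looks roughly like this (my paraphrase of the linked source, not the exact code):

# paraphrased sketch: FILENAME drives the constructed location, and the
# password is read from the credentials file under /etc/kafka/secrets
if [[ -n "${KAFKA_SSL_KEYSTORE_FILENAME-}" ]]; then
  export KAFKA_SSL_KEYSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_FILENAME"
  export KAFKA_SSL_KEYSTORE_PASSWORD="$(cat "/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_CREDENTIALS")"
fi

If that is right, then setting LOCATION by hand on top of FILENAME is redundant at best.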

Anyhow, when specifying exactly the variables referenced in the code I do not get the missing truststore exception, but instead an SSL exception
that points to the same issue (-> truststore null):

DEBUG Created SSL context with keystore SecurityStore(path=/etc/kafka/secrets/hostname0.domain.pfx, modificationTime=Wed Feb 08 13:11:39 UTC 2023), truststore **null**, provider SunJSSE. (org.apache.kafka.common.security.ssl.DefaultSslEngineFactory)
[2025-01-07 14:46:19,488] INFO [BrokerServer id=4] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2025-01-07 14:46:19,489] ERROR [BrokerServer id=4] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)

That means it seems to see the file in this case but does not seem to be able to read it - but why?

This is our old ZooKeeper-based cluster config, which is running just fine (on an older Kafka version, of course):

security.inter.broker.protocol=SSL
ssl.keystore.location=/keystore/hostname0.domain.pfx
ssl.keystore.password=<pw>
ssl.truststore.location=/keystore/truststore_int.pfx
ssl.truststore.password=<pw>
ssl.client.auth=required
ssl.trustStore.type=PKCS12
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.truststore.location=/keystore/truststore_int.pfx
zookeeper.ssl.truststore.password=<pw>
zookeeper.ssl.truststore.type=PKCS12
zookeeper.ssl.keystore.location=/keystore/hostname0.domain.pfx
zookeeper.ssl.keystore.password=<pw>
zookeeper.ssl.keystore.type=PKCS12

UPDATE: this response is incorrect. Leaving it here so that the conversation makes sense

One source of confusion for me is your usage of the confluentinc/cp-kafka image here:

Kafka-1 Config
(the full podman run command quoted from above, ending with the confluentinc/cp-kafka:latest image)

But the error that you hit more recently (KAFKA_SSL_KEYSTORE_FILENAME is required.) looks to be from the apache/kafka or apache/kafka-native image.

The Docker Compose example that you shared will work with the apache/kafka or apache/kafka-native image but not confluentinc/cp-kafka. The SSL property construction that I pointed to only runs in the AK images. In the AK codebase, that happens here for the apache/kafka image (calls this, which then calls the environment variable logic linked earlier). The analogous Dockerfile for cp-kafka is here, and it has different container startup scripts. This explains the java.nio.file.NoSuchFileException error.

Which image are you aiming to use?

If it’s apache/kafka or apache/kafka-native then the Docker Compose example in the kafka repo should work and then it’s a matter of tweaking the example to get encrypted controller communication if you’d like. E.g., this worked for me:

IMAGE=apache/kafka:3.9.0 docker compose -f docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml up

kafka-console-producer --topic test --bootstrap-server localhost:49093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties

<enter some messages>

kafka-console-consumer --topic test --bootstrap-server localhost:49093 --consumer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties

If it’s confluentinc/cp-kafka then the environment variables will be a little different following the documentation I shared previously.
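E.g., something along these lines (an untested sketch; per the docs, the startup script resolves the FILENAME and CREDENTIALS files under /etc/kafka/secrets):

-e KAFKA_SSL_KEYSTORE_FILENAME='kafka01.keystore.jks' \
-e KAFKA_SSL_KEYSTORE_CREDENTIALS='kafka_keystore_creds' \
-e KAFKA_SSL_KEY_CREDENTIALS='kafka_ssl_key_creds' \
-e KAFKA_SSL_TRUSTSTORE_FILENAME='kafka.truststore.jks' \
-e KAFKA_SSL_TRUSTSTORE_CREDENTIALS='kafka_truststore_creds' \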

I see.

I am currently using the confluentinc images.
(I could use the AK images if that's beneficial, better, easier, or better documented, but I need to get those loaded into a local repo first. If you think that's the way forward I can do that?)

@mmuehlbeyer provided the docker SSL example link originally when I asked for a complete set of what I should use, since getting tidbits did not work.

I did not realize that those were from AK and not Confluent, so I used them in the wrong combination.

At this point I'd prefer to get a Confluent-based solution, since that is what I have now.
I am just looking for a properly working example of how to configure controller and broker to run in SSL mode.

I tried it off the Confluent documentation initially and it did not work as expected, but if that's all there is I will give it another try.

I’m poking around Confluent docs and GitHub and don’t see an example for what you’re trying to do. I’ll work on putting one together and then it can eventually wind up in Confluent documentation.


Brilliant, thank you very much.

In the meantime I started the long internal process to give Apache Kafka a whirl, just in case :wink:

On closer inspection, this was incorrect:

The cp-kafka container startup scripts do have the same environment variable logic as the AK images (here).

Below is an example Docker Compose file that runs 3 brokers and 3 controllers (cp-kafka 7.8.0) in isolated mode using TLS everywhere. Try this as a drop-in replacement for docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml in the kafka repo. I didn’t expect to need the KAFKA_LISTENER_NAME_CONTROLLER_* environment variables in the controller container config but I hit auth errors without them. (I thought that it would fall back and use the SSL properties that get created from the KAFKA_SSL_* environment variables.) So, not sure about that but give this a try:

SSL KRaft cluster Docker Compose file
---
version: '2'
services:
  controller-1:
    image: confluentinc/cp-kafka:7.8.0
    hostname: controller-1
    container_name: controller-1
    volumes:
      - ../../../../fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: 'controller'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:SSL,SSLINTERNAL:SSL'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@controller-1:29092,2@controller-2:29092,3@controller-3:29092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SSLINTERNAL'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LISTENERS: 'CONTROLLER://:29092'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_LOCATION: '/etc/kafka/secrets/kafka.truststore.jks'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_PASSWORD: 'abcdefgh'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_LOCATION: '/etc/kafka/secrets/kafka01.keystore.jks'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_PASSWORD: 'abcdefgh'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_KEYSTORE_FILENAME: 'kafka01.keystore.jks'
      KAFKA_SSL_KEYSTORE_CREDENTIALS: 'kafka_keystore_creds'
      KAFKA_SSL_KEY_CREDENTIALS: 'kafka_ssl_key_creds'
      KAFKA_SSL_TRUSTSTORE_FILENAME: 'kafka.truststore.jks'
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: 'kafka_truststore_creds'
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
  
  controller-2:
    image: confluentinc/cp-kafka:7.8.0
    hostname: controller-2
    container_name: controller-2
    volumes:
      - ../../../../fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_NODE_ID: 2
      KAFKA_PROCESS_ROLES: 'controller'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:SSL,SSLINTERNAL:SSL'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@controller-1:29092,2@controller-2:29092,3@controller-3:29092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SSLINTERNAL'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LISTENERS: 'CONTROLLER://:29092'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_LOCATION: '/etc/kafka/secrets/kafka.truststore.jks'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_PASSWORD: 'abcdefgh'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_LOCATION: '/etc/kafka/secrets/kafka01.keystore.jks'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_PASSWORD: 'abcdefgh'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_KEYSTORE_FILENAME: 'kafka01.keystore.jks'
      KAFKA_SSL_KEYSTORE_CREDENTIALS: 'kafka_keystore_creds'
      KAFKA_SSL_KEY_CREDENTIALS: 'kafka_ssl_key_creds'
      KAFKA_SSL_TRUSTSTORE_FILENAME: 'kafka.truststore.jks'
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: 'kafka_truststore_creds'
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    
  controller-3:
    image: confluentinc/cp-kafka:7.8.0
    hostname: controller-3
    container_name: controller-3
    volumes:
      - ../../../../fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_NODE_ID: 3
      KAFKA_PROCESS_ROLES: 'controller'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:SSL,SSLINTERNAL:SSL'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@controller-1:29092,2@controller-2:29092,3@controller-3:29092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SSLINTERNAL'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LISTENERS: 'CONTROLLER://:29092'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_LOCATION: '/etc/kafka/secrets/kafka.truststore.jks'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_PASSWORD: 'abcdefgh'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_LOCATION: '/etc/kafka/secrets/kafka01.keystore.jks'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_PASSWORD: 'abcdefgh'
      KAFKA_LISTENER_NAME_CONTROLLER_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_KEYSTORE_FILENAME: 'kafka01.keystore.jks'
      KAFKA_SSL_KEYSTORE_CREDENTIALS: 'kafka_keystore_creds'
      KAFKA_SSL_KEY_CREDENTIALS: 'kafka_ssl_key_creds'
      KAFKA_SSL_TRUSTSTORE_FILENAME: 'kafka.truststore.jks'
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: 'kafka_truststore_creds'
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""

  kafka-1:
    image: confluentinc/cp-kafka:7.8.0
    hostname: kafka-1
    container_name: kafka-1
    ports:
      - 29093:9093
    volumes:
      - ../../../../fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_NODE_ID: 4
      KAFKA_PROCESS_ROLES: 'broker'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'SSL:SSL,CONTROLLER:SSL,SSLINTERNAL:SSL'
      KAFKA_LISTENERS: 'SSLINTERNAL://:19093,SSL://:9093'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@controller-1:29092,2@controller-2:29092,3@controller-3:29092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SSLINTERNAL'
      KAFKA_SECURITY_PROTOCOL: SSL
      KAFKA_ADVERTISED_LISTENERS: SSLINTERNAL://kafka-1:19093,SSL://localhost:29093
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_SSL_KEYSTORE_FILENAME: 'kafka01.keystore.jks'
      KAFKA_SSL_KEYSTORE_CREDENTIALS: 'kafka_keystore_creds'
      KAFKA_SSL_KEY_CREDENTIALS: 'kafka_ssl_key_creds'
      KAFKA_SSL_TRUSTSTORE_FILENAME: 'kafka.truststore.jks'
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: 'kafka_truststore_creds'
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    depends_on:
      - controller-1
      - controller-2
      - controller-3

  kafka-2:
    image: confluentinc/cp-kafka:7.8.0
    hostname: kafka-2
    container_name: kafka-2
    ports:
      - 39093:9093
    volumes:
      - ../../../../fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_NODE_ID: 5
      KAFKA_PROCESS_ROLES: 'broker'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'SSL:SSL,CONTROLLER:SSL,SSLINTERNAL:SSL'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@controller-1:29092,2@controller-2:29092,3@controller-3:29092'
      KAFKA_LISTENERS: 'SSLINTERNAL://:19093,SSL://:9093'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SSLINTERNAL'
      KAFKA_SECURITY_PROTOCOL: SSL
      KAFKA_ADVERTISED_LISTENERS: SSLINTERNAL://kafka-2:19093,SSL://localhost:39093
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_SSL_KEYSTORE_FILENAME: 'kafka01.keystore.jks'
      KAFKA_SSL_KEYSTORE_CREDENTIALS: 'kafka_keystore_creds'
      KAFKA_SSL_KEY_CREDENTIALS: 'kafka_ssl_key_creds'
      KAFKA_SSL_TRUSTSTORE_FILENAME: 'kafka.truststore.jks'
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: 'kafka_truststore_creds'
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    depends_on:
      - controller-1
      - controller-2
      - controller-3

  kafka-3:
    image: confluentinc/cp-kafka:7.8.0
    hostname: kafka-3
    container_name: kafka-3
    ports:
      - 49093:9093
    volumes:
      - ../../../../fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_NODE_ID: 6
      KAFKA_PROCESS_ROLES: 'broker'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'SSL:SSL,CONTROLLER:SSL,SSLINTERNAL:SSL'
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@controller-1:29092,2@controller-2:29092,3@controller-3:29092'
      KAFKA_LISTENERS: 'SSLINTERNAL://:19093,SSL://:9093'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'SSLINTERNAL'
      KAFKA_SECURITY_PROTOCOL: SSL
      KAFKA_ADVERTISED_LISTENERS: SSLINTERNAL://kafka-3:19093,SSL://localhost:49093
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      CLUSTER_ID: '4L6g3nShT-eMCtK--X86sw'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
      KAFKA_SSL_KEYSTORE_FILENAME: 'kafka01.keystore.jks'
      KAFKA_SSL_KEYSTORE_CREDENTIALS: 'kafka_keystore_creds'
      KAFKA_SSL_KEY_CREDENTIALS: 'kafka_ssl_key_creds'
      KAFKA_SSL_TRUSTSTORE_FILENAME: 'kafka.truststore.jks'
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: 'kafka_truststore_creds'
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    depends_on:
      - controller-1
      - controller-2
      - controller-3
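
Since you're converting Compose files to scripts: the controller-1 service above would translate to roughly the following podman commands (an untested sketch; the secrets path and the shared network for container name resolution are assumptions you'd adapt):

podman network create kafka-net   # so controller-1/-2/-3 and the brokers can resolve each other by name
podman run -d \
  --name controller-1 \
  -h controller-1 \
  --network kafka-net \
  -v /path/to/fixtures/secrets:/etc/kafka/secrets \
  -e KAFKA_NODE_ID=1 \
  -e KAFKA_PROCESS_ROLES='controller' \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP='CONTROLLER:SSL,SSLINTERNAL:SSL' \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS='1@controller-1:29092,2@controller-2:29092,3@controller-3:29092' \
  -e KAFKA_INTER_BROKER_LISTENER_NAME='SSLINTERNAL' \
  -e KAFKA_CONTROLLER_LISTENER_NAMES='CONTROLLER' \
  -e KAFKA_LISTENERS='CONTROLLER://:29092' \
  -e CLUSTER_ID='4L6g3nShT-eMCtK--X86sw' \
  -e KAFKA_LOG_DIRS='/tmp/kraft-combined-logs' \
  -e KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_LOCATION='/etc/kafka/secrets/kafka.truststore.jks' \
  -e KAFKA_LISTENER_NAME_CONTROLLER_SSL_TRUSTSTORE_PASSWORD='abcdefgh' \
  -e KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_LOCATION='/etc/kafka/secrets/kafka01.keystore.jks' \
  -e KAFKA_LISTENER_NAME_CONTROLLER_SSL_KEYSTORE_PASSWORD='abcdefgh' \
  -e KAFKA_LISTENER_NAME_CONTROLLER_SSL_CLIENT_AUTH='required' \
  -e KAFKA_SSL_KEYSTORE_FILENAME='kafka01.keystore.jks' \
  -e KAFKA_SSL_KEYSTORE_CREDENTIALS='kafka_keystore_creds' \
  -e KAFKA_SSL_KEY_CREDENTIALS='kafka_ssl_key_creds' \
  -e KAFKA_SSL_TRUSTSTORE_FILENAME='kafka.truststore.jks' \
  -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS='kafka_truststore_creds' \
  -e KAFKA_SSL_CLIENT_AUTH='required' \
  -e KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM="" \
  confluentinc/cp-kafka:7.8.0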

I think I got it…

  1. I cannot directly use your Docker Compose files since we don't use that (and don't have podman compose either), so I am always converting them to shell scripts to start the containers.

So in my script I usually adjusted the values and kept the parameter names, unless it was a new parameter.

I had -e KAFKA_ssl_client_auth='required' \ defined, and that caused the startup script not to go into the truststore identification part of the file, thus leaving the value null all the time in my broker container.

kafka-images/kafka/include/etc/confluent/docker/configure at ad43da084067fab491873ba7f8686741f6731038 · confluentinc/kafka-images · GitHub

As soon as I set -e KAFKA_SSL_CLIENT_AUTH='required' \ (all upper case), the if condition was true, the truststore variables were used, and it finally found the file…
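
For reference, the gate in the configure script looks roughly like this (my paraphrase, not the exact source - the point is just that it tests the exact upper-case name):

# paraphrased sketch: the truststore variables are only processed inside
# this branch, so KAFKA_ssl_client_auth (lower case) never triggers it
if [[ -n "${KAFKA_SSL_CLIENT_AUTH-}" ]] && [[ "$KAFKA_SSL_CLIENT_AUTH" != "none" ]]; then
  export KAFKA_SSL_TRUSTSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_FILENAME"
  export KAFKA_SSL_TRUSTSTORE_PASSWORD="$(cat "/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS")"
fi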

Will try to connect the client app next, but that's at least an important lesson learned.
Thank you very much!
