Deploy Schema Registry on DOKS (Kubernetes on DigitalOcean)

Hello Contributors,
Current Working Setup

  • Kafka cluster (DigitalOcean)
  • Kafka Connect (using Debezium and Outbox pattern)
  • Kafka UI (Provectus)
  • Kafka topics, connectors, and message flow are functional

Current Goal

  • Deploy Confluent Schema Registry
  • Use it with Avro serialization in Kafka Connect
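
(For reference, wiring Avro into Kafka Connect later is mostly a matter of converter settings in the connector or worker config. A minimal sketch, assuming the registry will end up reachable in-cluster at http://schema-registry:8081 — adjust the URL to whatever Service name/namespace you use:)

```json
{
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter.schema.registry.url": "http://schema-registry:8081",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry:8081"
}
```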

Now I am asking how I can deploy or install Schema Registry in my Kubernetes environment. I am not using Helm, so I would like to deploy it from basic manifests such as deployment.yaml and configmap.yaml. My current configuration is the following.

configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: schema-registry-config
  namespace: kafka-debezium
data:
  SCHEMA_REGISTRY_HOST_NAME: schema-registry
  SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
  SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: SASL_SSL://kafka-***-0.g.db.ondigitalocean.com:25073
  SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SASL_SSL
  SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM: SCRAM-SHA-256
  SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"***\" password=\"***\";"
  SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secret/truststore.p12
  SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: ***
  SCHEMA_REGISTRY_SSL_TRUSTSTORE_TYPE: PKCS12
  SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/kafka/secret/keystore.p12
  SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: ***
  SCHEMA_REGISTRY_SSL_KEY_PASSWORD: ***
  SCHEMA_REGISTRY_SSL_KEYSTORE_TYPE: PKCS12
  SCHEMA_REGISTRY_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
  SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: DEBUG

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: schema-registry
  namespace: kafka-debezium
spec:
  replicas: 1
  selector:
    matchLabels:
      app: schema-registry
  template:
    metadata:
      labels:
        app: schema-registry
    spec:
      volumes:
        - name: kafka-certificates
          secret:
            secretName: kafka-certificates

      containers:
        - name: schema-registry
          image: confluentinc/cp-schema-registry:latest
          ports:
            - containerPort: 8081
          envFrom:
            - configMapRef:
                name: schema-registry-config
          volumeMounts:
          - name: kafka-certificates
            mountPath: /etc/kafka/secret
            readOnly: true
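
One thing worth noting: the manifests above do not include a Service, which Kafka Connect will need in order to reach the registry at the SCHEMA_REGISTRY_HOST_NAME configured earlier. In case it is not defined elsewhere, a minimal sketch (name and namespace matching the Deployment) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: schema-registry
  namespace: kafka-debezium
spec:
  selector:
    app: schema-registry
  ports:
    - name: http
      port: 8081
      targetPort: 8081
```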

Now I have changed my approach to install it with Helm instead, but it was not successful.
@albin, I configured it like the following, but with no success yet:

##########
# Secret #
##########
kubectl create secret generic schema-registry-p12 \
  --from-file=truststore.p12=./truststore.p12 \
  --from-file=keystore.p12=./keystore.p12 \
  -n kafka-debezium

###############
# values.yaml #
###############
replicaCount: 1

kafka:
  enabled: false

externalKafka:
  brokers: "SASL_SSL://kafka-***-0.g.db.ondigitalocean.com:25073"
auth:
  enabled: true
  clientProtocol: sasl_tls
  tls:
    certsSecret: schema-registry-p12
    keystorePassword: ***
    truststorePassword: ***
    type: p12
  sasl:
    mechanism: scram-sha-256
    users:
      - doadmin
    passwords:
      - "***"

extraEnvVars:
  - name: SCHEMA_REGISTRY_LISTENERS
    value: http://0.0.0.0:8081
  - name: SCHEMA_REGISTRY_HOST_NAME
    value: schema-registry
  - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
    value: SASL_SSL://kafka-***-0.g.db.ondigitalocean.com:25073
  - name: SCHEMA_REGISTRY_KAFKA_SASL_USERS
    value: ***
  - name: SCHEMA_REGISTRY_KAFKA_SASL_PASSWORDS
    value: ***
  - name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
    value: SASL_SSL
  - name: SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM
    value: SCRAM-SHA-256
  - name: SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG
    value: >
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="***" password="***";
  - name: SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION
    value: /etc/kafka/secret/truststore.p12
  - name: SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD
    value: ***
  - name: SCHEMA_REGISTRY_SSL_TRUSTSTORE_TYPE
    value: PKCS12
  - name: SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION
    value: /etc/kafka/secret/keystore.p12
  - name: SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD
    value: ***
  - name: SCHEMA_REGISTRY_SSL_KEY_PASSWORD
    value: ***
  - name: SCHEMA_REGISTRY_SSL_KEYSTORE_TYPE
    value: PKCS12
  - name: SCHEMA_REGISTRY_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
    value: ""

volumeMounts:
  - name: kafka-certificates
    mountPath: /etc/kafka/secret
    readOnly: true

volumes:
  - name: kafka-certificates
    secret:
      secretName: schema-registry-p12
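
(The install command itself is not shown above; with the Bitnami chart it would be along these lines — repo, release name, and namespace are assumptions, adjust to match where the secret was created:)

```shell
# Assuming the Bitnami schema-registry chart; namespace must match the p12 secret
helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade --install schema-registry bitnami/schema-registry \
  -f values.yaml -n kafka-debezium
```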

I got this error:

kubectl logs schema-registry-0 -n ns-lsp-kafka-debezium-shd-01                         
  
schema-registry 03:30:30.35 INFO  ==> 
schema-registry 03:30:30.36 INFO  ==> Welcome to the Bitnami schema-registry container
schema-registry 03:30:30.36 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
schema-registry 03:30:30.36 INFO  ==> Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami/ for more information.
schema-registry 03:30:30.37 INFO  ==> 
schema-registry 03:30:30.38 INFO  ==> ** Starting Schema Registry setup **
schema-registry 03:30:30.41 INFO  ==> Validating settings in SCHEMA_REGISTRY_* env vars
schema-registry 03:30:30.45 WARN  ==> In order to configure the TLS encryption for communication with Kafka brokers, most auth protocols require mounting your schema-registry.keystore.jks and schema-registry.truststore.jks certificates to the /opt/bitnami/schema-registry/certs directory.
/opt/bitnami/scripts/libschemaregistry.sh: line 153: SCHEMA_REGISTRY_KAFKA_SASL_USERS: unbound variable

What did I misconfigure?

Hello Contributors, please let me know of any corrections. Thanks

Hello Contributors,
I have tried many ways without success yet. Please help me find my issue.

kubectl logs schema-registry-0 -n ns-lsp-kafka-debezium-shd-01
schema-registry 11:46:03.19 INFO  ==> 
schema-registry 11:46:03.20 INFO  ==> Welcome to the Bitnami schema-registry container
schema-registry 11:46:03.20 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
schema-registry 11:46:03.20 INFO  ==> Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami/ for more information.
schema-registry 11:46:03.21 INFO  ==> 
schema-registry 11:46:03.21 INFO  ==> ** Starting Schema Registry setup **
schema-registry 11:46:03.24 INFO  ==> Validating settings in SCHEMA_REGISTRY_* env vars
schema-registry 11:46:03.25 WARN  ==> In order to configure the TLS encryption for communication with Kafka brokers, most auth protocols require mounting your schema-registry.keystore.jks and schema-registry.truststore.jks certificates to the /opt/bitnami/schema-registry/certs directory.
/opt/bitnami/scripts/libschemaregistry.sh: line 153: SCHEMA_REGISTRY_KAFKA_SASL_USERS: unbound variable

Are the needed files (e.g. schema-registry.keystore.jks) in the expected dir /etc/kafka/secret, if you check inside the running container?


Let me update with the latest config values.yaml and the SSL handshake error.

kubectl create secret generic schema-registry-p12 \
  --from-file=truststore.p12=./truststore.p12 \
  --from-file=keystore.p12=./keystore.p12 \
  -n ns-lsp-kafka-debezium-shd-01

replicaCount: 1

kafka:
  enabled: false

externalKafka:
  brokers: "SASL_SSL://kafka-***.g.db.ondigitalocean.com:25073"

auth:
  enabled: false

volumeMounts:
  - name: kafka-certificates
    mountPath: /opt/bitnami/schema-registry/certs
    readOnly: true

volumes:
  - name: kafka-certificates
    secret:
      secretName: schema-registry-p12

extraEnvVars:
  - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
    value: SASL_SSL://kafka-***.g.db.ondigitalocean.com:25073
  - name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
    value: SASL_SSL
  - name: SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM
    value: SCRAM-SHA-256
  - name: SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG
    value: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"***\" password=\"***\";"
  - name: SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION
    value: /opt/bitnami/schema-registry/certs/truststore.p12
  - name: SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD
    value: ***
  - name: SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION
    value: /opt/bitnami/schema-registry/certs/keystore.p12
  - name: SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD
    value: ***
  - name: SCHEMA_REGISTRY_SSL_KEY_PASSWORD
    value: ***
  - name: SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL
    value: TRACE
  - name: SCHEMA_REGISTRY_LISTENERS
    value: "http://0.0.0.0:8081"
  # If you need compatibility settings
  - name: SCHEMA_REGISTRY_AVRO_COMPATIBILITY_LEVEL
    value: "BACKWARD"
  # Add this variable to fix the error
  - name: SCHEMA_REGISTRY_KAFKA_SASL_USERS
    value: "***"
  - name: SCHEMA_REGISTRY_KAFKA_SASL_PASSWORDS
    value: "***"
  # Add SSL endpoint identification algorithm setting
  - name: SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM
    value: ""
  # Disable hostname verification
  - name: SCHEMA_REGISTRY_KAFKASTORE_CLIENT_DNS_LOOKUP
    value: "use_all_dns_ips"
  # Add debug for SSL
  - name: SCHEMA_REGISTRY_DEBUG
    value: "true"
  # Turn off the script validation that's causing issues
  - name: ALLOW_PLAINTEXT_LISTENER
    value: "yes"

Error

kubectl logs schema-registry-0 -n ns-lsp-kafka-debezium-shd-01
schema-registry 16:25:58.24 INFO  ==> 
schema-registry 16:25:58.24 INFO  ==> Welcome to the Bitnami schema-registry container
schema-registry 16:25:58.24 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
schema-registry 16:25:58.24 INFO  ==> Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami/ for more information.
schema-registry 16:25:58.25 INFO  ==> 
schema-registry 16:25:58.25 INFO  ==> ** Starting Schema Registry setup **
schema-registry 16:25:58.27 INFO  ==> Validating settings in SCHEMA_REGISTRY_* env vars
schema-registry 16:25:58.31 WARN  ==> In order to configure the TLS encryption for communication with Kafka brokers, most auth protocols require mounting your schema-registry.keystore.jks and schema-registry.truststore.jks certificates to the /opt/bitnami/schema-registry/certs directory.
schema-registry 16:25:58.34 INFO  ==> Initializing Schema Registry
schema-registry 16:25:58.35 INFO  ==> Waiting for Kafka brokers to be up
schema-registry 16:25:58.38 INFO  ==> ** Schema Registry setup finished! **

schema-registry 16:25:58.40 INFO  ==> ** Starting Schema Registry **
[2025-04-03 16:26:00,924] INFO Logging initialized @2496ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:170)
[2025-04-03 16:26:01,157] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer:466)
[2025-04-03 16:26:02,145] INFO Adding listener: NamedURI{uri=http://0.0.0.0:8081, name='null'} (io.confluent.rest.ApplicationServer:367)
[2025-04-03 16:26:02,250] INFO Registered MetricsListener to connector of listener: null (io.confluent.rest.ApplicationServer:144)
[2025-04-03 16:26:03,128] INFO Found internal listener: NamedURI{uri=http://0.0.0.0:8081, name='null'} (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:201)
[2025-04-03 16:26:03,131] INFO Setting my identity to version=1,host=schema-registry-0.schema-registry-headless.ns-lsp-kafka-debezium-shd-01.svc.cluster.local,port=8081,scheme=http,leaderEligibility=true,isLeader=false (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:204)
[2025-04-03 16:26:05,520] ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-***-0.g.db.ondigitalocean.com/ip:25073) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient:941)
[2025-04-03 16:26:05,520] ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (kafka-***-0.g.db.ondigitalocean.com/ip:25073) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient:941)
[2025-04-03 16:26:05,543] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:81)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:2227)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:220)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:75)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:105)
        at io.confluent.rest.Application.configureHandler(Application.java:323)
        at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:234)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
        at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:2225)
        ... 7 more
Caused by: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:331)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:274)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
        at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
        at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
        at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
        at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)
        at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
        at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)
        at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)
        at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:504)
        at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:603)
        at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:442)
        at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:324)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:455)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:796)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:733)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:685)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1780)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1711)
        at java.lang.Thread.run(Thread.java:750)
        at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:68)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:456)
        at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:323)
        at sun.security.validator.Validator.validate(Validator.java:271)
        at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:315)
        at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:278)
        at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
        at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)
        ... 20 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:148)
        at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:129)
        at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
        at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:451)
        ... 26 more

You can see that I created a secret called “schema-registry-p12” for the certificate to the Managed Kafka Cluster on DigitalOcean, then referenced that certificate with the volume and volumeMount. But I still get the SSL handshake issue. I used the same truststore and keystore for Debezium Kafka Connect, and that was successful. Why does it not work for Schema Registry?
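
For what it’s worth, the PKIX error in the stack trace (“unable to find valid certification path”) means the JVM did not find the broker’s CA in the truststore it actually loaded. One quick sanity check is to list what a PKCS12 truststore really contains. The snippet below is a self-contained sketch: it builds a throwaway CA and truststore as a stand-in for DigitalOcean’s ca-certificate.crt; against the real file, only the last command is needed.

```shell
# Throwaway CA as a stand-in for DigitalOcean's ca-certificate.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1
# Build a cert-only PKCS12 truststore from it (keytool -importcert works too)
openssl pkcs12 -export -nokeys -in ca.crt -out demo-truststore.p12 \
  -passout pass:changeit
# Inspect the truststore: the CA subject must show up here, otherwise the
# "PKIX path building failed" error above is exactly what the JVM will throw
openssl pkcs12 -in demo-truststore.p12 -nokeys -passin pass:changeit \
  | openssl x509 -noout -subject
```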

So:

  • Are the secrets and the keystore properly mounted to the pod/container? Did you check that?
  • Is the same certificate used for Kafka Connect?
  • Is the same bootstrap server at DigitalOcean used for Kafka Connect?

For Schema Registry,

  • Yes, I am using the same certificates (truststore.p12 and keystore.p12).
  • The bootstrap server is the same.
  • Yes, the keystore and truststore are properly mounted under the path /opt/bitnami/schema-registry/certs.
    I triple-checked this; in fact I checked more than 5 times. So why is it still showing an error? The only catch is that I can’t run "kubectl exec -it …" because the container never comes up, so I can’t use that command. But I tested with a test pod mounting the same volume, and I can see the files.

@mmuehlbeyer I also imported the ca-certificate.crt provided by the Managed Kafka Cluster, just to be sure, but I got the same error.

Not sure, but maybe the Connect image does not require the full cert chain to be available.
Is the full chain in the secret?
Did you check on the Bitnami side whether there is an existing issue regarding this?


Dear @mmuehlbeyer,
Yes, I already imported the full chain into the truststore. I did not check with Bitnami because this may not be a new issue. I believe it’s a misconfiguration, but I am new to Schema Registry, so I don’t know why.

I was just thinking of an example in the Bitnami docs or similar.

As said, I think it’s obviously related to a cert issue, though I’m not sure what’s happening there in detail.
FWIW, did you try to connect an SR Docker image with DigitalOcean’s Kafka service?

Dear @mmuehlbeyer, yes, I’m trying to connect from the SR image to Managed Kafka on DO using the above configuration. My goal is for my SR Kubernetes pod to come up and run. So, do you have any better ideas?

Mmh, basically you could make use of Confluent for Kubernetes or Strimzi for deploying to K8s.
The issue with the certs will not get solved automatically, of course, but it might ease everything up.

Regarding the SSL issue:
are you able to debug this with openssl and the certs you’ve been using?
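
For example, something along these lines (the broker host below is a placeholder for the redacted DigitalOcean address, and ca-certificate.crt is the CA bundle from the DO control panel):

```shell
# Placeholder broker address; substitute the real one from the DO control panel.
# A successful run should end with "Verify return code: 0 (ok)".
openssl s_client -connect <broker-host>.g.db.ondigitalocean.com:25073 \
  -CAfile ca-certificate.crt </dev/null
```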


@mmuehlbeyer I have already debugged the certs using openssl; they are valid. CFK needs an enterprise license, right? So I skipped it.

Yep, you need a valid license for that (basically you can test it with an evaluation license: Manage Confluent License using Confluent for Kubernetes | Confluent Documentation).

If openssl looks fine, there may be an issue with the Bitnami image or similar, so a GitHub issue might be the best route.
I need to check whether I’m able to reproduce it, though I have no DigitalOcean subscription available.


@mmuehlbeyer is there any alternative solution to deploy SR? My goal now is to use SR with Avro, so please help me find a solution. Thanks

You could make use of CFK or Strimzi as well, though this won’t help with the TLS issues, obviously…

To keep everything easy for testing, I would recommend testing locally with Docker and the Bitnami Schema Registry container and trying to debug there.
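
A minimal local run might look like this (broker host and password are placeholders for the redacted values; the env var names follow the Bitnami container conventions seen in the logs above, but are worth double-checking against the Bitnami docs):

```shell
# Local smoke test of the Bitnami image against the managed cluster.
# ./certs holds the same truststore/keystore files as the K8s secret.
docker run --rm -p 8081:8081 \
  -v "$PWD/certs:/opt/bitnami/schema-registry/certs:ro" \
  -e SCHEMA_REGISTRY_KAFKA_BROKERS="SASL_SSL://<broker-host>:25073" \
  -e SCHEMA_REGISTRY_KAFKA_SASL_USERS="doadmin" \
  -e SCHEMA_REGISTRY_KAFKA_SASL_PASSWORDS="<password>" \
  bitnami/schema-registry:latest
```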
