Schema Registry won't start

Guys, I"m unable to get the registry started. Here is what I did, I’m using AWS linux version (free) and the steps. error : Failed to get Kafka cluster ID

Steps:
Installed Java:
sudo yum install java-1.8.0-openjdk

Downloaded the Confluent Platform archive onto the EC2 instance:
curl -O https://packages.confluent.io/archive/7.3/confluent-7.3.1.tar.gz

Extracted it:
tar xzf confluent-7.3.1.tar.gz

Made the following changes to the etc/kafka/server.properties file:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://ec2instancepublicipaddress:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
zookeeper.connect=ec2instancepublicipaddress:2181

Ran the following command:

confluent local services start

Using CONFLUENT_CURRENT: /tmp/confluent.962728
ZooKeeper is [UP]
Kafka is [UP]
Starting Schema Registry
Error: Schema Registry failed to start

sudo nano schema-registry.stdout

[2023-02-14 16:10:08,683] INFO Logging initialized @1344ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:170)
[2023-02-14 16:10:08,814] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer:619)
[2023-02-14 16:10:09,036] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer:521)
[2023-02-14 16:11:10,190] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:77)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
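
Since the registry dies with "Failed to get Kafka cluster ID", one quick sanity check is whether anything is actually listening on the broker address from advertised.listeners. A small sketch (the host and port below are placeholders, not values from this post; substitute your EC2 instance's address):

```shell
# port_reachable HOST PORT -- succeeds if something accepts a TCP
# connection on HOST:PORT (uses bash's /dev/tcp, no extra tools needed)
port_reachable() {
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# Point this at the address configured in advertised.listeners
# (127.0.0.1:9092 is just a placeholder here).
if port_reachable 127.0.0.1 9092; then
  echo "broker port reachable"
else
  echo "broker port NOT reachable"
fi
```

If the port is not reachable from the machine running the Schema Registry, the registry can never fetch the cluster ID, no matter what is in schema-registry.properties.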

Do I need to make changes to the schema-registry.properties file?

Hi @akbar and welcome to the Confluent Community Forum!

Based on your configuration, I’m assuming that you want to run the full Confluent Platform stack locally via confluent local services start but expose Kafka to external clients. Is that correct?

If this is your goal, I'd recommend two listeners: one for internal components and one for external clients. If you let the internal listener use the default port 9092, you won't need to change any configs aside from the broker's. This ought to do it in etc/kafka/server.properties:

listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://<internal DNS ending in "compute.internal">:9092,EXTERNAL://<public IP>:9093
inter.broker.listener.name=INTERNAL
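
One easy way to trip over this layout is a listener name that appears in listeners or advertised.listeners but is missing from listener.security.protocol.map; the broker won't start in that case. A small sketch that cross-checks the two settings (it runs against an inline sample resembling the config above; point PROPS at your real etc/kafka/server.properties instead):

```shell
# Write a sample config to a temp file; replace with your server.properties.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://broker.compute.internal:9092,EXTERNAL://203.0.113.10:9093
inter.broker.listener.name=INTERNAL
EOF

# Listener names that have a protocol mapping.
mapped=$(grep '^listener.security.protocol.map=' "$PROPS" \
  | cut -d= -f2- | tr ',' '\n' | cut -d: -f1 | sort -u)

# Listener names actually used in listeners / advertised.listeners.
used=$(grep -E '^(advertised\.)?listeners=' "$PROPS" \
  | cut -d= -f2- | tr ',' '\n' | cut -d: -f1 | sort -u)

# Flag any used listener name that has no mapping.
for name in $used; do
  if ! echo "$mapped" | grep -qx "$name"; then
    echo "listener '$name' has no entry in listener.security.protocol.map"
  fi
done
echo "check complete"
```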

A couple of side recommendations:

  1. lock down your security group so that only the client(s) you know about can reach the broker
  2. check out this great blog post for other options and a deeper explanation of listeners

HTH,
Dave

Hi Dave, thanks for the response.

I'm trying to run the Confluent Platform locally (Ubuntu), and at this point I'm not trying to expose Kafka to external clients.

Just so I understand the server.properties changes you're recommending: for my local Kafka instance I'm using
listeners=INTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://192.168.1.1:9092 (based on the internal DNS, listed below)
listener.security.protocol.map=INTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL

Internal DNS servers: 2600:6c5a:177f:1c8a::1
192.168.1.1
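
For reference, a local-only variant of this layout would presumably look like the following. This is only a sketch: localhost is substituted for the advertised address on the assumption that everything runs on one box (192.168.1.1 is typically the router's address, not the host's, so clients may not be able to reach the broker there):

```
listeners=INTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://localhost:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

With an advertised address that resolves on the machine itself, local clients and the Schema Registry connect via localhost:9092.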

By the way, I still couldn't get the Kafka server up with these configurations.