Multi-DC Schema Registry setup

Hi, we have an unusual Schema Registry setup: a combination of VMs (bare metal) and Kubernetes clusters across multiple DCs. The K8s clusters were presumably added to retire the VMs. My goal is to get rid of the VMs entirely and run only K8s in multi-DC. Our current version is very old: Confluent 4.1. We are not on managed Confluent; we use open-source Kafka with the Confluent Schema Registry bundle.

I tried to make the VMs read-only, i.e. set master.eligibility=false on them, and updated the K8s instances to true. That doesn't work: any schema writes fail, while reads work fine.

Here are the config files from the VMs and the K8s clusters.
This is the VM config in EastCoast:

listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://kafkac1n12.bos.lcims.com:9092,PLAINTEXT://kafkac1n13.bos.lcims.com:9092
kafkastore.topic=__schemas
master.eligibility=true
debug=false

This is the K8s config in EastCoast:
global:
  region: "East1"
  environment: "prod"
  replicaCount: 3
ingressconnect:
  serviceFQDN: kafka-schema-c1.service.bos.lcims.com
kafkastoreTopic: "__schemas"
bootstrapServers: "PLAINTEXT://c1.kafka-broker.service.intrabos.lcims.com:9092"
hostName: "schemaregistryc1-bos"
debug: "false"
masterEligibility: "false"

This is our West Coast K8s config:

global:
  region: "west1"
  environment: "prod"
  replicaCount: 3
ingressconnect:
  serviceFQDN: kafka-schema-c1.service.sfo.lcims.com
kafkastoreTopic: "__schemas"
bootstrapServers: "PLAINTEXT://c1.kafka-broker.service.intrabos.lcims.com:9092"
hostName: "schemaregistryc1-bos"
debug: "false"
masterEligibility: "false"

My question is: how can I make the K8s instances the master and the VMs read-only, and eventually shut the VMs down? BTW, the bootstrap servers all point to the same Kafka cluster; the K8s instances use the DNS name.

Thank you!

Hey, welcome to the community! Glad to have you here.

This sounds like your non-leader Schema Registry instances are trying to redirect writes but failing because they cannot resolve the "advertised hostname" of the leader Schema Registry. In SR, the IP/DNS in the host.name property is advertised to the other SR instances of the same group for redirection of writes, so it needs to be resolvable by all of them.
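For illustration, the relevant properties on each instance look something like this (the hostnames here are made-up placeholders, not your actual ones):

```properties
# Read-only instance (e.g. a VM being retired): serves reads, forwards writes
listeners=http://0.0.0.0:8081
host.name=sr-vm1.example.internal        # must be resolvable by every other SR in the group
master.eligibility=false

# Leader-eligible instance (e.g. on K8s): its advertised host.name must be
# resolvable from the VMs as well, or their forwarded writes will fail
listeners=http://0.0.0.0:8081
host.name=sr-k8s.example.internal
master.eligibility=true
```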

We can verify this if you provide logs.


Thank you! Yes, I just found in the logs that they can't resolve the host.

Our end goal is to provide a local Schema Registry endpoint in every DC and route the writes to the master instance. I'm reading the documentation here for the multi-DC setup: Schema Registry Single and Multi-Datacenter Deployments — Confluent Platform 5.5.1

All it says is to set these three configs:

  • kafkastore.bootstrap.servers
  • schema.registry.group.id
  • master.eligibility
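If I'm reading it right, the intended shape per DC is roughly this (the broker addresses below are placeholders I made up); the key point seems to be that every instance that should coordinate shares the same group:

```properties
# Primary DC — allowed to become master
kafkastore.bootstrap.servers=PLAINTEXT://broker.primary.example:9092
schema.registry.group.id=schema-registry
master.eligibility=true

# Secondary DC — read-only, forwards writes to the primary's master
kafkastore.bootstrap.servers=PLAINTEXT://broker.primary.example:9092
schema.registry.group.id=schema-registry
master.eligibility=false
```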

So it looks like we have to set host.name as well? I just set up a test cluster (K8s) in two DCs with this config:

Cluster-1
kafkastoreTopic: _schemas
bootstrapServers: PLAINTEXT://kafkac1n1.dev.lcims.com:9092
groupID: schemaRegistry-bo1
debug: 'false'
masterEligibility: 'true'
HostName: kafka-schema-c1.service.intradevbo1.consul.lcims.com

Cluster-2
kafkastoreTopic: _schemas
bootstrapServers: PLAINTEXT://kafkac1n1.dev.bo1.lcims.com:9092
groupID: schemaRegistry-bo1
debug: 'false'
masterEligibility: 'false'
HostName: kafka-schema-c1.service.intradevbo1.consul.lcims.com

With the above config, cluster 1 works fine, but writes on cluster 2 fail with this error:
[2021-02-27 17:36:26,782] ERROR Request Failed with exception (io.confluent.rest.exceptions.DebuggableExceptionMapper:62)
io.confluent.kafka.schemaregistry.rest.exceptions.RestUnknownMasterException: Master not known.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.unknownMasterException(Errors.java:153)
at io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.register(SubjectVersionsResource.java:286)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)

Caused by: io.confluent.kafka.schemaregistry.exceptions.UnknownMasterException: Register schema request failed since master is unknown
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.registerOrForward(KafkaSchemaRegistry.java:559)
	at io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.register(SubjectVersionsResource.java:266)
	... 61 more

So I changed the host.name in cluster 1 to http://kafka-schema-c1.service.intradevbo1.consul.lcims.com:80. This time it failed with the error below:
{"error_code":50003,"message":"Error while forwarding register schema request to the master"}
Caused by: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryRequestForwardingException: Unexpected error while forwarding the registering schema request {version=0, id=-1, schemaType=AVRO,references=,schema=… to [http://http://kafka-schema-c1.service.intradevbo1.consul.lcims.com:80:8081]

It appends http and port 8081 around the hostname given in the cluster-1 config. So how does it even build the host name it forwards to? I understand that it can't resolve the given hostname.

  1. Do we need the group.id config?
  2. What is the difference between the two configs above, with and without http? How do I fix this issue?

Appreciate your help! Thank you

Glad to hear it’s what we suspect!

So, you are hitting this because of a third related config: listeners. The value set in listeners determines the protocol and port for redirection that is recorded for each node. By default, listeners is set to http://0.0.0.0:8081, which means the HTTP protocol on port 8081. (This also tells SR where to listen, hence the name "listeners".) A value of 0.0.0.0 makes it listen on all IP addresses associated with the host.

So when you put http and port 80 in the host name, SR simply wraps that value in the protocol and port taken from listeners.
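To make the appending concrete, here is a rough sketch of how the forwarding URL ends up being assembled (an illustration of the observed behavior, not SR's actual code; the hostname is the one from your config):

```shell
scheme="http"   # taken from listeners / inter.instance.protocol
port="8081"     # taken from the listeners entry
host_name="kafka-schema-c1.service.intradevbo1.consul.lcims.com"
echo "${scheme}://${host_name}:${port}"
# -> http://kafka-schema-c1.service.intradevbo1.consul.lcims.com:8081

# A host.name that already contains a scheme and port just gets wrapped again,
# which produces the malformed URL from your error message:
bad_host_name="http://kafka-schema-c1.service.intradevbo1.consul.lcims.com:80"
echo "${scheme}://${bad_host_name}:${port}"
# -> http://http://kafka-schema-c1.service.intradevbo1.consul.lcims.com:80:8081
```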

In a k8s environment, host.name is commonly set to the Pod IP address. I find it interesting that you got a "master unknown" error the first time. Could you try again and post the full log here so that we have more context to help you?

To answer your questions directly:

  1. The schema.registry.group.id config would probably be beneficial here, just to be explicit about which group these SRs should be in; by default they all join the same default group.
  2. The difference is explained by how host.name and listeners work together, as described above.

Got it, this is all adding up now. It was me who updated host.name; it was set to the Pod IP earlier, and I will change it back. I thought I would route traffic to the pod FQDN, so I updated the hostname.
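For reference, the way the Pod IP usually gets injected is via the Downward API; this sketch assumes the Confluent Docker image's SCHEMA_REGISTRY_HOST_NAME environment-variable convention:

```yaml
# Container env entry in the Deployment/StatefulSet pod template
env:
  - name: SCHEMA_REGISTRY_HOST_NAME
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```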

Yeah, the "master unknown" error occurs when I follow this doc and set a unique group.id for every cluster. That doesn't seem to work, and there is no master in that group. Here are the complete logs with that config:

mkdir: cannot create directory '/app/confluent/bin/../logs': Permission denied
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /app/confluent/bin/../logs/schema-registry.log (No such file or directory)
at java.base/java.io.FileOutputStream.open0(Native Method)
at java.base/java.io.FileOutputStream.open(FileOutputStream.java:298)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:237)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:158)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.<clinit>(SchemaRegistryMain.java:28)
[2021-02-28 12:47:59,306] INFO SchemaRegistryConfig values:
access.control.allow.headers =
access.control.allow.methods =
access.control.allow.origin =
access.control.skip.options = true
authentication.method = NONE
authentication.realm =
authentication.roles = [*]
authentication.skip.paths =
avro.compatibility.level =
compression.enable = true
debug = false
host.name = kafka-schema-c1.service.intradeviad1.consul.lcims.com
idle.timeout.ms = 30000
inter.instance.headers.whitelist =
inter.instance.protocol = http
kafkastore.bootstrap.servers = [PLAINTEXT://kafkac1n1.dev.lcims.com:9092]
kafkastore.connection.url = localhost:2181
kafkastore.group.id =
kafkastore.init.timeout.ms = 60000
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafkastore.sasl.kerberos.min.time.before.relogin = 60000
kafkastore.sasl.kerberos.service.name =
kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.mechanism = GSSAPI
kafkastore.security.protocol = PLAINTEXT
kafkastore.ssl.cipher.suites =
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.ssl.endpoint.identification.algorithm =
kafkastore.ssl.key.password = [hidden]
kafkastore.ssl.keymanager.algorithm = SunX509
kafkastore.ssl.keystore.location =
kafkastore.ssl.keystore.password = [hidden]
kafkastore.ssl.keystore.type = JKS
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.trustmanager.algorithm = PKIX
kafkastore.ssl.truststore.location =
kafkastore.ssl.truststore.password = [hidden]
kafkastore.ssl.truststore.type = JKS
kafkastore.timeout.ms = 500
kafkastore.topic = _schemas
kafkastore.topic.replication.factor = 3
kafkastore.write.max.retries = 5
kafkastore.zk.session.timeout.ms = 30000
listeners = [http://0.0.0.0:8081]
master.eligibility = false
metric.reporters =
metrics.jmx.prefix = kafka.schema.registry
metrics.num.samples = 2
metrics.sample.window.ms = 30000
metrics.tag.map =
mode.mutability = false
port = 8081
request.logger.name = io.confluent.rest-utils.requests
request.queue.capacity = 2147483647
request.queue.capacity.growby = 64
request.queue.capacity.init = 128
resource.extension.class =
resource.extension.classes =
resource.static.locations =
response.mediatype.default = application/vnd.schemaregistry.v1+json
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
rest.servlet.initializor.classes =
schema.compatibility.level = backward
schema.providers =
schema.registry.group.id = schemaRegistry-iad1
schema.registry.inter.instance.protocol =
schema.registry.resource.extension.class =
schema.registry.zk.namespace = schema_registry
shutdown.graceful.ms = 1000
ssl.cipher.suites =
ssl.client.auth = false
ssl.client.authentication = NONE
ssl.enabled.protocols =
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm =
ssl.keystore.location =
ssl.keystore.password = [hidden]
ssl.keystore.reload = false
ssl.keystore.type = JKS
ssl.keystore.watch.location =
ssl.protocol = TLS
ssl.provider =
ssl.trustmanager.algorithm =
ssl.truststore.location =
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
thread.pool.max = 200
thread.pool.min = 8
websocket.path.prefix = /ws
websocket.servlet.initializor.classes =
zookeeper.set.acl = false
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:347)
[2021-02-28 12:47:59,352] INFO Logging initialized @768ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:169)
[2021-02-28 12:47:59,364] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer:454)
[2021-02-28 12:47:59,474] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer:353)
[2021-02-28 12:47:59,748] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://kafkac1n1.dev.lcims.com:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore:108)
[2021-02-28 12:47:59,751] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:212)
[2021-02-28 12:47:59,751] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:212)
[2021-02-28 12:47:59,751] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:212)
[2021-02-28 12:48:00,508] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore:236)
[2021-02-28 12:48:00,532] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it’s crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:250)
[2021-02-28 12:48:00,646] INFO Kafka store reader thread starting consumer (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:110)
[2021-02-28 12:48:00,756] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:144)
[2021-02-28 12:48:00,757] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:66)
[2021-02-28 12:48:01,007] INFO Wait to catch up until the offset at 11398 (io.confluent.kafka.schemaregistry.storage.KafkaStore:304)
[2021-02-28 12:48:07,862] ERROR Failed to deserialize the schema or config key (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:158)
io.confluent.kafka.schemaregistry.storage.exceptions.SerializationException: Failed to deserialize unknown key
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:108)
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:41)
at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.doWork(KafkaStoreReaderThread.java:156)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'wayday': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte)"wayday"; line: 1, column: 7]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4356)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4205)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3292)
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:84)
… 3 more
[2021-02-28 12:48:07,865] ERROR Failed to deserialize the schema or config key (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:158)
io.confluent.kafka.schemaregistry.storage.exceptions.SerializationException: Failed to deserialize unknown key
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:108)
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:41)
at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.doWork(KafkaStoreReaderThread.java:156)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'wayday': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte)"wayday"; line: 1, column: 7]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4356)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4205)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3292)
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:84)
… 3 more
[2021-02-28 12:48:07,867] ERROR Failed to deserialize the schema or config key (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:158)
io.confluent.kafka.schemaregistry.storage.exceptions.SerializationException: Failed to deserialize unknown key
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:108)
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:41)
at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.doWork(KafkaStoreReaderThread.java:156)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'wayday': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte)"wayday"; line: 1, column: 7]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4356)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4205)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3292)
at io.confluent.kafka.schemaregistry.storage.serialization.SchemaRegistrySerializer.deserializeKey(SchemaRegistrySerializer.java:84)
… 3 more
[2021-02-28 12:48:10,751] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:299)
[2021-02-28 12:48:14,204] INFO Finished rebalance with master election result: Assignment{version=1, error=0, master='null', masterIdentity=null} (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector:228)
[2021-02-28 12:48:14,205] ERROR No master eligible schema registry instances joined the schema registry group. Rebalancing was successful and this instance can serve reads, but no writes can be processed. (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector:233)
[2021-02-28 12:48:14,370] INFO jetty-9.4.33.v20201020; built: 2020-10-20T23:39:24.803Z; git: 1be68755656cef678b79a2ef1c2ebbca99e25420; jvm 11.0.8+10 (org.eclipse.jetty.server.Server:375)
[2021-02-28 12:48:14,432] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session:334)
[2021-02-28 12:48:14,432] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session:339)
[2021-02-28 12:48:14,434] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session:132)
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource will be ignored.
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource will be ignored.
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.
Feb 28, 2021 12:48:14 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.
[2021-02-28 12:48:15,096] INFO HV000001: Hibernate Validator 6.1.2.Final (org.hibernate.validator.internal.util.Version:21)
[2021-02-28 12:48:15,351] INFO Started o.e.j.s.ServletContextHandler@1d2644e3{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:916)
[2021-02-28 12:48:15,367] INFO Started o.e.j.s.ServletContextHandler@ca66933{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:916)
[2021-02-28 12:48:15,383] INFO Started NetworkTrafficServerConnector@7ac296f6{HTTP/1.1, (http/1.1)}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector:331)
[2021-02-28 12:48:15,383] INFO Started @16802ms (org.eclipse.jetty.server.Server:415)
[2021-02-28 12:48:15,384] INFO Server started, listening for requests… (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:44)
[2021-02-28 12:49:06,909] INFO Registering new schema: subject test7, version null, id null, type null (io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource:250)
[2021-02-28 12:49:06,954] ERROR Request Failed with exception (io.confluent.rest.exceptions.DebuggableExceptionMapper:62)
io.confluent.kafka.schemaregistry.rest.exceptions.RestUnknownMasterException: Master not known.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.unknownMasterException(Errors.java:153)
at io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.register(SubjectVersionsResource.java:286)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
at org.glassfish.jersey.servlet.ServletContainer.serviceImpl(ServletContainer.java:386)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:561)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:502)
at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:439)
at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1609)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:561)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1612)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1582)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:766)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:773)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:905)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.confluent.kafka.schemaregistry.exceptions.UnknownMasterException: Register schema request failed since master is unknown
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.registerOrForward(KafkaSchemaRegistry.java:559)
at io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource.register(SubjectVersionsResource.java:266)
… 56 more
[2021-02-28 12:49:06,986] INFO 10.244.59.38 - - [28/Feb/2021:17:49:06 +0000] "POST /subjects/test7/versions HTTP/1.1" 500 50 241 (io.confluent.rest-utils.requests:62)

Okay, so a few things:

  1. I can see schema.registry.group.id = schemaRegistry-iad1, which is not the default group. Make sure the "leaders" are part of this group as well if you want these "followers" to forward write requests. I think this might be your main issue.
  2. It looks like your _schemas topic is slightly corrupted with an entry Schema Registry can't recognize. I'd check that out.
  3. It looks like the user Schema Registry runs as does not have permission to create the log file.

Also, host.name doesn't necessarily have to be the Pod IP; that's just a common practice. For example, we set it to an internal DNS entry in our own Docker examples. So, as long as it is resolvable by the pods so they can reach the leader, go for it.
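Putting that together with point 1, a corrected pair of values might look like this (the groupID value and the cluster-2 hostname below are placeholders; what matters is that the group is identical in both clusters and each hostName resolves from the other cluster's pods):

```yaml
# Cluster 1 — leader-eligible
kafkastoreTopic: _schemas
groupID: schemaRegistry-shared
masterEligibility: 'true'
hostName: kafka-schema-c1.service.intradevbo1.consul.lcims.com  # plain DNS name, no scheme/port

# Cluster 2 — read-only, forwards writes to cluster 1's leader
kafkastoreTopic: _schemas
groupID: schemaRegistry-shared   # same group as cluster 1
masterEligibility: 'false'
hostName: kafka-schema-c2.service.intradevbo1.consul.lcims.com  # placeholder for cluster 2's own address
```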

Please let us know if this resolves your issue!

Thanks a lot for taking the time to explain. Yeah, both the group.id and DNS resolution were the issues. I think I can sort that out; I will post if I come across any other issues. Appreciate all the help!