Confluent Community Edition

Team,

I downloaded the Confluent Community edition and am trying to start the services, but I am getting the exception below. Please let me know what is missing here.

[root@qfxprodsupportlocal logs]# tail -100 schema-registry.log
        kafkastore.ssl.trustmanager.algorithm = PKIX
        kafkastore.ssl.truststore.location =
        kafkastore.ssl.truststore.password = [hidden]
        kafkastore.ssl.truststore.type = JKS
        kafkastore.timeout.ms = 500
        kafkastore.topic = _schemas
        kafkastore.topic.replication.factor = 3
        kafkastore.topic.skip.validation = false
        kafkastore.update.handlers = []
        kafkastore.write.max.retries = 5
        leader.eligibility = true
        listener.protocol.map = []
        listeners = [http://0.0.0.0:8081]
        master.eligibility = null
        metric.reporters = []
        metrics.jmx.prefix = kafka.schema.registry
        metrics.num.samples = 2
        metrics.sample.window.ms = 30000
        metrics.tag.map = []
        mode.mutability = true
        nosniff.prevention.enable = false
        port = 8081
        proxy.protocol.enabled = false
        reject.options.request = false
        request.logger.name = io.confluent.rest-utils.requests
        request.queue.capacity = 2147483647
        request.queue.capacity.growby = 64
        request.queue.capacity.init = 128
        resource.extension.class = []
        resource.extension.classes = []
        resource.static.locations = []
        response.http.headers.config =
        response.mediatype.default = application/vnd.schemaregistry.v1+json
        response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
        rest.servlet.initializor.classes = []
        schema.cache.expiry.secs = 300
        schema.cache.size = 1000
        schema.canonicalize.on.consume = []
        schema.compatibility.level = backward
        schema.providers = []
        schema.registry.group.id = schema-registry
        schema.registry.inter.instance.protocol =
        schema.registry.resource.extension.class = []
        shutdown.graceful.ms = 1000
        ssl.cipher.suites = []
        ssl.client.auth = false
        ssl.client.authentication = NONE
        ssl.enabled.protocols = []
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = [hidden]
        ssl.keymanager.algorithm =
        ssl.keystore.location =
        ssl.keystore.password = [hidden]
        ssl.keystore.reload = false
        ssl.keystore.type = JKS
        ssl.keystore.watch.location =
        ssl.protocol = TLS
        ssl.provider =
        ssl.trustmanager.algorithm =
        ssl.truststore.location =
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        thread.pool.max = 200
        thread.pool.min = 8
        websocket.path.prefix = /ws
        websocket.servlet.initializor.classes = []
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
[2022-09-20 12:10:35,644] INFO Logging initialized @2221ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2022-09-20 12:10:35,860] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2022-09-20 12:10:36,151] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2022-09-20 12:10:40,535] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2022-09-20 12:10:40,535] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2022-09-20 12:10:40,535] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2022-09-20 12:10:40,778] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://localhost:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2022-09-20 12:10:40,930] INFO Creating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2022-09-20 12:10:40,944] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2022-09-20 12:10:41,130] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2022-09-20 12:10:41,342] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:314)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:75)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
        at io.confluent.rest.Application.configureHandler(Application.java:284)
        at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:267)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Failed trying to create or validate schema topic configuration
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:194)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:124)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:312)
        ... 6 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
        at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.verifySchemaTopic(KafkaStore.java:252)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.createSchemaTopic(KafkaStore.java:237)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:186)
        ... 8 more
Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[root@qfxprodsupportlocal logs]#

Regards,
Dora.

It looks like the broker is not configured to auto-create topics, and Schema Registry needs the _schemas topic to exist before it can start. You can create it manually, though.
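
For example, something along these lines should work (a sketch only; adjust the bootstrap server and replication factor to your environment, and note a single-broker dev setup can only use a replication factor of 1):

kafka-topics --bootstrap-server localhost:9092 --create --topic _schemas --partitions 1 --replication-factor 1 --config cleanup.policy=compact

Alternatively, you could set auto.create.topics.enable=true in the broker's server.properties, or lower kafkastore.topic.replication.factor in the Schema Registry properties to match the number of brokers you actually have, and then restart Schema Registry.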