Confluent services unstable on Mac: services die shortly after start

… not sure where this should go, so hoping it might fit in here.
I’ve tried running the cp-all-in-one stacks on my Mac, but I run into resource issues with everything else running, so I find the local Confluent services deployment runs better… or well, when I run it, various services start and then fail shortly after.
I had things running nicely… everything worked except for Control Center and Schema Registry, and as I was not using them it was not a major issue. But now I want to integrate code with Schema Registry, so that’s a problem… I went and redeployed (unpacked the zip) (oops, forgot to back up my old server.properties… darn).

Where are the logs? Is there a resource setting or something? I’m wondering if the stack is running out of resources, making modules fail.

G
Honestly, I wouldn’t mind getting cp-all-in-one working either… I just need to figure out how to stitch the networking together: one app native on the OS, plus cp-all-in-one basically in Docker, and then the balance of the app in minikube…

Hi,

I assume you’re running the quickstart described here (correct me if I’m wrong):

https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html

There should be a “log” directory.

I would start by checking the ZooKeeper and Kafka broker logs.

HTH


… there should be a log directory, but it does not seem to exist under CONFLUENT_HOME.

G

A temp directory should be created during startup.

What’s the output of running `confluent local services start`?

Logs should exist in this tmp dir.
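To save hunting by hand, here’s a small helper of my own (not part of the Confluent CLI) that finds the newest `confluent.*` tmp dir and lists the log files beneath it. The `<service>/logs/` layout is an assumption based on the paths discussed further down in this thread.

```python
# My own sketch, not a Confluent tool: locate the newest confluent.* tmp
# dir (the CONFLUENT_CURRENT dir) and list the log files beneath it.
import glob
import os
import tempfile


def find_confluent_current(tmp_root=None):
    """Return the most recently modified confluent.* dir, or None."""
    tmp_root = tmp_root or tempfile.gettempdir()
    candidates = glob.glob(os.path.join(tmp_root, "confluent.*"))
    return max(candidates, key=os.path.getmtime) if candidates else None


def list_service_logs(current_dir):
    """Collect files under each <service>/logs/ subdirectory (assumed layout)."""
    pattern = os.path.join(current_dir, "*", "logs", "*")
    return sorted(p for p in glob.glob(pattern) if os.path.isfile(p))
```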

Screen grab of CONFLUENT_HOME after a failed start:

Georges-MacBook-Pro.local:/Users/george/Desktop/ProjectsCommon/confluent-6.1.1 > ls -la
total 8
drwxr-xr-x@  9 george  staff   288 Mar 17 02:21 .
drwxr-xr-x@  7 george  staff   224 May  8 09:33 ..
-rw-r--r--@  1 george  staff   871 Mar 17 02:20 README
drwxr-xr-x@ 85 george  staff  2720 Mar 17 01:15 bin
drwxr-xr-x@ 17 george  staff   544 Mar 17 01:15 etc
drwxr-xr-x@  3 george  staff    96 Mar 17 01:11 lib
drwxr-xr-x@  3 george  staff    96 Mar 17 01:15 libexec
drwxr-xr-x@  7 george  staff   224 Mar 17 01:15 share
drwxr-xr-x@  6 george  staff   192 Mar 17 02:20 src
Georges-MacBook-Pro.local:/Users/george/Desktop/ProjectsCommon/confluent-6.1.1 > cpstat

The local commands are intended for a single-node development environment only,
NOT for production usage. https://docs.confluent.io/current/cli/index.html

Using CONFLUENT_CURRENT: /var/folders/hr/8xcqn0yx5s92ksdx_8tb10l00000gn/T/confluent.538487
Connect is [DOWN]
Control Center is [DOWN]
Kafka is [UP]
Kafka REST is [DOWN]
ksqlDB Server is [DOWN]
Schema Registry is [DOWN]
ZooKeeper is [UP]
Georges-MacBook-Pro.local:/Users/george/Desktop/ProjectsCommon/confluent-6.1.1 >

ZooKeeper logs should be in

/var/folders/hr/8xcqn0yx5s92ksdx_8tb10l00000gn/T/confluent.538487/zookeeper/logs

and Kafka logs in

/var/folders/hr/8xcqn0yx5s92ksdx_8tb10l00000gn/T/confluent.538487/kafka/logs


Ahhh, so that’s what that CONFLUENT_CURRENT implies…

G

Hmmm, now how do I attach a log file… thinking if we fix one of the failing modules we’re probably going to fix them all.

I just need either this one or the cp-all-in-one to work; this one seems to be easier to integrate with from my 2 apps, one in minikube, one note…

In the meantime, here is the tail of the Schema Registry log:

	websocket.path.prefix = /ws
	websocket.servlet.initializor.classes = []
	zookeeper.set.acl = false
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
[2021-05-10 07:33:22,414] INFO Logging initialized @1648ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-05-10 07:33:22,454] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-05-10 07:33:22,667] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-05-10 07:33:27,687] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2021-05-10 07:33:27,687] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2021-05-10 07:33:27,687] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2021-05-10 07:33:27,706] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://localhost:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2021-05-10 07:33:27,736] INFO Creating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2021-05-10 07:33:27,738] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2021-05-10 07:33:27,755] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2021-05-10 07:33:27,775] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:297)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:73)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:88)
	at io.confluent.rest.Application.configureHandler(Application.java:255)
	at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:227)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Failed trying to create or validate schema topic configuration
	at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:189)
	at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:121)
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:295)
	... 6 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
	at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
	at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
	at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
	at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
	at io.confluent.kafka.schemaregistry.storage.KafkaStore.verifySchemaTopic(KafkaStore.java:247)
	at io.confluent.kafka.schemaregistry.storage.KafkaStore.createSchemaTopic(KafkaStore.java:232)
	at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:181)
	... 8 more
Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2021-05-10 09:23:57,474] INFO SchemaRegistryConfig values: 
	access.control.allow.headers = 
	access.control.allow.methods = 
	access.control.allow.origin = 
	access.control.skip.options = true
	authentication.method = NONE
	authentication.realm = 
	authentication.roles = [*]
	authentication.skip.paths = []
	avro.compatibility.level = 
	compression.enable = true
	csrf.prevention.enable = false
	csrf.prevention.token.endpoint = /csrf
	csrf.prevention.token.expiration.minutes = 30
	csrf.prevention.token.max.entries = 10000
	debug = false
	host.name = localhost
	idle.timeout.ms = 30000
	inter.instance.headers.whitelist = []
	inter.instance.protocol = http
	kafkastore.bootstrap.servers = [PLAINTEXT://localhost:9092]
	kafkastore.checkpoint.dir = /tmp
	kafkastore.checkpoint.version = 0
	kafkastore.connection.url = localhost:2181
	kafkastore.group.id = 
	kafkastore.init.timeout.ms = 60000
	kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
	kafkastore.sasl.kerberos.min.time.before.relogin = 60000
	kafkastore.sasl.kerberos.service.name = 
	kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
	kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
	kafkastore.sasl.mechanism = GSSAPI
	kafkastore.security.protocol = PLAINTEXT
	kafkastore.ssl.cipher.suites = 
	kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
	kafkastore.ssl.endpoint.identification.algorithm = 
	kafkastore.ssl.key.password = [hidden]
	kafkastore.ssl.keymanager.algorithm = SunX509
	kafkastore.ssl.keystore.location = 
	kafkastore.ssl.keystore.password = [hidden]
	kafkastore.ssl.keystore.type = JKS
	kafkastore.ssl.protocol = TLS
	kafkastore.ssl.provider = 
	kafkastore.ssl.trustmanager.algorithm = PKIX
	kafkastore.ssl.truststore.location = 
	kafkastore.ssl.truststore.password = [hidden]
	kafkastore.ssl.truststore.type = JKS
	kafkastore.timeout.ms = 500
	kafkastore.topic = _schemas
	kafkastore.topic.replication.factor = 3
	kafkastore.topic.skip.validation = false
	kafkastore.update.handlers = []
	kafkastore.write.max.retries = 5
	kafkastore.zk.session.timeout.ms = 30000
	leader.eligibility = true
	listeners = [http://0.0.0.0:8081]
	master.eligibility = null
	metric.reporters = []
	metrics.jmx.prefix = kafka.schema.registry
	metrics.num.samples = 2
	metrics.sample.window.ms = 30000
	metrics.tag.map = []
	mode.mutability = true
	port = 8081
	request.logger.name = io.confluent.rest-utils.requests
	request.queue.capacity = 2147483647
	request.queue.capacity.growby = 64
	request.queue.capacity.init = 128
	resource.extension.class = []
	resource.extension.classes = []
	resource.static.locations = []
	response.http.headers.config = 
	response.mediatype.default = application/vnd.schemaregistry.v1+json
	response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
	rest.servlet.initializor.classes = []
	schema.compatibility.level = backward
	schema.providers = []
	schema.registry.group.id = schema-registry
	schema.registry.inter.instance.protocol = 
	schema.registry.resource.extension.class = []
	schema.registry.zk.namespace = schema_registry
	shutdown.graceful.ms = 1000
	ssl.cipher.suites = []
	ssl.client.auth = false
	ssl.client.authentication = NONE
	ssl.enabled.protocols = []
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = [hidden]
	ssl.keymanager.algorithm = 
	ssl.keystore.location = 
	ssl.keystore.password = [hidden]
	ssl.keystore.reload = false
	ssl.keystore.type = JKS
	ssl.keystore.watch.location = 
	ssl.protocol = TLS
	ssl.provider = 
	ssl.trustmanager.algorithm = 
	ssl.truststore.location = 
	ssl.truststore.password = [hidden]
	ssl.truststore.type = JKS
	thread.pool.max = 200
	thread.pool.min = 8
	websocket.path.prefix = /ws
	websocket.servlet.initializor.classes = []
	zookeeper.set.acl = false
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
[2021-05-10 09:23:57,569] INFO Logging initialized @1064ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-05-10 09:23:57,597] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-05-10 09:23:57,732] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-05-10 09:24:58,352] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1288)
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:158)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:69)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:88)
	at io.confluent.rest.Application.configureHandler(Application.java:255)
	at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:227)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: java.util.concurrent.TimeoutException
	at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
	at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1286)
	... 7 more

Hmm, seems your Kafka broker takes too long to respond.

  • Did you check the Kafka server logs?
  • Did you try to create the topic manually?
  • Is something else running which might impact performance?
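On the last point: before digging further, it’s worth confirming the broker socket even answers in time. A quick probe of my own (just a TCP connect, not what Schema Registry itself does; host and port taken from the `PLAINTEXT://localhost:9092` in your log):

```python
# Minimal TCP reachability probe -- my own sketch. Schema Registry's real
# check is an AdminClient metadata call; this only tests that the broker
# port accepts connections within the timeout.
import socket


def broker_reachable(host="localhost", port=9092, timeout=2.0):
    """True if a TCP connection to the broker succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the socket answers but the topic check still fails, creating `_schemas` by hand with the `kafka-topics` tool (something like `kafka-topics --bootstrap-server localhost:9092 --create --topic _schemas --partitions 1 --replication-factor 1`; exact flags assumed from the standard tool) would cover the second bullet.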

Let me shut some things down and see how it behaves…

As said, this used to be the stable drop, the one that used the least resources. The above screen print was taken while the cp-all-in-one stack was down; I run either of them, never both together. Well, for now I don’t mind which I can get working/stable.

The cp-all-in-one has been nice… if I can just get all the networking in and out resolved.

G

… Resolved… I think.

I replaced my server.properties file with the original as per the zip file.

Got all messages flowing again, so it seems I have a working stack, using the confluent-6.1.1.zip drop.
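For anyone landing here later, a quick smoke test I use (my own sketch, stdlib only) to confirm Schema Registry is actually serving: hit its `/subjects` endpoint on the port from the log (8081) and expect a JSON list back.

```python
# My own smoke test, not a Confluent tool: GET /subjects from Schema
# Registry and return the parsed JSON list, or None if it is unreachable.
import json
import urllib.request


def schema_registry_ok(base_url="http://localhost:8081", timeout=5.0):
    """Return the list of registered subjects, or None on any network error."""
    try:
        with urllib.request.urlopen(f"{base_url}/subjects", timeout=timeout) as resp:
            return json.loads(resp.read().decode())
    except OSError:
        return None
```

A fresh install with no schemas registered should come back with an empty list rather than an error.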

G
