Kafka consumers stop reading messages from a topic

We are load-testing a set of 50 Kafka adapter consumers that pull messages from topic-A and forward them to a TCP port. In a separate process, a single Kafka adapter producer reads those messages from the port and writes them to topic-B. All the adapters are built with Camel Kafka on the Spring framework. The Kafka brokers are running Confluent 6.1.0. Our intent is to evaluate behavior under heavy load.
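For reference, the two route types look roughly like the sketch below (broker address, TCP port, and group id are simplified placeholders, and the two routes actually live in separate processes):

```java
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class AdapterRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consumer adapter (50 instances): pull from topic-A, forward to a TCP port.
        from("kafka:topic-A?brokers=localhost:9092&groupId=loadtest-consumers")
            .to("netty:tcp://localhost:4444?sync=false");

        // Producer adapter (single instance, separate process):
        // listen on the TCP port, write everything received to topic-B.
        from("netty:tcp://0.0.0.0:4444?sync=false")
            .to("kafka:topic-B?brokers=localhost:9092");
    }
}
```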

Passing 2 KB and 1 MB messages with the adapters running succeeds. Our problems began with two negative tests, both meant to simulate a system failure: (1) stopping and restarting the consumers while messages are being loaded into topic-A, and (2) loading messages into topic-A with the consumers stopped, then starting them. In both cases the consumers fail to pick up messages from topic-A, and the following errors appear in succession in all the consumer logs:

“Offset commit failed on partition topic.kpd.eis.smartadapter.pcm.kafka.to.tcp.loadtest-7 at offset 705916: The coordinator is not aware of this member.”

“Error saving offset repository state topic.kpd.eis.smartadapter.pcm.kafka.to.tcp.loadtest-Thread 0 from offsetKey topic.kpd.eis.smartadapter.pcm.kafka.to.tcp.loadtest/1 with offset: 626628”

“org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.”

“Failed to close coordinator org.apache.kafka.common.KafkaException: User rebalance callback throws an error”

We were using the default consumer property settings, but are now questioning those. I’ve now changed “auto.offset.reset” from “latest” to “earliest”, and I am also likely to increase “max.partition.fetch.bytes” on the consumer side and “max.request.size” on the producer side to something greater than 1 MB. What exactly do the four errors quoted above indicate is wrong in our configuration or our understanding? And which consumer properties do we need to modify to overcome those errors?
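For concreteness, here is roughly how I expect those overrides to look on the Camel Kafka endpoints. The sizes are illustrative (2 MB, so a full 1 MB record fits with headroom), and I'm assuming the Camel option names map onto the raw Kafka property names as shown; I haven't verified these are the right values:

```java
// Consumer endpoint with the overrides under consideration.
// autoOffsetReset / maxPartitionFetchBytes correspond to the Kafka
// properties auto.offset.reset / max.partition.fetch.bytes.
from("kafka:topic-A"
        + "?brokers=localhost:9092"
        + "&groupId=loadtest-consumers"
        + "&autoOffsetReset=earliest"
        + "&maxPartitionFetchBytes=2097152")   // 2 MB, > the 1 MB test messages
    .to("netty:tcp://localhost:4444?sync=false");

// Producer endpoint; maxRequestSize corresponds to max.request.size.
from("netty:tcp://0.0.0.0:4444?sync=false")
    .to("kafka:topic-B?brokers=localhost:9092&maxRequestSize=2097152");
```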

Resolved: the errors indicated that our consumers were taking longer than “max.poll.interval.ms” between poll() calls, so they were removed from the consumer group (hence “the coordinator is not aware of this member”) and their subsequent offset commits were rejected. Increasing the value of “max.poll.interval.ms” and decreasing the value of “max.poll.records” in the consumer configurations overcame the offset commit errors. Thanks!
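For anyone hitting this later, the change amounts to something like the following, shown here as raw Kafka consumer config; the exact values depend on how long your route takes to process each record, so treat these numbers as examples rather than recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerOverrides {
    public static Properties overrides() {
        Properties props = new Properties();
        // Allow more time between poll() calls before the consumer is
        // considered failed and removed from the group (Kafka default: 300000 ms).
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
        // Hand back fewer records per poll so each batch completes well
        // inside that interval (Kafka default: 500).
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
        return props;
    }
}
```

In Camel Kafka, I believe the same settings are exposed as the `maxPollIntervalMs` and `maxPollRecords` endpoint options.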