__consumer_offsets topic with very big partitions

Some partitions of the __consumer_offsets topic have grown to 500-700 GB, with more than 5000-7000 segments each, and many of those segments are 2-3 months old. There are no errors in the logs, and the topic has its default cleanup.policy=compact.

What could the problem be? A misconfiguration, or a consumer problem? What checks could I run?
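
For context, this is how I'm measuring the per-partition sizes (the bootstrap server and data directory below are placeholders for my environment; kafka-log-dirs.sh needs Kafka 1.0+):

# per-partition size as reported by the broker
kafka-log-dirs.sh --describe \
  --bootstrap-server localhost:9092 \
  --topic-list __consumer_offsets

# or directly on a broker's data directory
du -sh /var/kafka-logs/__consumer_offsets-*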

My settings:

log.cleaner.enable = true
log.cleanup.policy = [delete]
log.retention.bytes = -1
log.segment.bytes = 268435456
log.retention.hours = 72
log.retention.check.interval.ms = 300000

offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
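
To rule out the broker-level delete policy: as far as I know, __consumer_offsets is created with its own cleanup.policy=compact topic override, so the log.cleanup.policy=delete above should not apply to it. I checked the override roughly like this (ZooKeeper address is a placeholder; newer brokers take --bootstrap-server instead):

kafka-configs.sh --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name __consumer_offsets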

Right now there is a single partition of __consumer_offsets that is 600 GB with roughly 6000 segments. The cluster has 60 consumer groups, 90 topics, and 100 partitions per topic. The other partitions of the topic are small, around 20-30 MB.

Another partition had the same problem earlier, and Kafka did eventually compact it after some months.
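
In case it helps, this is how I plan to inspect what the big partition actually contains (the data directory, partition number, and segment file are placeholders; on brokers older than 1.1 the same decoder is reachable via kafka-run-class.sh kafka.tools.DumpLogSegments):

# decode the offset-commit records in the oldest segment of the big partition
kafka-dump-log.sh --offsets-decoder \
  --files /var/kafka-logs/__consumer_offsets-11/00000000000000000000.log | head -n 50

Since a group's commits land on partition abs(groupId.hashCode) % offsets.topic.num.partitions, I expect the dump to show which of the 60 groups are concentrated on this one partition.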

Can misbehaving consumers cause this? If so, how can I verify it? What other checks can I run to track down the problem?
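
These are the checks I'm considering so far (log path, bootstrap server, and group id are placeholders):

# a dead log-cleaner thread silently stops compaction for every
# compacted partition on that broker
grep -iE "error|exception" /opt/kafka/logs/log-cleaner.log | tail

# related JMX metric to watch:
#   kafka.log:type=LogCleanerManager,name=time-since-last-run-ms

# list the groups and look at the ones committing most aggressively
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group <group-id>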