OffsetOutOfRangeException causes


I noticed the following error in the startup logs, but otherwise the Kafka Streams application is working fine.

Could you please let me know under what circumstances I would get this error? Bing AI suggests that there could have been some changes to the topic data. In fact, a change was made to the Kafka Streams processor to fetch the state store from the ProcessorContext instead of injecting it.

```
o.a.k.s.p.i.StoreChangelogReader : stream-thread xxx Encountered org.apache.kafka.clients.consumer.OffsetOutOfRangeException fetching records from restore consumer for partitions [changelog], it is likely that the consumer's position has fallen out of the topic partition offset range because the topic was truncated or compacted on the broker, marking the corresponding tasks as corrupted and re-initializing it later.
```

Not sure what you mean by this.

Kafka Streams tracks the changelog offsets in a local .checkpoint file. If the .checkpoint file contains stale data, i.e., an offset that is smaller than the topic's beginning offset, the error you observe can happen. It basically means that the local store contains stale data: the topic got new data, and compaction (or retention, for windowed stores) deleted data in the changelog topic. Thus, the local store must be re-created from scratch and the changelog topic must be read from the beginning. Kafka Streams fixes this automatically.
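To make the condition concrete, here is a small Python sketch of the stale-checkpoint check described above. It assumes the plain-text layout of the `.checkpoint` file (a version line, an entry count, then one `<topic> <partition> <offset>` line per entry), and the `beginning_offsets` dict is a stand-in for the start offsets the restore consumer would fetch from the broker; the names are illustrative, not Kafka Streams internals.

```python
def find_stale_entries(checkpoint_text, beginning_offsets):
    """Return (topic, partition) pairs whose checkpointed offset
    precedes the partition's current beginning offset."""
    lines = checkpoint_text.strip().splitlines()
    # lines[0] is the format version, lines[1] the entry count
    stale = []
    for line in lines[2:]:
        topic, partition, offset = line.split()
        tp = (topic, int(partition))
        if int(offset) < beginning_offsets.get(tp, 0):
            # The checkpointed offset points before the topic's current
            # start: that range was truncated or compacted away, so the
            # local store must be rebuilt from the changelog's beginning.
            stale.append(tp)
    return stale


checkpoint = """0
1
app-store-changelog 0 100
"""
# Broker now reports the partition begins at offset 250 (older data deleted),
# so the checkpointed offset 100 is out of range.
print(find_stale_entries(checkpoint, {("app-store-changelog", 0): 250}))
# → [('app-store-changelog', 0)]
```

When a partition shows up as stale like this, Kafka Streams marks the task as corrupted, wipes the local store, and restores it by replaying the changelog from the broker's current beginning offset.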
