We are using the confluent-kafka package in our Python code to consume data from Kafka topics. After consuming the data and processing it for our needs, we commit the offset manually. Normally, offsets are committed automatically, but we control the commit ourselves: if processing or transforming the data fails after consumption, we do not commit the offset.
With this approach, we sometimes see that an offset is not committed even though it should be. Can you suggest what might cause this, and confirm whether committing offsets manually is a best practice?
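For reference, a minimal sketch of the pattern being described, with auto-commit disabled and a synchronous commit after successful processing. The broker address, topic, and group names are placeholders, and `process` stands in for the actual transformation logic:

```python
from confluent_kafka import Consumer

# Placeholder configuration for illustration.
conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "my-consumer-group",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,  # take over offset commits manually
}

consumer = Consumer(conf)
consumer.subscribe(["my-topic"])

def process(msg):
    # Placeholder for the actual processing/transformation logic.
    ...

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        try:
            process(msg)
        except Exception as exc:
            # Processing failed: skip the commit so the message
            # is redelivered after a restart or rebalance.
            print(f"Processing failed, not committing: {exc}")
            continue
        # Commit only after the message was processed successfully.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```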
Are you committing synchronously or asynchronously? Could you share a code snippet?
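For context on why this matters: `Consumer.commit()` in confluent-kafka is asynchronous by default, so a failed commit does not raise at the call site and can go unnoticed, which can look like offsets "silently" not being committed. Reusing `consumer` and `msg` from the sketch above:

```python
# Asynchronous (the default): returns immediately; a commit failure
# is not raised here, so it can go unnoticed.
consumer.commit(message=msg, asynchronous=True)

# Synchronous: blocks until the commit completes and raises
# KafkaException on failure, making commit errors visible.
consumer.commit(message=msg, asynchronous=False)
```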
It’s really a use-case-dependent tradeoff rather than a universal best practice. You get granular control to prevent or minimize duplicates, at the cost of some code complexity and (potentially) performance.
I have tried the code and captured the error message; it shows "Exceeded max.poll.interval".
I consume messages from a topic, process them, and commit the offsets after processing is complete. However, message processing sometimes takes longer than max.poll.interval.ms, which causes the consumer to be treated as dead and triggers a rebalance of the consumer group.
Can you suggest any solutions to keep the consumer alive and avoid this issue?
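One common mitigation, sketched below under the assumption that the slow work can be handed off to a worker thread, is to pause the assigned partitions and keep calling poll() while processing runs, since each poll() call resets the max.poll.interval.ms timer; alternatively, simply raising max.poll.interval.ms in the config is the easier knob if processing time is bounded. The names here are placeholders, and the sketch ignores rebalance edge cases (e.g., partitions being revoked while paused):

```python
import threading
from confluent_kafka import Consumer

conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "my-consumer-group",        # placeholder
    "enable.auto.commit": False,
    "max.poll.interval.ms": 300000,  # the default; raise it if processing time is bounded
}

consumer = Consumer(conf)
consumer.subscribe(["my-topic"])

def slow_process(msg, done: threading.Event):
    # Placeholder for the long-running processing logic.
    ...
    done.set()

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue

    # Hand the slow work to a thread, pause fetching, and keep
    # polling so the consumer is not evicted from the group.
    done = threading.Event()
    threading.Thread(target=slow_process, args=(msg, done)).start()
    consumer.pause(consumer.assignment())
    while not done.is_set():
        # Returns no messages while paused, but still serves the
        # group protocol and resets the poll-interval timer.
        consumer.poll(timeout=1.0)
    consumer.resume(consumer.assignment())
    consumer.commit(message=msg, asynchronous=False)
```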