I am using the following Reactor Kafka version in my application:
During a performance test of my application I found the issue below, which I am not able to resolve.
I am consuming messages from a Kafka topic and processing them in my application.
My topic has 3 partitions, and I have 3 pods of my application running in my test environment.
I push 20 messages to each partition of the topic, which makes 60 messages in total. After pushing all the messages, I start my 3 application pods, each of which runs a reactive Kafka consumer. When I start the consumers, rebalancing takes place (which is fine, as it is expected on startup, and the Kafka partitions get assigned across the instances of my application). After this my application is supposed to consume 60 messages from the topic, but it consumes around 150+, which means it is consuming duplicate messages. In Kafka Lenses I can also see that when the lag on the topic goes down, it somehow builds up again due to the latest offset increasing and decreasing. I do not see any commit exception in my debug logs.
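For reference, the consumer is wired up roughly like the sketch below (a minimal, simplified version; the topic name, group id, and `process` method are placeholders, not my actual code — offsets are marked for commit with `acknowledge()` after each record is processed):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;

public class FeedConsumer {

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "feed-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // All 3 pods use the same group id, so the 3 partitions
        // are spread across them after rebalancing.
        ReceiverOptions<String, String> options =
                ReceiverOptions.<String, String>create(props)
                        .subscription(Collections.singleton("my-topic"));

        KafkaReceiver.create(options)
                .receive()
                .doOnNext(record -> {
                    process(record.value());               // application processing
                    record.receiverOffset().acknowledge(); // mark offset for async commit
                })
                .subscribe();
    }

    private static void process(String value) {
        // placeholder for the actual message handling
    }
}
```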
On the contrary, when all 3 pods of my application are already up and running, and the consumers are stable after rebalancing with each partition assigned to one pod, then pushing 20 feeds to each partition of my Kafka topic works fine: no duplicate messages are consumed, and my application consumes exactly 60 messages or feeds (20 from each partition).
So according to my analysis, there is some issue with the consumers when they try to read messages from the Kafka topic during rebalancing.
Can anyone help me solve this issue?