We are using Kafka to deliver events to our microservice. We populate the "key" field so that events with the same key land on the same partition and are processed sequentially. We have also configured consumer concurrency, which spawns multiple listener containers within a single microservice instance. Recently, while testing, we saw 5 out of the 6 listener containers all processing events with the same key.
(Here, listenerContainer-0 through listenerContainer-4 are all processing an event with the same key; the breakpoints are conditional on that key.)
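For context, the setup looks roughly like the sketch below, assuming Spring Kafka (the "listener container" and consumer-concurrency wording comes from `ConcurrentKafkaListenerContainerFactory`); the value type and the concurrency value are placeholders for our actual configuration:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // concurrency = 6 creates six listener containers per instance;
        // the producer sets the record key so events for the same entity
        // should hash to the same partition.
        factory.setConcurrency(6);
        return factory;
    }
}
```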
I have seen other posts recommending creating a ConsumerRebalanceListener.
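To make sure I understand that suggestion: as far as I can tell, a ConsumerRebalanceListener only receives callbacks when partitions are revoked or assigned, something like this minimal sketch (class name and logging are mine, purely for illustration):

```java
import java.util.Collection;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Observes rebalances (e.g. to log which partitions each container owns);
// it does not change how keys are mapped to partitions.
public class LoggingRebalanceListener implements ConsumerRebalanceListener {

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        System.out.println("Partitions revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println("Partitions assigned: " + partitions);
    }
}
```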
Is it really necessary to manually handle partition assignment when using consumer concurrency?
Using keys to keep events with the same key on the same partition seems like default behavior, so why would it need to be re-implemented?
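My understanding (please correct me if this is wrong) is that, with no custom Partitioner configured, the producer's built-in partitioner already maps a non-null key deterministically to a partition, roughly like this (the key and partition count here are made up for illustration):

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class DefaultKeyPartitioning {
    public static void main(String[] args) {
        int numPartitions = 6;        // assumed partition count
        String key = "order-42";      // hypothetical event key
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);

        // murmur2 hash of the key modulo the partition count: the same key
        // always yields the same partition as long as the count doesn't change.
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("key '" + key + "' -> partition " + partition);
    }
}
```

So a given key should only ever be consumed by the one container that owns its partition, which is why seeing five containers handle the same key surprised us.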