How to ensure that all events are picked up from a Kafka topic

I have a microservice that receives event messages from a Kafka messaging system. Each event message contains order information: the quantity, the packaging mode, and the delivery location. At a predetermined time, these events are pushed to the Kafka stream. A special event, the full delivery event, is issued to signify that all events have been sent by the producer. On receiving the full delivery event, the microservice stops processing further events.

This worked fine until now because:

  1. We had a limited number of events, around 150.
  2. Kafka and the microservice were each a single instance running on a single machine.

Once we move to multiple Kafka nodes, a much larger number of events (10,000, for instance), and numerous microservice instances, I believe this approach will break.

How do I avoid the following:

  1. What if the full delivery event is picked up first while there are still many other events waiting to be consumed?

What changes should I make to the system to prevent these scenarios?

The benefit we are looking for in explicitly adding an ending event is:

Suppose there are two delivery teams (Team1 and Team2). Team1 serves Postal Codes 10243 and 10245, while Team2 serves Postal Codes 10247 and 10249. Now, there is a Sony TV package to be delivered to Postal Code 10243, so it falls under Team1’s jurisdiction; however, that is their only order. The neighbouring locations 10247 and 10249, which fall under Team2, have 30 orders to deliver. In such cases, the package delivery can be given to Team2, saving money, because there is a large minimum amount that must be paid to each team for items to be delivered. These are business decisions over which we have no control.

To deal with such a scenario, we used the logic described above.

Kafka has ordering guarantees that avoid this situation; you only have to be careful with your partitioning.

In your use case, if you are planning to deploy multiple instances of this application, I suppose you are also going to use multiple consumers and multiple partitions. Each instance should have one consumer taking care of the messages from one partition in a one-to-one relationship.
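As a minimal sketch of why the partitioning matters (plain Python simulating Kafka’s default key-hash partitioning rather than using a real client; the postal-code keys and partition count are illustrative assumptions): keying each order by its postal code sends all orders for that postal code to the same partition, so the one consumer assigned to that partition sees them in order.

```python
# Sketch: key-based partitioning, in the spirit of Kafka's default
# partitioner (hash of the message key modulo the partition count).
# The keys and NUM_PARTITIONS below are illustrative assumptions.
import zlib

NUM_PARTITIONS = 3

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a message key to a partition index."""
    return zlib.crc32(key.encode()) % num_partitions

orders = [
    {"postal_code": "10243", "item": "Sony TV"},
    {"postal_code": "10247", "item": "Speaker"},
    {"postal_code": "10243", "item": "Cable"},
]

partitions = [partition_for(o["postal_code"]) for o in orders]

# Orders sharing a key always land in the same partition,
# so a single consumer reads them in the order they were sent.
assert partitions[0] == partitions[2]
```

The same property holds with a real Kafka producer as long as every message carries a key: the partitioner is deterministic per key, so per-key ordering is preserved.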

I suggest making your source microservice send one full delivery message to each partition. That way, every instance of your application will consume all the messages from its assigned partition until it finds the full delivery message, at which point you can stop it programmatically. The other instances will keep consuming until the same thing happens to them.
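That idea can be sketched as follows (plain Python lists stand in for Kafka partitions, and the `FULL_DELIVERY` payload is an illustrative assumption): the producer appends one end marker to every partition, and each consumer instance drains only its own partition until it sees the marker.

```python
# Assumed sentinel payload marking the end of a partition's events.
FULL_DELIVERY = {"type": "FULL_DELIVERY"}

def produce(partitioned_events):
    """Producer side: append one full-delivery marker to every partition."""
    for partition in partitioned_events:
        partition.append(FULL_DELIVERY)

def consume_partition(partition):
    """One consumer instance: process events until the marker arrives."""
    processed = []
    for event in partition:
        if event == FULL_DELIVERY:
            break  # this instance can now be stopped programmatically
        processed.append(event)
    return processed

partitions = [
    [{"order": 1}, {"order": 2}],
    [{"order": 3}],
]
produce(partitions)

assert consume_partition(partitions[0]) == [{"order": 1}, {"order": 2}]
assert consume_partition(partitions[1]) == [{"order": 3}]
```

Because each partition carries its own marker, no consumer can see the end-of-stream signal before it has seen all of its own events, which is exactly the race the question describes.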

I don’t know what kind of consumer you are using, but there are utilities that allow you to consume all the messages from a topic and, if nothing arrives within a specific period of time, stop the application.
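The idle-timeout alternative can be sketched like this (a `queue.Queue` stands in for a Kafka consumer’s blocking poll; the timeout value is an illustrative assumption): keep polling, and stop once no message arrives within the timeout window.

```python
# Sketch: stop consuming once the stream has been idle for a while.
# queue.Queue.get(timeout=...) plays the role of the consumer's
# poll-with-timeout; idle_timeout is an illustrative assumption.
import queue

def drain_until_idle(q: "queue.Queue", idle_timeout: float = 0.1):
    """Poll until no message arrives within idle_timeout seconds."""
    received = []
    while True:
        try:
            received.append(q.get(timeout=idle_timeout))
        except queue.Empty:
            break  # nothing arrived in time: assume the stream is done
    return received

q = queue.Queue()
for i in range(3):
    q.put({"order": i})

assert len(drain_until_idle(q)) == 3
```

Note the trade-off: unlike the explicit per-partition marker, a timeout can end the loop early if the producer simply pauses longer than the window, so the timeout must be chosen generously for your traffic pattern.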