We are thinking of designing an Elasticsearch sink connector. In this case the consumer group is that of Kafka Connect. This will be for search. But the same messages are also routed to another topic so that they can be pushed using a cloud messaging tool.
Are my assumptions correct?
- I need one consumer group for the sink and another for the push messages.
- Our requirements are not fully defined yet, but the push messages could succeed while the sink connector fails. This isn’t a serious problem if I can monitor the DLQ properly.
Is there a different architecture for routing the same message to different consumer groups and channels? Is any orchestration possible without complicating things?
This can also happen if the consumer channels are different, e.g. SMS and push notifications. Both carry the same messages for the customer’s convenience.
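For the DLQ monitoring mentioned above: Kafka Connect’s error-handling settings can route records that fail in a sink connector to a dead letter queue topic. A sketch of those settings (the property names come from Kafka Connect’s error-handling configuration; the topic name and replication factor here are example values, not recommendations):

```properties
# Route records that fail during conversion or in the sink task to a DLQ topic
errors.tolerance=all
errors.deadletterqueue.topic.name=dlq-elasticsearch-sink
errors.deadletterqueue.topic.replication.factor=3
# Attach error context (exception, original topic/partition/offset) as headers
errors.deadletterqueue.context.headers.enable=true
```

You can then monitor the DLQ topic with an ordinary consumer or an alerting tool to catch sink failures without blocking the push-notification path.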
@mohanr I think you might need to adjust your mental model of what consumer groups are. Think of a consumer group as a bookmark in the topic. Consumers join a group by name, and as they consume messages they can store (in the cluster) the offset of the last message read (like a page number where you place a bookmark).
Different groups of consumers can save different bookmarks. So you do not need to “route” events to different consumer groups. Consumers in different groups can read messages from topics concurrently and independently, keeping their individual bookmarks. The retention settings for topics will determine how long your consumers have to read this data before it is removed from the topic.
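The bookmark analogy can be sketched as a toy model. This is plain Python, not real Kafka code; `TinyTopic` and all names in it are invented purely for illustration of how each group keeps its own independent offset:

```python
# Toy model: a topic is an append-only log; each consumer group keeps its
# own committed offset ("bookmark") independently of every other group.
class TinyTopic:
    def __init__(self):
        self.log = []        # messages, addressed by offset
        self.bookmarks = {}  # group name -> next offset to read

    def produce(self, msg):
        self.log.append(msg)

    def poll(self, group):
        """Return all unread messages for this group and advance its bookmark."""
        start = self.bookmarks.get(group, 0)
        msgs = self.log[start:]
        self.bookmarks[group] = len(self.log)
        return msgs

topic = TinyTopic()
topic.produce("order-1")
topic.produce("order-2")

# Two groups read the same messages independently -- no routing required:
print(topic.poll("connect-es-sink"))  # ['order-1', 'order-2']
print(topic.poll("push-notifier"))    # ['order-1', 'order-2']

topic.produce("order-3")
print(topic.poll("push-notifier"))    # ['order-3']  (only its own bookmark moved)
print(topic.poll("connect-es-sink"))  # ['order-3']
```

Note what real Kafka adds on top of this sketch: partitions, and retention that eventually deletes old log entries whether or not every bookmark has passed them.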
For various reasons, you may still want messages to flow through a topic that represents the sending of your push notifications, and then on to the topic that sinks them to Elasticsearch, but consumer groups alone are not a reason to do this.
Also, I wanted to be sure you’re aware of the existing Elasticsearch sink connector, which is available with a community license option. You may want to investigate it instead of building your own!
ElasticSearch Sink Connector | Confluent Hub
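For reference, a minimal configuration for that connector might look like the following. The connector class and property names are from Confluent’s Elasticsearch sink connector; the connector name, topic, and URL are placeholder values:

```json
{
  "name": "orders-es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "orders",
    "connection.url": "http://localhost:9200",
    "key.ignore": "true",
    "schema.ignore": "true",
    "tasks.max": "1"
  }
}
```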
So in this case I just have to configure a consumer group name in the Kafka Connect sink and use that same name in my Spring Boot consumer. Will both consumers then do what they are designed to do?
I plan to use Confluent’s Sink Connector.
If you use the same consumer group name for both the connector and the Spring Boot application, they will read the messages ‘cooperatively’. That means they will divvy up the responsibility of reading messages out of the topic: one application will read from one set of partitions and the other will read from the remaining partitions. This is probably not what you want.
Instead, you likely want a different consumer group name for each of the two applications. You would typically reuse a consumer group name to scale out reading by running multiple instances of the same application. Just keep in mind that consumers, whether they share a consumer group name or not, read from the topic’s partitions in parallel; ordering is only guaranteed within a single partition, not across the topic as a whole.
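Concretely, the two group names end up in different places. A Kafka Connect sink connector manages its own consumer group, named `connect-<connector name>` by default, while a Spring Boot consumer takes its group from application properties. A sketch, assuming Spring for Apache Kafka and a hypothetical connector named `orders-es-sink`:

```properties
# Spring Boot application.properties -- a separate group for the push service:
spring.kafka.consumer.group-id=push-notification-service
spring.kafka.consumer.auto-offset-reset=earliest

# The Kafka Connect sink, by contrast, uses the group "connect-orders-es-sink"
# automatically; you normally do not set a group id in the connector config.
```

With distinct groups, each application keeps its own offsets and both read every message from the topic.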
Let me point you to our free Kafka 101 course, which contains a module on consumers (module 8). https://developer.confluent.io/learn-kafka/apache-kafka/events/
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.