Kafka MirrorMaker 2 or Replicator active/active: handling producers and consumers after a disaster

We have configured MirrorMaker 2 in an active/active topology across two regions, with a cluster in each region. Producers write to the local origin topics in each cluster, and these topics are successfully replicated to the other cluster as remote (target) topics.
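For reference, an active/active MM2 setup along these lines might look like the following (the cluster aliases, bootstrap addresses, and topic list are placeholders for illustration):

```properties
# mm2.properties — active/active replication between two clusters
clusters = us-east, us-west
us-east.bootstrap.servers = kafka-east:9092
us-west.bootstrap.servers = kafka-west:9092

# Replicate in both directions
us-east->us-west.enabled = true
us-west->us-east.enabled = true
us-east->us-west.topics = topic1
us-west->us-east.topics = topic1

# MM2's default ReplicationPolicy prefixes replicated topics with the
# source cluster alias, e.g. "us-east.topic1" on the us-west cluster.
```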

Scenario 1: If the consumers are set up to read all topics on a cluster, both local and remote, then a message written to, say, origin topic1 on cluster 1 is consumed locally; once it is copied over to remote topic1 on cluster 2, a consumer on that cluster will also consume it, so the message ends up being consumed twice. What is the best practice for handling this situation?

Scenario 2: If the consumer reads only from the local origin topic1 on each cluster, then the duplicate reads are avoided, but if one of the clusters fails, the load balancer will redirect the producer to the other cluster.

So will the two producers now both be writing to the local origin topic1, or does the redirected producer write to the remote topic1 of the failed cluster, with a new consumer instance reading from that?

Hi, I’m Confluent’s product manager for Replicator & multi-region.

The best practice will depend on your application and goals.

In the first scenario, you could de-duplicate within your consumer application(s). This would cover both the steady state and failure states.
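One common way to de-duplicate is to key each message with a unique ID and have the consumer skip IDs it has already processed. A minimal sketch of that idea, independent of any Kafka client library (the `message_id` field and the in-memory set are illustrative; a production system would typically use a persistent or TTL-bounded store):

```python
class DeduplicatingProcessor:
    """Processes each unique message ID at most once."""

    def __init__(self):
        self._seen = set()   # in production: a persistent / TTL-bounded store
        self.processed = []

    def handle(self, message_id, payload):
        """Process the message unless its ID has already been seen."""
        if message_id in self._seen:
            return False     # duplicate delivered via the remote topic: skip
        self._seen.add(message_id)
        self.processed.append(payload)
        return True


# The same message arrives twice: once from the local topic,
# and once via the replicated remote topic.
proc = DeduplicatingProcessor()
proc.handle("evt-42", "click")   # first delivery: processed
proc.handle("evt-42", "click")   # replicated copy: skipped
print(len(proc.processed))       # prints 1
```

The same pattern works whether duplicates come from cross-cluster replication or from ordinary at-least-once redelivery.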

In the second scenario, I recommend that the producers always produce to the topic that is local to the region. So when a producer moves over to a new region, it will be producing to a different topic. I find that many customers keep the name of the writable topic the same in all regions (perhaps, “clicks”) and then add a prefix to the topic name when it is replicated to different regions (perhaps, “remote.clicks” or “us-west-2.clicks”). This is the approach we’ve found most customers have success with.
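Under that naming convention, producers and consumers can derive their topics from the local region alone. A sketch of the idea (the region names and dot-prefix scheme are just examples; MM2's default policy uses the source cluster alias as the prefix):

```python
def write_topic(base):
    """Producers always write to the unprefixed local topic, in any region."""
    return base

def read_topics(base, all_regions, local_region):
    """Consumers read the local topic plus each replicated remote copy."""
    remote = [f"{region}.{base}" for region in all_regions
              if region != local_region]
    return [base] + remote

regions = ["us-west-2", "us-east-1"]
print(write_topic("clicks"))                        # clicks
print(read_topics("clicks", regions, "us-west-2"))  # ['clicks', 'us-east-1.clicks']
```

Because the writable topic name never changes, a producer that fails over to another region needs no reconfiguration, and consumers never see a locally-produced message twice: the replicated copy always lands under a distinct prefixed name.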
