Hi,
Good day!
I have a 3-node Kafka cluster with log.dirs and dataDir (mounted and shared across the three nodes). What is the best way to implement disaster recovery?
- Using Mirror Maker
- Kafka DR (bringing up a replacement broker with the same broker id as the offline broker and mounting the shared log.dirs and dataDir)
Hi @hasmine.roldan
It depends a little on your needs/requirements, I guess.
Basically, MirrorMaker 2 can help you survive hardware errors or data center outages (depending on your environment). You also have the option to route some read requests to the passive cluster.
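For reference, a minimal active/passive MirrorMaker 2 setup can be sketched in the dedicated connect-mirror-maker.properties file. The cluster aliases (`primary`, `dr`) and bootstrap hosts below are placeholders, not your actual setup:

```properties
# Cluster aliases used by MirrorMaker 2
clusters = primary, dr

# Hypothetical bootstrap servers - replace with your own
primary.bootstrap.servers = primary-broker1:9092
dr.bootstrap.servers = dr-broker1:9092

# Active/passive: replicate only from primary to dr
primary->dr.enabled = true
primary->dr.topics = .*
dr->primary.enabled = false

replication.factor = 3
```

You would run this with the bundled connect-mirror-maker.sh script; in an active/passive setup the reverse flow stays disabled so no cycles can occur.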
Could you provide some more details about your current setup?
A 3-node Kafka cluster with shared log dirs? I guess every Kafka broker has its own directory, right?
Best,
Michael
Hi @mmuehlbeyer, that’s right, every broker has its own directory.
Kindly see the directory below:
Node 1:
dataDir=/var/lib/zookeeper/node_1
log.dirs=/tmp/kafka-logs/node_1
Node 2:
dataDir=/var/lib/zookeeper/node_2
log.dirs=/tmp/kafka-logs/node_2
Node 3:
dataDir=/var/lib/zookeeper/node_3
log.dirs=/tmp/kafka-logs/node_3
Also, may I know how to remove the topic prefix when replicating topics using MirrorMaker? I have replicated my topics to another cluster and they come out in the format source.topic_name.
Hi @mmuehlbeyer,
I have read that the prefix is a feature to prevent replication cycles, especially in active/active setups. I’m implementing an active/passive setup; is there a way to remove the topic prefix while replicating?
Hi @hasmine.roldan
I think you need to create and use your own customised replication policy.
See the forum thread mentioned above as well as this post at Stack Overflow:
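Depending on your Kafka version you may not even need to write a custom class: since Kafka 3.0 (KIP-690) a ready-made identity policy ships with MirrorMaker 2. A sketch, assuming Kafka 3.0+ and an active/passive flow (don’t use this in active/active, as it removes the cycle protection):

```properties
# Keep target topic names identical to source topic names
replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```

On older versions, the custom policy approach from the linked threads (extending DefaultReplicationPolicy and overriding the remote topic naming) is the way to go.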
hth,
michael
Or you could simply use the RegexRouter SMT to drop the prefix for target topics: RegexRouter | Confluent Documentation
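A sketch of that SMT approach in connector properties, assuming the replicated topics carry a `source.` prefix (i.e. the source cluster alias is `source`); the transform name `dropPrefix` is arbitrary:

```properties
# Strip the "source." prefix from target topic names
transforms = dropPrefix
transforms.dropPrefix.type = org.apache.kafka.connect.transforms.RegexRouter
transforms.dropPrefix.regex = source\\.(.*)
transforms.dropPrefix.replacement = $1
```

Note the doubled backslash: Java properties files consume one level of escaping, so `\\.` reaches the regex engine as a literal dot.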
Hello Guys,
I am new to Confluent Kafka and I was tasked with creating a DR strategy for our existing clusters using Confluent Replicator. Can someone guide me on how to do it?
Thank you for all the help in advance.
Regards,
Paolo