MirrorMaker2 not replicating certain topics

Hello everyone!

I have set up a MirrorMaker2 cluster with 2 nodes that replicates a 3-broker (plus ZooKeeper) Kafka cluster with just under 1100 topics. Replication works as expected for all but the following topics:
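For context, the skeleton of my mm2.properties looks roughly like this (the aliases and bootstrap addresses shown here are placeholders, not my real ones):

clusters = source, target
source.bootstrap.servers = source-broker-1:9092
target.bootstrap.servers = target-broker-1:9092
source->target.enabled = true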

STAGE.OU.task-updated-ics2-internal
STAGE.OU.task-updated-internal
STAGE.OU.task-upsert-events-internal
STAGE.OU.task-upsert-ics2-events-internal
TEST.OU.task-updated-ics2-internal
TEST.OU.task-updated-internal
TEST.OU.task-upsert-events-internal
TEST.OU.task-upsert-ics2-events-internal

There are several equivalent topics for a different environment that do get replicated; however, they are not a useful comparison since they contain no messages.

I do not know whether it is relevant, but the message counts are as follows:

STAGE.OU.task-updated-ics2-internal - 55873
STAGE.OU.task-updated-internal - 420987
STAGE.OU.task-upsert-events-internal - 15000000+
STAGE.OU.task-upsert-ics2-events-internal - 43502
TEST.OU.task-updated-ics2-internal - 0
TEST.OU.task-updated-internal - 35
TEST.OU.task-upsert-events-internal - 14461
TEST.OU.task-upsert-ics2-events-internal - 0

I have made sure, via the mm2.properties configuration, that all topics are included and that no topics are excluded:

topics=.*
topics.exclude=
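As far as I understand, the empty value overrides MM2's default exclude list (.*[\-\.]internal, .*\.replica, __.*), which would otherwise match these topic names. The per-flow equivalent, with placeholder cluster aliases, would be:

source->target.topics = .*
source->target.topics.exclude =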

I have also tried replicating only these topics, without success:

topics=STAGE.OU.task-updated-ics2-internal,STAGE.OU.task-updated-internal,STAGE.OU.task-upsert-events-internal,STAGE.OU.task-upsert-ics2-events-internal,TEST.OU.task-updated-ics2-internal,TEST.OU.task-updated-internal,TEST.OU.task-upsert-events-internal,TEST.OU.task-upsert-ics2-events-internal

I am using IdentityReplicationPolicy as the replication policy class. There is no indication of an error in the MirrorMaker logs.
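For completeness, the policy is set via the standard property in mm2.properties (just the bundled class, nothing custom):

replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy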

Source cluster is at version 7.6.5-ccs
Target cluster is at version 7.6.4-ccs
MirrorMaker cluster is at version 7.6.5-ccs

Any idea why these topics are excluded from replication?

hey @alexei.ivanov

I’m not fully sure, just thinking out loud:

could this be related to the topic naming, as all the topics have "internal" in their names?

there is some logic around the internal topics in MM2, so maybe it’s somehow related to that

Hi @mmuehlbeyer! Thanks for the quick response.

could this be related to the topic naming, as all the topics have "internal" in their names? there is some logic around the internal topics in MM2, so maybe it’s somehow related to that

That was my initial thought as well. However, I see that topics with the same name but a different environment prefix (which also end with "internal") are being replicated just fine. So the theory seems plausible, but not 100% accurate. Perhaps there is a glitch in this logic surrounding internal topics? I have also tried configuring exclude.internal.topics, setting it to false, but MirrorMaker seems to ignore this configuration no matter which way it is set. Any other ideas/insights will be greatly appreciated!
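For what it's worth, since exclude.internal.topics is a plain consumer property, among the variants I tried was the per-cluster consumer pass-through (the source/target aliases are placeholders for my actual cluster aliases):

source.consumer.exclude.internal.topics = false
target.consumer.exclude.internal.topics = false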

hmm, pretty strange. need to dig around a bit.

maybe something worth trying:
what about a second MM2 config/cluster which only serves the topics not getting replicated at the moment?
maybe we can gain some further insights into the behaviour by setting everything to DEBUG then?
(I’ve seen you already tried that, but probably within the same config/cluster?)
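something like this in the log4j properties of the MirrorMaker node (the exact file name depends on the distribution):

log4j.logger.org.apache.kafka.connect.mirror=DEBUG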

another one: what happens if you switch to the default replication policy?
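the visible difference would mainly be in the remote topic naming, roughly like this (a minimal sketch; "source" is just an example alias, not your actual setup):

// sketch of how the two bundled policies name a replicated topic
import org.apache.kafka.connect.mirror.DefaultReplicationPolicy;
import org.apache.kafka.connect.mirror.IdentityReplicationPolicy;

public class PolicyNamingSketch {
    public static void main(String[] args) {
        String topic = "STAGE.OU.task-updated-internal";
        // DefaultReplicationPolicy prefixes the source cluster alias
        System.out.println(new DefaultReplicationPolicy().formatRemoteTopic("source", topic));
        // -> source.STAGE.OU.task-updated-internal
        // IdentityReplicationPolicy keeps the original name
        System.out.println(new IdentityReplicationPolicy().formatRemoteTopic("source", topic));
        // -> STAGE.OU.task-updated-internal
    }
}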

I’ve just checked.

according to the current source (line 97 following) you should not be hit by the internal topic check:

default boolean isMM2InternalTopic(String topic) {
    return topic.startsWith("mm2") && topic.endsWith(".internal") || isCheckpointsTopic(topic);
}
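i.e. for your topic names that check comes out false (a quick sketch; isCheckpointsTopic only matches the ".checkpoints.internal" suffix, so it is false for these names as well):

public class InternalCheckSketch {
    // the quoted predicate, inlined for the sketch (minus the checkpoints case)
    static boolean isMM2InternalTopic(String topic) {
        return topic.startsWith("mm2") && topic.endsWith(".internal");
    }

    public static void main(String[] args) {
        // an affected topic: no "mm2" prefix, and "-internal" is not ".internal"
        System.out.println(isMM2InternalTopic("STAGE.OU.task-updated-internal"));   // false
        // a genuine MM2 internal topic, for contrast
        System.out.println(isMM2InternalTopic("mm2-offset-syncs.target.internal")); // true
    }
}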

Thanks @mmuehlbeyer!

After looking at the code a bit more closely, I may have discovered the issue.

according to the current source (line 97 following) you should not be hit by the internal topic check

The code you referred to is newer than the version I am currently running. That check was updated in December 2024, so we need to upgrade our clusters to version 7.8 or 7.9 (we are currently on 7.6.5). Take a look at this updated segment:

https://github.com/apache/kafka/commit/0bbed823e818a920fbaa2d54b50f4fcd81a8a759#diff-20a9931da3de4b577145af409d7e704e90ed30b410e4d8236e297b033dea6d0aR100-L109

However, it still puzzles me how 2 of my 10 topics that end with “-internal” got replicated. Anyway, I think I need to update the clusters before I continue figuring out what is wrong.

Thanks for pointing me in the right direction!
Btw, DEBUG/TRACE log mode did not produce any leads.

P.S. I am off on Easter break, so it might take a while before I can get back to you with any results.
