MirrorMaker 2: topic gets created on target but records stay (or loop) on source
Behaviour
- Topology: Source cluster (Docker Compose, Confluent 7.9.0, PLAINTEXT) → firewall / DNAT → Target cluster (Strimzi, TLS + SCRAM, LB at 188.94.xxx.xxx). Only ports 9092 / 9095 / 9096 are forwarded; the target cannot open connections back.
- MM2 (runs only on the source) connects fine, creates quelle.<topic> on the target, then logs exactly one LEADER_NOT_AVAILABLE and never produces again.
- New records appear in the source cluster under the prefixed name instead.
- When we forgot the prefix earlier, MM2 produced straight back into the original source topic, causing an endless loop; the topic grew huge and became unreadable.
- Manual TLS/SCRAM publish into the target topic works (roughly the test sketched below, after the question), so networking & auth are fine.
- Same behaviour with Confluent 7.4 → 7.9.0, fresh connector group IDs, manual pre-creation of target topics, higher retries, shorter refresh intervals, etc.
Is there a confirmed fix or reliable workaround, e.g. producer overrides (a sketch of what we mean follows the connector config below), multiple tasks, or always creating the target topics beforehand? Or would you recommend falling back to MirrorMaker 1 / Cluster Linking for one-way replication behind a NAT?
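For reference, the manual TLS/SCRAM publish mentioned above looked roughly like this, run from a host in the source network that has the Kafka CLI tools and can reach the forwarded ports. The file name client-ssl.properties and the topic quelle.zabbix.test are placeholders; the SCRAM user and truststore are the same ones the connector uses.

# client-ssl.properties mirrors the connector's target.cluster.* settings (passwords masked)
cat > client-ssl.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka-mirrormaker-user" password="********";
ssl.truststore.location=/etc/kafka/certs/kafka-truststore.jks
ssl.truststore.password=********
ssl.endpoint.identification.algorithm=
EOF

# Produce one record directly into a prefixed topic on the target (topic name is an example)
echo 'manual-test' | kafka-console-producer \
  --bootstrap-server 188.94.xxx.xxx:9092 \
  --topic quelle.zabbix.test \
  --producer.config client-ssl.properties

# ...and read it back to confirm it arrived
kafka-console-consumer \
  --bootstrap-server 188.94.xxx.xxx:9092 \
  --topic quelle.zabbix.test \
  --from-beginning --max-messages 1 \
  --consumer.config client-ssl.properties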
Sanitised configs
1 · docker-compose.yml – source cluster (full)
version: "3.8"
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:7.9.0
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;zookeeper-2:2888:3888;zookeeper-3:2888:3888"
    ports: ["127.0.0.1:12181:2181"]
    networks: [kafka_network]
  zookeeper-2:
    image: confluentinc/cp-zookeeper:7.9.0
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;zookeeper-2:2888:3888;zookeeper-3:2888:3888"
    networks: [kafka_network]
  zookeeper-3:
    image: confluentinc/cp-zookeeper:7.9.0
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;zookeeper-2:2888:3888;zookeeper-3:2888:3888"
    networks: [kafka_network]
  kafka-1:
    image: confluentinc/cp-kafka:7.9.0
    container_name: kafka-1
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181"
      KAFKA_LISTENERS: DOCKER://0.0.0.0:19092,HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-1:19092,HOST://127.0.0.1:9092
      KAFKA_MESSAGE_MAX_BYTES: "104857600"
      KAFKA_REPLICA_FETCH_MAX_BYTES: "104857600"
    ports: ["127.0.0.1:9092:9092","127.0.0.1:19092:19092"]
    networks: [kafka_network]
  kafka-2:
    image: confluentinc/cp-kafka:7.9.0
    container_name: kafka-2
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181"
      KAFKA_LISTENERS: DOCKER://0.0.0.0:29092,HOST://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-2:29092,HOST://127.0.0.1:9093
      KAFKA_MESSAGE_MAX_BYTES: "104857600"
      KAFKA_REPLICA_FETCH_MAX_BYTES: "104857600"
    ports: ["127.0.0.1:9093:9093","127.0.0.1:29092:29092"]
    networks: [kafka_network]
  kafka-3:
    image: confluentinc/cp-kafka:7.9.0
    container_name: kafka-3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181"
      KAFKA_LISTENERS: DOCKER://0.0.0.0:39092,HOST://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka-3:39092,HOST://127.0.0.1:9094
      KAFKA_MESSAGE_MAX_BYTES: "104857600"
      KAFKA_REPLICA_FETCH_MAX_BYTES: "104857600"
    ports: ["127.0.0.1:9094:9094","127.0.0.1:39092:39092"]
    networks: [kafka_network]
  kafka-mirrormaker2:
    image: confluentinc/cp-kafka-connect:7.9.0
    container_name: kafka-mirrormaker2
    ports: ["127.0.0.1:8085:8083"]
    volumes: ["./config:/kafka/config","./certs:/etc/kafka/certs:ro"]
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:29092,kafka-3:39092
      CONNECT_GROUP_ID: mm2-group-v2
      CONNECT_CONFIG_STORAGE_TOPIC: mm2-v2-configs
      CONNECT_OFFSET_STORAGE_TOPIC: mm2-v2-offsets
      CONNECT_STATUS_STORAGE_TOPIC: mm2-v2-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.converters.ByteArrayConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.converters.ByteArrayConverter
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.mirror=TRACE,org.apache.kafka.clients=TRACE"
      CONNECT_PLUGIN_PATH: /usr/share/java
    command: >
      bash -c '
        /etc/confluent/docker/run &
        while ! nc -z localhost 8083; do sleep 2; done
        curl -s -XPOST -H "Content-Type: application/json" \
          --data @/kafka/config/mm2-connector.json \
          http://localhost:8083/connectors
        tail -f /dev/null
      '
    networks: [kafka_network]
networks:
  kafka_network: {driver: bridge}
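To make the symptom concrete, this is roughly how we check where the records actually land (ports and container names are the ones from the compose file above; quelle.zabbix.test is just an example of a prefixed topic):

# Connector and task state as reported by the Connect REST API of the MM2 container
curl -s http://localhost:8085/connectors/mm2-connector/status

# The quelle.* topics exist and keep growing on the SOURCE brokers ...
docker exec kafka-1 kafka-topics --bootstrap-server kafka-1:19092 --list | grep '^quelle\.'

# ... and the mirrored records can be read back from the source, while the
# quelle.* topics MM2 created on the target stay empty
docker exec kafka-1 kafka-console-consumer --bootstrap-server kafka-1:19092 \
  --topic quelle.zabbix.test --from-beginning --max-messages 5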
2 · mm2-connector.json (sanitised)
{
  "name": "mm2-connector",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "tasks.max": "1",
    "topics": "^zabbix\\..*",
    "topics.exclude": "^ziel\\..*",
    "source.cluster.alias": "quelle",
    "source.cluster.bootstrap.servers": "kafka-1:19092,kafka-2:29092,kafka-3:39092",
    "source.cluster.security.protocol": "PLAINTEXT",
    "target.cluster.alias": "ziel",
    "target.cluster.bootstrap.servers": "188.94.xxx.xxx:9096,188.94.xxx.xxx:9092,188.94.xxx.xxx:9095",
    "target.cluster.security.protocol": "SASL_SSL",
    "target.cluster.sasl.mechanism": "SCRAM-SHA-512",
    "target.cluster.sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"kafka-mirrormaker-user\" password=\"********\";",
    "target.cluster.ssl.endpoint.identification.algorithm": "",
    "target.cluster.ssl.truststore.location": "/etc/kafka/certs/kafka-truststore.jks",
    "target.cluster.ssl.truststore.password": "********",
    "refresh.topics.interval.seconds": "10",
    "producer.override.retries": "10",
    "producer.override.acks": "all",
    "producer.override.request.timeout.ms": "30000",
    "consumer.override.key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "consumer.override.value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "producer.override.key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "producer.override.value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "replication.policy.class": "org.apache.kafka.connect.mirror.DefaultReplicationPolicy",
    "sync.topic.acls.enabled": "false",
    "sync.topic.configs.enabled": "false"
  }
}
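For completeness, this is the kind of "producer overrides" workaround mentioned in the question: pointing the per-connector producer at the target cluster instead of the Connect worker's own bootstrap cluster. It is only a sketch of what we have been experimenting with, not a confirmed fix, and whether it actually redirects the MirrorSourceConnector's output is part of what we are asking. Per-connector client overrides also need connector.client.config.override.policy=All in the worker config (CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY in the compose file), and the pipeline below assumes jq is available on the Docker host.

# Merge extra producer.override.* keys into the running connector config via the
# Connect REST API (GET the current config map, add keys, PUT it back).
curl -s http://localhost:8085/connectors/mm2-connector/config |
  jq '. + {
      "producer.override.bootstrap.servers": "188.94.xxx.xxx:9096,188.94.xxx.xxx:9092,188.94.xxx.xxx:9095",
      "producer.override.security.protocol": "SASL_SSL",
      "producer.override.sasl.mechanism": "SCRAM-SHA-512",
      "producer.override.sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"kafka-mirrormaker-user\" password=\"********\";",
      "producer.override.ssl.truststore.location": "/etc/kafka/certs/kafka-truststore.jks",
      "producer.override.ssl.truststore.password": "********",
      "producer.override.ssl.endpoint.identification.algorithm": ""
    }' |
  curl -s -XPUT -H "Content-Type: application/json" --data @- \
    http://localhost:8085/connectors/mm2-connector/config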
3 · Target Strimzi listener (excerpt)
listeners:
  - name: intern
    port: 29092
    type: internal
    tls: true
    authentication: {type: scram-sha-512}
  - name: external
    port: 9092
    type: loadbalancer
    tls: true
    authentication: {type: scram-sha-512}
    configuration:
      brokers:
        - {broker: 0, advertisedHost: 188.94.xxx.xxx, advertisedPort: 9092}
        - {broker: 1, advertisedHost: 188.94.xxx.xxx, advertisedPort: 9095}
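To double-check the NAT side, we also verify from the MM2 host that each forwarded port answers an authenticated metadata round trip, using the same placeholder client-ssl.properties as in the manual test above:

# Every advertised host:port (9092 / 9095 / 9096) must be reachable from the source
# network; kafka-broker-api-versions opens an authenticated connection and fetches metadata.
for p in 9092 9095 9096; do
  kafka-broker-api-versions --bootstrap-server 188.94.xxx.xxx:$p \
    --command-config client-ssl.properties > /dev/null \
    && echo "port $p reachable" || echo "port $p FAILED"
done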
Thanks in advance!
Any idea why MirrorMaker 2 stops after the first LEADER_NOT_AVAILABLE, commits offsets but never produces (while a direct TLS/SCRAM produce succeeds), and even loops back into the source topic when the prefix is missing?