I have 2 services (1 producer that writes 15,000 messages to a Kafka topic, and 1 consumer that reads those messages from that topic), and I have a stretched 3-DC Kafka cluster (the 3 DCs are located within the same city, so latency is low).
To imitate a 2-DC failure I simultaneously shut down 2 of the Kafka brokers (systemctl kill via Ansible), so only 1 Kafka broker is left up and running. I have acks=all, replication factor 3, and min.insync.replicas=3, so in theory writes to Kafka should stop as soon as even 1 broker is down, but in my case my service keeps writing to Kafka with only 1 node alive!
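One thing worth ruling out (an assumption on my part, not something visible in your post): min.insync.replicas only applies to a topic if it is set as a topic-level override or as a broker-wide default in server.properties. You can check both with the stock tooling (the topic name `command` is taken from your describe command below; broker id 0 is assumed):

```shell
# Show topic-level overrides for the topic; min.insync.replicas must
# appear here, or as a broker default, to be in effect for this topic:
kafka-configs --zookeeper localhost:2181 --describe \
  --entity-type topics --entity-name command

# Show dynamic per-broker overrides for broker id 0 (repeat per broker);
# if nothing is set here either, the broker falls back to
# min.insync.replicas in server.properties (whose default is 1).
kafka-configs --zookeeper localhost:2181 --describe \
  --entity-type brokers --entity-name 0
```

If the topic shows no override and server.properties still has the default of 1, acks=all is satisfied by the single surviving replica, which would explain what you are seeing.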
In your scenario min.insync.replicas=3
couldn't be satisfied with only one broker.
From the docs:
A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of “all”. This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
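For reference, the setup the docs describe could be created like this (a sketch only; the broker/ZooKeeper address, partition count, and topic name are assumptions):

```shell
# Create a topic with 3 replicas but require only 2 in-sync replicas,
# so one broker can fail without stopping acks=all producers:
kafka-topics --create --zookeeper localhost:2181 --topic command \
  --partitions 3 --replication-factor 3 \
  --config min.insync.replicas=2

# The producer then sets acks=all; once fewer than 2 replicas are in
# sync, writes fail with a NotEnoughReplicas error instead of
# silently succeeding on the remaining replica.
```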
Sorry, I listed the describe output from the healthy brokers.
Here it is with 2 dead brokers:
kafka-topics --describe --zookeeper localhost:2181 --topic command
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
but the service writes messages to the 1 alive broker and all 15,000 messages are there!
I have 3 datacenters located in the same city (within 20 km).
I have 3 VMs (Oracle Linux), and Kafka and ZooKeeper are installed on each VM.
For the test case I run 1 producer and 1 consumer (the producer writes 15,000 messages to Kafka, and the consumer reads messages from the topic and echoes them to its logs). To test a DC failure I start these services (the producer connects to Kafka and begins writing), then I kill ZooKeeper and Kafka on 2 of the servers, leaving only 1 node alive. With acks=all and min.insync.replicas=3 this should stop writes to the topic, but that does not happen and the records are successfully put into Kafka.
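Two things I would check here (both assumptions about your setup, not facts from the post): whether min.insync.replicas=3 is actually in effect for this topic, and whether your producer surfaces send errors at all (with fire-and-forget sends, failed writes can go unnoticed in the application). Both can be inspected from the surviving node; `<alive-host>` below is a placeholder for it:

```shell
# While the 2 brokers are down, describe the topic against the live
# ZooKeeper (the localhost instance from the earlier command will time
# out if its ZooKeeper was killed too):
kafka-topics --describe --zookeeper <alive-host>:2181 --topic command
# Expected: Isr: lists only the surviving broker id. If
# min.insync.replicas=3 is really in effect, an acks=all produce
# should now fail with a NOT_ENOUGH_REPLICAS error.

# Reproduce with the console producer, which prints produce errors
# directly to the terminal:
kafka-console-producer --broker-list <alive-host>:9092 \
  --topic command --request-required-acks all
```

If the console producer fails while your service keeps writing, the service is most likely not using acks=all (or is swallowing the send errors).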