Answering questions about consistency in a data mesh


I’m new so if this is in the wrong place please let me know :slightly_smiling_face:

I’m working on a data mesh strategy - nothing too complicated. Super briefly: my team will put together a Java library defining all our events, update the schema registry via the Maven plugin, and help domain owners produce to our cluster in Confluent Cloud.
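For context, registering schemas from a build is typically done with Confluent’s Schema Registry Maven plugin. A rough sketch of what that configuration looks like (the URL, subject names, and file paths here are placeholders, not our actual setup):

```xml
<plugin>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-schema-registry-maven-plugin</artifactId>
  <configuration>
    <schemaRegistryUrls>
      <param>https://schema-registry.example.com</param>
    </schemaRegistryUrls>
    <subjects>
      <!-- subject name -> schema file in the library (illustrative paths) -->
      <order-events-value>src/main/avro/OrderEvent.avsc</order-events-value>
    </subjects>
  </configuration>
</plugin>
```

Running the plugin’s `register` goal then pushes those schemas to the registry as part of the build.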

The most problematic question I’ve had is around how we ‘guarantee’ all our read models are consistent.

The concerns are two-fold:

  1. Are asynchronous events ‘good enough’ compared to a database where we can guarantee consistency? Especially as we probably can’t enforce the single-writer principle, so reasoning about the order of events could get very tricky.

  2. In the real world, how do we manage multiple consumers, all of which drive decision making, but only some of which are real-time? In practice that means different read models ending up in the hands of business colleagues. We hear things like: “at least in a data warehouse, the data is out of date but 100% consistent” :slight_smile:
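On the ordering worry in (1): even without a single writer, Kafka does guarantee order within a partition, so keying every event for an entity the same way keeps that entity’s events ordered. A minimal sketch of the idea (using `hashCode()` as a stand-in for Kafka’s actual murmur2-based default partitioner, and made-up event names):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class KeyedOrdering {
    // Stand-in for the producer's partitioner: same key -> same partition.
    // (Kafka's default partitioner hashes the serialized key with murmur2;
    // hashCode() here is for illustration only.)
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 6;
        List<String> eventsForOrder42 = List.of("OrderCreated", "OrderPaid", "OrderShipped");

        // All three events share the key "order-42", so they land on one partition...
        Set<Integer> assigned = new HashSet<>();
        for (String ignored : eventsForOrder42) {
            assigned.add(partitionFor("order-42", partitions));
        }
        // ...and a single partition preserves append order, regardless of
        // how many producers wrote the events.
        System.out.println("partitions used: " + assigned.size()); // prints 1
    }
}
```

The takeaway: you get per-key ordering from partitioning alone; what you give up is a global order across different keys.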

For those of you who are further down the road than I am, did you run into questions/problems like these?

I appreciate your time.


Welcome Toby!

I don’t believe I’ve ever seen “data mesh” and “nothing too complicated” in the same sentence before! :smiley: There’s a ton to unpack here.

I’ve found these questions come up time and time again. The challenge isn’t so much that decentralizing “breaks” consistency; it’s that it allows (and forces) us to be much more nuanced and fine-grained about consistency. In a data mesh there isn’t a global, ACID-compliant notion of consistency. That is not to say there isn’t ANY consistency - it just means there is read consistency, and eventual consistency, and ordering guarantees based on certain criteria.
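To make “eventual consistency” concrete: a read model can converge on the right answer even under duplicate or out-of-order delivery, as long as updates are applied idempotently. A small sketch of one common approach (a last-writer-wins projection keyed by a version number; the event and field names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ReadModel {
    // An event carrying a per-key version, e.g. sourced from an
    // aggregate's sequence number (illustrative shape).
    record Event(String key, long version, String payload) {}

    private final Map<String, Event> state = new HashMap<>();

    // Idempotent apply: keep only the highest version seen per key,
    // so duplicates are no-ops and stale deliveries are ignored.
    void apply(Event e) {
        state.merge(e.key(), e, (current, incoming) ->
                incoming.version() > current.version() ? incoming : current);
    }

    String read(String key) {
        Event e = state.get(key);
        return e == null ? null : e.payload();
    }

    public static void main(String[] args) {
        ReadModel model = new ReadModel();
        // Deliveries arrive out of order and with a duplicate...
        model.apply(new Event("customer-1", 2, "email=new@example.com"));
        model.apply(new Event("customer-1", 1, "email=old@example.com")); // stale: ignored
        model.apply(new Event("customer-1", 2, "email=new@example.com")); // duplicate: no-op
        // ...yet the model converges on the latest state.
        System.out.println(model.read("customer-1")); // prints email=new@example.com
    }
}
```

Different consumers of the same topic may lag by different amounts, but each converges to the same state - which is exactly the “out of date but consistent” property the warehouse folks are describing, just per read model.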

To clarify a bit, do you mean:

  1. coordinating across versions of the same schema within a single topic or,
  2. coordinating schemas across multiple topics?

I was starting to write an answer to this and realized my answer is very different for 1 vs. 2.
