Data Quality Rules

Hello everyone,

I hope you’re all doing well. I wanted to reach out to the community because I’m relatively new to Kafka, and I’ve been exploring data quality rules. While I’ve made some progress, I still have a few questions and challenges I’d like to discuss.

First of all, I must admit that I’m not yet well-versed in Kafka and Kafka Streams. I’m eager to learn more and dive deeper into this technology. I’ve been trying to grasp the concept of data quality rules in Kafka, but I found the Confluent documentation on this topic somewhat challenging to understand. If anyone has any helpful resources or insights, I’d greatly appreciate it.

Currently, I’m running the latest version of Kafka, and I have the Stream Governance Advanced Package installed. I’ve also enabled the Schema Registry and its extensions. Additionally, I’ve successfully created a schema with rules using the Schema Registry REST API.
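For anyone else at this step, here is a minimal sketch of what a schema-with-rules registration payload looks like. The subject name `orders-value`, the `Order` record, the `ssn` field, and the rule name are all illustrative, not from the post; the overall shape (a `ruleSet` with `domainRules` submitted alongside the schema) follows the Confluent Data Contracts documentation:

```python
import json

# Hypothetical Avro value schema with a single string field.
avro_schema = {
    "type": "record",
    "name": "Order",
    "fields": [{"name": "ssn", "type": "string"}],
}

# Registration payload: the ruleSet travels with the schema version itself.
# This CEL condition rule fails writes where ssn is not exactly 9 characters.
payload = {
    "schemaType": "AVRO",
    "schema": json.dumps(avro_schema),
    "ruleSet": {
        "domainRules": [
            {
                "name": "checkSsnLen",
                "kind": "CONDITION",
                "type": "CEL",
                "mode": "WRITE",
                "expr": "size(message.ssn) == 9",
            }
        ]
    },
}

# POST this body to the Schema Registry REST API, e.g.:
#   curl -s -X POST \
#     -H "Content-Type: application/vnd.schemaregistry.v1+json" \
#     -d @payload.json \
#     http://localhost:8081/subjects/orders-value/versions
print(json.dumps(payload, indent=2))
```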

My main question at this stage is: What else do I need to do to start working with data quality rules effectively? Are there any specific configurations or best practices I should be aware of?
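One sanity check worth doing first is confirming the rules actually landed on the registered version. Below is a small sketch, assuming a response from `GET /subjects/<subject>/versions/latest`; the subject name, rule name, and field are hypothetical placeholders:

```python
def rule_names(version_response: dict) -> list:
    """Return the names of domain rules attached to a schema version."""
    rule_set = version_response.get("ruleSet") or {}
    return [r["name"] for r in rule_set.get("domainRules", [])]


# Shape of a response from, e.g.:
#   GET http://localhost:8081/subjects/orders-value/versions/latest
sample = {
    "subject": "orders-value",
    "version": 1,
    "schema": "...",
    "ruleSet": {
        "domainRules": [
            {"name": "checkSsnLen", "kind": "CONDITION", "type": "CEL",
             "mode": "WRITE", "expr": "size(message.ssn) == 9"}
        ]
    },
}

print(rule_names(sample))  # prints ['checkSsnLen']
```

If the `ruleSet` key is missing here, the rules were not attached and no client will enforce them.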

I’m also curious about the impact on producers. Do I need to make any adjustments to the producers to work seamlessly with data quality rules?
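From what the data contracts docs describe, yes: rules are enforced client-side by rule-aware serializers, so the producer must (a) use a serializer that supports rule execution (documented most fully for the Java serializers; check whether your client version supports it), and (b) serialize against the registered version that carries the ruleSet rather than auto-registering a rule-less copy. A sketch of the relevant configuration, using option names from confluent-kafka-python's `AvroSerializer` (the wiring at the bottom is hypothetical and left commented out):

```python
# Serializer configuration for a producer that should honor registered rules.
serializer_conf = {
    # Don't let the producer auto-register a rule-less schema that would
    # shadow the version you registered with a ruleSet.
    "auto.register.schemas": False,
    # Serialize against the latest registered version -- the one carrying
    # your ruleSet -- instead of looking up the schema by its text.
    "use.latest.version": True,
}

# Hypothetical wiring (requires confluent-kafka with the Avro extras, and a
# client version with data-contract rule support):
# from confluent_kafka.schema_registry import SchemaRegistryClient
# from confluent_kafka.schema_registry.avro import AvroSerializer
# sr = SchemaRegistryClient({"url": "http://localhost:8081"})
# serializer = AvroSerializer(sr, schema_str=None, conf=serializer_conf)
```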

I would be grateful for any guidance or insights you can provide!


We’ll be posting a blog soon with a more in-depth tutorial on using Data Contracts, including data quality rules, with Confluent Schema Registry.
