The key issue here is that Avro and other IDLs like Protobuf are not aware of their own schema IDs and do not ship with them by default. With Confluent’s default Serdes, the schema ID is fetched at runtime. As you have noted, this is a point of failure and in some sense wasteful: schema IDs are immutable after creation in Schema Registry, making them a perfect candidate for permanent caching. We can call this the traditional integration approach supplied by Confluent, where your applications must fetch schema IDs at runtime.
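For context on why the ID must be known at produce time at all: Confluent's wire format prefixes every message with a magic byte (0x00) and the 4-byte big-endian schema ID, followed by the Avro binary payload. A minimal framing sketch in plain Java (no Confluent dependency, class name is mine):

```java
import java.nio.ByteBuffer;

public class WireFormat {
    // Confluent wire format: magic byte 0x00, 4-byte big-endian schema ID, then payload.
    public static byte[] frame(int schemaId, byte[] avroPayload) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + avroPayload.length);
        buf.put((byte) 0x00);   // magic byte
        buf.putInt(schemaId);   // the ID consumers use to look the schema up
        buf.put(avroPayload);   // vanilla Avro binary bytes
        return buf.array();
    }

    // Reads the schema ID back out of a framed message.
    public static int schemaIdOf(byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        if (buf.get() != 0x00) {
            throw new IllegalArgumentException("Unknown magic byte");
        }
        return buf.getInt();
    }
}
```

The serializer cannot produce these 5 prefix bytes without knowing the ID, which is exactly what forces the runtime lookup in the traditional approach.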
Another approach is to take advantage of the fact that schema IDs are immutable. This works well if you fully control the producer/consumer and can modify your schema artifact build processes.
The approach can be summarized as “baking in” the schema IDs into the schema artifacts that you build from your respective IDL (Avro, Protobuf, etc.). Most applications already include a dependency containing the code generated from your schemas for developers to code against. By piggybacking on this existing artifact setup, a custom build pipeline can also project the necessary schema IDs into the artifact. You will need to think about which subject naming strategy you are using, as this determines which schema IDs you need to place into the “enhanced” artifacts. I find the RecordNameStrategy works well with this “baking in” approach, as you do not need to worry about which topics the schema is being sent to.
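As an illustration of what the build pipeline might project into the artifact (file name, package names, and IDs here are all hypothetical), a resource keyed by fully qualified record name fits naturally with RecordNameStrategy:

```
# src/main/resources/schema-ids.properties (generated at build time)
com.example.events.OrderCreated=101
com.example.events.OrderCancelled=102
```

Because IDs are immutable in Schema Registry, a stale copy of this file can never be wrong for the schemas it lists; at worst it is missing an entry for a newer schema, which fails loudly at build or startup rather than at runtime.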
This projection can be a file added to the artifact or a generated class, which a custom Serdes aware of the “bake in” approach can then read to resolve schema IDs locally, instead of needing to reach out to Schema Registry. For the Java stack, one can implement the SchemaRegistryClient interface with an implementation that reads from a local file or in-memory map.
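Rather than pull in the Confluent classes here, this self-contained sketch shows the shape of such a client: lookups hit a local map loaded from the baked-in resource and never touch the network. In a real build you would implement Confluent's SchemaRegistryClient interface so the standard serializers can use it unchanged; the class and method names below are my own.

```java
import java.util.Map;

// Hypothetical local-only resolver. In practice this logic would live inside an
// implementation of Confluent's SchemaRegistryClient interface so the stock
// serializers work without modification.
public class LocalSchemaIdResolver {
    private final Map<String, Integer> idsByRecordName;

    public LocalSchemaIdResolver(Map<String, Integer> idsByRecordName) {
        // In a real artifact this map would be loaded from the baked-in resource.
        this.idsByRecordName = idsByRecordName;
    }

    // RecordNameStrategy: the subject is derived from the record's fully
    // qualified name, so that name is the natural lookup key.
    public int schemaIdFor(String fullyQualifiedRecordName) {
        Integer id = idsByRecordName.get(fullyQualifiedRecordName);
        if (id == null) {
            throw new IllegalStateException(
                "No baked-in schema ID for " + fullyQualifiedRecordName
                + "; was the artifact built with the custom pipeline?");
        }
        return id; // no network call: IDs are immutable, so a local copy stays valid
    }
}
```

Failing fast on a missing entry is deliberate: it surfaces a mis-built artifact immediately instead of silently falling back to a network fetch.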
I implemented this approach for mobile devices, where instead of exposing Schema Registry as in the traditional method, I utilized the “bake in” approach.
Now millions of mobile devices do not need to reach out to Schema Registry, which removes a point of failure, reduces costs in terms of both requests and money, and avoids all the ops/security work of exposing Schema Registry in an HA setup.
You can learn more about this approach in a talk I gave recently (search for “Schemas Beyond The Edge” if the link does not work).
Note that for your use case of an outbox pattern, you could avoid serializing to the Confluent-encoded Avro format until the last possible moment: keep any schema-ID-aware Serdes out of the picture until the outbox poller processes the event (just persist vanilla Avro bytes to your outbox table).
This would require a custom app with some configuration, plus changes to your outbox table to track which schema the raw Avro bytes were written with (e.g. the record's class name) for later resolution against Schema Registry. The critical path then consists only of your business write and the outbox write with vanilla Avro based on your event definition (no Schema Registry needed). The actual fetching of the schema ID for the serialized vanilla bytes is done by the custom poller process, decoupling it from the critical request path.
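A sketch of the poller side of that flow (all names here are hypothetical): the outbox row carries the raw Avro bytes plus the record name, and only the poller, off the critical path, resolves the schema ID and frames the bytes in Confluent's wire format.

```java
import java.nio.ByteBuffer;
import java.util.function.ToIntFunction;

// Hypothetical outbox poller: the request path stores vanilla Avro bytes plus
// the record name; schema ID resolution happens only here, off the critical path.
public class OutboxPoller {

    // One row in the outbox table: raw Avro bytes and the record name used
    // to look the schema ID up later (pairs well with RecordNameStrategy).
    public record OutboxRow(String recordName, byte[] avroBytes) {}

    // Pluggable lookup: a Schema Registry client, or a baked-in local map.
    private final ToIntFunction<String> schemaIdLookup;

    public OutboxPoller(ToIntFunction<String> schemaIdLookup) {
        this.schemaIdLookup = schemaIdLookup;
    }

    // Resolve the ID, then frame the stored bytes in Confluent wire format.
    public byte[] toKafkaPayload(OutboxRow row) {
        int schemaId = schemaIdLookup.applyAsInt(row.recordName());
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + row.avroBytes().length);
        buf.put((byte) 0x00);      // magic byte
        buf.putInt(schemaId);      // 4-byte big-endian schema ID
        buf.put(row.avroBytes());  // the vanilla Avro bytes persisted earlier
        return buf.array();
    }
}
```

Injecting the lookup as a function keeps the poller testable and lets you swap the runtime Schema Registry lookup for a baked-in map later without touching the framing logic.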