How to write data in Schema Registry format with Spark Structured Streaming?

Hi There!
I am trying to write batch data to a Kafka topic with Schema Registry in Databricks using PySpark. I serialize the data with PySpark's to_avro function and write it to the topic, but the consumers can't read the schema ID. If they configure their deserializer not to expect the schema ID in the first 5 bytes, they can read the data fine.
I read the Avro schema from an .avsc file downloaded from Confluent, and its version is correct.

This is my script:

The variable schema_str holds the Avro schema that was read from the file.

Can anyone help me serialize this data with the schema ID in the first few bytes (Schema Registry wire format)?
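For context, the Confluent wire format is: 1 magic byte (value 0), then the 4-byte schema ID as a big-endian integer, then the Avro-encoded payload. PySpark's to_avro emits only the payload, so the header has to be prepended manually. Below is a minimal pure-Python sketch of building that header; the function names (`wire_format_header`, `frame_avro_payload`) are my own, not from any library.

```python
import struct

# Confluent Schema Registry wire format:
# byte 0        -> magic byte (always 0)
# bytes 1-4     -> schema ID, big-endian unsigned int
# bytes 5...    -> Avro-serialized payload
MAGIC_BYTE = 0


def wire_format_header(schema_id: int) -> bytes:
    """Build the 5-byte Schema Registry header for a given schema ID."""
    return struct.pack(">bI", MAGIC_BYTE, schema_id)


def frame_avro_payload(schema_id: int, avro_bytes: bytes) -> bytes:
    """Prepend the Confluent header so consumers can resolve the schema."""
    return wire_format_header(schema_id) + avro_bytes


# Example with schema ID 42 and a dummy Avro payload.
framed = frame_avro_payload(42, b"\x02hi")
```

In a Spark DataFrame you would do the same concatenation column-wise, e.g. something along the lines of `concat(lit(header_bytes), to_avro(col("value"), schema_str))` with `header_bytes` built as above for your registered schema's ID — I haven't verified that exact expression on your Databricks runtime, so treat it as a sketch.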