Kafka record size between 10-100 MB


How can I handle records whose individual size is between 10 and 100 MB? What would the ideal solution be?

I am sharing file contents over Kafka, with a mapping of 1 record : 1 file’s content.


It’s possible to configure Kafka brokers and clients to handle messages that large, but it’s not a good idea: it causes all sorts of CPU and JVM heap issues.
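For reference, if you did go down that road, these are roughly the settings involved (a sketch, not a recommendation; the exact values depend on your workload, and every one of them would need to be raised well above its default to pass 100 MB records):

```
# Broker / topic side
message.max.bytes=104857600            # broker-wide max record batch size
max.message.bytes=104857600            # per-topic override of the same limit
replica.fetch.max.bytes=104857600     # followers must be able to replicate the batch

# Producer side
max.request.size=104857600             # producer refuses to send requests larger than this

# Consumer side
fetch.max.bytes=104857600              # max data returned per fetch request
max.partition.fetch.bytes=104857600   # max data returned per partition per fetch
```

Raising all of these in lockstep is exactly the kind of operational fragility (plus the heap pressure from buffering 100 MB batches) that makes this approach unattractive.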

The best approach is to place the large blobs in something like S3 and ship the URI in the Kafka message (the “claim check” pattern).
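A minimal sketch of that pattern in Python. The bucket and key names are made up for illustration; the actual upload (e.g. boto3's `upload_file`) and the Kafka produce call are indicated in comments, since they depend on your storage and client libraries. The point is that the Kafka record shrinks to a small pointer:

```python
import json
import uuid

def build_claim_check(bucket: str, key: str, size_bytes: int) -> bytes:
    """Build the small Kafka message that points at the large blob.

    The file itself is uploaded to object storage first (e.g. with
    boto3: s3.upload_file(local_path, bucket, key)); the consumer
    downloads it via the URI when it processes the record.
    """
    payload = {
        "uri": f"s3://{bucket}/{key}",     # where the consumer fetches the blob
        "size_bytes": size_bytes,           # lets consumers sanity-check the download
        "content_id": str(uuid.uuid4()),    # stable id for de-duplication / tracing
    }
    return json.dumps(payload).encode("utf-8")

# Hypothetical usage: upload the 75 MB file to S3 first, then produce
# this small message, e.g. producer.produce("files", value=msg).
msg = build_claim_check("my-file-bucket", "uploads/report.bin", 75 * 1024**2)
print(len(msg) < 1024)  # the record is now a few hundred bytes, not 75 MB
```

The consumer reverses the steps: read the message, parse the URI, fetch the object from S3, process it. Kafka stays fast and small-message-shaped regardless of file size.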


@mitchell-h Thanks, that helps.