org.apache.kafka.common.errors.RecordTooLargeException

Hello, I am trying to address a large payload size issue. When I try to push a large payload (30MB) using Offset Explorer, I see this message:

“org.apache.kafka.common.errors.RecordTooLargeException
The message is 3280843 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration”

Here are the properties I have set in the different files:

-- server.properties
message.max.bytes=104857600
replica.fetch.max.bytes=104857600
-- producer.properties
max.request.size=104857600
-- consumer.properties
max.partition.fetch.bytes=104857600
-- topic -> payload.max.test
max.message.bytes=104857600
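
In case it is relevant, this is roughly how I understand the producer side of this configuration (a minimal Java sketch, not my actual setup: the bootstrap address, serializers, and class name are placeholders; only the topic payload.max.test and the sizes come from the settings above):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class LargePayloadProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; replace with the real bootstrap servers.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        // max.request.size is a client-side setting: it has to be configured on the
        // producer that actually sends the data, not only in a producer.properties
        // file sitting on the broker host.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 104857600);
        // buffer.memory (default 32 MB) should comfortably exceed a single record,
        // otherwise the producer cannot buffer the record at all.
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 134217728L);

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = new byte[30 * 1024 * 1024]; // ~30 MB test payload
            producer.send(new ProducerRecord<>("payload.max.test", payload));
            producer.flush();
        }
    }
}
```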

Can someone help me if I am missing any other property?

@gg24 Looking at the source code, I noticed that this error message was improved in AK 2.8 to include the configured size in the exception message.

I’m wondering if you could try an upgraded AK client so that you can better verify that your client configuration is taking the desired effect.
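
As a side note, if you want to double-check that the topic-level override actually landed on the broker, something like the sketch below should print the effective max.message.bytes and where it comes from (the bootstrap address is an assumption on my part; the topic name is taken from your post):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckMaxMessageBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; use the cluster's actual address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payload.max.test");
            Config config = admin.describeConfigs(List.of(topic)).all().get().get(topic);
            // Prints the effective max.message.bytes entry for the topic, including
            // whether it is a topic override or the broker default.
            System.out.println(config.get("max.message.bytes"));
        }
    }
}
```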