The producer configuration settings most directly related to this error are max.block.ms and buffer.memory.
The error occurs when a producer send() blocks for longer than max.block.ms because allocating the additional buffer memory needed for the record would exceed buffer.memory. That memory is held by existing message batches that have not yet been flushed. This can be caused by producer batching behavior (controlled by linger.ms and batch.size) and by producer request behavior (controlled by max.in.flight.requests.per.connection). It can also stem from the brokers' ability to keep up with the requests they receive, which include produce and fetch requests from clients as well as replica fetch requests from other brokers.
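As an illustrative starting point (not a recommendation — the right values depend entirely on your message sizes, throughput, and broker capacity), these are the producer properties involved, shown here with their Kafka defaults and typical directions of adjustment:

```properties
# producer.properties — defaults shown; values are illustrative only
buffer.memory=33554432        # 32 MB total buffer; raise if sends block on allocation
max.block.ms=60000            # how long send()/partitionsFor() may block before failing
linger.ms=0                   # raising this lets batches fill, improving batching efficiency
batch.size=16384              # max bytes per partition batch before it is sent
max.in.flight.requests.per.connection=5   # unacked requests per broker connection
```

Raising buffer.memory or max.block.ms only masks the problem if the underlying cause is slow broker request processing, so measure before tuning.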
Bottom line: there is no simple answer, and more detail is needed to diagnose it. Confluent Control Center is one source for that detail; JMX metrics are another.
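If you go the JMX route, these are some of the metrics worth watching for this particular symptom (producer metrics confirm buffer exhaustion; the broker metrics indicate whether the brokers are the bottleneck — this is a partial list, not an exhaustive diagnosis):

```
# Producer-side (kafka.producer:type=producer-metrics,client-id=*)
buffer-available-bytes        # free space left in buffer.memory
bufferpool-wait-time-total    # time appends spent waiting for buffer space

# Broker-side
kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent
kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce
```

A low buffer-available-bytes with rising bufferpool-wait-time-total points at the producer buffer; a low request handler idle percent or high produce request times points at the brokers.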
Producer configs - Producer Configurations — Confluent Documentation
Broker configs - Broker Configurations — Confluent Documentation
Monitoring Kafka metrics - Monitoring Kafka — Confluent Documentation