A maximum of 4,000 partitions per broker is often recommended (Kafka: The Definitive Guide, the Apache Kafka and Confluent blogs). My understanding is that more partitions per broker require more RAM and probably also more CPU, because of the additional work that is required per partition.
However, I can easily scale CPU and RAM vertically in the cloud, so I wonder whether this recommendation still applies to newer Kafka versions (v2.6.0+) and, if so, what the actual problem with more than 4k partitions is. How can I tell whether my broker is suffering from problems caused by the number of partitions?
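For reference, this is the kind of per-broker count I am looking at. A minimal sketch that tallies leader and replica partitions per broker from the cluster metadata, assuming the confluent-kafka Python client and a placeholder bootstrap address:

```python
from collections import Counter

from confluent_kafka.admin import AdminClient

# "broker-1:9092" is a placeholder; point this at your own cluster.
admin = AdminClient({"bootstrap.servers": "broker-1:9092"})
metadata = admin.list_topics(timeout=10)

leaders = Counter()   # partitions a broker leads
replicas = Counter()  # partitions a broker holds any replica of

for topic in metadata.topics.values():
    for partition in topic.partitions.values():
        leaders[partition.leader] += 1
        for broker_id in partition.replicas:
            replicas[broker_id] += 1

for broker_id in sorted(metadata.brokers):
    print(f"broker {broker_id}: leads {leaders[broker_id]} partitions, "
          f"holds {replicas[broker_id]} replicas")
```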
Background: I run a Kafka cluster where each broker hosts ~6-7k partitions. I noticed a large number of open file descriptors and a high mmap count, but once I raise those limits I would expect the only additional cost to be more RAM usage.
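To make the observation concrete, here is roughly how I compare those counts against the broker process's limits. This is a Linux-only sketch run on the broker host; the PID is a placeholder to be replaced with the real broker PID (e.g. from `pgrep -f kafka.Kafka`):

```python
import os

BROKER_PID = 12345  # placeholder: the Kafka broker's PID on this host

# Open file descriptors of the broker process.
fd_count = len(os.listdir(f"/proc/{BROKER_PID}/fd"))

# Memory mappings of the broker process (one per line in /proc/<pid>/maps).
with open(f"/proc/{BROKER_PID}/maps") as f:
    mmap_count = sum(1 for _ in f)

# Host-wide mmap ceiling.
with open("/proc/sys/vm/max_map_count") as f:
    max_map_count = int(f.read())

# Per-process soft limit on open files, parsed from /proc/<pid>/limits.
with open(f"/proc/{BROKER_PID}/limits") as f:
    for line in f:
        if line.startswith("Max open files"):
            nofile_soft = int(line.split()[3])

print(f"open fds: {fd_count} / {nofile_soft}")
print(f"mappings: {mmap_count} / {max_map_count}")
```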