Kafka RAM consumption

I have 1 Kafka broker and 1 ZooKeeper instance on the same VM with the following specs:

vCPU: 96

RAM: 442 GB

For testing purposes I want to see how many partitions a single broker can hold, so I do the following steps:

  1. Create a topic with 128 partitions
  2. Scale this topic up to 50,000 partitions (example commands below)
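
For reference, the commands look roughly like this with the standard kafka-topics.sh CLI (the topic name and bootstrap address are placeholders):

# create the topic with 128 partitions (replication factor 1 on the single broker)
kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test-topic --partitions 128 --replication-factor 1

# grow the partition count (Kafka only allows increasing it, never decreasing)
kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic test-topic --partitions 50000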

Then, when the number of partitions reaches 32168, the Kafka broker exits with the following error log in hs_err_pid.log:

 head -100 hs_err_pid120814.log

#

# There is insufficient memory for the Java Runtime Environment to continue.

# Native memory allocation (mmap) failed to map 262144 bytes for committing reserved memory.

# Possible reasons:

#   The system is out of physical RAM or swap space

# Possible solutions:

#   Reduce memory load on the system

#   Increase physical memory or swap space

#   Check if swap backing store is full

#   Decrease Java heap size (-Xmx/-Xms)

#   Decrease number of Java threads

#   Decrease Java thread stack sizes (-Xss)

#   Set larger code cache with -XX:ReservedCodeCacheSize=

# This output file may be truncated or incomplete.

#

#  Out of Memory Error (os_linux.cpp:2795), pid=120814, tid=0x00007f4ba5e2d700

#

# JRE version: OpenJDK Runtime Environment (8.0_362-b09) (build 1.8.0_362-b09)

# Java VM: OpenJDK 64-Bit Server VM (25.362-b09 mixed mode linux-amd64 )

# Core dump written. Default location: //core or core.120814

But why out of memory? Here is the Kafka heap size:


Environment="KAFKA_HEAP_OPTS=-Xms1G -Xmx96G"

and here is the peak RAM consumption while the partitions are being created:

free -h

              total        used        free      shared  buff/cache   available
Mem:          442Gi       7.4Gi       435Gi       2.2Gi       4.5Gi       435Gi
Swap:            0B          0B          0B

Why does it exit because of RAM? I have no limits set in my Linux system:


ulimit -a

core file size          (blocks, -c) unlimited

data seg size           (kbytes, -d) unlimited

scheduling priority             (-e) 0

file size               (blocks, -f) unlimited

pending signals                 (-i) unlimited

max locked memory       (kbytes, -l) unlimited

max memory size         (kbytes, -m) unlimited

open files                      (-n) 1048576

pipe size            (512 bytes, -p) 8

POSIX message queues     (bytes, -q) unlimited

real-time priority              (-r) 0

stack size              (kbytes, -s) unlimited

cpu time               (seconds, -t) unlimited

max user processes              (-u) unlimited

virtual memory          (kbytes, -v) unlimited

file locks                      (-x) unlimited

Solved it

These kinds of errors can occur not only when the total amount of heap space is exhausted, but also when the number of memory-mapped areas is exhausted.

On Linux, the maximum per-process number of map areas is controlled by the vm.max_map_count sysctl parameter (which defaults to 65530 on most kernels). So I would try, for example, doubling it and seeing what happens:

sysctl -w vm.max_map_count=131072
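
The sysctl -w change only lasts until the next reboot; to persist it, a minimal sketch (the file name under /etc/sysctl.d is arbitrary):

echo "vm.max_map_count = 131072" | sudo tee /etc/sysctl.d/99-kafka.conf
sudo sysctl --system    # reload sysctl settings from all standard config locations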
An indicator that you have hit this problem is when the hs_err_pid crash log lists a huge number of open mmap areas (under the "Dynamic Libraries" section).
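
You can also compare the broker's live mapping count against the limit; a quick check, assuming the broker process can be found by its kafka.Kafka main class (adjust the pgrep pattern to your setup):

sysctl vm.max_map_count                     # current per-process limit
wc -l /proc/$(pgrep -f kafka.Kafka)/maps    # mmap areas currently used by the broker

Each partition's log segments come with memory-mapped index files, which is why the mapping count grows roughly in step with the partition count.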