Stream repartition with continuously increasing memory usage

I have a problem with an application using a repartitioning step.

```kotlin
.repartition(Repartitioned.numberOfPartitions<String?, Anl1sHist?>(1).withName("kStream-repartition"))
```

This has not been a problem before. The repartition topic seems to grow continuously, both in message count and in size.

Any idea what may cause this?

In general, KS creates repartition topics with infinite retention and issues explicit “delete records” requests to truncate data that is no longer needed.
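You can verify this setup with the admin client. A minimal Kotlin sketch (bootstrap servers and topic name are placeholders; the internal topic is named `<application.id>-<name>-repartition`):

```kotlin
import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.common.config.ConfigResource

fun main() {
    val props = mapOf(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092")
    AdminClient.create(props).use { admin ->
        // "my-app-kStream-repartition" is a placeholder for the actual internal topic name.
        val topic = ConfigResource(ConfigResource.Type.TOPIC, "my-app-kStream-repartition")
        val config = admin.describeConfigs(listOf(topic)).all().get().getValue(topic)
        // KS sets retention.ms to -1 (infinite) on repartition topics and relies on
        // explicit "delete records" requests instead of time-based retention.
        println("retention.ms   = ${config.get("retention.ms").value()}")
        println("cleanup.policy = ${config.get("cleanup.policy").value()}")
    }
}
```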

By default, after committing offset X on partition P, KS issues a corresponding “delete records” request for this partition and offset. (In newer versions, there is a config, `repartition.purge.interval.ms`, that allows you to configure how often “delete records” requests are sent.)
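For example (a sketch, assuming Kafka Streams 3.2+, where KIP-811 added this config; application id and bootstrap servers are placeholders):

```kotlin
import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = Properties().apply {
    put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app")            // placeholder app id
    put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    // How often KS sends "delete records" requests for repartition topics
    // (default 30000 ms, i.e. a purge attempt roughly every 30 seconds).
    put(StreamsConfig.REPARTITION_PURGE_INTERVAL_MS_CONFIG, 30_000L)
}
```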

Hence, you should first check whether KS makes progress on the repartition topic and actually commits offsets; if not, you need to find out why.
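One way to check both at once is to compare the app's committed offsets with the topic's log-start offsets; a sketch (the group id of a Streams app is its `application.id`, "my-app" is a placeholder):

```kotlin
import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.admin.OffsetSpec

fun main() {
    val props = mapOf(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092")
    AdminClient.create(props).use { admin ->
        val committed = admin.listConsumerGroupOffsets("my-app")
            .partitionsToOffsetAndMetadata().get()
        val partitions = committed.keys.filter { it.topic().endsWith("-repartition") }
        // The log-start offset only advances when "delete records" requests are executed.
        val logStart = admin.listOffsets(partitions.associateWith { OffsetSpec.earliest() })
            .all().get()
        for (tp in partitions) {
            // If "committed" advances over time but "logStart" does not,
            // KS is consuming the data but it is never purged.
            println("$tp committed=${committed.getValue(tp).offset()} logStart=${logStart.getValue(tp).offset()}")
        }
    }
}
```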

If KS does make progress, the question is why data is not deleted. Are the requests sent to begin with? Do you have the right permissions configured? (KS requires “delete” permissions for this operation to work: Secure Deployment for Kafka Streams in Confluent Platform | Confluent Documentation.) If the requests reach the broker and you have the right permissions but purging still does not happen, the question moves to the broker side: why is the request not executed? For example, the current active segment can never be deleted; data can only be purged after a segment roll. Thus, a “delete records” request might still succeed, yet the actual truncation might lag behind the requested offset.
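If permissions turn out to be the problem, you can grant the Streams principal “delete” on the app's internal topics. A sketch (principal and `application.id` prefix are placeholders; all internal topics share the `application.id` as their name prefix, so a PREFIXED pattern covers the repartition topics too):

```kotlin
import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.common.acl.AccessControlEntry
import org.apache.kafka.common.acl.AclBinding
import org.apache.kafka.common.acl.AclOperation
import org.apache.kafka.common.acl.AclPermissionType
import org.apache.kafka.common.resource.PatternType
import org.apache.kafka.common.resource.ResourcePattern
import org.apache.kafka.common.resource.ResourceType

fun main() {
    val props = mapOf(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG to "localhost:9092")
    AdminClient.create(props).use { admin ->
        val binding = AclBinding(
            ResourcePattern(ResourceType.TOPIC, "my-app-", PatternType.PREFIXED),
            AccessControlEntry("User:streams-app", "*", AclOperation.DELETE, AclPermissionType.ALLOW)
        )
        admin.createAcls(listOf(binding)).all().get()
    }
}
```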

HTH.