which shows that updates for a window are emitted much more frequently than once per minute. Three timestamps are involved (the one you converted to a datetime is the record timestamp, i.e., the last timestamp within the window):
window start timestamp: 1620824040000 (encoded in the message key)
window end timestamp: 1620824100000 (not physically stored in the message, but computed as “window start timestamp + window size”)
record timestamp: 1620824089981 (the timestamp of the output record itself)
The record timestamp is not the same as the window end timestamp, but is computed as max(r1.ts, ..., rn.ts) over all records r1, ..., rn that fall into the window. So it's basically the timestamp of the last update to the window.
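For illustration, here is a minimal sketch (Java DSL; the topic name "input", the serdes, and the config values are assumptions) that prints the window boundaries for every update. The boundaries are carried by the Windowed key, and the end is derived as start plus window size:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowTimestampsDemo {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(TimeWindows.of(Duration.ofMinutes(1))) // 1-minute tumbling windows
               .count()
               .toStream()
               // the Windowed<K> key carries the window boundaries;
               // window end = window start + window size
               .foreach((windowedKey, count) -> System.out.printf(
                   "key=%s windowStart=%s windowEnd=%s count=%d%n",
                   windowedKey.key(),
                   Instant.ofEpochMilli(windowedKey.window().start()),
                   Instant.ofEpochMilli(windowedKey.window().end()),
                   count));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "window-timestamps-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```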
Kafka Streams implements a continuous update model: note that the result of aggregate() is a KTable. This table will contain one row per key and window, and each time a new input record is received, the corresponding row will be updated.
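A minimal sketch of such an aggregation (same imports and hypothetical topic as above; the value handling is made up for illustration):

```java
// sum up the value lengths per key and 1-minute window;
// the result is a KTable with one row per key-and-window
KTable<Windowed<String>, Long> sums = builder
    .stream("input", Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
    .aggregate(
        () -> 0L,                                   // initializer: a fresh window starts at 0
        (key, value, agg) -> agg + value.length(),  // every new input record updates the row
        Materialized.with(Serdes.String(), Serdes.Long()));
```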
When you call toStream() you get the changelog stream of the table. Internally, Kafka Streams applies caching to tables, and thus you see multiple (but not necessarily all) updates per window by default; if you disabled caching, you would see every update, i.e., one output record for each input record. (Cf. Kafka Streams Memory Management | Confluent Documentation)
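If you do want one output per input record, you can disable the record cache via configuration; a sketch using the standard StreamsConfig keys:

```java
Properties props = new Properties();
// disable the record cache so every single update is forwarded downstream
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
// alternatively, keep caching but flush more often:
// props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100);
```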
If you only want to get the “final” result per window in the stream, you can apply the suppress() operator after the aggregation. Note that you should specify a grace period on your window definition, because it otherwise defaults to 24 hours, and thus suppress() would not emit anything for a day…
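A sketch of combining an explicit grace period with suppress() (window size, grace period, and topic names are placeholders):

```java
builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey()
    // explicit grace period instead of the 24h default
    .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(Duration.ofSeconds(10)))
    .count()
    // hold back intermediate updates; emit only the final result once the window closes
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
    // flatten the windowed key into a plain string so default serdes work downstream
    .toStream((windowedKey, count) -> windowedKey.key() + "@" + windowedKey.window().start())
    .to("final-counts", Produced.with(Serdes.String(), Serdes.Long()));
```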