How to trigger a window by processing time?

I know a stream thread blocks on its Kafka partitions, but when and how is a window triggered by processing time?

Kafka Streams is purely based on event-time, and you cannot trigger a window on processing time.

Not sure what your exact scenario is, but you would either send a “dummy tick” message to advance stream-time, or you would need to move off the DSL and use the Processor API, which allows you to run wall-clock based punctuations.
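To illustrate, here is a minimal sketch of such a wall-clock punctuation, assuming the newer Processor API (`org.apache.kafka.streams.processor.api`); the class name and the toy in-memory sum are made up for illustration only:

```java
import java.time.Duration;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Hypothetical processor that forwards an aggregate on a wall-clock schedule,
// independent of the event-time of the records being processed.
public class WallClockFlushProcessor implements Processor<String, Long, String, Long> {

    private ProcessorContext<String, Long> context;
    private long runningSum = 0L; // toy aggregate, for illustration only

    @Override
    public void init(final ProcessorContext<String, Long> context) {
        this.context = context;
        // Schedule a punctuation every 30 seconds of wall-clock (processing) time.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            context.forward(new Record<>("sum", runningSum, timestamp));
            runningSum = 0L;
        });
    }

    @Override
    public void process(final Record<String, Long> record) {
        runningSum += record.value();
    }
}
```

You would wire such a processor into the topology yourself, for example via `Topology#addProcessor(...)` or `KStream#process(...)`, depending on the Kafka Streams version you are on.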

Thanks a lot, I have been studying the source code. I know a stream thread processes a batch of records in a loop: when the consumer polls a batch, the task first adds the records, and then a processing cycle starts. When aggregating by key, the data is stored in a sorted map, and the thread gets the time window from that map. If it is time to trigger, the key will be sent to the next producer? Is that right? Please describe it in more detail, thank you.

I know a stream thread processes a batch of records in a loop

Well, yes and no. It’s an internal detail to improve efficiency, but from a semantic point of view it has no impact. Semantically, it’s record-by-record processing.

when aggregating by key, the data is stored in a sorted map, and the thread gets the time window from that map

Not sure what you mean by this?

if it is time to trigger, the key will be sent to the next producer

Not sure if I can follow; however, for a windowed aggregation, the state store will have one entry (“row”) per key and window, and those rows are updated each time an input record is processed. Conceptually, each update is sent downstream. However, we apply some internal caching (Kafka Streams Memory Management | Confluent Documentation), and thus by default you may not see every update. A good exercise to understand this better is to disable caching by setting the cache size to zero in the config.
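As a sketch, disabling the cache could look like this (the application id and bootstrap server are placeholders; on newer versions the same setting is exposed as `StreamsConfig.STATESTORE_CACHE_MAX_BYTES_CONFIG`):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class DisableCachingConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-agg-demo"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Cache size 0 => every update to the aggregation state store
        // is forwarded downstream instead of being buffered/de-duplicated.
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        return props;
    }
}
```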

Thus, for a windowed aggregation, we don’t accumulate input data and compute the window result once; instead, we continuously update/refine the result. For example, if you compute the sum over three input records with values 3, 5, 2 (just as an example), you get the result records 3, 8, 10.
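A minimal windowed sum in the DSL might look like the sketch below, assuming a recent Kafka Streams version (older versions use `TimeWindows.of(...)` instead of `ofSizeWithNoGrace`); the topic names are hypothetical. With caching disabled, inputs 3, 5, 2 for the same key and window produce the downstream records 3, then 8, then 10:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedSumExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // "values" is a hypothetical input topic with <String, Long> records.
        KStream<String, Long> input = builder.stream(
                "values", Consumed.with(Serdes.String(), Serdes.Long()));

        // 1-minute tumbling windows, summing the values per key.
        KTable<Windowed<String>, Long> sums = input
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
                .reduce(Long::sum);

        // Each update of the window result is (conceptually) emitted downstream.
        sums.toStream()
            .map((windowedKey, sum) -> KeyValue.pair(windowedKey.key(), sum))
            .to("sums", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
```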

Thank you for helping me. I understand now that the thread updates the state store for each record. Thanks a lot!