Hello,
I am working on a fairly simple financial application with a bitemporal data flow: each record has a “marketTime”, a “correctionTime”, and a “business key”.
I use a very simple Kafka Streams app that does a groupByKey followed by a reduce (comparing timestamps and keeping the latest record). A GlobalKTable is built at the end, which stores the “latest” record per “business key”.
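To make this concrete, here is roughly what the topology looks like. This is only a minimal sketch: the topic names (“quotes”, “latest-quotes”), the Quote type, and the serde setup are simplified placeholders, not the real application.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;

public class LatestByBusinessKey {

    // Hypothetical value type; the real record and field names differ.
    public record Quote(String businessKey, long marketTime, long correctionTime) {}

    // Serdes for Quote are assumed to be configured as defaults elsewhere.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        builder.<String, Quote>stream("quotes")          // keyed by "business key"
            .groupByKey()
            .reduce((oldVal, newVal) ->
                    // keep the record with the later correctionTime
                    newVal.correctionTime() >= oldVal.correctionTime() ? newVal : oldVal,
                Materialized.as("latest-by-business-key"))
            .toStream()
            .to("latest-quotes");

        // The output topic is then read back as a GlobalKTable holding the
        // "latest" record per business key.
        GlobalKTable<String, Quote> latest = builder.globalTable("latest-quotes");

        return builder.build();
    }
}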
When I run a test at 1 record per minute, it takes 8 seconds end to end.
When I run 1,000 records per second, the average latency per message is about 17 seconds. Scalability is excellent; the processor runs in a single JVM with 8 threads and 128 partitions.
But why does a single message per minute take 8 seconds on Confluent Cloud? The average message size is 2 KB. I frankly expected 500-700 ms (roughly the time it takes a Google Cloud Function to update a remote database, for example).
I tuned acks=1 and min.insync.replicas=1 on the Producer side, but I can’t find similar settings for the Processor.
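For reference, this is roughly how I applied those settings; a sketch only, with the application id and bootstrap servers as placeholders.

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProps {
    public static Properties build() {
        Properties props = new Properties();
        // application id and bootstrap servers are placeholders
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "latest-by-business-key");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "<confluent-cloud-bootstrap>");

        // acks is forwarded to the producer embedded in the Streams app
        props.put(StreamsConfig.producerPrefix("acks"), "1");

        // min.insync.replicas is a topic/broker-level setting, not a producer
        // one, so it was set on the topics themselves rather than here.
        return props;
    }
}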
Throughput is excellent, but latency? 8 seconds of latency under near-zero load (a single message per minute)?