When I have a Kafka Streams job consuming messages with defined keys, messages with the same key land on the same partition, and all messages from a given partition are handled by a single instance (a single jar), aren't they?
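To illustrate my understanding of the first point, here is a minimal sketch. The hash below is an assumption for illustration only; Kafka's real default partitioner uses murmur2 over the serialized key bytes, but the property is the same: a deterministic hash of the key picks the partition.

```java
public class KeyPartitioning {
    // Simplified stand-in for Kafka's default partitioner, which actually
    // uses murmur2 over the serialized key bytes (assumption for illustration).
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 4;
        // The same key always hashes to the same partition, so one
        // consumer instance sees all records for that key.
        System.out.println(partitionFor("order-42", numPartitions)
                == partitionFor("order-42", numPartitions)); // prints "true"
    }
}
```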
What if I change the key mid-stream? The messages are then repartitioned, but when exactly does that happen? And are messages that now share the same new key handled by the same instance?
    sourceStream
        .selectKey((k, v) -> "MY_KEY") // here I add a key
        .transformValues(() -> new MyTransformer(MY_STORE), MY_STORE);
Assume the source messages have null keys and I run the above stream job with 2 jars.
The source can be spread over many partitions, but the sink will be produced to a single partition, won't it?
Will the work then be handled by 2 instances or just 1 jar?
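What I suspect happens can be sketched like this: once every record carries the constant key "MY_KEY", every record hashes to the same partition, so only one of the two instances would get any work. Again, the hash is a simplified stand-in for Kafka's real murmur2-based partitioner (assumption for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class ConstantKeySkew {
    // Simplified stand-in for Kafka's murmur2-based default partitioner (assumption).
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 4;
        Set<Integer> used = new HashSet<>();
        // After selectKey((k, v) -> "MY_KEY"), every record has the same key...
        for (int i = 0; i < 1000; i++) {
            used.add(partitionFor("MY_KEY", numPartitions));
        }
        // ...so all records land on exactly one partition, and only the
        // instance that owns that partition would do the stateful work.
        System.out.println(used.size()); // prints "1"
    }
}
```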