I am currently working on a project where we use Kafka Streams. I am consuming messages from a few topics, which get stored in a GlobalKTable or a KTable depending on the need. We run this application in k8s, with the HPA strategy shown below.
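For context, the topology is roughly along these lines (a simplified sketch; the topic names below are placeholders, not our actual ones):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;

public class TopologySketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Reference data as a GlobalKTable: every instance reads ALL partitions of this topic,
        // so it does not take part in the consumer-group partition assignment.
        builder.globalTable("reference-topic",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Main data as a KTable: its partitions are divided across the instances of the group.
        builder.table("main-topic",
                Consumed.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}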
When I deploy the application, both initial k8s pods consume the topic messages and the partitions are divided equally, 30 each (the subscribed topic has 60 partitions).
But when HPA kicks in and a new pod gets created, it does not get any partitions assigned, hence it remains idle. I tried changing the max.poll.records and max.poll.interval.ms values, but no luck.
Please note, the consumer group is the same for all 3 k8s pods.
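The Streams configuration the pods share looks roughly like this (a minimal sketch; the application id, bootstrap servers, and the exact values are placeholders for what we tried):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsConfigSketch {
    public static Properties build() {
        Properties props = new Properties();
        // All 3 pods use the same application.id, i.e. they join the same consumer group.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // Consumer settings we experimented with; changing them did not help.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);
        return props;
    }
}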
Has anyone faced this issue?
hpa:
  minReplicaCount: 2 # this needs to be changed to 4 for prod deployment
  maxReplicaCount: 3
  maxReplicaCountNonFullTraffic: 20
  averageValue: "700m"
  averageValueNonFullTraffic: "1400m"
  selectPolicyForScaleUp: Min
  scaleDown:
    stabilizationWindowSeconds: 1800
    policies:
      - type: Percent
        value: 10
        periodSeconds: 180
      - type: Pods
        value: 1
        periodSeconds: 180
  scaleUp:
    stabilizationWindowSeconds: 120
    policies:
      - type: Percent
        value: 30
        periodSeconds: 90
      - type: Pods
        value: 1
        periodSeconds: 90
So the new pod does not get any partitions assigned and just sits idle?
If that’s the case, it should have nothing to do with k8s but with Kafka Streams itself. The new instance should send a “join group request” to the broker, which would trigger a so-called rebalance. Such an event should be logged on both the broker and the client side, so inspecting the logs seems like a good first step.
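Besides the logs, a quick way to observe this on the client side is to register a state listener on the new instance. If it never even enters REBALANCING, it never triggered a rebalance at all; if it reaches RUNNING but stays idle, it joined the group but was assigned no tasks. A minimal sketch (your own topology and properties assumed):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

public class RebalanceObserver {
    public static KafkaStreams start(Topology topology, Properties props) {
        KafkaStreams streams = new KafkaStreams(topology, props);
        // Register before start(); prints every state transition of this instance.
        // On a healthy scale-out the new pod should go CREATED -> REBALANCING -> RUNNING.
        streams.setStateListener((newState, oldState) ->
                System.out.println("Streams state: " + oldState + " -> " + newState));
        streams.start();
        return streams;
    }
}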
Partition assignment is not done by the broker-side group coordinator, but by one of the Kafka Streams instances, so you would need to check the Kafka Streams logs of the so-called “group leader” that computes the assignment (you need to find the right instance).
It’s weird that switching to in-memory changes the behavior; it should not have any impact…