I am using Spring Cloud Stream with Kafka and Avro.
I want messages to be consumed as fast as possible, but consumption is not meeting my expectations.
I have over 100 million messages to consume and 20 pods, and complete consumption takes around 6 hours, which is far too long. Each processor just consumes a message, merges it with some other object, and sends the result to a new topic (a minimal sketch is below). I have increased the number of consumer instances but still cannot achieve faster processing.
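Each processor looks roughly like this (RiskEvent, ReferenceData, and EnrichedRisk are placeholders for my actual Avro-generated classes, and the lookup/merge bodies stand in for my real logic):

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RiskProcessorConfig {

    @Bean
    public Function<KStream<String, RiskEvent>, KStream<String, EnrichedRisk>> RiskProcessor1() {
        // Stateless per-record work: merge each incoming event with another
        // object; the binder routes the returned stream to RiskProcessor1-out-0.
        return input -> input.mapValues(event -> merge(event, lookup(event)));
    }

    // Placeholder for fetching the object I merge with (hypothetical).
    private ReferenceData lookup(RiskEvent event) {
        return new ReferenceData();
    }

    // Placeholder for my merging logic (hypothetical).
    private EnrichedRisk merge(RiskEvent event, ReferenceData other) {
        return new EnrichedRisk(event, other);
    }
}

RiskProcessor2 through RiskProcessor5 are identical apart from their topics.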
I don't care if messages are lost or consumed twice.
My configuration is below.
How can I make this faster? Are there any tips on spawning more threads or a Kafka property I should set? (I have added my best guess after the config below.)
PS: I have 5 different listeners consuming from 5 different topics.
spring:
  cloud:
    function:
      definition: RiskProcessor1;RiskProcessor2;RiskProcessor3;RiskProcessor4;RiskProcessor5
    stream:
      bindings:
        RiskProcessor1-in-0:
          destination: ******
        RiskProcessor1-out-0:
          destination: *****
        RiskProcessor2-in-0:
          destination: *****
        RiskProcessor2-out-0:
          destination: *****
        RiskProcessor3-in-0:
          destination: *****
        RiskProcessor3-out-0:
          destination: *****
        RiskProcessor4-in-0:
          destination: *****
        RiskProcessor4-out-0:
          destination: *****
        RiskProcessor5-in-0:
          destination: *****
        RiskProcessor5-out-0:
          destination: *****
      kafka:
        streams:
          binder:
            brokers: kaas-int.nam.nsroot.net:9093
            functions:
              RiskProcessor1:
                applicationId: RiskProcessor1_development
              RiskProcessor2:
                applicationId: RiskProcessor2_development
              RiskProcessor3:
                applicationId: RiskProcessor3_development
              RiskProcessor4:
                applicationId: RiskProcessor4_development
              RiskProcessor5:
                applicationId: RiskProcessor5_development
            configuration:
              commit.interval.ms: 1000
              security.protocol: SSL
              default:
                deserialization:
                  exception:
                    handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
              schema:
                registry:
                  url: *****:9081
          default:
            consumer:
              keySerde: ****
              valueSerde: ****
            producer:
              keySerde: ****
              valueSerde: ****
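For example, if more threads per pod is the answer, I assume the property would go under the binder configuration like this (num.stream.threads is the standard Kafka Streams setting; the value 4 is only an illustration, and I have not verified that this is the right place to set it):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # Illustrative value only: Kafka Streams defaults to 1 thread per instance.
              num.stream.threads: 4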