Spring Cloud Stream Kafka is very slow

I am using spring cloud stream with Kafka and Avro.

I want messages to be consumed very fast, but unfortunately it is not meeting my expectations.
I have over 100 million messages to consume and 20 pods, yet complete consumption takes around 6 hours, which is far too long. I am just consuming each message, merging it with another object, and sending the result to a new topic. I have increased the number of consumer instances but still cannot achieve fast processing.

I don’t care if messages are lost or they are consumed twice.

Here is the configuration I am using.

Please suggest how I can make it faster. Any tips on spawning more threads or using some Kafka property?

PS: I have 5 different listeners consuming from different topics.

spring:
  cloud:
    config:
    function:
      definition: RiskProcessor1;RiskProcessor2;RiskProcessor3;RiskProcessor4;RiskProcessor5
    stream:
      bindings:
        RiskProcessor1-in-0:
          destination: ******
        RiskProcessor1-out-0:
          destination: *****
        RiskProcessor2-in-0:
          destination: *****
        RiskProcessor2-out-0:
          destination: *****
        RiskProcessor3-in-0:
          destination: *****
        RiskProcessor3-out-0:
          destination: *****
        RiskProcessor4-in-0:
          destination: *****
        RiskProcessor4-out-0:
          destination: *****
        RiskProcessor5-in-0:
          destination: *****
        RiskProcessor5-out-0:
          destination: *****
      kafka:
        streams:
          binder:
            brokers: kaas-int.nam.nsroot.net:9093
            functions:
              RiskProcessor1:
                applicationId: RiskProcessor1_development
              RiskProcessor2:
                applicationId: RiskProcessor2_development
              RiskProcessor3:
                applicationId: RiskProcessor3_development
              RiskProcessor4:
                applicationId: RiskProcessor4_development
              RiskProcessor5:
                applicationId: RiskProcessor5_development
            configuration:
              commit.interval.ms: 1000
              security.protocol: SSL
              default:
                deserialization:
                  exception:
                    handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
              schema:
                registry:
                  url: *****:9081
          default:
            consumer:
              keySerde: ****
              valueSerde: ****
            producer:
              keySerde: ****
              valueSerde: ****

If anyone is wondering: the topic has 20 partitions, and I have 20 pods, where each pod has its own consumer.
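On the "spawning more threads" question: for the Kafka Streams binder, the number of processing threads per instance is controlled by the StreamsConfig property num.stream.threads, which can go in the same configuration block shown in the question. Threads beyond the number of partitions a pod owns sit idle, so with 20 partitions and 20 pods (one partition each) more threads per pod only pay off when a pod handles several partitions (e.g. fewer pods) or when the partition count is raised. A minimal sketch; the value 4 is illustrative, not from the original post:

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # Illustrative: only helps when this pod owns more than one partition/task
              num.stream.threads: 4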

Increasing the commit interval could help (committing is an expensive blocking operation). Why did you reduce it from 30 sec (the default) to 1 sec?
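For reference, that key sits in the binder configuration block already shown in the question; restoring the default would look like this (a sketch showing only the changed key):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # 30000 ms is the Kafka Streams default for at-least-once processing
              commit.interval.ms: 30000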

Thanks for your response, sir.
No reason; it was like that when I took over the project. I am still learning Kafka and how it works.
I increased it to 30000 ms, but it made no difference.

Is there any other configuration I could add here, one I might not be aware of, that would increase the speed?

[Please note that when I tested the 30000 ms interval, I had 20 partitions and 2 consumer pods]

The first question is: where is the bottleneck? Client-side CPU (if so, maybe add more pods)? Network (if so, maybe change the consumer "fetch" configs)?
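If the bottleneck does turn out to be the network, the fetch-related consumer properties can be passed through the same binder configuration map; Kafka Streams forwards keys with the consumer. prefix to its consumers. A sketch with illustrative values (tune against your actual message sizes; none of these come from the original post):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # Let the broker accumulate up to ~1 MiB before answering a fetch...
              consumer.fetch.min.bytes: 1048576
              # ...but wait no longer than this (ms) when less data is available
              consumer.fetch.max.wait.ms: 500
              # Hand more records to each poll loop iteration
              consumer.max.poll.records: 1000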