Stream processing from a Topic A on Broker A within Cluster A and write to a Topic B on Broker B within Cluster B

Is it possible to perform stream processing / build a topology in a way that would allow a Processor to stream from Topic A on Broker A within Cluster A and write results to Topic B on Broker B within Cluster B?

(Topic A - Broker A - Cluster A) belongs to Department A.
(Topic B - Broker B - Cluster B) belongs to Department B.

Department A writes its feeds to Cluster A and allows other Departments such as Department B to consume those feeds and stay in sync. Department A will not allow Department B to publish to its clusters/brokers/topics.

Kafka Streams does not allow connecting to multiple Kafka clusters: a single application can only read from and write to one cluster.
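This limitation is visible in the configuration: a Kafka Streams application takes exactly one `bootstrap.servers` list, which is used for both its source and sink topics. A minimal sketch (broker address, application id, and topic names are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class SingleClusterTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dept-processor");
        // One cluster for the whole topology -- both the topic read below
        // and the topic written to must live on this cluster.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-a:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("topicA")          // source: must be on broker-a's cluster
               .to("processed-topic");    // sink: must be on the same cluster

        new KafkaStreams(builder.build(), props).start();
    }
}
```

There is no per-topic broker setting, so there is no way to point the sink at a different cluster.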

For your example, you would need to replicate the input data from cluster A to cluster B and do the processing in cluster B; since topic B already lives in cluster B, the results can be written there directly.

Or just let department A run the application to begin with against cluster A.
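The replication step could be done with MirrorMaker 2. A minimal `mm2.properties` sketch, assuming placeholder cluster aliases, broker addresses, and topic name:

```properties
# Cluster aliases and their bootstrap servers (addresses are placeholders)
clusters = A, B
A.bootstrap.servers = broker-a:9092
B.bootstrap.servers = broker-b:9092

# Replicate only topicA from A to B; nothing flows back to A,
# which matches Department A's read-only policy.
A->B.enabled = true
A->B.topics = topicA
B->A.enabled = false

replication.factor = 3
```

By default the mirrored topic appears in cluster B under the source alias prefix (i.e. `A.topicA`), which Department B's Streams application would then consume.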


Thank you for the quick reply, greatly appreciated!

Assuming there won’t be any org hurdles, I like the option of replicating Topic A onto Department B’s Broker B / Cluster B.

However, here’s another thought:

Department B would use a plain Kafka Consumer program to consume events from Department A’s Topic A, process them (manual transformation/enrichment), and publish the resulting events to its own Topic B2 on Broker B, Cluster B. Thereafter it would perform any stream-related operations on its own internal topics B*. Is this less efficient, given that we would have to handle transformation/enrichment via service callouts, transactionality, and offset management ourselves?
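For what it’s worth, a minimal sketch of that consume-transform-produce loop, assuming placeholder broker addresses, topic names, and a trivial stand-in for the enrichment step. One caveat worth noting: because the consumed offsets live in cluster A while the transactional producer writes to cluster B, `sendOffsetsToTransaction` cannot be used, so the offset commit and the output commit are not atomic across clusters; the loop below is at-least-once, and the transform should be idempotent:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class CrossClusterBridge {
    public static void main(String[] args) {
        // Consumer against Department A's cluster (read-only access)
        Properties cProps = new Properties();
        cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-a:9092");
        cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "dept-b-bridge");
        cProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Transactional producer against Department B's cluster
        Properties pProps = new Properties();
        pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-b:9092");
        pProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dept-b-bridge-tx");
        pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(Collections.singletonList("topicA"));
            producer.initTransactions();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    for (ConsumerRecord<String, String> rec : records) {
                        // Stand-in for the real enrichment/service callout
                        String enriched = rec.value().toUpperCase();
                        producer.send(new ProducerRecord<>("topicB2", rec.key(), enriched));
                    }
                    producer.commitTransaction();
                    // Offsets are committed to cluster A only after the write to
                    // cluster B succeeds -- a crash between the two can replay records.
                    consumer.commitSync();
                } catch (RuntimeException e) {
                    producer.abortTransaction();
                    throw e;
                }
            }
        }
    }
}
```

This is essentially the plumbing (polling, batching, transactions, offset management, error handling) that Kafka Streams would otherwise manage for you within a single cluster.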

Thank you.

Guess there is no clear answer to it.