How to design data processing with Confluent to replace ETL?

Hello team,

I want to consult with you about how to design the Confluent Platform for data processing, perhaps with ksqlDB or Kafka Streams, to replace our existing ETL tools.

Details of the logic in the existing ETL tools:

  1. Get data from on-prem database systems, enrich it, then update another database system in the cloud.
  2. The data enrichment logic is the biggest piece: it joins around 20-30 tables together, with mixed join conditions like the following (a rough ksqlDB sketch of what I have in mind follows after the SQL):
SELECT * FROM TABLE_A A
LEFT JOIN TABLE_B B WITH (NOLOCK) on A.column1 = B.column1 and A.column2 = B.column2
INNER JOIN TABLE_C C WITH (NOLOCK) on A.column3 = C.column3
...
LEFT JOIN TABLE_Z Z WITH (NOLOCK) on A.column20 = Z.column20 and A.column21 = Z.column21 and A.column22 = Z.column22

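For context, here is roughly what I imagine the enrichment could look like in ksqlDB. This is only a sketch under my own assumptions: it assumes each source table is already streamed into a Kafka topic (for example via a JDBC or CDC source connector) and declared as a ksqlDB table, all table names, column names, and formats below are made up, and I am not sure how the joins on multiple columns would be handled:

-- Declare ksqlDB tables over the ingested topics (topic names, key columns,
-- and formats here are assumptions, not our real schema)
CREATE TABLE TABLE_A_T (
  COLUMN1 VARCHAR PRIMARY KEY,
  COLUMN2 VARCHAR,
  COLUMN3 VARCHAR
) WITH (KAFKA_TOPIC='table_a', VALUE_FORMAT='AVRO');

-- ... similar CREATE TABLE statements for TABLE_B_T, TABLE_C_T, ..., TABLE_Z_T

-- Materialize the enriched result as a new table backed by a Kafka topic
CREATE TABLE ENRICHED AS
  SELECT *
  FROM TABLE_A_T A
  LEFT JOIN TABLE_B_T B ON A.COLUMN1 = B.COLUMN1
  INNER JOIN TABLE_C_T C ON A.COLUMN3 = C.COLUMN3;

The ENRICHED topic could then be written to the cloud database with a sink connector (for example the JDBC sink connector). I am not sure this design holds up with 20-30 tables, which is why I am asking.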

My Questions
1. Is it possible to move the existing ETL data enrichment, which joins around 20-30 tables, to processing in ksqlDB or Kafka Streams?
2. If the answer to 1 is yes, which is more suitable: ksqlDB or Kafka Streams?
3. If the answer to 1 is no, how would you design a solution on the Confluent Platform to support data enrichment across multiple tables? Or is a third-party tool needed to join 20-30 tables?

Thanks
May