Dear Confluent community,
In the JDBC source connector, is there any provision to start more than one task on one table? Every time I start a connector it only starts one task. I have 3 instances of Kafka Connect in Kubernetes, and I'm passing tasks.max as 3.
The JDBC Source connector will use at most one task per table. That’s by design.
Hi, please help me understand what the role of tasks.max is, then, in a source connector.
It tells Kafka Connect the maximum number of tasks that a connector may spawn. It is a limit rather than an instruction.
Each connector plugin implements tasks in its own way. Some make sense to parallelise across tasks (e.g. the JDBC source runs one task per table), whilst others do not (e.g. a syslog source only makes sense with a single task).
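To illustrate, here's a sketch of a JDBC source config where tasks.max = 3 can actually be used, because there are three tables for the connector to spread across tasks (the connection URL, table, and topic names are made-up examples; the property names are from the Confluent JDBC source connector):

```json
{
  "name": "jdbc-source-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db:5432/mydb",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders,customers,products",
    "tasks.max": "3",
    "topic.prefix": "db-"
  }
}
```

With three tables in table.whitelist, Connect can assign one table to each of the three tasks. With a single table, only one task starts, regardless of tasks.max.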
In theory the JDBC connector could read partitions in parallel, I guess, or use some other form of chunking up a table's data to read in parallel – but it doesn't.
An alternative would be to create 3 views on the table, partitioning rows by the incrementing column % 3.
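As a sketch (table and column names are assumptions, and the modulo syntax varies by database – MOD() is shown here), those views might look like:

```sql
-- Split my_table into 3 disjoint slices by the incrementing id column
CREATE VIEW my_table_p0 AS SELECT * FROM my_table WHERE MOD(id, 3) = 0;
CREATE VIEW my_table_p1 AS SELECT * FROM my_table WHERE MOD(id, 3) = 1;
CREATE VIEW my_table_p2 AS SELECT * FROM my_table WHERE MOD(id, 3) = 2;
```

Each view can then be listed for the connector (or given its own connector instance), so each gets its own task. Note that the JDBC source connector only reads tables by default, so you'd likely need to include VIEW in its table.types setting for it to pick the views up.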
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.