I would like to load data from Kafka to files. For example, say we have 1 million messages in a Kafka topic. We would like to load those 1 million messages from Kafka into files, where each file holds 100 thousand records, so we end up with 10 files in total for the 1 million messages.
Is it possible to achieve this using a connector? Also, I noticed that the FileStreamSink connector docs page says "Confluent does not recommend the FileStream Connector for production use. If you want a production connector to read from and write to files, use a Spool Dir connector". But on the Spool Dir connector page, I don't see any sink examples. Is it only for reading (i.e., source only)?
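For reference, this is the kind of FileStreamSink configuration I was looking at (topic name and output path are just placeholders). As far as I can tell, it only appends everything to a single file, and I don't see any setting to roll over to a new file after N records:

```json
{
  "name": "file-sink-example",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "topics": "my-topic",
    "file": "/tmp/my-topic-output.txt"
  }
}
```

So even ignoring the production warning, I'm not sure this connector can produce the 10-files-of-100k-records layout I described above.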