Parquet file size on S3

Hi, I’m trying to use Kafka Connect to store files on S3 as Parquet files. The config I’m currently running is this:

        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "storage.class": "",
        "s3.region": "eu-west-1",
        "s3.bucket.name": "Bucket-Name-Here",
        "topics.dir": "\b",
        "flush.size": "1000000000",
        "s3.part.size": "1073741823",
        "schema.compatibility": "NONE",
        "schema.generator.class": "",
        "tasks.max": "6",
        "topics": "Topic-Name-Here",
        "store.url": "Store-Url",
        "key.converter.schemas.enable": "false",
        "key.converter": "",
        "value.converter.schemas.enable": "false",
        "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        "partitioner.class": "",
        "": "io.confluent.kafka.serializers.subject.TopicNameStrategy",
        "": "3600000",
        "path.format": "'date'=YYYY-MM-dd/",
        "locale": "sv_SE",
        "": "-1",
        "": "180000",
        "timestamp.extractor": "RecordField",
        "timestamp.field": "timestamp",
        "timezone": "Europe/Stockholm",
        "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
        "parquet.codec": "gzip",
        "headers.format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
        "keys.format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat"
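
(Several keys were blanked out when I pasted the config. For reference, besides flush.size the S3 sink also commits files based on its rotation settings; the "-1" and "180000" values above *might* correspond to these keys — that mapping is my guess, not confirmed:

```json
{
  "rotate.interval.ms": "-1",
  "rotate.schedule.interval.ms": "180000"
}
```

If the "180000" is rotate.schedule.interval.ms, files would be committed every 3 minutes regardless of how little data has arrived.)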

However, it seems that no matter what flush.size or other settings I set, the files are always around 5-8 kB. I would like them to be at least a couple of MB.

During my testing, I’ve sent around 300-1000 messages in less than 5 seconds. I’ve also made sure the messages are sent to Kafka with the exact same timestamp (hardcoded).
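For context, here is the rough arithmetic for that test run. Note that flush.size in the S3 sink counts *records*, not bytes, and the ~1 kB average message size below is my assumption, not a measured value:

```python
# Rough arithmetic for the test described above: up to 1000 messages,
# all sent with the same hardcoded timestamp.
MSG_COUNT = 1000           # upper end of the test run
AVG_MSG_BYTES = 1024       # ASSUMED average serialized Protobuf size

FLUSH_SIZE = 1_000_000_000    # from the config: a record *count*, not bytes
S3_PART_SIZE = 1_073_741_823  # from the config: bytes per multipart upload part

total_bytes = MSG_COUNT * AVG_MSG_BYTES
print(f"records sent:      {MSG_COUNT:,} (flush.size is {FLUSH_SIZE:,} records)")
print(f"approx bytes sent: {total_bytes:,} (s3.part.size is {S3_PART_SIZE:,} bytes)")

# Neither threshold comes remotely close to triggering, so the file can
# only be committed by something else (e.g. a scheduled rotation or a
# partition change) — which would explain the tiny Parquet files.
assert MSG_COUNT < FLUSH_SIZE
assert total_bytes < S3_PART_SIZE
```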

Is there an option I’m missing, or have I misconfigured something?
