AWS S3 source connector, compressed input file with document array - HOW

hi all, hope someone can assist.

This relates to the AWS S3 Source Connector.

I have an S3 folder structure partitioned by time (<year>/<month>/<day>/<hour>/).

Can the connector intelligently consume from the newest hour as new hours, days, and months are created?
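(If not, I suppose I could derive the newest-hour prefix myself. A rough boto3 sketch, assuming the zero-padded UTC date layout above; the function name is my own:)

from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def current_hour_keys(bucket):
    # Assumes the <year>/<month>/<day>/<hour>/ layout described above,
    # with zero-padded UTC components.
    prefix = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H/")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in resp.get("Contents", [])]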

Into this structure I've got an inbound stream of JSON documents, compressed as .json.gz.

Each JSON file is structured as per below, an array of documents:

[
  {doc1},
  {doc2},
  {doc3},
  {doc4},
  {doc5}
]
Looking for suggestions on how to process this so that each document ends up as a single message on the topic.
Can the S3 source connector do this? Everything is hosted on AWS, so I'm thinking Lambda: an uncompress step, and then a second step that takes the entire JSON document, iterates over the array, and posts the individual documents onto a new processed topic (roughly as sketched below).
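A minimal sketch of that Lambda, assuming an S3 ObjectCreated trigger, that each file decompresses to a JSON array as above, and using the kafka-python client (the broker address and topic name are placeholders):

import gzip
import json

import boto3
from kafka import KafkaProducer  # kafka-python; any producer client would do

s3 = boto3.client("s3")

# Placeholder broker list and topic name - substitute real values.
producer = KafkaProducer(
    bootstrap_servers=["broker1:9092"],
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

def handler(event, context):
    # Fired by an S3 ObjectCreated notification for each new .json.gz file,
    # so no polling for the newest hour is needed.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        docs = json.loads(gzip.decompress(body))  # assumes the array structure above
        for doc in docs:
            producer.send("processed-topic", value=doc)  # one message per document
    producer.flush()

With the notification trigger, the hour/day/month folder layout doesn't matter; the function fires per object as it lands.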

First prize, however, would be if the S3 source connector could do all of this by itself?

G

Answer on this…

The source connector can't currently do a pre-processing step (decompress), and the size of the JSON document after decompression puts it beyond what's advisable to post onto a topic. (If it were smaller, an SMT step could have done the "decompile" of the document comprising an array of documents.)

G