Hello All, I am trying to understand the behavior of the S3-Sink.
I am using the debezium/example-postgres:1.0 Docker image for the source database, and I have set up the Debezium connector with
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": "true",
"value.converter.schemas.enable": "true"
The next step is to psql into the database and insert a simple row.
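Something like this (the exact values don't matter; the id comes from the sequence in the example inventory schema):

psql -h localhost -p 5432 -U postgres
postgres=# INSERT INTO inventory.customers (first_name, last_name, email)
             VALUES ('Jane', 'Doe', 'jane.doe@example.com');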
Then, from another process running the debezium/kafka:1.0 Docker image, I use the watch-topic utility and see that the messages written to the topic are in this format:
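The watcher command I run is roughly the one from the Debezium tutorial, assuming the Zookeeper and Kafka containers are named zookeeper and kafka (-a to read from the beginning, -k to print keys):

docker run -it --rm --name watcher \
  --link zookeeper:zookeeper --link kafka:kafka \
  debezium/kafka:1.0 watch-topic -a -k dbserver1.inventory.customers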
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"dbserver1.inventory.customers.Key"},"payload":{"id":1021}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"dbserver1.inventory.customers.Value","field":"before"}, ...rest}
This is good news, since I can see both the key JSON blob (with its schema) and the value JSON blob (with its schema), separated by a tab/space.
Now, when I create the S3 sink connector that monitors this topic, the resulting JSON in the S3 bucket contains only the value JSON blob. I was wondering whether this is the expected behavior of the S3 sink connector: when it consumes the topic, does it write only the value portion of each record to the S3 bucket?
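For reference, and assuming the Confluent S3 sink connector, my sink configuration looks roughly like this (bucket name, region, and flush size are placeholders from my setup):

{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "dbserver1.inventory.customers",
    "s3.bucket.name": "my-dbz-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter.schemas.enable": "true"
  }
}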