Partitioning by field name in S3 connector (accessing a field inside a nested record in an Avro schema)

Hi there,

I’m using the Kafka Connect S3 sink connector to send data to S3.
The Kafka topic uses an Avro schema that contains multiple records, so I need to access a field of a record nested inside the top-level record. My schema looks like this:

{
	"type": "record",
	"name": "userdata",
	"namespace": "myNamespace",
	"fields": [
		{
			"name": "userinfo",
			"type": {
				"type": "record",
				"name": "record1",
				"fields": [
					{
						"name": "eventtime",
						"type": "long"
					}
				]
			}
		},
		{
			"name": "clientinfo",
			"type": {
				"type": "record",
				"name": "record2",
				"fields": [
					{
						"name": "username",
						"type": {
							"type": "string",
							"avro.java.string": "String"
						}
					}
				]
			}
		}
	]
}

In my Kafka connector, I’m using this:

        partitioner.class: io.confluent.connect.storage.partitioner.FieldPartitioner
        partition.field.name: clientinfo.record2.username

But it’s not working. How do I access a field of a record nested inside another record? I tried several different notations, but without success.

This is my error:

Caused by: org.apache.kafka.connect.errors.DataException: clientinfo.record2.username is not a valid field name
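In case it helps, one workaround I’m experimenting with (assuming the error means FieldPartitioner only resolves top-level fields) is flattening the nested struct with Kafka’s built-in Flatten SMT before partitioning, so the nested field becomes a top-level one:

        transforms: flatten
        transforms.flatten.type: org.apache.kafka.connect.transforms.Flatten$Value
        transforms.flatten.delimiter: _
        partition.field.name: clientinfo_username

I set an underscore delimiter here because with the default dot delimiter the flattened field would be literally named clientinfo.username, which I wanted to keep distinct from any dotted-path notation. But I’d still prefer a way to reference the nested field directly.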
