Kafka Connect S3 sink: unable to create connectors

I have a Dockerfile:

FROM confluentinc/cp-kafka-connect:5.4.6-1-ubi8
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-s3:10.0.4

ENV CONNECT_PLUGIN_PATH='/usr/share/java,/usr/share/confluent-hub-components/,/connectors/'
ENV CONNECT_BOOTSTRAP_SERVERS=kafka-ip:9092
ENV CONNECT_REST_ADVERTISED_HOST_NAME="kafka-connect"
ENV CONNECT_REST_PORT=8083
ENV CONNECT_REST_LISTENERS="http://localhost:8083"
ENV CONNECT_GROUP_ID=kafka-connect
ENV CONNECT_CONFIG_STORAGE_TOPIC=__connect-config
ENV CONNECT_OFFSET_STORAGE_TOPIC=__connect-offsets
ENV CONNECT_STATUS_STORAGE_TOPIC=__connect-status
ENV CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
ENV CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
ENV CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
ENV CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter"
ENV CONNECT_LOG4J_LOGGERS="org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
ENV CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR="1"
ENV CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR="1"
ENV CONNECT_STATUS_STORAGE_REPLICATION_FACTOR="1"
ENV AWS_ACCESS_KEY_ID=1234
ENV AWS_SECRET_ACCESS_KEY=1234
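
For reference, as I understand it the cp-kafka-connect image turns each CONNECT_* environment variable into a worker property by dropping the prefix, lowercasing, and replacing underscores with dots. So the ENV lines above should produce roughly the following worker config (a sketch based on that assumed mapping, not copied from a running container):

    # Sketch of the derived worker config (assumed mapping)
    bootstrap.servers=kafka-ip:9092
    rest.advertised.host.name=kafka-connect
    rest.port=8083
    group.id=kafka-connect
    config.storage.topic=__connect-config
    offset.storage.topic=__connect-offsets
    status.storage.topic=__connect-status
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    plugin.path=/usr/share/java,/usr/share/confluent-hub-components/,/connectors/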

The container starts, but when I try to create an S3 sink connector I get an error.

If I use PUT:

curl -i -X PUT -H "Accept:application/json" \
    -H "Content-Type:application/json" http://localhost:8083/connectors/sink/config \
    -d '{
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "tasks.max": "1",
        "topics": "cats",
        "s3.region": "eu-west-1",
        "s3.bucket.name": "snowplow-kafka-s3-sink-test",
        "flush.size": "65536",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
        "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
        "schema.compatibility": "NONE",
        "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
        "transforms": "AddMetadata",
        "transforms.AddMetadata.type": "org.apache.kafka.connect.transforms.InsertField$Value",
        "transforms.AddMetadata.offset.field": "_offset",
        "transforms.AddMetadata.partition.field": "_partition"
    }'

I get the error below:

HTTP/1.1 500 Internal Server Error
Date: Tue, 28 Dec 2021 03:18:51 GMT
Content-Type: application/json
Content-Length: 48
Server: Jetty(9.4.43.v20210629)

{"error_code":500,"message":"Request timed out"}

If I use POST, I get:

HTTP/1.1 405 Method Not Allowed
Date: Tue, 28 Dec 2021 03:20:57 GMT
Content-Length: 58
Server: Jetty(9.4.43.v20210629)

{"error_code":405,"message":"HTTP 405 Method Not Allowed"}

http://localhost:8083/ works and returns 200 OK, and
http://localhost:8083/connectors returns [] (an empty array).

Kafka and the S3 bucket both work fine, and the AWS keys have been verified and are correct.

I also logged into the container and ran the same request from inside it, but it failed there as well.

@Pramodniralakeri Some debugging steps I would try:

  • Check the log file in the running container to try to ascertain more information about the Request timed out exception message.
  • If I understand correctly, POSTing to localhost:8083/connectors/name/config is not supported, so the 405 for that request seems appropriate. If you’d like to try a POST, use the /connectors route directly with a payload that wraps the config in a name object; see the sketch after this list and the Connect REST Interface page in the Confluent documentation.
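
A minimal sketch of that POST shape (untested, assuming you keep the connector name sink and the same config as in your PUT):

curl -i -X POST -H "Content-Type:application/json" http://localhost:8083/connectors \
    -d '{
        "name": "sink",
        "config": {
            "connector.class": "io.confluent.connect.s3.S3SinkConnector",
            "tasks.max": "1",
            "topics": "cats",
            "s3.region": "eu-west-1",
            "s3.bucket.name": "snowplow-kafka-s3-sink-test",
            "flush.size": "65536",
            "storage.class": "io.confluent.connect.s3.storage.S3Storage",
            "format.class": "io.confluent.connect.s3.format.avro.AvroFormat"
        }
    }'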

I think your best bet is to go through the log files and determine whether the Request timed out is the result of a failure related to the connection to S3.
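
For example, I would start with something like this (kafka-connect below is a hypothetical container name, substitute your own):

docker logs kafka-connect 2>&1 | grep -i -E "error|exception|timed out"

It may also be worth confirming that the S3 connector plugin actually loaded into the worker:

curl -s http://localhost:8083/connector-plugins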
