JDBC source connector for Oracle 11g is not working

Hi, I'm running a local Oracle 11g database; my Kafka broker, ZooKeeper and Kafka Connect are running from a docker-compose file.
My docker-compose file:

version: '3'
services:

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment: 
      ZOOKEEPER_CLIENT_PORT: 2181
    ports: 
      - 2181:2181

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on: 
      - zookeeper
    ports: 
      - 9092:9092
      - 9094:9094
    environment: 
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://:9092,OUTSIDE://:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://host.docker.internal:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
  
  kafka-connect:
    image: confluentinc/cp-kafka-connect-base:6.0.0
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/data/connect-jars
    volumes:
      - $PWD/data:/data
    command:
      - bash
      - -c
      - |
        echo "Installing connector plugins"
        confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.0
        #
        echo "Launching Kafka Connect worker"
        /etc/confluent/docker/run &
        #
        sleep infinity

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment: 
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
      KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper:2181
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: connect
      KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: 'http://kafka-connect:8083'
    ports: 
      - 8000:8080
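For reference, port 8083 of the Connect worker is published to the host, so the worker and the installed plugins can be checked with the standard Connect REST endpoints, e.g.:

# list the connector plugins installed on the worker; the JDBC source connector should show up here
curl -s http://localhost:8083/connector-plugins

# list the connectors currently registered on the worker
curl -s http://localhost:8083/connectors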

My JSON configuration for the JDBC connector:

{
	"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
	"incrementing.column.name": "ID",
	"connection.password": "xxxx",
	"tasks.max": "1",
	"table.whitelist": "PEDIDOS",
	"mode": "incrementing",
	"topic.prefix": "oracle_",
	"connection.user": "xxxx",
	"poll.interval.ms": "2000",
	"name": "oracle-jdbc-connector",
	"connection.url": "jdbc:oracle:thin:@localhost:1521:xe"
}
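For reference, a config like this can be sent to the Connect REST API roughly as follows (a sketch, assuming the JSON above is saved as oracle-jdbc-connector.json, which is a name I'm using just for illustration):

# create or update the connector from the config above
curl -s -X PUT -H "Content-Type: application/json" \
     --data @oracle-jdbc-connector.json \
     http://localhost:8083/connectors/oracle-jdbc-connector/config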

My table, named PEDIDOS:
[screenshot of the PEDIDOS table]

When I create the JDBC connector with this configuration it apparently works, but nothing shows up in my Kafka topics.
I've been racking my brain trying to understand what is happening, but without success.
A curious thing is that when I create a new table in this database, a topic then appears with my desired prefix and the correct table.whitelist name, but it only contains the records I had created previously; new records are not being listed in this topic.
Well, I need help with this, can someone help me?
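In case it helps with diagnosing, the connector and task state can be queried from the Connect REST API, and the topic contents read with the console consumer (a sketch, assuming the topic ends up named oracle_PEDIDOS from topic.prefix plus the table name):

# connector and task status as reported by the worker
curl -s http://localhost:8083/connectors/oracle-jdbc-connector/status

# read the topic from the beginning, from inside the kafka container
docker-compose exec kafka kafka-console-consumer \
  --bootstrap-server kafka:9092 \
  --topic oracle_PEDIDOS \
  --from-beginning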
