JDBC source connector: timestamp mode and duplicate data

If the source connector is running in timestamp mode, does it re-read the whole table on every poll?

When timestamp.initial is left at its default (null), data gets duplicated in the topic.

{
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "transforms.RenameField.renames": "SEQ:SOURCE_SEQ,REG_DATE:SOURCE_REG_DATE",
    "timestamp.column.name": "REG_DATE",
    "errors.log.include.messages": "true",
    "connection.password": "***",
    "timestamp.initial": "-1",
    "tasks.max": "1",
    "transforms": "renameTopic,RenameField",
    "transforms.renameTopic.replacement": "test-topic-04",
    "table.whitelist": "TB_SOURCE_KAFKA_TEST_1",
    "mode": "timestamp",
    "transforms.renameTopic.regex": "TB_SOURCE_KAFKA_TEST_1",
    "connection.user": "***",
    "db.timezone": "Asia/Seoul",
    "transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "poll.interval.ms": "60000",
    "name": "test-source-04",
    "transforms.renameTopic.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "connection.url": "jdbc:oracle:thin:@****:1521:GSTG?characterEncoding=UTF-8&serverTimezone=UTC",
    "errors.log.enable": "true",
    "validate.non.null": "false"
}
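For context on the first question: in timestamp mode the connector stores the highest timestamp it has seen as an offset and only selects rows newer than that on each poll, so it does not re-read the whole table. A rough Python sketch of that incremental query logic, with illustrative names only (this is not the connector's actual code):

```python
from datetime import datetime, timezone

def build_query(table, ts_column, last_seen_ms):
    """Each poll only selects rows newer than the stored offset, so the
    whole table is NOT re-read on every cycle."""
    last_seen = datetime.fromtimestamp(last_seen_ms / 1000, tz=timezone.utc)
    return (
        f"SELECT * FROM {table} "
        f"WHERE {ts_column} > TIMESTAMP '{last_seen:%Y-%m-%d %H:%M:%S}' "
        f"ORDER BY {ts_column} ASC"
    )

# On the very first run with the default timestamp.initial (null), the
# offset effectively starts at the Unix epoch, so the initial query pulls
# in the entire table once.
print(build_query("TB_SOURCE_KAFKA_TEST_1", "REG_DATE", 0))
```

After that first pass, only rows with a REG_DATE newer than the stored offset are fetched on each poll.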

Do I have to set timestamp.initial to -1 unconditionally to avoid duplicate data?
If timestamp.initial is set to -1, the existing data cannot be read.
How can I read the whole table first, and then only new rows after that?

When the connector first runs, it uses (if I remember correctly) the Unix epoch (1 January 1970) as the initial timestamp offset, so it pulls in all the data that's in the table. After that, it only fetches rows with a newer timestamp than the stored offset, so you shouldn't see duplicates. You shouldn't need to set timestamp.initial at all unless you have a particular override in mind.
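If you do want an override somewhere between "everything" (null) and "only from now on" (-1), timestamp.initial accepts an epoch value in milliseconds. A small sketch of computing one for a chosen starting point; the date below is just an example, not something from your setup:

```python
from datetime import datetime, timezone

# timestamp.initial takes epoch milliseconds; -1 means "start from the
# current time" (skipping existing rows). To backfill from a specific
# point instead, compute the epoch millis for that date.
start = datetime(2023, 1, 1, tzinfo=timezone.utc)
initial_ms = int(start.timestamp() * 1000)
print(initial_ms)  # value to put in "timestamp.initial"
```

Rows with REG_DATE before that instant would then be skipped on the first poll.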