JDBC Source Connector not pulling all records from Snowflake Table

Hi all,
We have deployed JDBC Source Connectors to pull data from Snowflake tables/views. Our connector properties look as below:

{

"name": "source-allocation-unit-snowflake",
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"tasks.max": "1",
"poll.interval.ms": "180000",

"connection.url": "jdbc:snowflake://aaaaaaaa.snowflakecomputing.com/?warehouse=MY_WH_XL&db=MY_DB&schema=STARSP_SRC_VIEWS&role=MY_ROLE&useProxy=true&proxyHost=aaa-aaa&proxyPort=99992&tracing=ALL",
"connection.user": "MYUSER",
"connection.password": "MYPASSWORDS",

"schema.pattern": "STARSP_SRC_VIEWS",
"catalog.pattern": "STARSP_SRC_VIEWS",

"topic.prefix": "sf-ALLOCATION_UNIT",
"query": "SELECT * from STARSP_SRC_VIEWS.V_SRC_COBO_AE_ALLOCATION_UNIT",

"mode": "timestamp",
"timestamp.column.name": "RCREATEDDATE",
"timestamp.initial": "1618869600000",
"validate.non.null": false,
"db.timezone": "Europe/Copenhagen",

"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schemas.enable": "true",
"key.converter.schema.registry.url": "https://aaaqqqqqq.net:99999,https://aaaaaaeewwwaaaaa.net:9444444,https://aaaaasswwww.net:92222222",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schemas.enable": "true",
"value.converter.schema.registry.url": "https://aaaqqqqqq.net:99999,https://aaaaaaeewwwaaaaa.net:9444444,https://aaaaasswwww.net:92222222",

"transforms": "Cast,ValueToKey,SetSchemaMetadataValue,SetSchemaMetadataKey",
"transforms.Cast.type": "org.apache.kafka.connect.transforms.Cast$Value",
"transforms.Cast.spec": "CATOR:boolean",
"transforms.ValueToKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
"transforms.ValueToKey.fields": "PDATE,SCOUNTRY,AID",
"transforms.SetSchemaMetadataValue.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Value",
"transforms.SetSchemaMetadataValue.schema.name": "com.my.cobo.avro.AllocationUnit",
"transforms.SetSchemaMetadataKey.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Key",
"transforms.SetSchemaMetadataKey.schema.name": "com.my.cobo.avro.AssetKey"

}

Everything looks good here, but we are seeing the below two unexpected behaviors:
1: Even though we are doing "SELECT *" on the source table, every time only one country's data is loaded into the topic.
In the source table, data for 3 countries is loaded every day, one country at a time, with a 5-10 minute gap in between. All of that data should be picked up and loaded into the topic by the connector.
But the Kafka connector loads only the first country's data into the topic and does not pick up the data for the next two countries.
We checked the prepared statement run by Kafka Connect against Snowflake, which looks like this:
SELECT * from STARSP_SRC_VIEWS.V_SRC_COBO_AE_ALLOCATION_UNIT WHERE "RECORD_CREATED_DATE" > ? AND "RECORD_CREATED_DATE" < ? ORDER BY "RECORD_CREATED_DATE" ASC
Note: the initial timestamp we have provided in the connector is a date in April, so every time the connector should pull all of the data recently loaded into the table.
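
For reference, this is how we understand the two bind parameters to be filled in by timestamp mode (a rough sketch based on our reading of the JDBC source connector docs, not something we have confirmed):

-- Lower bound: the connector's last committed offset, i.e. timestamp.initial on the first poll,
-- then the highest timestamp value it has already published.
-- Upper bound: the current time minus timestamp.delay.interval.ms (default 0).
SELECT *
FROM STARSP_SRC_VIEWS.V_SRC_COBO_AE_ALLOCATION_UNIT
WHERE "RECORD_CREATED_DATE" > ?   -- last committed offset
  AND "RECORD_CREATED_DATE" < ?   -- current time - timestamp.delay.interval.ms
ORDER BY "RECORD_CREATED_DATE" ASC;

If our understanding of these bounds is wrong, that would be good to know as well.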

2: Data movement between Snowflake and Kafka is very slow. Pulling around 1.8 lakh (~180,000) records takes around 30 minutes,
whereas pulling the same data from Teradata through the Teradata source connector takes hardly 1 minute.
In the Teradata source connector we have exactly the same parameters, with only minor changes to the URL, user/password, and connector class.
Is there any way we can improve the performance?
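
One connector-side change we have not tried yet is raising the fetch batch size; below is a minimal sketch of what we would add to the same config (assuming batch.max.rows is the relevant JDBC source connector property; as far as we know it defaults to 100 rows per poll):

"batch.max.rows": "5000"

If that is not the right lever for throughput against Snowflake, a pointer to the right one would also help.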

Could anybody please suggest a solution for the above two issues?