Issue with PostgresSinkConnector connection to Database, but same connection details works from Postgres client

We are using a Confluent Cloud Basic cluster for a POC. The PostgresSinkConnector fails to connect to the database with a `connection.password` error, even though the same connection details (admin credentials with all rights) work fine from a Postgres client. Also, we didn't find an option to select a schema while setting the connection and table details. Could someone guide us? Thanks in advance!

Hi Lokesh,

Can you attach some log traces from the connector status so we can see what is happening?

Thanks!

B

Hi @kuro ,

Thanks for your reply.

We are using a Confluent Cloud Basic cluster, as we are currently doing a POC on Confluent Cloud before adopting it.

I have attached an error screenshot and am sharing the connector config below.

I have replaced the DB URL and credentials with "dummy" placeholders, as they are sensitive information.

```json
{
  "name": "PostgresSinkConnector_0",
  "config": {
    "topics": "CONVOX_DG",
    "input.data.format": "JSON_SR",
    "delete.enabled": "false",
    "connector.class": "PostgresSink",
    "name": "PostgresSinkConnector_0",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "dummy",
    "kafka.api.secret": "****",
    "connection.host": "dummy",
    "connection.port": "5432",
    "connection.user": "dummy",
    "connection.password": "****",
    "db.name": "dummy",
    "ssl.mode": "prefer",
    "insert.mode": "INSERT",
    "table.name.format": "t_crm_logs",
    "table.types": "TABLE",
    "db.timezone": "Asia/Kolkata",
    "pk.mode": "none",
    "auto.create": "false",
    "auto.evolve": "false",
    "quote.sql.identifiers": "ALWAYS",
    "batch.sizes": "3000",
    "tasks.max": "1"
  }
}
```

Thanks for the detailed configuration Lokesh.

Does the specified db.name already exist in the Postgres database?
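Also, one generic tip: connector configs pasted through rich-text editors sometimes pick up typographic "curly" quotes, which are rejected as invalid JSON. A minimal sketch (purely illustrative, not part of any Confluent tooling) to normalize and parse a pasted config before submitting it:

```python
import json

# Map typographic ("smart") quotes to plain ASCII quotes so the
# config parses as valid JSON before it is submitted.
SMART_QUOTES = {"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"}

def normalize_and_parse(raw: str) -> dict:
    """Replace smart quotes with ASCII quotes, then parse as JSON."""
    for smart, plain in SMART_QUOTES.items():
        raw = raw.replace(smart, plain)
    return json.loads(raw)

# A config pasted with curly quotes would fail json.loads() directly,
# but parses cleanly after normalization:
cfg = normalize_and_parse("{\u201cconnection.port\u201d: \u201c5432\u201d}")
print(cfg["connection.port"])  # 5432
```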

Yes, the db.name exists in Postgres, and we are able to access it with a Postgres client such as DBeaver.
Is there anything we have to whitelist on our side for the Confluent servers, or do we need to migrate to a paid cluster?

Depending on where your database is running, yes, you may need to allow Confluent Cloud to connect to it, for example via VPC peering or by allow-listing its addresses.
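If you want to quickly check raw network reachability from a given machine before digging further, here is a small Python sketch (the host name is a placeholder for your actual `connection.host`):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds
    within the timeout; False on DNS failure, refusal, or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace "dummy" with the real connection.host value.
if can_reach("dummy", 5432):
    print("TCP connection succeeded; check credentials/SSL next")
else:
    print("host unreachable: check firewall / allow-listing")
```

Note this only proves the TCP path is open; authentication and SSL issues would still surface in the connector logs afterwards.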

thank you for the reply.

So you mean to say a Basic cluster should work for Postgres connectors without any issues?

It's unclear what you mean by "basic cluster".

Yes, a self-hosted or managed JDBC connector should work, as long as networking is configured appropriately on both sides.

Confluent provides three types of clusters:

Basic cluster: free of cost
Standard cluster: premium
Dedicated cluster: premium

We are doing R&D on Confluent to check whether it will support our use case, so we are using a Basic cluster.

So you are saying connectors will work in a Basic cluster as well, as long as the network is properly configured on both sides. Am I correct?

I am not a Confluent Cloud user, but I do not believe there are any restrictions on which types of connectors can be run. At least, I've not seen any documentation that mentions this.

Still, that doesn’t prevent you from self-managing Kafka Connect in the same datacenter where your database is available.
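For reference, a self-managed deployment would typically use the Confluent JDBC Sink connector, whose property names differ from the fully managed Postgres Sink shown above. A rough sketch (host, credentials, and schema name are placeholders); note that `table.name.format` accepts a schema-qualified name, which may help with the schema-selection question from the original post:

```json
{
  "name": "postgres-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "CONVOX_DG",
    "connection.url": "jdbc:postgresql://dummy:5432/dummy",
    "connection.user": "dummy",
    "connection.password": "****",
    "insert.mode": "insert",
    "table.name.format": "my_schema.t_crm_logs",
    "pk.mode": "none",
    "auto.create": "false",
    "tasks.max": "1"
  }
}
```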