Error Deploying ServiceNow Sink

Hello All -

I’m doing some testing of Confluent Enterprise for a new project. I’ve deployed a cluster to AWS using the CloudFormation template and have a couple of connectors running. I’m attempting to add the ServiceNow sink into the mix, and I get the following error when launching the connector:

[2022-01-17 17:07:46,103] ERROR WorkerConnector{id=snow-dev} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:506)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:496)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at io.confluent.connect.reporter.ReporterConfig.<init>(ReporterConfig.java:171)
at io.confluent.connect.servicenow.ServiceNowSinkConnectorConfig.<init>(ServiceNowSinkConnectorConfig.java:35)
at io.confluent.connect.servicenow.ServiceNowSinkConnector.start(ServiceNowSinkConnector.java:35)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:186)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:211)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:350)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:333)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:141)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Could anybody offer any assistance with this? I verified that bootstrap.servers is set in the config being used by the running Connect worker process.
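For reference, the broker list is definitely present in the worker properties file (connect-distributed.properties on my workers); the relevant line looks roughly like this (the hostnames are from my VPC and are just examples):

# connect-distributed.properties (excerpt): broker list used by the Connect worker itself
bootstrap.servers=ip-10-133-5-8.ec2.internal:9092,ip-10-133-6-78.ec2.internal:9092,ip-10-133-4-169.ec2.internal:9092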

My connector config is:

{
  "bootstrap.servers": "ip-10-133-5-8.ec2.internal:9092,ip-10-133-6-78.ec2.internal:9092,ip-10-133-4-169.ec2.internal:909",
  "name": "snow-dev",
  "connector.class": "io.confluent.connect.servicenow.ServiceNowSinkConnector",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "topics": [
    "snow_incidents"
  ],
  "servicenow.url": "https://devxxxxxx.service-now.com/",
  "servicenow.table": "incident",
  "servicenow.user": "xxxx",
  "servicenow.password": "*************",
  "confluent.license": "",
  "confluent.topic.bootstrap.servers": [
    "ip-10-133-5-8.ec2.internal:9092",
    "ip-10-133-6-78.ec2.internal:9092",
    "ip-10-133-4-1 69.ec2.internal:9092"
  ],
  "confluent.topic": ""
}

Thanks,

Will

Hi @wmccracken. Are you using Confluent Cloud or Confluent Platform? It looks like you are deploying Confluent Platform on AWS.

@vnadkarni - I’ve deployed Confluent Platform on AWS. The issue was a missing setting for reporter.bootstrap.servers. After setting that (rough config sketch at the end of this reply), the connector starts and I’m able to send a message to ServiceNow as expected. However, after one message goes through, the connector crashes when I send a second:

[2022-01-25 19:38:44,072] INFO json is configured (io.confluent.connect.formatter.json.JsonFormatter)
[2022-01-25 19:38:44,320] ERROR WorkerSinkTask{id=snow-dev-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: Failed after 4 attempts to send request to ServiceNow: null (org.apache.kafka.connect.runtime.WorkerSinkTask)
io.confluent.connect.utils.retry.RetryCountExceeded: Failed after 4 attempts to send request to ServiceNow: null
at io.confluent.connect.utils.retry.RetryPolicy.callWith(RetryPolicy.java:429)
at io.confluent.connect.utils.retry.RetryPolicy.call(RetryPolicy.java:337)
at io.confluent.connect.servicenow.rest.ServiceNowClientImpl.executeRequest(ServiceNowClientImpl.java:245)
at io.confluent.connect.servicenow.rest.ServiceNowClientImpl.doRequest(ServiceNowClientImpl.java:241)
at io.confluent.connect.servicenow.rest.ServiceNowClientImpl.put(ServiceNowClientImpl.java:176)
at io.confluent.connect.servicenow.ServiceNowSinkTask.put(ServiceNowSinkTask.java:58)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:560)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:323)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.client.ClientProtocolException
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at com.google.api.client.http.apache.v2.ApacheHttpRequest.execute(ApacheHttpRequest.java:71)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:996)
at io.confluent.connect.servicenow.rest.ServiceNowClientImpl.lambda$executeRequest$2(ServiceNowClientImpl.java:246)
at io.confluent.connect.utils.retry.RetryPolicy.lambda$call$1(RetryPolicy.java:337)
at io.confluent.connect.utils.retry.RetryPolicy.callWith(RetryPolicy.java:417)
... 16 more
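For completeness, here is roughly the shape of the connector config that now starts successfully. The only meaningful change is the added reporter.bootstrap.servers; hostnames and credentials are placeholders, and I ended up passing the broker lists as comma-separated strings in the REST payload. (I also dropped the top-level bootstrap.servers, which the sink doesn’t seem to need once the reporter setting is in place.)

{
  "name": "snow-dev",
  "connector.class": "io.confluent.connect.servicenow.ServiceNowSinkConnector",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "topics": "snow_incidents",
  "servicenow.url": "https://devxxxxxx.service-now.com/",
  "servicenow.table": "incident",
  "servicenow.user": "xxxx",
  "servicenow.password": "*************",
  "confluent.license": "",
  "confluent.topic.bootstrap.servers": "ip-10-133-5-8.ec2.internal:9092,ip-10-133-6-78.ec2.internal:9092,ip-10-133-4-169.ec2.internal:9092",
  "reporter.bootstrap.servers": "ip-10-133-5-8.ec2.internal:9092,ip-10-133-6-78.ec2.internal:9092,ip-10-133-4-169.ec2.internal:9092"
}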

-Will
