Confluent Cloud & Private Link

Hello all,

I’m wondering if anyone has experience running Confluent Cloud with PrivateLink? Any war stories? I’m looking for anything related to using the Connect framework, Schema Registry, and ksqlDB with it. I’m aware that you may need to spin up your own Connect, Schema Registry, or ksqlDB services when using Confluent Cloud with PrivateLink. Is this the case?

Thanks!
Chris


Hello Chris!

Your assumption is correct. Today, you will have to roll your own Connect/ksqlDB clusters if you wish to leverage PrivateLink. I have an example diagram of how this may work. Keep in mind PrivateLink is unidirectional: only clients inside your VPC can connect to Confluent Cloud, not the other way around. This matters because fully managed connect tasks would need to reach into your VPC in order to connect to your databases, etc. For that reason, you should run your own Connect cluster for sources/sinks in your private network, and you can leverage fully managed connectors for workloads that already live in the cloud today. (Ignore the label that says VPC peering below; this will work with PL as well.)
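For the self-managed route, here is a rough sketch of a Connect distributed worker config pointed at a PrivateLink bootstrap. The endpoint name is taken from the examples later in this thread, and the group id, credentials, and topic names are placeholders, not from a real cluster:

```properties
# connect-distributed.properties (sketch; all values are placeholders)
bootstrap.servers=lkc-n02w6-43860.us-east-1.aws.glb.confluent.cloud:9092
group.id=my-private-connect-cluster

# Confluent Cloud requires SASL_SSL with an API key/secret
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Internal topics for a distributed worker
offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
config.storage.topic=connect-configs
config.storage.replication.factor=3
status.storage.topic=connect-status
status.storage.replication.factor=3
```

The key point is just that the worker runs inside your VPC, so its connections to both the Kafka bootstrap and your databases originate from your side of the PrivateLink.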


Hi Chris, adding to what @dwittekind said, you’ll also want to understand how PrivateLink endpoints work with DNS. In particular, clients of PrivateLink clusters need access to a public DNS resolver for the initial lookup. Let’s start with the current naming scheme for clusters accessed over PrivateLink:

Bootstrap:
$lkc-$nid.$region.$cloud.glb.confluent.cloud:9092
example: lkc-n02w6-43860.us-east-1.aws.glb.confluent.cloud:9092

The bootstrap returns metadata about brokers, which have their own naming scheme:
e-$last2octets-$zoneid-$nid.$region.$cloud.glb.confluent.cloud:9092
example: e-0013-az1-43860.us-east-1.aws.glb.confluent.cloud:9092

When resolving these endpoints, you’ll need access to Confluent’s global DNS resolvers. They tell us these names are actually CNAMEs: the returned names have glb removed, the dash between $lkc and $nid becomes a dot for the bootstrap, and the dashes between e-$last2octets, $zoneid, and $nid become dots for the brokers:

Bootstrap Example: lkc-n02w6.43860.us-east-1.aws.confluent.cloud:9092
Broker Example: e-0013.az1.43860.us-east-1.aws.confluent.cloud:9092

(notice the removed glb, and that the network id is now its own subdomain)
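To make the rewrite concrete, here’s a minimal Python sketch of the transformation. The rule is inferred from the two example names above, not from any official Confluent spec, and ports are omitted:

```python
def glb_to_internal(name: str) -> str:
    """Rewrite a Confluent Cloud glb hostname into its internal CNAME target.

    Rule inferred from the examples above: keep the first dash (inside
    'lkc-xxxxx' or 'e-0013'), turn every later dash into a dot, and drop
    the 'glb' label from the domain.
    """
    host, _, domain = name.partition(".")
    tokens = host.split("-")
    # 'lkc-n02w6-43860'  -> 'lkc-n02w6' + '.43860'
    # 'e-0013-az1-43860' -> 'e-0013'    + '.az1.43860'
    new_host = "-".join(tokens[:2]) + "." + ".".join(tokens[2:])
    return new_host + "." + domain.replace(".glb.", ".", 1)

print(glb_to_internal("lkc-n02w6-43860.us-east-1.aws.glb.confluent.cloud"))
# -> lkc-n02w6.43860.us-east-1.aws.confluent.cloud
```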

You’re then expected to host SOA records for *.$nid.$region.$cloud.confluent.cloud and each of the zonal endpoints.
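For example, using the network id 43860 in us-east-1 from the names above, a BIND-style private zone might look roughly like this. The endpoint IPs are made-up placeholders, and your actual record layout depends on how your VPC endpoints are set up:

```text
; Private zone sketch: 43860.us-east-1.aws.confluent.cloud
$ORIGIN 43860.us-east-1.aws.confluent.cloud.
*       IN A 10.0.1.10   ; regional VPC endpoint (placeholder IP)
*.az1   IN A 10.0.1.10   ; zonal endpoint in az1 (placeholder IP)
*.az2   IN A 10.0.2.10   ; zonal endpoint in az2 (placeholder IP)
```

The wildcard catches the bootstrap name and any broker name in that network, while the zonal wildcards route the e-$last2octets.$zoneid broker names to the endpoint in the matching zone.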

Here’s an example of the most common approach:

After the initial DNS request from the client:

  1. The client resolves the glb name; the query is forwarded from your local DNS resolver to Confluent’s global public DNS, which returns the CNAME without glb.
  2. The client resolves that CNAME against the SOA records hosted in your local DNS server for the PL endpoints.

Thanks for the architecture diagram! May I ask what tool you used to create it?