Connector config, source control, and env-specific variables

We check our connector configuration files into peer-reviewed source control so there’s less room for manual errors.

We like to use the same connector config for all environments, but some configuration properties need to vary by environment (for example, where to find the brokers, where to find schema registry, etc.).

Our workaround for that has been to use the FileConfigProvider and manage those env-specific values via external files.
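For concreteness, here's roughly what that setup looks like (the file paths, keys, and the connector itself are just placeholders). The worker registers the provider:

```properties
# Worker config: register the FileConfigProvider under the alias "file"
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
```

and the connector config then references keys in an external, per-environment properties file:

```json
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "${file:/etc/connect/env.properties:db.url}",
    "connection.password": "${file:/etc/connect/env.properties:db.password}"
  }
}
```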

This works, but it makes troubleshooting cumbersome. When there's a problem, it helps to debug with a connector that inlines all of its configuration (no interpolated variables), but I'm not aware of any tool that generates an inline version automatically. I'd considered building one, but that raises questions of where the tool would live, who would have access, and so on.
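To make the idea concrete, such a tool wouldn't need to be much more than a small script along these lines. This is only a sketch, not an existing tool; the file layout and names are assumptions:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: inline ${file:<path>:<key>} placeholders in a connector
config JSON so the fully-resolved config can be inspected while debugging.
Only suitable for non-secret values, for obvious reasons."""
import json
import re
import sys

PLACEHOLDER = re.compile(r"\$\{file:([^:}]+):([^}]+)\}")

def load_properties(path):
    """Parse a simple Java-style .properties file into a dict."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", "!")):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def inline(value):
    """Replace every ${file:path:key} occurrence with the value from that file."""
    def resolve(match):
        path, key = match.group(1), match.group(2)
        return load_properties(path)[key]
    return PLACEHOLDER.sub(resolve, value)

if __name__ == "__main__":
    # Usage: ./inline_config.py connector.json > connector-inlined.json
    connector = json.load(open(sys.argv[1]))
    connector["config"] = {k: inline(v) if isinstance(v, str) else v
                           for k, v in connector["config"].items()}
    json.dump(connector, sys.stdout, indent=2)
```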

So, taking a step back, I wonder if others have solved this problem a different way? Perhaps you maintain a separate connector config (in source control) for each environment, reducing the need for interpolation to true secrets only (DB credentials and the like would still be interpolated, but server addresses wouldn't)?
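For example (hosts and names made up), a per-environment file checked into source control might inline everything except the credential:

```json
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:postgresql://db.prod.internal:5432/orders",
    "connection.user": "connect",
    "connection.password": "${file:/etc/connect/secrets.properties:db.password}"
  }
}
```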

Or perhaps there's some technique to see the result of connector config interpolation? (I doubt it, since Kafka Connect Security Basics | Confluent Documentation says: "Only when the connector starts does it transiently resolve and replace variables in-memory. Secrets are never persisted in connector configs, logs, or in REST API requests and responses.")

My issue here is that I'm leaning on the FileConfigProvider for env-specific connector configuration variables that don't actually need to be secret. Is there a way to resolve them on the fly?

Another idea would be to write a program that dynamically generates a connector config, producing different output depending on which environment the config is for. Is anyone doing this?
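A minimal sketch of what I have in mind, assuming the per-environment differences can be expressed as a small dictionary of overrides (all names and values here are illustrative):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: generate a connector config per environment by merging
environment-specific values into a shared base template."""
import json
import sys

BASE = {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    # Secrets stay as config-provider placeholders even in the generated output.
    "connection.password": "${file:/etc/connect/secrets.properties:db.password}",
}

ENVIRONMENTS = {
    "dev":  {"connection.url": "jdbc:postgresql://db.dev.internal:5432/orders"},
    "prod": {"connection.url": "jdbc:postgresql://db.prod.internal:5432/orders"},
}

if __name__ == "__main__":
    # Usage: ./generate_config.py prod > jdbc-sink-prod.json
    env = sys.argv[1]
    config = {"name": f"jdbc-sink-{env}", "config": {**BASE, **ENVIRONMENTS[env]}}
    json.dump(config, sys.stdout, indent=2)
```

The generated config could then either be committed per environment or applied straight to the Connect REST API from CI.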
