lensesio/secret-provider

I think the AWS secrets from Secrets Manager aren't parsed correctly and thus stay empty


Good day,

I was very glad with the solution you are creating, but unfortunately it doesn't work entirely correctly.

First up, I was struggling with the auth keys. I can't get the environment variables working, and for the credentials I had to change "config.providers.aws.param.aws.client.key=your-client-key" into "config.providers.aws.param.aws.access.key" to get the connection to AWS working.

Then when I changed connection.password into "${aws:ems-dev-eventmanagement-db-secret:password}", it unfortunately failed.

In the trace logging I found this part, which I think might be the problem, but I'm not sure:

[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << "{"ARN":"arn:aws:secretsmanager:eu-west-1:577868951105:secret:ems-dev-eventmanagement-db-secret-B0TWsl","CreatedDate":1.591854819443E9,"Name":"ems-dev-eventmanagement-db-secret","SecretString":"{"username":"eventmanagement_user","password":"xxxxxx","engine":"mysql","host":"dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com","port":"3306","dbname":"eventmanagement"}","VersionId":"2f3cc10f-b194-4a1f-bef5-523a03f68a70","VersionStages":["AWSCURRENT"]}" (org.apache.http.wire)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << HTTP/1.1 200 OK (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Date: Mon, 15 Jun 2020 05:47:08 GMT (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Content-Type: application/x-amz-json-1.1 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Content-Length: 491 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << Connection: keep-alive (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7 << x-amzn-RequestId: fef90d70-733b-470f-8987-17ad2c6f38a3 (org.apache.http.headers)
[2020-06-15 05:47:08,261] DEBUG Connection can be kept alive for 60000 MILLISECONDS (org.apache.http.impl.execchain.MainClientExec)
[2020-06-15 05:47:08,261] TRACE Parsing service response JSON (com.amazonaws.request)
[2020-06-15 05:47:08,261] DEBUG Connection [id: 7][route: {s}->https://secretsmanager.eu-west-1.amazonaws.com:443] can be kept alive for 60.0 seconds (org.apache.http.impl.conn.PoolingHttpClientConnectionManager)
[2020-06-15 05:47:08,261] DEBUG http-outgoing-7: set socket timeout to 0 (org.apache.http.impl.conn.DefaultManagedHttpClientConnection)
[2020-06-15 05:47:08,261] DEBUG Connection released: [id: 7][route: {s}->https://secretsmanager.eu-west-1.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50] (org.apache.http.impl.conn.PoolingHttpClientConnectionManager)
[2020-06-15 05:47:08,262] TRACE Done parsing service response (com.amazonaws.request)
[2020-06-15 05:47:08,262] DEBUG Received successful response: 200, AWS Request ID: fef90d70-733b-470f-8987-17ad2c6f38a3 (com.amazonaws.request)
[2020-06-15 05:47:08,262] DEBUG x-amzn-RequestId: fef90d70-733b-470f-8987-17ad2c6f38a3 (com.amazonaws.requestId)
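
For what it's worth: the SecretString in the response above is itself a JSON document, so the provider has to parse it a second time before it can resolve the password key. A quick way to inspect the raw value outside of Connect, assuming the AWS CLI and jq are available:

aws secretsmanager get-secret-value \
  --secret-id ems-dev-eventmanagement-db-secret \
  --region eu-west-1 \
  --query SecretString \
  --output text | jq -r .password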

I run Kafka Connect in Kubernetes with the following plugins and versions:

FROM confluentinc/cp-kafka-connect:5.5.0

COPY extras/mariadb-java-client-2.1.0.jar /usr/share/java/kafka/
COPY extras/mysql-connector-java-8.0.15.jar /usr/share/java/kafka-connect-jdbc
COPY extras/ojdbc8.jar /usr/share/java/kafka-connect-jdbc
COPY extras/secret-provider-0.0.2-all.jar /usr/share/java/

And the connector I'm trying to install:

curl -X PUT \
  http://ems-kafka-dev-cp-kafka-connect.ems-dev:8083/connectors/JdbcSourceConnectorEventManagement/config \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"timestamp.column.name": "modified_at",
"config.providers.aws.param.aws.access.key": "xxxxxxxxxx",
"config.providers.aws.param.aws.secret.key": "xxxxxxxxx",
"connection.password": "${aws:ems-dev-eventmanagement-db-secret:password}",
"tasks.max": "1",
"config.providers": "aws",
"table.types": "VIEW",
"table.whitelist": "trigger_recipe_event_view,recipe_business_rules_with_parameters_view,event_linked_active_recipes_view",
"mode": "timestamp",
"topic.prefix": "connected-keyless-",
"poll.interval.ms": "60000",
"config.providers.aws.param.aws.auth.method": "default",
"config.providers.aws.param.aws.region": "eu-west-1",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"config.providers.aws.class": "io.lenses.connect.secrets.providers.AWSSecretProvider",
"validate.non.null": "false",
"connection.attempts": "3",
"batch.max.rows": "100",
"connection.backoff.ms": "10000",
"timestamp.delay.interval.ms": "0",
"table.poll.interval.ms": "60000",
"connection.user": "${aws:ems-dev-eventmanagement-db-secret:username}",
"config.providers.aws.param.file.dir": "/connector-files/aws",
"connection.url": "jdbc:mysql://dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com:3306/eventmanagement",
"numeric.precision.mapping": "false"
}'

ulfox commented

Hi eblindeman,
Can you try moving the following options into the kafka-connect.properties file?

  • config.providers=aws
  • config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
  • config.providers.aws.param.aws.auth.method=credentials
  • config.providers.aws.param.aws.client.key=your-client-key
  • config.providers.aws.param.aws.secret.key=your-secret-key
  • config.providers.aws.param.aws.region=your-region

Next:

  • Restart Kafka Connect
  • Update your connector's configuration by removing the entries you moved to kafka-connect.properties (a trimmed sketch follows this list)
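
For example (a sketch, not tested here), the connector config from above would shrink to something like this, with only the config.providers.* entries removed; the remaining JDBC options such as the poll intervals stay as well and are omitted only for brevity:

{
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "connection.url": "jdbc:mysql://dev-ems.coollnhgrzes.eu-west-1.rds.amazonaws.com:3306/eventmanagement",
  "connection.user": "${aws:ems-dev-eventmanagement-db-secret:username}",
  "connection.password": "${aws:ems-dev-eventmanagement-db-secret:password}",
  "mode": "timestamp",
  "timestamp.column.name": "modified_at",
  "topic.prefix": "connected-keyless-",
  "tasks.max": "1"
}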

Note: the secret provider should be installed in a directory used by the Kafka Connect classpath loader, not the plugin path.

See my example for the Confluent Docker image (I mount the secret provider under /etc/kafka-connect/jars/secret-provider-0.0.2-all.jar):

version: '3'
services:
  cconnect:
    image: confluentinc/cp-kafka-connect:5.5.0
    container_name: cconnect
    network_mode: host
    environment:
      CONNECT_BOOTSTRAP_SERVERS: PLAINTEXT://10.15.3.1:9092
      CONNECT_REST_PORT: 58083
      CONNECT_GROUP_ID: "devconf"
      CONNECT_CONFIG_STORAGE_TOPIC: "connect-configs-conf"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect-offsets-conf"
      CONNECT_STATUS_STORAGE_TOPIC: "connect-statuses-conf"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "0.0.0.0"
      CONNECT_PLUGIN_PATH: /connectors
      CONNECT_CONFIG_PROVIDERS: aws
      CONNECT_CONFIG_PROVIDERS_AWS_CLASS: io.lenses.connect.secrets.providers.AWSSecretProvider
      CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_AUTH_METHOD: credentials
      CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY: ***
      CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY: ***
      CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_REGION: eu-north-1


      KAFKA_JMX_OPTS: >
          -Dcom.sun.management.jmxremote.port=59584
          -Dcom.sun.management.jmxremote.authenticate=false
          -Dcom.sun.management.jmxremote.ssl=false
    volumes:
      - ./secret-provider-0.0.2-all.jar:/etc/kafka-connect/jars/secret-provider-0.0.2-all.jar

Good morning,

I just did exactly that. :-)

And it works like a charm, thanks!

Using the following env variables:

    - name: CONNECT_PLUGIN_PATH
      value: "/usr/share/java/"
    - name: CONNECT_CONFIG_PROVIDERS
      value: "file,aws"
    - name: CONNECT_CONFIG_PROVIDERS_FILE_CLASS
      value: "org.apache.kafka.common.config.provider.FileConfigProvider"
    - name: CONNECT_CONFIG_PROVIDERS_AWS_CLASS
      value: "io.lenses.connect.secrets.providers.AWSSecretProvider"
    - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_AUTH_METHOD
      value: "credentials"
    - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: "connect-user-secretmanager"
          key: "access-key"
    - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: "connect-user-secretmanager"
          key: "secret-key"
    - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_REGION
      value: "eu-west-1"
    - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_FILE_DIR
      value: "/connector-files/aws" 

Greetings, Erik

Solved using env variables.

Hi, I'm trying to create a connector using the secret provider with AWS Secrets Manager, but when I send the POST it returns this error: {"error_code":500,"message":"Failed to look up key [password] in secret [bd-ccs-bdv-qa}]"}.

Here is the block where I put the secret provider user/password:

"connection.password": "${aws:bd-ccs-bdv-qa:password}",
"connection.user": "${aws:bd-ccs-bdv-qa:username}",

Could someone help me?

Does the secret bd-ccs-bdv-qa exist in AWS Secrets Manager, and does it have the key password?
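
You can double-check from the command line, assuming the AWS CLI is configured with the same credentials the provider uses; this prints the raw SecretString:

aws secretsmanager get-secret-value \
  --secret-id bd-ccs-bdv-qa \
  --query SecretString \
  --output text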

Yes, it exists, as shown in the secret's value:

{
"password": "my_db_crypt_password",
"username": "my_db_user"
}

and the secret's name:

Secret name
bd-ccs-bdv-qa

OK, and how do you connect to Secrets Manager? With an AWS key pair or via another means?

I did that by means of env variables for Connect:

        - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: connect-user-secretmanager
              key: access-key
        - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: connect-user-secretmanager
              key: secret-key

I found that there were some issues in the AWS IAM policy.

Caused by: com.amazonaws.services.secretsmanager.model.AWSSecretsManagerException: User: arn:aws:iam::account_id:user/bkofin-kafka-connect-secretsmanager is not authorized to perform: secretsmanager:GetSecretValue on resource: bd-ccs-bdv-qa (Service: AWSSecretsManager; Status Code: 400; Error Code: AccessDeniedException; Request ID: c0f6cca8-73d2-44ad-ae37-a6d25a686c9b)

I fixed that and now it works properly. Thank you very much!
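
For anyone hitting the same AccessDeniedException, a minimal policy sketch that grants the permission the error names (region and account ID are placeholders, and the trailing wildcard covers the random suffix Secrets Manager appends to secret ARNs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:bd-ccs-bdv-qa-*"
    }
  ]
}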

You're welcome! Although I didn't do a lot :-)

I am trying to use the IAM role assigned to the pod instead of credentials (default mode). I have copied the jar file into /usr/share/confluent-hub-components/. The IAM role is attached to the pod (verified by fetching a secret value from within the pod). When I remove the CONNECT_CONFIG_PROVIDERS_AWS_CLASS environment variable, my Connect cluster starts and shows the added plugins:

[2022-02-03 00:00:50,090] INFO Added plugin 'com.upstart.dataeng.kafka.connect.transforms.util.ConvertEpochPG$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'com.upstart.dataeng.kafka.connect.transforms.util.ConvertEpochPG$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'com.upstart.dataeng.kafka.connect.transforms.util.EpochtoTimestamp$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'com.upstart.dataeng.kafka.connect.transforms.util.EpochtoTimestamp$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'io.lenses.connect.secrets.providers.AzureSecretProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'io.lenses.connect.secrets.providers.VaultSecretProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'io.lenses.connect.secrets.providers.AWSSecretProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'io.lenses.connect.secrets.providers.Aes256DecodingProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
[2022-02-03 00:00:50,090] INFO Added plugin 'io.lenses.connect.secrets.providers.ENVSecretProvider' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)

Once I add CONNECT_CONFIG_PROVIDERS_AWS_CLASS, Kafka Connect fails to load and restarts.

JAR: 2.1.6-all
Kubernetes: OKD 3.11
Kafka Connect: confluentinc/cp-kafka-connect:6.2.1

Below are my environment variables:

              env:
                - name: CONNECT_PLUGIN_PATH
                  value: /usr/share/confluent-hub-components/
                - name: CONNECT_CONFIG_PROVIDERS
                  value: aws
                - name: CONNECT_CONFIG_PROVIDERS_AWS_CLASS
                  value: io.lenses.connect.secrets.providers.AWSSecretProvider
                - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_AUTH_METHOD
                  value: default
                - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY
                  value: ""
                - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY
                  value: ""
                - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_FILE_DIR
                  value: /tmp/aws
                - name: CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_REGION
                  value: us-east-1

Please let me know if there is a fix.
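
If the note earlier in this thread applies here too, the provider jar may need to sit on the worker's classpath (e.g. /etc/kafka-connect/jars, as in the Docker example above) rather than on the plugin path it is in now. A hypothetical pod-spec fragment along those lines (volume name and host path are made up):

containers:
  - name: cp-kafka-connect
    image: confluentinc/cp-kafka-connect:6.2.1
    volumeMounts:
      - name: secret-provider-jar
        mountPath: /etc/kafka-connect/jars/secret-provider-2.1.6-all.jar
volumes:
  - name: secret-provider-jar
    hostPath:
      path: /opt/jars/secret-provider-2.1.6-all.jar   # hypothetical host location
      type: File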