Add support for secrets in Application parameters
niqdev opened this issue · 81 comments
It would be nice to have direct support for native secrets in the Application definition.
In the following use case, when referencing a public chart, it would be very handy to be able to reference a secret directly.
# external-dns.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
spec:
  project: kube-do
  source:
    repoURL: https://github.com/helm/charts.git
    path: stable/external-dns
    helm:
      parameters:
      - name: provider
        value: digitalocean
      - name: digitalocean.apiToken
        value: $(DIGITAL_OCEAN_TOKEN)
      - name: domainFilters
        value: $(EXTERNAL_DNS_DOMAIN)
      # suggested implementation
      secrets:
      - name: secrets-do
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-do
# secrets-do.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: secrets-do
type: Opaque
stringData:
  config.yaml: |-
    # this could be overridden for example during bootstrap using
    # helm template . --values values.yaml --set digitalOceanToken=abc
    DIGITAL_OCEAN_TOKEN: {{ digitalOceanToken }}
    EXTERNAL_DNS_DOMAIN: {{ externalDnsDomain }}
Resources related to external-dns, for more details:
- docs: see Deployment env
- helm chart values.yaml
- helm chart DO_TOKEN env declaration
I would also suggest adding a secretsUpdatePolicy to monitor the secrets, e.g. update, rotation, delete...
- none: don't do anything if the secrets are updated
- restart: restart the application if the secrets change
Contributing guide if you'd like to raise a PR: https://argoproj.github.io/argo-cd/CONTRIBUTING/
Argo CD has the https://argoproj.github.io/argo-cd/user-guide/auto_sync/ feature, which covers secretsUpdatePolicy: if auto-sync is enabled and a secret changes, Argo CD will go ahead and push the updated manifests.
We would have to make Application CRD changes and several code changes in the API server/app controller:
CRD changes
Argo CD supports string parameters for helm and jsonnet (plugin parameters are WIP). In addition to value we need to support valueFrom:
# helm specific config
helm:
  # Extra parameters to set (same as setting through values.yaml, but these take precedence)
  parameters:
  - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: username

jsonnet:
  # A list of Jsonnet External Variables
  extVars:
  - name: foo
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: username
  # A list of Jsonnet Top-level Arguments
  tlas:
  - code: false
    name: foo
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: username
I think it is ok to add support for helm first, and jsonnet and plugins later.
Code changes:
Manifest generation is performed by argocd-repo-server, which does not have permission to access the k8s API for security reasons. So parameter values should be resolved and passed to argocd-repo-server by argocd-api-server and argocd-application-controller.
The argocd-application-controller and argocd-api-server request manifest generation using the GenerateManifest gRPC call. There are only three places in the code where we execute the GenerateManifest method and pass an instance of ApplicationSource which includes parameters:
- https://github.com/argoproj/argo-cd/blob/master/controller/state.go#L113
- https://github.com/argoproj/argo-cd/blob/master/server/application/application.go#L193
- https://github.com/argoproj/argo-cd/blob/master/util/argo/argo.go#L379
To support valueFrom we would have to add e.g. a ResolveValuesFrom method which replaces each valueFrom with the actual value in the specified ApplicationSource instance, and use it before executing GenerateManifest.
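To make that concrete, here is a minimal before/after sketch of what such a ResolveValuesFrom step would produce (the resolved token value is invented for the example):

# Before resolution, as stored in the Application spec:
helm:
  parameters:
  - name: digitalocean.apiToken
    valueFrom:
      secretKeyRef:
        name: secrets-do
        key: apiToken

# After resolution, as passed to argocd-repo-server, assuming
# the Secret secrets-do contains apiToken: abc123
helm:
  parameters:
  - name: digitalocean.apiToken
    value: abc123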
Thanks for the detailed info! I'm not familiar with go and the codebase, but I'll try to look into it and raise a PR
@alexmt I've finally got a bit of time and started to look into this, but I need a bit of help.
Here are a bunch of changes, but I already have a few doubts:

- In manifests/crds/application-crd.yaml there are 6 references to helm. Is it enough to add the valueFrom to the spec, or should I copy-paste it also into status, operation, history, ...? Is the OpenAPI structure correct? I will add the description if that's correct.
- I edited pkg/apis/application/v1alpha1/types.go; how do I regenerate the models, or what is the equivalent of operator-sdk generate k8s? Because running make codegen overrode all my changes.
- repoClient.GenerateManifest takes an ApplicationSource as a parameter in argo-cd/controller/state.go and server/application/application.go, but doesn't that already contain the unmarshalled ValueFrom type? I don't understand where I have to add/implement ResolveValuesFrom.
- Also, in my mind, I was expecting to find somewhere in the code something that creates the pod/deployment/service (or whatever resource type is declared in the application), where I could pass the parameters and use the native &corev1.Secret{} API to retrieve the secrets and attach them somehow. Am I completely wrong on this? The only place where I found something similar is pkg/client/clientset/versioned/typed/application/v1alpha1/application.go, but it's code-generated and I'm not sure if it is the right place. What am I missing?

Do you have an architectural diagram of how client/server/api and all the moving parts are connected? Unfortunately I'm also a Golang newbie and I'm having a bit of difficulty understanding how everything works.
Thanks in advance for the help
Hello @niqdev, thank you for working on it!

- The manifests/crds/application-crd.yaml is auto-generated. You can use make codegen to regenerate everything, including the CRD yamls. The dependencies are described here: https://argoproj.github.io/argo-cd/CONTRIBUTING/
- Regarding repoClient.GenerateManifest: one of the parameters of this method is ApplicationSource from types.go, which includes ApplicationSourceHelm, ApplicationSourceKustomize etc. So once you add ValueFrom fields to the data structures, the same fields will be available in repoClient.GenerateManifest.
- The repoClient.GenerateManifest is responsible for passing parameter values to the config management tool, which produces the yaml/json resource definitions. So we just need to make sure that the repoClient.GenerateManifest input has resolved values.
Hey @niqdev, did you make any progress on this? Otherwise I am thinking about giving it a shot. But I am not a Go expert either ;-)
Hi @patst,
unfortunately I don't have any time to make concrete progress at the moment, even if I'd really like to.
You are more than welcome to pick it up! So far all the changes that I've made are in the branch linked above. Let me know if you need more info.
Thanks
Hi,
Just a heads up, I'm starting to tackle this using the same strategy as the helm-operator from WeaveWorks. Both secrets and configmaps would be supported.
My proposal is to introduce something like:
valuesFrom:
- secretKeyRef: ...
- configMapKeyRef: ...
Since we already have support for passing inline values, the idea is to resolve any configmap/secrets that need to be fetched and merge them with the inline values before passing it off to the repo server. By doing that, it keeps the actual code that needs to change minimal.
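Spelled out, a hypothetical source section under this proposal could look like the following (field names mirror the snippet above and are assumptions, not an implemented API; object names are invented):

source:
  repoURL: https://github.com/helm/charts.git
  path: stable/external-dns
  helm:
    # inline values, as supported today
    values: |
      provider: digitalocean
    # proposed: fetched and merged with the inline values
    # before the source is passed to the repo server
    valuesFrom:
    - secretKeyRef:
        name: external-dns-secrets
        key: values.yaml
    - configMapKeyRef:
        name: external-dns-config
        key: values.yaml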
Secrets/ConfigMaps would need to exist in the same namespace as ArgoCD which does present a problem; you could potentially reference other people's secrets/configmaps since Argo runs with the same service account.
Working around that seems like it'd require adding another resource type to ArgoCD which could then use argo's RBAC, but I'd like to hear someone's opinion that has more experience here.
Are we OK with the secret/configmap issue for an MVP with an explicit carveout that secrets prefixes with "argocd-" would be disallowed to avoid leaking argo-cd related secrets?
First off, I do understand the motivation why some may feel this feature is necessary. Many Helm charts have a fundamental problem which allows secret values / sensitive information to be held in values.yaml files, essentially making values.yaml files sensitive objects. However, it's not Argo CD's responsibility to solutionize on what I consider a bad pattern or serious problems with the helm charts.
A well written chart should allow existing secrets to be referenced in the chart, rather than containing the sensitive information directly in the values.yaml. Here is a good example of this done properly:
https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml#L113
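In values.yaml terms, the pattern boils down to something like this (key names are illustrative; the linked chart exposes existingSecret for the same purpose):

# Bad: sensitive value committed to git inside values.yaml
# postgresqlPassword: s3cr3t
# Good: only reference a Secret that is created out-of-band
# (operator, terraform, vault sync, ...)
existingSecret: postgres-credentials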
Retrieving values from configmaps (#1903) and secrets in the argocd control plane namespace is straying into the territory of config management, which is something we intentionally eschew from Argo CD.
This proposal (and #1903) is essentially making Argo CD a centralized configuration/secret store, of which all applications can then reference secrets from. At this point, we would additionally need to implement RBAC rules on what applications are allowed to reference secrets.
Are we OK with the secret/configmap issue for an MVP with an explicit carveout that secrets prefixes with "argocd-" would be disallowed to avoid leaking argo-cd related secrets?
This is a perfect illustration of the type of problems we face if we go down this route. It is a slippery slope. Today, we already allow users to use arbitrary secrets in the argocd namespace to hold cluster credentials. These secrets do not necessarily start with the prefix argocd-. With this feature, these secrets would suddenly become available to developers, who could simply reference them in their application and obtain cluster/git credentials.
A second issue with these proposals is that it is introducing yet another source of truth of the manifests, which is different than git. Today Argo CD provides two sources of truth which affect the manifests generation:
- The contents of the Git repo.
- Parameters overrides set in the app spec (e.g. helm parameters, inline value contents).
Setting aside the additional complexity of introducing another lookup as part of manifest generation, being able to reference configmaps and secrets in the argocd namespace adds yet another source of truth that needs to be managed independently, which is antithetical to the GitOps model.
Lastly, these configmaps and secrets which are stored in the argocd control plane are only under the control of an argocd operator, and outside the control of an application developer. Referencing these secrets/configmaps would require coordination between developers and operators, violating the separation of concerns.
For these reasons, I do not believe this issue or #1903 are ones that Argo CD should pursue.
I'm starting to tackle this using the same strategy as the helm-operator from WeaveWorks. Both secrets and configmaps would be supported.
The reason why this works for Helm Operator, but doesn't for Argo CD, is because these secrets are co-located with the HelmRelease instance, and are protected with the same Kubernetes namespace level RBAC. This means it is multi-tenant by default. Whereas in the case of Argo CD, the secrets are stored in the control plane namespace, not to mention mixed alongside Argo CD specific secrets, with zero RBAC protection and nothing stopping a developer from exposing the control plane secrets.
@jessesuen So beyond referencing secrets in a chart, where I actually agree that a correctly designed chart will allow you to do something like existingSecret, the actual use case that I personally have is generating a partial config during cluster creation via Terraform for use in some charts.
For example, I do not necessarily know the ARN of an IAM Role ahead of time if I'm trying to re-use the same repo to manage multiple clusters and need to reference that role either via kiam, or the new IAM for Service Accounts on EKS. My solution today has been to create a ConfigMap as part of Terraform that specifies just the parts that Terraform has knowledge of; the rest continues to be defined in git with the application.
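As an illustration, a hypothetical ConfigMap of that kind (all names and values are invented for the example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config   # created by Terraform at cluster-creation time
  namespace: argocd
data:
  # known only after Terraform has created the IAM role
  externalDnsRoleArn: arn:aws:iam::123456789012:role/external-dns
  clusterName: prod-us-east-1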
WDYT?
@jessesuen thanks for your feedback, I 100% agree with
Many Helm charts have a fundamental problem which allows secret values / sensitive information to be held in values.yaml files, essentially making values.yaml files sensitive objects. However, it's not Argo CD's responsibility to solutionize on what I consider a bad pattern or serious problems with the helm charts.
but even if it's not Argo CD's responsibility, I personally think that, like with everything, there must be a compromise. At work we are using Argo CD, managing the secrets with aws-secret-operator and IRSA using our own charts - no issues so far. So I'll give you the use case for why I've opened this issue...
I want to run on my DigitalOcean cluster (which doesn't have any out-of-the-box solution to manage secrets like SSM) a bunch of applications/MVPs/POCs with very low security concerns, and unfortunately I've encountered this issue.
I wrote for fun an operator inspired by the former that maps CRs to native secrets, and I'm using external-dns, which forces me to override the apiToken. At this point the right solution would be to fork the chart, open a PR and probably risk that all my changes are rejected (I've already had issues with my PRs not being merged because they got stale, or simply because the proposed changes, in this case, would be too invasive). I would then be forced to maintain my own forked chart - which I don't wanna do.
Given this use case and the fact that I still wanna be 100% fully declarative using Argo CD, I would be happy to accept/allow security risks - if that's the concern - and I'm sure that other people have other reasons or use cases.
WDYT and what would be your solution in situations like this one?
I don't think the discussion is closed on this yet, I think we're monitoring it because there is interest. @niqdev we document solutions here, do you want to add https://github.com/mumoshu/aws-secret-operator?
https://argoproj.github.io/argo-cd/operator-manual/secret-management/
@jessesuen Putting some of the ideas that we were throwing around at KubeCon in here so they don't get lost:
Right now, the problem with the current proposal is that for multi-tenant solutions, it would be horribly insecure since applications would have access to config/secrets that they shouldn't necessarily have access to.
One potential way to solve this could be to enhance the custom tooling/plugins that you can provide to ArgoCD by providing the entirety of the application manifest to it via some file (manifest.json/yaml) which would then allow the custom plugin to combine the values in the manifest itself with values sourced elsewhere (configmaps, secrets, etc..).
That allows you to accomplish the proposal here, but punts on the security aspect by requiring the user to implement this themselves. One potential avenue that comes to mind is signing an app manifest, then using a plugin that verifies the provenance of your application before allowing it access to the requested secrets.
The actual functionality that would be needed for this seems to be minimal on the ArgoCD side of things; basically, just provide the entire app manifest to the plugin either via a file or environment variable that the user can parse and do whatever they want with.
We probably can/should then discuss how we extract some of the core functionality of Helm/Kustomize/etc.. out into plugins that can be iterated on independently of ArgoCD itself.
Looking forward to having this functionality!
Thanks for your work, guys
@jessesuen @alexec Just circling back to this - would something like what was proposed above be something that you'd be interested in / would accept into ArgoCD?
Basically, make the entire application manifest as either a file or environment var (not sure which atm) available to custom tools that can then parse it and do whatever they want with those values.
In addition, we would re-order the manifest generation in argo-cd/reposerver/repository/repository.go (lines 353 to 361 in 57eeaa4) so that custom plugins run before the built-in tools.
That should solve my personal use case, since I could easily have my own helm plugin that does something like envsubst with the values.yaml inlined into the application manifest. The actual values could still be provided via a configmap, but it becomes my plugin's responsibility to provide all of that, not Argo's.
Using a projected serviceAccountToken is one way to inject config/secrets dynamically.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection
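From the linked docs, a trimmed sketch of that projection (image, audience, and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-projected-token
spec:
  serviceAccountName: my-sa
  containers:
  - name: app
    image: nginx   # illustrative
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: sa-token
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: vault        # token scoped to a specific audience
          expirationSeconds: 7200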
Another idea for securely doing this:
Each project can have a Kubernetes ServiceAccount assigned to it (or created dynamically using the project name as part of the convention). When ArgoCD attempts to retrieve either ConfigMaps or Secrets, it would do so using the token from the associated service account.
Access to secrets or configmaps in any given namespace can then be governed by plain old Kubernetes RBAC.
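Concretely, a rough sketch of what that could look like with plain RBAC (all names are illustrative; a per-project ServiceAccount is not an existing Argo CD convention):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-project-team-a
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-project-team-a-values
  namespace: team-a
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  resourceNames: ["team-a-values"]   # only the values objects this project may read
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-project-team-a-values
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-project-team-a-values
subjects:
- kind: ServiceAccount
  name: argocd-project-team-a
  namespace: team-a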
Either way, I think we could potentially implement the ConfigMap part of this proposal and punt on the Secret part in order to avoid tackling the security implications right now.
Any agreement or progress on this topic? I'm seeing a couple of related issues without a solution, and I was very surprised to find out that there is no way to merge values from a secret. Pulling and customizing charts is a tedious workaround to replace values and create encrypted secrets to push applications to git.
Another follow up here -- @dcherman's use-case regarding seeding configuration from ConfigMaps created by Terraform is exactly my problem coming from FluxV1 / helm-operator to ArgoCD. This is particularly painful for managing 3rd party charts which need AWS IAM Role ARNs when you're dealing with many environments / clusters. Looks like that will be one of my largest hangups in the switch from Flux => Argo.
Retrieving values from configmaps (#1903) and secrets in the argocd control plane namespace is straying into the territory of config management, which is something we intentionally eschew from Argo CD.
Maybe it should be allowed to inject a cm/secret from the application destination namespace then? I understand that this may create a circular dependency in case the namespace is created by the app itself, but this would just become a known (and clear by its nature) bootstrap issue.
Are there any updates or established best practices on this issue? I think it has been a big pain point in adopting ArgoCD for a long time. I'd be willing to contribute if there are any plans for addressing this.
no other choice than to write your own Secret in a separate file ... but not in the Argo Application CRD
I tackled this issue by using an optional secret mount into ArgoCD.
Wrote a blog post about it here: https://utkuozdemir.org/blog/argocd-helm-secrets/
Sample repo: https://github.com/utkuozdemir/argocd-helm-secret-values-example
I solved my problem in the way I described in my comment above, but I think this is the single biggest thing missing in ArgoCD. I heard the same complaint from other people, with them ending up choosing Flux over Argo for this sole reason.
I would therefore suggest addressing this issue with priority. Something as simple as valuesFrom -> secretRef would solve the problem for good for a lot of people. I'd be willing to help if there are any plans to address this.
Extremely needed indeed!
In the existing ArgoCD secret management documentation, while it mentions several popular Kubernetes secrets solutions... it doesn't really have any guidance or instructions for how to get them to work specifically with ArgoCD.
It is at least also perhaps worth noting in that document that the argoproj-labs/argocd-vault-plugin supports not only HashiCorp Vault, but also GCP Secret Manager, AWS Secrets Manager, Azure Key Vault, and IBM Cloud Secrets Manager.
I found that there was a pretty good write up on using external secrets with ArgoCD in this Kubernetes External Secrets article. It also had a few helpful opinions based on their experience with other solutions:
There were some minor disadvantages:
- We can't install ArgoCD via the operator because we need a customized image that includes KSOPS, so we have to maintain our own ArgoCD image.
And there was one major problem:
- Using GPG-encrypted secrets in a git repository makes it effectively impossible to recover from a key compromise. Once a private key is compromised, anyone with access to that key and the git repository will be able to decrypt data in historical commits, even if we re-encrypt all the data with a new key. Because of these security implications, we decided we would need a different solution (it's worth noting here that Bitnami Sealed Secrets suffers from effectively the same problem).
Both external secrets and argoproj-labs/argocd-vault-plugin are looking to solve the same problem, but in different ways:
- argoproj-labs/argocd-vault-plugin works as a plugin to Argo CD or as a CLI tool; it helps inject secrets into Kubernetes Secret manifests that carry special annotations, finding them and then updating the manifests in place using a templating system.
- external secrets makes use of Custom Resources and an Operator to generate a Kubernetes Secret from the provided Custom Resource.
A valuesFromRef loading the values from a secret located in the application's namespace would just perfectly solve the issue for most use cases.
One thing to consider and an argument for handling this in Argo is the case where the original secret is in a separate k8s cluster (like a management cluster) but the target cluster for the Application is a production/dev/test cluster. Having the ability to look up values (and ideally keeping them in sync) would be very valuable in larger/distributed deployments.
any progress on this issue?
There was some discussion above about pushing the problem to plugins so that folks can build a solution that exactly fits their use case and security risk tolerances. I think that could be done today using the env block in the plugin source spec. You could pass the secret ref via an env var and then fetch the secret in your plugin. The plugin sidecar would need to mount a service account token to give it k8s permissions.
I share Jesse's discomfort with a "native" solution. But I also recognize that the lack of that solution will push some folks to relatively hacky/risky workarounds.
If anyone feels there is enough here to go ahead and define a solution, I'd be eager to read a proposal.
@crenshaw-dev
I don't get the difference here. Other tools like argo-workflows already support the valueFrom reference.
I'd love to have at least a similar approach to things within one company...
In my case, I have to provide a VAULT_TOKEN to my plugin. If I cannot supply the token from an existing secret, the whole app-of-apps idea doesn't work for me.
Also, the repetition does not scale well: defining repository.env once, instead of repeating it for every application, would reduce the code by about 5 lines per Application.
See the following references:
@Moep90 The big difference between those two is that Argo Workflow runs workflows using a specific service account, so you can easily restrict access to secrets/resources using plain Kubernetes RBAC. ArgoCD currently runs all of the manifest generation using a single service account which would make this very insecure for multi-tenant usage.
The two main workarounds/proposals that I made in previous comments were:
-
Generate a service account per project which is used during manifest generation. By doing that, you can use plain RBAC to restrict access to specific secrets/configmaps which should enable multi-tenancy similar to argo workflows.
-
Re-order the manifest generation so that custom plugins are run before built-in ones. While that doesn't solve the issue, it would allow people to write their own plugin to solve this use case while still being able to use the built-in structs for helm, kustomize, etc.
Note: I haven't kept up to date on changes for this project in a bit, so recent changes could affect those proposals.
@Moep90, @dcherman is exactly right. An Argo Workflow runs under a user-configurable ServiceAccount, which can be tuned to provide minimal privileges for just that Workflow. You can do something similar with repo-server by creating a ServiceAccount, giving it access to a Secret, and mounting that Secret in the repo-server (or a plugin sidecar). But, just as with Workflows, you have to do the ServiceAccount configuration part.
Sure, we could add valueFrom and just let it fail if the admin hasn't configured a ServiceAccount. But I worry that this would happen far too often:
Tenant 1 Developer: "Hi, Very Busy Admin, my app won't sync. It says it can't access my-secret."
Very Busy Admin: "Oh, repo-server doesn't have access, I'll add that real quick."
Tenant 1 Developer: "Cool it works, thanks."
Malicious Tenant 2 Developer: "Ooh, secrets..."
Forcing the admin to configure the ServiceAccount and mount the Secret(s) should make it abundantly clear that the Secrets are available to all tenants. If the admin is using a plugin, it also allows them to add logic to constrain how the secret is used, rather than having to treat the secret the same as just any-other-parameter.
@crenshaw-dev I got the concern.
Sure, we could add valueFrom and just let it fail if the admin hasn't configured a ServiceAccount. But I worry that this would happen far too often.
If this makes it into the code, it should, at first, be highlighted in the documentation (right where the field is available to be configured).
Secondly, I'd suggest enabling ArgoCD to create an SA + the required RBAC per Application, since it is all within the Argo ecosystem as well as the same namespace, right?
Anyhow, would you please share example code for how you feel this should work?
I can imagine that as part of the docs on how you can achieve secure multi-tenancy, or similar, as some kind of "best practice".
@crenshaw-dev is it possible to provide an example for your RBAC secret solution?
Secondly, I'd suggest enabling ArgoCD to create an SA + the required RBAC per Application, since it is all within the Argo ecosystem as well as the same namespace, right?

I'm not sure what you mean. If I understand correctly, Argo CD would have to create an SA/Role/RoleBinding for each Application which has at least one valueFrom field. It would then have to apply that SA to its own repo-server deployment (non-declaratively). But even then it wouldn't work, because the deployment can only have one SA. So Argo CD would have to maintain the Role settings to allow the repo-server to access all secrets currently referenced by valueFrom. And that's risky if multiple tenants are using the same repo-server.
My suggestion would be to have an application like this:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
    plugin:
      name: mypluginname
      env:
      - name: SECRET_SECRETNAME
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
Configure the plugin to populate the env var:
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: cmp-plugin
spec:
  version: v1.0
  generate:
    command:
    - sh
    - -c
    - |
      secret_refs=$(env | grep SECRET_)
      for secret_ref in $secret_refs; do
        # Use kubectl to get the secret value and set it in the env var.
        :  # no-op placeholder so the sketch parses as valid sh
      done
      # Run remaining generate logic.
  discover:
    fileName: "./subdir/s*.yaml"
Finally, install the plugin as a sidecar and use a ServiceAccount with access to all the secrets you want to make available to the users. (NOTE: when using sh -c to write plugins, write the script securely! Follow scripting best practices and never eval unsanitized user input.)
This is just a rough sketch of how it could work.
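For what it's worth, a hedged sketch of how that loop body might be filled in, assuming the sidecar mounts a ServiceAccount token with get access to the referenced secrets and that each secret stores its data under a single value key (names, namespace, and keys are all illustrative):

generate:
  command:
  - sh
  - -c
  - |
    # For each SECRET_<NAME>=<secret-name> env var, fetch the named
    # secret from the argocd namespace and export <NAME> with its value.
    for pair in $(env | grep '^SECRET_'); do
      var="${pair%%=*}"
      secret="${pair#*=}"
      value="$(kubectl get secret "$secret" -n argocd -o 'jsonpath={.data.value}' | base64 -d)"
      export "${var#SECRET_}=$value"
    done
    helm template .   # stand-in for the remaining generate logic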
@crenshaw-dev FWIW you wouldn't necessarily need a separate repo-server per application (which wouldn't scale at all); you would basically need to look up the secret associated with a specific project's service account and make any requests to the k8s API using that token. I never got around to prototyping it, but that was the basic idea.
@dcherman ah, I see.
Generate a service account per project which is used during manifest generation. By doing that, you can use plain RBAC to restrict access to specific secrets/configmaps which should enable multi-tenancy similar to argo workflows.
We'd have to give the repo-server access to those tokens somehow, but then I think we can trust it to enforce the project boundaries. I like that. I'm a little uneasy about loading secrets / tokens into the repo-server by default (given just how much code and external tools there are running in that pod + all the difficulty we've had with protecting env vars from improper access and files from path traversal), but I think as an opt-in it would be fine.
@crenshaw-dev Yep. If I recall correctly, you can find the secret associated with a given service account in the status
field, so during manifest generation you would find the service account associated with the app being worked on, find the correct secret, fetch the contents, then use that token for any requests to the k8s api.
It's been a while since I worked on this so I don't recall the exact architecture here, but I think there is a repo-server that's responsible for generating manifests (which is where plugins run) and an app controller responsible for actually doing reconciliation. That app controller could pass along the required token to the repo-server as part of the request, so the repo-server itself would have no access to read secrets, meaning that there should be no concerns about a malicious plugin reading secrets that it should not have access to.
Alternatively, the app controller could handle all of the logic required for getting the values of configmaps/secrets with none of the work being done in the repo server. Those values could then be passed along to the repo-server. I believe that's how my original PR worked, but it was lacking the serviceaccount stuff to enable this to be done securely.
I think there is a repo-server that's responsible for generating manifests (which is where plugins run) and an app controller responsible for actually doing reconciliation.
Yep!
That app controller could pass along the required token to the repo-server as part of the request
Some manifest generation requests come directly from the UI (API server), so the logic would have to be in both places. iirc that communication is over plaintext gRPC, so we'd probably want to implement TLS among those services before making this change.
It's been three years since this thread was started. What is the current state? Can we expect this anytime soon?
Currently, my workaround is to use kustomize and handle app deployments by pulling secrets from GCP buckets and overriding the default resources, but this becomes a tedious procedure for multiple apps. Additionally, on each version bump things need to be reprocessed, so it's not very efficient; there is also no support for UI-created manifests.
I am really thinking of switching to other tools, but I really enjoy the ease of declarative management Argo provides.
Can we expect this anytime soon?
I doubt it can be expected within the next three minor releases or so (~9mo). This change would probably involve a proposal and review process.
pulling secrets from GCP buckets and overriding the default resources
Where does this happen? In a plugin?
@crenshaw-dev Thanks for the update! Regarding the way to bypass the limitations: we are pulling the secrets via a short script before applying the manifests, and disregarding them in later stages. The process just serves to embed the secrets (patch manifests with sensitive data) during deployment, and no sensitive information is available within the repos.
You bet! Gotcha... and does that script live in a plugin? Or is it somehow tied to the built-in Kustomize config management tool?
Big big pain point, we just pulled out of Argo due to not giving the proper attention to these matters. We do not believe it is mature enough, for 3 years it has been open with no end in sight.
Relevant proposal: #12050
Big big pain point, we just pulled out of Argo due to not giving the proper attention to these matters. We do not believe it is mature enough, for 3 years it has been open with no end in sight.
Certainly feel the urge to follow.
The 'unopinionated' view means either nothing works, or nothing works well.
Any update on this?
Any updates here ??
My increasingly-strong opinion is that secrets should not be injected by Argo CD or by any GitOps tool. Instead they should be loaded on-cluster by a secrets operator.
My increasingly-strong opinion is that secrets should not be injected by Argo CD or by any GitOps tool. Instead they should be loaded on-cluster by a secrets operator.
This feature is very much needed. Our use case is as follows:
We use Terraform to set up k8s and install ArgoCD and a directory-type Application, which delivers the rest of the infrastructure (including the secret-operator). Currently, we have to do this in two stages because we need some data from the first step, which we manually add to the configuration for the GitOps repository. If this feature were implemented, we could create a secret (configmap) with the necessary data from Terraform and provide this data to the ArgoCD Application. This would allow us to create our entire environment in one step, without manually transferring configuration parameters.
@yolkov what information are you adding to the secret?
@yolkov what information are you adding to the secret?
for example the DNS zone ID, the endpoints of the Kubernetes API servers, etc.; some parameters have to be duplicated for both Terraform and ArgoCD applications.
We have separate repositories and zones of responsibility between teams. Team1 is responsible for the terraform (infra) repository; the ArgoCD application is a directory-type application pointing to another git repository, for which team2 (platforms) is responsible. This directory-type application describes other applications (app-of-apps).
With this feature, we could describe these parameters in one place and then reuse them for other applications.
we could create a secret (configmap) with the necessary data from Terraform and provide this data to the ArgoCD Application
Is there a reason you couldn't create that secret and then use a mutating webhook or controller to inject that configuration on the cluster instead of injecting it via Argo CD?
Is there a reason you couldn't create that secret and then use a mutating webhook or controller to inject that configuration on the cluster instead of injecting it via Argo CD?
This requires that the webhook be already installed in the cluster. But according to our flow, we'll put ArgoCD on bare k8s and it will deliver everything else (including this webhook).
There are several "kind: Application" resources; using a webhook, we would need to get certain parameters from the secret and mutate these Applications. I won't immediately say whether this is possible at all, but even if it is possible, it's not trivial and not very reliable. Plus, the Application resource is managed by gitops, so we would need to add exceptions so as not to overwrite resources from the Git repository.
This is definitely a less nice solution than adding the ability for "kind: Application" to extract parameters for the helm chart from external resources (like a secret/configmap).
This requires that the webhook be already installed in the cluster. But according to our flow, we'll put ArgoCD on bare k8s and it will deliver everything else
I might be confused. The secret containing the config information you need, where is that located? In the Argo CD cluster, or in the destination cluster?
using a webhook, we would need to get certain parameters from the secret and mutate these Applications
My suggestion isn't to use a mutating webhook to modify the Application resource (certainly not if there are secrets to inject) but rather to use a mutating webhook to inject the information into the actual resources being deployed.
Ultimately, cluster-specific information has to be placed somewhere. Either it's placed somewhere for Argo CD to read and then inject, or it's placed elsewhere for some other tool to read and inject.
Placing the information on the destination cluster relieves Argo CD of the responsibility of gathering data from places besides git to apply to the cluster. There's significant value in limiting Argo CD's responsibility to applying declarative configuration sourced from git. In Kubernetes, the role of applying non-declarative information at apply-time is traditionally taken by mutating webhooks, and the role of applying non-declarative information at runtime is traditionally taken by controllers.
By expanding Argo CD's scope of responsibility from "applying declarative configuration from git" to "gathering external information and injecting it into manifests" we make it difficult for Argo CD to do either job well. This is especially true with secrets, which require an entirely different level of care to handle properly as compared to typical declarative config.
imo it's better to have two tools to each do their job well: Argo CD to apply declarative config from git, and webhooks/controllers to apply external information on the cluster.
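For concreteness, registering such a webhook looks roughly like this (the injector service, its CA bundle, and the rules are all illustrative; the webhook server that actually patches the resources is a separate component you would run on the destination cluster):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: cluster-config-injector
webhooks:
- name: inject.cluster-config.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
  clientConfig:
    service:
      name: cluster-config-injector   # patches e.g. an IAM role ARN into workloads
      namespace: kube-system
      path: /mutate
    # caBundle: omitted in this sketch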
While looking to migrate from flux, we have a slightly different view on valuesFrom.
We currently use valuesFrom to be able to generate multiple deployments based on a common gitops repo without having to have them all explicitly hardwired in git files - consider the case where you have multiple client deployments you want to create via some other mechanism (in our case, HelmReleases using terraform) and where some values are not known at authoring time (think non-sequential numberings or GUIDs). We use valuesFrom and secrets to inject those values into a HelmRelease, after which flux2/gitops works its magic.
Most likely we are not yet that well versed in ArgoCD, but we are struggling to find a similar mechanism to dynamically inject (a high number of) parameters into different charts, which would be solved with valuesFrom.
As for the very relevant point of whether valuesFrom would need to be sourced from argo-cd cluster or target cluster, this could be solved by having valuesFrom operate at the level of argo only (in the argo cluster) as a way to build values to dynamically create the manifests, or allow that to be an option/parameter of valuesFrom (or even separate with valuesFromArgo and valuesFromTarget).
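For reference, the flux mechanism being described looks roughly like this (chart and object names are invented for the example; valuesFrom here is the flux HelmRelease API, not an Argo CD field):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: client-a
spec:
  interval: 10m
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-repo
  valuesFrom:
  - kind: Secret
    name: client-a-values    # e.g. generated by terraform
    valuesKey: values.yaml   # key holding a whole values.yaml document
  - kind: ConfigMap
    name: client-a-config
    valuesKey: tenantId
    targetPath: tenant.id    # graft a single value into .Values.tenant.id
    optional: true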
At the moment we use this pattern to do it.
https://github.com/gitops-bridge-dev/gitops-bridge
So it is possible, just not "natively".
any chance this could be really looked at? This ticket was created 5 years ago.
At this point, perfect is literally the enemy of good: can we get "something" implemented? It can for sure be refined later if really needed.
@jgournet https://argo-cd.readthedocs.io/en/stable/operator-manual/secret-management/
There are no good use-cases for implementing this on the ArgoCD side. I don't understand why the devs are not closing this issue.
@amkartashov: ok, I'll admit I'm not sure anymore now:
- are you talking about having functionality that would allow argocd to manage secrets?
because, as far as I understand, this ticket is about allowing argocd to just use secrets that are managed some other way.
i.e. our use case is:
- we inject the secrets into the cluster via terraform, as kubernetes secrets
- we then tell argocd: "hey, use those secrets for this helm chart"
we then tell argocd: "hey, use those secrets for this helm chart"
It was noted multiple times that it's the helm chart's responsibility to use sensitive data from existing secrets. Your use case is covered by:
- you inject secrets into the cluster via terraform, as kubernetes secrets, into the application namespace
- you then tell argocd: "hey, set values.existingSecret to that name for this helm chart" (see the sketch below)
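In Application terms, that amounts to something like the following (the secret name is invented; the Secret itself is created out-of-band, e.g. by terraform):

source:
  helm:
    parameters:
    # A plain, non-sensitive parameter: only the *name* of the Secret
    # lives in git, never the secret material itself.
    - name: externalDatabase.existingSecret
      value: wordpress-db-credentials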
@Frankkkkk the wordpress chart does support externalDatabase.existingSecret, documented as "The name of an existing secret with database credentials", which should at least support the password, if not the user & url.
This is exactly what was noted by amkartashov above: it is the helm chart's responsibility to accept sensitive data from an existing secret. And if a helm chart doesn't support this basic pattern, it shouldn't be considered mature enough to be used in production.
@Frankkkkk the point is there are no examples good enough to justify adding this complex (because of security concerns) functionality to ArgoCD.
With flux it's different (from a security PoV) because a HelmRelease references secrets in its own namespace (the same as the application namespace by default).
With argocd (or any other tool syncing manifests from git to k8s) you have multiple ways to use sensitive data from existing secrets.
I'm genuinely interested in knowing how you would do the following:
Imagine you have a wordpress deployment on AKS (basically kubernetes on Azure). The memcached server lives outside k8s. The host is stored in an azure keyvault. Using the akv2k8s controller, we can get the value from the keyvault and store it in a secret abc.
Then, how do you set the helm chart value externalCache.host to the value from abc.hostname?
BTW, the values included with valuesFrom in flux are not necessarily secret strings.
PS: when 130+ people are liking this issue, I find it a shame to see how rudely dismissive you are of the feature request
Cheers
As far as I can tell, no one's being rude or dismissive, and tone doesn't carry well over text. The conversation seems to be basically about the tech, and tbh it's a good conversation. You're both really getting into the pros/cons. Let's please keep it about the tech.
host is stored on an azure keyvault.
Saving host into values file in gitops repo takes the same amount of effort. Another bad example.
Picking up on some previous examples, I would like to share our case.
We are trying to migrate from a flux-based architecture, where we use external-secrets + helmrelease/valuesFrom as a way to have a pre-processor and templating language to customize values.yaml.
We are using aws parameter-store to store both secrets and configuration information.
This allows us to avoid going down the route of having dozens of similar files in git, each differing in 100s of small things like suffixes, dns names, account IDs, OIDC numbers, MSK cluster UIDs, Azure AD client IDs, ...
Of course we could (and still can) change the approach to tilt more towards the CI/CD space, generating the manifests with ansible+jinja (or your templating tool of preference). However, we ended up going another way, as this approach has several advantages for our particular requirements, and it seems several other people/organizations would agree in some manner.
From what I see in this discussion, the request is for ArgoCD to provide the community with something as flexible as helmrelease/valuesFrom where we can fetch variables to inject into the ApplicationSets (or other places).
In my dreamworld, there would be a valuesFrom: (in addition to values: and valueFiles:) that would fetch values from ConfigMaps or Secrets, either required or optional (and yes, this is exactly what exists on flux).
As far as I understand it, @amkartashov points out that this is not trivial to do in ArgoCD, as evaluation can happen in multiple contexts/locations, which is probably correct, and would probably mean that this capability in ArgoCD would have to be discussed, and decisions taken on the exact implementation and restrictions.
However, this is not the same as dismissing what so many people are requesting as irrelevant or unnecessary. Having this feature would not force anyone to use it, and would enable new users (and even existing users) to decide on their own on whether they prefer to current and probably more pure-gitops approach, or this arguably more dynamic approach, depending on their preferences and use-cases.
I would be happy to provide feedback and support the @niqdev and others in submitting an improvement proposal (https://argo-cd.readthedocs.io/en/latest/developer-guide/code-contributions/) to bring this into Argo-CD.
@joaocc you're right, this is not trivial. But not only this.
We can avoid possible security issues if we allow using secrets/configmaps only from the application destination namespace (as flux does with HelmRelease). As @jessesuen wrote, this is a major concern.
Anyway, I believe that nothing prevents using data from secrets/configmaps via pure manifests (e.g. you can inject it into pods as env vars or files). And pure manifests can be stored in git, and ArgoCD can sync them from git to the cluster.
@amkartashov it would be great if that was the reality, but unfortunately it is not.
I don't know of any chart (except the bitnami ones, if we use the $tpl mechanism) that allows injecting configuration from secrets/configmaps to customize things like URLs, ingress classes, or naming of IAM resources from parts of the tenant name and country, and many other real uses people are finding for the tech. We even have cases that require much more complex configurations, including scripts with dynamic elements hardwired in, which is why we and others are using fully featured templating languages (like external-secrets template v2) to generate these configurations.
Please note that the use of k8s secrets to pass the information doesn't mean the information is really secret - in many cases it just means that there is more support for k8s secrets than ConfigMaps in terms of ferrying information around the different places where it is needed.
Honestly, the charts should not be worried about that... That is the role of the gitops or other toolchain people choose to work with.
While I understand the very valid concern of security, I find it hard to understand why you continue to be so dismissive of the use cases people are reporting. It would be much more useful to discuss what would be required to address this need to some degree, than to continue insisting that the need is simply not there, or that it is due to some lack of understanding of Argo-CD/k8s/architecture/security/...
If you have the time and interest to understand the need and the lack of suitable workarounds, I would be glad to jump on a quick call to show you some of our cases and our architecture, and I am sure some others would too.
Thanks
Only noticed #12050 now... Looks like an amazing first step which would probably address a lot of the needs being requested here.
host is stored on an azure keyvault.
Saving host into values file in gitops repo takes the same amount of effort. Another bad example.
Sometimes you don't have a choice
IIUC, the secrets would be referenced in manifests after generation and be exposed on live objects (correct me if I'm wrong). Though it's not the same as being exposed in the manifest code, which more people can have access to. Also, IIUC, secrets can rotate, and we want to avoid changing the application every time.
One question I have is: what should we do if the secret was rotated, since the app manifest won't change and won't trigger an update automatically?
Not sure if I understood the full implication of the question. However, I can share what we observe in flux (in the equivalent scenario), where a secret/configmap is used to create an overlay of values.yaml: changes are indeed picked up by driftDetection. Not sure how it is done internally, though (just that it does work and is very useful :D)