auth0/auth0-deploy-cli

Ability to check configurations are correct without the need to import or export

Closed this issue · 7 comments

Describe the problem you'd like to have solved

Especially when you keep track of your configs in a git repo and have CI tooling configured to check the correctness of your code, it would be useful to have a check command that performs validations over the configurations without actually executing an import or export: an offline checker that does basic checks of whether the JSON / YAML files are correct, including the keywords used.

Describe the ideal solution

The command would be something like:

a0deploy check -c config.json -i tenant.yaml

It would exit with code 0 and no error message if all is OK, or with a non-zero code and messages on the standard error stream explaining why the configs are not valid.

Possible errors that can be checked:

  • Malformed JSON / YAML files.
  • Invalid keyword used in the configurations.
  • Invalid values used in the configurations, e.g. using a literal like abc where a number is expected.
  • Invalid keyword replacements.
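As a rough illustration, the first two classes of checks could be sketched with a standard parser plus a whitelist of known keywords. The keyword set below is invented for the example; a real implementation would derive it from the Deploy CLI's own resource schemas (a YAML variant would be identical except for using `yaml.safe_load()`):

```python
import json

# Hypothetical subset of allowed top-level keywords; the real set would
# come from the Deploy CLI's own resource schemas.
KNOWN_KEYWORDS = {"tenant", "clients", "resourceServers", "rules"}

def check_config(text):
    """Return a list of error messages for one JSON config document."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"malformed JSON: {exc}"]
    if not isinstance(data, dict):
        return ["top-level value must be an object"]
    errors = []
    for key in data:
        if key not in KNOWN_KEYWORDS:
            errors.append(f"unknown keyword {key!r}")
    return errors
```

A CI job would then simply run the checker over all config files and exit non-zero if any list of errors is non-empty, matching the exit-code contract described above.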

Alternatives and current workarounds

For the first point, checking for malformed JSON / YAML files is easy; there are plenty of commands that do that. That is not the case for the rest of the checks, which are very specific to Auth0. However, if schemas were defined for the JSON and YAML files, the remaining checks could also be achieved with tools that verify whether a file complies with a given schema. The problem is that defining those schemas is a lot of work, so they should be built into this command.
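To make the schema idea concrete, the "invalid values" class of errors boils down to walking the document and comparing each value against an expected type. The mini-schema below is invented purely for illustration; a real implementation would ship JSON Schema documents and run them through a validator library such as jsonschema:

```python
import json

# Invented mini-schema: maps each keyword to the Python type it expects.
SCHEMA = {
    "name": str,
    "token_lifetime": int,   # catches a literal like "abc" where a number is expected
    "is_first_party": bool,
}

def validate_values(text):
    """Return error messages for values whose type disagrees with SCHEMA."""
    data = json.loads(text)
    errors = []
    for key, expected in SCHEMA.items():
        if key in data and not isinstance(data[key], expected):
            errors.append(
                f"{key}: expected {expected.__name__}, got {type(data[key]).__name__}"
            )
    return errors
```

The value of building this into the tool is exactly that the schemas ship with it, so users never have to write or maintain them.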

Additional context

No response

The Deploy CLI actually employs basic schema validation currently (example). Granted, some are stricter than others but there is enough to prevent egregious errors from occurring. We rely on the server to impose most of the validation because it accounts for critical information that doesn't exist on the client like tenant tier, feature flags and account-level resource limits.

Further, what does "correct" mean here? Valid JSON? Accepted by the API? Or that it expresses your particular use case? My point being, neither the Deploy CLI nor most client-side tools can know whether your configuration is correct.

To your credit, I can imagine a frustrating situation where you need to deploy your resources in order to test their validity, but this is what the built-in schema validation is supposed to address. I don't think a dedicated command is the way to go and instead, I'd rather improve the schema validation. If you could help identify specific instances, that would be helpful for us to address.

Hi @willvedd. Yes, I understand that some, if not many, of the validations are performed server-side, but the schema validations you pointed out are still not available unless you are willing to deploy your stuff, and sometimes you are in the middle of making changes and don't want to mess up even a test environment.

I understand that if I try to push invalid configs the validations will be executed first, but by "mess" I mean I'm writing changes that I don't want to deploy yet, e.g. reducing or adding grants to an app. I still want to know in advance that the changes are valid, instead of creating a pull request without knowing whether they will work once approved and merged to the upstream branch to be deployed.

I actually think two levels of validation could be performed: offline, where those schemas are used without executing a deploy, and online, where all the configs are sent to Auth0 for validation without being deployed, although I understand the latter would require changes in the Auth0 API, making it harder to implement on your side.

Thinking about the "ideal" implementation, maybe a more "Unix" style would be better. In Unix, most commands that allow testing something without executing the actual intent have a --dry-run option, like patch, git, or even Docker commands. So the syntax would be:

a0deploy export -c config.json [...OPTIONS] --dry-run

In this way, it's also clearer what you are trying to test (an export execution) and under what options, like the config file passed (-c), the path (-i) against which the check has to be executed, and so on.

In case the two levels of validation can be implemented, I would add another option, --offline, only valid when used together with --dry-run, that performs the validations available without a connection to Auth0, i.e. the schema ones.
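The flag dependency described above ("--offline only valid with --dry-run") is straightforward to enforce at argument-parsing time. A sketch of how it might look (these flags and option names are hypothetical; a0deploy does not necessarily expose them):

```python
import argparse

# Sketch of the proposed flag combination; the option names are hypothetical.
parser = argparse.ArgumentParser(prog="a0deploy")
parser.add_argument("command", choices=["import", "export"])
parser.add_argument("-c", "--config_file", required=True)
parser.add_argument("--dry-run", action="store_true",
                    help="validate without applying changes")
parser.add_argument("--offline", action="store_true",
                    help="schema-only validation, no connection to Auth0")

def parse(argv):
    """Parse argv, rejecting --offline unless --dry-run is also given."""
    args = parser.parse_args(argv)
    if args.offline and not args.dry_run:
        parser.error("--offline is only valid together with --dry-run")
    return args
```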

This is also an option that many commands include, like gradle, which allows executing the build without first checking over the network whether the dependencies are up to date. That reduces execution time and is also useful in CI environments with limited Internet access for security reasons.

One more thing about my point of including this: terraform plan.

plan complements terraform apply and does exactly what I want, but for Terraform repos: it first checks whether the config files are valid (schema validations mostly), then connects to the environment where the changes would be applied, simulating the changes without actually performing them. The command has a -refresh option that is also quite similar to the --offline option, although not exactly the same:

  -refresh=false      Skip checking for external changes to remote objects
                      while creating the plan. This can potentially make
                      planning faster, but at the expense of possibly planning
                      against a stale record of the remote system state.

In a project I used to work on, we used Terraform to allocate resources on AWS, but when making changes in a feature branch, e.g. to add new EC2 instances, I didn't want AWS to actually allocate the resources; I wanted to check whether my changes were valid before creating the PR, so I used the plan command a lot.

I understand that if I try to push invalid configs the validations will be executed first, but by "mess" I mean I'm writing changes that I don't want to deploy yet, e.g. reducing or adding grants to an app. I still want to know in advance that the changes are valid, instead of creating a pull request without knowing whether they will work once approved and merged to the upstream branch to be deployed.

This is a fair point. My immediate suggestion is to provision a dedicated dev tenant, separate from your staging and prod tenants, where you can test these. Some customers even provision ephemeral tenants to assist in the development of discrete features. The Deploy CLI should make cloning tenants fairly trivial.

--dry-run is our most requested feature (see: #70). Though, as I mentioned above, without actually applying your configurations to the remote, you'll never truly know whether they're valid. So I'm not sure that would actually solve your issue here.

Interesting that you mention Terraform, because you may consider adopting the official Auth0 Terraform Provider. It enforces stronger validations and will give you better insight into errors and diffs. It also better suits incremental development (as you describe).

My immediate suggestion is to provision a dedicated dev tenant

Yes, I was thinking about doing that, now that importing/exporting makes it easier, as you said. I don't know whether it beats a CLI checker from a cost point of view, though.

without actually applying your configurations to remote, you'll never actually know if they're valid or not.

I don't think that's a fair point; it's like assuming that because nobody achieves 100% test coverage, automated tests in reduced environments (like unit tests) are useless.

Think of the ability to run the schema validations as being like executing the compiler locally while you write code: do you really need to run the compiler when you could just push your code to the repo and see what happens once it's deployed to a staging environment? If you code in JS or Python you don't even need to compile, but you still want a linter, an IDE, or a package manager to validate what you write. There are many errors that only executing the code in a real environment will let you detect, but reducing the chances earlier and faster is not an option but a must.

Closing anyway because #70 represents better what I was proposing.