aws/aws-cdk-rfcs

Integration tests

eladb opened this issue · 6 comments

eladb commented
PR Champion
#

Description

  • Add a source hash to avoid manual updates
  • Run periodically in a pipeline
  • Support Assertions
  • Support snapshotting the full cloud assembly
  • Externalize the cdk-integ tool
  • See #89 and #88 for more ideas

Progress

  • Tracking Issue Created
  • RFC PR Created
  • Core Team Member Assigned
  • Initial Approval / Final Comment Period
  • Ready For Implementation
    • implementation issue 1
  • Resolved

I definitely want the ability to automate the execution of these kinds of stack verification steps.

For example, the file linked above has:

/*
 * Stack verification steps:
 * * `curl -s -o /dev/null -w "%{http_code}" <url>` should return 401
 * * `curl -s -o /dev/null -w "%{http_code}" -H 'Authorization: deny' <url>` should return 403
 * * `curl -s -o /dev/null -w "%{http_code}" -H 'Authorization: allow' <url>` should return 200
 */

So if I run `yarn integ authorizers/integ.token-authorizer.lit.js`, I'd like it to run the curl commands listed above prior to stack cleanup, and fail if it does not get the expected status codes.
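Those verification steps could be encoded as data and checked automatically. A minimal sketch, assuming a hypothetical `runVerification` helper (not part of cdk-integ) with the HTTP client injected so the logic can also be exercised against a stub:

```typescript
// Hypothetical sketch: encode the stack verification steps as data and
// fail the integ run when a response status does not match.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ status: number }>;

interface Check {
  headers?: Record<string, string>;
  expect: number;
}

// Runs each check against the deployed endpoint before stack cleanup.
async function runVerification(url: string, checks: Check[], fetchFn: FetchLike): Promise<void> {
  for (const check of checks) {
    const res = await fetchFn(url, { headers: check.headers });
    if (res.status !== check.expect) {
      throw new Error(`${url}: expected ${check.expect}, got ${res.status}`);
    }
  }
}

// The three curl checks from integ.token-authorizer.lit.js:
const checks: Check[] = [
  { expect: 401 },
  { headers: { Authorization: 'deny' }, expect: 403 },
  { headers: { Authorization: 'allow' }, expect: 200 },
];
```

In a real run, `fetchFn` would be the global `fetch` pointed at the deployed API URL.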

I believe this is included in the task Support Assertions, based on the first bullet point in #88.

eladb commented

Thanks, copying @nija-at

robert-pitt-foodhub commented

Hey, I have just come across this PR, and as I am currently implementing a solution for my team, I wanted to share my somewhat simplistic approach to performing post-deployment integration testing on a stack.

  1. Environment
    In our company we use jest as our testing environment and framework for unit testing. This is common practice for TypeScript-based applications, so we wanted to ensure a consistent and unified experience for developers whether they are writing unit tests or integration tests.

My approach was to simply create a secondary jest configuration that distinguishes between unit and integration test files (MyStack.unit.ts and MyStack.integ.ts), giving the developer the ability to run yarn test or yarn test:integration to invoke each environment.
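A sketch of that secondary configuration, assuming ts-jest; the file name and glob patterns are illustrative, not the commenter's exact setup:

```typescript
// jest.integration.config.ts (hypothetical): a secondary config so that
// integration suites are picked up separately from unit suites.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testMatch: ['**/*.integ.ts'], // integration tests only
};

export default config;

// package.json scripts (assumed; the default config would match **/*.unit.ts):
//   "test": "jest",
//   "test:integration": "jest -c jest.integration.config.ts"
```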

  2. Resource Identifiers
    In order for the jest integration test environment to know which resource identifiers it should use to perform the tests, we updated the cdk.json config file to add the following property ("outputsFile": ".deploy-output.json"); this outputs file contains the fully resolved values exported via "CfnOutput" calls.

We have created some support utilities to make it simpler for developers to export those values without a lot of duplicated CfnOutput calls within the classes, for example a simple function such as outputs({ "paramName": this.someResource.arn }).

When we execute the yarn test:integration command, the jest configuration loads a setup file that reads those values into memory and provides utilities for getting a parameter value within the test environment, e.g. const someResourcesArn = getOutput("paramName"). This means developers do not have to deal with dynamic resource names and can explicitly control the contract between the CDK outputs and the code that accesses those values in the tests.

  3. Testing Resources
    Now that we have a jest environment loaded with the parameters from the previous deployment, we simply write traditional jest tests that use the aws-sdk library to interact with AWS services, performing integration testing in much the same way that we do unit testing.

This may have already been discussed and ruled out as an approach; if so, it would be great to hear back from those who have more experience with it about the long-term issues.

Regards :)

Thanks for your comment @robert-pitt-foodhub.

Could you also provide details on how you execute these integration tests? E.g. do they run as part of your CI pipeline?

Are you deploying a fresh stack each time for every integration test? How have you set up your AWS accounts? How do you clean up after a failure?

Our team has been experimenting with Step Functions to perform integration testing. We like this approach because the errors are easy to see in the execution history, the tests can take as long as they need, and it's easy to get at the X-Ray traces to see where faults lie. It's also pretty easy to bundle up any custom lambdas and invoke them in one or several of the state machine steps; the code for that stuff can live right next to the definition of the test.

We've been starting the state machines from staging environment pipeline steps. We built a tool called cdk-exec to ease locating and running these tests, both in development and in the pipeline, from the cloud assembly.

We've also been starting and monitoring a few state machine integration tests via Custom Resources. We've liked these because they back out of deployments automatically if something is wrong, but we use them more sparingly.

We're also experimenting with low-code ways to make assertions in the state machines, but we're not sure yet what we'll do there.

integ-runner is available