ovh/venom

Repurpose .venomrc into an environment-aware config file

x80486 opened this issue · 2 comments

x80486 commented

I've been trying to configure Venom to use multiple environments with different configurations in order to run the same test suites in different environments.

I've also found out that it's quite tricky to use different files to store variables that change between environments. It gets even more complex when testing locally and reusing some of the other values.

I think repurposing .venomrc into an environment-aware config file would solve this problem. For instance, if it were named config.yaml by default (instead of .venomrc), the file could look like this:

```yaml
environment: default
format: xml
lib_dir: lib
output_dir: results
stop_on_failure: true
variables:
  var1: val1
  var2: val2
  var3: val3
verbosity: 1
---
environment: local
format: json
stop_on_failure: false
variables:
  var1: val1_override_for_local
---
environment: test
variables:
  var1: val1_override_for_test
  var2: val2_override_for_test
  var3: val3_override_for_test
---
environment: integration
variables:
  var1: val1_override_for_integration
  var2: val2_override_for_integration
  var3: val3_override_for_integration
```

Venom could then pick the correct environment based on a flag such as `--venom.environment=test`.
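To illustrate the intended semantics, here is a minimal sketch in Python (hypothetical, not an existing Venom API): the default document provides the base values, and the selected environment's document overrides only the keys it defines, with `variables` merged key by key. Plain dicts stand in for the parsed YAML documents:

```python
# Sketch of the proposed resolution: the "default" document is the base,
# and the selected environment overrides only the keys it defines.
# The dicts below stand in for two of the parsed config.yaml documents.

def resolve_config(documents, environment):
    """Merge the default document with the selected environment's overrides."""
    by_env = {doc["environment"]: doc for doc in documents}
    config = dict(by_env["default"])          # start from the defaults
    override = by_env.get(environment, {})
    for key, value in override.items():
        if key == "variables":
            # variables merge key by key instead of being replaced wholesale
            merged = dict(config.get("variables", {}))
            merged.update(value)
            config["variables"] = merged
        elif key != "environment":
            config[key] = value
    config["environment"] = environment
    return config

documents = [
    {"environment": "default", "format": "xml", "stop_on_failure": True,
     "variables": {"var1": "val1", "var2": "val2", "var3": "val3"}},
    {"environment": "local", "format": "json", "stop_on_failure": False,
     "variables": {"var1": "val1_override_for_local"}},
]

config = resolve_config(documents, "local")
print(config["format"])              # json (overridden)
print(config["variables"]["var1"])   # val1_override_for_local
print(config["variables"]["var2"])   # val2 (inherited from default)
```

With `--venom.environment=local`, the run would use JSON output and the overridden `var1`, while `var2` and `var3` fall back to the defaults.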

Ideas taken from the Quarkus Configuration Reference Guide.

If a .env file could be added to the chain as well, that would be even better, though I guess it's not strictly required. It's worth noting that, if this is taken into consideration, the values should be able to reference environment variables and expand them correctly.
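The environment-variable expansion mentioned above could look like the following sketch (again hypothetical, not current Venom behavior), using `os.path.expandvars` to resolve `${VAR}` references in config values against the process environment:

```python
import os

# Sketch: expand ${VAR} / $VAR references in config values against the
# process environment (e.g. values a .env file loaded beforehand).
# References to unset variables are left unchanged by os.path.expandvars.
def expand_variables(variables):
    """Return a copy of the mapping with environment references expanded."""
    return {key: os.path.expandvars(value) for key, value in variables.items()}

os.environ["API_HOST"] = "api.example.com"   # stands in for a .env entry
variables = {"base_url": "https://${API_HOST}/v1"}
print(expand_variables(variables)["base_url"])  # https://api.example.com/v1
```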

fokion commented

We have wrapped our executions with a custom bash script that gets the environment and test that we want to trigger as arguments.

One thing we do is consolidate all the YAML files in a directory into a single YAML file, which we then provide to Venom.

If you have your directories named after your environments, then it is easy to execute Venom in the right context.

- `script_directory` is the current path that you are working in
- `CONTEXT_DIR` is the directory you want to output the files to in order to start a test
- `VARIABLES_DIR` is a directory path (e.g. `env/dev` or `env/staging`)

We use yq to merge all those YAML files into one (ignoring any starting with `example_...`):

  
```shell
touch "${script_directory}/${CONTEXT_DIR}/initial_variables.yml"

files_to_merge=()
for file in "${script_directory}/${VARIABLES_DIR}"/*.yml; do
  filename=$(basename "$file")
  if [[ "${filename}" =~ ^example_.* ]]; then
    echo "Ignoring ${filename}"
  else
    echo "appending ${file}"
    # append as an array element, using the path yq sees inside the container
    files_to_merge+=("vars/${filename}")
  fi
done

docker run --rm \
  --mount type=bind,source="${script_directory}/${CONTEXT_DIR}",target=/workdir/out \
  --mount type=bind,source="${script_directory}/${VARIABLES_DIR}",target=/workdir/vars \
  mikefarah/yq ea '. as $item ireduce ({}; . * $item)' "${files_to_merge[@]}" \
  > "${script_directory}/${CONTEXT_DIR}/initial_variables.yml"
```

Then, in the Venom execution:

```shell
venom run "${test_file}" --var-from-file="${script_directory}/${CONTEXT_DIR}/initial_variables.yml" --format=json --html-report
```
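The yq expression above (`ea '. as $item ireduce ({}; . * $item)'`) deep-merges every input document into one, with later files overriding earlier ones. Roughly the same logic, sketched in Python over plain dicts (an illustration of the merge semantics, not a yq reimplementation):

```python
def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Each dict stands in for one parsed vars/*.yml file, in glob order.
files = [
    {"db": {"host": "localhost", "port": 5432}, "debug": True},
    {"db": {"host": "staging.internal"}},
]
result = {}
for doc in files:
    result = deep_merge(result, doc)
print(result["db"])  # {'host': 'staging.internal', 'port': 5432}
```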

It's a nice idea, but I'm not sure it's Venom's responsibility to maintain a dictionary of variables. Just like handling parallelism on the venom run command, I think this needs to be done outside of Venom.
Thank you @fokion for the example.