dialex/start-testing

Automation in Testing

dialex opened this issue · 10 comments

Links

(diagram links not rendered)

Personalities

  • Alister Scott
  • Joe Colantonio

P.S: We are currently experimenting with this strategy. It may be tweaked as we go. When we have a definitive strategy, we will formalise the diagrams above. Our strategy was greatly influenced by this talk. Here is a brief summary:

(screenshot of the talk's summary slide: Screen Shot 2017-08-18 at 14.07.24.png)

15:29 - what to test

  • Linting? -> ShellCheck
  • Deployment scripts unit tests? -> InSpec (unit level)
  • Are services running? -> InSpec (acceptance level; see the sketch below)
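
To make the "are services running?" check concrete, an acceptance-level InSpec control might look like the sketch below. This is only an illustration: the control ID and the service name ('nginx') are placeholders, not something prescribed by the talk.

```ruby
# controls/acceptance.rb -- hypothetical file inside an InSpec profile
control 'acceptance-services-01' do
  impact 1.0
  title 'Deployed service is running'
  desc 'Acceptance check run against a provisioned instance.'

  # 'nginx' is a placeholder for whatever the deployment scripts install.
  describe service('nginx') do
    it { should be_installed }
    it { should be_enabled }
    it { should be_running }
  end
end
```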

18:44 - tooling

(before provisioning)

  • Unit testing: bash scripts, terraform scripts
    • Linting: quick sanity check, run in CI before committing
    • Low value on testing configuration?

(after provisioning)

  • Integration testing: packages installed, services running, ports listening
    • Serverspec/InSpec: readable, quick run time, can SSH into instances (see the sketch after this list)
  • Acceptance testing: SSHing into machines, using apps deployed on the machine
    • Cucumber: readable for devs and business, reporting, executable specification
  • Smoke tests: run before everything else, really quick, catches obvious errors, not complex tasks
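
For the "packages installed, services running, ports listening" integration checks, a minimal InSpec sketch could look like this; the package name and port number are assumptions, not values from the talk:

```ruby
# controls/integration.rb -- hypothetical integration checks for a provisioned instance
control 'integration-01' do
  impact 0.7
  title 'Package installed and port listening'

  # Placeholder package and port; substitute whatever the bash/terraform scripts provision.
  describe package('nginx') do
    it { should be_installed }
  end

  describe port(80) do
    it { should be_listening }
  end
end
```

InSpec can run such a profile against a remote instance over SSH, e.g. something like "inspec exec . -t ssh://user@host -i key.pem", which is what makes it convenient for post-provisioning checks.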

https://twitter.com/theBConnolly/status/915614905016795142

Also http://blog.getcorrello.com/2015/11/20/how-much-automated-testing-is-enough-automated-testing/#

The answer to “How much testing is enough?” is “It depends!”
(see “The Complete Guide to Software Testing”).

It depends on risk: the probability of something going wrong and the impact if it does. We should use risk to decide where to place the emphasis when testing, by prioritizing our test cases. There is also the risk of over-testing (doing ineffective testing) while leaving behind the testing that is really needed.
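
One rough way to make "prioritize by risk" concrete (a common rubric, not something stated in the linked article) is to score each test idea by probability times impact and work down the list. A small Ruby sketch with made-up items and scales:

```ruby
# Hypothetical risk scoring to order test ideas; items and 1-5 scales are made up.
test_ideas = [
  { name: 'deployment script provisions the database', probability: 3, impact: 5 },
  { name: 'typo on the login page',                    probability: 2, impact: 1 },
  { name: 'service fails to restart after deploy',     probability: 4, impact: 4 },
]

# risk = probability x impact; test the riskiest areas first
test_ideas.sort_by { |t| -(t[:probability] * t[:impact]) }.each do |t|
  puts format('%2d  %s', t[:probability] * t[:impact], t[:name])
end
```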

Test case: Specific, explicit, documented, and largely confirmatory test ideas — like a recipe.

Note: A test case is not a test, any more than a recipe is a meal, or an itinerary is a trip. Open your mind to the fact that heavily scripted test cases do not add the value you think they do. If you are reading acceptance criteria, and writing test cases based on that, you are short-circuiting the real testing process and are going to miss an incredible amount of product risks that may matter to your client. More on the value (or lack thereof) of test cases here: http://www.developsense.com/blog/2017/01/drop-the-crutches/

As an industry, we are obsessed with automation for all the wrong reasons. The view that we can take a complex cognitive activity and distil it into code is a fallacy which results in both bad testing and bad automation. To be successful with automation we need to think deeply about what we do in testing as well as what we can do with automation. This has been my feeling for most of my testing career.

"Automation in Testing" vs "Test Automation"

http://www.mwtestconsultancy.co.uk/automation-in-testing/


AUTOMATION THAT SUPPORTS TESTING, and not TESTING AUTOMATED
https://automationintesting.com/

There is a pattern I see with many clients, often enough that I sought out a word to describe it: Manumation, a sort of well-meaning automation that usually requires frequent, extensive and expensive intervention to keep it 'working'.

You have probably seen it: the build server that needs a prod and a restart 'when things get a bit busy', or a deployment tool that 'gets confused', and a 'test suite' that just needs another run or three.

Did it free up time for finding the important bugs? Or are you now finding the real bugs in the test automation, while the software your product owner is paying for is hobbling along slowly and expensively to production?

FROM: http://www.investigatingsoftware.co.uk/2018/02/manumation-worst-best-practice.html

A test script will check whether what was expected and known to be true still is.

https://madeintandem.com/blog/five-factor-testing/

Good tests can…

  1. Verify the code is working correctly
  2. Prevent future regressions
  3. Document the code’s behavior
  4. Provide design guidance
  5. Support refactoring