Scenarios and testing
zdne opened this issue · 10 comments
API Blueprint & Testing
Mission: test backend responses against what is defined in a blueprint. This is blueprint validation.
With API Blueprint in CI, the goal is to minimize the number of ambiguous transactions in a blueprint and to support complex tests using scenarios.
Terms
Test Case
An HTTP transaction to be tested, together with all the data needed to complete it. A well-defined transaction.
Scenario
A scenario is a sequence of steps, where a step is either another scenario, a test case, or freeform text.
Background
A setup block referencing a scenario.
Test Case
Implicit Test Case
An implicit test case is a test case created as a by-product of an API Blueprint reference-style description of an API.
Question
Can an API blueprint, by default, be composed of implicit test cases, or do they need to be explicitly stated?
Collision points
- Parameters of an HTTP Request
    - URI parameters
    - URI query parameters
    - HTTP message headers
    - Message-body properties
- Ambiguous Transactions
    A transaction with multiple defined responses.
- Stateful Server
    Possible need for some sort of setup or ordering of test cases.
Proposal
Proposed steps to resolve the collision points:

- Parameters of an HTTP Request

    Emit parser warnings when there is no example value in a parameter's description. HTTP message headers and message-body properties are already addressed by API Blueprint's specification by example. (A sketch of a parameter with an example value follows this list.)

- Ambiguous Transactions

    Parser warnings, plus introducing a `Possible` keyword to denote a response that should not be tested. E.g.:

    ```
    + Response 201
    + Possible Response 401
    + Possible Response 503
    ```
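For the parameters point above, a parameter carrying an example value might look roughly like this in the legacy 1A syntax; the `limit` parameter and its values are made up for illustration:

```
+ Parameters
    + limit (number, optional, `42`) ... Maximum number of results to return
```

A parameter written this way would satisfy the proposed warning, since the example value gives the test runner a concrete value to use.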
Explicit Test Case
An inside-out approach where somebody writes tests for an API first and the reference-style documentation is the by-product.
Scenario
Ad-hoc Scenario
A scenario written after your API is described in an API blueprint. It references test cases and/or resources and actions.
Scenario-driven Blueprint
A free-text scenario: a scenario discussing an API and thus implicitly describing it.
Background
A technicality, a reference to another scenario.
```
# Scenario A
...

# Scenario B
...

# My Resource [/resource]
## Retrieve [GET]

### Transaction Example 1
+ Background [Scenario A][]
+ Request A
+ Response 200
+ Possible Response 200
+ Possible Response 200

### Transaction Example 2
+ Background [Scenario B][]
+ Request B
+ Response 200
```
Notes
Transaction Example
As of API Blueprint Format 1A, API Blueprint supports transaction examples "under the hood", with implicit request/response "pairing" planned in its parser.
The upcoming API Blueprint revision should consider introducing explicit support for defining transaction examples.
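For context, here is a rough sketch of how two implicit transaction examples could already fall out of the request/response ordering under Format 1A; the resource and payloads are illustrative:

```
## Retrieve Message [GET]

+ Request (text/plain)
+ Response 200 (text/plain)

        Hello World!

+ Request (application/json)
+ Response 200 (application/json)

        { "message": "Hello World!" }
```

The second `Request` following a `Response` is what would start a second transaction example during pairing.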
Transaction Examples
These are definitely needed, and this seems like a good approach. Can't wait to see the pairing!
Scenario
I'm not entirely sure I understand what a Scenario can contain. What exactly can the "sequence of steps" be? Arbitrary code? Other transactions?
General Thoughts
It seems like this might be mixing the concerns of testing and documenting to a degree that could be problematic down the road. If you build your testing support into the format, then you're committing to understanding and supporting an undefined set of testing requirements for all, or at least the majority of, blueprint users.
And while reading tests can be a useful form of documentation, in the case of REST APIs I would sort of expect those details to clutter things. That leaves more work for blueprint -> documentation parsers (to strip the excess testing data).
Some cases that don't seem to be addressed by this proposal:
- Cleanup - The Background section allows for some setup, but I don't see where one could perform any cleanup operations.
- Ordering - This is listed as one of the collision points, but I didn't see anything that addressed it. There might be a need for a specific ordering (could be dependency-based and use a topological sort, could be explicit).
- Arbitrary transaction manipulation - There are cases I can think of that just don't seem like they would map well to a testing DSL like this.
    - Consider an endpoint with query params: `/people/{?isAlive,isDead}/`. The two parameters are mutually exclusive, so when both are set, there are no results to test against (a bit contrived, but still).
    - Suppose there are different endpoints that require authorization by different users, so that separate subsets of transactions require separate Authorization headers. These can't be applied globally in Dredd (without splitting up the files), and they can't be written in the header blocks of the transactions (without exposing your testing user info - which probably shouldn't exist in production, but it's still a concern).
A solution I've been pondering
This might not be the best solution, and I'm not sure if you're just looking for feedback for something you've already decided on or if this is more of a brainstorming session. Assuming the latter, I've been thinking about how to do this in a way that addresses everyone's testing concerns.
The basic idea is to expose the transactions and examples as nodes with before/after hooks. I'm a fan of the way mocha does things, so I was thinking in terms of their nomenclature. Here are some notes I typed up a few days ago about how it would work:
- Each node in the AST should be addressable
Methods
- `before('Node name')` - set up for tests in a node
- `after('Node name')` - clean up tests in a node
- `beforeEach('Node name')` - set up before each test in a node
- `afterEach('Node name')` - clean up after each test in a node
Dredd CLI
- `dredd nodes` - print a hierarchical list of nodes, for reference
- `dredd blueprint.md localhost:3000 -f hookfile.js` - load a js or coffee file with before/after hooks
Node names
Node names would remove keywords:

`Accounts > Create Account`

or

`POST /accounts` (this style can only be used for single endpoints, i.e. before = beforeEach)

It would actually be ideal if there were blueprint support for naming nodes so that one could do: `accounts.create`
Desired API
Sync:

```coffeescript
before 'Accounts > Create Account', (transaction) ->
  # create test user
  transaction.headers['Authorization'] = base64 'testuser:testpass'

after 'Accounts > Create Account', () ->
  # delete test user
```

Async:

```coffeescript
before 'Accounts > Create Account', (transaction, done) ->
  # create test user
  transaction.headers['Authorization'] = base64 'testuser:testpass'
  done()

after 'Accounts > Create Account', (done) ->
  # delete test user
  done()
```
This way, it's really not up to the blueprint to define how tests get set up/torn down and how transactions are manipulated. To me, this seems more flexible, easier to maintain, and better separates the concerns of documentation and testing. It also allows for more languages/libraries, so that for example someone could come along and write a better testing API, or a ruby library, etc.
I'm not sure this is the best approach. At the very least I think it could be made better if you could also add your own assertions to the tests, rather than relying solely on Gavel. Not quite sure what an API for that would look like.
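For what it's worth, one hypothetical shape for such an assertion API, reusing the node naming above (the `transaction.real` property and failing-by-throwing semantics are assumptions for the sketch, not an existing Dredd or Gavel API):

```coffeescript
# Hypothetical: an `after` hook receiving the actual response, so custom
# assertions can run alongside Gavel's built-in validation.
after 'Accounts > Create Account', (transaction) ->
  body = JSON.parse transaction.real.body  # assumed: real response exposed here
  throw new Error 'expected an account id' unless body.id?
```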
@ecordell I think Scenarios and Backgrounds allow you to create multi-request API workflows, both as potentially useful documentation and as a way of manually ordering tests with dependencies and limited setup/teardown, akin to rake tasks.
I have similar reservations about baking test-framework features into the blueprint format, but I can see a use-case for them outside of that context.
In my mind, test support in API Blueprint / Dredd is not a replacement for your integration tests.
Rather, it is a way to verify two things:
- Regressions: when the implementation changes, is your blueprint still up to date and working?
- Use cases: some documentation is more use-case oriented than resource oriented (the blueprint is not organised as resources and resource groups, but rather as "tutorials" and an ordered list of API calls). I believe these can be extracted as optimistic test cases.
For those, I think it's perfectly OK to have them baked into the blueprint itself, because they are documentation for users as well.
Once you are going after edge cases, implementation bugs, etc., your favourite framework and/or Cucumber is the way to go.
@christhekeele If that's the case, then this sounds very useful, but I guess I see it more as "workflow documentation" than "testing support".
@Almad It sounds like you've envisioned the Blueprint as a description of an API, rather than a prescription of the API (i.e. the API comes first, not the docs)? So when you talk of "regressions" you mean that you catch regressions in the documentation, rather than regressions in the functionality of the API?
The format seems to support both approaches rather well. We're using it basically as a TDD-style approach to API development - we define the API, then implement it. I don't see it as a replacement for integration tests, but I do see it as an integral part of our testing (it probably falls under "systems" testing).
The Blueprint contains a list of all of the endpoints, a list of all possible responses and how to get them, and schemas defining the format and valid values of all requests and responses. That's really the only information necessary to determine if an API is performing to spec, so it seems natural to use this existing, structured information to generate tests.
That all said, I think I see the use for defining some of the test requirements in the Blueprint now, from your and Chris's comments. But I did want to explain how we're using the format and why I think it's valid/useful (and why I think a system similar to what I described would be a useful addition regardless of changes to the format itself).
I have prepared a proposal for a Cucumber step definitions implementation for testing APIs using API Blueprint. I am going to implement it as part of my in-progress master's thesis. Feel free to discuss my ideas here or in an issue in the proposal repository.
Thanks for sharing, @stekycz. I will check it during the weekend.
Note that while it is great to have a full-featured API testing tool built on API Blueprint, the main focus of this issue is supporting API Blueprint testability as discussed here: apiaryio/dredd#44 (comment)
@zdne Thanks for the API Blueprint toolset. I have been exploring Dredd recently and have a couple of questions:
- I wanted to check the latest status of this issue/proposal. Has it been implemented / any progress updates, please? :-)
- Let us say that we have APIs/resources that require authorization; the Gist Fox API is one such example. We may need to invoke one API endpoint for authentication that returns some kind of access token in the response. To access other authorized resources/APIs, we need to pass the access token around. As far as I can see, Dredd does not currently support this kind of interdependency-based testing. I am wondering if this proposal will cover the auth scenario?
@praky thanks for your interest!
I wanted to check the latest status of this issue/proposal. Has it been implemented / any progress updates, please? :-)
There is no direct progress on this at the moment. Indirectly, the introduction of implicit request/response pairing will affect testing with Dredd, and it is currently being implemented in Dredd itself.
We may need to invoke one API endpoint for authentication that returns some kind of access token in the response. To access other authorized resources/APIs, we need to pass the access token around.
As Dredd is primarily designed to run in a testing sandbox (not in a production environment), I would see a (theoretical) way around it by using a set of backend fixtures with a specific token that is also used as the example value of the token parameter in the API Blueprint. This way Dredd will use the example value from the blueprint, and since it will be set in the backend fixtures as well, it should work™.
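To make that workaround concrete, the token parameter in a Gist Fox-style blueprint could carry the fixture token as its example value; the exact token string below is illustrative:

```
+ Parameters
    + access_token (string, optional, `urn:gist-fox:access-token`) ... Gist Fox API access token
```

The backend fixtures would then be seeded so that this exact token is accepted during the test run.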
Note that we are looking at ways to supply additional values for parameters in the blueprint (specific to requests or, more precisely, to transaction examples), but there has been no development on this yet.
If you are interested in contributing to the API Blueprint specification or the parser itself in this area, I would be more than happy!
Thanks @zdne for your detailed response. We were evaluating multiple API documentation options and finally narrowed them down to API Blueprint; we are at the very beginning stages of implementation and of getting feedback from the people using it. I am sure that we will have opportunities to contribute back.
Once again, I appreciate your effort on the API Blueprint toolset.