Create test suite
How about using Jest for this? It automates quite a bit of stuff and works great with React.
@gwenf Jest is already included by Create-React-App, so it makes sense to use it unless there's a good reason not to. The tests need to actually be written though, which means ideally there needs to be a spec for what is supposed to happen for each function / api call so that the tests can capture that behavior.
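For anyone who hasn't used Jest yet, a test is just a file that CRA picks up automatically when you run `npm test` — a minimal, purely illustrative sketch:

```js
// src/sum.test.js — the file name and function are illustrative;
// CRA runs any *.test.js file it finds under src/.
function sum(a, b) {
  return a + b;
}

test('sum adds two numbers', () => {
  expect(sum(1, 2)).toBe(3);
});
```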
Should we separate this task into 2 sub-tasks? One for unit tests and one for e2e tests?
@theodesp I'm not the project owner, but personally I'm 100% against integration and e2e testing. If you are designing your units well and writing your unit tests well there should never be a need for integration tests.
If your unit testing is good, integration testing can only expose design problems, not code problems, and they take much longer both to write and to run. The code/test/refactor feedback loop needs to be as short as possible to maximize for clean code and productivity.
@smiller171 @gwenf @theodesp I'm open to suggestions on this, as I don't have a whole lot of testing experience. I was thinking about pulling Enzyme into the project just because I've used it before and it's awesome. I've never used Jest before but that's not a good enough reason not to use it.
I'm not sure of the best answer to your question @theodesp... @smiller171's answer seems to make sense, but again, not my area of expertise, so open to the counterargument.
Now that a few other things are squared away though, I'm mostly ready to begin looking at this problem more closely and making some decisions.
Jest and Enzyme can be used together. I recently replaced all my Karma/Mocha testing with Jest and I love it.
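A rough sketch of the two together — Jest as the runner and assertion library, Enzyme for rendering — assuming Enzyme and a matching adapter are installed and configured (the component name is made up):

```jsx
import React from 'react';
import { shallow } from 'enzyme';
import Greeting from './Greeting'; // hypothetical component

it('renders the user name', () => {
  // shallow() renders one level deep, so child components stay unmounted
  const wrapper = shallow(<Greeting name="Quincy" />);
  expect(wrapper.text()).toContain('Quincy');
});
```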
@smiller171 How do you want to go about writing the spec for that?
@gwenf I'm open to options for how it is documented, but perhaps a series of spec.md files alongside the code? I feel like a good approach would be to start at the API level by writing a spec for each API call and how it should respond (both valid and error responses). Then go more granular and do a spec for each function involved in executing each API call (and any shared functions). Once specs are written, you can write the tests to match the spec. You could do this one API at a time until we reach full code coverage on the backend, then do the same for the frontend logic.
Tests should always be written so that external dependencies are mocked out. Jest looks to make this easy.
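For example, a minimal sketch of mocking a dependency with Jest — module and function names here are hypothetical:

```js
// jest.mock replaces the real module with an auto-mock whose
// behavior we control per test, so no database is ever touched.
jest.mock('./db'); // hypothetical data-access module
const db = require('./db');
const { getCommunity } = require('./communityHandler'); // hypothetical

test('returns users from the data layer', async () => {
  db.findUsers.mockResolvedValue([{ username: 'alice' }]);
  const result = await getCommunity();
  expect(result.users).toHaveLength(1);
});
```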
Jest also allows you to report on coverage natively, and fail the tests if a defined threshold is not met. This could be useful for making sure coverage is only going up, not down, and that PRs include test coverage.
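Something like this in a jest.config.js — the numbers are just an example starting point:

```js
// jest.config.js — the run fails if coverage drops below these thresholds
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};
```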
I'm not an authority by any means, but this seems like a decent starting point. What do you guys think?
oh, also a top-level spec defining what api calls should exist with a basic description (more details in the spec for that api call)
Alternately, I've seen projects where the test is the spec. I don't like this for the API level, but it may work for the function level. While I prefer the idea of having a fully defined spec in human-readable format, there's a good argument against the additional effort it demands of an external developer who may not be used to writing documentation like that.
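To illustrate the test-is-the-spec idea at the function level, the describe/it names can read as the requirements themselves — everything here is hypothetical:

```js
const { filterPrivateFields } = require('./users'); // hypothetical helper

describe('filterPrivateFields', () => {
  it('removes fields the user marked private', () => {
    const user = { username: 'alice', email: 'a@b.c', emailPrivate: true };
    expect(filterPrivateFields(user)).not.toHaveProperty('email');
  });

  it('keeps public fields intact', () => {
    const user = { username: 'alice', email: 'a@b.c', emailPrivate: false };
    expect(filterPrivateFields(user).email).toBe('a@b.c');
  });
});
```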
@smiller171 this sounds good to me, but how do we get started, what's the first step? I've never written test specs before so not sure how to begin here.
I'm doing a tutorial now on testing API endpoints with Mocha/Chai, and hoping to just be able to apply the same principles with Jest.
@smiller171 @theodesp @gwenf finding some good resources for getting started on the actual testing part. The above tutorial I mentioned helped, but this article is more relevant since we're going to be using Jest: https://hackernoon.com/api-testing-with-jest-d1ab74005c0a (for the API endpoints anyway), reading up on and learning about snapshot testing with Jest too. Pretty excited to see how this comes together.
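From what I've read so far, a snapshot test looks roughly like this — the component name is made up; the first run writes a snapshot file, and later runs fail if the rendered output changes unexpectedly:

```jsx
import React from 'react';
import renderer from 'react-test-renderer';
import UserCard from './UserCard'; // hypothetical component

it('matches the stored snapshot', () => {
  const tree = renderer.create(<UserCard username="alice" />).toJSON();
  expect(tree).toMatchSnapshot();
});
```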
@no-stack-dub-sack basically the spec is a list of requirements, something like this:
Community API

Overview

This spec is for the community api endpoint.

Return codes and objects

- The '/api/community' endpoint accepts a 'GET' method.
- The content-type should be application/json.
- The request must contain an authentication header.
- If the request is not properly authenticated, a '401' code will be returned.
  - EDIT BY no-stack-dub-sack: pretty much faking this in mocked 'isAuthenticated' - not sure if there's a real point
- If the method is something other than 'GET', a '404' code will be returned.
- If the content-type is not application/json, a '415' code will be returned.
- If a function called by the router returns an error or unexpected value, a '500' error will be returned.
- If the router function throws an unexpected error, a '500' error will be returned.
- All error codes should include a json payload explaining the error, e.g. '{"error": "401 unauthorized"}'.
  - EDIT BY no-stack-dub-sack: not sure how to force these conditions to test all of these that are unchecked
- If the request is valid, a '200' code will be returned, with a payload containing an array of all user objects: '{"users": [ user-objects ]}'
- Any 'private' data (such as an email address marked private) must not be returned by the API.
Now, in my opinion, you would not write an integration test that requires a running database for the API to do data lookup and parsing on. You would call the route function and mock out any dependencies it has on external functions. Those external functions would get their own tests. Essentially your test should be "Assuming all other functions behave properly, does the API call return the correct response?" as well as, "If any other function does not behave properly, does the API call return the correct error?"
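To make that concrete, here's a rough sketch of what such a test could look like, assuming an Express-style handler — all module and function names are hypothetical:

```js
// Mock the data layer so the route is tested in complete isolation:
// no running server, no database.
jest.mock('./userService');
const userService = require('./userService');
const { communityRoute } = require('./routes/community'); // hypothetical

// Minimal fake response object that records what the handler does
function mockRes() {
  return { status: jest.fn().mockReturnThis(), json: jest.fn() };
}

test('returns 200 and users when dependencies behave properly', async () => {
  userService.getAllUsers.mockResolvedValue([{ username: 'alice' }]);
  const res = mockRes();
  await communityRoute({ headers: {} }, res);
  expect(res.status).toHaveBeenCalledWith(200);
  expect(res.json).toHaveBeenCalledWith({ users: [{ username: 'alice' }] });
});

test('returns 500 when a dependency throws', async () => {
  userService.getAllUsers.mockRejectedValue(new Error('boom'));
  const res = mockRes();
  await communityRoute({ headers: {} }, res);
  expect(res.status).toHaveBeenCalledWith(500);
});
```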
@smiller171 thanks, this is helpful. I'm just going to take the simplest approach I can find on testing one of these endpoints for now, and then we can see how we can improve upon that. I want to start with small steps so I make sure I can grasp this along every step of the way.
Ok! I've made some progress... minimal but progress, will make a PR just to show what I've got, but not ready to merge anything yet
@smiller171 see my comments on your spec above and the PR linked directly above this comment
@no-stack-dub-sack if you are mocking out everything well, then it's quite difficult to get unexpected error responses, and as long as you have a catch-all for "unexpected error" then you should be covered. Also, the spec isn't necessarily a thing that says "all of this is tested"; it just says "this is the desired behavior", and creates a map for what you should be coding and testing. If someone finds a case where the API response does not match the spec, it's easy to say "yes, this is a bug that needs to be fixed, because it breaks the spec."
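For that catch-all, a final Express error handler could do it — this is just a sketch, assuming the backend uses Express, and it matches the error payload shape from the spec above:

```js
const express = require('express');
const app = express();

// ...routes would be registered here...

// Catch-all: any error thrown in, or passed to next(err) by, an earlier
// middleware or route lands here. The four-argument signature is what
// marks this as error-handling middleware in Express.
app.use((err, req, res, next) => {
  res.status(500).json({ error: '500 internal server error' });
});
```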
Ideally, when introducing new functionality or changing existing functionality, you would open a PR that introduces the spec changes (and only spec changes); then, once merged, you would open an issue against the fact that the code does not meet the spec. One way to do this is to merge spec changes into a "develop" branch, then merge the code that resolves the issue into that branch, then merge both together into "master".