sboysel/fredr

Consistent unit tests

Closed this issue · 7 comments

As of now, unit testing is quite haphazard. I'd like to establish a consistent set of tests for each function.

  1. fredr_set_key() - fine as is
  2. fredr()
    • errors when API key is not set
    • errors for bad endpoint
    • errors for non-200 HTTP response
    • return should always be a tibble (a testthat sketch of these fredr() tests follows this list).
  3. Endpoint functions wrapped around fredr()
    • errors for bad parameter names and values**
    • return should always be a tibble.
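A minimal testthat sketch of the fredr() tests above could look like the following; the exact fredr() signature, the FRED_API_KEY environment variable, and the use of withr are assumptions here:

```r
library(testthat)

test_that("fredr() errors when the API key is not set", {
  withr::with_envvar(c(FRED_API_KEY = ""), {
    expect_error(fredr(endpoint = "series/observations", series_id = "GNPCA"))
  })
})

test_that("fredr() errors for a bad endpoint", {
  expect_error(fredr(endpoint = "not/an/endpoint"))
})

test_that("fredr() returns a tibble", {
  # Assumes a valid key is available in the test environment
  expect_s3_class(fredr(endpoint = "series/observations", series_id = "GNPCA"), "tbl_df")
})
```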

Note that the FRED API will return

  • a 404 error if the request URL is not found (e.g. bad endpoint passed to fredr())
  • a 400 error if the endpoint is valid but the request is improperly formed (e.g. bad parameters passed to fredr() or to the endpoint functions).

Some initial testing suggests that the FRED API's 400 responses carry useful error messages for bad values, and that improperly named parameters are simply ignored (explicitly defined parameters would solve this anyway).
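For the non-200 case, here is a sketch of how the API's message could be surfaced to the user, assuming httr as the HTTP client and the error_message field FRED returns in its JSON error body (fred_get is a hypothetical helper name):

```r
fred_get <- function(url, params) {
  resp <- httr::GET(url, query = params)
  if (httr::status_code(resp) != 200) {
    body <- httr::content(resp)  # FRED returns JSON with error_code / error_message
    stop(sprintf("FRED API error %s: %s",
                 httr::status_code(resp), body$error_message), call. = FALSE)
  }
  resp
}
```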

** @DavisVaughan What would you consider best practice for errors in an API package: check that parameter types and values are proper before sending the request, or allow the API to return an error?

It kind of depends on how good the API errors are. Normally I do some kind of validity check on the class of the object, and maybe some other lower-level features. For example, limit should be a non-negative integer, which we can check ourselves, but we would let the API check whether it goes >1000 or >100000, since that cap changes depending on the endpoint. Another thing we can do is ensure that limit is an integer (100L) even if the user passes a numeric (100), because the API doesn't work with numerics; we can perform that autoconversion for them.
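A minimal sketch of those limit checks (validate_limit is a hypothetical helper name):

```r
validate_limit <- function(limit) {
  if (!is.numeric(limit) || length(limit) != 1 || is.na(limit) ||
      limit < 0 || limit %% 1 != 0) {
    stop("`limit` must be a single non-negative integer.", call. = FALSE)
  }
  # Autoconvert a numeric like 100 to the integer 100L;
  # the endpoint-specific upper bound is left for the API to enforce.
  as.integer(limit)
}
```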

Another thing I was planning on doing is making observation_date (and others like it) only accept Date objects rather than strings. That way, the format is always correct, and we can then call format(x, format = "%Y-%m-%d") to get it in the right form for the API. It also allows us to have unit tests on the observation_date parameter by checking for a Date class.
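That could look roughly like this (validate_date is a hypothetical helper name):

```r
validate_date <- function(x, name = "observation_date") {
  if (!inherits(x, "Date")) {
    stop(sprintf("`%s` must be a Date object.", name), call. = FALSE)
  }
  format(x, format = "%Y-%m-%d")  # the date format the FRED API expects
}
```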

Generally, I was planning on having a capture_args(...) function that accepts all parameters, performs checks and modifications on them, and returns a param list that can be passed on for use in the API call. I will explain more in the PR tonight.
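As a rough sketch of the idea, reusing the hypothetical helpers from above (the real implementation is in the PR):

```r
capture_args <- function(...) {
  args <- list(...)
  args <- args[!vapply(args, is.null, logical(1))]  # drop unset parameters
  if (!is.null(args$limit)) {
    args$limit <- validate_limit(args$limit)
  }
  if (!is.null(args$observation_date)) {
    args$observation_date <- validate_date(args$observation_date)
  }
  args  # a named list ready to use as query parameters in the API call
}
```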

We should also think about cases where 0-row tibbles are returned, because I think I ran into an issue with fredr_series where that failed with a strange (at least to a user) error.
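A test along these lines might catch it; the endpoint and the assumption that the search text matches nothing are illustrative:

```r
test_that("queries with no results return a 0-row tibble rather than erroring", {
  res <- fredr(endpoint = "series/search", search_text = "xyzzy-no-such-series")
  expect_s3_class(res, "tbl_df")
  expect_equal(nrow(res), 0L)
})
```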

Eventually, for CRAN, we will likely need a skip_if_no_auth() function. See the riingo helper R file (it must be named helper_*.R for testthat): https://github.com/business-science/riingo/tree/master/tests/testthat
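A minimal version of that helper, assuming the key lives in a FRED_API_KEY environment variable:

```r
# tests/testthat/helper_fredr.R
skip_if_no_auth <- function() {
  if (identical(Sys.getenv("FRED_API_KEY"), "")) {
    testthat::skip("No FRED_API_KEY available; skipping authenticated test")
  }
}
```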

We also might get CRAN pushback on \donttest{}-wrapped code, but I got around it for riingo by explaining my reasoning: auth is needed for testing, and I test extensively on Travis.
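For reference, the pattern is just wrapping the authenticated examples in the Rd \donttest macro (the example code here is illustrative):

```r
#' @examples
#' \donttest{
#' # These examples hit the live API and need a real key
#' fredr_set_key("my_api_key")
#' fredr(endpoint = "series/observations", series_id = "GNPCA")
#' }
```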

I see. Any idea on how to deal with vignettes when the API key isn't present?

See the reply from hrbrmstr, which covers all the bases of what you should do for API testing: https://stackoverflow.com/questions/47333912/r-package-vignette-include-api-key

rtweet generally follows that advice and sets eval = FALSE for the chunks in the vignette. It's annoying, but it's the best you can do, I think. I wouldn't try to cache/save data and load it back in (to mock the API calls) unless the files were really small.
http://rtweet.info/articles/intro.html
https://raw.githubusercontent.com/mkearney/rtweet/master/vignettes/intro.Rmd
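One way to wire that up is a vignette setup chunk that flips eval based on key availability (a sketch; the environment variable name is an assumption):

```r
# Vignette setup chunk: evaluate code chunks only when an API key is present
knitr::opts_chunk$set(
  eval = nzchar(Sys.getenv("FRED_API_KEY"))
)
```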

Thanks for the tip! I'll follow that.

Tests were overhauled to adopt this framework in f20f510. We can extend and modify testing as needed as development continues. Some remarks:

  • Helper functions defined in R/validation.R can be used to extend parameter checking
  • fredr() endpoint validation (a6d82f8)
  • skip_if_no_key() defined and used in tests requiring authentication (#29)
  • Vignette chunks are cached and not evaluated when the API key is not present (72e5ddd)