Write a test suite
qdread opened this issue · 26 comments
@yyue-r @isafluck It might be a good idea to write some package tests. That way, if we make changes to the code, we can run everything through the test suite instead of having to test each change individually. Look at this chapter on tests from Hadley's package book.
This would be great to do this summer if we have time. Let me know if you want to discuss it further.
@qdread Thank you for the suggestion! We will try to set it up, and if we have any questions we will email you!
Nice! I second Arya's words!
To be more specific, I think the package tests would basically consist of a script where we call the `Ostats` function with a bunch of different inputs. For example, linear data, circular data, etc. Then also give it some weird inputs to see how it handles them, like datasets with only one species, species with only one individual measurement, that kind of thing. We would check to make sure the output is as expected for all the different inputs. Then, any time the package is built, the tests would automatically run and make sure that any changes we make are not breaking the code and causing `Ostats` to yield bad output. We really only need to test `Ostats` directly because all the other functions run from inside it.
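A rough sketch of what such a test script could look like (the `Ostats()` argument names and the structure of its return value here are assumptions for illustration, not the package's confirmed signature):

```r
library(testthat)

test_that("Ostats returns sensible output for linear data", {
  set.seed(1)
  # Fake dataset: two species at one site, with normally distributed traits
  traits <- matrix(c(rnorm(50, 0), rnorm(50, 2)), ncol = 1)
  sp <- factor(rep(c("sp1", "sp2"), each = 50))
  plots <- factor(rep("site1", 100))
  res <- Ostats(traits = traits, sp = sp, plots = plots)  # arguments assumed
  # Overlap statistics should be bounded between 0 and 1 (element name assumed)
  expect_true(all(res$overlaps_norm >= 0 & res$overlaps_norm <= 1))
})

test_that("Ostats handles a site with only one species", {
  traits <- matrix(rnorm(50), ncol = 1)
  sp <- factor(rep("sp1", 50))
  plots <- factor(rep("site1", 50))
  # Overlap is undefined with a single species; here we just check it doesn't crash
  expect_error(Ostats(traits = traits, sp = sp, plots = plots), NA)
})
```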
@qdread Thank you! Is this R script going to be a separate file or within the package? I have watched a few YouTube tutorials and tried out `test_that`, and will try to compile a few tests.
Check out the link in my first post on this thread (chapter on tests from the book on how to create R packages). It describes pretty well how to do it. If you run the `use_testthat()` function from devtools, it will create a tests folder in the package directory. Then follow the rest of the instructions for adding test files to the tests directory. The test files are included in the package repo but are only used internally --- users that install the package don't see them, just us, if that makes sense. It might be a little complicated, and I've only done it a little bit myself, so if you want to have another meeting to discuss, please let me know!
`use_testthat()` is now in the usethis package, but it was part of devtools in the past. I think this might be useful information to share!
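For reference, the one-time setup boils down to a couple of calls from the package root (a sketch; `use_test()` is the usethis helper for creating individual test files):

```r
# Run once from the package root directory
usethis::use_testthat()      # creates tests/testthat/ and tests/testthat.R
usethis::use_test("Ostats")  # creates tests/testthat/test-Ostats.R

# Then, at any time, run the whole suite:
devtools::test()
```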
@qdread I tried to test pairwise_overlap in the test folder just as an example. Is this what you were talking about?
Since I need to have an object to compare to the output of the function when using expect_equal, do I need to run the function and store the result first (like what I did with the pairwise_overlap example)? How do I store the lists generated by Ostats and then compare the output of the revised function to the stored lists when testing? Since there is a lot of sampling and randomization done in Ostats, should I use set.seed?
Thank you very much!
Hi Arya, I just wanted to let you know I saw this and I will try to get back to you on it in a couple of days.
@yyue-r Hi again! I think the way we should do it is to construct some fake data with a known overlap value --- that way you do not need to run the function to test it. If we give enough sample size we can get the equality within a tolerance value. Look at commit 67bb291. I edited the pairwise overlap test to create some fake data with 50% overlap (setting the seed first), then made a result vector that we would expect. I added a `tolerance` argument to `expect_equal` because there is sampling error, so the observed result won't be exactly 0.5.
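For anyone following along, here is the idea behind fake data with a known overlap, sketched from scratch (the `pairwise_overlap()` signature and the shape of its return value are assumptions here, not guaranteed to match the package):

```r
library(testthat)

test_that("pairwise_overlap recovers a known 50% overlap", {
  set.seed(111)
  # Two unit normals whose means differ by d have density overlap 2 * pnorm(-d/2),
  # so d = 2 * qnorm(0.75) (about 1.349) gives exactly 50% overlap.
  d <- 2 * qnorm(0.75)
  a <- rnorm(10000, mean = 0, sd = 1)
  b <- rnorm(10000, mean = d, sd = 1)
  result <- pairwise_overlap(a, b)  # signature assumed for illustration
  # Allow a generous tolerance for sampling error in the density estimate
  expect_equal(unname(result[1]), 0.5, tolerance = 0.02)
})
```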
Let me know if you have any other questions. We can leave this issue open for now since we will have to create quite a few other tests.
Hi @billspat ,
Do you have any resources or suggestions for getting started with a test suite for this package? We will be having the group work through the vignette for the Ostats package for our 9/28 meeting. If you have time and are able to think about how we might get a test suite and use the testhat package on Ostats we would really appreciate it. In general, if we are developing a lot of packages as a group and you have any wisdom about building test suites, this might be an area we could all benefit from having some guidance on.
Also looping in @plzmsu in case she wants to chime in.
Thanks,
Sydne
I was just thinking along the lines of having a few sets of example data that represent "edge cases." For example, sites with only one species, species with only one individual, missing data, etc. We provide expected values in each case; then any time the package is built, the tests are run to see if the output from each case is as expected. If this is too complicated for Arya and Isadora to do, I can work on it at some point. I think if we do want to release it as a CRAN package, it's important to have tests so that the package can be regularly updated. It is probably not a high priority for the UGs to work on, honestly.
I think @qdread is right on, but tests can also be very, very simple things just to get started. I always start with a test that simply asserts TRUE, named "test that tests are working." I also include the kinds of examples that would go into a README or markdown file. Tests of things that are guaranteed to always work are still valuable, because if they fail then you know something was broken. The testthat package makes it pretty easy to get started.
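The "test that tests are working" starter Pat describes is literally a few lines:

```r
library(testthat)

test_that("tests are working", {
  expect_true(TRUE)  # always passes; if this fails, the test setup itself is broken
})
```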
I used the chapter on testing from the r-pkgs book to learn testthat: https://r-pkgs.org/tests.html
Keep the tests short. If you are writing a bunch of code in a test just to build up to running a function, perhaps that code should actually go into the package.
Tests shouldn't need to clear the environment (`rm(list = ls())`) or set the working directory.
Hope this is helpful, and let me know if I can add any code. -- Pat
Additional comments from Pat in email:
If you really need some stuff from the tidyverse for testing/examples, then I would put only the packages you need in the DESCRIPTION file. However, like the vignette, I think the tests can be written with base R.
As far as testing, I see only one test - are there others that are in a different branch or fork?
One thing you could use for writing tests is the code in the vignette. That ensures the vignette will actually run for the user.
Commit b48ffb5 has a few new tests I wrote for `Ostats()` and `Ostats_multivariate()`. I don't think we specifically need to write tests for every intermediate function, because they are all called by those final functions. I guess we could do some for the circular data type.
@sydnerecord yes, I put a bit of the code from the vignette into the tests but not all of it yet.
@isafluck in response to the email thread from before, I think these are the main outstanding things that still require tests
- Test case corresponding to line 111 in the vignette to make sure Ostats handles the different density arguments correctly
- Test case corresponding to line 162 in the vignette to make sure the hourly circular data is handled correctly
- Test case corresponding to line 217 in the vignette (regional pool)
They can be added to test-Ostats.R.
I don't think we need to test the plotting code since it only returns a plot.
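A skeleton for one of the outstanding cases might look something like this (the `discrete` and `circular` argument names and the result element are guesses for illustration; the real call should be copied from line 162 of the vignette):

```r
test_that("Ostats handles hourly circular data", {
  set.seed(2)
  # Fake activity times in hours (0-23) for two species at one site
  traits <- matrix(sample(0:23, 100, replace = TRUE), ncol = 1)
  sp <- factor(rep(c("sp1", "sp2"), each = 50))
  plots <- factor(rep("site1", 100))
  res <- Ostats(traits = traits, sp = sp, plots = plots,
                discrete = TRUE, circular = TRUE)  # argument names assumed
  expect_true(all(is.finite(res$overlaps_norm)))   # element name assumed
})
```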
Hello @qdread !
Thank you for guiding me through this.
Just to check we are looking at the same vignette version: line 111 corresponds to the Ostats_example2, right?
So is the aim here to test all the points you raised in the comment above? For example, is it to test Ostats_example2 using different `bw`, `adjust`, and `n` values?
Another thing:
I just can't find the test-Ostats.R code on GitHub or in my Rproj tied to the Ostats GitHub. Am I supposed to create this file?
Thank you
Hi @isafluck, all the tests are in the folder tests/testthat/ and the scripts are named test-functionname.R.
The line numbers are from the current version of the vignette in the most recent commit. It looks like the changes you just made did not change those line numbers. So they should still be correct.
I don't think you really need to make lots of tests for different arguments. Just write a test to make sure the line of code as it exists in the vignette returns the values it is supposed to return. The idea is that in the future, if we make any changes to the code, the tests will verify that the changes we made didn't accidentally "break" the code and at least the vignette will still generate the same results when it is built.
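In other words, each test is a regression test: set the seed, run the exact call from the vignette, and pin the value it currently produces. A generic sketch (the seed, the call, and the expected value are all placeholders to be replaced with the real vignette code and its output):

```r
test_that("vignette example still returns the same result", {
  set.seed(517)  # use the same seed the vignette uses (placeholder value)
  res <- Ostats(traits = traits, sp = sp, plots = plots)  # copy the vignette call here
  # Paste the value the vignette currently produces as the expected value:
  expect_equal(unname(res$overlaps_norm[1]), 0.8894, tolerance = 1e-4)  # placeholder
})
```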
Hello!
I tested the lines you pointed out earlier. I made the tests in a new script called testthat_vignette_issue4.R and pushed it to the repository. As this is the first time I have used the testthat package on my own, could you take a look at it to check if I did it right? I also included some questions in the script to discuss further if you have some time.
Thank you very much, I'm learning a lot from this process!
@isafluck nice job with the tests! They look good. I will clean up the code a bit and incorporate it into the existing script.
I'll also (attempt to) answer your questions from the script:
- You are correct in your first guess about what `test_that` and `expect_equal` do. They just compare the result with the expected result. So it will only detect a problem with the code if the problem changes the final result. That means it isn't perfect, but the main purpose of these tests is to quickly and automatically check that the functions are all working properly. For example, if I accidentally deleted a line of code in a commit, and it caused the `Ostats` function to return an error instead of output, the test would fail and we would be able to find and fix the bug.
- You are also correct that `test_that` should return nothing if the test is successful. We only care about the result if it is different from the expectation.
- Your last question was about the `tolerance` argument. It is needed because the expected values you pasted into the script are not exactly equal to the output. By default R will only print a few decimal places, even though the value might have more nonzero decimal places. Like this:
```r
> pi
[1] 3.141593
> pi - 3.141593
[1] -3.464102e-07
```
So `pi == 3.141593` would be `FALSE`. You would need to supply a tolerance larger than that difference, e.g. 0.000001.
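Concretely, with the pi example (a quick check you can run in the console):

```r
library(testthat)

# The difference is about 3.5e-7, so the tolerance has to be at least that large:
expect_equal(pi, 3.141593, tolerance = 1e-6)    # passes silently
# expect_equal(pi, 3.141593, tolerance = 1e-8)  # would fail: difference exceeds tolerance
```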
By the way, you should try to configure your GitHub username again, because it is not showing up as isafluck for the commits you have made recently. It's good to do that so you can get credit and recognition for your contributions! Check this help documentation page.
Thank you @qdread ! I'm glad that it was right! And thank you very much for answering the questions, I'm learning a lot.
I'll check this name problem!
Excellent, I've moved everything into the test-Ostats.R file and fixed some other stuff that came up in the process. I think for now this is a good enough test suite! I am going to close this issue.