grains:total impacted by rounding errors
Gamecock opened this issue · 16 comments
Because `total` returns a float, the test is actually comparing rounded values:

```
Error: Test failed: 'returns the total number of square on the board'
* total() not equal to 1.844674e+19.
```

If you use `options(digits = 22)` you can see:

```
* total() not equal to 18446744073709551616.
```

The test was supposed to be:

```r
expect_equal(total(), 18446744073709551615)
```
See thread for more discussion. Do you want to teach bigInts and importing libraries this early in the track?
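For reference, the precision loss is easy to reproduce in plain R, independent of the exercise. A minimal sketch (the 2^53 threshold is a property of IEEE 754 doubles, not of this test suite):

```r
# R's default numeric type is a 64-bit double with a 53-bit mantissa,
# so not all integers above 2^53 can be represented exactly.
2^53 + 1 == 2^53                              # TRUE: precision already lost
18446744073709551615 == 18446744073709551616  # TRUE: both round to 2^64
.Machine$double.digits                        # 53 mantissa bits
```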
Hello @Gamecock,
could you please post your `sessionInfo()` output? I'm not sure why you are seeing this different behavior. Here's mine:
R version 3.5.0 (2018-04-23)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] testthat_2.0.0
loaded via a namespace (and not attached):
[1] compiler_3.5.0 magrittr_1.5 R6_2.2.2 tools_3.5.0 withr_2.1.2
[6] yaml_2.2.0 rlang_0.2.2
@jonmcalder Can you reproduce the test fail that Mike reports?
sessionInfo()
R version 3.4.3 (2017-11-30)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.6
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_3.4.3 tools_3.4.3 yaml_2.1.18
Thank you. The easiest possible culprit to exclude is your R v3.4.3 (although there should have been other users noticing this problem then). Can you try upgrading to 3.5.x from one of the mirrors?
I'll try, but isn't the expected result that if you try to compare two large numbers beyond the size of an Integer you lose precision?
sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.6
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_3.5.1 tools_3.5.1
Same behavior, this should fail:

```r
library(testthat)
expect_equal(18446744073709551616, 18446744073709551615)
```

The solution is:

```r
library("gmp")
library("testthat")
expect_equal(as.bigz(2)^64, as.bigz(2)^64 - 1)
#> Error: as.bigz(2)^64 not equal to as.bigz(2)^64 - 1.
#> 'target'(bigz) and 'current' differ
```
> … isn't the expected result that if you try to compare two large numbers beyond the size of an Integer you lose precision?
Ah, no, I see what you mean. `identical(18446744073709551616, 18446744073709551615)` yields `TRUE` here as well, but it shouldn't. Sorry for the version detour!
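The collapse happens at parse time: both literals become the same double before `identical()` ever compares anything. A quick way to see this in plain R:

```r
# Printing the parsed values at full precision shows they are already
# the same double, one unit above the intended 2^64 - 1:
sprintf("%.0f", 18446744073709551615)  # "18446744073709551616"
sprintf("%.0f", 18446744073709551616)  # "18446744073709551616"
```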
Which "thread for more discussion" do you mean?
@jonmcalder, @jkanche & @amoradell: What do you think about teach[ing] bigInts and importing libraries this early in the track?
Sorry for the delayed response - I'm travelling at the moment so haven't got much capacity for keeping tabs on issues here.
Am I correct in saying that the issue @Gamecock is raising here is the same as referred to in #84?
Essentially base R doesn't provide support for large integers and we didn't have consensus previously on how to deal with this.
My sense (revisiting it again now), is that the issue of precision is in some ways fundamental to the problem and we should have a way of supporting more precise testing for the exercise.
Two potential solutions come to mind:
- include a hint to use a particular R package to provide support for big integers and extend the exercise with optional tests (i.e. commented out by default) that implement more precise matching using the relevant type from this package (making this more advanced solution optional should then mean we can keep this exercise early in the track)
- alternatively we could move the exercise later in the track, have a hard requirement to use a specific library and include the relevant test as a "required" test to pass for the exercise
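For illustration, the first option might look roughly like the sketch below. The `total` shown here is a hypothetical reference solution, not the exercise's actual stub, and the exact test wording would be settled in a PR:

```r
library(gmp)       # arbitrary-precision integers for R
library(testthat)

# Hypothetical reference solution: square i holds 2^(i-1) grains,
# summed over all 64 squares with exact bigz arithmetic.
total <- function() sum(as.bigz(2)^(0:63))

# Optional, more precise test (could ship commented out by default):
test_that("returns the exact total number of grains on the board", {
  expect_true(total() == as.bigz("18446744073709551615"))
})
```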
@jonmcalder that is the correct issue; I didn't think to look through closed issues. I have sufficient skills to do the work, but don't have enough experience with R to determine the correct path.
+1 for the 2nd option.
Ok cool. I'm happy with the second option too. @Gamecock are you happy to get started on a PR and then raise any further questions on implementation details with @katrinleinweber or me?
I approve of the second option.
I suppose that core exercises and the easy ones should be doable with the base R package.
Edit - Moved to new issue https://github.com/exercism/exercism/issues/4822
Hi, I'm not sure if I understand the above discussion, or if it is even referring to the same issue, but I saw the first post said:

> Because `total` is a float it is actually comparing:
> `Error: Test failed: 'returns the total number of square on the board' total() not equal to 1.844674e+19. If you use options(digits = 22) you can see: * total() not equal to 18446744073709551616.`
> Test was supposed to be: `expect_equal(total(), 18446744073709551615)`
but I found that for the test

```javascript
test('total', () => {
  expect(grains.total()).toBe('18446744073709551615');
});
```
using big-integer.js gave me an answer of '18446744073709551616' no matter what I did. So I looked at other people's answers to figure out what I was doing wrong.
What I found was that other people were just subtracting 1 from the `total` method in their answers, using a variety of different fixes, so I did the same. There was only one test involving the `total` method, so all tests passed that way.
However, upon reflection, maybe the test should be fixed.
For the sake of completeness, here was my `total` method:

```javascript
total() {
  let tot = bigInt(0);
  for (let i = 2; i < 65; i++) {
    tot = tot + parseInt(this.square(i));
  }
  return bigInt(tot).minus(1).toString();
}
```

The `.minus(1)` in the second-to-last line was my "fix".
Correct me if I'm wrong, but it sounds like the test used to check for the number that I (and everyone else I looked at) was getting? That's sort of what I took from what @Gamecock said:

> total() not equal to 1.844674e+19. If you use options(digits = 22) you can see: * total() not equal to 18446744073709551616.
> Test was supposed to be: `expect_equal(total(), 18446744073709551615)`

Or, if I really am supposed to get '18446744073709551615', what should I be doing instead?
My full answer is here: https://exercism.io/my/solutions/2261a8c46c844d43992a4ac2c7ba5e03
Hi @beansprout! We're the R track here ;-) Please have a look at exercism/javascript/search?q=grains & maybe move your issue / comment there. Cheers!
Oops! my bad @katrinleinweber - opened new issue. Thanks!