PDLPorters/pdl

2.088 build fails on i386 due to test failures


The Debian package build on i386 for PDL 2.088 failed due to test failures:

    #   Failed test 'hist works'
    #   at t/primitive-misc.t line 32.
    #          got: '[3 5 1 2 2 1 2 1 0 4]'
    #     expected: '[3 5 1 2 1 2 2 1 0 4]'
    # Looks like you failed 1 test of 1.
    #   Failed test 'hist'
    #   at t/primitive-misc.t line 33.
    # Looks like you failed 1 test of 7.
    [...]
    Test Summary Report
    -------------------
    t/pdl_from_string.t      (Wstat: 0 Tests: 144 Failed: 0)
      TODO passed:   60-62
    t/primitive-misc.t       (Wstat: 256 (exited 1) Tests: 7 Failed: 1)
      Failed test:  2
      Non-zero exit status: 1
    t/primitive-random.t     (Wstat: 0 Tests: 3 Failed: 0)
      TODO passed:   1
    Files=45, Tests=2267, 72 wallclock secs ( 0.40 usr  0.12 sys + 65.80 cusr  8.38 csys = 74.70 CPU)
    Result: FAIL
    Failed 1/45 test programs. 1/2267 subtests failed.

Full buildlog: https://salsa.debian.org/perl-team/modules/packages/pdl/-/jobs/5625325

Same failure on the buildds: i386, hurd-i386

The other 32-bit architectures like armel & armhf are OK, unlike last time (#469).

This boils down to the following:

    $ make core && perl -Mblib -MPDL -e 'print pdl( 0.5000 )->histogram(0.1, 0, 10), "\n"'
    # on i686:
    # [0 0 0 0 1 0 0 0 0 0]
    # on x64:
    # [0 0 0 0 0 1 0 0 0 0]

This is because histogram does this:

    PDL_Indx j = ($in()-min)/step;

0.5 / 0.1, when truncated to an integer, was producing 5 on x64. On i686, for some reason (quite possibly a bug), it was producing 4. Even after capturing the double value and printing it with %1.70g, it still showed only 5 (I was expecting a 4.999999999999999 etc.).
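
For the record, here is a standalone C sketch of the same bin calculation (the variable names in, min and step just mirror the snippet above; this is not PDL's generated code). My working assumption is that on i386 the quotient can live in an 80-bit x87 register, where 0.5/0.1 is representable as slightly less than 5, so truncating it directly yields 4, while rounding it to a plain double first yields exactly 5:

    #include <stdio.h>

    int main(void) {
        double in = 0.5, min = 0.0, step = 0.1;

        /* Truncate the quotient directly, as the histogram code effectively does. */
        long j_direct = (long)((in - min) / step);

        /* Force the quotient through a double in memory first; with x87 code
         * generation this discards any extra (80-bit) precision. */
        volatile double q = (in - min) / step;
        long j_stored = (long)q;

        printf("direct: %ld  via stored double: %ld\n", j_direct, j_stored);
        return 0;
    }

On an x86-64 build I would expect both values to be 5; on an i386/x87 build the direct truncation may come out as 4, depending on how the compiler schedules the conversion.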

@sebastic The above-linked commit makes it work on my 32-bit box. Are you amenable to applying it as a patch on top of 2.088?

Patch added in commit 9bb7f548.

The package build succeeded locally in an i386 chroot, in Salsa CI, and on the build daemons.

Thank you for the report, as always!

Just for fun: one way I could get 0.5 / 0.1 to evaluate to less than 5 in C is if 0.1 is single precision:

    #include <stdio.h>

    int main(void) {
        float f1 = 0.5, f2 = 0.1;
        double x;
        x = (double)f1 / f2;
        printf("%.17g %a\n", x, x);
        return 0;
    }

    /* Outputs: 4.9999999254941949 0x1.3fffffb000001p+2 */

(Confirmed using the MPFR library.)
With 64-bit gcc-13.2.0, removing the cast to double changed the output to 5 0x1.4p+2; with 32-bit gcc-13.2.0 the output was the same whether or not the cast was there, presumably because the x87 FPU carries out the float division in extended precision either way.
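
Related aside: one quick way to see whether the compiler is permitted to evaluate float/double arithmetic in a wider precision (as GCC's default x87 code generation for 32-bit x86 does) is to print the standard FLT_EVAL_METHOD macro from <float.h>:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* 0: each type is evaluated in its own precision (typical with SSE2);
         * 2: float and double are evaluated in long double precision (typical x87). */
        printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
        return 0;
    }

With SSE2 math (the x86-64 default) this should print 0; with x87 math it should print 2, which would at least be consistent with the i686 behaviour above.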