sosy-lab/sv-benchmarks

Build fails after inserting assert(0) in the body of reach_error

skanav opened this issue · 6 comments

The issue seems to affect only those files that did not contain the definition of reach_error, e.g., touch-2.c.

In these cases, a header file is used instead. The script tries to find the line number in the .c file on which the function is defined and passes it as a parameter to __assert_fail; since the definition lives in the header, it cannot find the line number and inserts nothing, which causes the build to fail.
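
For context, a hedged sketch of what the rewrite presumably looks like when the lookup succeeds (the file name and the line number 3 are illustrative, and the declaration assumes the glibc signature of __assert_fail):

extern void __assert_fail(const char *assertion, const char *file,
                          unsigned int line, const char *function);

/* assert(0) inserted into the body, reporting the location of the
   definition; when the script cannot find the line number, it inserts
   nothing here and the resulting file no longer compiles */
void reach_error() { __assert_fail("0", "touch-2.c", 3, "reach_error"); }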

There are multiple folders with this scenario:
busybox-1.22.0
pthread-lit
ldv-sets/model
loop-lit
aws-c-common
pthread-driver-races/model
loop-new
loop-invgen
openbsd-6.2/sources/stubs
pthread-divine
pthread-nondet

Two possible solutions come to mind:

  1. explicitly add the definition of reach_error to the files in each of these folders, and then rerun the scripts in this and related PRs,
  2. do not assume that the .c file contains the definition of reach_error, and point to the line number in the header file instead (see the sketch after this list).
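
To make option 2 concrete, here is a hedged sketch of the call the script could emit if it pointed into the header instead (the header name and the line number 2 are hypothetical):

extern void __assert_fail(const char *assertion, const char *file,
                          unsigned int line, const char *function);

/* hypothetical: reach_error is defined on line 2 of the included header */
void reach_error() { __assert_fail("0", "assert.h", 2, "reach_error"); }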

Originally posted by @skanav in #1119 (comment)

Stupid question for @dbeyer: can't we just revert the changes from #1119 for now?

@skanav your listing might be incomplete (at least busybox-1.22.0-todo is also affected but is not in your list). This is somewhat irrelevant, though, since we might be able to address this by adapting the very perl script that created these problems, so we do not need to identify every affected folder.

The best solution imho would be to adapt the perl script such that files without reach_error are not considered ("nothing to do").

Instead of 2., we could just remove the line with void reach_error(){} from the tasks that do not actually call that function (in a version of the tasks from before this PR) and then run the perl script again, which would then probably do the right thing automatically (see the sketch below).
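
For illustration, the kind of stub this proposal would remove (a hypothetical minimal task; only the marked line would be deleted, since nothing below it calls reach_error):

extern void abort(void);
void reach_error(){}  /* unused stub: remove before rerunning the perl script */

int main() {
  /* hypothetical task body that never calls reach_error */
  return 0;
}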

But this is not the only fault vector. I just looked at c/loop-lit/afnp2014.c: there the .c file does #include "assert.h" from the local(!) directory, with the following content:

extern void abort(void);
void reach_error(){}
extern void abort(void);
void assume_abort_if_not(int cond) {
  if(!cond) {abort();}
}
void __VERIFIER_assert(int cond) {
  if (!(cond)) {
      ERROR: {reach_error();abort();}
  }
  return;
}
int __VERIFIER_nondet_int();
#define LARGE_INT 1000000

This might be some special case that the original perl script also did not handle correctly (the definition of reach_error is not in the .c file itself, but I guess the perl script assumed that). In the end, I think we can best fix all these issues if the original PR is reverted and we identify all causes of problems in a new PR. Unless @tautschnig says it is all easily fixable.
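
For reference, a hypothetical minimal .c file of this shape (not the actual afnp2014.c), showing why a search for the definition of reach_error in the .c file itself comes up empty:

#include "assert.h"  /* the local header quoted above, which defines reach_error */

int main() {
  int x = __VERIFIER_nondet_int();
  while (x < LARGE_INT) {
    x++;
  }
  __VERIFIER_assert(x >= LARGE_INT);
  return 0;
}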

Moreover, not all the calls to reach_error have been resolved. There are many files that still contain the empty definition of reach_error (e.g., in the folders xcsp, regression, etc.).

We are in an in-between state where some files have been processed properly, some have errors, and some have not been processed at all.

Stupid question for @dbeyer: can't we just revert the changes from #1119 for now?

Sure, this is possible. Did you check whether that passes the CI?

What happens when I press the Revert button:

[Screenshot from 2020-09-25 21-27-07: result of pressing the Revert button]

So if you could manually create a reverting PR, that would help me.

Done. I had to manually resolve some conflicts with PR #1107; luckily, I am the author of that PR, so that was easy. I added the branch to this main repo (https://github.com/sosy-lab/sv-benchmarks/tree/revert1119) so that we can see in the GitLab CI how it goes. It already passed the gcc compile stage; let's wait and see how the other jobs go:
https://gitlab.com/sosy-lab/software/sv-benchmarks/-/pipelines/194711021

I had to push 200 MB of objects to GitHub for this revert, though; if possible, we should avoid adding and reverting such large commits, to keep the already big repository from growing further. Just reverting on my disk took several tens of seconds. So merge this PR only if there is no alternative.

git tells me that the objects in my revert commit take about 477 MB of space (but of course git can do some delta magic and save a bit of space; I guess that is why it only transmitted ~200 MB to GitHub):

# sum the on-disk sizes of all objects on the result side of the revert commit
git diff-tree -r -c -M -C --no-commit-id 7d3a8fec2dc25909e9218d4b28e37d92716fbc3e | awk '{print $4}' | git cat-file --batch-check="%(objectsize:disk)" | paste -sd+ | bc
477439327

...
Nope, the pipeline for preprocessing consistency failed; 962d7fc should fix that, let's wait and see:
https://gitlab.com/sosy-lab/software/sv-benchmarks/-/pipelines/194719155

I had to apply the fix that was mentioned by @hernanponcedeleon in #1129 (comment); it seems this was already broken by merging PR #1132, which went unnoticed because the GitLab CI was already broken by PR #1119. Luckily, I am involved in all of the commits/PRs that are needed for getting the CI green again here. I am slightly optimistic that it will work.

Let's wait again ... 🟢 😃 👍