Issue with cpdbench_bocpdms.py on custom dataset
jayschauer opened this issue · 4 comments
When I run the benchmark suite with my own dataset, some of the methods fail. I am still looking through the output to see what failed, but one thing is causing issues when generating the summary file with `make summary`.
This is the output from abed for default bocpdms on my dataset:
operands could not be broadcast together with shapes (1014,) (1013,)
log model posteriors: [-1.60943791e+000 -1.71559294e+000 -1.28561674e+000 ... -1.08929648e+308
0.00000000e+000 -4.79582130e+304]
log model posteriors shape: (1013,)
{
"command": "/TCPDBench/execs/python/cpdbench_bocpdms.py -i /TCPDBench/datasets/driver_scores.json --intensity 100 --prior-a 1.0 --prior-b 1.0 --threshold 0",
"dataset": "driver_scores",
"dataset_md5": "e342488cf23a6d82985d52ef729d526e",
"error": "UnboundLocalError(\"local variable 'growth_log_probabilities' referenced before assignment\")",
"hostname": "3e187210786d",
"parameters": {
"S1": 1,
"S2": 1,
"intensity": 100.0,
"intercept_grouping": null,
"lower_AR": 1,
"prior_a": 1.0,
"prior_b": 1.0,
"prior_mean_scale": 0,
"prior_var_scale": 1,
"threshold": 0,
"upper_AR": 5,
"use_timeout": false
},
"result": {
"cplocations": null,
"runtime": null
},
"script": "/TCPDBench/execs/python/cpdbench_bocpdms.py",
"script_md5": "c1be8d2c933f41a6d0396d86002c6f6f",
"status": "FAIL"
}
The extra output at the top prevents summarize.py from parsing the JSON result correctly. Also, some of those log model posterior values are really big; I am not sure if that is correct.
After removing those extra lines from the top of that file, summarize worked. But there are other issues: many methods failed.
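In case it helps anyone else, here is the kind of workaround I used to avoid editing the files by hand. This is a hypothetical helper (`parse_result_file` is not part of TCPDBench): it simply skips any stray stdout lines printed before the JSON object begins.

```python
import json


def parse_result_file(path):
    """Parse an abed result file, skipping any stray stdout
    lines that were printed before the JSON object."""
    with open(path) as fp:
        lines = fp.readlines()
    # The JSON object starts at the first line beginning with "{";
    # everything before it is leaked debug output.
    start = next(i for i, line in enumerate(lines)
                 if line.lstrip().startswith("{"))
    return json.loads("".join(lines[start:]))
```

This assumes the leaked output never itself starts a line with `{`, which held for the files I saw.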
These error messages apply to Best AMOC and some other methods:
"error" : "Invalid test statistic, must be Normal or CUSUM",
"command" : "/usr/lib/R/bin/exec/R --slave --no-restore --no-save --slave --file=/TCPDBench/execs/R/cpdbench_changepoint.R --args -i /TCPDBench/datasets/driver_scores.json -p Asymptotic -f mean -t Exponential -m AMOC"
"error" : "Invalid test statistic, must be Normal or CSS",
"command" : "/usr/lib/R/bin/exec/R --slave --no-restore --no-save --slave --file=/TCPDBench/execs/R/cpdbench_changepoint.R --args -i /TCPDBench/datasets/driver_scores.json -p SIC -f var -t CUSUM -m AMOC"
There are more errors (it appears to be 589 of the 990 abed results); I don't think I will go through every one.
Hi @jayschauer, thanks for letting me know! The last message you mention is actually "on purpose". I'm doing a big grid search over all parameter configurations for the changepoint package, but not all are valid, so these generate an error. The summarization script skips output files that give an error, so that's not a problem (it means that in the end the grid search is only over valid parameter settings). Note that this is mentioned in the abed config file.
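The skip logic amounts to something like the following. This is a minimal sketch of the idea, not the actual summarize.py code (`load_valid_results` is a hypothetical name):

```python
import json


def load_valid_results(paths):
    """Load abed result files, dropping any that recorded an
    error, so invalid grid-search configurations are ignored."""
    results = []
    for path in paths:
        with open(path) as fp:
            record = json.load(fp)
        # Files from invalid parameter combinations carry an
        # "error" field and/or status "FAIL"; skip those.
        if record.get("error") or record.get("status") == "FAIL":
            continue
        results.append(record)
    return results
```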
Regarding the other problem, there were indeed some experiments with bocpdms and rbocpdms that print an error to stdout, which messes up the json output. I remember manually removing this extra text as it wasn't that much of an issue at the time. It seems that the error occurs in this part of the code, where a ValueError is caught but then the except block doesn't actually handle the error. I'm not familiar enough with that codebase to tell you what the issue is, but what you could do is put a `raise` after line 309 so that the execution actually stops when it hits an error.
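To illustrate the pattern (a sketch, not the actual bocpdms code; `update_probabilities` and its arguments are made up): the variable is only bound when the computation succeeds, so if the except block swallows the ValueError, the next use fails with the confusing UnboundLocalError you saw. Re-raising surfaces the real broadcasting error instead.

```python
import numpy as np


def update_probabilities(log_probs, new_obs_loglik):
    """Sketch of an update step where growth_log_probabilities is
    only bound if the array arithmetic succeeds."""
    try:
        # Fails with ValueError when the shapes mismatch,
        # e.g. (1014,) vs (1013,) as in the report above.
        growth_log_probabilities = log_probs + new_obs_loglik
    except ValueError:
        # Without this `raise`, execution would fall through and
        # the return below would hit UnboundLocalError instead of
        # reporting the underlying broadcasting problem.
        raise
    return growth_log_probabilities
```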
Hope this helps!
Ok thanks! This was helpful. Sorry for not reading the docs.
Glad to hear!