bbfrederick/rapidtide

Including a cardiac file from a .tsv file is broken in 2.0a11


Describe the bug
When running happy using the Docker image of rapidtide version 1.9.1, supplying an external cardiac file in .json/.tsv format via the --cardiacfile argument works, and happy completes without problems. Using the latest Docker image of rapidtide_dev, version 2.0a11, the same external cardiac file input does not work.

To reproduce
Run the docker image fredericklab/rapidtide:latest (no issue):

docker run \
    --volume=INPUTDIRECTORY:/data_in \
    --volume=OUTPUTDIRECTORY:/data_out \
    fredericklab/rapidtide \
        happy \
            /data_in/YOURNIFTIFILE.nii.gz \
            /data_out/outputname \
            /data_in/slice_timing.txt \
            --cardiacfile=/data_in/MYFILE_physio.json:cardiac

Run the docker image fredericklab/rapidtide_dev:latest (issue):

docker run \
    --volume=INPUTDIRECTORY:/data_in \
    --volume=OUTPUTDIRECTORY:/data_out \
    fredericklab/rapidtide_dev \
        happy \
            /data_in/YOURNIFTIFILE.nii.gz \
            /data_out/outputname \
            /data_in/slice_timing.txt \
            --cardiacfile=/data_in/MYFILE_physio.json:cardiac

Error:

samplerate, starttime, columns, data = readbidstsv(inputfilename, debug=debug)
ValueError: too many values to unpack (expected 4)
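
This is the standard Python failure when a callee returns more values than the caller unpacks. A minimal sketch of the suspected mismatch (the five-value return, with compressed as the fifth value, is an assumption about readbidstsv in 2.0a11, not confirmed from the source):

# Hypothetical sketch, not rapidtide's actual code.
def readbidstsv_sketch(inputfilename, debug=False):
    # Assumed 2.0a11 behavior: five return values, the last being 'compressed'.
    return 25.0, 0.0, ["cardiac"], [0.1, 0.2, 0.3], False

# A caller still unpacking four values reproduces the reported error:
# ValueError: too many values to unpack (expected 4)
samplerate, starttime, columns, data = readbidstsv_sketch("MYFILE_physio.json")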

Possible solution
I suspect the compressed return value is missing from the unpacking in the readcolfrombidstsv function in version 2.0a11, in the script io.py.

Line 1245:
samplerate, starttime, columns, data = readbidstsv(inputfilename, debug=debug)

Adding compressed as an additional unpacking target might solve the bug:
samplerate, starttime, columns, data, compressed = readbidstsv(inputfilename, debug=debug)
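
If both the old four-value and the new five-value returns need to be tolerated across versions, a defensive unpacking is another option; this is a sketch of mine under that assumption, not code from the repository:

# Hypothetical backward-compatible unpacking (sketch, not rapidtide code).
retvals = readbidstsv(inputfilename, debug=debug)
samplerate, starttime, columns, data = retvals[:4]
# Fall back to False if an older readbidstsv does not report compression.
compressed = retvals[4] if len(retvals) > 4 else False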

Desktop
Run from Windows 10 using the Docker images rapidtide:latest (version 1.9.1) and rapidtide_dev:latest (version 2.0a11).

Ok, this is going to take a little thought as to how to fix it in a way that doesn't break something else. The problem is that I revamped the column specification interpreter, and it basically fails for named columns. As a workaround for the time being, if you know the number of the column you want to extract from the .json file (with the first column being 1, not 0), you can specify the column number and it should work.
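
For example, if the cardiac trace were the second column of the physio file, the workaround would put the column number in place of the name; the index 2 below is illustrative, so substitute the actual position of your cardiac column:

--cardiacfile=/data_in/MYFILE_physio.json:2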

I'm going to close the issue, since the error I mentioned here is fixed in version 2.0a12.
The failure for named columns is a different issue from the one I described here.

In addition, specifying the colnum seems to select the wrong column for me, which I have written up as a new issue here: #58