zenetio/ai-4-clinical-workflow

Scanner Folder

jimmiemunyi opened this issue · 4 comments

I am trying to run your code locally, and every command has run successfully up to the point where we run this:

python ./inference_dcm.py ../scanner

Is this command correct? I can't seem to find any scanner folder in your repo.

I tried changing it to scanned, and then I got this error:

FileNotFoundError: [Errno 2] No such file or directory: '../../section2/out/model.pth'

Again, looking through the folders, I can't find a section2 folder anywhere.
There is a model.pth.zip in the out folder, but pointing inference_dcm.py at that file raises pickle errors.

Looking for series to run inference on in directory ../scanned/st_1.3.6.1.4.1.14519.5.2.1.4429.7055.290332099546120305394775536058...
Found series of 32 axial slices
HippoVolume.AI: Running inference...
Traceback (most recent call last):
  File "/home/sampsepi0l/repos/work/ai-4-clinical-workflow/src/inference_dcm.py", line 322, in <module>
    inference_agent = UNetInferenceAgent(
  File "/home/sampsepi0l/repos/work/ai-4-clinical-workflow/src/inference/UNetInferenceAgent.py", line 25, in __init__
    self.model.load_state_dict(torch.load(parameter_file_path, map_location=self.device))
  File "/home/sampsepi0l/mambaforge/envs/zenoto/lib/python3.10/site-packages/torch/serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/sampsepi0l/mambaforge/envs/zenoto/lib/python3.10/site-packages/torch/serialization.py", line 920, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x05'.
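
From what I can tell, torch.load is being handed something that is not a pickled checkpoint. My guess is that the zip needs to be extracted first, so I also tried roughly the following (the paths are from my local checkout, and this is my own guess at the intended setup, not code from the repo):

import zipfile
import torch

# extract the actual checkpoint from the archive shipped in the repo
with zipfile.ZipFile("out/model.pth.zip") as zf:
    zf.extractall("out")  # assuming this yields out/model.pth

# load the state dict the same way UNetInferenceAgent does
state_dict = torch.load("out/model.pth", map_location="cpu")
print(len(state_dict), "tensors in checkpoint")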

Could you assist with the above issue?

Hi @jimmiemunyi, there was a typo in the start_listener.sh script. Note that the script delivers the studies to the AI server in a directory that it creates and fills. It should work now.
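
For context, the flow is: start_listener.sh receives a study and writes it into that directory, and inference_dcm.py then scans the directory for a series to run inference on, roughly along these lines (a simplified sketch using pydicom; the directory and variable names here are illustrative, not the repo's actual code):

import os
import pydicom

route_dir = "../scanned"  # directory the listener creates and fills
for study in os.listdir(route_dir):  # one st_<StudyInstanceUID> folder per study
    study_path = os.path.join(route_dir, study)
    slices = [pydicom.dcmread(os.path.join(study_path, f))
              for f in os.listdir(study_path)]
    print(f"Found series of {len(slices)} axial slices in {study}")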

Thanks for the fix. Will check it out.
What about the model issue? The code refers to a model that isn't in the repo.

Also, thank you for your work on this repo. It's hard to find examples in this subject area, and yours does a great job of summarizing the process.

@jimmiemunyi, there is a file size limit when uploading files to GitHub. That is the reason I did not include the model file. But since you are very interested 😊 I am providing a link where you can download the model file. Note that this model was generated from another project where I had to train a model to predict Alzheimer's disease. I hope this helps.
https://1drv.ms/u/s!Apx26KdJ2bfzgo4WesJWbnV6Cx6IPA?e=m4yB6g
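
Once you download it, you can sanity-check the file before running inference_dcm.py with something like this (the path is the one from your error message; adjust it to your checkout):

import torch

# quick check that the downloaded checkpoint unpickles cleanly
state_dict = torch.load("../../section2/out/model.pth", map_location="cpu")
print(len(state_dict), "tensors loaded")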

Thank you. This will be of great help.