Infer on large numbers of images using a trained QuickAnnotator model! This project borrows heavily from https://github.com/choosehappy/QuickAnnotator - please cite the QuickAnnotator project if using QuickAnnotator_infer (citation instructions are on the project page).
QuickAnnotator provides a neat, accessible and intuitive way of training deep learning models. An unmet need is a simple means of inferring on large numbers of images without loading them into QuickAnnotator - meeting that need is the goal of the QuickAnnotator_infer project.
A long-term goal would be an open share for clinicians/researchers across continents to try, use and compare the performance of models on each other's data. Get in touch if you'd like to help take this forward!
- QA_infer can now accept multiple models at once to calculate an ensemble output!
- QA_infer can now perform evaluation of models against a QuickAnnotator ground truth!
- Removal of dependence on a config file - the dream would be to run QA_infer with nothing more than a trained QuickAnnotator model
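As an illustration of the ensemble feature, an ensemble output can be formed by averaging each model's per-pixel probabilities and thresholding the result. This is a minimal sketch only - the exact combination rule used by QA_infer may differ, and the function name and threshold here are illustrative:

```python
# Sketch: combine per-pixel probability maps from several models into one
# binary mask by averaging, then thresholding. (Assumption: QA_infer uses an
# averaging-style ensemble; check the source for the actual rule.)
def ensemble(prob_maps, threshold=0.5):
    n = len(prob_maps)
    avg = [sum(vals) / n for vals in zip(*prob_maps)]  # per-pixel mean
    return [1 if p >= threshold else 0 for p in avg]

# e.g. three models' probabilities for four pixels:
ensemble([[0.9, 0.2, 0.6, 0.4],
          [0.8, 0.1, 0.5, 0.6],
          [0.7, 0.3, 0.4, 0.5]])
# -> [1, 0, 1, 1]
```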
Tested on Ubuntu 20.04 - feedback on performance on other OSs would be greatly appreciated! The environment requirements are a cut-back version of those needed for QuickAnnotator (environment files are provided in the cuda10 and cuda11 directories - use whichever suits your GPU).
Requires:
- Python (tested on 3.8)
- pip
Once pip is installed, necessary packages can be installed by navigating to the appropriate cuda folder and running:
pip install -r requirements.txt
And the following additional Python packages:
- scikit_image
- scikit_learn
- opencv_python_headless
- torch
- numpy
- matplotlib
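If you are unsure whether these packages are already present, a quick check can be run before launching anything. The pip-name to import-name mapping below is an assumption based on the usual conventions for these packages:

```python
import importlib.util

def find_missing(required):
    """Return the pip names of packages whose import name cannot be found."""
    return [pip for pip, mod in required.items()
            if importlib.util.find_spec(mod) is None]

# Pip package name -> import name (these differ for the scikit/opencv packages).
required = {"scikit_image": "skimage", "scikit_learn": "sklearn",
            "opencv_python_headless": "cv2", "torch": "torch",
            "numpy": "numpy", "matplotlib": "matplotlib"}
print("missing:", find_missing(required) or "none")
```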
- Create a new project folder in the 'projects' directory - feel free to add a text file describing how you created the model, the types of images used, etc. This folder will be referred to as the QuickAnnotator_infer project folder
- Copy the config file used to create your model in QuickAnnotator. Located here --> {YOUR QuickAnnotator DIRECTORY HERE}/config/config.ini
- Paste this into your QuickAnnotator_infer project folder - we need this to recreate the model trained in QuickAnnotator
- Copy the model you would like to infer with. These are files with the extension '.pth' located here --> {YOUR QuickAnnotator DIRECTORY HERE}/projects/{DESIRED PROJECT HERE}/models
- Paste the '.pth' file into your QuickAnnotator_infer project folder
- Create a folder named 'input' in your QuickAnnotator_infer project folder
- Add images you would like to infer on with your model here (accepted file types: jpeg, jpg, png, tiff)
- Run QuickAnnotator_infer:
python run_infer.py
- You will be asked which project you would like to run, etc. - answer the prompts in the command line
- Inference output will appear in an 'output' directory in the QuickAnnotator_infer project folder
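Putting the steps above together, a project folder might be set up like this (the project name, model filename and image paths are examples only - substitute your own):

```shell
# Create an example project folder with its 'input' directory.
mkdir -p projects/my_project/input
# Then copy in the config, the model and the images to infer on, e.g.:
# cp {YOUR QuickAnnotator DIRECTORY HERE}/config/config.ini projects/my_project/
# cp {YOUR QuickAnnotator DIRECTORY HERE}/projects/{DESIRED PROJECT HERE}/models/example_model.pth projects/my_project/
# cp /path/to/your/images/*.png projects/my_project/input/
```

After running `python run_infer.py`, the results land in `projects/my_project/output`.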
Forks, pull requests, co-operation all welcome!