
Fully Automated Video Segmentation and Motion Tracking

The code in this repository is an implementation of the CLAS-FV framework described in the paper "Fully automated multi-heartbeat echocardiography video segmentation and motion tracking," SPIE Medical Imaging 2022.

(Figure: CLAS-FV architecture)

Contents

motion_segment.py: The main script for segmenting an echocardiography video with or without fusion augmentation.
Example usage: python motion_segment.py -p <path to the ultrasound video file> -d cuda -c all
echonet_r2plus1d_notebooks: Notebooks for training and validating our CLAS-FV framework.
src: Source code.
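The command-line interface shown in the example usage above can be sketched with Python's argparse. Only the short flags -p, -d, and -c come from the usage line; the long option names, defaults, and the exact meaning of -c are assumptions for illustration, not the script's actual implementation.

```python
import argparse

def build_parser():
    """Hypothetical reconstruction of the motion_segment.py CLI."""
    parser = argparse.ArgumentParser(
        description="Segment an echocardiography video with CLAS-FV")
    parser.add_argument("-p", "--path", required=True,
                        help="path to the ultrasound video file")
    parser.add_argument("-d", "--device", default="cuda",
                        help="compute device, e.g. 'cuda' or 'cpu'")
    parser.add_argument("-c", "--cycles", default="all",
                        help="which cardiac cycles to process (assumed meaning)")
    return parser

# Parsing the example invocation from above:
args = build_parser().parse_args(["-p", "video.avi", "-d", "cuda", "-c", "all"])
print(args.path, args.device, args.cycles)
```

This mirrors the invocation `python motion_segment.py -p <video> -d cuda -c all`; consult the script's own `--help` output for the authoritative flag semantics.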

Pretrained model weights are available here: https://drive.google.com/drive/folders/1NZ4A7hjfiztb-ud0IP4JahVn1EcMYDsP?usp=sharing.

Citation

If you use our code in your work or find it useful, please cite the article below (or see our BibTeX entry):

Chen, Yida, Xiaoyan Zhang, Christopher M. Haggerty, and Joshua V. Stough. "Fully automated multi-heartbeat echocardiography video segmentation and motion tracking." In Medical Imaging 2022: Image Processing. International Society for Optics and Photonics, 2022.

License & Disclaimer

The project is released under the MIT license. The code has not been tested for any medical applications.