Welcome to the DeepLabCut repository, a toolbox for markerless tracking of body parts of animals performing various lab tasks, such as trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox applicable only to these tasks and/or species. The toolbox has already been successfully applied to rats, humans, various fish species, bacteria, leeches, various robots, and racehorses. Please check out www.mousemotorlab.org/deeplabcut for video demonstrations of automated tracking.
This work utilizes feature detectors (ResNet + readout layers) from one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name of our toolbox (see references below).
In our paper we demonstrate that those feature detectors can be trained with few labeled images to achieve human-level tracking accuracy for various body parts in lab tasks. Please check it out:
"DeepLabCut: markerless pose estimation of user-defined body parts with deep learning" by Alexander Mathis, Pranav Mamidanna, Kevin M. Cury, Taiga Abe, Venkatesh N. Murthy, Mackenzie W. Mathis* and Matthias Bethge*
- 8/18: Our paper was published in Nature Neuroscience
- 7/18: Ed Yong covered DeepLabCut and interviewed several users for The Atlantic.
- All the documentation is now (also) organized in a website format!
- We added a simplified installation procedure, including conda environments and a Docker container. See the Installation guide
- Thanks to Richard Warren for checking the compatibility of the code on Windows. It works!
- We added "quick guides" for training and for the evaluation tools that we provide with the package. We still recommend becoming familiar with the code base via the demo (below) first.
- We also have a Slack group for questions that don't quite fit a GitHub issue (deeplabcut.slack.com); please email Mackenzie at mackenzie@post.harvard.edu to join!
A typical use case is:
A user has videos of an animal (or animals) performing a behavior and wants to extract the position of various body parts from images/video frames. Ideally these parts are visible to a human annotator, yet potentially difficult to extract by standard image processing methods due to changes in background, body articulation, etc.
To solve this problem, one can train feature detectors in an end-to-end fashion. To do so, one should:
- label points of interest (e.g. joints, snout, etc.) in distinct frames (containing different poses, individuals, etc.)
- train a deep neural network while holding out some labeled frames to check that it generalizes well (see the sketch below)
- once the network is trained, use it to analyze videos quickly
The key result of our paper is that one typically requires just a few labeled frames to get excellent tracking results.
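To make the held-out evaluation concrete, here is a minimal sketch in plain Python (filenames, frame count, and split fraction are illustrative assumptions; the toolbox performs an analogous split for you when it generates the training dataset):

```python
# Minimal sketch of holding out labeled frames to test generalization.
# Filenames, frame count, and the 95/5 split are assumptions for illustration.
import random

labeled_frames = [f"img{i:04d}.png" for i in range(200)]  # hypothetical labeled frames
random.seed(0)  # fixed seed so the split is reproducible
random.shuffle(labeled_frames)

n_train = int(0.95 * len(labeled_frames))
train_frames = labeled_frames[:n_train]   # used to train the feature detectors
test_frames = labeled_frames[n_train:]    # held out to measure generalization

print(len(train_frames), "training frames,", len(test_frames), "held-out test frames")
```

Evaluating on the held-out frames (rather than the training frames) is what tells you whether the network will track well on the unlabeled remainder of your videos.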
The general pipeline for DeepLabCut is:
Install --> Extract frames --> Label training data --> Train DeeperCut feature detectors --> Apply your trained network to unlabeled data --> Extract trajectories for analysis.
Once one has a well-trained network, one can use it to analyze heaps of videos (see Analysis tools). The network can also be retrained on frames where it makes errors (see the User guide in website format).
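As a rough illustration of what batch analysis looks like, here is a sketch (the function, folder, and file extension below are hypothetical stand-ins, not the package's actual entry points; see the Analysis guide for those):

```python
# Hypothetical batch-analysis loop over a folder of recordings.
from pathlib import Path

def analyze_video(video_path):
    """Stand-in for the toolbox's analysis step, which runs the trained
    network over every frame and saves the body-part trajectories."""
    print(f"analyzing {video_path} ...")

# Folder name and extension are assumptions for illustration:
for video in sorted(Path("videos").glob("*.avi")):
    analyze_video(video)
```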
User guide (detailed walk-through with labeled example data)
Quick guide for training a tailored feature detector network
Quick guide for evaluation of feature detectors (on train & test set)
Analysis guide: How to use a trained network to analyze videos?
For questions and discussions, join our Slack user group (deeplabcut.slack.com); please email Mackenzie to join!
If you are having issues, please let us know (Issue Tracker). Consider checking the already-closed issues and the frequently asked questions first; your problem may already have been addressed.
Otherwise, please feel free to reach out by email: alexander.mathis@bethgelab.org or mackenzie@post.harvard.edu.
DeepLabCut is an actively developing project and community contributions are welcome!
- Issue Tracker: https://github.com/AlexEMG/DeepLabCut/issues
- Source Code: https://github.com/AlexEMG/DeepLabCut
- Project Website: https://alexemg.github.io/DeepLabCut
Alexander Mathis, Mackenzie Mathis, and the DeeperCut authors for the feature detector code. Edits and suggestions by Jonas Rauber, Taiga Abe, Hao Wu, Jonny Saunders, Richard Warren and Brandon Forys. The feature detector code is based on Eldar Insafutdinov's TensorFlow implementation of DeeperCut. Please check out the following references for details:
@inproceedings{insafutdinov2017cvpr,
  title = {ArtTrack: Articulated Multi-person Tracking in the Wild},
  author = {Eldar Insafutdinov and Mykhaylo Andriluka and Leonid Pishchulin and Siyu Tang and Evgeny Levinkov and Bjoern Andres and Bernt Schiele},
  booktitle = {CVPR'17},
  url = {http://arxiv.org/abs/1612.01465}
}

@inproceedings{insafutdinov2016eccv,
  title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
  author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
  booktitle = {ECCV'16},
  url = {http://arxiv.org/abs/1605.03170}
}

@article{Mathisetal2018,
  title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
  author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
  journal = {Nature Neuroscience},
  year = {2018},
  url = {https://www.nature.com/articles/s41593-018-0209-y}
}
This project is licensed under the GNU Lesser General Public License v3.0.