This Python package provides a wrapper for Detectron2.
It is a submodule of smglib, the open-source Python framework associated with our drone research in the Cyber-Physical Systems group at the University of Oxford.
Note: Please read the top-level README for smglib before following these instructions.
- Open the terminal.
- Activate the Conda environment, e.g. `conda activate smglib`.
- If you haven't already installed PyTorch, install it now. In our case, we did this via:

  ```
  pip install https://download.pytorch.org/whl/cu111/torch-1.9.1%2Bcu111-cp37-cp37m-win_amd64.whl
  pip install https://download.pytorch.org/whl/torchaudio-0.9.1-cp37-cp37m-win_amd64.whl
  pip install https://download.pytorch.org/whl/cu111/torchvision-0.10.1%2Bcu111-cp37-cp37m-win_amd64.whl
  ```

  However, you may need a different version of PyTorch for your system, so change this as needed. (In particular, the latest version will generally be ok.) A quick sanity check for this step is sketched just after this list.
- Install `cudatoolkit`, e.g. via `conda install cudatoolkit==11.3.1` (the version you need may be different).
- Install Detectron2 as per here (a quick smoke test for this step is sketched just after this list). If you run into any trouble:

  i. If you get the error `ImportError: cannot import name '_nt_quote_args' from 'distutils.spawn'`, install `setuptools` version `59.6` or below, e.g. via `pip install setuptools==59.6.0`. (See also here.)

  ii. Try applying the fixes in `fix_torch_for_detectron2.sh`. (With newer versions of PyTorch, they may no longer be needed.)
- Change to the `<root>/smg-detectron2` directory.
- Check out the `master` branch.
- Run `pip install -e .` at the terminal.
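
As a quick sanity check for the PyTorch step, a minimal sketch like the following confirms that PyTorch imports and can see a CUDA-capable GPU. Run it in the activated Conda environment; it is just a check, not part of the installation itself:

```python
# Sanity check: confirm that PyTorch imports and that CUDA is usable.
import torch

print(torch.__version__)          # e.g. 1.9.1+cu111
print(torch.cuda.is_available())  # should print True on a working CUDA install
```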
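To smoke-test the Detectron2 step, a sketch like the one below can be used. It relies only on Detectron2's standard model zoo and `DefaultPredictor` APIs (nothing specific to smg-detectron2), downloads the model weights on first run, and uses a blank dummy image purely to exercise the inference pipeline end-to-end:

```python
# Smoke test: build a predictor from a standard Detectron2 model zoo config
# and run it on a dummy image.
import numpy as np

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# A standard Mask R-CNN configuration from Detectron2's model zoo.
config_path = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(config_path))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_path)  # downloaded on first run
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)

# No detections are expected on a blank BGR image, but the call itself
# exercises the full pipeline.
image = np.zeros((480, 640, 3), dtype=np.uint8)
outputs = predictor(image)
print(outputs["instances"])
```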
If you build on this framework for your research, please cite the following paper:
```
@inproceedings{Golodetz2022TR,
    author = {Stuart Golodetz and Madhu Vankadari* and Aluna Everitt* and Sangyun Shin* and Andrew Markham and Niki Trigoni},
    title = {{Real-Time Hybrid Mapping of Populated Indoor Scenes using a Low-Cost Monocular UAV}},
    booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    month = {October},
    year = {2022}
}
```
This work was supported by Amazon Web Services via the Oxford-Singapore Human-Machine Collaboration Programme, and by UKRI as part of the ACE-OPS grant. We would also like to thank Graham Taylor for the use of the Wytham Flight Lab, Philip Torr for the use of an Asus ZenFone AR, and Tommaso Cavallari for implementing TangoCapture.