This project performs motion-based object detection of MAVs (micro air vehicles) using optical flow and the Focus of Expansion. It also includes Python scripts to generate datasets from AirSim. It is intended to run on Ubuntu 20.04 with Python 3.8.
Install the Python dependencies:
python3 -m pip install -r requirements.txt
Copy the provided AirSim settings into place:
cp etc/settings.json ~/Documents/AirSim/settings.json
You will need the FlowNet2 and semantic-segmentation forks listed under dependencies.
Set the following environment variables in your .bashrc, with the paths adjusted to your system:
export FLOWNET2="~/neural-nets/flownet2-pytorch"
export FLOWNET2_CHECKPOINTS_DIR="~/neural-nets/flownet2-checkpoints"
export HRNET_PATH="~/neural-nets/semantic-segmentation"
Optionally, set the following environment variables, depending on which datasets you want to use:
export SIMDATA_PATH="~/datasets/sim-data"
export MIDGARD_PATH="~/datasets/midgard"
export EXPERIMENT_PATH="~/datasets/experiment"
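Note that the tilde in the exported values is quoted, so the shell stores it literally rather than expanding it to your home directory; the Python side therefore has to expand it when reading the variable. A minimal sketch of how such a lookup can be done (resolve_env_path is an illustrative helper, not necessarily the project's own loader):

```python
import os

def resolve_env_path(name: str) -> str:
    """Read an environment variable holding a path and expand a leading '~'.

    Raises KeyError with a readable message when the variable is unset,
    which surfaces misconfigured .bashrc entries early.
    """
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"Environment variable {name} is not set")
    return os.path.expanduser(value)
```
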
To see all possible command arguments:
python3 src/main.py --help
This outputs:
usage: main.py [-h] [--dataset DATASET] [--sequence SEQUENCE] [--mode MODE] [--algorithm ALGORITHM] [--debug] [--prepare-dataset] [--validate] [--headless] [--run-all] [--data-to-yolo] [--undistort]
Detects MAVs in the dataset using optical flow.
optional arguments:
-h, --help show this help message and exit
--dataset DATASET dataset to process
--sequence SEQUENCE sequence to process
--mode MODE mode to use, see RunConfig.Mode
--algorithm ALGORITHM
detection algorithm to use, see Detection.Algorithm
--debug whether to debug or not
--prepare-dataset prepares the YOLOv4 training dataset
--validate validate the detection results
--headless do not use UIs
--run-all run all configurations
--data-to-yolo convert annotations to the YOLO format
--undistort undistort original
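For example, the flags above can be combined as follows (the dataset and sequence names here are illustrative; use the names present in your own dataset directories):

```shell
# Process one sequence of the MIDGARD dataset without opening any UI windows:
python3 src/main.py --dataset midgard --sequence indoor-modern/sports-hall --headless

# Validate the detection results for a dataset:
python3 src/main.py --dataset midgard --validate
```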
To type-check the Python code, run:
mypy
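Running mypy with no arguments requires the files to be listed in a configuration file. If the repository does not already ship one, a minimal mypy.ini along these lines would work (the option values here are a suggestion, not the project's actual configuration):

```ini
[mypy]
files = src
python_version = 3.8
ignore_missing_imports = True
```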