This is an adaptation of Path Aggregation Network for Instance Segmentation (PANet) by Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia for PyTorch 1.2 and AMD Radeon Open Compute (ROCm).
This implementation can be installed on both NVIDIA CUDA and AMD ROCm platforms.
To install:

```
git clone https://github.com/anakham/PANet.git
cd PANet/lib
sh make.sh
```
See the original README for more details about installation and usage.
The changes required to make a PANet PyTorch 1.2 project work on ROCm can be found in commit 3c08e9. You may use it as a template for adapting other CUDA projects to ROCm-capable platforms when those projects use JIT-compiled extension modules.
Some improvements over the original project have also been made:
- `DATA_DIR` can be set in the config file so that data can be placed in an arbitrary directory instead of only in `<project root>/data`
- training parameters are stored in a `config_and_args.cfg` file in a human-readable format
- several fixes enable operation with the Cityscapes dataset
- a `--skip_top_layers` command-line argument was added to allow loading a pretrained model with a different number of classes (e.g. a COCO-pretrained model for training on the Cityscapes dataset)
- a config file for training on the Cityscapes dataset as described in the original paper is included (except that 4 GPUs are used instead of 8)
- handling of the `--iter_size` parameter was changed to make it clearer how the effective batch size is split into smaller parts; runs on fewer GPUs, or on GPUs with less memory, now produce similar results (differing only in batch normalization) with the same config file and the same checkpointing period
- support for data generated by the Chameleon AI Tools Highwai simulator by Mindtech was added
- Domain-Adversarial Training of Neural Networks (DANN) domain adaptation is implemented
See the Chameleon branch for details.
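For illustration, the `DATA_DIR` option might appear in a config file like this. This is a hypothetical fragment: only the `DATA_DIR` key name comes from this README; the path and the surrounding keys are made up.

```yaml
# Hypothetical config fragment; only DATA_DIR is the key added by this fork.
DATA_DIR: /mnt/datasets/cityscapes   # data no longer has to live in <project root>/data
TRAIN:
  DATASETS: ('cityscapes_fine_instanceonly_seg_train',)
```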
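The `--iter_size` splitting amounts to gradient accumulation: the effective batch is divided into `iter_size` micro-batches whose gradients are averaged before a single optimizer step. A minimal framework-free sketch (illustrative names, not PANet's actual code) showing that the accumulated gradient equals the full-batch gradient for a linear least-squares model:

```python
def grad_mse_linear(w, xs, ys):
    """Mean-squared-error gradient of y = w * x over one batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, iter_size):
    """Split the batch into iter_size equal micro-batches and average
    their gradients, as one optimizer step would see them."""
    step = len(xs) // iter_size
    total = 0.0
    for i in range(iter_size):
        sl = slice(i * step, (i + 1) * step)
        total += grad_mse_linear(w, xs[sl], ys[sl])
    return total / iter_size

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
g_full = grad_mse_linear(0.5, xs, ys)              # one big batch
g_acc = accumulated_grad(0.5, xs, ys, iter_size=2)  # two micro-batches
```

With equal-sized micro-batches the two gradients are identical, which is why only batch-norm statistics differ between configurations.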
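The core of DANN is a gradient reversal layer: an identity in the forward pass whose backward pass multiplies the incoming gradient by -lambda, so the shared feature extractor is trained to confuse the domain classifier. A minimal manual-gradient sketch (illustrative only, not this repository's implementation):

```python
def grl_forward(x):
    # Forward pass: identity, features flow unchanged to the domain classifier.
    return x

def grl_backward(grad_output, lambda_=1.0):
    # Backward pass: flip the gradient and scale by lambda_, so the feature
    # extractor ascends the domain-classification loss (adversarial training).
    return -lambda_ * grad_output

# The domain classifier sees the feature unchanged...
feat = 3.0
out = grl_forward(feat)
# ...but the feature extractor receives a reversed, scaled gradient.
g = grl_backward(2.0, lambda_=0.5)  # -> -1.0
```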