PyTorch implementation of the paper "ADNet: Lane Shape Prediction via Anchor Decomposition" (accepted at ICCV 2023).
[paper]
In this paper, we revisit the limitations of anchor-based lane detection methods, which have predominantly focused on fixed anchors that stem from the edges of the image, disregarding their versatility and quality. To overcome the inflexibility of anchors, we decompose them into learning the heat map of starting points and their associated directions. This decomposition removes the limitations on the starting point of anchors, making our algorithm adaptable to different lane types in various datasets. To enhance the quality of anchors, we introduce the Large Kernel Attention (LKA) for Feature Pyramid Network (FPN). This significantly increases the receptive field, which is crucial in capturing the sufficient context as lane lines typically run throughout the entire image. We have named our proposed system the Anchor Decomposition Network (ADNet). Additionally, we propose the General Lane IoU (GLIoU) loss, which significantly improves the performance of ADNet in complex scenarios. Experimental results on three widely used lane detection benchmarks, VIL-100, CULane, and TuSimple, demonstrate that our approach outperforms the state-of-the-art methods on VIL-100 and exhibits competitive accuracy on CULane and TuSimple.
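To make the decomposition idea concrete, here is a minimal illustrative sketch of decoding anchors from a start-point heatmap and a per-location direction map. This is not the paper's actual implementation; the function name, the top-k decoding, and the ray-sampling scheme are all assumptions made for illustration.

```python
import numpy as np

def decode_anchors(heatmap, theta_map, k=4, num_points=72):
    """Illustrative anchor decoding: pick the k strongest start points
    from the heatmap and cast a ray along the predicted direction.

    heatmap:   (H, W) start-point probabilities
    theta_map: (H, W) direction angle (radians) at each location
    Returns a list of k anchors, each a (num_points, 2) array of (x, y).
    """
    h, w = heatmap.shape
    # Flat indices of the k highest-scoring start points.
    flat = np.argsort(heatmap, axis=None)[::-1][:k]
    ys, xs = np.unravel_index(flat, (h, w))

    anchors = []
    for x0, y0 in zip(xs, ys):
        theta = theta_map[y0, x0]
        # Sample points along the ray (x0, y0) + t * (cos theta, sin theta).
        t = np.linspace(0, max(h, w), num_points)
        pts = np.stack([x0 + t * np.cos(theta), y0 + t * np.sin(theta)], axis=1)
        anchors.append(pts)
    return anchors
```

Because the start point and direction are predicted rather than fixed to the image border, the same decoding works for lanes that begin anywhere in the image, which is what makes the method adaptable across datasets.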
- Clone this repository:
```bash
git clone https://github.com/Sephirex-X/ADNet.git
```
- Create a conda environment (if you use conda):
```bash
conda create -n ADNet && conda activate ADNet
```
- Install PyTorch and Shapely:
```bash
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
conda install Shapely==1.7.0
```
- Install the dependencies:
```bash
pip install -r requirements.txt
```
Add `-i https://pypi.tuna.tsinghua.edu.cn/simple` if you are located in mainland China.
- Set everything up:
```bash
python setup.py build develop
```
- Create a `data` folder under the root path:
```bash
mkdir data
```
- Organize its structure like this (the symlinks point to wherever the datasets are stored):
```
data/
├── CULane -> /mnt/data/xly/CULane/
├── tusimple -> /mnt/data/xly/tusimple/
└── VIL100 -> /mnt/data/xly/VIL100/
```
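The symlinks above can be created with `ln -s`, or programmatically; here is a small Python sketch. The `/mnt/data/xly/...` paths are only the example locations from the tree above and should point at wherever you actually stored the datasets.

```python
import os

# Map each dataset name to your actual storage location;
# the paths below are just the example locations from the tree above.
datasets = {
    "CULane": "/mnt/data/xly/CULane/",
    "tusimple": "/mnt/data/xly/tusimple/",
    "VIL100": "/mnt/data/xly/VIL100/",
}

os.makedirs("data", exist_ok=True)
for name, src in datasets.items():
    link = os.path.join("data", name)
    if not os.path.islink(link):
        os.symlink(src, link)  # creates data/<name> -> <src>
```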
Under each folder, you should see a structure like the ones below.
Download CULane.
```
/mnt/data/xly/CULane/
├── driver_100_30frame
├── driver_161_90frame
├── driver_182_30frame
├── driver_193_90frame
├── driver_23_30frame
├── driver_37_30frame
├── laneseg_label_w16
└── list
```
Download TuSimple.
```
/mnt/data/xly/tusimple/
├── clips
├── label_data_0313.json
├── label_data_0531.json
├── label_data_0601.json
├── test_label.json
└── test_tasks_0627.json
```
Download VIL-100.
You can find the `anno_txt` annotations here: anno_txt.zip
```
/mnt/data/xly/VIL100/
├── Annotations
├── anno_txt
├── data
├── JPEGImages
└── Json
```
- You can run inference on a model using:
```bash
python main.py {configs you want to use} --work_dirs {your folder} --load_from {your checkpoint path} --validate --gpus {device id}
```
For example:
```bash
python main.py configs/adnet/tusimple/resnet18_tusimple.py --work_dirs test --load_from best_ckpt/tusimple/res18/best.pth --validate --gpus 3
```
If you don't pass `--work_dirs`, a folder named `work_dirs` will be created under the root path by default.
- By adding `--view`, you can see visualization results under the folder `vis_results` under the root path.
- You can test FPS using:
```bash
python tools/fps_test.py
```
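The contents of `tools/fps_test.py` are not shown here, but an FPS measurement of this kind typically boils down to timing repeated forward passes after a warm-up phase. A minimal, model-agnostic sketch:

```python
import time

def measure_fps(infer, n_warmup=10, n_iters=100):
    """Time repeated calls to `infer` and report frames per second.
    `infer` stands in for one forward pass of the model."""
    for _ in range(n_warmup):   # warm-up: exclude one-time setup costs
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed
```

With a real GPU model you would also synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, since CUDA kernels launch asynchronously.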
- To train, simply use:
```bash
python main.py {configs you want to use} --work_dirs {your folder} --gpus {device id}
```
- You can resume from your last checkpoint with:
```bash
python main.py {configs you want to use} --work_dirs {your folder} --load_from {your checkpoint path} --gpus {device id}
```
- Other hyperparameters can be changed within the config files.
- You can check the training procedure using TensorBoard:
```bash
tensorboard --logdir {your folder} --host=0.0.0.0 --port=1234
```
Results on VIL-100:

| Backbone | F1@50 | Acc | FP | FN | Download |
|---|---|---|---|---|---|
| ResNet-18 | 89.97 | 94.23 | 5.0 | 5.1 | Link |
| ResNet-34 | 90.39 | 94.38 | 4.4 | 4.9 | Link |
| ResNet-101 | 90.90 | 94.27 | 4.7 | 5.0 | Link |
Results on CULane:

| Backbone | F1@50 | Download |
|---|---|---|
| ResNet-18 | 77.56 | Link |
| ResNet-34 | 78.94 | Link |
Results on TuSimple:

| Backbone | F1@50 | Acc | FP | FN | Download |
|---|---|---|---|---|---|
| ResNet-18 | 96.90 | 96.23 | 2.91 | 3.29 | Link |
| ResNet-34 | 97.31 | 96.60 | 2.83 | 2.53 | Link |
If you find our work useful, please consider citing:
```
@article{xiao2023adnet,
  title={ADNet: Lane Shape Prediction via Anchor Decomposition},
  author={Xiao, Lingyu and Li, Xiang and Yang, Sen and Yang, Wankou},
  journal={arXiv preprint arXiv:2308.10481},
  year={2023}
}

@InProceedings{Xiao_2023_ICCV,
  author = {Xiao, Lingyu and Li, Xiang and Yang, Sen and Yang, Wankou},
  title = {ADNet: Lane Shape Prediction via Anchor Decomposition},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2023},
  pages = {6404-6413}
}
```