BasicIRSTD

BasicIRSTD is a PyTorch-based, open-source, and easy-to-use toolbox for infrared small target detection (IRSTD). This toolbox provides a simple pipeline to train and test your methods, and builds a benchmark to comprehensively evaluate the performance of existing methods. BasicIRSTD helps researchers get started with infrared small target detection quickly and facilitates the development of novel methods. You are welcome to contribute your own methods to the benchmark.

Note: This repository will be updated on a regular basis. Please stay tuned!


Contributions

  • We provide a PyTorch-based open-source and easy-to-use toolbox for IRSTD.
  • We re-implement a number of existing methods on unified datasets, and develop a benchmark for performance evaluation.
  • We share the code, models, and results of existing methods to help researchers get started in this area.

News & Updates

  • April 4, 2022: Publish the BasicIRSTD toolbox.

  • July 24, 2023: Update the BasicIRSTD toolbox (stable version).

    Code: We fix bugs to ensure the stability of training results.
    Results: We reproduce all models and update the results in the table.

  • April 19, 2024: Update README.md.

    New section "Build": We add instructions for compiling DCNv2 for ISNet.
    New section "Train on your own models": We add instructions for using self-defined models.

  • May 11, 2024: Update README.md.

    Updated section "Resources": We update the links to the pre-trained models and result files.


Requirements

  • Python 3
  • PyTorch 1.2.0 or higher
  • NumPy, PIL (Pillow)

Datasets

We use the NUAA-SIRST, NUDT-SIRST, and IRSTD-1K datasets for both training and testing. Please first download the datasets via Baidu Drive (key: 1113) or Google Drive, and place the three datasets in the folder ./datasets/. More results will be released soon!

  • Our project has the following structure (a minimal loading sketch is given after the tree):
    ├──./datasets/
    │    ├── NUAA-SIRST
    │    │    ├── images
    │    │    │    ├── XDU0.png
    │    │    │    ├── XDU1.png
    │    │    │    ├── ...
    │    │    ├── masks
    │    │    │    ├── XDU0.png
    │    │    │    ├── XDU1.png
    │    │    │    ├── ...
    │    │    ├── img_idx
    │    │    │    ├── train_NUAA-SIRST.txt
    │    │    │    ├── test_NUAA-SIRST.txt
    │    ├── NUDT-SIRST
    │    │    ├── images
    │    │    │    ├── 000001.png
    │    │    │    ├── 000002.png
    │    │    │    ├── ...
    │    │    ├── masks
    │    │    │    ├── 000001.png
    │    │    │    ├── 000002.png
    │    │    │    ├── ...
    │    │    ├── img_idx
    │    │    │    ├── train_NUDT-SIRST.txt
    │    │    │    ├── test_NUDT-SIRST.txt
    │    ├── ...  
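For reference, here is a minimal loading sketch (not the toolbox's own loader) for this layout; it assumes each line of an img_idx split file names one image without the .png extension:

import os
import numpy as np
from PIL import Image

def load_split(dataset_dir, split_file):
    # Read the split list from img_idx/ (one image name per line, assumed
    # to be given without the .png extension).
    with open(os.path.join(dataset_dir, 'img_idx', split_file)) as f:
        names = [line.strip() for line in f if line.strip()]
    for name in names:
        # Images and masks share filenames across the two folders.
        img = Image.open(os.path.join(dataset_dir, 'images', name + '.png'))
        mask = Image.open(os.path.join(dataset_dir, 'masks', name + '.png'))
        yield np.array(img, dtype=np.float32), np.array(mask, dtype=np.float32) / 255.0

# Example: iterate over the NUAA-SIRST training split.
for img, mask in load_split('./datasets/NUAA-SIRST', 'train_NUAA-SIRST.txt'):
    pass  # feed the (image, mask) pair into your pipeline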
    

Build

Compile DCN for ISNet:

  1. cd into model/ISNet/DCNv2.
  2. Run bash make.sh. The script builds DCNv2 automatically and creates some folders.
  3. To skip DCNv2 entirely, comment out the ISNet import in model/__init__.py.

Commands for Training

  • Run train.py to perform network training. Example of training [model_name] on the [dataset_name] dataset:
    $ python train.py --model_names ACM ALCNet --dataset_names NUAA-SIRST
    
  • Checkpoints and logs will be saved to ./log/, which has the following structure (a loading sketch follows the tree):
    ├──./log/
    │    ├── [dataset_name]
    │    │    ├── [model_name]_epoch400.pth.tar
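A hedged sketch for restoring such a checkpoint (the stored dictionary layout and the constructor call are assumptions; see test.py for the exact procedure):

import torch
from model import ACM  # models are exposed via model/__init__.py

model = ACM()  # constructor arguments, if any, omitted for brevity
ckpt = torch.load('./log/NUAA-SIRST/ACM_epoch400.pth.tar', map_location='cpu')
# The checkpoint may store the weights directly or under a 'state_dict' key.
model.load_state_dict(ckpt['state_dict'] if 'state_dict' in ckpt else ckpt)
model.eval()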
    

Train on your own models

  • Create a folder in ./model and put your own model in this folder (a minimal model.py sketch is given after this list).
    ├──./model/
    │    ├── xxxNet
    │    │    ├── model.py
    
  • Add the model in model/__init__.py:
    from model.ACM.model_ACM import ASKCResUNet as ACM
    ...
    from model.xxxNet.model import net as xxxNet
    
  • Add the model in net.py:
    if model_name == 'DNANet':
       self.model = DNANet(mode='train')
    ...
    elif model_name == 'xxxNet':
       self.model = xxxNet()
    ...
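For illustration, a minimal model.py sketch; it assumes the toolbox feeds a single-channel image tensor and expects a same-resolution single-channel logit map (check train.py for the exact interface):

# model/xxxNet/model.py
import torch.nn as nn

class net(nn.Module):
    def __init__(self):
        super().__init__()
        # A toy fully-convolutional body; replace with your own architecture.
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.body(x)  # per-pixel logits for the target mask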
    

Commands for Testing

  • Run test.py to perform network inference. Example of testing [model_name] on the [dataset_name] dataset:

    $ python test.py --model_names ACM ALCNet --dataset_names NUAA-SIRST
    
  • The PA/mIoU and PD/FA values of each dataset will be saved to ./test_[current time].txt

  • Network predictions will be saved to ./results/, which has the following structure:

    ├──./results/
    │    ├── [dataset_name]
    │    │   ├── [model_name]
    │    │   │    ├── XDU0.png
    │    │   │    ├── XDU1.png
    │    │   │    ├── ...
    │    │   │    ├── XDU20.png
    

Commands for Evaluating Your Own Results

  • Please first put your results in ./results/, which has the following structure:
    ├──./results/
    │    ├── [dataset_name]
    │    │   ├── [method_name]
    │    │   │    ├── XDU0.png
    │    │   │    ├── XDU1.png
    │    │   │    ├── ...
    │    │   │    ├── XDU20.png
    
  • Run evaluate.py for direct evaluation. Example of evaluating [model_name] on the [dataset_name] dataset:
    $ python evaluate.py --model_names ACM --dataset_names NUAA-SIRST
    
  • The PA/mIoU and PD/FA values of each dataset will be saved to ./eval_[current time].txt

Commands for parameters/FLOPs calculation

  • Run cal_params.py for parameters and FLOPs calculation (see the sketch after this list). Examples:
    $ python cal_params.py --model_names ACM ALCNet
    
  • The parameters and FLOPs of each method will be saved to ./params_[current time].txt
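For intuition, a rough stand-in for such a calculation (the actual cal_params.py may differ); the third-party thop package and the 1x1x256x256 input shape are assumptions:

import torch
from thop import profile  # pip install thop
from model import ACM  # any model registered in model/__init__.py

model = ACM()  # constructor arguments, if any, omitted for brevity
n_params = sum(p.numel() for p in model.parameters())
print('#Params: %.3fM' % (n_params / 1e6))

# thop reports multiply-accumulates, commonly quoted as FLOPs.
flops, _ = profile(model, inputs=(torch.randn(1, 1, 256, 256),))
print('FLOPs: %.3fG' % (flops / 1e9))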

Benchmark

We benchmark several methods on the above datasets. mIoU, Pd, and Fa metrics under threshold = 0.5 are used for quantitative evaluation.
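For intuition, a simplified sketch of the pixel-level IoU at threshold = 0.5 (the toolbox's metric code may differ; Pd and Fa are target-level rates that additionally require connected-component matching, omitted here):

import numpy as np

def pixel_iou(pred, mask, thresh=0.5):
    # Binarize the score map at the threshold, then compare it with the
    # ground-truth mask at the pixel level.
    p = pred > thresh
    m = mask > 0.5
    union = np.logical_or(p, m).sum()
    return np.logical_and(p, m).sum() / union if union else 1.0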

Note: For a detailed review of existing IRSTD methods, please refer to Tianfang-Zhang/awesome-infrared-small-targets.

mIoU/Pd/Fa values achieved by different methods (each dataset column lists IoU / Pd / Fa):

| Methods | #Params | FLOPs | NUAA-SIRST (IoU / Pd / Fa) | NUDT-SIRST (IoU / Pd / Fa) | IRSTD-1K (IoU / Pd / Fa) |
|---|---|---|---|---|---|
| Top-Hat | - | - | 7.142 / 79.841 / 1012.003 | 20.724 / 78.408 / 166.704 | 10.062 / 75.108 / 1432.003 |
| Max-Median | - | - | 1.168 / 30.196 / 55.332 | 4.201 / 58.413 / 36.888 | 7.003 / 65.213 / 59.731 |
| RLCM | - | - | 21.022 / 80.612 / 199.154 | 15.139 / 66.348 / 162.996 | 14.623 / 65.658 / 17.949 |
| WSLCM | - | - | 1.021 / 80.987 / 45846.164 | 0.848 / 74.574 / 52391.633 | 0.989 / 70.026 / 15027.084 |
| TLLCM | - | - | 11.034 / 79.473 / 7.268 | 7.059 / 62.014 / 46.118 | 5.357 / 63.966 / 4.928 |
| MSLCM | - | - | 11.557 / 78.332 / 8.374 | 6.646 / 56.827 / 25.619 | 5.346 / 59.932 / 5.410 |
| MSPCM | - | - | 12.837 / 83.271 / 17.773 | 5.859 / 55.866 / 115.961 | 7.332 / 60.270 / 15.242 |
| IPI | - | - | 25.674 / 85.551 / 11.470 | 17.758 / 74.486 / 41.230 | 27.923 / 81.374 / 16.183 |
| NRAM | - | - | 12.164 / 74.523 / 13.852 | 6.931 / 56.403 / 19.267 | 15.249 / 70.677 / 16.926 |
| RIPT | - | - | 11.048 / 79.077 / 22.612 | 29.441 / 91.850 / 344.303 | 14.106 / 77.548 / 28.310 |
| PSTNN | - | - | 22.401 / 77.953 / 29.109 | 14.848 / 66.132 / 44.170 | 24.573 / 71.988 / 35.261 |
| MSLSTIPT | - | - | 10.302 / 82.128 / 1131.002 | 8.341 / 47.399 / 88.102 | 11.432 / 79.027 / 1524.004 |
| ACM | 0.398M | 0.402G | 69.440 / 92.015 / 22.707 | 64.855 / 96.720 / 28.587 | 60.326 / 93.266 / 68.494 |
| ALCNet | 0.427M | 0.378G | 61.047 / 87.072 / 55.978 | 61.131 / 97.249 / 29.093 | 58.088 / 92.929 / 74.453 |
| ISNet | 0.966M | 30.618G | 70.491 / 95.057 / 67.983 | 81.236 / 97.778 / 6.343 | 61.852 / 90.236 / 31.561 |
| RDIAN | 0.217M | 3.718G | 70.737 / 95.057 / 48.158 | 82.419 / 98.836 / 14.845 | 59.939 / 87.205 / 33.307 |
| DNA-Net | 4.697M | 14.261G | 74.815 / 93.536 / 38.279 | 94.192 / 99.259 / 2.436 | 65.735 / 89.562 / 12.336 |
| ISTDU-Net | 2.752M | 7.944G | 75.928 / 96.198 / 38.897 | 91.762 / 98.519 / 3.769 | 65.014 / 93.939 / 26.437 |
| UIU-Net | 50.540M | 54.426G | 77.531 / 92.395 / 9.330 | 90.517 / 98.836 / 8.342 | 65.690 / 91.246 / 13.475 |

Resources

  • We provide the result files generated by the aforementioned methods; researchers can download the results via Baidu Drive (key: 1113) or OneDrive.
  • The pre-trained models of the aforementioned methods can be downloaded via Baidu Drive (key: 1113) or OneDrive.

Acknowledgement

We would like to thank Boyang Li, Ruojing Li, Tianhao Wu and Ting Liu for the helpful discussions and insightful suggestions regarding this repository.

Contact

Welcome to raise issues or email yingxinyi18@nudt.edu.cn with any questions regarding BasicIRSTD.