# SST
## Create Environment:
```shell
pip install -r requirements.txt
```
## Prepare Dataset:
Download cave_1024_28 (One Drive), CAVE_512_28 (Baidu Disk, code: `ixoe` | One Drive), KAIST_CVPR2021 (Baidu Disk, code: `5mmn` | One Drive), TSA_simu_data (One Drive), and TSA_real_data (One Drive). Put them into the corresponding folders of `datasets/` and arrange them in the following structure:
```
|--SST
    |--real
        |--test_code
        |--train_code
    |--simulation
        |--test_code
        |--train_code
    |--visualization
    |--datasets
        |--cave_1024_28
            |--scene1.mat
            |--scene2.mat
            :
            |--scene205.mat
        |--CAVE_512_28
            |--scene1.mat
            |--scene2.mat
            :
            |--scene30.mat
        |--KAIST_CVPR2021
            |--1.mat
            |--2.mat
            :
            |--30.mat
        |--TSA_simu_data
            |--mask.mat
            |--Truth
                |--scene01.mat
                |--scene02.mat
                :
                |--scene10.mat
        |--TSA_real_data
            |--mask.mat
            |--Measurements
                |--scene1.mat
                |--scene2.mat
                :
                |--scene5.mat
```
Following TSA-Net and DGSMP, we use the CAVE dataset (cave_1024_28) as the simulation training set. Both the CAVE (CAVE_512_28) and KAIST (KAIST_CVPR2021) datasets are used as the real training set.
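Before training, it can help to confirm the layout above is in place. A minimal sketch that checks for the five expected dataset folders (the `missing_datasets` helper is hypothetical, not part of this repo; folder names follow the tree above):

```python
from pathlib import Path

# Expected top-level folders under datasets/ (names follow the tree above).
EXPECTED = ["cave_1024_28", "CAVE_512_28", "KAIST_CVPR2021",
            "TSA_simu_data", "TSA_real_data"]

def missing_datasets(root="datasets"):
    """Return the expected dataset folders that are absent under `root`."""
    return [name for name in EXPECTED if not (Path(root) / name).is_dir()]

print(missing_datasets())  # lists whatever has not been downloaded yet
```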
## Prepare Pretrained Checkpoint:
Download the pretrained checkpoints for the simulation and real experiments (Baidu Disk, code: `2ndf` | One Drive), and then put them into the corresponding folders of `pretrained/`.
## Simulation Experiment:
### Training
```shell
cd SST/simulation/train_code/

# SST_S
python train.py --outf ./exp/SST_S/ --method SST_S
# SST_M
python train.py --outf ./exp/SST_M/ --method SST_M
# SST_L
python train.py --outf ./exp/SST_L/ --method SST_L
# SST_LPlus
python train.py --outf ./exp/SST_LPlus/ --method SST_LPlus
```
The training log, trained model, and reconstructed HSIs will be available in `SST/simulation/train_code/exp/`.
### Testing
Run the following command to test the model on the simulation dataset.
```shell
cd SST/simulation/test_code/

# SST_S
python test.py --outf ./exp/SST_S/ --method SST_S --pretrained_model_path ./SST_S.pth
# SST_M
python test.py --outf ./exp/SST_M/ --method SST_M --pretrained_model_path ./SST_M.pth
# SST_L
python test.py --outf ./exp/SST_L/ --method SST_L --pretrained_model_path ./SST_L.pth
# SST_LPlus
python test.py --outf ./exp/SST_LPlus/ --method SST_LPlus --pretrained_model_path ./SST_LPlus.pth
```
The reconstructed HSIs will be output into `SST/simulation/test_code/exp/`.
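The saved `.mat` results can then be scored offline against the ground truth in `TSA_simu_data/Truth`. A minimal PSNR sketch with NumPy (the variable names inside the `.mat` files are not assumed here; load the arrays with `scipy.io.loadmat` and pass them in):

```python
import numpy as np
# from scipy.io import loadmat  # use this to read the saved .mat files

def psnr(pred, truth, data_range=1.0):
    """Peak signal-to-noise ratio over a whole HSI cube (higher is better)."""
    mse = np.mean((np.asarray(pred, np.float64) - np.asarray(truth, np.float64)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

# Synthetic check: a cube offset from the truth by a constant 0.1.
truth = np.zeros((256, 256, 28))
print(round(psnr(truth + 0.1, truth), 2))  # 20.0, i.e. 20 * log10(1 / 0.1)
```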
## Real Experiment:
### Training
```shell
cd SST/real/train_code/

# SST_S
python train.py --outf ./exp/SST_S/ --method SST_S
# SST_M
python train.py --outf ./exp/SST_M/ --method SST_M
```
The training log, trained model, and reconstructed HSIs will be available in `SST/real/train_code/exp/`.
### Testing
```shell
cd SST/real/test_code/

# SST_S
python test.py --outf ./exp/SST_S/ --method SST_S --pretrained_model_path ./SST_S.pth
# SST_M
python test.py --outf ./exp/SST_M/ --method SST_M --pretrained_model_path ./SST_M.pth
```
The reconstructed HSIs will be output into `SST/real/test_code/exp/`.
## Acknowledgement:
Many thanks to the authors of the excellent prior works for their dedication. The code structure and datasets are borrowed from MST and DAUHST (https://github.com/caiyuanhao1998/MST). Please consider citing their works:
```
@inproceedings{mst,
  title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},
  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={CVPR},
  year={2022}
}

@inproceedings{dauhst,
  title={Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging},
  author={Cai, Yuanhao and Lin, Jing and Wang, Haoqian and Yuan, Xin and Ding, Henghui and Zhang, Yulun and Timofte, Radu and Van Gool, Luc},
  booktitle={NeurIPS},
  year={2022}
}
```