zilongzhong/SSTN

How to Run the code? Thank you!

AlanMorningLight opened this issue · 4 comments

Dear Zilong,
I ran the code with the command "python train_UP.py --batch_size 50 --learning_rate 0.001 --gpu 0 --epochs 200 --model SSTN --phi EAAE" in the PyCharm terminal on Windows 10.
However, ① I got a multiprocessing error. After setting num_workers=0, the program no longer reports an error, but it also produces no result.
② I also had to add an "if __name__ == '__main__':" guard in train_UP.py; otherwise the program cannot run (see the sketch at the end of this message).
③ Could you provide an execution command that runs the program correctly?
I have also sent some further questions to your email z26zhong@uwaterloo.ca. Thank you very much.
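For reference, here is a minimal sketch of the Windows-specific fix I applied; the main() function and the dummy dataset are placeholders of mine, not code from train_UP.py:

```python
# Minimal sketch, assuming train_UP.py builds a PyTorch DataLoader.
# main() and the dummy dataset below are placeholders, not repo code.
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Dummy data standing in for the Pavia University patches.
    dataset = TensorDataset(torch.randn(100, 1, 103, 9, 9),
                            torch.randint(0, 9, (100,)))
    # num_workers=0 avoids spawning worker processes, the usual workaround
    # for DataLoader multiprocessing errors on Windows.
    loader = DataLoader(dataset, batch_size=50, shuffle=True, num_workers=0)
    for batch, labels in loader:
        pass  # training step would go here

if __name__ == '__main__':
    # On Windows, worker processes re-import this module, so the entry
    # point must be guarded to avoid recursive process spawning.
    main()
```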

Dear Wenchao, thanks a lot for your detailed feedback.
The repo has been updated with a proper explanation, and the commands to run the code can be found in the Usage section.
Here are my responses to your five questions:

  1. Yes, that's one reason. Also, additional experiments were requested by reviewers.
  2. The layer-level searching is for FAS. After layer-level searching, we focus on the architecture-level sequential setting.
  3. Spectral association is different from spectral attention. Our preliminary experiments showed that spectral attention did not work as well as expected, so we designed a spectral association module to improve on it. The essence of the spectral association kernel is the k×c spectral kernels, whose physical meaning is the most prominent spectral features of the input feature maps. This step also drastically decreases the computational cost (a simplified illustrative sketch follows this list).
  4. Compared to both the spatial attention module and convolutional module, the spectral association kernels are sparse.
  5. The adopted sampling method is without replacement. Your understanding of the Val and Dev sets is generally correct.
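To make point 3 concrete, here is a purely illustrative sketch of how a k×c spectral-association kernel could be wired up in PyTorch; the module name, the hyperparameter k, and the residual connection are simplifying assumptions for illustration, not the exact implementation in this repo:

```python
# Illustrative only: each of the k rows of `kernels` acts as a learned
# spectral prototype (a k x c spectral kernel) that pixels are associated with.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralAssociation(nn.Module):
    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        # k x c spectral kernels: the most prominent spectral features.
        self.kernels = nn.Parameter(torch.randn(k, channels) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feats = x.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        # Associate every pixel's spectrum with the k prototypes.
        assoc = F.softmax(feats @ self.kernels.t(), dim=-1)   # (B, H*W, k)
        # Reconstruct features from the prototypes; k << H*W keeps this cheap.
        out = assoc @ self.kernels                             # (B, H*W, C)
        return out.transpose(1, 2).reshape(b, c, h, w) + x    # residual add

# Example: y = SpectralAssociation(channels=64)(torch.randn(2, 64, 9, 9))
```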


Thank you very much for taking the time to reply to each question during your busy schedule. I have benefited a lot from your paper. My contact information is included in the email; if it is convenient for you, you can also add me on WeChat to facilitate future academic discussions and exchanges. Thanks for your help. Good luck!

Dear Zilong,
I think the FAS code is not included in the code you have shared. Is that right? Thank you very much.
Best wishes.