linnanwang/AlphaX-NASBench101

Search space for NASNet

Closed this issue · 3 comments

Thank you for the great idea in this work!
I would like to ask about the difference between AlphaX's NASNet-based search space and the original NASNet search space. It seems that in each block of a cell, either input hidden state can be passed through up to two layers with different operations. In my understanding, this differs from NASNet, where each input hidden state goes through only one operation. Is this setting intentional?
Please let me know if I misunderstood.
Thanks again.

The search space of AlphaX is fully consistent with NASNet: each input hidden state is transformed by only one operation. We did experiment with applying two layers with different operations to a hidden state, but found that the final accuracy did not improve.
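For readers unfamiliar with the NASNet block structure being discussed, here is a minimal sketch of how a block in such a search space can be encoded: each block selects two input hidden states and exactly one operation per input, then combines the two results. The operation list, function names, and encoding are illustrative assumptions, not AlphaX's actual implementation.

```python
import random

# Hypothetical operation set for illustration only.
OPS = ["sep_conv_3x3", "sep_conv_5x5", "avg_pool_3x3",
       "max_pool_3x3", "identity"]

def sample_block(num_hidden_states):
    """Sample one block: pick two input hidden states and exactly
    one operation for each input, then combine the two outputs."""
    input_a = random.randrange(num_hidden_states)
    input_b = random.randrange(num_hidden_states)
    op_a = random.choice(OPS)  # a single op per input, as in NASNet
    op_b = random.choice(OPS)
    return {"inputs": (input_a, input_b),
            "ops": (op_a, op_b),
            "combine": "add"}

def sample_cell(num_blocks=5):
    """A cell starts with two hidden states (the outputs of the two
    previous cells); each new block adds one more hidden state."""
    blocks = []
    for i in range(num_blocks):
        blocks.append(sample_block(num_hidden_states=2 + i))
    return blocks

if __name__ == "__main__":
    for block in sample_cell():
        print(block)
```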

Thanks a lot!

We also want to bring your attention to our recent work, "Sample-Efficient Neural Architecture Search by Learning Action Space", which has significantly improved AlphaX. The code will be made public under the FAIR GitHub repository soon.