titu1994/neural-architecture-search

Opened this issue · 0 comments

gcooq commented

Thanks for your great contribution; I have some questions I need your help with. I am new to NAS and reinforcement learning. There is an exploration mechanism, e.g., a probability of choosing a random action; however, the search usually gets stuck in a local area and cannot reach new states. Can we randomly choose the state with a high exploration rate instead of randomly choosing the action? It seems that randomly choosing the state could cover more of the search space.
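
Roughly what I mean, as a toy sketch (this is not the repository's code; the search space, `policy` stand-in, and function names are hypothetical, just to contrast per-action exploration with sampling a whole state):

```python
import random

# Hypothetical toy search space: a "state" is a full architecture
# (one choice per key), an "action" picks the value for a single key.
SEARCH_SPACE = {
    "filters": [16, 32, 64],
    "kernel":  [1, 3, 5],
}

def epsilon_greedy_actions(policy, epsilon=0.1):
    """Build a state action by action; each action is random with probability epsilon."""
    state = {}
    for key, choices in SEARCH_SPACE.items():
        if random.random() < epsilon:
            state[key] = random.choice(choices)   # explore a single action
        else:
            state[key] = policy(key, choices)     # exploit the controller's choice
    return state

def random_state():
    """Sample an entire architecture uniformly, ignoring the controller."""
    return {key: random.choice(choices) for key, choices in SEARCH_SPACE.items()}

# Stand-in "policy" that always takes the first option, for illustration only.
greedy = lambda key, choices: choices[0]
print(epsilon_greedy_actions(greedy, epsilon=0.5))
print(random_state())
```

With per-action exploration, most sampled states stay close to what the controller already prefers; sampling the whole state uniformly would jump anywhere in the search space, which is what I mean by obtaining more of the search space.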