A question about the result
hzx-ctrl opened this issue · 5 comments
Hi,
I noticed that the duration of a task is decided by the code in node.py, which uses np.random.randint to generate the task's execution time. But if I replace it with np_random with a specified seed, the results I get are still different each time I train the model. I have no idea why this happens.
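The kind of replacement I mean is roughly this (a sketch, not the exact code in node.py; sample_task_duration is just an illustrative name):

```python
import numpy as np

# Hypothetical sketch: a dedicated, seeded RNG instead of the global
# np.random.randint call used when generating a task's duration.
np_random = np.random.RandomState(seed=42)

def sample_task_duration(low, high):
    # Same distribution as before, but drawn from the seeded RNG.
    return np_random.randint(low, high)
```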
Thank you!
Hu
I think there's randomness in tensorflow's action sampling too. As a result, each round of training gets a different action trajectory, and the model goes down a different path. Try fixing a random seed for that too and see if the results are repeatable.
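For reference, a minimal sketch of seeding the main sources of randomness (assuming the TF1-style API this repo uses; where exactly to put these calls depends on where the graph and the worker processes are created):

```python
import random
import numpy as np
import tensorflow as tf

SEED = 42  # any fixed value

random.seed(SEED)         # Python's built-in RNG
np.random.seed(SEED)      # numpy RNG (e.g., task durations)
tf.set_random_seed(SEED)  # graph-level seed for TF ops, incl. action sampling

# Note: with multiple worker processes collecting experience, each
# process needs to seed its own RNGs (e.g., derive SEED + worker_id).
```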
One other potential problem I remember is some numerical instability in tensorflow. The training has multiple agents collecting experience in different processes. Mathematically, the order in which the experiences are gathered to compute the gradient shouldn't matter. But empirically, tensorflow seems to get a slightly different gradient when the experiences are assembled in a different order. You might also want to keep this in mind if you want a repeatable outcome on every run. Hope these help!
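As a generic illustration of why accumulation order can matter numerically (floating-point addition is not associative; this is not specific to Decima):

```python
import numpy as np

rng = np.random.RandomState(0)
values = rng.randn(100000).astype(np.float32)   # stand-in for per-worker gradients

sum_a = float(np.sum(values))                   # one accumulation order
sum_b = float(np.sum(rng.permutation(values)))  # same values, different order

# The two sums can differ in the low bits, so gradients assembled from
# workers in a different order can nudge training down a different path.
print(sum_a, sum_b, sum_a == sum_b)
```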
Thanks for your reply!
And since the algorithm picks up a different DAG each episode, how can we tell whether Decima has converged?
Look at the reward and entropy signals. You can set a criterion for training convergence (e.g., the signal flattens out, or stays within x standard deviations computed from the past n signal data points). This part is similar to standard RL training.
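For example, a hypothetical helper along those lines (looks_converged, n, and x are made-up names/parameters, not part of the repo):

```python
import numpy as np

def looks_converged(signal, n=100, x=1.0):
    """Treat training as converged when the latest point of `signal`
    (e.g., episode reward or policy entropy) stays within x standard
    deviations of the mean computed from the previous n points."""
    if len(signal) < n + 1:
        return False
    window = np.asarray(signal[-(n + 1):-1], dtype=float)
    return abs(signal[-1] - window.mean()) <= x * window.std()
```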
Thank you very much, and sorry to bother you again. I trained with --num_init_dags 5 --num_stream_dags 10, and after several thousand episodes I find the output of the policy network is so large that valid_mask can't work at all, which leads to illegal actions being taken. Could you please tell me whether this is normal, and what possible reasons there are for it to happen? Thanks!
hmmm I don't recall valid_mask failing. If the policy network can output something, valid_mask has the same shape. I don't quite get what you mean by "policy network is so large" — are the numeric values too large? That might lead to NaN when a very large number (basically being treated as Inf) multiplies 0 at valid_mask. In another context where I have seen behavior like this, it's usually because the agent selected an invalid action in the previous step. Because it was masked with 0, the gradient descent will have an Inf for some parameters, and then things blow up. But I don't recall seeing this in this training code.
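A tiny numpy illustration of that failure mode (generic, not Decima's exact masking code): once the network outputs overflow to Inf, multiplying by the 0 entries of a mask produces NaN rather than the intended 0.

```python
import numpy as np

logits = np.array([1000.0, 1.0, 2.0], dtype=np.float32)
scores = np.exp(logits)                    # exp(1000) overflows to inf in float32
valid_mask = np.array([0.0, 1.0, 1.0], dtype=np.float32)

print(scores * valid_mask)                 # [nan, 2.72, 7.39] -- inf * 0 is nan
```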
Here's a pre-trained model: #12. You might want to train with the same parameters and compare against that model?