pythonlessons/RL-Bitcoin-trading-bot

Why Random in Action Selection?

Closed this issue · 4 comments

Hi,
I just finished your tutorial, and it's really interesting.
I'm just wondering why you use np.random.choice() to select the action from the prediction. Wouldn't it be better to take the maximum value instead of choosing randomly?

predictions_list = agent.Actor.actor_predict(np.reshape(state, [num_worker]+[_ for _ in state[0].shape]))
actions_list = [np.random.choice(agent.action_space, p=i) for i in predictions_list]
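For comparison, the deterministic alternative I had in mind would be something like this (just an illustration reusing the same variable names, not the repository's code):

import numpy as np  # assumed already imported alongside the snippet above
actions_list = [np.argmax(i) for i in predictions_list]  # always take the highest-probability action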

Hi,

From my understanding (and I'm kind of new to ML as well):

  • The neural network gives us a set of probabilities as outputs. (I just picture neural networks as probability-tree mazes.)
  • What do we do with that? Do we only EXPLORE the maximum-probability output and REINFORCE it? Or can we sometimes explore the other outputs and reinforce/weaken them based on the reward?

From what I've seen, old DQN networks use the argmax function on the output. They are forced to produce completely random actions at the start (epsilon-greedy exploration with a decaying epsilon).

Here, the random choice is made using the p=i parameter, which effectively weights each outcome by the probabilities the network outputs.
Let's say the network outputs [0.1, 0.7, 0.2]: np.random.choice still gives the agent a chance to EXPLORE action 0, with probability 0.1, but most of the time it will pick action 1. If you used the argmax function here, the agent would get stuck much more easily.
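A minimal sketch of that example (the names are illustrative): sampling with p still explores action 0 about 10% of the time, while argmax would pick action 1 every single time.

import numpy as np

probs = np.array([0.1, 0.7, 0.2])  # example Actor output for one state

# sample 10,000 actions with the network's probabilities, as np.random.choice(p=...) does
samples = np.random.choice(len(probs), size=10_000, p=probs)
frequencies = np.bincount(samples, minlength=len(probs)) / len(samples)
print(frequencies)        # roughly [0.1, 0.7, 0.2] -> action 0 still gets explored

# argmax, by contrast, always returns action 1 for this output
print(np.argmax(probs))   # 1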

What a good answer. I could not have come up with such a good answer myself.

Hi, thanks for the reply, it's much clearer now.
What bothered me back then was that if we don't use the argmax function to get the result, we get a reward for something different from what the agent predicted. But with your explanation, and after taking a step back, I understood that when we send the data to the Critic, we send the prediction, the action and the reward, so it knows we didn't just take the maximum. It can then confirm that the 0.1 choice was bad if the reward was poor, or it gets some clue about where to make adjustments if the reward was good.
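To check my own understanding, a minimal sketch of that idea (the names and structure here are just for illustration, not the repository's exact code): the rollout buffer records the action that was actually sampled alongside the Actor's probabilities and the reward, so the update can credit or penalize that specific choice.

import numpy as np

# hypothetical rollout buffer for an actor-critic style update (illustrative names only)
action_space = 3
states, actions, rewards, predictions = [], [], [], []

def choose_and_record(state, probs):
    """Sample an action from the Actor's probabilities and remember what was chosen."""
    action = np.random.choice(action_space, p=probs)
    one_hot = np.zeros(action_space)
    one_hot[action] = 1.0          # which action was actually taken
    states.append(state)
    actions.append(one_hot)
    predictions.append(probs)      # the Actor's output at decision time
    return action

# after env.step(action) returns a reward for the sampled action:
#   rewards.append(reward)
# so the update sees (state, sampled action, reward, prediction), not just the argmax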

@pythonlessons Your tutorial was really interesting, and you released it just at the right time ^^
I kept making adjustments over the last 2 weeks, and what gave me the best results was removing order_history from the states and normalizing each state's data (the division by min/max only) instead of all the data at once; a rough sketch of this follows the numbers below. That way the data is more generic, and even a CNN_50 does better than a CNN_100 (without the modification), although training takes longer. Another nice point is that after training on the BTCUSDT pair and testing on other pairs, it gives some unexpected results ^^
for example:

1961.46_Crypto_trader, test episodes:1, net worth:17056.968456746374, orders per episode:375.0, model: CNN, comment: 3 months ENJUSDT
1961.46_Crypto_trader, test episodes:1, net worth:4308.664574548052, orders per episode:125.0, model: CNN, comment: 1st month ENJUSDT
1961.46_Crypto_trader, test episodes:1, net worth:1171.7433729711613, orders per episode:128.0, model: CNN, comment: 2nd month ENJUSDT
1961.46_Crypto_trader, test episodes:1, net worth:3488.0144749304186, orders per episode:121.0, model: CNN, comment: 3rd month ENJUSDT
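
Roughly, the per-state normalization I mean looks something like this (just an illustrative sketch; the window shape, column count and names are not my exact code):

import numpy as np

def normalize_window(window):
    """Min/max-scale one state's market data using only that window's own values."""
    w_min = window.min(axis=0)
    w_max = window.max(axis=0)
    return (window - w_min) / (w_max - w_min + 1e-8)  # epsilon avoids division by zero

# hypothetical 50-step lookback window with 5 market columns (e.g. OHLCV)
window = np.random.rand(50, 5) * 40000
state = normalize_window(window)   # each state ends up roughly in [0, 1] on its own scale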

Hi @TanJeremy,

Thank you for your submission here! I do have some questions about your last comment:

With "removing order_history from the states"; do you mean:

state = np.concatenate((self.orders_history, self.market_history), axis=1)
&
obs = np.concatenate((self.orders_history, self.market_history), axis=1)

becomes:

state = self.market_history
&
obs = self.market_history

or did you make other changes as well?

And also, what do you mean by "normalizing each state's data (the division by min/max only) instead of all the data at once"? Did you remove the self.normalize_value?

With kind regards,

Erik