CS4049_ASS2

Overleaf link

https://www.overleaf.com/8869713429ghrwwxjvrwzq#2a52d8

Assessment spec

Figure 1 shows the Taxi environment for experimenting with reinforcement learning. In this environment a taxi navigates a grid world, picking up passengers at a source location, driving them to a destination location and dropping them off there. Two versions of this environment exist, one in OpenAI Gym and one in the Farama Foundation's Gymnasium. Make sure you select one of them after checking all the associated libraries required to work with the selected version. Please use Python for all programming tasks. Students are encouraged to use Python-based frameworks such as TensorFlow and Keras.
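As a minimal starting point (a sketch only, assuming the Gymnasium version of the environment is chosen), the environment can be created and inspected like this:

```python
import gymnasium as gym  # Farama Foundation's maintained fork of OpenAI Gym

# Create the Taxi environment (500 discrete states, 6 discrete actions)
env = gym.make("Taxi-v3")
obs, info = env.reset(seed=0)

print(env.observation_space)  # Discrete(500)
print(env.action_space)       # Discrete(6)

# Take one random step to check the step API
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
print(obs, reward, terminated, truncated)
```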

1.1

Describe the Epsilon-greedy method in the context of the exploration-exploitation tradeoff in reinforcement learning, using the Taxi environment as an example. (5 marks)
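A minimal sketch of epsilon-greedy action selection over the Taxi environment's six discrete actions (the `q_values` array here is a placeholder for whatever value estimates the agent maintains):

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon (explore),
    otherwise pick the action with the highest estimated value (exploit)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore: e.g. a random taxi move
    return int(np.argmax(q_values))               # exploit: best-known move/pick-up/drop-off

# Example: value estimates for the 6 Taxi actions in some state
q_values = np.array([0.1, -0.5, 0.3, 0.0, -1.0, 2.0])
print(epsilon_greedy(q_values, epsilon=0.1))
```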

1.2

Design and train a neural net agent using the Epsilon-greedy method. Describe how you designed your agent, the motivation behind the design choices, what the parameters are, and how you adjusted them. You may use open-source code and libraries if you acknowledge them. Present the neural net architecture and details of training using figures where necessary. (30 marks)
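One possible shape for such an agent is a Keras Q-network over one-hot encoded Taxi states; the sketch below is illustrative only, and the layer sizes, optimiser and learning rate are assumptions rather than the required design:

```python
import numpy as np
from tensorflow import keras

N_STATES, N_ACTIONS = 500, 6  # Taxi-v3 sizes

# Simple feed-forward Q-network: one-hot state in, one Q-value per action out
q_net = keras.Sequential([
    keras.layers.Input(shape=(N_STATES,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(N_ACTIONS, activation="linear"),
])
q_net.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

def encode(state):
    """One-hot encode a discrete Taxi state index."""
    x = np.zeros((1, N_STATES), dtype=np.float32)
    x[0, state] = 1.0
    return x

# Q-values for one state, which the epsilon-greedy policy above can act on
q_values = q_net.predict(encode(42), verbose=0)[0]
```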

1.3

Run experiments using the agent from 1.2 and discuss your results, including a line plot that shows how the average reward evolves over time. Using the experimental results, comment on the quality of the agent from 1.2. (15 marks)
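A common way to produce such a plot is to log the total reward of each training episode and smooth it with a moving average. A sketch, assuming the per-episode returns are collected in a list during training:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_average_reward(episode_rewards, window=50):
    """Plot a moving average of per-episode returns logged during training."""
    rewards = np.asarray(episode_rewards, dtype=float)
    moving_avg = np.convolve(rewards, np.ones(window) / window, mode="valid")
    plt.plot(moving_avg)
    plt.xlabel("Episode")
    plt.ylabel(f"Average reward (window = {window} episodes)")
    plt.show()
```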

1.4

Design and train another neural net agent, replacing the Epsilon-greedy method with another method for managing the exploration-exploitation tradeoff. Your mark for this part of the coursework depends on the level of sophistication of the tradeoff method and its suitability to the Taxi environment. (35 marks)
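One candidate replacement (by no means the only acceptable one) is softmax/Boltzmann exploration, where actions are sampled in proportion to their estimated values. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_action(q_values, temperature=1.0):
    """Sample an action with probability proportional to exp(Q / temperature).
    High temperature -> near-uniform exploration; low temperature -> near-greedy."""
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                          # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(probs), p=probs))

# Example with the 6 Taxi actions
print(softmax_action([0.1, -0.5, 0.3, 0.0, -1.0, 2.0], temperature=0.5))
```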

1.5

Run experiments using the agent from 1.4 and discuss your results, including a line plot that shows how the average reward evolves over time. Compare the performance of the two agents from 1.2 and 1.4. (15 marks)
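For the comparison, both agents' smoothed reward curves can be drawn on the same axes. A sketch along the lines of the helper from 1.3, where the two arguments are placeholders for the reward lists logged for each agent:

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_agents(rewards_eps_greedy, rewards_alternative, window=50):
    """Overlay the smoothed reward curves of the two agents (1.2 vs 1.4)."""
    for rewards, label in [(rewards_eps_greedy, "Epsilon-greedy (1.2)"),
                           (rewards_alternative, "Alternative method (1.4)")]:
        smoothed = np.convolve(np.asarray(rewards, dtype=float),
                               np.ones(window) / window, mode="valid")
        plt.plot(smoothed, label=label)
    plt.xlabel("Episode")
    plt.ylabel(f"Average reward (window = {window} episodes)")
    plt.legend()
    plt.show()
```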