Markov decision process
This project models an MDP for a generic grid world (e.g. Figure 17.1 of the AIMA textbook, page 646) and uses value iteration to print the values of all states at each iteration.
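Value iteration repeatedly applies the Bellman update, U(s) <- R(s) + gamma * max_a sum_{s'} P(s' | s, a) * U(s'), until the state values stop changing, printing the current values after every sweep. The snippet below is only a minimal sketch of that loop, not the contents of mdp.py: the data layout (plain dicts for states, actions, transitions and rewards) and the function name value_iteration are assumptions made for illustration.

```python
# Minimal value iteration sketch (illustrative; the data layout here is
# assumed and does not necessarily match mdp.py or input.json).
import copy

def value_iteration(states, actions, transitions, rewards, gamma=0.9, epsilon=1e-4):
    """transitions[(s, a)] is a list of (probability, next_state) pairs."""
    U = {s: 0.0 for s in states}                # utilities start at zero
    iteration = 0
    while True:
        U_prev = copy.deepcopy(U)               # snapshot of the previous sweep
        delta = 0.0
        for s in states:
            if actions[s]:                      # non-terminal: pick the best action
                best = max(
                    sum(p * U_prev[s2] for p, s2 in transitions[(s, a)])
                    for a in actions[s]
                )
            else:                               # terminal state: no future value
                best = 0.0
            U[s] = rewards[s] + gamma * best    # Bellman update
            delta = max(delta, abs(U[s] - U_prev[s]))
        iteration += 1
        print(f"Iteration {iteration}: {U}")    # values of all states this sweep
        if delta < epsilon:                     # stop once the values have converged
            return U
```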
Steps to reproduce:
1. Make sure Python 3 is installed. The script relies on the copy and json modules, which are part of the Python standard library, so no extra packages need to be installed.
2. Create a JSON file called input.json; a template is included in this zip. (A purely hypothetical example of such a file is sketched after these steps.)
3. Run mdp.py.
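The exact fields expected in input.json come from the template bundled in the zip. Purely as orientation, a hypothetical grid-world configuration might be written out like this; every field name below is an assumption, so follow the bundled template rather than this sketch.

```python
# Hypothetical input.json for a 4x3 grid world.  All field names here
# (rows, cols, walls, terminals, step_reward, discount, noise) are
# assumptions for illustration; the real schema is defined by the
# template included in the zip.
import json

config = {
    "rows": 3,
    "cols": 4,
    "walls": [[1, 1]],                          # blocked cells as (row, col)
    "terminals": [[0, 3, 1.0], [1, 3, -1.0]],   # terminal cells with their rewards
    "step_reward": -0.04,                       # reward for each non-terminal step
    "discount": 1.0,                            # discount factor gamma
    "noise": 0.2,                               # chance of slipping sideways
}

with open("input.json", "w") as f:
    json.dump(config, f, indent=2)
```

Once input.json is in place, run the script from the same directory, e.g. python mdp.py.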