This code computes Markov Decision Process (MDP) state utilities via the Value Iteration algorithm.
Fire up your terminal and type:
$ python mdp.py
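
For reference, the core of value iteration is the Bellman optimality update, applied to every state until the utilities stop changing. Below is a minimal, self-contained sketch of that idea; the transition-model layout (P[s][a] as a list of (probability, next_state, reward) triples) and the names P, gamma, and theta are illustrative assumptions, not mdp.py's actual interface.

# Minimal value-iteration sketch, independent of mdp.py.
# Assumed layout: P[s][a] = [(prob, next_state, reward), ...] for each state s and action a.

def value_iteration(P, gamma=0.9, theta=1e-6):
    """Return a dict of state utilities via repeated Bellman optimality updates."""
    U = {s: 0.0 for s in P}                      # start with all utilities at 0
    while True:
        delta = 0.0
        for s in P:
            # U(s) <- max_a sum_{s'} P(s'|s,a) * [R(s,a,s') + gamma * U(s')]
            best = max(
                sum(p * (r + gamma * U[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - U[s]))
            U[s] = best
        if delta < theta:                        # stop once the largest update is tiny
            return U

if __name__ == "__main__":
    # Toy 2-state MDP used only to exercise the sketch.
    P = {
        "s0": {"go": [(1.0, "s1", 0.0)], "stay": [(1.0, "s0", 0.0)]},
        "s1": {"go": [(1.0, "s0", 1.0)], "stay": [(1.0, "s1", 0.5)]},
    }
    print(value_iteration(P))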
The script supports displaying all of the calculations involved along with the iteration table. Computation of the final policy and its matrix display have been added, and an error in the previous computation has been fixed.
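
Assuming the "final policy" is the greedy policy read off from the converged utilities, a rough sketch of that step and of a matrix-style display might look like the following. The function names, the (row, col) state encoding, and the hand-picked utilities are illustrative assumptions, not mdp.py's actual code.

# Sketch of extracting a greedy policy from given utilities U and printing it as a grid.
# Assumes states are (row, col) tuples and P[s][a] = [(prob, next_state, reward), ...].

def extract_policy(P, U, gamma=0.9):
    """Choose, for each state, the action with the highest one-step expected utility."""
    return {
        s: max(P[s], key=lambda a: sum(p * (r + gamma * U[s2]) for p, s2, r in P[s][a]))
        for s in P
    }

def print_policy_matrix(policy, rows, cols):
    """Lay the per-state actions out as a rows x cols grid."""
    for r in range(rows):
        print("  ".join(policy.get((r, c), ".") for c in range(cols)))

if __name__ == "__main__":
    # Toy 1x2 grid: moving "right" from (0, 0) reaches (0, 1) and earns +1.
    P = {
        (0, 0): {"right": [(1.0, (0, 1), 1.0)], "stay": [(1.0, (0, 0), 0.0)]},
        (0, 1): {"stay": [(1.0, (0, 1), 0.0)]},
    }
    U = {(0, 0): 1.0, (0, 1): 0.0}   # hand-picked utilities, for illustration only
    print_policy_matrix(extract_policy(P, U), rows=1, cols=2)

Once value iteration has converged, a single one-step lookahead per state is enough to recover a greedy policy, so no further iteration is needed for this step.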