Feature request: Markov chains
Mv77 opened this issue · 0 comments
I am a big fan of what we have managed to do with the tools that represent and manipulate discrete distributions. The `distr_of_func` and `expect` functions are intuitive, expressive, and flexible, even more so when used with labeled distributions.
One limitation of the toolkit is that we do not have similar tools for variables/processes with memory, i.e., Markov chains. I think this is due to the fact that in most of our models the income process has a unit root and, after normalizing appropriately, the distribution of shocks does not depend on current or past states.
Models in which some process $z_t$ follows a Markov chain require conditional expectations of the form $E[f(z_{t+1}) \mid z_t]$. Our current tools, like `Distribution.expect(lambda z_tp1: f(z_tp1))`, don't allow us to calculate that type of object conveniently, because it is the distribution (not the function) that depends on $z_t$.
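For concreteness, with a finite state grid and a transition matrix this object is just a row-wise dot product, which today has to be assembled by hand outside the distribution tools. A minimal sketch, where the grid, the transition matrix, and `f` are made up for illustration:

```python
import numpy as np

# Hypothetical Markov chain on a 3-point grid for z_t
z_grid = np.array([-0.1, 0.0, 0.1])
transition = np.array(
    [
        [0.8, 0.2, 0.0],
        [0.1, 0.8, 0.1],
        [0.0, 0.2, 0.8],
    ]
)  # transition[i, j] = P(z_{t+1} = z_grid[j] | z_t = z_grid[i])

f = lambda z_tp1: np.exp(z_tp1)

# E[f(z_{t+1}) | z_t = z_grid[i]] for every i: one matrix-vector product
cond_expectation = transition @ f(z_grid)
print(cond_expectation)
```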
It would be very convenient if we could have some `MarkovChain` object that allowed vectorized operations like, say,

- `MarkovChain(lambda z_tp1: f(z_tp1))` to return a vector with the expectation calculated conditional on each value of $z_t$, or
- `MarkovChain(lambda z_tp1: f(z_tp1), current=x)` to calculate the expectation conditional on $z_t = x$.
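A rough sketch of what such an object could look like. The class name, the `expect` method (chosen to mirror `Distribution.expect`), and the `current` keyword are placeholders for illustration, not an existing API:

```python
import numpy as np


class MarkovChain:
    """Hypothetical finite Markov chain: a state grid plus a transition matrix."""

    def __init__(self, grid, transition):
        self.grid = np.asarray(grid)
        # transition[i, j] = P(z_{t+1} = grid[j] | z_t = grid[i])
        self.transition = np.asarray(transition)

    def expect(self, func, current=None):
        """Conditional expectation of func(z_{t+1}).

        Without `current`, return the vector of expectations conditional on
        each grid point of z_t; with `current`, return the expectation
        conditional on z_t = current.
        """
        values = func(self.grid)  # func applied pointwise to the grid of z_{t+1}
        cond = self.transition @ values
        if current is None:
            return cond
        # Map `current` to the nearest grid point and return that entry
        i = int(np.argmin(np.abs(self.grid - current)))
        return cond[i]


# Usage mirroring the calls proposed above
chain = MarkovChain(
    [-0.1, 0.0, 0.1],
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
)
print(chain.expect(lambda z_tp1: np.exp(z_tp1)))               # vector, one entry per value of z_t
print(chain.expect(lambda z_tp1: np.exp(z_tp1), current=0.0))  # scalar, conditional on z_t = 0.0
```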