This package provides a core interface for working with Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). For examples, please see POMDPExamples, QuickPOMDPs, and the Gallery.
Our goal is to provide a common programming vocabulary for:
- Expressing problems as MDPs and POMDPs.
- Writing solver software.
- Running simulations efficiently.
There are nested interfaces for expressing and interacting with (PO)MDPs: when the *explicit* interface is used, the transition and observation probabilities are defined explicitly using API functions; when the *generative* interface is used, only a single-step simulator (e.g. `(s', o, r) = G(s, a)`) needs to be defined. Problems may also be defined with probability tables, or with the simplified QuickPOMDPs interfaces.
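For instance, a problem can be expressed through the generative interface by implementing a single-step simulator for a custom problem type. The sketch below is illustrative only: `MyPOMDP` is a hypothetical type, the dynamics are stand-ins, and it assumes a POMDPs.jl version where the generative interface is the `gen` function returning an `(sp, o, r)` named tuple.

```julia
using POMDPs
using Random

# Hypothetical problem type, for illustration only.
# Type parameters are the state, action, and observation types.
struct MyPOMDP <: POMDP{Symbol,Symbol,Symbol} end

# Generative interface: one function sampling (s', o, r) = G(s, a).
function POMDPs.gen(m::MyPOMDP, s, a, rng::AbstractRNG)
    sp = a == :listen ? s : rand(rng, [:left, :right])          # next state
    o = rand(rng) < 0.85 ? sp : (sp == :left ? :right : :left)  # noisy observation
    r = a == :listen ? -1.0 : (s == a ? -100.0 : 10.0)          # reward
    return (sp=sp, o=o, r=r)
end
```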
Python can be used to define and solve MDPs and POMDPs via the QuickPOMDPs or tabular interfaces and pyjulia (Example: tiger.py).
For help, please post to the Google group or on Gitter. Check the releases page for information on changes. POMDPs.jl and all packages in the JuliaPOMDP project are fully supported on Linux and OS X. Windows is supported for all native solvers*, and most non-native solvers should work but may require additional configuration.
To install POMDPs.jl, run the following from the Julia REPL:

```julia
using Pkg
Pkg.add("POMDPs")
```
To install supported JuliaPOMDP packages, including various solvers, first add the JuliaPOMDP registry:

```julia
using POMDPs
POMDPs.add_registry()
```

You can then list the available packages with `POMDPs.available()` and install a solver (say SARSOP.jl) with:

```julia
Pkg.add("SARSOP")
```
To run a simple simulation of the classic Tiger POMDP using a policy created by the QMDP solver, you can use the following code. (Note that this uses the simplified Discrete Explicit interface from QuickPOMDPs.jl; the main POMDPs.jl interface and the Quick interface have much more expressive power.)
```julia
using POMDPs, QuickPOMDPs, POMDPSimulators, QMDP

S = [:left, :right]           # states: which door the tiger is behind
A = [:left, :right, :listen]  # actions: open a door or listen
O = [:left, :right]           # observations: where the tiger is heard
γ = 0.95                      # discount factor

# transition probability T(s' | s, a)
function T(s, a, sp)
    if a == :listen
        return s == sp # the tiger stays put
    else # a door is opened
        return 0.5 # the problem resets
    end
end

# observation probability Z(o | a, s')
function Z(a, sp, o)
    if a == :listen
        if o == sp
            return 0.85 # the tiger is heard behind the correct door
        else
            return 0.15
        end
    else
        return 0.5 # opening a door yields no information
    end
end

# reward R(s, a)
function R(s, a)
    if a == :listen
        return -1.0
    elseif s == a # the tiger was found
        return -100.0
    else # the tiger was escaped
        return 10.0
    end
end

m = DiscreteExplicitPOMDP(S, A, O, T, Z, R, γ)

solver = QMDPSolver()
policy = solve(solver, m)

rsum = 0.0
for (s, b, a, o, r) in stepthrough(m, policy, "s,b,a,o,r", max_steps=10)
    println("s: $s, b: $([pdf(b,s) for s in S]), a: $a, o: $o")
    global rsum += r
end
println("Undiscounted reward was $rsum.")
```
For more examples with visualization see POMDPGallery.jl.
Several tutorials are hosted in the POMDPExamples repository.
Detailed documentation can be found here.
Many packages use the POMDPs.jl interface, including MDP and POMDP solvers, support tools, and extensions to the POMDPs.jl interface.
POMDPs.jl itself contains only the interface for communicating about problem definitions. Most of the functionality for interacting with problems is actually contained in several support tools packages:
- POMDPModelTools
- BeliefUpdaters
- POMDPPolicies
- POMDPSimulators
- POMDPModels
- POMDPTesting
- ParticleFilters
- RLInterface
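As a brief sketch of how these packages fit together (reusing the model `m` and `policy` from the tiger example above), BeliefUpdaters provides the belief updater and POMDPSimulators runs the simulation:

```julia
using POMDPSimulators, BeliefUpdaters

up = DiscreteUpdater(m)               # exact discrete Bayesian belief updater
sim = RolloutSimulator(max_steps=10)  # simulator that returns the discounted reward
r = simulate(sim, m, policy, up)
println("Discounted reward: $r")
```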
MDP solvers:

| Package | Online/Offline | Continuous States | Continuous Actions |
|---|---|---|---|
| Value Iteration | Offline | N | N |
| Local Approximation Value Iteration | Offline | Y | N |
| Global Approximation Value Iteration | Offline | Y | N |
| Monte Carlo Tree Search | Online | Y (DPW) | Y (DPW) |
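As a sketch of how an MDP solver from this list is used (assuming MCTS.jl and the `SimpleGridWorld` model from POMDPModels are installed from the JuliaPOMDP registry), an online planner is constructed with `solve` and queried state by state:

```julia
using POMDPs, POMDPModels, MCTS

mdp = SimpleGridWorld()                           # example MDP from POMDPModels
solver = MCTSSolver(n_iterations=1000, depth=20)  # Monte Carlo Tree Search
planner = solve(solver, mdp)                      # online planner
a = action(planner, GWPos(4, 3))                  # plan from a single state
```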
POMDP solvers:

| Package | Online/Offline | Continuous States | Continuous Actions | Continuous Observations |
|---|---|---|---|---|
| QMDP | Offline | N | N | N |
| FIB | Offline | N | N | N |
| BeliefGridValueIteration | Offline | N | N | N |
| SARSOP* | Offline | N | N | N |
| BasicPOMCP | Online | Y | N | N1 |
| ARDESPOT | Online | Y | N | N1 |
| MCVI | Offline | Y | N | Y |
| POMDPSolve* | Offline | N | N | N |
| IncrementalPruning | Offline | N | N | N |
| POMCPOW | Online | Y | Y2 | Y |
| AEMS | Online | N | N | N |

1: Will run, but will not converge to optimal solution

2: Will run, but convergence to optimal solution is not proven, and it will likely not work well on multidimensional action spaces
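As a sketch of using an online solver from this list (assuming BasicPOMCP is installed), the tiger model `m` from the example above can be solved with POMCP instead of QMDP; the planner runs a tree search at every step rather than computing a policy offline:

```julia
using BasicPOMCP

solver = POMCPSolver(tree_queries=1000)  # tree-search simulations per step
planner = solve(solver, m)               # online planner for the tiger POMDP
for (s, a, o, r) in stepthrough(m, planner, "s,a,o,r", max_steps=5)
    println("s: $s, a: $a, o: $o, r: $r")
end
```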
Reinforcement learning:

| Package | Continuous States | Continuous Actions |
|---|---|---|
| TabularTDLearning | N | N |
| DeepQLearning | Y1 | N |

1: For POMDPs, it will use the observation instead of the state as input to the policy. See RLInterface.jl for more details.
These packages were written for POMDPs.jl in Julia 0.6 and have not been updated to 1.0 yet.
- DESPOT
Performance benchmarks:

| Package |
|---|
| DESPOT |
*These packages require non-Julia dependencies
If POMDPs is useful in your research and you would like to acknowledge it, please cite this paper:
```
@article{egorov2017pomdps,
  author  = {Maxim Egorov and Zachary N. Sunberg and Edward Balaban and Tim A. Wheeler and Jayesh K. Gupta and Mykel J. Kochenderfer},
  title   = {{POMDP}s.jl: A Framework for Sequential Decision Making under Uncertainty},
  journal = {Journal of Machine Learning Research},
  year    = {2017},
  volume  = {18},
  number  = {26},
  pages   = {1-5},
  url     = {http://jmlr.org/papers/v18/16-300.html}
}
```