
nanorl

A tiny reinforcement learning codebase for continuous control, built on top of JAX. Minimal, self-contained, and research-friendly. Inspired by Ilya Kostrikov's jaxrl.

Installation

  1. pip install --upgrade "jax[cuda]==0.3.25" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
  2. git clone https://github.com/kevinzakka/nanorl && cd nanorl
  3. pip install -r requirements.txt
  4. pip install -e ".[all]"
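After installing, a quick sanity check (not part of the official instructions, just a suggestion) is to confirm that the CUDA build of JAX can see your accelerator and run a jitted computation:

```python
# Sanity check: verify the CUDA build of JAX is installed and usable.
import jax
import jax.numpy as jnp

# Should list GPU devices if the CUDA wheel installed correctly;
# falls back to CPU otherwise.
print(jax.devices())

# Run a trivial jitted computation to confirm the backend works.
x = jnp.arange(10.0)
print(jax.jit(lambda v: (v ** 2).sum())(x))
```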