This repository contains the code for RAPid-Learn, a framework that integrates planning and learning to solve complex tasks in non-stationary, open-world environments. The code uses the gym-novel-gridworlds environments. The planner used here is Metric-FF v2.1. For any questions, please contact shivam.goel@tufts.edu
The technical appendix to "RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments" is provided in TechnicalAppendix.pdf.
Download Metric-FF v2.1 and build it:
cd Metric-FF-v2.1
make
Use the requirements file to install the dependencies:
pip install -r requirements.txt
[Optional] Create a conda environment first:
conda create --name <YOUR_ENV_NAME> python=3.7
conda activate <YOUR_ENV_NAME>
pip install -r requirements.txt
Install gym_novel_gridworlds and switch to its adaptive_agents branch.
To run RAPid-Learn:
python experiment.py
You can also pass arguments:
python experiment.py --TP <trials_pre_novelty> --TN <trials_post_novelty> --N <novelty_name> --L <learner_name> -E <exploration_mode> -R <render>
novelty_name (one of):
'axetobreakeasy'
'axetobreakhard'
'firecraftingtableeasy'
'firecraftingtablehard'
'rubbertree'
'axefirecteasy'
'rubbertreehard'
learner_name (one of):
'epsilon-greedy'
'smart-exploration'
exploration_mode (one of):
'uniform'
'ucb'
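For illustration only, the command-line interface above could be parsed with argparse roughly as sketched below. The flag names and option values mirror this README; the defaults, help strings, and the final print are assumptions for the sketch, not the actual behavior of experiment.py.

```python
import argparse

# Hypothetical sketch of the argument interface described above.
# Defaults are illustrative assumptions, not experiment.py's real defaults.
parser = argparse.ArgumentParser(description="RAPid-Learn experiment runner (sketch)")
parser.add_argument("--TP", type=int, default=10, help="number of trials pre novelty")
parser.add_argument("--TN", type=int, default=10, help="number of trials post novelty")
parser.add_argument("--N", default="axetobreakeasy",
                    choices=["axetobreakeasy", "axetobreakhard",
                             "firecraftingtableeasy", "firecraftingtablehard",
                             "rubbertree", "axefirecteasy", "rubbertreehard"],
                    help="novelty name")
parser.add_argument("--L", default="epsilon-greedy",
                    choices=["epsilon-greedy", "smart-exploration"],
                    help="learner name")
parser.add_argument("-E", default="uniform", choices=["uniform", "ucb"],
                    help="exploration mode")
parser.add_argument("-R", action="store_true", help="render the environment")

# Example invocation with sample values:
args = parser.parse_args(["--TP", "100", "--N", "rubbertree", "-E", "ucb"])
print(args.TP, args.N, args.E)  # → 100 rubbertree ucb
```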