Evaluating commands
Hello, thanks for your interesting paper and code. I am really enjoying your work, and I have a few small questions.
Q1: Do you have any documentation explaining the main files, configurations, and experiments (i.e., all the commands to run for an experiment: training, evaluation, visualization)?
Q2: I ran a few experiments, but I get errors when evaluating and visualizing. For example, I ran:
`python -m aprl.train with env_name=multicomp/SumoHumans-v0 paper`, and the output is three files under `data/baselines/20220215_124424-default`.
Could you please tell me the right commands for evaluating and visualizing the above experiment, and also the right path for the victim?
Evaluation (my attempt): `python -m aprl.score_agent with path_of_trained_adversary(above path) path_of_victim` (it is not working).
Thanks!
Have you seen our README? It includes example commands that should be sufficient to replicate the paper's results.
You might also want to read some of the documentation for Sacred, which we use for configuration. In particular, `python -m aprl.score_agent print_config`
might help you out by showing you all the configuration arguments and their default values.
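As a rough sketch of how the pieces might fit together (the config keys `agent_a_type`, `agent_a_path`, `agent_b_type`, `agent_b_path`, and `videos` are assumptions here, not confirmed by this thread — verify the actual names and defaults against the `print_config` output and the README for your checkout):

```shell
# Hypothetical evaluation sketch -- parameter names are assumed;
# run `python -m aprl.score_agent print_config` to see the real ones.
# agent_a would be the victim (e.g. a pretrained "zoo" agent),
# agent_b the adversary trained by aprl.train.
python -m aprl.score_agent with \
    env_name=multicomp/SumoHumans-v0 \
    agent_a_type=zoo agent_a_path=1 \
    agent_b_type=ppo2 agent_b_path=data/baselines/20220215_124424-default \
    videos=True
```

The key point is that Sacred commands take `key=value` config updates after `with`, rather than bare positional paths, which is likely why the `score_agent` invocation in the question failed.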