Help in creating Adversarial Environment
AizazSharif opened this issue · 9 comments
I wanted to try the adversarial multi-agent example mentioned in the related paper, but only two examples are available and an adversarial one is not among them.
Could you please explain how to create such an environment and use it for training and testing?
Any information would be helpful.
Thanks
Hi @AizazSharif ,
An adversarial scenario was not explicitly evaluated/studied in the paper but it is easy to add one to MACAD-Gym.
- Sample guide to create a new learning environment or scenario. This gives you the list of changes needed to create your own custom MACAD-Gym environment with a custom scenario (see the registration sketch at the end of this comment).
- Specifically for the adversarial multi-agent environment you are interested in, you can re-use this urban driving through a signalized intersection scenario where 3 cars are crossing the intersection. To make the car2 Actor behave adversarially towards the car1 Actor, for example, you could change the reward function spec for car2 to use your own custom reward function, or simply create a negated version (the minimax counterpart) of car1's reward function.
For your reference, the Reward functions are implemented in reward.py.
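For instance, here is a minimal, hypothetical sketch of what such an adversarial reward for car2 could look like. The measurement keys and the car1_reward helper below are illustrative assumptions, not the exact reward.py API; the idea is simply to return the negation of whatever car1 earns:

def car1_reward(prev_measurement, curr_measurement):
    # Placeholder for car1's existing reward (e.g., progress towards the goal
    # minus a collision penalty); the real implementation lives in reward.py.
    progress = (prev_measurement["distance_to_goal"]
                - curr_measurement["distance_to_goal"])
    collision_penalty = 100.0 * curr_measurement.get("collision", 0.0)
    return progress - collision_penalty

def car2_adversarial_reward(prev_measurement, curr_measurement):
    # Minimax counterpart: car2 is rewarded exactly when car1 is penalized.
    return -1.0 * car1_reward(prev_measurement, curr_measurement)

You would then point car2's reward-function entry in its actor config at this adversarial variant (the exact config key name may differ) while keeping car1 on its original reward, so the two agents optimize opposing objectives.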
I will be happy to help you if you need any further assistance in putting this together as a pull request.
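As for the first point above (creating and registering a new environment with your custom scenario), the following is only a rough sketch following MACAD-Gym's naming convention; the 'Advrs' env id, the entry point, and the my_adversarial_configs dict are assumptions for illustration and need to be adapted to the actual guide:

from gym.envs.registration import register

# Hypothetical configs dict describing the scenario, env settings and the
# three Car actors (with car2 using the adversarial reward sketched above).
my_adversarial_configs = {}

register(
    id="HeteAdvrsPOIntrxMASS3CTWN3-v0",  # 'Advrs' marks the adversarial variant
    entry_point="macad_gym.carla.multi_env:MultiCarlaEnv",  # assumed entry point
    kwargs={"configs": my_adversarial_configs},
)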
Hi @praveen-palanisamy,
Thanks for the help. I will go through the instructions and update here soon. I asked about the adversarial example since it's mentioned in the naming convention under the name 'Advrs'.
Regarding the urban driving through a signalized intersection scenario, I wanted to ask whether the multi-agent training is done with a shared policy or with separate policies. I am interested in working on separate-policy training.
Yes, you could use separate policies for each of the agents.
An example where different policy parameters are used for each Car actor is available in the MACAD-Agents repository. It's implemented using Ray and, specifically, the following lines show how you could use different policy parameters for each of the agents:
"multiagent": {
"policy_graphs": {
id: default_policy()
for id in env_actor_configs["actors"].keys()
},
"policy_mapping_fn":
tune.function(lambda agent_id: agent_id),
},
You can also use a custom/different Deep RL algorithm for each of the agents to train its policy.
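Building on that, here is one possible sketch of giving each Car actor its own policy parameters. It assumes the older Ray/RLlib API that MACAD-Agents is written against, where each "policy_graphs" entry is a (policy_class, obs_space, act_space, config) tuple; obs_space, act_space, env_actor_configs and the trainer config dict are whatever the MACAD-Agents training script already defines, and the per-agent hyperparameters below are made up for illustration:

# Hypothetical per-agent overrides (e.g., different learning rates).
per_agent_config = {
    "car1": {"lr": 1e-4},
    "car2": {"lr": 5e-4, "gamma": 0.95},
    "car3": {"lr": 1e-4},
}

policy_graphs = {
    actor_id: (None,  # None -> use the trainer's default policy class
               obs_space,
               act_space,
               per_agent_config.get(actor_id, {}))
    for actor_id in env_actor_configs["actors"].keys()
}

config["multiagent"] = {
    "policy_graphs": policy_graphs,
    "policy_mapping_fn": tune.function(lambda agent_id: agent_id),
}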
This is great, thanks a lot for the help @praveen-palanisamy. Hopefully I can get this working and then put it together as a feature pull request.
Hi @praveen-palanisamy ,
I was going through the framework and was wondering how to change the Deep RL policies within MACAD-Gym. It's visible in MACAD-Agents (IMPALA, PPO), but I was unable to find it in the MACAD-Gym environments so I could add my own.
Any information would be helpful.
Thanks
Hey @AizazSharif ,
The Agent code, which includes the Deep RL policy implementation, is kept out of the learning environment code (MACAD-Gym) to allow for modularity and plug-and-play with other RL libraries or RL environments. As you figured, sample Agent training scripts are provided in the MACAD-Agents repository.
MACAD-Agents works with MACAD-Gym, and you can change the Deep RL Agent algorithms or policy definitions in the MACAD-Agents repository.
Hope that helps.
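To illustrate that separation, a MACAD-Gym environment can be driven by any agent code through the standard Gym interface. The snippet below is a rough sketch: the env id is one of the environments registered by MACAD-Gym, while the per-actor action values and the "__all__" done key are assumptions about the multi-agent dict convention and should be checked against the environment you use:

import gym
import macad_gym  # noqa: F401 -- importing registers the MACAD-Gym env ids with Gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    # Multi-agent MACAD-Gym envs consume/produce dicts keyed by actor id.
    # Replace the constant action with your trained agents' outputs.
    actions = {actor_id: 0 for actor_id in obs}
    obs, reward, done, info = env.step(actions)
env.close()

The Deep RL algorithm itself (IMPALA, PPO, or your own) lives entirely in the agent-side script, so swapping it out does not require touching MACAD-Gym.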
Thanks for the reply @praveen-palanisamy.
I am facing trouble while using the MACAD-Agents IMPALA example. Even if I lower the resources, the code runs but the cars don't move at all with the default settings, and CARLA restarts the environment without any training or crash having happened. In MACAD-Gym, on the other hand, the agents do take actions, and we can also have a second window showing the agent's POV from the front.
If you have any suggestions or pointer regarding this issue it will be really helpful.
Thanks.
Okay, so that does sound specific to the IMPALA Agent, which has its own resource requirements on top of the environment/CARLA. Since this is a different issue, could you open one on the MACAD-Agents repository to keep things organized?
@AizazSharif : If you need further assistance on the original topic (Help in creating Adversarial Environments), we can continue it as an Idea Discussion.