openai/multiagent-particle-envs
Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
Python · MIT License
Issues
This code base is no longer maintained
#105 opened by KishoreKicha14 - 1
can't run different scenarios except simple.py
#85 opened by abeerM - 0
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Jarvis\\Desktop\\r-maac-main\\multiagent\\scenarios\\spread_collect.py'
#101 opened by LZL-boy - 0
raise NotImplementedError
#100 opened by wagh311 - 3
Checking collisions with agent itself?
#70 opened by tessavdheiden - 1
Wrong reward in simple speaker listener
#97 opened by StevenYuan666 - 6
agent out of the box
#64 opened by cu-rie - 0
Add image as background
#95 opened by i-am-neet - 2
Error when displaying simple_crypto
#61 opened by rical730 - 0
Index error: list index out of range
#92 opened by liuziwei0322 - 1
How to delete a landmark when training
#91 opened by liuqi8827 - 2
Turn the environment into 3D
#81 opened by zxm-NEU - 3
run interactive.py ctypes.ArgumentError
#42 opened by SweetPin - 0
Use the checkpoint file to continue training
#88 opened by Doris1039 - 0
Running timeout!
#84 opened by alimogharrebi - 1
Creating the boundary of the environment
#59 opened by lyp741 - 4
Reward Setting for simple_tag.py
#60 opened by hiroignis - 0
Centralized learning-decentralized execution clarification (engineering perspective)
#79 opened by Kimonili - 1
Typo in benchmark data of speaker_listener
#74 opened by tessavdheiden - 6
Communication Signal in simple_reference
#66 opened by isaeed3 - 0
How to fix the observation for an obstacle landmark
#73 opened by jslin053 - 0
Can I change the shape of landmarks
#72 opened by AmulyaReddy99 - 0
error in the test step.
#71 opened by HzcIrving - 0
Filled polygon 2 loops?
#65 opened by tessavdheiden - 0
Global state
#63 opened by cu-rie - 3
Fix: ImportError: cannot import name 'prng'
#53 opened by zhaolongkzz - 1
couldn't run interactive.py
#56 opened by GayaminiG - 0
I have trained using the m3ddpg source code train.py, but the scenario won't take actions accordingly. Any leads on non-hardcoded actions?
#47 opened by hamzahamzii - 0
World.dim_c parameter meaning
#45 opened by njfdiem - 0
How to show the agents in other scenarios
#43 opened by zrjrj