praveen-palanisamy/macad-gym
Multi-Agent Connected Autonomous Driving (MACAD) Gym environments for Deep RL. Code for the paper presented in the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019:
Python · MIT license
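For orientation, a minimal interaction sketch is shown below. It assumes a running CARLA server, that `HomoNcomIndePOIntrxMASS3CTWN3-v0` is one of the environment IDs registered by `macad_gym`, and the multi-agent dict-style interface (per-actor observations, actions, rewards, and done flags); exact IDs, actor names such as `car1`, and space layouts may differ in your installed version.

```python
# Minimal usage sketch (assumptions: a CARLA server is reachable, the environment
# ID below is registered by macad_gym, and the env follows the per-actor dict
# convention for observations/actions/rewards/dones).
import gym
import macad_gym  # noqa: F401  # importing registers the MACAD-Gym environments with Gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

obs = env.reset()  # dict keyed by actor ID, e.g. {"car1": ..., "car2": ...}
for _ in range(100):
    # Random per-actor actions; replace with your multi-agent policy.
    actions = env.action_space.sample()
    obs, rewards, dones, infos = env.step(actions)
    if all(dones.values()):  # dones is assumed to be a per-actor dict (may include "__all__")
        break
env.close()
```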
Issues
- Communication Mechanism (#86, opened by cuijiaxun, 1 comment)
- How to create a communicating environment? (#64, opened by medimol, 13 comments)
- `'NoneType' object has no attribute 'pid'` (#90, opened by SExpert12, 2 comments)
- No support for other sensors? (#89, opened by angelomorgado, 2 comments)
- Implementation of IMPALA agent examples (#88, opened by Kinvy66, 2 comments)
- How to customize a learning environment? (#84, opened by hjh0119, 1 comment)
- v0.1.3: CARLA server can't get a connection (#87, opened by Kinvy66, 4 comments)
- How do we port an existing multi-agent learning algorithm such as IDDPG or IPPO? (#85, opened by kailashg26, 8 comments)
- PathTracker generates a wrong route (#81, opened by Morphlng, 6 comments)
- `multi_view_render` pops a new display window on each frame with the latest version of Pygame (#72, opened by Morphlng, 2 comments)
- The gym version affects the usage of `ray[rllib]` (#76, opened by Morphlng, 3 comments)
- The latest pull request is incomplete (#73, opened by Morphlng, 3 comments)
- Support the library (#67, opened by johnMinelli, 2 comments)
- How to visualize the learning environment? (#66, opened by qiangyuchuan, 3 comments)
- Also stuck in `env.reset()` in the example (#63, opened by KID0031, 3 comments)
- Multiprocess pickle problem (#58, opened by Panshark, 3 comments)
- How to set the spectator on the agent (#59, opened by Panshark, 2 comments)
- Error when importing macad_gym (#48, opened by Zhang-Xiaoxue, 13 comments)
- Running example code (#40, opened by Yiquan-lol, 6 comments)
- TensorFlow crashes (#27, opened by SHITIANYU-hue, 3 comments)
- Cannot create OpenGL-enabled SDL window; SDL error: "couldn't find matching GLX visual" (#21, opened by zengsh-cqupt, 2 comments)
- Vehicle models (#43, opened by Yiquan-lol, 2 comments)
- Reward (#41, opened by Yiquan-lol, 2 comments)
- Running sample code (#39, opened by b-hakim, 1 comment)
- Help in creating an adversarial environment (#33, opened by AizazSharif, 7 comments)
- How to manually control the agent in the scenario "DEFAULT_SCENARIO_TOWN1_COMBINED_WITH_MANUAL"? (#31, opened by YuffieHuang, 6 comments)
- Unable to spawn actor: car1 (#34, opened by eerkaijun, 4 comments)
- Cannot run the agent interface demo code (#30, opened by mengyuest, 12 comments)
- Modification of observations/actor states (#24, opened by Neel1302, 3 comments)
- Support for CARLA built from source (#23, opened by Neel1302, 2 comments)
- Different CARLA versions (#22, opened by lcipolina, 1 comment)
- The device's problem (#19, opened by CHWLW, 0 comments)
- Stuck in `env.reset()` until RAM runs out (#6, opened by tbienhoff, 21 comments)
- Regarding the Agent Interface example (#7, opened by lcipolina, 1 comment)
- Extension to continuous action space (#5, opened by rsuwa, 1 comment)
- `env._seed` issue (#4, opened by hsyoon94, 3 comments)
- How to use? (#3, opened by lijecaru)