Cognitive-AI-Systems/pogema

Rendering Video Issues

Closed this issue · 10 comments

Hi bro,
I don't want the output displayed on the command line; I want a video of the whole pathfinding process. How do I do that? I checked render() and found there was very little I could do...

Pogema primarily supports rendering to the terminal and can generate browser animations, but it doesn't natively support exporting directly to video formats.

Please take a look at this Google Colab example.
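
Roughly, the flow is the following (a minimal sketch; exact reset/step signatures may differ between Pogema versions). Note that converting the resulting SVG animation into an actual video file still requires an external tool, since Pogema itself only writes the SVG.

from pogema import pogema_v0, Hard8x8
from pogema.animation import AnimationMonitor

# Plain (non-PyMARL) environment wrapped with the animation monitor
env = pogema_v0(grid_config=Hard8x8())
env = AnimationMonitor(env)
env.reset()

while True:
    # Random policy, just to produce an episode to record
    obs, reward, terminated, truncated, info = env.step(env.sample_actions())
    if all(terminated) or all(truncated):
        break

# Writes a browser-viewable SVG animation of the whole episode
env.save_animation("render.svg")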

Yes, I tried it, but how do I use it with PyMARL?

I tried to use it with the PyMARL integration, but I kept getting errors: first for a missing get_num_agents, then for a missing grid...

AttributeError: 'PyMarlPogema' object has no attribute 'grid'

It worked successfully after I added what was missing, but now the problem is that the saved SVG file only contains the first frame (the initial state) and nothing after it.

Can you provide a code snippet?

ok!

from pogema import pogema_v0, Hard8x8
from pogema.animation import AnimationMonitor, AnimationConfig

env = pogema_v0(grid_config=Hard8x8(integration='PyMARL'))
env = AnimationMonitor(env)
env.reset()

while True:
    # Use a random policy to choose actions.
    # Note: I modified the original PyMARL base class's step() output to the
    # Gymnasium-style 5-tuple so it works with the animation wrapper.
    obs, reward, terminated, truncated, info = env.step(env.sample_actions())
    # env.render()
    if all(terminated) or all(truncated):
        break

from IPython.display import SVG, display

env.save_animation("render.svg")
display(SVG('render.svg'))
# pymarl.py (excerpt from my patched PyMarlPogema wrapper)
def __init__(self, grid_config, mh_distance=False):
    gc = grid_config
    self._grid_config: GridConfig = gc

    self.env = _make_pogema(grid_config)
    self._mh_distance = mh_distance
    self._observations, _ = self.env.reset()
    self.max_episode_steps = gc.max_episode_steps
    self.episode_limit = gc.max_episode_steps
    self.n_agents = self.env.get_num_agents()
    self.grid = self.env.grid
    self.grid_config = self.env.grid_config
    self.spec = None

def step(self, actions):
    obs, rewards, terminated, truncated, infos = self.env.step(actions)
    self._observations = obs
    info = {}
    done = all(terminated) or all(truncated)
    if done:
        for key, value in infos[0]['metrics'].items():
            info[key] = value
        
    return obs, rewards, terminated, truncated, infos  # NO SUM REWARD

I looked at the self.history field in animation.py. Without PyMARL, self.history keeps each agent's state for the entire episode, but with PyMARL, history only keeps the first frame.
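
(A quick way to see the expected behavior, assuming the history attribute is reachable on the AnimationMonitor instance from outside; its exact structure may differ between versions:)

from pogema import pogema_v0, Hard8x8
from pogema.animation import AnimationMonitor

# Plain (non-PyMARL) environment, where per-step states are recorded correctly
env = pogema_v0(grid_config=Hard8x8())
env = AnimationMonitor(env)
env.reset()

for _ in range(5):
    env.step(env.sample_actions())

# Hypothetical check: inspect what the monitor has recorded so far; in the PyMARL
# setup described above, only the initial frame showed up here.
print(env.history)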

Thank you for the code. I'll investigate it and come back with a solution.

Here is a quick workaround for adding SVG rendering to the current version:

from pogema import pogema_v0, Hard8x8, Hard16x16
from pogema.animation import AnimationMonitor, AnimationConfig

env = pogema_v0(grid_config=Hard16x16(integration='PyMARL'))
# Wrap the inner Gymnasium-style environment rather than the PyMARL wrapper itself
env.env = AnimationMonitor(env.env)
env.reset()

while True:
    # PyMARL-style step: returns rewards, done, info; observations come from get_obs()
    rewards, done, info = env.step(env.sample_actions())
    obs = env.get_obs()

    if done:
        break

# To save and display the resulting animation (e.g. in a notebook), uncomment:
# from IPython.display import SVG, display
#
# env.env.save_animation("render.svg")
# display(SVG('render.svg'))

https://colab.research.google.com/drive/1u1DFBoLhxYDxPe4aWxTSSeVxfr309DEx?usp=sharing

The main problem with the PyMARL integration and rendering is due to how its step method works. Supporting PyMARL in future versions of Pogema is not planned, as that library is considerably outdated.
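
To summarize the difference in wrapping order between the two snippets above (a rough sketch inferred from this thread, not a description of the library internals):

from pogema import pogema_v0, Hard8x8
from pogema.animation import AnimationMonitor

env = pogema_v0(grid_config=Hard8x8(integration='PyMARL'))

# The PyMARL wrapper exposes the PyMARL-style interface:
#   rewards, done, info = env.step(actions), with observations via env.get_obs()
# The inner environment exposes the Gymnasium-style interface:
#   obs, rewards, terminated, truncated, info = env.env.step(actions)
#
# Wrapping the outer PyMARL object (as in the first snippet) ended up recording only
# the initial frame, so the workaround wraps the inner environment instead:
env.env = AnimationMonitor(env.env)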

Thanks for the answer, but I'm still sad to hear that PyMARL will no longer be supported in the next version. Although that framework's code is not particularly brilliant, it does provide a fair comparison point for many MARL researchers. Looking forward to the new ones!