Denys88/rl_animal

Attribute error in env

Closed this issue · 9 comments

This line in Player.ipynb:
ppo_player = players.PpoPlayerDiscrete(sess, a2c_config)

generates an error:
AttributeError: 'AnimalAIEnv' object has no attribute 'brain'

The error is raised in animalai_wrapper.py, which is handed the env object created in env_configurations.py.

My original thought was that this was a path issue in finding the env executable. However, the executable AnimalAI.x86_64 is in the rl_animal directory and is correctly specified in env_configurations.py:
env_path = 'AnimalAI'
and its permissions are set to allow it to run as an executable.
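
A quick way to double-check both of those assumptions from Python (a sketch; the filename AnimalAI.x86_64 and the working directory are the ones described above):

import os

# confirm the Unity build next to env_path is present and marked executable;
# run this from the rl_animal directory (filename taken from this thread)
binary = 'AnimalAI.x86_64'
print('exists:', os.path.isfile(binary))
print('executable:', os.access(binary, os.X_OK))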

Here is the detailed output:
INFO:mlagents.envs:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of Training Brains : 1
INFO:gym_unity:1 agents within environment.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.

AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      2 import numpy as np
      3 import players
----> 4 ppo_player = players.PpoPlayerDiscrete(sess, a2c_config)
      5 from hyperparams import BASE_DIR
      6 #config = BASE_DIR + '/configs/learning/stage4/redzone_bridge2.yaml'

~/animalai_trrr/rl_animal/players.py in __init__(self, sess, config)
     32 class PpoPlayerDiscrete(BasePlayer):
     33     def __init__(self, sess, config):
---> 34         BasePlayer.__init__(self, sess, config)
     35         self.network = config['NETWORK']
     36         self.obs_ph = tf.placeholder('uint8', (None, ) + self.obs_space.shape, name = 'obs')

~/animalai_trrr/rl_animal/players.py in __init__(self, sess, config)
     15         self.sess = sess
     16         self.env_name = self.config['ENV_NAME']
---> 17         self.obs_space, self.action_space = env_configurations.get_obs_and_action_spaces(self.env_name)
     18 
     19     def restore(self, fn):

~/animalai_trrr/rl_animal/env_configurations.py in get_obs_and_action_spaces(name)
     54 
     55 def get_obs_and_action_spaces(name):
---> 56     env = configurations[name]['ENV_CREATOR']()
     57     observation_space = env.observation_space
     58     action_space = env.action_space

~/animalai_trrr/rl_animal/env_configurations.py in <lambda>(inference, config)
     46     },
     47     'AnimalAIRay' : {
---> 48         'ENV_CREATOR' : lambda inference=False, config=None: create_animal(1, inference, config=config),
     49         'VECENV_TYPE' : 'RAY'
     50     },

~/animalai_trrr/rl_animal/env_configurations.py in create_animal(num_actors, inference, config, seed)
     34                       resolution=84
     35                       )
---> 36     env = AnimalSkip(env, skip=SKIP_FRAMES)
     37     env = AnimalWrapper(env)
     38     env = AnimalStack(env,VISUAL_FRAMES_COUNT, VEL_FRAMES_COUNT, greyscale=USE_GREYSCALE_OBSES)

~/animalai_trrr/rl_animal/animalai_wrapper.py in __init__(self, env, skip)
     32         gym.Wrapper.__init__(self, env)
     33         self._skip=skip
---> 34         self.brain = env.brain
     35 
     36 

AttributeError: 'AnimalAIEnv' object has no attribute 'brain'

I'll take a look tomorrow

@dan9thsense I am using a custom AnimalAI environment with some changes, and I put it here. Could you try uninstalling the one installed with pip?

I think it is already using the AnimalAI executable from this repo, since the specified path points to it. Is there something else that I need to uninstall?
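
One way to confirm which animalai package Python actually resolves (nothing repo-specific here, just the standard __file__ attribute):

import animalai
# if this prints a path under site-packages rather than the rl_animal
# checkout, the pip-installed package is still shadowing the repo's copy
print(animalai.__file__)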

To start debugging, I modified the code in env_configurations.py as shown in the listing below and found the step at which env loses the attribute:
testbrain and testbrain1 are created OK, but testbrain2 fails with the same error:

testbrain2 = env.brain
AttributeError: 'AnimalWrapper' object has no attribute 'brain'
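
(As an aside, gym wrappers keep a reference to whatever they wrap, so with the env built in the listing below the base environment's attribute can also be reached without copying it onto every layer; env.unwrapped and env.env are standard gym.Wrapper attributes, though whether the rest of this code base expects env.brain directly is a separate question.)

# with env being the wrapped environment from create_animal below:
print(env.unwrapped.brain)   # the innermost environment, skipping all wrappers
print(env.env.brain)         # exactly one wrapper level down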

Then, in animalai_wrapper.py, I added the line:
self.brain = env.brain
to the AnimalStack and AnimalWrapper classes (see the sketch after the listing below). With that, the code gets past the errors. However, now the Unity window opens in full screen and just stays black (I see the same black Unity screen when launching my own code, but there it then pops into a window showing the arena). I am unable to close it or do anything else (the mouse is active but there is nothing to click on), so I had to hard power down the laptop to get out. Any ideas on how to debug at this point? I am running Ubuntu 18.04 on a laptop without a dedicated GPU.


def create_animal(num_actors=1, inference = True, config=None, seed=None):
    from animalai.envs.gym.environment import AnimalAIEnv
    from animalai.envs.arena_config import ArenaConfig
    import random
    from animalai_wrapper import AnimalWrapper, AnimalStack, AnimalSkip
    env_path = 'AnimalAI'
    worker_id = random.randint(1, 60000)
    arena_config_in = ArenaConfig(BASE_DIR + '/configs/learning/stage4/3-Food Moving.yaml')

    if config is None:
        config = arena_config_in
    else: 
        config = ArenaConfig(config)
    if seed is None:
        seed = 0#random.randint(0, 100500)
        
    env = AnimalAIEnv(environment_filename=env_path,
                      worker_id=worker_id,
                      n_arenas=num_actors,
                      seed = seed,
                      arenas_configurations=config,
                      greyscale = False,
                      docker_training=False,
                      inference = inference,
                      retro=False,
                      resolution=84
                      )
    # debug: check that .brain survives each layer of wrapping
    testbrain = env.brain
    print("testbrain: ", testbrain)
    env = AnimalSkip(env, skip=SKIP_FRAMES)
    testbrain1 = env.brain
    print("testbrain1: ", testbrain1)
    env = AnimalWrapper(env)
    testbrain2 = env.brain   # fails here: 'AnimalWrapper' object has no attribute 'brain'
    print("testbrain2: ", testbrain2)
    env = AnimalStack(env, VISUAL_FRAMES_COUNT, VEL_FRAMES_COUNT, greyscale=USE_GREYSCALE_OBSES)
    testbrain3 = env.brain
    print("testbrain3: ", testbrain3)
    input("see brains")
    return env
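
For reference, a minimal sketch of the self.brain forwarding described above, assuming the wrappers subclass gym.Wrapper as the traceback shows (the real AnimalWrapper and AnimalStack in animalai_wrapper.py take more constructor arguments and do more work; only the attribute forwarding is shown here):

import gym

class AnimalWrapper(gym.Wrapper):
    # sketch only: expose the wrapped env's brain so that outer wrappers
    # and downstream code can still reach it
    def __init__(self, env):
        gym.Wrapper.__init__(self, env)
        self.brain = env.brain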

It works on both my home computer and my laptop. I'll try to reproduce the issue from a clean configuration.
I expect it should use my variant of the environment, which is here: https://github.com/Denys88/rl_animal/tree/master/animalai
Could you try renaming the 'animalai' folder to 'animalai2', and doing the same in these two lines?
from animalai.envs.gym.environment import AnimalAIEnv
from animalai.envs.arena_config import ArenaConfig
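
That is, assuming the folder is renamed to animalai2, those two imports in create_animal would become:

from animalai2.envs.gym.environment import AnimalAIEnv
from animalai2.envs.arena_config import ArenaConfig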

When I changed the folder name from animalai to animalai2 and updated it in environment.py:
from animalai.envs.gym.environment import AnimalAIEnv
from animalai.envs.arena_config import ArenaConfig

I found it also needed to be changed in:
brain_parameters_proto_pb2.py:
from animalai2.communicator_objects import resolution_proto_pb2 as animalai_dot_communicator__objects_dot_resolution__proto__pb2
from animalai2.communicator_objects import space_type_proto_pb2 as animalai_dot_communicator__objects_dot_space__type__proto__pb2

and
space_type_proto_pb2.py
from animalai2.communicator_objects import resolution_proto_pb2 as animalai_dot_communicator__objects_dot_resolution__proto__pb2

and
unity_input_pb2.py

I gave up at that point, since there might be a lot of these. I just made a copy of the folder, so that there is an animalai2 folder and also an animalai folder to catch the rest of these imports.

Now it runs and loads the black Unity screen. However, as described above, the window opens in full screen and just stays black; I cannot close it or do anything else, and I again had to hard power down the laptop to get out. Any ideas on how to debug from here? (Ubuntu 18.04, laptop with no dedicated GPU, as above.)

If you see a black screen, it may mean that you got an error during loading.
Please pull my latest commit; I just checked that the player notebook works.
Please download the networks first if you haven't done so yet.

The latest commit works! What was the issue?
I'll spend some time understanding the code and then move on to training.
Thanks for the help.

@dan9thsense I think the issue was that I forgot to push the latest commit to GitHub. I made these fixes two months ago.