simoninithomas/Deep_reinforcement_learning_Course

ValueError: ('Cannot warp empty image with dimensions', (0, 24))

EXJUSTICE opened this issue · 2 comments

Hello,

I've tried adapting your training approach to some pre-existing code of mine, but I am constantly met with this ValueError. My model is different from yours, but it essentially performs the same procedure. My original training loop checked while not done, but since the first episode quits early (?) there would always be some error, and hence I wanted to try your approach.
https://gist.github.com/EXJUSTICE/0df29caedee2a72a7e5faf7aa88cbd03

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-159-aebc713aaeac> in <module>()
     63               next_obs=np.zeros(obs.shape)
     64 
---> 65               next_obs,stacked_frames= stack_frames(stacked_frames,next_obs,False)
     66               step = max_steps
     67               history.append(episodic_reward)

3 frames
/usr/local/lib/python3.6/dist-packages/skimage/transform/_warps.py in warp(image, inverse_map, map_args, output_shape, order, mode, cval, clip, preserve_range)
    805 
    806     if image.size == 0:
--> 807         raise ValueError("Cannot warp empty image with dimensions", image.shape)
    808 
    809     image = convert_to_float(image, preserve_range)

ValueError: ('Cannot warp empty image with dimensions', (0, 24))
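
Reading the traceback, the failing call sits in the episode-termination branch of the training loop, which (reconstructed roughly from the frames above; stack_frames and the variable names are the gist's) looks like:

if done:
    # A blank frame stands in for the terminal next state and then goes through
    # the same stack_frames -> preprocessing path as real frames, so the
    # preprocessing must cope with an all-zeros array of obs.shape.
    next_obs = np.zeros(obs.shape)
    next_obs, stacked_frames = stack_frames(stacked_frames, next_obs, False)
    step = max_steps
    history.append(episodic_reward)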

After investigating, it's clear to me that the error comes from the preprocessing function, at the point where scikit-image's transform.resize is called to resize the cropped image to 84x84. I changed your code so that grayscaling happens as well. Presumably the transformation cannot occur on an empty array of zeros: the crop frame[60:-60, 30:-30] leaves no rows at all if the incoming frame is 120 pixels tall or less (an 84x84 frame, for instance, crops to exactly (0, 24), matching the error).

from skimage import transform
from skimage.color import rgb2gray

def preprocess_observation(frame):

    # Crop the image into a square, as we don't need the excess information
    cropped = frame[60:-60, 30:-30]

    # Normalize pixel values to [0, 1]
    normalized = cropped / 255.0

    # Improve image contrast -- see if it works
    # img[img == color] = 0

    # Next we normalize the image from -1 to +1 -- see if it works
    # img = (img - 128) / 128 - 1

    # Convert to grayscale before resizing
    img_gray = rgb2gray(normalized)

    # Resize to the 84x84 network input
    preprocessed_frame = transform.resize(img_gray, [84, 84])

    return preprocessed_frame
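
A quick way to confirm this (a minimal sketch; the 84x84 input shape is an assumption inferred from the (0, 24) dimensions in the error) is to reproduce the empty crop directly:

import numpy as np
from skimage import transform

# Assumed scenario: an already-84x84 array (e.g. the zero frame built from
# obs.shape) goes through the crop that was meant for the raw screen.
frame = np.zeros((84, 84))
cropped = frame[60:-60, 30:-30]
print(cropped.shape)                  # (0, 24): no rows survive the crop
transform.resize(cropped, [84, 84])   # ValueError: Cannot warp empty image ...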

Hi,
I'm having the same problem, except that I'm not really adapting the code; I'm only updating it to newer versions of Python and its packages (tf.compat.v1 is a thing). I tried to get around the ValueError with:

if (cropped_frame[np.ix_([0, -1], [0, -1])] == np.array([[0, 0], [0, 0]])).all():
    cropped_frame[np.ix_([0, -1], [0, -1])] = np.array([[1, 1], [1, 1]])

Which gave:

IndexError: index 0 is out of bounds for axis 0 with size 0

Perhaps this can be interpreted as: the state passed in as a frame is an empty array (or smaller than 40 x 60, since cropping happens before the resize). If you come across a solution, I'm all ears!
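
One way to test that hypothesis (a sketch; the 40 x 60 minimum follows the guess above, and check_frame is a hypothetical helper -- adjust the bounds to your actual crop) is to assert on the shape before cropping:

# Hypothetical guard: fail loudly if the incoming frame can't survive the crop.
def check_frame(frame):
    assert frame.ndim >= 2, f"expected an image, got shape {frame.shape}"
    assert frame.shape[0] > 40 and frame.shape[1] > 60, \
        f"frame of shape {frame.shape} is too small to crop"
    return frame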

The initial state/frame is empty. Because transform.resize upsamples, only one pixel is necessary (not sure if there's a reason to use more, but more complexity meant more problems for me). Trying to give a plain array of zeros wouldn't run; apparently I needed a 2- or 3-tuple. Giving a 3-tuple (each pixel, in my case, is RGB) failed at the normalization step, because division doesn't work on tuples. I finally got:

if np.size(cropped_frame) < 1:
    # pixel = tuple(np.array([0, 0, 0], dtype='uint8'))
    cropped_frame = np.array([0, 0, 0], dtype='uint8')

to work. Since you're using greyscale, depending on where you check for the empty frame, you may not need the three-channel array; a greyscale version is sketched below.
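
For a greyscale pipeline the fallback can be even simpler (a sketch, assuming the check runs on the cropped frame just before transform.resize):

import numpy as np
from skimage import transform

frame = np.zeros((84, 84))               # stand-in for a too-small input (assumed shape)
cropped_frame = frame[60:-60, 30:-30]    # empty after the crop: shape (0, 24)

# Hypothetical greyscale fallback: transform.resize upsamples, so a single
# zero pixel is enough to stand in for an empty terminal frame.
if np.size(cropped_frame) < 1:
    cropped_frame = np.zeros((1, 1))

preprocessed = transform.resize(cropped_frame, [84, 84])   # now succeeds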