VIPL-Audio-Visual-Speech-Understanding/LipNet-PyTorch

Need at least one array to stack

jainnimish opened this issue · 2 comments

Hello again. I ran the demo.py file and this time, I got this error.

Traceback (most recent call last):
  File "C:\__\__\demo.py", line 158, in <module>
    video, img_p = load_video(sys.argv[1])
  File "C:\__\__\demo.py", line 123, in load_video
    video = np.stack(video, axis=0).astype(np.float32)
  File "<__array_function__ internals>", line 5, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

Any advice would be great! Cheers!

This is likely caused by face detection: the list of frames will be empty when no face is detected in the given video, so np.stack has nothing to stack. Using an original video from the GRID corpus is recommended to make it work properly.
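To make the failure mode obvious, a minimal sketch of a guard before the stack call (the helper name `stack_frames` is hypothetical, not part of the repo's demo.py):

```python
import numpy as np

def stack_frames(frames):
    """Stack a list of cropped face frames into a float32 array.

    Fails with a clear message when face detection produced no frames,
    which is what surfaces as NumPy's 'need at least one array to
    stack' ValueError in demo.py.
    """
    if len(frames) == 0:
        raise RuntimeError(
            "No faces detected in the input video; "
            "try an original GRID corpus clip instead."
        )
    return np.stack(frames, axis=0).astype(np.float32)
```

Dropping a check like this into `load_video` before the `np.stack` line would turn the cryptic traceback into an actionable error.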

So I did not actually train my own model on the GRID dataset. I just ran the demo.py file and it gave me this error. Can I run demo.py without training a model myself? I thought you provided a default demo dataset.

Also, I ran demo.py on a video of someone saying "hello" and it predicted "BIC GREN AY J NINE LEOE". Can you please tell me why it is not predicting correctly?