sukjunhwang/IFC

evaluation error

Opened this issue · 1 comments

After running the following command to evaluate:

python projects/IFC/train_net.py --num-gpus 8 --eval-only --config-file projects/IFC/configs/base_ytvis.yaml MODEL.WEIGHTS pretrained_weights/coco_r50.pth INPUT.SAMPLING_FRAME_NUM 5

the following error occurred:

  File "/SSD_DISK/users/yanghongye/projects/rvos/IFC/projects/IFC/ifc/ifc.py", line 221, in forward
    video_output.update(clip_results)
  File "/SSD_DISK/users/yanghongye/projects/rvos/IFC/projects/IFC/ifc/structures/clip_output.py", line 103, in update
    input_clip.frame_idx] = input_clip.mask_logits[left_idx]
RuntimeError: shape mismatch: value tensor of shape [100, 5, 45, 80] cannot be broadcast to indexing result of shape [50, 5, 45, 80]
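This failure comes from advanced-index assignment: the value tensor's leading dimension (100 instances from the clip) must match the number of rows selected in the video-level buffer (50). A minimal NumPy sketch reproducing the same rule, with shapes taken from the traceback (variable names are hypothetical, not the actual IFC code):

```python
import numpy as np

# Video-level buffer sized for 50 instances (as in the traceback),
# while the new clip produced 100 instance mask logits.
video_masks = np.zeros((50, 5, 45, 80))   # indexing result: 50 rows
clip_masks = np.zeros((100, 5, 45, 80))   # value tensor: 100 rows

try:
    # Assigning 100 rows into a 50-row selection cannot broadcast.
    video_masks[np.arange(50)] = clip_masks
except ValueError as e:
    print("shape mismatch:", e)
```

PyTorch raises the analogous `RuntimeError` for the same mismatch, which is why raising the buffer's instance capacity (or capping the number of instances kept per clip) is the usual fix.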

I then changed


to

num_max_inst = 100

but the error still occurred when updating the second clip of the video:

  File "/SSD_DISK/users/yanghongye/projects/rvos/IFC/projects/IFC/ifc/ifc.py", line 221, in forward
    video_output.update(clip_results)
  File "/SSD_DISK/users/yanghongye/projects/rvos/IFC/projects/IFC/ifc/structures/clip_output.py", line 103, in update
    input_clip.frame_idx] = input_clip.mask_logits[left_idx]
RuntimeError: shape mismatch: value tensor of shape [5, 5, 45, 80] cannot be broadcast to indexing result of shape [0, 5, 45, 80]
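The second traceback is the degenerate case of the same rule: `left_idx` matched zero instances, so a value tensor with 5 rows cannot be written into a 0-row indexing result. A hypothetical NumPy sketch of that case:

```python
import numpy as np

buffer = np.zeros((10, 5, 45, 80))
sel = np.array([], dtype=int)          # no matched instances -> empty index
values = np.zeros((5, 5, 45, 80))      # but the clip still carries 5 rows

try:
    # 5 rows cannot broadcast into a 0-row indexing result.
    buffer[sel] = values
except ValueError as e:
    print("shape mismatch:", e)
```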

Could you help me to solve it?

Hi @hoyeYang ,

Most of the problems that occur in clip_output.py stem from either

  1. too many instances being captured, or
  2. too long a video sequence.

Could you tell me the length of the video that you used? Is the video from the YouTube-VIS 2019 dataset?