External test
gt-madanb opened this issue · 2 comments
I am trying to run the Initialization notebook on external test videos that are already in AVI format, and I get the following error:
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data_utils\worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data_utils\collate.py", line 83, in default_collate
return [default_collate(samples) for samples in transposed]
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data_utils\collate.py", line 83, in
return [default_collate(samples) for samples in transposed]
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data_utils\collate.py", line 63, in default_collate
return default_collate([torch.as_tensor(b) for b in batch])
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data_utils\collate.py", line 55, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 16, 472, 636] at entry 0 and [3, 16, 600, 800] at entry 1
AVIs need to be resized, model expects 112x112 videos.
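A minimal pre-processing sketch (not part of the echonet repo) that resizes an external AVI to 112x112 with OpenCV before pointing the notebook at it; the file names, output directory, and codec are assumptions:

```python
import cv2

def resize_avi(src_path, dst_path, size=(112, 112)):
    """Rewrite an AVI so every frame is resized to `size` (width, height)."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"MJPG"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, size, interpolation=cv2.INTER_AREA))
    cap.release()
    writer.release()

# Hypothetical usage with the file from the log below:
resize_avi("0X1A58B506ED05C1D4.avi", "external_112/0X1A58B506ED05C1D4.avi")
```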
When I try to test a video from the dynamic dataset, I get the following error:
loading weights from D:\stanford_AIMI\weights\r2plus1d_18_32_2_pretrained
cuda is not available, cpu weights
EXTERNAL_TEST ['0X1A58B506ED05C1D4.avi']
100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.83s/it]

TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>
49
50 test_dataloader = torch.utils.data.DataLoader(ds, batch_size = 1, num_workers = 5, shuffle = True, pin_memory=(device.type == "cuda"))
---> 51 loss, yhat, y = echonet.utils.video.run_epoch(model, test_dataloader, "test", None, device, save_all=True)#, blocks=25)
52
53 with open(output, "w") as g:

~\dynamic-master\echonet\utils\video.py in run_epoch(model, dataloader, train, optim, device, save_all, block_size)
312 y = []
313
--> 314 with torch.set_grad_enabled(train):
315 with tqdm.tqdm(total=len(dataloader)) as pbar:
316 for (X, outcome) in dataloader:

~\anaconda3\lib\site-packages\torch\autograd\grad_mode.py in __init__(self, mode)
200 def __init__(self, mode: bool) -> None:
201 self.prev = torch.is_grad_enabled()
--> 202 torch._C._set_grad_enabled(mode)
203
204 def __enter__(self) -> None:

TypeError: enabled must be a bool (got str)
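This second error is unrelated to video size: run_epoch passes its third argument straight to torch.set_grad_enabled, which only accepts a bool, so calling it with the string "test" raises the TypeError. A minimal sketch of the corrected call, assuming the run_epoch signature shown in the traceback (pass False to disable gradients for evaluation):

```python
# Pass a bool for the `train` argument instead of the string "test";
# False disables gradient tracking for evaluation.
loss, yhat, y = echonet.utils.video.run_epoch(
    model, test_dataloader, False, None, device, save_all=True)
```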