ValueError: cannot reshape array of size 1 into shape (4,2)
Opened this issue · 10 comments
(pytorch) zhengxin@zhengxin:~/DB$ CUDA_VISIBLE_DEVICES=0 python train.py /home/zhengxin/DB/experiments/seg_detector/ic15_resnet18_deform_thre.yaml --num_gpus 1
[INFO] [2021-07-03 16:04:53,047] Training epoch 0
Traceback (most recent call last):
File "train.py", line 70, in <module>
main()
File "train.py", line 67, in main
trainer.train()
File "/home/zhengxin/DB/trainer.py", line 72, in train
for batch in train_data_loader:
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/zhengxin/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/zhengxin/DB/data/image_dataset.py", line 96, in __getitem__
data = data_process(data)
File "/home/zhengxin/DB/data/processes/data_process.py", line 9, in __call__
return self.process(data)
File "/home/zhengxin/DB/data/processes/augment_data.py", line 46, in process
self.may_augment_annotation(aug, data, shape)
File "/home/zhengxin/DB/data/processes/augment_data.py", line 72, in may_augment_annotation
new_polys = np.array([p.x, p.y] for p in keypoints).reshape([-1, 4, 2])
ValueError: cannot reshape array of size 1 into shape (4,2)
How can I fix it? I need help, thanks.
Did you fix this problem?
I'm also hitting this problem.
Have you fixed this issue?
Nope.
Have you fixed this issue? I don't know what to do.
I may have solved this problem, though I don't know why. The number of GPUs in the sample code is 4, but I have only 1, so I modified the code, inspired by https://github.com/MhLiao/DB/pull/263/commits/417cb8432436fe4faac7c449536c738d3a6adaf4:
new_polys = np.array([[p.x, p.y] for p in keypoints]).reshape((-1, 1, 2))
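For anyone wondering why the inner brackets matter, here is a minimal sketch (using a plain stand-in class instead of the real imgaug keypoint objects, which is an assumption here): `np.array` on a generator expression yields a size-1 object array, which is exactly the error in the traceback, while a list comprehension builds a proper (N, 2) array first:

```python
import numpy as np

# Stand-in for the augmentation library's keypoint objects (hypothetical).
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

keypoints = [Point(0, 0), Point(10, 0), Point(10, 5), Point(0, 5)]

# Buggy: a generator expression hands np.array a single opaque object,
# so the result is a 0-d object array of size 1 -- it cannot be reshaped.
bad = np.array([p.x, p.y] for p in keypoints)
print(bad.size)   # 1

# Fixed: the inner brackets build a list of [x, y] pairs first.
good = np.array([[p.x, p.y] for p in keypoints]).reshape((-1, 1, 2))
print(good.shape)  # (4, 1, 2)
```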
I may have solved this problem too, though I don't know why. The number of GPUs in the sample code is 4, but I have only 1, so I modified the code, inspired by https://github.com/MhLiao/DB/pull/263/commits/417cb8432436fe4faac7c449536c738d3a6adaf4:
new_polys = np.array([p.x, p.y] for p in keypoints).reshape((-1, 1, 2))
If that still fails (this version passes a generator expression to np.array, which reproduces the same size-1 error), try this instead:
new_polys = np.array([[p.x, p.y] for p in keypoints]).reshape((-1, 1, 2))
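A side note on why the target shape also changed: the original reshape([-1, 4, 2]) assumes every annotation has a multiple of 4 points (quadrilaterals), while (-1, 1, 2) is the OpenCV-style contour shape and accepts any point count. A small sketch with a hypothetical 5-point polygon:

```python
import numpy as np

# Hypothetical polygon with 5 vertices (not a multiple of 4).
pts = np.array([[0, 0], [4, 0], [5, 2], [4, 4], [0, 4]])

# Works for any number of points: one (x, y) pair per row.
print(pts.reshape((-1, 1, 2)).shape)  # (5, 1, 2)

# Fails unless the point count is a multiple of 4.
try:
    pts.reshape((-1, 4, 2))
except ValueError as e:
    print(e)
```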
Thanks, I will try.
@TangDL What is the difference between your answer and duxiangcheng's?
I have tried it. It works! But another problem shows up. My environment is Google Colab.
RuntimeError: CUDA out of memory. Tried to allocate 400.00 MiB (GPU 0; 14.76 GiB total capacity; 13.10 GiB already allocated; 327.75 MiB free; 13.37 GiB reserved in total by PyTorch)
CUDA out of memory! Please use a GPU with more memory or reduce the batch size.