python scripts/get_labels.py --source ./ABSK
Opened this issue · 12 comments
Hello, author. Can you help me solve this problem?
(bpbreid1) D:\downloads\bpbreid-main>python scripts/get_labels.py --source ./ABSK
- OpenPifPaf model -> shufflenetv2k16
Processing: 0batch [00:00, ?batch/s] - MaskRCNN model -> COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml
Processing: 0%| | 0/130 [00:00<?, ?batch/s]
D:\Anaconda\envs\bpbreid1\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3191.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Processing: 2%|████ | 3/130 [04:35<3:15:13, 92.23s/batch]Traceback (most recent call last):
File "D:\downloads\bpbreid-main\scripts\get_labels.py", line 521, in
main()
File "D:\downloads\bpbreid-main\scripts\get_labels.py", line 517, in main
mask_model(imagery=img_paths, dataset_dir=args.source, is_overwrite=False)
File "D:\downloads\bpbreid-main\scripts\get_labels.py", line 392, in call
pifpaf_filtered: List[np.ndarray] = self.__filter_pifpaf_with_mask(batch, pifpaf_file_paths)
File "D:\downloads\bpbreid-main\scripts\get_labels.py", line 475, in __filter_pifpaf_with_mask
masks = filter_masks(self.model(batch))
File "D:\downloads\bpbreid-main\scripts\get_labels.py", line 444, in filter_masks
filtered_boxes, filtered_masks = zip(
ValueError: not enough values to unpack (expected 2, got 0)
Processing: 2%|████ 3/130 [06:06<4:18:55, 122.32s/batch]
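The traceback above ends in `filtered_boxes, filtered_masks = zip(...)`: when the Mask R-CNN detector returns no detections for a batch, `zip(*...)` over an empty sequence yields nothing, so unpacking into two names raises exactly this ValueError. A minimal reproduction, plus a hypothetical guard (the function names mirror the script, but this guard is an illustration, not the repository's official fix):

```python
def filter_masks_unguarded(detections):
    # detections: list of (box, mask) pairs kept after filtering.
    # zip(*[]) is an empty iterator, so this crashes on an empty list.
    filtered_boxes, filtered_masks = zip(*detections)
    return filtered_boxes, filtered_masks

def filter_masks_guarded(detections):
    # Hypothetical guard: return empty tuples instead of crashing
    # when the detector finds nothing in the batch.
    if not detections:
        return (), ()
    filtered_boxes, filtered_masks = zip(*detections)
    return filtered_boxes, filtered_masks

try:
    filter_masks_unguarded([])
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 0)

print(filter_masks_guarded([]))  # ((), ())
```

Whether an empty batch should be skipped or treated as a hard error depends on your data; if a human detector finds nothing on animal images, empty batches are expected and the real fix is swapping the models, as discussed below in the thread.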
Hello, author. Can I switch the dataset from humans to animals? What do I need to modify in get_labels.py to do that?
Hi @y1b2h3, to use it on an animal dataset, you should first generate the animal parsing labels for your dataset using a PifPaf model trained on a dataset with similar animals (or on the same dataset). Use that animal PifPaf model within the script provided by @samihormi (have a look at the README). You can also create the parsing labels with any other strategy, for instance with SAM, or even manually. Then create a Torchreid dataset class for your new dataset by replicating what was done for the other ReID datasets (see how it's done for OccludedDuke in 'torchreid/data/datasets/image/occluded_dukemtmc.py' for instance).
Hi @y1b2h3, you can first run a training with an existing dataset to see how it works, then create your own subclass of the dataset class in "torchreid/data/datasets/dataset.py" by mimicking what is done in 'torchreid/data/datasets/image/occluded_dukemtmc.py' for instance. Then register your dataset in "torchreid/data/datasets/__init__.py". Finally, in the yaml config, choose your dataset as source and target:
sources: ['your_dataset']
targets: ['your_dataset']
You can also have a look at the official Torchreid documentation for more information: https://kaiyangzhou.github.io/deep-person-reid/user_guide.html#use-your-own-dataset
Finally, make sure that your images and masks are properly loaded when launching the training; this loading happens inside "torchreid.data.datasets.dataset.ImageDataset.__getitem__(...)".
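Mimicking occluded_dukemtmc.py mostly comes down to listing your image paths and parsing a person ID and camera ID out of each filename, then handing the resulting train/query/gallery lists to the dataset superclass. A minimal, torchreid-free sketch of that parsing step, assuming a Market-1501-style naming scheme like `0001_c1_000.jpg` (pid, then camera id) -- your own dataset's scheme will differ, so adapt the regex:

```python
import glob
import os.path as osp
import re

def process_dir(dir_path, relabel=False):
    """Collect (img_path, pid, camid) tuples the way the bundled
    Torchreid datasets do. Assumes filenames like '0001_c1_000.jpg';
    change the pattern to match your own naming scheme."""
    img_paths = glob.glob(osp.join(dir_path, '*.jpg'))
    pattern = re.compile(r'(\d+)_c(\d+)')

    pid_container = set()
    for img_path in img_paths:
        pid, _ = map(int, pattern.search(osp.basename(img_path)).groups())
        pid_container.add(pid)
    # Training sets are usually relabeled so pids run 0..num_pids-1
    pid2label = {pid: label for label, pid in enumerate(sorted(pid_container))}

    data = []
    for img_path in sorted(img_paths):
        pid, camid = map(int, pattern.search(osp.basename(img_path)).groups())
        camid -= 1  # camera ids are typically 1-based in filenames
        if relabel:
            pid = pid2label[pid]
        data.append((img_path, pid, camid))
    return data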
@y1b2h3
I have the same issue of "ValueError: not enough values to unpack (expected 2, got 0)".
I fixed it by changing the OpenPifPaf version.
I remember that I also changed the output dim of the model in get_labels.py.
And finally, I got the correct heat maps on my own custom dataset. Thanks for the author's great work.
Hello, friend:
Thank you very much for your answer.
1. When using a custom dataset, where in get_labels.py did you make your modifications?
2. I encountered the following error while training on a custom dataset. Can you help resolve it? python scripts/main.py --config-file configs/bpbreid/bpbreid_absk_train.yaml
=> Start training
Traceback (most recent call last):
File "D:\downloads\bpbreid-main\scripts\main.py", line 273, in
main()
File "D:\downloads\bpbreid-main\scripts\main.py", line 184, in main
engine.run(**engine_run_kwargs(cfg))
File "d:\downloads\bpbreid-main\torchreid\engine\engine.py", line 204, in run
self.train(
File "d:\downloads\bpbreid-main\torchreid\engine\engine.py", line 264, in train
for self.batch_idx, data in enumerate(self.train_loader):
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\dataloader.py", line 628, in next
data = self._next_data()
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data\dataloader.py", line 1359, in _process_data
data.reraise()
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch_utils.py", line 543, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\worker.py", line 302, in worker_loop
data = fetcher.fetch(index)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\fetch.py", line 61, in fetch
return self.collate_fn(data)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 265, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 128, in collate
return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 128, in
return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 120, in collate
return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 172, in collate_numpy_array_fn
return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 120, in collate
return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
File "D:\Anaconda\envs\bpbreid3\lib\site-packages\torch\utils\data_utils\collate.py", line 162, in collate_tensor_fn
out = elem.new(storage).resize(len(batch), *list(elem.size()))
RuntimeError: Trying to resize storage that is not resizable
Hi @y1b2h3, the purpose of collate.py (where your error occurs) is to process the output of the dataloader: this error means something is wrong with the data returned by the dataloader when building the training batch. That data comes from the "torchreid.data.datasets.dataset.ImageDataset.__getitem__(...)" function. To solve the error, inspect all the data inside the "sample" object returned by __getitem__ and find what is causing it: maybe you have an empty array, a wrong data type, or something else.
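To expand on the author's advice: the default collate function stacks every field of the batch into one tensor, which only works when that field has the same shape in every sample, and "Trying to resize storage that is not resizable" typically surfaces when a field (image, mask, heatmap) varies in size across samples. A quick, torch-free way to locate the offending key before digging into __getitem__ (this helper is an illustration, not part of the repository):

```python
def find_mismatched_fields(samples):
    """Given a list of per-sample dicts (as returned by __getitem__),
    report every key whose array/tensor shapes differ across samples --
    those are the fields default_collate cannot stack into one batch."""
    mismatched = {}
    for key in samples[0]:
        shapes = []
        for sample in samples:
            value = sample[key]
            # numpy arrays and torch tensors both expose .shape;
            # scalars and strings have no shape and always collate fine
            shapes.append(tuple(getattr(value, 'shape', ())))
        if len(set(shapes)) > 1:
            mismatched[key] = shapes
    return mismatched
```

Call it on a handful of `dataset[i]` samples; any key it reports needs a fixed-size output (resize or pad) inside __getitem__.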
Hello friend, I am using a custom (non-human) dataset with get_labels.py. Besides swapping in the corresponding model, which specific parts of the get_labels.py code do I need to modify? My coding skills are weak. Thank you for your kind guidance.
@gao1qiang Give me your email. I would like to share my code and solve your problem.
Hi @ellzeycunha0, I have the same issue of "ValueError: not enough values to unpack (expected 2, got 0)". Could you please share your code to help me solve this problem?