open-mmlab/mmyolo

The testing results of the whole dataset is empty AND both loss_bbox and loss_dfl are 0

lkh2022 opened this issue · 9 comments

Just modify the config file as follows:
...
class_name = ('Truck', 'Car')  # your categories
metainfo = dict(classes=class_name, palette=[(20, 220, 60)])
...
train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    persistent_workers=persistent_workers,
    pin_memory=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    collate_fn=dict(type='yolov5_collate'),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        metainfo=metainfo,    # <-- add metainfo here
        ann_file=train_ann_file,
        data_prefix=dict(img=train_data_prefix),
        filter_cfg=dict(filter_empty_gt=False, min_size=32),
        pipeline=train_pipeline))
val_dataloader = dict(
    batch_size=val_batch_size_per_gpu,
    num_workers=val_num_workers,
    persistent_workers=persistent_workers,
    pin_memory=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        metainfo=metainfo,    # <-- add metainfo here
        test_mode=True,
        data_prefix=dict(img=val_data_prefix),
        ann_file=val_ann_file,
        pipeline=test_pipeline,
        batch_shapes_cfg=batch_shapes_cfg))

Originally posted by @rhett-ye in #447 (comment)

After I modified it in this way, loss_bbox and loss_dfl are still 0, and the "testing results of the whole dataset is empty" message still appears. Has anyone else run into this?
01/15 18:33:28 - mmengine - INFO - Saving checkpoint at 20 epochs
01/15 18:33:31 - mmengine - INFO - Epoch(val) [20][ 50/548] eta: 0:00:08 time: 0.0169 data_time: 0.0046 memory: 5640
01/15 18:33:32 - mmengine - INFO - Epoch(val) [20][100/548] eta: 0:00:06 time: 0.0118 data_time: 0.0010 memory: 185
01/15 18:33:33 - mmengine - INFO - Epoch(val) [20][150/548] eta: 0:00:05 time: 0.0144 data_time: 0.0030 memory: 185
01/15 18:33:33 - mmengine - INFO - Epoch(val) [20][200/548] eta: 0:00:04 time: 0.0128 data_time: 0.0012 memory: 185
01/15 18:33:34 - mmengine - INFO - Epoch(val) [20][250/548] eta: 0:00:04 time: 0.0152 data_time: 0.0034 memory: 185
01/15 18:33:35 - mmengine - INFO - Epoch(val) [20][300/548] eta: 0:00:03 time: 0.0161 data_time: 0.0039 memory: 185
01/15 18:33:36 - mmengine - INFO - Epoch(val) [20][350/548] eta: 0:00:02 time: 0.0160 data_time: 0.0044 memory: 185
01/15 18:33:36 - mmengine - INFO - Epoch(val) [20][400/548] eta: 0:00:02 time: 0.0159 data_time: 0.0039 memory: 185
01/15 18:33:37 - mmengine - INFO - Epoch(val) [20][450/548] eta: 0:00:01 time: 0.0148 data_time: 0.0035 memory: 185
01/15 18:33:38 - mmengine - INFO - Epoch(val) [20][500/548] eta: 0:00:00 time: 0.0150 data_time: 0.0037 memory: 185
01/15 18:33:39 - mmengine - INFO - Evaluating bbox...
Loading and preparing results...
01/15 18:33:39 - mmengine - ERROR - /opt/conda/envs/mmyolo/lib/python3.8/site-packages/mmdet/evaluation/metrics/coco_metric.py - compute_metrics - 465 - The testing results of the whole dataset is empty.

Hello, I also had this problem. Have you solved it yet?

This problem still exists now.
I used yolov8_s_mask-refine_syncbn_fast_8xb16-500e_coco.py to train a model on another dataset. Based on existing experience:

  1. I checked my dataset format and it complies with the COCO format (a quick way to double-check the category names is sketched below)
  2. I also added metainfo to train_dataloader and val_dataloader, matching the classes in the dataset annotations
  3. I modified num_classes
  4. I modified the COCO categories in coco.py and class_name.py to be consistent with the classes in metainfo

In addition, loss_bbox and loss_dfl are always 0. After about two epochs, the total loss is also 0.
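
A minimal sketch of the category-name check mentioned in point 1, assuming a COCO-style annotation file and that pycocotools is installed (the annotation path and class tuple below are placeholders):

from pycocotools.coco import COCO

ann_file = 'data/my_dataset/annotations/train.json'  # placeholder path
metainfo_classes = ('Truck', 'Car')  # same tuple as `classes` in metainfo

coco = COCO(ann_file)
cat_names = [c['name'] for c in coco.loadCats(coco.getCatIds())]
print('categories in annotation file:', cat_names)
print('classes in metainfo:          ', list(metainfo_classes))
print('number of annotations:', len(coco.getAnnIds()))

# If the category names do not match the metainfo classes exactly (including
# case), the dataset typically ends up with no usable ground truth, which is
# consistent with loss_bbox/loss_dfl staying at 0 and an empty evaluation.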

Hello, I successfully ran the official tutorial "15 minutes to get started with MMYOLO object detection", modified the relevant settings based on yolov8_s_fast_1xb12-40e_cat.py, and have now trained for 30 epochs. Everything is normal and none of the problems mentioned above have appeared.

Hello, I also ran yolov5_s-v61_fast_1xb12-40e_cat.py and yolov8_s_fast_1xb12-40e_cat.py and found that the cat configs run successfully. The YOLOv5 VOC model also runs, but the other COCO models report an error, and I don't know where the problem is. Also, I can't see the AP of each category; if you make any progress later, please reply to me, thank you very much!

If you want to know the AP of every class, you can change classwise from False to True. You can find classwise in mmdet.evaluation.metrics.coco_metric; the parameter is on line 73 of coco_metric.py.
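
If you prefer not to edit coco_metric.py, the same behaviour can usually be switched on from the config instead, since CocoMetric accepts classwise as a constructor argument. A sketch (the ann_file path below is a placeholder):

val_evaluator = dict(
    type='mmdet.CocoMetric',
    ann_file='data/my_dataset/annotations/val.json',  # placeholder path
    metric='bbox',
    classwise=True)  # print a per-category AP table after evaluation
test_evaluator = val_evaluator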

Hello, I have made the changes to coco_metric.py, but both before and after the change I only get the first kind of result, while I want to get the second kind of result.
1. Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.459
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.647
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.521
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.354
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.551
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.505
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.466
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.590
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.611
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.450
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.712
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.670
01/17 20:23:02 - mmengine - INFO - bbox_mAP_copypaste: 0.459 0.647 0.521 0.354 0.551 0.505
2.
+-------------+-----+------+--------+-------+
| class | gts | dets | recall | ap |
+-------------+-----+------+--------+-------+
| block | 12 | 255 | 1.000 | 0.967 |
| finger | 29 | 730 | 0.724 | 0.359 |
| dirt | 75 | 280 | 0.667 | 0.621 |
| corner | 24 | 855 | 1.000 | 0.850 |
| fragment | 25 | 227 | 0.680 | 0.614 |
| crack | 9 | 265 | 1.000 | 0.989 |
+-------------+-----+------+--------+-------+
| mAP | | | | 0.733 |
+-------------+-----+------+--------+-------+

I have not seen the second output format. The output after I modified classwise is like this:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.148
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.371
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.089
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.111
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.392
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.795
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.020
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.114
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.225
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.189
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.501
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.839
01/18 13:40:40 - mmengine - INFO -
+----------+-------+---------+---------+--------+--------+-------+
| category | mAP | mAP_50 | mAP_75 | mAP_s | mAP_m | mAP_l |
+----------+-------+---------+---------+--------+--------+-------+
| person | 0.148 | 0.371 | 0.089 | 0.111 | 0.392 | 0.795 |
+----------+-------+---------+---------+--------+--------+-------+

Thanks! I added --cfg-options test_evaluator.classwise=True when running in the terminal and it printed successfully. The second format was the output from my earlier training on the VOC dataset.
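
For reference, the full test command looks roughly like this (the config and checkpoint paths below are placeholders):

python tools/test.py \
    configs/yolov8/yolov8_s_fast_1xb12-40e_cat.py \
    work_dirs/yolov8_s_fast_1xb12-40e_cat/epoch_40.pth \
    --cfg-options test_evaluator.classwise=True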