lvpengyuan/corner

File not found error.

Opened this issue · 10 comments

I'm getting the following error:

File "/home/mukut/gitlab/corner/data/icdar.py", line 49, in __init__
ic13_samples = open(ic13_list_path, 'r').readlines()
IOError: [Errno 2] No such file or directory: '../data/ocr/detection//icdar2013/test_list.txt'

when I executed eval_all.py.

Any suggestions?

Is your problem solved?

Have you solved this problem?

Yes. You should create such a .list file (similar to a txt file) yourself; each line is the file name of one test image.
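A minimal sketch of one way to build such a list file. The directory name, list file name, and extensions below are assumptions for illustration, not part of the repo:

```python
import os

def write_test_list(image_dir, list_path):
    """Write the file name of every test image in image_dir, one per line."""
    # Only keep common image extensions; adjust to match your dataset.
    names = sorted(
        f for f in os.listdir(image_dir)
        if f.lower().endswith(('.jpg', '.jpeg', '.png'))
    )
    with open(list_path, 'w') as out:
        for name in names:
            out.write(name + '\n')
```

Pointing `write_test_list` at your test-image folder and saving the result to the path that icdar.py expects (e.g. test_list.txt) should get past the IOError.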

Can you send me your txt files and the corresponding datasets? Thank you!

I did not test it on the ICDAR2013 task; I tested on COCO-Text. For that dataset you need to download the train2014 folder from the official website (https://rrc.cvc.uab.es/?ch=5&com=downloads).

Then I wrote a Python script that reads the file names of the test images in the downloaded data and generates the test.list file (Baidu SkyDrive link: https://pan.baidu.com/s/1lCxm_z8HflyMdGXsKVnUHA, extraction code: cfkp).

Have you tried evaluate_msra.py? First I ran eval_all.py on the TD500 dataset and got the outputs_eval directory, then I set detection_results_dir to '/outputs_eval/td500/240/res/' in evaluate_msra.py, but I got:

Traceback (most recent call last):
File "evaluate_msra.py", line 153, in <module>
recall=float(tp)/(tp+fp)
ZeroDivisionError: float division by zero

Do you know how to fix it?

Yes, I rewrote evaluate_msra.py when I ran it on COCO-Text. If tp+fp is 0, it may indicate that there are no ground-truth bounding boxes (or no detections) at all. I haven't run into this problem myself.
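One simple way to guard the division is to return 0.0 when the denominator is zero. This is a sketch, not the author's modified script; it uses the standard precision/recall/h-mean definitions, and the function names are illustrative:

```python
def safe_div(num, den):
    """Return num/den, or 0.0 when den is 0 (e.g. no detections at all)."""
    return float(num) / den if den else 0.0

def eval_metrics(tp, fp, fn):
    # Guarded versions of the quantities evaluate_msra.py computes.
    precision = safe_div(tp, tp + fp)
    recall = safe_div(tp, tp + fn)
    hmean = safe_div(2 * precision * recall, precision + recall)
    return precision, recall, hmean
```

With tp = fp = 0 (no detections produced at all) this returns 0.0 instead of raising ZeroDivisionError; note that an empty detection_results_dir path usually means the results directory is wrong, which is worth checking first.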

Can you send me your modified evaluate_msra.py?

OK, but I recommend you modify it yourself to meet your requirements (Baidu SkyDrive link: https://pan.baidu.com/s/1cjhctYfjQ-zoz2Jwf0vLdg, extraction code: qnhp).

Have you tried testing on one image instead of the whole dataset?