hzxie/Pix2Vox

One question on F-Score

LiyingCV opened this issue · 14 comments

Greetings,
In your paper there are two evaluation metrics, IoU and F-Score, but the F-Score is not included in your test code. Could you provide some details on how to implement the F-Score in the test code? Thank you in advance.

Hi, I don't know the exact code used to report the results but in the Pix2Vox++ paper, a reference to this work was given when talking about F-score. The code for the referenced paper includes the F-score calculation.

Thanks for your help! I solved it.

Hi, could you provide some details about implementing the F-Score in the test code? Thank you in advance.

You can refer to this repository: https://github.com/lmb-freiburg/what3d, and this issue: https://github.com/lmb-freiburg/what3d/issues/1.
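For reference, the F-Score in what3d is defined over two point clouds: precision is the fraction of predicted points within a distance threshold of the ground truth, recall is the fraction of ground-truth points within the threshold of the prediction. A minimal NumPy sketch (the conversion from Pix2Vox's voxel grids to surface point clouds is assumed to happen beforehand, and the brute-force pairwise distance matrix is only practical for a few thousand points):

```python
import numpy as np

def f_score(pred_points, gt_points, threshold=0.01):
    """F-Score between two (N, 3) / (M, 3) point clouds at a distance threshold."""
    # Full pairwise Euclidean distance matrix of shape (N, M).
    dists = np.linalg.norm(pred_points[:, None, :] - gt_points[None, :, :], axis=-1)
    # Precision: fraction of predicted points with a ground-truth neighbour
    # closer than the threshold.
    precision = (dists.min(axis=1) < threshold).mean()
    # Recall: fraction of ground-truth points with a predicted neighbour
    # closer than the threshold.
    recall = (dists.min(axis=0) < threshold).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Identical point clouds give an F-Score of 1.0, and clouds farther apart than the threshold give 0.0.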

Thank you for your reply! In fact, I have tried that but failed, so I would like to know how you solved it.

Could you describe the difficulties you met?

Thanks for your help! I solved it.

I'm sorry to trouble you again. I would like to know the F-Score results during testing; I cannot reproduce the accuracy reported in Pix2Vox++. A little help from you would be greatly appreciated!

Actually, we also get the wrong result: our single-view test value is 0.449, and we still have not found the reason. Besides, we find that the point sampling is computed on the CPU, which wastes too much time. Do you also run into this problem?

Yes. NumPy cannot read a CUDA tensor, so it has to be converted to a CPU tensor first.
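One way to avoid the CPU round trip is to keep the distance computation on the GPU entirely. A minimal sketch, assuming the point clouds are already PyTorch tensors on the same device (`torch.cdist` builds the full pairwise distance matrix, so memory is quadratic in the number of points):

```python
import torch

def f_score_torch(pred_points, gt_points, threshold=0.01):
    """F-Score between two (N, 3) / (M, 3) point-cloud tensors, CPU or CUDA."""
    # (N, M) pairwise Euclidean distances, computed on whatever device
    # the inputs live on -- no .cpu() / NumPy conversion needed.
    dists = torch.cdist(pred_points, gt_points)
    precision = (dists.min(dim=1).values < threshold).float().mean()
    recall = (dists.min(dim=0).values < threshold).float().mean()
    if precision + recall == 0:
        return torch.tensor(0.0)
    return 2 * precision * recall / (precision + recall)
```

Only the final scalar needs `.item()` or `.cpu()` for logging, so the expensive part never leaves the GPU.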

I also get the wrong result; my single-view test value is 0.415. Have you solved this?

Yes, the single-view test value is different from the paper's.

I tested several other multi-view settings and the results are all different from those in the article.
In addition, do you know how to train with multiple views? After training the whole network (without the context-aware fusion module) with single-view images for 250 epochs, what should I do to fix the encoder and decoder and train the rest of the network for 100 epochs? How do I fix the encoder and decoder?

Just follow the instructions in the paper: keep the merger enabled when training with multiple views.
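As for fixing the encoder and decoder, the usual approach in PyTorch is to disable gradients on those modules and pass only the remaining parameters to the optimizer. A hedged sketch (the module names `encoder`/`decoder` follow the Pix2Vox code layout, but this is not the authors' exact training script):

```python
import torch

def freeze(module):
    """Stop a module from being updated during training."""
    # Disable gradient computation for every parameter in the module.
    for p in module.parameters():
        p.requires_grad = False
    # Also freeze batch-norm running statistics and disable dropout.
    module.eval()

# Hypothetical usage with Pix2Vox-style modules:
# freeze(encoder)
# freeze(decoder)
# Give the optimizer only the still-trainable parameters:
# params = [p for p in network.parameters() if p.requires_grad]
# optimizer = torch.optim.Adam(params, lr=1e-4)
```

Note that `module.eval()` is needed in addition to `requires_grad = False`; otherwise the frozen batch-norm layers would keep updating their running mean and variance during the second training stage.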