confifu/RepNet-Pytorch

can you give me the test demo?

Opened this issue · 9 comments

I trained for 15 epochs, and the test results are relatively poor. Is there something wrong with my test demo? I hope you can provide me with a test demo. Thank you.

The Model.py file was old; I have updated it. Some of the trained checkpoints are here. The notebook that I am using locally is here. Note that for long videos you need to try slower frame rates, i.e. dividing the video into multiple 64-frame sections. One 64-frame section can have at most 31-32 repetitions.
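For example, a minimal sketch of splitting a long video into 64-frame sections at slower frame rates could look like the following; OpenCV is assumed for decoding, and the function names and stride values are illustrative, not from this repo:

```python
import cv2
import numpy as np

def read_video_frames(path):
    """Decode all frames of a video as RGB arrays."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

def split_into_sections(frames, num_frames=64, stride=1):
    """Keep every `stride`-th frame (a slower effective frame rate),
    then chop the result into consecutive 64-frame sections."""
    sampled = frames[::stride]
    sections = []
    for start in range(0, len(sampled) - num_frames + 1, num_frames):
        sections.append(np.stack(sampled[start:start + num_frames]))
    return sections  # each section has shape (64, H, W, 3)

# Try a few strides and keep the one whose per-section results look best:
# frames = read_video_frames("testvids/test2.mp4")
# for stride in (1, 2, 4):
#     sections = split_into_sections(frames, stride=stride)
#     # resize each section to the model's input size and run inference
```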

OK, I will give it a try. Thank you very much.

For the same video, I tested your model, and the results are not as good as the paper's model. I have a question about the period length estimator: in the paper, they use a multi-class classification objective (softmax cross-entropy) to optimize the model. How can we make the 64 * 32 dimensional labels?

I tested your model, and the results are not as good as the paper's model

The checkpoints that I linked were not trained exhaustively. There is room for improvement by simply training more.

How can we make the 64 * 32 dimensional labels?

Output a (batchsize, 32, frames) tensor from the model. Get rid of fc1_3 and transpose the last two dimensions of y1.
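A minimal sketch of what that change plus the matching labels could look like, assuming y1 has shape (batch, frames, 32) before fc1_3 is applied; apart from nn.CrossEntropyLoss, the names and shapes here are illustrative assumptions:

```python
import torch
import torch.nn as nn

batch, frames, num_classes = 2, 64, 32

# y1 as it would come out of the network once fc1_3 is removed (assumed shape).
y1 = torch.randn(batch, frames, num_classes)

# Transpose the last two dims -> (batch, 32, frames), which is the
# (N, C, d1) layout nn.CrossEntropyLoss expects for per-frame classification.
logits = y1.transpose(1, 2)

# Labels: one integer period-length class per frame, in [0, 31]. This is the
# index form of the 64 x 32 one-hot labels; random placeholders here, in
# practice they come from the annotated period length of each frame.
labels = torch.randint(0, num_classes, (batch, frames))  # shape (batch, 64)

loss = nn.CrossEntropyLoss()(logits, labels)
print(loss.item())
```

With this layout the softmax cross-entropy is taken over the 32 period-length classes independently for each of the 64 frames.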

I tried both checkpoints (tr9.pt and tr9_innt.pt, https://drive.google.com/drive/folders/1uYJQTMR6gRXzFVbCfMREv940iLfk27oK?usp=sharing), and the results are poor as well.

Using test2.mp4 from testvids for the test:

[Screenshot from 2021-06-07 17-45-34]

Is the repetition in the video symmetric? There are some cases where the model performs badly. This is from the paper:

Double Counting Errors. We observe that a common failure mode of our model is that for some actions (e.g. juggling soccer ball), it predicts half the count reported by annotators. This happens when the model considers left and right legs' motion for counting while people tend to consider the ball's up/down motion, resulting in people double counting the repetitions. We believe such errors are difficult to isolate in a class-agnostic manner. But they can be fixed easily with either labeled data or post-processing methods if the application is known.

Also, the OBO error is 0.30, which means that for 30% of videos the model predicted counts that were more than one off from the true value.
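For reference, a minimal sketch of how an OBO number like that is computed, assuming lists of per-video predicted and ground-truth counts (names are illustrative):

```python
import numpy as np

def obo_error(pred_counts, true_counts):
    """Fraction of videos whose predicted count is more than 1 away
    from the ground-truth count (off-by-one error)."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    return float(np.mean(np.abs(pred - true) > 1))

# e.g. if 3 out of 10 videos are off by more than one, obo_error(...) == 0.30
```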

Do you have the training dataset? If so, I was wondering if you could kindly share it with me.

Hello, it seems that the checkpoint link has expired. Can you please update the checkpoint link?

Hello, it seems that the checkpoint link has expired. Can you please update the checkpoint link and share the training dataset?