How to test on custom data with the pretrained model?
Hi authors!
This is nice work and congratulations on securing CVPR24!
I managed to deploy it on my machine and tested it on some sequences from the test set, and the pretrained model gave me amazing results! However, when I tried to test on custom data, I realized I needed a few more details. Based on my understanding, the event frames in the img folder are what the model takes as input when testing a sequence; the .csv file recording individual events is not actually used. I am wondering how I should preprocess my event stream so that the input domain does not shift significantly, which might degrade the model's performance.
Also, I found that the events in recording_2022-11-16_18-56-09 show the rough contour of the room. I am wondering how this was recorded, since if the camera is roughly stationary in the room, no events should be triggered except for some sporadic, pseudo-uniformly distributed noise.
Thanks in advance!
Hi, thank you for your interest!
Firstly, our event frames are generated from the .csv files. Specifically, we divide each recording into equal-length time windows and stack each window's event stream into a corresponding image frame. The code is here: https://github.com/Event-AHU/EventVOT_Benchmark/blob/main/Scripts/csv2img.py. For your custom dataset, you can adapt that script to your data format and convert it into image frames or another event representation.
Secondly, regarding the recording_2022-11-16_18-56-09 you mentioned: we captured it with a handheld camera, so camera shake inevitably triggered both events and noise.
Thanks so much for your detailed answer! That explains everything!