Missing module: id2
Hello,
I am using a neuromorphic camera (DVXplorer Micro) and I am developing an application, possibly on a Jetson Nano, where I need to compute optical flow. However, when I try to follow the instructions, I always get the error:
" ModuleNotFoundError: No module named 'id2' "
It seems that this module is supposed to convert the raw events into a suitable input format for the network, but I cannot find a way to install it.
Is it a class or an external file? I do not see any related module in the repository.
Please tell me how to solve this if you can; I need it rather urgently.
Wishing you a good day,
Alessandro Marchei
Hi Alessandro,
Apologies. This should already be fixed in the latest commit: 6e9ade0
Hi Yilun, thanks for the response. I will try again tomorrow. I have a few questions:
- I need to port my application to a C++ environment, because I need it to be fast and because I am using a DVXplorer Micro (VGA, 640x480). Do you have any recommendations on how to convert the input slices into the bins that you use? I usually use LibTorch, so do I only need to import the .pt file (I plan to use TID), or would I need more files to import the trained model with its weights in C++?
To convert the input slices (where events are simply accumulated) to voxel grids (the representation used in IDNet), you'd need to have a look at the code in https://github.com/tudelft/idnet/blob/6e9ade0c1e3c4458528201eeb1445db9d4c6b2d5/idn/utils/dsec_utils.py#L20C7-L20C16
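Roughly, the idea behind that function looks like the sketch below. This is not the exact implementation from dsec_utils.py; the function and variable names are illustrative, and it assumes events given as tensors of x, y, polarity p in {-1, +1}, and timestamps t already normalized to [0, 1] within the window:

```python
import torch

def events_to_voxel_grid(x, y, t, p, num_bins, height, width):
    """Accumulate events into a (num_bins, H, W) voxel grid.

    Each event's polarity is split between the two temporally nearest
    bins, weighted by its distance to each bin (linear interpolation
    along the time axis).
    """
    voxel = torch.zeros(num_bins, height, width, dtype=torch.float32)

    # Scale normalized timestamps to fractional bin coordinates.
    t_scaled = t * (num_bins - 1)
    left_bin = torch.floor(t_scaled).long()
    right_bin = torch.clamp(left_bin + 1, max=num_bins - 1)
    right_weight = t_scaled - left_bin.float()
    left_weight = 1.0 - right_weight

    # Flatten (bin, y, x) indices so index_add_ can do the accumulation.
    flat = voxel.view(-1)
    pixel = y.long() * width + x.long()
    flat.index_add_(0, left_bin * height * width + pixel, p.float() * left_weight)
    flat.index_add_(0, right_bin * height * width + pixel, p.float() * right_weight)
    return voxel
```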
This process of interpolating events by their timestamps into the two nearest bins can be expensive on embedded platforms (and it is not included in the inference times reported in the paper). For your application, I would suggest retraining the network with plain event slices and using that as the representation, so that minimal preprocessing is needed for the events. You might see a minor performance drop, but for platforms with limited compute this is worthwhile.
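If you go that route, the per-window preprocessing reduces to a simple accumulation, something like this sketch (again, illustrative names only):

```python
import torch

def events_to_slice(x, y, p, height, width):
    """Accumulate event polarities into a single (H, W) image,
    with no temporal interpolation."""
    img = torch.zeros(height * width, dtype=torch.float32)
    img.index_add_(0, y.long() * width + x.long(), p.float())
    return img.view(height, width)
```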
I have limited experience with running PyTorch models efficiently on embedded hardware. In a previous project we used LibTorch, but perhaps there are more modern tools nowadays, such as TorchScript or TorchDynamo.
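As a rough example of the TorchScript route (the model class and checkpoint names below are placeholders, not files from this repo), you can trace the model in Python, save it, and then load the resulting .pt file in C++ with torch::jit::load. The saved file contains both the graph and the weights, so no additional files should be needed:

```python
import torch

# Placeholder: substitute the actual model class and checkpoint.
model = MyFlowModel()
model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model.eval()

# Trace with a dummy voxel-grid input of the expected shape
# (batch, bins, height, width); adjust to your configuration.
example = torch.zeros(1, 15, 480, 640)
traced = torch.jit.trace(model, example)
traced.save("model_traced.pt")
```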
- I plan to use it in a real-time situation, on a drone, with a very high event rate; do you think that is feasible? The paper says it runs on DSEC at 10 fps on a Xavier. Would it fit on embedded platforms for real-time computation?
It depends on which platform you are planning to use. If it has a GPU and is similar to or more powerful than the Xavier NX, then definitely: TID can run in real time. Of course, by real time I mean the 10 Hz standard from DSEC. If you want higher inference speed, you can retrain the network to use fewer than 15 bins over a shorter span of time (e.g. 5 bins for each 50 ms of events) as input.
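For example, with the conversion sketch above, a 5-bin representation over a 50 ms window would just mean normalizing the timestamps of that window to [0, 1] and calling it with num_bins=5 (after retraining with the matching input shape):

```python
# Events from one 50 ms window; timestamps t in seconds.
t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)
voxel = events_to_voxel_grid(x, y, t_norm, p, num_bins=5, height=480, width=640)
```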