deepstack-lite?
Closed this issue · 4 comments
I'm not expecting this to be high priority, but I mentioned I wanted fast inference on an RPi without requiring a USB stick for acceleration. I achieved this using tensorflow-lite in the repo below. My thought is that this could be rolled into deepstack, or a branch, at some point — possibly as a deepstack:lite container or similar. Any thoughts?
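The linked repo's code isn't shown in this thread, but as a minimal sketch of what tflite-based detection typically looks like (assuming a hypothetical SSD-style model file `detect.tflite` and the `tflite_runtime` package — both are assumptions, not the repo's actual contents):

```python
import numpy as np


def filter_detections(boxes, classes, scores, threshold=0.5):
    """Keep only detections whose confidence exceeds the threshold.

    boxes:   (N, 4) array of [ymin, xmin, ymax, xmax], normalized 0..1
    classes: (N,)   array of class indices
    scores:  (N,)   array of confidences
    """
    keep = scores >= threshold
    return boxes[keep], classes[keep], scores[keep]


def detect(image, model_path="detect.tflite"):
    """Run one image through a TFLite SSD-style detector.

    `model_path` is a hypothetical example; the model actually used
    in the linked repo is not stated in this thread.
    """
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    # Feed the (already resized/normalized) image as a 1-image batch.
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], image[np.newaxis, ...])
    interpreter.invoke()

    # SSD models conventionally emit boxes, classes, scores tensors.
    out = interpreter.get_output_details()
    boxes = interpreter.get_tensor(out[0]["index"])[0]
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    return filter_detections(boxes, classes, scores)
```

The `tflite_runtime` wheel is the interpreter-only package, which is what makes this practical on an RPi without a full TensorFlow install.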
This is cool. How fast is it, and what is the accuracy of the detections like?
Speed will clearly depend on the platform, and I haven't done any benchmarking, but response is effectively instantaneous on a Mac. Regarding accuracy, there is a chart in this link which indicates accuracy around 75-80%.
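No benchmarking was done in the thread, but per-inference latency is easy to measure. A minimal, generic timing helper (the names `benchmark` and `detect_once` are illustrative, not from the repo):

```python
import time


def benchmark(fn, warmup=3, runs=20):
    """Return the average wall-clock latency of fn() in milliseconds.

    A few warmup calls are made first so one-time costs (model load,
    cache fills) don't skew the measurement.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000.0


# Usage sketch: time a hypothetical detection call.
# avg_ms = benchmark(lambda: detect_once(image))
```

Running the same helper on both the Mac and the Pi would make the speed comparison concrete.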
Okay, cool. Will investigate this further, especially as speed on the Pi is often much slower compared to a Mac.
I think this is best served by a separate project; we don't want to maintain a fork. Closing.