MyRobotLab/InMoov

get yolo "gesture" worky

Closed this issue · 7 comments

moz4r commented

Seems it is related to a personal build; we need to use the YOLO OpenCV filter.

What do you want the gesture to do? Add the YOLO filter? What else?

moz4r commented

I think it is for "what do you see"

You want to be able to respond with the current objects that are seen?
YOLO publishes what it recognizes on each frame.
I think you'd want to be able to ask the YOLO filter what the last recognized objects are... yes?
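
For reference, wiring the filter up from a script looks roughly like this (just a sketch; the exact addFilter signature on the OpenCV service may differ):

```python
# runs inside the MyRobotLab Python service, where Runtime is available as a global
opencv = Runtime.start("opencv", "OpenCV")
opencv.capture()
# add the YOLO filter (OpenCVFilterYolo) to the pipeline; "yolo" is just the filter's label
opencv.addFilter("yolo", "Yolo")
```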

moz4r commented

You're right, I think we have the tools to play with it.

Yeah, the info should be available in the "lastResult" object on the OpenCVFilterYolo class.
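
A rough sketch of a "what do you see" gesture built on that (assumes the opencv service from the earlier snippet, that getFilter() is available on the OpenCV service, that lastResult is readable from Python, and that i01.mouth is the InMoov speech service):

```python
def whatDoYouSee():
    # grab the running YOLO filter from the OpenCV service
    yolo = opencv.getFilter("yolo")
    if yolo is not None and yolo.lastResult is not None:
        # lastResult should hold the most recently recognized objects
        i01.mouth.speak("I see " + str(yolo.lastResult))
    else:
        i01.mouth.speak("I do not see anything yet")
```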

moz4r commented

Modded; it still needs some work to merge with the ultrasonic thing.

moz4r commented

We are close... the classification publisher needs to be converted to Java land, because of odd race conditions between the Python sleep / speech recognition …
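
For illustration, once the publisher lives in Java land the script side would only need a callback instead of a sleep/poll, roughly like this sketch (the publisher method name "publishClassification" is only a guess, not the actual MRL API):

```python
# subscribe the Python service to the classification publisher
# (method name is an assumption)
python.subscribe("opencv", "publishClassification")

def onClassification(classification):
    # invoked from Java each time a new classification is published,
    # so the gesture never has to sleep() and race with speech recognition
    print(classification)
```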