diablodale/jit.openni

Reducing the jit.openni working area

Closed this issue · 5 comments

Hello, I am working on an installation with a small trampoline, and I am using a Kinect in it. I need to reduce the Kinect's working area to a small rectangle (0.7 × 0.7 meters), only where the trampoline is, and I need to track only one user there.

I have tried to do this in Max, but it is not accurate, because jit.openni still sees the user
even when he is outside the working area that I set.

Is there another way to do this?

Please help me

The working area (also known as user position) feature of OpenNI was never successfully implemented by the OpenNI group. Unfortunately, they have abandoned the v1.x code, so that limitation will never be fixed.

The output that you get from the depthmap and the skeleton output can both be filtered by you in Max. For example, you could use a jit.op object to filter out unwanted data in the depthmap by removing Z values less/greater than the trampoline's. You can do the same with X and Y.
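Since Max patches are not text, here is a minimal Python sketch of that jit.op-style Z filtering, just to illustrate the logic. The band limits are hypothetical numbers for a trampoline roughly 1.5–2.2 m from the sensor:

```python
# Sketch only (not Max code): zero out every depth cell whose Z value
# falls outside the trampoline's distance band, the same masking a
# jit.op >/< chain would perform on the depthmap matrix.

Z_NEAR = 1500  # hypothetical near edge of the trampoline, in mm
Z_FAR = 2200   # hypothetical far edge of the trampoline, in mm

def mask_depth(depthmap):
    """Keep depth values inside [Z_NEAR, Z_FAR]; zero everything else."""
    return [[z if Z_NEAR <= z <= Z_FAR else 0 for z in row]
            for row in depthmap]

frame = [[900, 1600, 2100],
         [2500, 1800, 400]]
print(mask_depth(frame))  # cells outside the band become 0
```

The same pattern applies to X and Y if you convert pixel positions to real-world coordinates first.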

The same could be done on the skeletons. You probably don't want to filter individual joints, so a better method might be to filter a whole skeleton based on the user's CoM (center of mass). If the center of mass is in the region of the trampoline, then you could use that CoM directly... or use the joints belonging to the skeleton ID that matches the CoM.
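That CoM region test can be sketched in plain Python (pseudocode for the patch logic, not Max code; the trampoline bounds are made-up values):

```python
# Sketch: keep only users whose center of mass lies inside the
# trampoline rectangle. Coordinates are assumed to be real-world
# meters, X left/right and Z distance from the sensor.

TRAMP = {"x_min": -0.35, "x_max": 0.35,   # hypothetical 0.7 m width
         "z_min": 1.5, "z_max": 2.2}      # hypothetical distance band

def com_on_trampoline(com):
    """True if a (x, y, z) center of mass is over the trampoline."""
    x, _y, z = com
    return (TRAMP["x_min"] <= x <= TRAMP["x_max"]
            and TRAMP["z_min"] <= z <= TRAMP["z_max"])

# user_id -> CoM, as made-up example data
coms = {1: (0.1, 0.9, 1.8), 2: (1.4, 1.0, 2.0)}
active = [uid for uid, c in coms.items() if com_on_trampoline(c)]
print(active)  # only user 1 is on the trampoline
```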

Thanks for your reply!

In my patch I am using only the head coordinates. I have defined a rectangle and use the head only when it is inside that rectangle.
It works fine. The problem arises when there are many people in front of the Kinect: OpenNI finds four random skeletons, and it is great luck if the one inside the rectangle is among them.

So what should I do, other than covering part of the Kinect's eye with a sticker...

I still believe that the CoM would be the approach to use. You, the creator, need to choose which person. OpenNI doesn't have any knowledge of your trampoline or the other parts of your installation. I recommend you develop some logic/code that chooses the person you want to watch using the CoM. Then use the ID number of that CoM to look up the head joint for that same ID.
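That select-by-CoM-then-look-up-the-head flow could look like this as a plain Python sketch (the data shapes are hypothetical stand-ins for jit.openni's user/skeleton output, and the region bounds are made up):

```python
# Sketch: pick the first user whose CoM is inside the trampoline
# region, then read only that user's head joint by the same user ID.

def pick_user(coms, in_region):
    """Return the ID of the first user whose CoM passes the region test."""
    for uid, com in coms.items():
        if in_region(com):
            return uid
    return None

def head_for(uid, joints):
    """Look up the head joint for a given user ID, or None if absent."""
    return joints.get(uid, {}).get("head")

# hypothetical trampoline bounds: |x| <= 0.35 m, 1.5 m <= z <= 2.2 m
in_region = lambda c: -0.35 <= c[0] <= 0.35 and 1.5 <= c[2] <= 2.2

# made-up example data: two tracked users, only user 7 is on the trampoline
coms = {3: (1.2, 0.9, 1.9), 7: (0.0, 1.0, 1.8)}
joints = {3: {"head": (1.2, 1.6, 1.9)}, 7: {"head": (0.05, 1.7, 1.8)}}

uid = pick_user(coms, in_region)
print(uid, head_for(uid, joints))  # user 7's head coordinates
```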

If you can use the CoM for everything, I recommend doing that. OpenNI can find CoM faster and more reliably than joints. However, you might have a need for the head coordinate that makes that joint necessary. Having lots of people moving and jumping on a trampoline is probably difficult for OpenNI to detect and recognize. haha.

If you are on Windows, I recommend you use dp.kinect instead. The Microsoft SDK is more reliable and accurate than OpenNI.

:)
I am on Mac, actually.
OK, I think one piece of duct tape over half of the Kinect's eye will solve the problem for now. :)
I can use the CoM instead of the head...
I have thought about doing something with cv.jit and the depth-camera picture, but maybe it will not be stable enough for me, because I need to keep tracking when someone jumps. I am not sure about those blobs.

Thanks a lot for your patience.

I continue to believe that you do not need to cover the laser or the sensor. You can do all the filtering with math based on the coordinates in Max. Naturally, you are welcome to do whatever works for you and I wish you best of luck. :-)

I do want to additionally caution you that tape might leave a residue on, or damage, the plastic covers of the Kinect. When you eventually remove it, you might find you have permanently damaged it. Perhaps you can cover the eye with something more easily removed, like thick paper, and use gaff tape (a tape made for theater hot lights) to secure it to the Kinect.