So basically, I need to be able to distinguish between heads, so that the system creates a new tracker for each new person. It obviously still loses track non-trivially -- although it's better than before. It re-initializes with face detection every 250 frames or so & also when it detects too many zero pixels (this is after depth segmentation & only looking at the top 25% of blobs for heads -- yes, the Kinect data is noisy).
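For reference, the re-detection logic is roughly along these lines. This is just a minimal OpenCV sketch, not the actual code -- the depth range, blob-size cutoff, zero-pixel limit & Haar cascade are all assumed placeholders:

```python
import cv2
import numpy as np

# Hypothetical parameters -- the real thresholds depend on the scene.
REDETECT_INTERVAL = 250   # re-run face detection every ~250 frames
ZERO_PIXEL_LIMIT = 0.4    # re-detect if >40% of the tracked patch has zero depth
HEAD_FRACTION = 0.25      # only the top 25% of each blob is a head candidate

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_candidates(depth, near_mm=500, far_mm=2500):
    """Depth-segment the frame and keep only the top 25% of each blob."""
    mask = cv2.inRange(depth, near_mm, far_mm)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    heads = []
    for i in range(1, n):          # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 500:             # ignore tiny noise blobs
            continue
        heads.append((x, y, w, int(h * HEAD_FRACTION)))
    return heads

def needs_redetect(frame_idx, depth_patch):
    """Fall back to face detection periodically or when the patch is mostly zeros."""
    too_many_zeros = np.mean(depth_patch == 0) > ZERO_PIXEL_LIMIT
    return frame_idx % REDETECT_INTERVAL == 0 or too_many_zeros

def redetect_faces(gray_image):
    """Re-seed the trackers with a plain Haar face detector."""
    return face_cascade.detectMultiScale(gray_image, 1.2, 5)
```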
Facial recognition is too CPU-intensive, I think. I tried tracking markers using some AR libraries (aruco, ARma) -- just as a prototype -- they were really lightweight & easy to implement, but they were not meant for applications such as mine (nothing comes cheap in my case). I think I'm also ready to do better depth segmentation... & perhaps there is a way to discard some of the Kinect noise.
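One possible way to discard some of that noise -- again just a sketch of an approach, assuming the depth frame arrives as a uint16 numpy array in millimetres -- is to clamp implausible readings, fill the zero "no reading" holes from their neighbours, and median-filter the speckle:

```python
import cv2
import numpy as np

def denoise_depth(depth, max_mm=4000):
    """Rough Kinect-noise cleanup (one option, not the post's actual pipeline)."""
    d = depth.copy()
    d[d > max_mm] = 0                     # drop readings beyond the useful range
    # Morphological closing spreads valid depth into small zero holes;
    # only overwrite pixels that were actually holes.
    filled = cv2.morphologyEx(d, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    d = np.where(d == 0, filled, d)
    # A small median blur kills salt-and-pepper speckle without
    # smearing depth edges too badly.
    return cv2.medianBlur(d.astype(np.uint16), 5)
```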
I am worried about varied lighting conditions, etc. I am half-thinking about just turning the Kinect into a cheap IR sensor sans depth -- since the resolution of the depth information is fairly low & noisy for my purposes. Or just buying really high quality & fast webcams.
Another problem to solve: right now, the Kinect is sucking up CPU -- like 120%... eek. I've traced the problem to the libfreenect driver, but replacing the driver with an up-to-date version (the one on Homebrew is 2 iterations behind) either crashes or runs once in debug mode, using even more CPU than before...