
Colour mapping Kinect data in 3D

[youtube]http://www.youtube.com/watch?v=GiA8IMRhk3A[/youtube]

Quick update on my Processing and Kinect journey. This time it’s mapping the colour information from the on-board RGB camera to the infrared depth data.

The OpenKinect libraries come with a few demo sketches that pretty much have the nuts and bolts of it already. Here’s how it works:

  1. OpenKinect returns the depth data as a single flat array.
  2. It also returns the video data, which is read as an image each frame.
  3. The depth data is then adjusted a little to map it to the screen and allow for perspective distortion.
  4. Each depth point is then mapped to a static X and Y point, which matches the same X and Y in the video feed (sort of, see below).
  5. It’s then simply a case of reading the colour at that point in the video feed and colouring the corresponding depth point; there’s a stripped-down sketch of this after the list.
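For reference, here’s a stripped-down sketch of that loop, using the Open Kinect for Processing library. Treat it as a rough outline rather than the exact code: depending on your library version the setup calls may be initDepth()/initVideo() or start()/enableDepth()/enableRGB(), and the perspective step is a crude stand-in for a proper depth-to-world conversion.

import org.openkinect.processing.*;

Kinect kinect;
int w = 640; // depth and video resolution
int h = 480;

void setup() {
  size(800, 600, P3D);
  kinect = new Kinect(this);
  kinect.initDepth(); // older versions: kinect.start(); kinect.enableDepth(true);
  kinect.initVideo(); // older versions: kinect.enableRGB(true);
}

void draw() {
  background(0);
  translate(width / 2, height / 2, -50);

  int[] depth = kinect.getRawDepth();    // 1. depth data as a flat array
  PImage video = kinect.getVideoImage(); // 2. the RGB frame as an image

  for (int x = 0; x < w; x += 4) {       // every 4th point keeps the frame rate usable
    for (int y = 0; y < h; y += 4) {
      int d = depth[x + y * w];
      if (d == 0 || d >= 2047) continue; // skip invalid readings

      // 3/4. rough perspective adjustment so nearer points spread out more
      float z = map(d, 0, 2047, 0, 400);
      float factor = z / 400.0 + 0.5;
      float px = (x - w / 2) * factor;
      float py = (y - h / 2) * factor;

      // 5. colour the depth point from the matching video pixel
      // (see below for the small scale/offset correction I ended up needing)
      stroke(video.get(x, y));
      point(px, py, -z);
    }
  }
}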

However, it did require a little fiddling with the video scale to get it to match the scaling of the infrared data. The raw video was slightly bigger and offset, so the ‘texture’ didn’t sit correctly on the model.

Here’s the initial feed without correction. Note how the image feed is too small and sits to the bottom left.

And below is the corrected image. It wasn’t particularly scientific, though: I just tried a few values until it looked about right.

Here’s the code I used:

// sample the colour from the video pixel that lines up with depth point (x, y)
int pixel = kinect.getVideoImage().get(int(x * 0.91) + 9, int(y * 0.94) + 25);

I.e. the video lookup is scaled by 91% and shifted +9 pixels on the x-axis, and scaled by 94% and shifted +25 pixels on the y-axis. I’m not quite sure why this is needed, but if you’re having the same issue, maybe it’ll help. Likewise, if I’m missing something, please let me know. As much as I like a good hack, I also like to know why it’s needed!
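If it helps, here’s the same correction pulled out into a small helper so the fudge factors live in one place (the variable and function names here are just mine):

float videoScaleX = 0.91; // scale factors found by trial and error
float videoScaleY = 0.94;
int videoShiftX = 9;      // pixel offsets, likewise eyeballed
int videoShiftY = 25;

// colour of the video pixel that (roughly) lines up with depth point (x, y)
int videoColourAt(int x, int y) {
  int vx = int(x * videoScaleX) + videoShiftX;
  int vy = int(y * videoScaleY) + videoShiftY;
  return kinect.getVideoImage().get(vx, vy);
}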

Incidentally, the video above was ‘recorded’ from Processing using individual bitmap frames. All the screen capture software I have causes the Kinect video to either freeze or not work. Strange. So handily, Processing can save out screenshots each frame. I then used Time Lapse Assembler to stitch them back together.

To save the first 300 frames to the same folder as your code, it’s as simple as…

// frameCount is Processing's built-in frame counter and increments automatically,
// so there's no need to bump it yourself
if (frameCount <= 300) {
  saveFrame("kinect-####.jpg");
}

So the next step is to record the incoming data and store it, so that I can do interesting effects on preset data sets.
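One rough way to do that might be to pack each raw depth value into a couple of bytes and write one file per frame with saveBytes(), something like this (the file naming is just a placeholder):

// write one binary file per frame, two bytes per depth value
void recordDepthFrame(int[] depth, int frame) {
  byte[] out = new byte[depth.length * 2];
  for (int i = 0; i < depth.length; i++) {
    out[i * 2] = (byte) (depth[i] & 0xFF);            // low byte
    out[i * 2 + 1] = (byte) ((depth[i] >> 8) & 0xFF); // high byte
  }
  saveBytes("depth-" + nf(frame, 4) + ".bin", out);
}

Playing it back should then just be a case of loadBytes() and reversing the packing.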
