
Understanding Kinect Coordinate Mapping

8 May 2014 · CPOL · 3 min read

This is another post I am publishing after getting some good feedback from my blog subscribers. It seems that a lot of people share a common problem when creating Kinect projects: how to properly project data on top of the color and depth streams.

As you probably know, Kinect integrates a few sensors into a single device:

  • An RGB color camera – 640×480 in version 1, 1920×1080 in version 2
  • A depth sensor – 320×240 in v1, 512×424 in v2
  • An infrared sensor – 512×424 in v2

These sensors have different resolutions and are not perfectly aligned, so their view areas differ. It is obvious, for example, that the RGB camera covers a wider area than the depth and infrared cameras. Moreover, elements visible from one camera may not be visible from the others. Here’s how the same area can be viewed by the different sensors:

[Video: the same area as seen by each sensor]

An Example

Suppose we want to project the human body joints on top of the color image. Body tracking is performed using the depth sensor, so the coordinates (X, Y, Z) of the body points are correctly aligned with the depth frame only. If you try to project the same body joint coordinates on top of the color frame, you’ll find out that the skeleton is totally out of place:

[Image: Kinect coordinate mapping — wrong]

CoordinateMapper

Of course, Microsoft is aware of this, so the SDK comes with a handy utility named CoordinateMapper. CoordinateMapper's job is to convert a point from the 3D camera space into the corresponding point in the 2D color or depth space, and vice versa. CoordinateMapper is a property of the KinectSensor class, so it is tied to each Kinect sensor instance.

Using CoordinateMapper

Let’s get back to our example. Here is the C# code that accesses the coordinates of the human joints:

C#
foreach (Joint joint in body.Joints.Values)
{
    // 3D coordinates in camera space, measured in meters
    CameraSpacePoint cameraPoint = joint.Position;
    float x = cameraPoint.X;
    float y = cameraPoint.Y;
    float z = cameraPoint.Z;
}
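One detail worth handling before mapping anything: joints the sensor could not track come back positioned at the origin, so they end up drawn in the wrong place. A minimal sketch, assuming the standard Kinect v2 SDK types (TrackingState is part of the SDK):

C#
foreach (Joint joint in body.Joints.Values)
{
    // Inferred joints are estimates; NotTracked joints sit at (0, 0, 0),
    // so skip them instead of projecting a bogus point.
    if (joint.TrackingState == TrackingState.NotTracked)
        continue;

    CameraSpacePoint cameraPoint = joint.Position;
    // ... map and draw the point here ...
}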

Note: Please refer to my previous article (Kinect version 2: Overview) about finding the body joints.

The coordinates are 3D points, packed into a CameraSpacePoint struct. Each CameraSpacePoint has X, Y and Z values. These values are measured in meters.

The dimensions of the visual elements are measured in pixels, so we somehow need to convert the real-world 3D values into 2D screen pixels. Kinect SDK provides two additional structs for 2D points: ColorSpacePoint and DepthSpacePoint.

Using CoordinateMapper, it is super-easy to convert a CameraSpacePoint into either a ColorSpacePoint or a DepthSpacePoint:

C#
ColorSpacePoint colorPoint = _sensor.CoordinateMapper.MapCameraPointToColorSpace(cameraPoint);
DepthSpacePoint depthPoint = _sensor.CoordinateMapper.MapCameraPointToDepthSpace(cameraPoint);

This way, a 3D point has been mapped into a 2D point, so we can project it on top of the color (1920×1080) and depth (512×424) bitmaps.
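One caveat: when a camera point cannot be mapped (for example, when the depth value at that point is zero), the mapper returns infinite coordinates. A small guard before using the result, assuming the v2 SDK behavior and a DrawPoint method like the one below:

C#
ColorSpacePoint colorPoint = _sensor.CoordinateMapper.MapCameraPointToColorSpace(cameraPoint);

// Unmappable points come back as negative infinity; skip them
// rather than positioning an element far off-screen.
if (!float.IsInfinity(colorPoint.X) && !float.IsInfinity(colorPoint.Y))
{
    DrawPoint(colorPoint);
}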

How About Drawing the Joints?

You can draw the joints using a Canvas element, a DrawingImage object or whatever you prefer.

This is how you can draw the joints on a Canvas:

C#
public void DrawPoint(ColorSpacePoint point)
{
    // Create an ellipse.
    Ellipse ellipse = new Ellipse
    {
        Width = 20,
        Height = 20,
        Fill = Brushes.Red
    };

    // Position the ellipse according to the point's coordinates.
    Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2);
    Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2);

    // Add the ellipse to the canvas.
    canvas.Children.Add(ellipse);
}

Similarly, you can draw a DepthSpacePoint on top of the depth frame, and you can draw the bones as lines between two points. This is the result of a perfect coordinate mapping on top of the color image:
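Drawing a bone follows the same pattern as DrawPoint; here is a sketch that connects two already-mapped points with a WPF Line (DrawBone is a name I am introducing for illustration):

C#
public void DrawBone(ColorSpacePoint first, ColorSpacePoint second)
{
    // Connect two mapped joints with a straight line segment.
    Line bone = new Line
    {
        X1 = first.X,
        Y1 = first.Y,
        X2 = second.X,
        Y2 = second.Y,
        StrokeThickness = 5,
        Stroke = Brushes.Red
    };

    canvas.Children.Add(bone);
}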

[Image: Kinect coordinate mapping — right]

Note: Please refer to my previous article (Kinect v2 color, depth and infrared streams) to learn how you can create the camera bitmaps.

Download the source code from GitHub and enjoy yourself.

In this tutorial I used Kinect for Windows version 2 code; however, everything applies to the older sensor and SDK 1.8 as well. Here are the corresponding class and struct names you should be aware of. There are some minor differences in naming conventions, but the core functionality is the same.

Version 1          Version 2
SkeletonPoint      CameraSpacePoint
ColorImagePoint    ColorSpacePoint
DepthImagePoint    DepthSpacePoint
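For reference, the v1 (SDK 1.8) mapping also goes through CoordinateMapper, but the methods take the target frame format explicitly. A sketch, assuming a 640×480 color stream and a SkeletonPoint obtained from skeleton tracking:

C#
// SDK 1.8: map a skeleton-space point into color-image coordinates.
ColorImagePoint colorPoint = _sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
    skeletonPoint, ColorImageFormat.RgbResolution640x480Fps30);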

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


