Avateering C# Sample

Kinect for Windows 1.5, 1.6, 1.7, 1.8

Note

To open this sample in Visual Studio 2010, you must have XNA Game Studio 4 installed, which you can find on the Software Requirements page. XNA Game Studio 4 is not supported in Visual Studio 2012.

Overview

Demonstrates how to use the Joint Orientation API to drive avatar animation and retargeting using XNA Game Studio.

These key combinations control the camera:

Keys                 Action
A or Left Cursor     Pan camera left
D or Right Cursor    Pan camera right
W or Up Cursor       Tilt camera up
S or Down Cursor     Tilt camera down
Z                    Zoom out
X                    Zoom in

The following options are also implemented:

Keys      Action
Escape    Quit
F11       Toggle full-screen mode
B         Toggle display of the 3D skeleton with constraint cones (note: the skeleton shown is before constraints are applied)
C         Toggle bone orientation constraints (default: on)
F         Toggle bone orientation smoothing filter (default: on)
G         Toggle drawing of local axes for each joint in the avatar mesh (default: off)
H         Toggle fixed Hip Center height (default: on); can be used in combination with O
I         Toggle self-intersection constraints (default: on)
K         Toggle avateering (default: on)
M         Toggle mirroring of the avatar (default: on)
N         Toggle near mode and seated mode (default: off)
O         Toggle sensor height compensation, which attempts to place the skeleton feet on the floor (default: off, as H is on by default)
R         Reset the viewing camera and skeleton filters
T         Toggle sensor tilt compensation (default: on)
V         Toggle display of the avatar only when a skeleton is detected (default: on)

To Open the Sample in Visual Studio

  1. Click the Start button, select All Programs > Kinect for Windows SDK version number > Developer Toolkit Browser version number (Kinect for Windows). You can also click the Start button and type kinect toolkit.
  2. In the list of samples, find this sample and click the Install button.
  3. Select a location to install the sample to.
  4. Double-click the Solution file (.sln) that was installed.

To Build the Sample

In Visual Studio 2010 Ultimate:

  1. Start Visual Studio and from the menu, select File > Open > Open Project/Solution.
  2. Browse to the directory that contains the unzipped sample files. Double-click the Visual Studio Solution (.sln) file.
  3. From the menu, select Build > Build Solution.

In Visual C# 2010 Express:

  1. Start Visual C# and from the menu, select File > Open Project.
  2. Browse to the directory that contains the unzipped sample files. Double-click the Visual Studio Solution (.sln) file.
  3. From the menu, select Debug > Build Solution.

In Visual C++ 2010 Express:

  1. Start Visual C++ and from the menu, select File > Open > Project/Solution.
  2. Browse to the directory that contains the unzipped sample files. Double-click the Visual Studio Solution (.sln) file.
  3. From the menu, select Debug > Build Solution.

In Visual Studio 2012 Ultimate:

  1. Start Visual Studio 2012 and from the menu, select File > Open > Project/Solution.
  2. Browse to the directory that contains the unzipped sample files. Double-click the Visual Studio Solution (.sln) file.
  3. From the menu, select Build > Build Solution.

To Run the Sample

To run the sample in the Visual Studio debugger, from the menu select Debug > Start Debugging. To run the sample in Visual Studio 2010 Ultimate without debugging, from the menu select Debug > Start Without Debugging.

Avateering

The Avateering sample application demonstrates how to animate a 3D humanoid avatar model using the Kinect for Windows SDK and XNA. Raw skeleton joint positions are first filtered and refined before being passed to the Bone Orientation API in the Kinect for Windows SDK, which calculates bone orientations using forward kinematics. The bone orientations are then constrained and filtered, and finally re-targeted to the bones of the 3D model mesh for animation.
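
The bone orientations themselves are read from the skeleton’s BoneOrientations collection in the Kinect for Windows SDK 1.x. The following minimal sketch (not the sample’s actual code) shows how the hierarchical rotation of each bone of a tracked skeleton can be read for later filtering and re-targeting; the sensor setup and event subscription are assumed to exist elsewhere.

    // Minimal sketch (assumes "using Microsoft.Kinect;" and a started KinectSensor):
    // read the hierarchical bone orientations of each tracked skeleton.
    private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null)
            {
                return;
            }

            Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            foreach (Skeleton skeleton in skeletons)
            {
                if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
                {
                    continue;
                }

                // Each BoneOrientation stores the rotation of the bone ending at
                // EndJoint, both relative to the parent bone (HierarchicalRotation)
                // and relative to camera space (AbsoluteRotation).
                foreach (BoneOrientation bone in skeleton.BoneOrientations)
                {
                    Microsoft.Kinect.Vector4 rotation = bone.HierarchicalRotation.Quaternion;
                    // ... filter, constrain, and re-target this rotation for the avatar.
                }
            }
        }
    }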

Bone orientations are typically used for animation instead of joint positions to reduce the problem of model stretching. This stretching effect is typically seen when animating with joint positions, both when the avateer’s body size and proportions differ from those of the mesh model being animated, and when certain poses make bones appear to change length in the skeletal tracking system.

Avateering using bone orientations enables a model of any size to be animated by a person of any size without stretching; however, when forward kinematics is used to calculate bone orientations, the avateer’s size and shape still have some impact on the result. A real-life pose may not be reflected exactly in the avatar if the proportions differ. For example, if the 3D model’s neck and head are shorter (or, conversely, the avateer’s arms are longer), the hands may appear to float above the top of the 3D model’s head when in reality they are resting on the avateer’s head.

Forward kinematics is only one approach for calculating bone orientations for avatar animation, and when filtered and constrained it provides reasonable visual quality with low processing requirements. Other methods, such as inverse kinematics or a ragdoll physics-based approach, should be considered where exact placement of avatar limbs at 3D locations in space, or interaction with objects in a physics world, is required.

Filtering

For each skeleton frame, we filter the joint positions, calculate the bone orientations, and then filter the bone orientation angles before using these angles to animate the “Dude” avatar mesh.

We implement eight filters; in order of application, these are:

  1. Clipped Leg Joint Positions Double Exponential Filter - This filter applies heavy time-based filtering to the leg joints to reduce the incidence of them becoming jumpy when clipped by the lower edge of the camera image. The overall smoothing amount is configurable with the Transform Smooth parameters. The output of the filter is the smoothed joint positions blended (linearly interpolated) with the raw positions; the amount of blending is determined by the tracking state of the joints. See SkeletonJointsFilterClippedLegs.cs for more information.
  2. Constrain Torso Self-Intersections - This filter collides the hand and wrist joints against a cylinder which represents the torso. The cylinder size is defined by a multiple of the radius of the shoulders and the distance between the Hip Center and Shoulders Center. If the hand or wrist joints are inside the cylinder, they are translated away from the center of the cylinder (along the normal at the intersection point) to be outside. See SkeletonJointsSelfIntersectionConstraint.cs for more information.
  3. Compensate for Tilt - This filter rotates the joint positions around the Hip Center position in the camera Y axis to invert the tilt of the camera as returned by sensorElevationAngle, making the skeleton appear more upright in the XNA world coordinate system. Note that there are three possible sources of tilt information, namely the floor plane returned from Skeletal Tracking, the raw 3D accelerometer in the camera (not currently available) and the sensorElevationAngle / camera motor tilt value. We use the sensorElevationAngle here as the floor plane is not always seen. See SkeletonJointsSensorTiltCorrection.cs for more information.
  4. Compensate for Floor Height - This filter attempts to correct for the sensor height and move the feet of the avatar to the XNA ground plane, using either the floor plane information returned from Skeletal Tracking or the 3D location of the lowest foot. Using the lowest foot position reduces the visual effect of jumping when an avateer jumps, because we perform a running average on the floor location based on the foot position, so the floor appears higher while they jump. Note that this filter is off by default, as it is possible for both feet to be clipped by the lower edge of the camera image and for the floor plane to not be visible and calculated in Skeletal Tracking. Instead, by default we fix the height of the Hip Center joint based on the size of the “Dude” avatar legs; however, this prevents jumping and crouching. See SkeletonJointsSensorOffsetCorrection.cs for more information.
  5. Mirror skeleton - This filter will mirror the skeleton joint positions so the avatar appears to mirror the avateer on-screen. See SkeletonJointsMirror.cs for more information.
  6. Joint Position Double Exponential Filter - This filter applies smoothing to joint positions based on their previous historical locations, removes jitter, and predicts new joint positions to reduce lag (a simplified sketch of double exponential smoothing follows this list). The overall smoothing, correction, jitter reduction, and prediction amounts are configurable with the Transform Smooth parameters. See SkeletonJointsPositionDoubleExponentialFilter.cs for more information.
  7. Bone Orientation Constraints - This filter constrains the range of rotation of a bone to plausible human bio-mechanics to help prevent unnatural poses due to untracked joints. The constraints are described by a direction vector relative to the parent bone, in the parent bone coordinate system, and the maximum angle through which the bone can move relative to this direction vector. This can be visualized as a cone at the end of the parent bone, in which the child bone can move (a simplified cone-clamp sketch appears below). If the joint positions cause a bone orientation outside the valid cone to be calculated, the bone is rotated back toward the specified direction vector to place it inside the valid cone. See BoneOrientationConstraints.cs for more information.
  8. Bone Orientation Double Exponential Filter - This filter applies double exponential smoothing to bone orientations, similar to joint positions above. See BoneOrientationDoubleExponentialFilter.cs for more information.
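
Filters 1, 6, and 8 are all based on double exponential smoothing. The sketch below is a simplified illustration of that idea for a single joint position, using XNA’s Vector3; the sample’s own filters additionally handle correction, jitter radius, and prediction, and the names and parameter values here are illustrative only.

    // Simplified double exponential (Holt) smoothing for one joint position.
    // Illustrative only; not the sample's SkeletonJointsPositionDoubleExponentialFilter.
    using Microsoft.Xna.Framework;

    public class SimpleDoubleExponentialFilter
    {
        private Vector3 filteredPosition;       // smoothed position (level)
        private Vector3 trend;                  // smoothed rate of change
        private bool initialized;

        private readonly float smoothing;       // 0 = raw data, 1 = very smooth (e.g. 0.5f)
        private readonly float trendSmoothing;  // how quickly the trend adapts (e.g. 0.25f)

        public SimpleDoubleExponentialFilter(float smoothing, float trendSmoothing)
        {
            this.smoothing = smoothing;
            this.trendSmoothing = trendSmoothing;
        }

        public Vector3 Update(Vector3 rawPosition)
        {
            if (!this.initialized)
            {
                this.filteredPosition = rawPosition;
                this.trend = Vector3.Zero;
                this.initialized = true;
                return rawPosition;
            }

            Vector3 previous = this.filteredPosition;

            // Blend the new measurement with the previous estimate extrapolated by its trend.
            this.filteredPosition = Vector3.Lerp(rawPosition, previous + this.trend, this.smoothing);

            // Update the trend from the change in the filtered position.
            this.trend = Vector3.Lerp(this.trend, this.filteredPosition - previous, this.trendSmoothing);

            return this.filteredPosition;
        }
    }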

Most filters can be toggled on and off via the keyboard; see the key commands above for more detail.
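
To illustrate the cone test used by the bone orientation constraints (filter 7), the following sketch clamps a bone direction back onto the surface of the constraint cone when it falls outside the allowed angle. It is a simplification of what BoneOrientationConstraints.cs does, and the names are illustrative.

    // Simplified cone constraint: if the bone direction (in the parent bone's
    // coordinate system) is more than maxAngleRadians away from the constraint
    // axis, rotate it back onto the cone surface. Illustrative only.
    using System;
    using Microsoft.Xna.Framework;

    public static class ConeConstraint
    {
        public static Vector3 Apply(Vector3 boneDirection, Vector3 constraintAxis, float maxAngleRadians)
        {
            boneDirection.Normalize();
            constraintAxis.Normalize();

            float dot = MathHelper.Clamp(Vector3.Dot(boneDirection, constraintAxis), -1.0f, 1.0f);
            float angle = (float)Math.Acos(dot);

            if (angle <= maxAngleRadians)
            {
                return boneDirection; // already inside the valid cone
            }

            Vector3 rotationAxis = Vector3.Cross(boneDirection, constraintAxis);
            if (rotationAxis.LengthSquared() < 1e-6f)
            {
                return constraintAxis; // bone points directly away from the axis; snap to it
            }

            rotationAxis.Normalize();

            // Rotate back toward the constraint axis by the amount the bone exceeds the cone.
            Quaternion correction = Quaternion.CreateFromAxisAngle(rotationAxis, angle - maxAngleRadians);
            return Vector3.Transform(boneDirection, correction);
        }
    }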

Re-Targeting Bone Orientations for an Avatar 3D Model in XNA

The Skeletal Tracking pipeline defines its own skeleton and set of bone orientations. However, the number of joints, joint locations, and bone orientations differ between 3D models used in animation (for example, many have multiple spine joints or finger joints). Many 3D models are “rigged” with bones in a T-pose (legs together, arms stretched out horizontally, palms down) for ease of visualization and understanding, as all bones then align with an axis in their identity orientation. However, the binding of the mesh skin to this skeleton can also be performed with a more A-like pose (this “bind pose” is often a T-pose skeleton with rotated arms and legs). An A-like pose approximates a more natural neutral human pose, and hence can often appear to have less mesh deformation for small movements around the neutral pose when simple skinning mesh rendering algorithms are used.

The Avateering sample uses the “Dude” 3D model from the XNA Skinning Sample as an example model. This model has an A-like bind pose and many more joints and bones than the SDK skeleton; hence, a “re-targeting” step is required to convert the SDK's Skeletal Tracking joint locations and bone orientations to the model. While the sample provides a plausible re-targeting to the Dude, this mapping is specific to the Dude skeleton, and other models will require a different re-targeting.

To perform this re-targeting step, first look in your model file (for example, “Dude.fbx”) for the bone indices and their corresponding names. Look for bones whose names are similar to the joints in the SDK.
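
One quick way to list the bone indices and names is to print the model’s bone collection after loading it in XNA, as in the sketch below. This helper is not part of the sample; note that for a model processed with the SkinnedModelProcessor, some or all of the skeleton data may instead live in the SkinningData object stored in model.Tag.

    // Illustrative helper: print the index and name of every bone in a loaded
    // XNA model so they can be matched against the SDK joints. Not sample code.
    using System.Diagnostics;
    using Microsoft.Xna.Framework.Graphics;

    public static class ModelBoneLister
    {
        public static void DumpBones(Model model)
        {
            foreach (ModelBone bone in model.Bones)
            {
                Debug.WriteLine("{0}: {1}", bone.Index, bone.Name);
            }
        }
    }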

Once you have decided on an initial mapping between these model bones and the SDK's bones, enter it into the dictionary in the BuildJointHierarchy function in AvateeringXNA.cs, add your model to the XNA content project, set it to use the SkinnedModelProcessor content processor, and change the AvateeringXNA.cs file to load your model filename instead.
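
As an illustration only, such a mapping from SDK joints to model bone indices might look like the following; the indices and bone-name comments are placeholders, not the Dude’s real values.

    // Illustrative only: a mapping from Kinect joints to model bone indices.
    // The indices are placeholders; use the ones you found for your own model,
    // as the sample does in BuildJointHierarchy in AvateeringXNA.cs.
    using System.Collections.Generic;
    using Microsoft.Kinect;

    public static class ExampleJointMapping
    {
        public static Dictionary<JointType, int> Create()
        {
            return new Dictionary<JointType, int>
            {
                { JointType.HipCenter,      1 },   // e.g. a "Pelvis" bone in the model
                { JointType.Spine,          2 },   // e.g. "Spine1"
                { JointType.ShoulderCenter, 3 },   // e.g. "Neck"
                { JointType.Head,           4 },   // e.g. "Head"
                { JointType.ShoulderLeft,   5 },   // e.g. "L_UpperArm"
                { JointType.ElbowLeft,      6 },   // e.g. "L_Forearm"
                { JointType.WristLeft,      7 },   // e.g. "L_Hand"
                // ... continue for the remaining Kinect joints.
            };
        }
    }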

When running the sample, it is now possible to visualize the 3D model's local joint orientation axes in XNA. These axes may point in different directions in XNA compared with 3D modeling applications such as Autodesk Maya or 3ds Max, so it is important to use the model in XNA as the reference when calculating the re-targeting transformation, as seen in Figure 1. Note how the red, green, and blue joint/bone orientation axes (corresponding to +X, +Y, and +Z respectively) differ in a number of cases.

Figure 1. The Dude in his bind pose in Autodesk Maya (left) and XNA (right).


Turn off avateering by pressing “K” and turn on the model with “V”, then display the local bone orientations by pressing “G” when the model is visible. The model should be displayed in its bind pose. You can then compare it with the skeleton bone orientations in XNA by ensuring the Bone Orientation Constraints filter is enabled and pressing “B”. You may need to disable the constraint cone display in BoneOrientationConstraints.cs to see the orientation axes more clearly.

Axes on the SDK's 3D line skeleton are drawn at their storage location in the skeleton (i.e. at the end joint of the bones), whereas the 3D model axes are likely to be at the start joint of their respective bones. Axis colors are as follows: +X is red, +Y is green, +Z is blue.
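
As a rough illustration of how such axes can be drawn in XNA (this is not the sample’s drawing code; the effect and world matrix are assumed to be set up by the caller):

    // Illustrative sketch: draw a joint's local +X/+Y/+Z axes as red, green and
    // blue lines. 'world' is the joint's world transform; 'effect' is a
    // BasicEffect with VertexColorEnabled = true and View/Projection already set.
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public static class AxisRenderer
    {
        public static void DrawAxes(GraphicsDevice device, BasicEffect effect, Matrix world, float length)
        {
            Vector3 origin = world.Translation;

            VertexPositionColor[] vertices =
            {
                new VertexPositionColor(origin, Color.Red),
                new VertexPositionColor(origin + world.Right * length, Color.Red),     // +X
                new VertexPositionColor(origin, Color.Green),
                new VertexPositionColor(origin + world.Up * length, Color.Green),      // +Y
                new VertexPositionColor(origin, Color.Blue),
                new VertexPositionColor(origin + world.Backward * length, Color.Blue), // +Z
            };

            effect.World = Matrix.Identity; // the vertices are already in world space

            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.DrawUserPrimitives(PrimitiveType.LineList, vertices, 0, 3);
            }
        }
    }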

You can now identify any differences in the axis orientations and add any necessary transformations in the SetJointTransformation re-targeting step in AvateeringXNA.cs to enable correct avateering. Looking at SetJointTransformation in AvateeringXNA.cs, you can see that, compared to the skeleton, the joints in the Dude model have swapped X, Y, and Z axes, plus additional rotations on the arms and legs; the re-targeting step therefore re-orders these axes (by swapping the components of the bone orientation quaternion) and adds manual rotations to the arm and leg bones as required.
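
A hypothetical sketch of such a re-targeting conversion is shown below. The particular component swizzle and offset angle are placeholders chosen for illustration, not the Dude’s actual values; derive the real ones by comparing the visualized axes as described above.

    // Hypothetical re-targeting sketch: convert the SDK's hierarchical bone
    // rotation to an XNA quaternion, re-order its components for a model whose
    // local axes differ from the Kinect skeleton, and concatenate a fixed offset
    // rotation for an A-pose arm bone. The swizzle and angle are placeholders.
    using Microsoft.Xna.Framework;

    public static class RetargetingExample
    {
        public static Quaternion RetargetUpperArm(Microsoft.Kinect.Vector4 kinectRotation)
        {
            // Placeholder axis re-ordering: swap which skeleton axis maps to
            // which model axis by swizzling the quaternion's vector components.
            Quaternion modelRotation = new Quaternion(
                kinectRotation.Z,
                kinectRotation.Y,
                kinectRotation.X,
                kinectRotation.W);

            // Placeholder fixed rotation to account for the model's A-pose arm.
            Quaternion armOffset = Quaternion.CreateFromAxisAngle(Vector3.UnitZ, MathHelper.ToRadians(30.0f));

            // Apply the re-targeted rotation first, then the fixed offset.
            return Quaternion.Concatenate(modelRotation, armOffset);
        }
    }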

When re-targeting, remember that the Kinect for Windows SDK uses a right-hand coordinate system for bone orientations, with the bones lying along the +Y axis of the local bone coordinate system. In a default pose with the avateer facing the camera in the T-pose, the local +Z axis at each joint points forward towards the camera for all bones except the ankle-foot bone, which points up.

It may also be possible to avoid calculating a new re-targeting by copying the Dude’s skeleton into a new mesh and re-binding the skin after rotating the bones to match your model’s skin bind pose. Note that any scaling of the Dude’s skeleton itself (rather than the mesh) will also require a modification to the SkeletonTranslationScaleFactor constants in AvateeringXNA.cs and BoneOrientationConstraints.cs for the avatar to appear correctly scaled when avateering.