FusionDepthProcessor Members

The following tables list the members exposed by the FusionDepthProcessor type.

Public Fields

DefaultAlignIterationCount
The default align iteration count. The value is 7.

DefaultColorIntegrationOfAllAngles
The default color integration angle: no angle restriction; color is integrated over +/-180 degrees (the fastest processing option).

DefaultIntegrationWeight
The default integration weight. The value is 150.

DefaultMaximumDepth
The default maximum depth value, in meters. The value is 8.0.

DefaultMinimumDepth
The default minimum depth value, in meters. The value is 0.35.
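These constants are intended to be passed straight into the processing methods listed below. The following is a minimal sketch of that usage, assuming the Kinect for Windows SDK 1.x managed API; the 640x480 resolution and the depthPixels buffer are illustrative assumptions, not part of this reference page.

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Fusion;

static class DefaultsSketch
{
    // Hypothetical frame size; substitute your sensor's depth resolution.
    const int Width = 640, Height = 480;

    static void ConvertWithDefaults(DepthImagePixel[] depthPixels)
    {
        var depthFloatFrame = new FusionFloatImageFrame(Width, Height);

        // Clip depth to the documented defaults: 0.35 m near, 8.0 m far.
        FusionDepthProcessor.DepthToDepthFloatFrame(
            depthPixels, Width, Height, depthFloatFrame,
            FusionDepthProcessor.DefaultMinimumDepth,  // 0.35
            FusionDepthProcessor.DefaultMaximumDepth,  // 8.0
            false);                                    // do not mirror the depth image
    }
}
```

DefaultAlignIterationCount and DefaultIntegrationWeight plug into AlignPointClouds and Reconstruction.IntegrateFrame in the same way, as the larger sketch after the method list shows.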

Public Methods

AlignPointClouds (static)

The AlignPointClouds function uses an iterative algorithm to align two sets of oriented point clouds and calculate the camera's relative pose. This is a generic function that can be used independently of a Reconstruction volume, with sets of overlapping point clouds.

All images must be the same size and have the same camera parameters.

To find the frame-to-frame relative transformation between two sets of point clouds in the camera-local frame of reference (created by DepthFloatFrameToPointCloud), set the observedToReferenceTransform parameter to the identity transform.

To calculate the frame-to-model pose transformation between point clouds calculated from new depth frames with DepthFloatFrameToPointCloud and point clouds calculated from an existing Reconstruction volume with CalculatePointCloud (e.g. from the previous frame), pass the CalculatePointCloud image as the reference frame and the current depth frame point cloud from DepthFloatFrameToPointCloud as the observed frame. Set observedToReferenceTransform to the previous frame's calculated camera pose, the same pose that was used in the CalculatePointCloud call.

To calculate the pose transformation between new depth frames and an existing Reconstruction volume, pass in the previous frame's point cloud from RenderReconstruction as the reference frame, and the current frame's point cloud (from DepthFloatFrameToPointCloud) as the observed frame. Set the observedToReferenceTransform parameter to the previous frame's calculated camera pose.

Note that here the current frame's point cloud will be in the camera-local frame of reference, whereas the raycast points and normals will be in the global/world coordinate system. By passing the observedToReferenceTransform, you make the algorithm aware of the transformation between the two coordinate systems.

The observedToReferenceTransform pose supplied can also take into account information you may have from other sensors or sensing mechanisms to aid the tracking. To do this, multiply the relative frame-to-frame delta transformation from the other sensing system by the previous frame's pose before passing it to this function. Note that any delta transform used should be in the same coordinate system as that returned by the DepthFloatFrameToPointCloud calculation. A sketch of the frame-to-model case appears after this method list.

DepthFloatFrameToPointCloud (static)
Constructs an oriented point cloud in the local camera frame of reference from a depth float image frame.

DepthToDepthFloatFrame (static)
Converts Kinect depth frames in unsigned short format to depth frames in float format, where each value represents the distance from the camera in meters (measured parallel to the optical center axis).

GetDeviceInfo (static)
Enumerates the devices capable of running Kinect Fusion. This enables a specific device to be chosen when calling NuiFusionCreateReconstruction, if desired.

ShadePointCloud (static)
Overloaded. Creates a visible, color-shaded image of a point cloud and its normals.
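For orientation, here is a minimal frame-to-model tracking sketch that ties these members together: DepthToDepthFloatFrame, DepthFloatFrameToPointCloud, AlignPointClouds, and ShadePointCloud, plus Reconstruction.CalculatePointCloud and Reconstruction.IntegrateFrame from the volume class. It assumes the Kinect for Windows SDK 1.x managed API; the reconstruction parameters, the 640x480 frame size, and the caller-supplied depthPixels buffer are illustrative assumptions, so verify the exact signatures against the FusionDepthProcessor and Reconstruction class pages.

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Fusion;

class FrameToModelSketch
{
    const int Width = 640, Height = 480;   // hypothetical depth resolution

    Reconstruction volume;
    FusionFloatImageFrame depthFloatFrame;
    FusionPointCloudImageFrame observedPointCloud;   // from the current depth frame
    FusionPointCloudImageFrame referencePointCloud;  // raycast from the volume
    FusionColorImageFrame deltaFromReference;
    FusionColorImageFrame shadedSurface;
    Matrix4 worldToCamera = Matrix4.Identity;

    public FrameToModelSketch()
    {
        // GetDeviceInfo can be called here to pick a device index before creating
        // the volume; -1 is assumed to select the default device.
        var parameters = new ReconstructionParameters(256, 384, 384, 384);
        volume = Reconstruction.FusionCreateReconstruction(
            parameters, ReconstructionProcessor.Amp, -1, Matrix4.Identity);

        depthFloatFrame = new FusionFloatImageFrame(Width, Height);
        observedPointCloud = new FusionPointCloudImageFrame(Width, Height);
        referencePointCloud = new FusionPointCloudImageFrame(Width, Height);
        deltaFromReference = new FusionColorImageFrame(Width, Height);
        shadedSurface = new FusionColorImageFrame(Width, Height);
    }

    public bool ProcessFrame(DepthImagePixel[] depthPixels)
    {
        // 1. Raw depth -> float depth in meters, clipped to the default range.
        FusionDepthProcessor.DepthToDepthFloatFrame(
            depthPixels, Width, Height, depthFloatFrame,
            FusionDepthProcessor.DefaultMinimumDepth,
            FusionDepthProcessor.DefaultMaximumDepth,
            false);

        // 2. Observed point cloud, in the camera-local frame of reference.
        FusionDepthProcessor.DepthFloatFrameToPointCloud(
            depthFloatFrame, observedPointCloud);

        // 3. Reference point cloud raycast from the volume at the previous pose;
        //    its points and normals are in the global/world coordinate system.
        volume.CalculatePointCloud(referencePointCloud, worldToCamera);

        // 4. Align observed to reference. Seeding with the previous frame's pose
        //    tells the algorithm how the two coordinate systems relate.
        Matrix4 observedToReference = worldToCamera;
        bool tracked = FusionDepthProcessor.AlignPointClouds(
            referencePointCloud, observedPointCloud,
            FusionDepthProcessor.DefaultAlignIterationCount,
            deltaFromReference, ref observedToReference);

        if (tracked)
        {
            worldToCamera = observedToReference;

            // 5. Fuse the new depth data into the volume at the tracked pose.
            volume.IntegrateFrame(depthFloatFrame,
                FusionDepthProcessor.DefaultIntegrationWeight, worldToCamera);
        }

        // 6. Shade the raycast point cloud for display; the normals image is
        //    omitted here (null), which the overload is assumed to permit.
        FusionDepthProcessor.ShadePointCloud(
            referencePointCloud, worldToCamera, shadedSurface, null);

        return tracked;
    }
}
```

Holding the image frames as reusable fields, as above, follows the pattern in the SDK's Kinect Fusion samples: the frame types wrap native resources, so allocating them once and reusing them per frame avoids needless churn.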

See Also

Reference

FusionDepthProcessor Class
Microsoft.Kinect.Toolkit.Fusion Namespace