ColorReconstruction.AlignPointClouds Method
Kinect for Windows 1.8
Aligns two sets of overlapping oriented point clouds and calculates the camera's relative pose.
Syntax
public bool AlignPointClouds (
FusionPointCloudImageFrame referencePointCloudFrame,
FusionPointCloudImageFrame observedPointCloudFrame,
int maxAlignIterationCount,
FusionColorImageFrame deltaFromReferenceFrame,
out float alignmentEnergy,
ref Matrix4 referenceToObservedTransform
)
Parameters
referencePointCloudFrame
Type: FusionPointCloudImageFrame
A reference point cloud frame. This image must be the same size and have the same camera parameters as the observedPointCloudFrame parameter.
observedPointCloudFrame
Type: FusionPointCloudImageFrame
An observed point cloud frame. This image must be the same size and have the same camera parameters as the referencePointCloudFrame parameter.
maxAlignIterationCount
Type: Int32
The maximum number of iterations of the algorithm to run.
deltaFromReferenceFrame
Type: FusionColorImageFrame
Optional pre-allocated color image frame that receives color-coded data from the camera tracking. This image can be used as input to additional vision algorithms, such as object segmentation. If specified, this image must be the same size and have the same camera parameters as the referencePointCloudFrame and observedPointCloudFrame parameters. If you do not need this output image, specify null.
The values in the received image vary depending on whether the pixel was a valid pixel used in tracking (inlier) or failed in different tests (outlier). Inliers are color shaded depending on the residual energy at that point. Higher discrepancy between vertices is indicated by more saturated colors. Less discrepancy between vertices (less information at that pixel) is indicated by less saturated colors (that is, more white). If the pixel is an outlier, it will receive one of the values listed in the following table.
Value        Description
0xFF000000   The input vertex was invalid (for example, a vertex with an input depth of zero), or the vertex had no correspondences between the two point cloud images.
0xFF008000   The vertex was rejected as an outlier because the distance between the corresponding vertices was too large.
0xFF800000   The vertex was rejected as an outlier because the difference in normal angle between the point clouds was too large.
alignmentEnergy
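For illustration, the outlier codes above can be tallied by copying the frame's pixel data after a call to AlignPointClouds. This is a sketch only; the deltaFrame variable and its dimensions are placeholders for a frame you allocated earlier:

```csharp
// Sketch: count outlier pixels in a deltaFromReferenceFrame image.
// deltaFrame is an illustrative FusionColorImageFrame allocated by the caller.
int[] deltaPixels = new int[deltaFrame.Width * deltaFrame.Height];
deltaFrame.CopyPixelDataTo(deltaPixels);   // packed 32-bit ARGB values

int invalid = 0, distanceRejected = 0, normalRejected = 0;
foreach (int pixel in deltaPixels)
{
    switch (unchecked((uint)pixel))
    {
        case 0xFF000000: invalid++; break;           // invalid vertex or no correspondence
        case 0xFF008000: distanceRejected++; break;  // vertex distance too large
        case 0xFF800000: normalRejected++; break;    // normal angle difference too large
        // any other value is an inlier, shaded by residual energy
    }
}
```

A high proportion of outlier pixels is a useful signal that tracking quality is degrading, even before AlignPointClouds returns false.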
Type: Single
Optional value that receives a measure in the range [0.0f, 1.0f] that describes how well the observed frame aligns to the model with the calculated pose (mean distance between matching points in the point clouds). Larger values represent more discrepancy. Note that it is unlikely that an exact value of 0.0f (perfect alignment) will ever be returned, as every frame from the sensor will contain some sensor noise. Specify null if you do not need this information.
referenceToObservedTransform
Type: Matrix4
On input, a matrix containing the initial guess at the transform (for example, the pose calculated for the previous frame). When tracking succeeds, this is updated to the calculated relative camera pose. Tracking failure is indicated by a value of identity.
Return Value
Type: Boolean
Returns true if successful; returns false if the algorithm encountered a problem aligning the input point clouds and could not calculate a valid transformation.
Remarks
This method runs on the GPU as an iterative algorithm.
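A minimal call sequence might look like the following sketch. The reconstruction object and the three image frames are assumed to have been created elsewhere with matching sizes and camera parameters; variable names are illustrative:

```csharp
// Sketch of a typical AlignPointClouds call (Kinect for Windows SDK 1.8).
// referencePointCloudFrame is typically ray-cast from the reconstruction volume;
// observedPointCloudFrame is typically computed from the current depth frame.
Matrix4 worldToCameraTransform = Matrix4.Identity;  // initial pose guess
float alignmentEnergy;

bool tracked = reconstruction.AlignPointClouds(
    referencePointCloudFrame,
    observedPointCloudFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    deltaFromReferenceFrame,    // or null if the debug image is not needed
    out alignmentEnergy,
    ref worldToCameraTransform);

if (!tracked)
{
    // Tracking failed: worldToCameraTransform was reset to identity.
    // Consider keeping the previous camera pose or re-initializing tracking.
}
```

Because the transform parameter is passed by reference, seeding it with the pose from the previous frame gives the iterative algorithm a good starting point and improves convergence.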
Requirements
Namespace: Microsoft.Kinect.Toolkit.Fusion
Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)
See Also
Reference
ColorReconstruction Class
ColorReconstruction Members
Microsoft.Kinect.Toolkit.Fusion Namespace