ColorReconstruction.AlignDepthFloatToReconstruction Method

Kinect for Windows 1.8

Aligns a depth float image to the reconstruction volume to calculate the new camera pose.

Syntax

public bool AlignDepthFloatToReconstruction (
         FusionFloatImageFrame depthFloatFrame,
         int maxAlignIterationCount,
         FusionFloatImageFrame deltaFromReferenceFrame,
         out float alignmentEnergy,
         Matrix4 worldToCameraTransform
)

Parameters

  • depthFloatFrame
    Type: FusionFloatImageFrame
    The depth float frame to be processed.

  • maxAlignIterationCount
    Type: Int32
The maximum number of iterations of the algorithm to run. The minimum value is 1. Fewer iterations run faster, but the algorithm may not converge to the correct transformation.

  • deltaFromReferenceFrame
    Type: FusionFloatImageFrame

A pre-allocated float image frame, to be filled with information about how well each observed pixel aligns with the reconstruction model. This output can be processed to create a color rendering, or used as input to additional vision algorithms such as object segmentation. The residual values are normalized to the range −1 to 1 and represent the alignment cost/energy for each pixel: larger magnitudes (positive or negative) indicate more discrepancy, and values nearer zero indicate less discrepancy or less information at that pixel.

    Note that if valid depth exists but no reconstruction model exists behind the depth pixels, a value of 0 (which indicates perfect alignment) is returned for that area. In contrast, where no valid depth occurs, a value of 1 is always returned. Pass null for this parameter if you do not need this output. (A sketch of processing these residuals appears after the Return Value section below.)

  • alignmentEnergy
    Type: Single
Receives a value describing how well the observed frame aligns to the model with the calculated pose. A larger magnitude represents more discrepancy, and a lower value represents less discrepancy. An exact zero (perfect alignment) is unlikely ever to be returned, as every frame from the sensor contains some sensor noise.

  • worldToCameraTransform
    Type: Matrix4
    The best guess at the current camera pose. This is usually the camera pose result from the most recent call to the FusionDepthProcessor.AlignPointClouds or ColorReconstruction.AlignDepthFloatToReconstruction method.
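
The following minimal sketch shows how these parameters fit together. It assumes an existing ColorReconstruction named volume and a populated 640×480 depth float frame named depthFloatFrame (for example, filled via FusionDepthProcessor.DepthToDepthFloatFrame); both names are placeholders for your own objects.

// Requires: using Microsoft.Kinect.Toolkit.Fusion;
int width = 640, height = 480;

// Pre-allocate the optional residual image (or pass null to skip it).
FusionFloatImageFrame deltaFrame = new FusionFloatImageFrame(width, height);

float alignmentEnergy;
Matrix4 pose = volume.GetCurrentWorldToCameraTransform();

bool tracked = volume.AlignDepthFloatToReconstruction(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    deltaFrame,
    out alignmentEnergy,
    pose);

If the call returns true, the internal camera pose has been updated and can be read back with GetCurrentWorldToCameraTransform, as shown in the next sketch.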

Return Value

Type: Boolean
Returns true if successful; returns false if the algorithm encountered a problem aligning the input depth image and could not calculate a valid transformation.
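
A sketch of consuming the outputs follows; it continues from the sketch above (tracked, alignmentEnergy, deltaFrame, width, and height). The energy threshold and the grayscale mapping are illustrative choices, not SDK constants.

const float maxAcceptableEnergy = 0.005f; // hypothetical threshold; tune per scene

if (tracked && alignmentEnergy < maxAcceptableEnergy)
{
    // Alignment succeeded with acceptable residual energy; read back the new pose.
    Matrix4 newPose = volume.GetCurrentWorldToCameraTransform();
}

// Optionally inspect the per-pixel residuals from the delta image.
float[] residuals = new float[width * height];
deltaFrame.CopyPixelDataTo(residuals);

byte[] gray = new byte[residuals.Length];
for (int i = 0; i < residuals.Length; i++)
{
    // Map |residual| in [0, 1] to a display gray level (brighter = more discrepancy).
    gray[i] = (byte)(Math.Min(1.0f, Math.Abs(residuals[i])) * 255.0f);
}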

Remarks

This camera tracking method requires a reconstruction volume, and updates the internal camera pose if successful. The maximum image resolution supported by this method is 640×480.

The method is designed primarily for tracking either static scenes (environment reconstruction) or rigidly moving objects viewed from a static camera (object reconstruction). If tracking failures occur because parts of the scene move non-rigidly or should be treated as outliers, consider using the FusionDepthProcessor.AlignPointClouds method instead. However, such issues are best avoided by carefully designing or constraining usage scenarios wherever possible.
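
When transient tracking failures are expected, one common pattern is to count consecutive failures and, once they accumulate, clear the volume with ResetReconstruction and restart tracking from the last good pose. The sketch below illustrates this; the failure limit and the lastKnownGoodPose variable are illustrative, not part of the SDK.

const int maxFailuresBeforeReset = 100; // hypothetical limit
int consecutiveFailures = 0;
Matrix4 lastKnownGoodPose = Matrix4.Identity;
float alignmentEnergy;

// Inside the per-frame processing loop:
bool tracked = volume.AlignDepthFloatToReconstruction(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    null,
    out alignmentEnergy,
    volume.GetCurrentWorldToCameraTransform());

if (tracked)
{
    consecutiveFailures = 0;
    lastKnownGoodPose = volume.GetCurrentWorldToCameraTransform();
}
else if (++consecutiveFailures >= maxFailuresBeforeReset)
{
    // Too many consecutive failures: clear the volume and restart
    // tracking from the last pose that aligned successfully.
    volume.ResetReconstruction(lastKnownGoodPose);
    consecutiveFailures = 0;
}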

Requirements

Namespace: Microsoft.Kinect.Toolkit.Fusion

Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)

See Also

Reference

ColorReconstruction Class
ColorReconstruction Members
Microsoft.Kinect.Toolkit.Fusion Namespace