Reconstruction.AlignDepthFloatToReconstruction Method

Aligns a depth float image to the Reconstruction volume to calculate the new camera pose.

This camera tracking method requires a Reconstruction volume, and updates the internal camera pose if successful. The maximum image resolution supported in this function is 640x480. Note that this function is designed primarily for tracking either with static scenes when performing environment reconstruction, or objects which move rigidly when performing object reconstruction from a static camera. Consider using the standalone function AlignPointClouds instead if tracking failures occur due to parts of a scene which move non-rigidly or should be considered as outliers, although in practice, such issues are best avoided by carefully designing or constraining usage scenarios wherever possible.

Syntax

public bool AlignDepthFloatToReconstruction (
         FusionFloatImageFrame depthFloatFrame,
         int maxAlignIterationCount,
         FusionFloatImageFrame deltaFromReferenceFrame,
         out float alignmentEnergy,
         Matrix4 worldToCameraTransform
)

Parameters

  • depthFloatFrame
    Type: FusionFloatImageFrame
    The depth float frame to be processed.
  • maxAlignIterationCount
    Type: Int32
    The maximum number of iterations of the algorithm to run. The minimum value is 1. Using only a small number of iterations gives a faster runtime; however, the algorithm may not converge to the correct transformation.
  • deltaFromReferenceFrame
    Type: FusionFloatImageFrame
    Optionally, a pre-allocated float image frame to be filled with information about how well each observed pixel aligns with the passed-in reference frame. This may be processed to create a color rendering, or used as input to additional vision algorithms such as object segmentation. These residual values are normalized to the range -1 to 1 and represent the alignment cost/energy for each pixel. Larger-magnitude values (either positive or negative) represent more discrepancy, and lower values represent less discrepancy or less information at that pixel. Note that if valid depth exists, but no reconstruction model exists behind the depth pixels, values of 0 (indicating perfect alignment) will be returned for that area. In contrast, where no valid depth occurs, values of 1 will always be returned. Pass null if not required.
  • alignmentEnergy
    Type: Single
    A float to receive a value describing how well the observed frame aligns to the model with the calculated pose. A larger-magnitude value represents more discrepancy, and a lower value represents less discrepancy. Note that an exact 0 (perfect alignment) value is unlikely ever to be returned, as every frame from the sensor contains some sensor noise.
  • worldToCameraTransform
    Type: Matrix4
    The best guess of the camera pose (usually the camera pose result from the last AlignPointClouds or AlignDepthFloatToReconstruction).
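As a minimal sketch of how the residuals described for deltaFromReferenceFrame might be consumed: the snippet below copies the residual pixels out of the frame and maps each normalized value to a grayscale intensity. The deltaFromReference variable name and the grayscale mapping are illustrative assumptions, not part of this reference; CopyPixelDataTo follows the FusionFloatImageFrame API.

    // deltaFromReference is a FusionFloatImageFrame previously passed to
    // AlignDepthFloatToReconstruction (hypothetical variable name).
    float[] residuals = new float[deltaFromReference.Width * deltaFromReference.Height];
    deltaFromReference.CopyPixelDataTo(residuals);

    // Map each normalized residual in [-1, 1] to a byte intensity:
    // larger magnitude (more discrepancy) -> brighter pixel.
    byte[] pixels = new byte[residuals.Length];
    for (int i = 0; i < residuals.Length; i++)
    {
        pixels[i] = (byte)(Math.Abs(residuals[i]) * 255.0f);
    }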

Return Value

Type: Boolean
Returns true if successful; returns false if the algorithm encountered a problem aligning the input depth image and could not calculate a valid transformation.

This method raises the following exceptions:

  • ArgumentNullException
    Thrown when the depthFloatFrame parameter is null.
  • ArgumentException
    Thrown when the depthFloatFrame parameter is an incorrect image size, or when the maxAlignIterationCount parameter is less than 1 or otherwise an incorrect value.
  • InvalidOperationException
    Thrown when the Kinect Runtime could not be accessed, the device is not connected, or the call failed for an unknown reason.
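
Example

A minimal sketch of how this method might be called inside a per-frame tracking loop. The volume, depthFloatFrame, and deltaFromReference objects, the iteration count, and the control flow are illustrative assumptions, not part of this reference; GetCurrentWorldToCameraTransform and IntegrateFrame are other members of the Reconstruction class.

    // Assumed pre-existing objects (hypothetical names):
    //   volume             - a Reconstruction created beforehand
    //   depthFloatFrame    - a FusionFloatImageFrame holding the current depth frame
    //   deltaFromReference - an optional pre-allocated FusionFloatImageFrame (may be null)
    Matrix4 worldToCamera = volume.GetCurrentWorldToCameraTransform();

    float alignmentEnergy;
    bool trackingSucceeded = volume.AlignDepthFloatToReconstruction(
        depthFloatFrame,
        7,                   // maxAlignIterationCount: illustrative value, must be >= 1
        deltaFromReference,  // pass null if per-pixel residuals are not needed
        out alignmentEnergy,
        worldToCamera);

    if (trackingSucceeded)
    {
        // On success the internal camera pose has been updated;
        // the frame can now be integrated into the volume.
        volume.IntegrateFrame(
            depthFloatFrame,
            1,               // maxIntegrationWeight: illustrative value
            volume.GetCurrentWorldToCameraTransform());
    }
    else
    {
        // Tracking failed: keep the previous pose, and consider the
        // AlignPointClouds fallback suggested in the remarks above.
    }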

Requirements

Namespace: Microsoft.Kinect.Toolkit.Fusion

Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)

See Also

Reference

Reconstruction Class
Reconstruction Members
Microsoft.Kinect.Toolkit.Fusion Namespace