
ColorReconstruction.IntegrateFrame Method (FusionFloatImageFrame, FusionColorImageFrame, Int32, Single, Matrix4)

Kinect for Windows 1.8

Integrates depth float data and color data into the reconstruction volume from the specified camera pose.

Syntax

public void IntegrateFrame (
         FusionFloatImageFrame depthFloatFrame,
         FusionColorImageFrame colorFrame,
         int maxIntegrationWeight,
         float maxColorIntegrationAngle,
         Matrix4 worldToCameraTransform
)

Parameters

  • depthFloatFrame
    Type: FusionFloatImageFrame
    The depth float frame to be integrated.

  • colorFrame
    Type: FusionColorImageFrame
    The color frame to be integrated.

  • maxIntegrationWeight
    Type: Int32
    A parameter that controls the temporal smoothing of depth integration. The minimum value is 1. Lower values produce a noisier reconstruction but suit more dynamic environments, because moving objects integrate and disintegrate faster; higher values integrate objects more slowly but provide finer detail with less noise.

  • maxColorIntegrationAngle
    Type: Single

    Angle with respect to the surface normal over which color will be integrated, in degrees. The useful range of values for this parameter is [0.0f, 90.0f]. You can use this parameter to integrate color only when the Kinect sensor is nearly parallel with the surface (that is, the camera direction of view is perpendicular to the surface), or within a specified angle from the surface normal direction. Specify FusionDepthProcessor.DefaultColorIntegrationOfAllAngles to ignore the integration angle and accept color from all angles.

    The angle is measured relative to the surface normal direction and describes an acceptance half-angle. For example, a +/- 90 degree acceptance angle in all directions (that is, a 180 degree hemisphere) relative to the normal integrates color in any orientation of the sensor towards the front of the surface, even when parallel to the surface. An acceptance angle of zero integrates color only along a single ray exactly perpendicular to the surface.

    Restricting the integration angle has a cost at run time. However, not restricting it (for example, by passing FusionDepthProcessor.DefaultColorIntegrationOfAllAngles) causes this method to integrate color from any angle over all voxels along the camera rays around the zero-crossing surface region in the volume, which can cause thin structures to have the same color on both sides.

  • worldToCameraTransform
    Type: Matrix4

    The camera pose. This is usually the camera pose result from the most recent call to the FusionDepthProcessor.AlignPointClouds or ColorReconstruction.AlignDepthFloatToReconstruction method.

    Note

    This method also sets the internal camera pose to this pose.
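
Example

The following is a minimal sketch of a typical per-frame tracking-and-integration loop. It assumes a ColorReconstruction named volume has already been created, and that depthFloatFrame and colorFrame hold the current processed depth and color frames; those variable names and the 45-degree color angle are illustrative, not prescribed by the SDK.

```csharp
using Microsoft.Kinect.Toolkit.Fusion;

// Start from the volume's current internal camera pose.
Matrix4 worldToCamera = volume.GetCurrentWorldToCameraTransform();

// Track the camera by aligning the new depth frame to the reconstruction.
float alignmentEnergy;
bool tracked = volume.AlignDepthFloatToReconstruction(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    null,                 // no per-pixel delta-from-reference output needed
    out alignmentEnergy,
    worldToCamera);

if (tracked)
{
    // AlignDepthFloatToReconstruction updated the internal pose; fetch it.
    worldToCamera = volume.GetCurrentWorldToCameraTransform();

    // Integrate depth and color from that pose. Color is accepted only
    // within +/- 45 degrees of the surface normal (an illustrative choice).
    volume.IntegrateFrame(
        depthFloatFrame,
        colorFrame,
        FusionDepthProcessor.DefaultIntegrationWeight,
        45.0f,
        worldToCamera);
}
```

When tracking fails (tracked is false), applications typically skip integration for that frame rather than corrupt the volume with a bad pose.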

Requirements

Namespace: Microsoft.Kinect.Toolkit.Fusion

Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)

See Also

Reference

ColorReconstruction Class
ColorReconstruction Members
Microsoft.Kinect.Toolkit.Fusion Namespace