Kinect Fusion Basics D2D C++ Sample

Kinect for Windows 1.7, 1.8

This sample illustrates how to use Kinect Fusion for 3D reconstruction.

Important

DirectX 11 feature support is required to run Kinect Fusion.

To determine the DirectX feature level that your graphics card supports, run DxDiag.exe:

  1. Launch DxDiag.exe
  2. Navigate to the “Display” tab.
  3. In the “Drivers” area, there is a text field labeled “Feature Levels:”
  4. If 11.0 is in the list of supported feature levels, then Kinect Fusion will run in GPU mode.

Note: Simply having DirectX 11 installed is not enough; you must also have hardware that supports the DirectX 11.0 feature set.
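
If you prefer to check the feature level from code rather than with DxDiag, the short sketch below (not part of the sample; the function name is illustrative) calls D3D11CreateDevice against the default hardware adapter and reports whether it reaches feature level 11.0.

```cpp
// Illustrative check for DirectX 11.0 hardware feature level support.
#include <Windows.h>
#include <d3d11.h>

#pragma comment(lib, "d3d11.lib")

bool SupportsKinectFusionGpuMode()
{
    // Request feature levels from highest to lowest; the runtime reports the
    // best level the default hardware adapter supports.
    const D3D_FEATURE_LEVEL requestedLevels[] = {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,
        D3D_FEATURE_LEVEL_9_1
    };

    D3D_FEATURE_LEVEL achievedLevel = D3D_FEATURE_LEVEL_9_1;

    HRESULT hr = D3D11CreateDevice(
        nullptr,                      // default adapter
        D3D_DRIVER_TYPE_HARDWARE,     // hardware devices only (no WARP/reference)
        nullptr,
        0,
        requestedLevels,
        ARRAYSIZE(requestedLevels),
        D3D11_SDK_VERSION,
        nullptr,                      // no device object needed, just the achieved level
        &achievedLevel,
        nullptr);

    return SUCCEEDED(hr) && achievedLevel >= D3D_FEATURE_LEVEL_11_0;
}
```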

Overview

The sample uses the following APIs to do this:
NuiGetSensorCount function: Get the number of sensors that are ready for use.
NuiCreateSensorByIndex function and INuiSensor interface: Create an interface that represents a connected sensor.
INuiSensor::NuiStatus method: Check the sensor status to see if the sensor is connected.
INuiSensor::NuiInitialize method and NUI_INITIALIZE_FLAG_USES_DEPTH constant: Initialize the sensor to stream out depth data.
CreateEvent function: Create an event handle that will be signaled when depth data is available.
INuiSensor::NuiImageStreamOpen method, NUI_IMAGE_TYPE_DEPTH constant, NUI_IMAGE_RESOLUTION_640x480 constant, and the event handle: Open a depth stream to receive depth data.
NuiFusionCreateReconstruction function: Use this helper function to create a Kinect Fusion volume to reconstruct the scene.
NuiFusionCreateImageFrame function and NUI_FUSION_IMAGE_TYPE_DEPTH_FLOAT enumeration value: Use this helper function to create a frame generated from the depth input.
NuiFusionCreateImageFrame function and NUI_FUSION_IMAGE_TYPE_POINT_CLOUD enumeration value: Use this helper function to create an image to raycast the reconstruction volume.
NuiFusionCreateImageFrame function and NUI_FUSION_IMAGE_TYPE_COLOR enumeration value: Use this helper function to create a color image that holds the shaded raycast of the reconstruction volume for display.
NUI_DEPTH_IMAGE_PIXEL structure: Store a frame from the depth input.
INuiFusionReconstruction::ResetReconstruction method: Clear the reconstruction volume and set a world-to-camera transform (camera view pose) and a world-to-volume transform.
INuiFusionReconstruction::GetCurrentWorldToCameraTransform method: Get the current internal world-to-camera transform (camera view pose).
INuiFusionReconstruction::GetCurrentWorldToVolumeTransform method: Get the current internal world-to-volume transform.
INuiFusionReconstruction::ProcessFrame method: Process a depth frame through the Kinect Fusion pipeline.
INuiFusionReconstruction::CalculatePointCloud method: Calculate a point cloud by raycasting into the reconstruction volume.
INuiFusionReconstruction::DepthToDepthFloatFrame method: Convert the specified array of Kinect depth pixels to a NUI_FUSION_IMAGE_FRAME structure.
NuiFusionShadePointCloud function: Create visible color shaded images of a point cloud and its normals.
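
Putting these APIs together, the per-frame flow is: convert the raw depth pixels to a depth-float frame, track and integrate the frame into the volume, read back the camera pose, raycast the volume into a point cloud, and shade that point cloud for display. The outline below is a minimal sketch of that loop rather than the sample's actual code; it assumes the reconstruction volume and the three image frames have already been created with NuiFusionCreateReconstruction and NuiFusionCreateImageFrame, uses the SDK's NUI_FUSION_DEFAULT_* constants for the clipping and tracking parameters, and omits error reporting and the Direct2D rendering. Function and variable names are illustrative.

```cpp
// Illustrative per-frame Kinect Fusion processing sketch (not the sample's exact code).
// Assumes pDepthPixels holds the latest 640x480 depth frame read from the depth stream.
#include <Windows.h>
#include <NuiApi.h>
#include <NuiKinectFusionApi.h>

HRESULT ProcessDepthFrame(
    INuiFusionReconstruction* pVolume,            // reconstruction volume
    const NUI_DEPTH_IMAGE_PIXEL* pDepthPixels,    // raw extended-depth pixels
    UINT cbDepthPixels,                           // size of that buffer in bytes
    NUI_FUSION_IMAGE_FRAME* pDepthFloatFrame,     // depth-float frame
    NUI_FUSION_IMAGE_FRAME* pPointCloudFrame,     // point-cloud frame for the raycast
    NUI_FUSION_IMAGE_FRAME* pShadedSurfaceFrame)  // color frame shown by the D2D renderer
{
    // Start from the volume's current camera pose; it is used as the pose hint
    // for tracking and then refreshed after ProcessFrame succeeds.
    Matrix4 worldToCamera;
    HRESULT hr = pVolume->GetCurrentWorldToCameraTransform(&worldToCamera);
    if (FAILED(hr)) return hr;

    // 1. Convert the raw depth pixels into the depth-float format Kinect Fusion
    //    expects, clipping values outside the default near/far range.
    hr = pVolume->DepthToDepthFloatFrame(
        pDepthPixels,
        cbDepthPixels,
        pDepthFloatFrame,
        NUI_FUSION_DEFAULT_MINIMUM_DEPTH,
        NUI_FUSION_DEFAULT_MAXIMUM_DEPTH,
        TRUE);                                    // mirror the depth image
    if (FAILED(hr)) return hr;

    // 2. Align the new frame against the volume (camera tracking) and integrate it.
    hr = pVolume->ProcessFrame(
        pDepthFloatFrame,
        NUI_FUSION_DEFAULT_ALIGN_ITERATION_COUNT,
        NUI_FUSION_DEFAULT_INTEGRATION_WEIGHT,
        &worldToCamera);
    if (FAILED(hr)) return hr;                    // tracking can fail if the camera moved too fast

    // 3. Read back the camera pose that tracking just computed.
    hr = pVolume->GetCurrentWorldToCameraTransform(&worldToCamera);
    if (FAILED(hr)) return hr;

    // 4. Raycast the volume from that pose to get a point cloud of the visible surface.
    hr = pVolume->CalculatePointCloud(pPointCloudFrame, &worldToCamera);
    if (FAILED(hr)) return hr;

    // 5. Shade the point cloud into a color image for display; the surface-normals
    //    output frame is not needed here, so nullptr is passed for it.
    return NuiFusionShadePointCloud(
        pPointCloudFrame,
        &worldToCamera,
        nullptr,                                  // default shading transform
        pShadedSurfaceFrame,
        nullptr);
}
```

When tracking is lost or the user clears the scene, the sample calls INuiFusionReconstruction::ResetReconstruction to empty the volume and restart tracking from a known camera pose.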

To run a sample, you must have the Kinect for Windows SDK installed. To compile a sample, you must have the developer toolkit installed. The latest SDK and developer toolkit are available on the developer download page. If you need help installing the toolkit, see To Install the SDK and Toolkit. The toolkit includes a sample browser, which you can use to launch a sample or download it to your machine. To open the sample browser, click Start > All Programs > Kinect for Windows SDK [version number] > Developer Toolkit Browser.

If you need help loading a sample in Visual Studio or using Visual Studio to compile, run, or debug, see Opening, Building, and Running Samples in Visual Studio.