Kinect Fusion Explorer D2D C++ Sample
Kinect for Windows 1.7, 1.8
This sample illustrates how to use the individual pipeline stages of Kinect Fusion for 3D reconstruction.
Important
DirectX 11 feature support is required to run Kinect Fusion.
To determine the DirectX feature level that your graphics card supports, run DxDiag.exe:
- Launch DxDiag.exe
- Navigate to the “Display” tab.
- In the “Drivers” area, there is a text field labeled “Feature Levels:”.
- If 11.0 appears in the list of supported feature levels, Kinect Fusion will run in GPU mode.
Note: Simply having DirectX 11 installed is not enough; you must also have hardware that supports the DirectX 11.0 feature set.
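As an alternative to DxDiag, the check can also be done programmatically. The sketch below (an illustration, not part of the sample) asks Direct3D whether the default hardware adapter supports feature level 11.0 by calling D3D11CreateDevice without creating a device or swap chain:

```cpp
#include <d3d11.h>
#include <stdio.h>
#pragma comment(lib, "d3d11.lib")

int main()
{
    // Probe the default hardware adapter: request only feature
    // level 11.0 and pass null device pointers, since we only
    // care whether creation would succeed.
    D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_11_0;
    D3D_FEATURE_LEVEL achieved = static_cast<D3D_FEATURE_LEVEL>(0);

    HRESULT hr = D3D11CreateDevice(
        nullptr,                   // default adapter
        D3D_DRIVER_TYPE_HARDWARE,  // hardware device only
        nullptr,                   // no software rasterizer
        0,                         // no creation flags
        &requested, 1,             // accept only feature level 11.0
        D3D11_SDK_VERSION,
        nullptr,                   // device object not needed
        &achieved,
        nullptr);                  // immediate context not needed

    if (SUCCEEDED(hr) && achieved >= D3D_FEATURE_LEVEL_11_0)
    {
        printf("Feature level 11.0 supported: Kinect Fusion can run in GPU mode.\n");
    }
    else
    {
        printf("Feature level 11.0 not available: GPU mode is not supported.\n");
    }
    return 0;
}
```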
Overview
The Sample Uses the Following APIs | To Do This |
---|---|
NuiImageResolutionToSize function | Get the width and height of the depth frame. |
NuiGetSensorCount function | Get the number of sensors that are ready for use. |
NuiCreateSensorByIndex function and INuiSensor interface | Create an interface that represents a connected sensor. |
INuiSensor::NuiStatus method | Check the sensor status to see if the sensor is connected. |
INuiSensor::NuiInitialize method and NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX constant | Initialize the sensor to stream out depth data. |
NuiFusionCreateReconstruction function | Create a Kinect Fusion reconstruction volume. |
NuiFusionCreateImageFrame function | Create an image frame for frame data. |
NuiFusionCreateImageFrame function | Create an image frame for point cloud data. |
NuiFusionCreateImageFrame function | Create images of the raycast volume to display. |
INuiSensor::NuiImageStreamGetNextFrame method | Get an extended depth frame from Kinect. |
CKinectFusionExplorer::CopyExtendedDepth method | Get extended depth data. |
INuiSensor::NuiImageStreamReleaseFrame method | Release the Kinect camera frame. |
NuiFusionShadePointCloud function | Shade the point cloud for rendering. |
INuiSensor::NuiImageStreamSetImageFrameFlags method and NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE constant | Set depth data range to near range. |
CreateEvent function | Create an event handle that is signaled when depth data is available. |
INuiSensor::NuiImageStreamOpen method, NUI_IMAGE_TYPE_DEPTH constant, NUI_IMAGE_RESOLUTION_640x480 constant, the event handle | Open a depth stream to receive depth data. |
INuiSensor::NuiImageStreamGetNextFrame method | Get the next frame of color data (using the color data event handle). |
INuiFrameTexture::LockRect method and NUI_LOCKED_RECT structure | Lock the texture to prepare for saving texture data. |
INuiFrameTexture::UnlockRect method | Unlock the texture after saving the texture data. |
INuiSensor::NuiImageStreamReleaseFrame method | Release each frame of depth data after saving it. |
INuiSensor::Release method | Release the sensor when you exit the application. |
NuiShutdown function | Shut down the sensor. |
INuiFusionColorReconstruction::ResetReconstruction method | Clear the reconstruction volume and set a new world-to-camera transform (camera view pose) and world-to-volume transform. |
INuiFusionColorReconstruction::AlignDepthFloatToReconstruction method | Align a depth float image to the reconstruction volume to calculate the new camera pose. |
INuiFusionColorReconstruction::GetCurrentWorldToCameraTransform method | Retrieve the current internal world-to-camera transform (camera view pose). |
INuiFusionColorReconstruction::GetCurrentWorldToVolumeTransform method | Get the current internal world-to-volume transform. |
INuiFusionColorReconstruction::IntegrateFrame method | Integrate depth float data and color data into the reconstruction volume from the specified camera pose. |
INuiFusionColorReconstruction::CalculatePointCloud method | Calculate a point cloud by raycasting into the reconstruction volume, returning the point cloud containing 3D points and normals of the zero-crossing dense surface at every visible pixel in the image from the specified camera pose and color visualization image. |
INuiFusionColorReconstruction::CalculateMesh method | Export a polygon mesh of the zero-crossing dense surfaces from the reconstruction volume with per-vertex color. |
INuiFusionColorReconstruction::DepthToDepthFloatFrame method | Convert the specified array of Kinect depth pixels to a NUI_FUSION_IMAGE_FRAME structure. |
INuiFusionColorReconstruction::SmoothDepthFloatFrame method | Spatially smooth a depth float image frame using edge-preserving filtering. |
INuiFusionColorReconstruction::AlignPointClouds method | Align two sets of overlapping oriented point clouds and calculate the camera's relative pose. |
INuiFusionColorReconstruction::SetAlignDepthFloatToReconstructionReferenceFrame method | Set a reference depth frame that is used internally to help with tracking when calling the AlignDepthFloatToReconstruction method to calculate a new camera pose. |
INuiFusionColorReconstruction::CalculatePointCloudAndDepth method | Calculate a point cloud by raycasting into the reconstruction volume, returning the point cloud containing 3D points and normals of the zero-crossing dense surface at every visible pixel in the image from the specified camera pose, color visualization image, and the depth to the surface. |
INuiFusionCameraPoseFinder::ResetCameraPoseFinder method | Clear the INuiFusionCameraPoseFinder. |
INuiFusionCameraPoseFinder::ProcessFrame method | Add the specified camera frame to the camera pose finder database if the frame differs enough from poses that already exist in the database. |
INuiFusionCameraPoseFinder::FindCameraPose method | Retrieve the poses in the camera pose finder database that are most similar to the current camera input. |
INuiFusionCameraPoseFinder::GetStoredPoseCount method | Retrieve the number of frames that are currently stored in the camera pose finder database. |
NuiFusionAlignPointClouds function | Align two sets of oriented point clouds and calculate the camera's relative pose. |
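Taken together, the APIs above form a per-frame loop: convert the depth data, track the camera, integrate the frame into the volume, raycast a point cloud, and shade it for display. The following condensed sketch shows that loop using the base INuiFusionReconstruction interface (the sample's INuiFusionColorReconstruction variant adds color-frame parameters). It is illustrative only: the clip distances, iteration count, and integration weight are typical values rather than SDK-mandated ones, and frame creation and most error handling are omitted; check the parameter lists against NuiKinectFusionApi.h.

```cpp
#include <windows.h>
#include <NuiApi.h>
#include <NuiKinectFusionApi.h>

// One pass of the Kinect Fusion pipeline for a single depth frame.
// The volume and image frames are assumed to have been created earlier
// with NuiFusionCreateReconstruction and NuiFusionCreateImageFrame.
HRESULT ProcessDepthFrame(
    INuiFusionReconstruction* pVolume,
    const NUI_DEPTH_IMAGE_PIXEL* pDepthPixels,  // from CopyExtendedDepth
    UINT depthPixelCount,
    NUI_FUSION_IMAGE_FRAME* pDepthFloatFrame,
    NUI_FUSION_IMAGE_FRAME* pPointCloudFrame,
    NUI_FUSION_IMAGE_FRAME* pShadedFrame,
    Matrix4* pWorldToCameraTransform)           // updated with the new pose
{
    // 1. Convert raw depth pixels to a depth float frame, clipping to
    //    Kinect Fusion's supported 0.35 m to 8 m range.
    HRESULT hr = pVolume->DepthToDepthFloatFrame(
        pDepthPixels,
        depthPixelCount * sizeof(NUI_DEPTH_IMAGE_PIXEL),
        pDepthFloatFrame,
        0.35f,   // minimum depth clip, in meters
        8.0f,    // maximum depth clip, in meters
        TRUE);   // mirror the depth image
    if (FAILED(hr)) return hr;

    // 2. Track: align the new frame to the volume to estimate the
    //    camera pose. Failure here usually means tracking was lost.
    FLOAT alignmentEnergy = 0.0f;
    hr = pVolume->AlignDepthFloatToReconstruction(
        pDepthFloatFrame,
        7,                  // maximum alignment iterations
        nullptr,            // delta-from-reference image not needed
        &alignmentEnergy,
        pWorldToCameraTransform);
    if (FAILED(hr)) return hr;
    pVolume->GetCurrentWorldToCameraTransform(pWorldToCameraTransform);

    // 3. Integrate the depth data into the volume from the new pose.
    hr = pVolume->IntegrateFrame(
        pDepthFloatFrame,
        200,                // maximum integration weight
        pWorldToCameraTransform);
    if (FAILED(hr)) return hr;

    // 4. Raycast the volume from the camera pose into a point cloud,
    hr = pVolume->CalculatePointCloud(pPointCloudFrame, pWorldToCameraTransform);
    if (FAILED(hr)) return hr;

    // 5. then shade the point cloud into an image for display.
    Matrix4 worldToBGRTransform = {};  // see the sample for real values
    return NuiFusionShadePointCloud(
        pPointCloudFrame,
        pWorldToCameraTransform,
        &worldToBGRTransform,
        pShadedFrame,
        nullptr);           // surface normals image not needed
}
```

In the sample itself this sequence runs once per depth frame delivered through NuiImageStreamGetNextFrame, with ResetReconstruction available to clear the volume and restart tracking.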
To run a sample you must have the Kinect for Windows SDK installed. To compile a sample, you must have the developer toolkit installed. The latest SDK and developer toolkit are available on the developer download page. If you need help installing the toolkit, see To Install the SDK and Toolkit. The toolkit includes a sample browser, which you can use to launch a sample or download it to your machine. To open the sample browser, click Start > All Programs > Kinect for Windows SDK [version number] > Developer Toolkit Browser.
If you need help loading a sample in Visual Studio or using Visual Studio to compile, run, or debug, see Opening, Building, and Running Samples in Visual Studio.