Kinect Fusion Basics-WPF C# Sample
Kinect for Windows 1.7, 1.8
This sample illustrates how to use Kinect Fusion for 3D reconstruction.
Important
DirectX 11 feature support is required to run Kinect Fusion.
To determine the DirectX feature level that your graphics card supports, use the DirectX Diagnostic Tool (DxDiag.exe):
- Launch DxDiag.exe
- Navigate to the “Display” tab.
- In the “Drivers” area, find the text field labeled “Feature Levels:”.
- If 11.0 is in the list of supported feature levels, then Kinect Fusion will run in GPU mode.
Note: Simply having DirectX 11 installed is not enough; you must also have hardware that supports the DirectX 11.0 feature set.
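GPU support can also be detected at run time: creating the reconstruction volume with ReconstructionProcessor.Amp fails on hardware without DirectX 11.0 feature-level support. Below is a minimal sketch of that check, assuming the failure surfaces as an InvalidOperationException; the volume parameters and the GpuSupportCheck/CreateVolume names are illustrative, not part of the sample.

```csharp
using System;
using Microsoft.Kinect.Toolkit.Fusion;

static class GpuSupportCheck
{
    // Creates a reconstruction volume, preferring the GPU processor and falling
    // back to the CPU processor if DirectX 11.0 feature-level support is missing.
    public static Reconstruction CreateVolume()
    {
        // Illustrative volume parameters: 256 voxels per meter, 384 x 384 x 384 voxel cube.
        var volumeParameters = new ReconstructionParameters(256, 384, 384, 384);

        try
        {
            // Try the GPU (C++ AMP / DirectX 11) processor first; -1 selects the default adapter.
            return Reconstruction.FusionCreateReconstruction(
                volumeParameters, ReconstructionProcessor.Amp, -1, Matrix4.Identity);
        }
        catch (InvalidOperationException)
        {
            // No DirectX 11.0 feature-level support: fall back to the much slower CPU processor.
            return Reconstruction.FusionCreateReconstruction(
                volumeParameters, ReconstructionProcessor.Cpu, -1, Matrix4.Identity);
        }
    }
}
```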
Overview
The Sample Uses the Following APIs | To Do This |
---|---|
KinectSensor.KinectSensors property | Get the Kinect sensors that are plugged in and ready for use. |
Reconstruction.FusionCreateReconstruction method | Create a volume cube with the sensor at the center of the near plane and the volume directly in front of the sensor. |
FusionFloatImageFrame class | Create image frames for depth data, point cloud data, and reconstruction data. |
DepthImageFormat.Resolution640x480Fps30 enumeration value | Choose the depth stream format including the data type, resolution, and frame rate of the data. |
KinectSensor.DepthStream property and DepthImageStream.Enable method | Enable the sensor to stream out depth data. |
KinectSensor.Start and KinectSensor.Stop methods | Start or stop streaming data. |
ImageStream.FramePixelDataLength property | Specify the length of the pixel data buffer when you allocate memory to store the depth stream data from the Kinect. |
ImageStream.FrameWidth and ImageStream.FrameHeight properties | Specify the width and height of the WriteableBitmap used to store/render the depth data. |
KinectSensor.DepthFrameReady event | Add an event handler for the depth data. The sensor will signal the event handler when each new frame of depth data is ready. |
Reconstruction.ProcessFrame method | Calculate the camera pose and then integrate if tracking is successful. |
Reconstruction.ResetReconstruction method | If tracking failed, clear the 3D reconstruction volume and set a new camera pose. |
Reconstruction.CalculatePointCloud method | Calculate a point cloud by raycasting into the reconstruction volume. |
FusionDepthProcessor.ShadePointCloud method | Create a shaded color image of a point cloud. |
FusionColorImageFrame.CopyPixelDataTo method | Copy the pixel data to a bitmap. |
Reconstruction.GetCurrentWorldToCameraTransform method | Get the current internal world-to-camera transform (camera view pose). |
Reconstruction.GetCurrentWorldToVolumeTransform method | Get the current internal world-to-volume transform. |
Reconstruction.DepthToDepthFloatFrame method | Convert the specified array of Kinect depth pixels to a FusionFloatImageFrame object. |
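The sketch below shows how the APIs in the table above fit together in a typical per-frame loop: enable the depth stream, convert each depth frame to a FusionFloatImageFrame, track and integrate it into the volume, then raycast and shade a point cloud for display. This is an outline under assumed defaults, not the full sample: the class name, volume parameters, and tracking-error threshold are illustrative, reset handling is simplified, and disposal and error handling are omitted.

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Fusion;

public class FusionPipelineSketch
{
    private const int Width = 640;
    private const int Height = 480;
    private const int MaxTrackingErrors = 100;   // illustrative reset threshold

    private KinectSensor sensor;
    private Reconstruction volume;
    private FusionFloatImageFrame depthFloatFrame;
    private FusionPointCloudImageFrame pointCloudFrame;
    private FusionColorImageFrame shadedSurfaceFrame;
    private Matrix4 worldToCameraTransform = Matrix4.Identity;
    private int[] shadedPixels = new int[Width * Height];
    private int trackingErrorCount;

    public void Start()
    {
        // Pick the first connected sensor and enable the 640x480 @ 30 fps depth stream.
        foreach (var potentialSensor in KinectSensor.KinectSensors)
        {
            if (potentialSensor.Status == KinectStatus.Connected)
            {
                sensor = potentialSensor;
                break;
            }
        }

        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.DepthFrameReady += OnDepthFrameReady;

        // Create the reconstruction volume (illustrative size: 256 voxels/m, 384^3 voxels)
        // with the sensor at the center of the volume's near plane.
        var parameters = new ReconstructionParameters(256, 384, 384, 384);
        volume = Reconstruction.FusionCreateReconstruction(
            parameters, ReconstructionProcessor.Amp, -1, worldToCameraTransform);

        // Image frames that are reused for every incoming depth frame.
        depthFloatFrame = new FusionFloatImageFrame(Width, Height);
        pointCloudFrame = new FusionPointCloudImageFrame(Width, Height);
        shadedSurfaceFrame = new FusionColorImageFrame(Width, Height);

        sensor.Start();   // call sensor.Stop() when shutting down
    }

    private void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
        {
            if (depthFrame == null) { return; }

            // Convert the raw depth pixels to a float frame, clipped to the default depth range.
            volume.DepthToDepthFloatFrame(
                depthFrame.GetRawPixelData(),
                depthFloatFrame,
                FusionDepthProcessor.DefaultMinimumDepth,
                FusionDepthProcessor.DefaultMaximumDepth,
                false);

            // Track the camera pose and, if tracking succeeds, integrate the frame into the volume.
            bool tracked = volume.ProcessFrame(
                depthFloatFrame,
                FusionDepthProcessor.DefaultAlignIterationCount,
                FusionDepthProcessor.DefaultIntegrationWeight,
                worldToCameraTransform);

            if (tracked)
            {
                worldToCameraTransform = volume.GetCurrentWorldToCameraTransform();
                trackingErrorCount = 0;
            }
            else if (++trackingErrorCount > MaxTrackingErrors)
            {
                // Too many consecutive tracking failures: clear the volume and restart at identity.
                worldToCameraTransform = Matrix4.Identity;
                volume.ResetReconstruction(worldToCameraTransform);
                trackingErrorCount = 0;
            }

            // Raycast the volume from the current camera pose and shade the result for display.
            volume.CalculatePointCloud(pointCloudFrame, worldToCameraTransform);
            FusionDepthProcessor.ShadePointCloud(
                pointCloudFrame, worldToCameraTransform, shadedSurfaceFrame, null);
            shadedSurfaceFrame.CopyPixelDataTo(shadedPixels);

            // shadedPixels can now be written into a WriteableBitmap of the same width/height.
        }
    }
}
```

Reusing the FusionFloatImageFrame, FusionPointCloudImageFrame, and FusionColorImageFrame objects across frames avoids reallocating large buffers at 30 fps; only the shaded pixel array needs to be copied out for rendering each frame.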
To run a sample you must have the Kinect for Windows SDK installed. To compile a sample, you must have the developer toolkit installed. The latest SDK and developer toolkit are available on the developer download page. If you need help installing the toolkit, see To Install the SDK and Toolkit. The toolkit includes a sample browser, which you can use to launch a sample or download it to your machine. To open the sample browser, click Start > All Programs > Kinect for Windows SDK [version number] > Developer Toolkit Browser.
If you need help loading a sample in Visual Studio or using Visual Studio to compile, run, or debug, see Opening, Building, and Running Samples in Visual Studio.