Reposted from zubair's website. Back when he blogged about getting his Kinect v2 application approved, I found his post and exchanged a few words with him. Now I see his article being shared on Microsoft's site as well, what a small world! I expect to wait four or five months until I buy a new laptop before starting my own Kinect v2 work; for now I can only share articles or borrow a classmate's computer to play with the Kinect v2.
A few weeks ago I received my Kinect for Windows version 2 and the private SDK, so I finally got to try it out. The new Kinect ships with many improvements over v1, such as a Full HD camera, thumb and hand open/close detection, a better microphone, improved infrared, and support for several applications using the sensor at the same time.
Deep dive
In this blog post I will show how to read the Body source and draw bones, hands and joints over the Color source received from the Kinect sensor.
“This is preliminary software and/or hardware and APIs are preliminary and subject to change.”
In the constructor of our WPF app, the code reads the frame descriptions of the Color and Depth sources (the Depth description provides the width and height used for rendering), allocates the body array, and then opens readers for both the Color and Body frame sources to start receiving frames.
FrameDescription frameDescription = this.kinectSensor.ColorFrameSource.FrameDescription;
FrameDescription bodyFrameDescription = this.kinectSensor.DepthFrameSource.FrameDescription;
this.displayWidth = bodyFrameDescription.Width;
this.displayHeight = bodyFrameDescription.Height;
this.bodies = new Body[this.kinectSensor.BodyFrameSource.BodyCount];
this.colorFrameReader = this.kinectSensor.ColorFrameSource.OpenReader();
this.bodyFrameReader = this.kinectSensor.BodyFrameSource.OpenReader();
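The snippet above already references this.kinectSensor, and the later handlers rely on a few more fields (coordinateMapper, pixels, bitmap, bytesPerPixel, drawingGroup, bodyImageSource) that the excerpt does not show. A minimal sketch of that remaining setup, with field names assumed from how they are used below, could look like this:

// Assumed setup around the snippet above (field names are inferred from later usage).
// Obtaining the sensor happens before the frame descriptions are read:
this.kinectSensor = KinectSensor.GetDefault();   // the preliminary SDK may expose this differently

// Mapper used later to convert camera-space joint positions to depth-space points.
this.coordinateMapper = this.kinectSensor.CoordinateMapper;

// Buffer and bitmap that the color frame handler writes into.
this.bytesPerPixel = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;
this.pixels = new byte[frameDescription.Width * frameDescription.Height * this.bytesPerPixel];
this.bitmap = new WriteableBitmap(frameDescription.Width, frameDescription.Height,
                                  96.0, 96.0, PixelFormats.Bgr32, null);

// Drawing surface that the body frame handler draws joints and bones into.
this.drawingGroup = new DrawingGroup();
this.bodyImageSource = new DrawingImage(this.drawingGroup);

this.DataContext = this;
this.kinectSensor.Open();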
The MainWindow.xaml contains a Grid with two Image elements for Color and Body data.
<Grid Margin="10 0 10 0">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Image Source="{Binding ImageSource}" Grid.Row="0" Width="1200" Height="700" Stretch="UniformToFill" />
    <Image Source="{Binding BodyImageSource}" Grid.Row="0" Stretch="UniformToFill" Width="1200" Height="700" />
</Grid>
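The two bindings assume the window sets DataContext = this and exposes ImageSource and BodyImageSource properties. These are not shown in the excerpt; a minimal sketch, reusing the bitmap and bodyImageSource fields set up in the constructor above, could be:

// Assumed binding properties (not shown in the original post).
// ImageSource exposes the WriteableBitmap filled by the color handler,
// BodyImageSource exposes the DrawingImage that wraps the body DrawingGroup.
public ImageSource ImageSource
{
    get { return this.bitmap; }
}

public ImageSource BodyImageSource
{
    get { return this.bodyImageSource; }
}

Because the WriteableBitmap and DrawingGroup are updated in place on every frame, the properties never need to raise change notifications.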
We subscribe to the FrameArrived events of both readers in the Loaded event of our app.
private void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    if (this.colorFrameReader != null)
    {
        this.colorFrameReader.FrameArrived += this.ColorFrameReaderFrameArrived;
    }

    if (this.bodyFrameReader != null)
    {
        this.bodyFrameReader.FrameArrived += this.BodyFrameReaderFrameArrived;
    }
}
The color frame arrived event handler acquires a frame, validates it, copies its data into a byte array, and writes that array to a WriteableBitmap, which the Image element in our XAML uses to display the color stream.
private void ColorFrameReaderFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    ColorFrameReference frameReference = e.FrameReference;

    try
    {
        ColorFrame frame = frameReference.AcquireFrame();

        if (frame != null)
        {
            // ColorFrame is IDisposable
            using (frame)
            {
                FrameDescription frameDescription = frame.FrameDescription;

                // verify data and write the new color frame data to the display bitmap
                if ((frameDescription.Width == this.bitmap.PixelWidth) && (frameDescription.Height == this.bitmap.PixelHeight))
                {
                    if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
                    {
                        frame.CopyRawFrameDataToArray(this.pixels);
                    }
                    else
                    {
                        frame.CopyConvertedFrameDataToArray(this.pixels, ColorImageFormat.Bgra);
                    }

                    this.bitmap.WritePixels(
                        new Int32Rect(0, 0, frameDescription.Width, frameDescription.Height),
                        this.pixels,
                        frameDescription.Width * this.bytesPerPixel,
                        0);
                }
            }
        }
    }
    catch (Exception)
    {
        // ignore if the frame is no longer available
    }
}
The body frame reader's FrameArrived handler is the most interesting one. The code uses a DrawingContext to draw a transparent rectangle that defines the render area; our body data will be drawn within this rectangle. It then gets the body data from the Kinect sensor. Because the Kinect can detect up to six bodies at the same time, the code loops through each Body object and checks whether the sensor is tracking it before doing anything useful with its joints.
If the Kinect is tracking a body, the handler loops through each Joint and uses the CoordinateMapper to get X and Y coordinates for each joint, which it then uses to draw the body and hand joints. Note that I cheat a little when mapping the joints to fix the vertical position of my drawing by subtracting 80px from the Y coordinate.
private void BodyFrameReaderFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    BodyFrameReference frameReference = e.FrameReference;

    try
    {
        BodyFrame frame = frameReference.AcquireFrame();

        if (frame != null)
        {
            // BodyFrame is IDisposable
            using (frame)
            {
                using (DrawingContext dc = this.drawingGroup.Open())
                {
                    // Draw a transparent background to set the render size
                    dc.DrawRectangle(Brushes.Transparent, null, new Rect(0.0, 0.0, this.displayWidth, this.displayHeight));

                    // The first time GetAndRefreshBodyData is called, Kinect will allocate each Body in the array.
                    // As long as those body objects are not disposed and not set to null in the array,
                    // those body objects will be re-used.
                    frame.GetAndRefreshBodyData(this.bodies);

                    foreach (Body body in this.bodies)
                    {
                        if (body.IsTracked)
                        {
                            IReadOnlyDictionary<JointType, Joint> joints = body.Joints;

                            // convert the joint points to depth (display) space
                            Dictionary<JointType, Point> jointPoints = new Dictionary<JointType, Point>();

                            foreach (JointType jointType in joints.Keys)
                            {
                                DepthSpacePoint depthSpacePoint = this.coordinateMapper.MapCameraPointToDepthSpace(joints[jointType].Position);
                                jointPoints[jointType] = new Point(depthSpacePoint.X, depthSpacePoint.Y - 80);
                            }

                            this.DrawBody(joints, jointPoints, dc);

                            this.DrawHand(body.HandLeftState, jointPoints[JointType.HandLeft], dc);
                            this.DrawHand(body.HandRightState, jointPoints[JointType.HandRight], dc);
                        }
                    }

                    // prevent drawing outside of our render area
                    this.drawingGroup.ClipGeometry = new RectangleGeometry(new Rect(0.0, 0.0, this.displayWidth, this.displayHeight));
                }
            }
        }
    }
    catch (Exception)
    {
        // ignore if the frame is no longer available
    }
}
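DrawBody is called above but not shown in the excerpt. A minimal sketch of what it might look like, assuming a hard-coded list of bone (joint pair) connections, a bone pen and a joint brush (all of these names and the shortened bone list are my own assumptions, not the original code), is:

// Sketch of a DrawBody helper (not from the original post; names and bone list are assumptions).
private readonly Pen bonePen = new Pen(Brushes.Green, 6);
private readonly Brush jointBrush = Brushes.Yellow;
private const double JointSize = 5;

// a few example bones; the full Kinect v2 skeleton has many more joint pairs
private static readonly Tuple<JointType, JointType>[] Bones =
{
    Tuple.Create(JointType.Head, JointType.Neck),
    Tuple.Create(JointType.Neck, JointType.SpineShoulder),
    Tuple.Create(JointType.SpineShoulder, JointType.SpineMid),
    Tuple.Create(JointType.SpineShoulder, JointType.ShoulderLeft),
    Tuple.Create(JointType.SpineShoulder, JointType.ShoulderRight),
    Tuple.Create(JointType.ShoulderLeft, JointType.ElbowLeft),
    Tuple.Create(JointType.ElbowLeft, JointType.WristLeft),
    Tuple.Create(JointType.ShoulderRight, JointType.ElbowRight),
    Tuple.Create(JointType.ElbowRight, JointType.WristRight),
};

private void DrawBody(IReadOnlyDictionary<JointType, Joint> joints,
                      IDictionary<JointType, Point> jointPoints,
                      DrawingContext drawingContext)
{
    // draw a line for each bone whose two joints are at least inferred
    foreach (var bone in Bones)
    {
        Joint joint0 = joints[bone.Item1];
        Joint joint1 = joints[bone.Item2];

        if (joint0.TrackingState != TrackingState.NotTracked &&
            joint1.TrackingState != TrackingState.NotTracked)
        {
            drawingContext.DrawLine(this.bonePen, jointPoints[bone.Item1], jointPoints[bone.Item2]);
        }
    }

    // draw a small ellipse for every tracked joint
    foreach (JointType jointType in joints.Keys)
    {
        if (joints[jointType].TrackingState == TrackingState.Tracked)
        {
            drawingContext.DrawEllipse(this.jointBrush, null, jointPoints[jointType], JointSize, JointSize);
        }
    }
}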
Notice that for both hands the Kinect SDK gives us a hand state, which the code uses to draw red/green ellipses over the hand joints.
private void DrawHand(HandState handState, Point handPosition, DrawingContext drawingContext)
{
    switch (handState)
    {
        case HandState.Closed:
            drawingContext.DrawEllipse(this.handClosedBrush, null, handPosition, HandSize, HandSize);
            break;

        case HandState.Open:
            drawingContext.DrawEllipse(this.handOpenBrush, null, handPosition, HandSize, HandSize);
            break;

        case HandState.Lasso:
            drawingContext.DrawEllipse(this.handLassoBrush, null, handPosition, HandSize, HandSize);
            break;
    }
}
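The HandSize constant and the three brushes used above are not defined in the excerpt. One possible set of definitions, assuming the red/green convention mentioned above for closed/open hands and a third colour of my own choosing for the lasso state, is:

// Assumed field definitions for DrawHand (not shown in the original post).
private const double HandSize = 30;   // radius of the hand ellipse in pixels
private readonly Brush handClosedBrush = new SolidColorBrush(Color.FromArgb(128, 255, 0, 0)); // red: closed
private readonly Brush handOpenBrush = new SolidColorBrush(Color.FromArgb(128, 0, 255, 0));   // green: open
private readonly Brush handLassoBrush = new SolidColorBrush(Color.FromArgb(128, 0, 0, 255));  // blue: lasso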
That’s all I had to do to draw body joints over a camera stream from the Kinect sensor.