Official Kinect SDK vs. Open Source Alternatives

Where do they differ?

What are the advantages of choosing libfreenect or OpenNI + SensorKinect, for example, over the official SDK and vice versa?

What are the disadvantages?

+56
kinect
Oct. 09 '11 at 20:45
3 answers

Please note that the answer below was accurate at the time of writing, and some facts may well become outdated in the near future. The current state of the official Kinect SDK is beta 1.00.12.

The first obvious difference is that the official SDK is maintained by the Microsoft Research team, while OpenKinect is an open-source SDK maintained by the open-source community. Both have their pros and cons.

  • The official SDK is developed by Microsoft, which also develops the hardware and therefore has inside knowledge of the device that the open-source community has to reverse engineer. This is obviously an advantage for Microsoft.
  • Microsoft is investing a lot of money in this device, and I'm sure it will do what it deems necessary to keep its SDK supported. Having that kind of financial backing has many advantages.
  • On the other hand, never underestimate the power of the open-source community: "The OpenKinect community consists of over 2000 members contributing their time and code to the project. Our members have joined this project with the mission of creating the best possible suite of applications for the Kinect. OpenKinect is a true open source community!" - http://openkinect.org/wiki/Main_Page
  • OpenKinect was released long before the official SDK, since the Kinect device was hacked within a day or two of its release. Kudos to OpenKinect!

Supported programming languages:

  • Official SDK: C++, C#, or Visual Basic, using Microsoft Visual Studio 2010.
  • OpenKinect: Python, C, C++, C#, Java, Lisp, and more! Obviously, Visual Studio is not required.

Operating System Support:

  • Official SDK: installs only on Windows 7.
  • OpenKinect: works on Linux, OS X, and Windows.

A clear advantage for OpenKinect.

License:

  • The official SDK, in its current beta state, is for testing only. The SDK was developed specifically to encourage broad exploration and experimentation by the academic, research, and enthusiast communities. Commercial applications are not permitted. Note that this will likely change in future releases of the SDK. Visit the FAQ for more information.
  • OpenKinect appears to be open to commercial use, but online sources suggest it may not be that simple. I would take a good look at the terms before releasing a commercial application built with it. Read Kinect - Implications of licensing open hardware projects for more information.

Documentation and support:

  • Official SDK: well documented, and a support forum is provided.
  • OpenKinect: a mailing list, Twitter, and IRC appear to be available, but there is no official forum as far as I can tell. The documentation on the website is not as rich as I would like.

Device calibration:

Different Kinect devices may vary slightly depending on the batch in which they were manufactured, so calibration of the device is sometimes required. But:

  • The official SDK does not provide any calibration settings, but so far I have not had to calibrate the device I am working on. According to something I read online (link lost), the calibration parameters are written to the Kinect device during production, so no calibration is needed with the official SDK.
  • OpenKinect offers device calibration: http://openkinect.org/wiki/Calibration . So I believe you should calibrate your device if you go with OpenKinect.

If it is true that calibration is only needed with OpenKinect, this is a big advantage for the official SDK, since applications are easier to distribute and install when no calibration step is required.




Personally, after a failed attempt with the OpenKinect SDK, I went with the official SDK, because:

  • the drivers worked out of the box
  • examples and code were included, making for an easy start
  • it is all-in-one: I could begin my own development within 15 minutes or so

Now, after working with the Kinect for several months, I have to say that I am quite satisfied with the API provided. However, I cannot compare it with the OpenKinect SDK, since I never actually worked with it (and perhaps I did not give it an honest try).



UPDATE: As of February 1, 2012, there is a commercial license for the official SDK: "The commercial license for this release allows the development and distribution of commercial applications. The previous SDK was a beta and, as a result, was suitable only for research, testing, and experimentation, and was not suitable for use with a final commercial product. The new license will allow developers to create and sell their Kinect for Windows applications to end users using Kinect for Windows hardware on Windows platforms." - Developer Frequently Asked Questions

+55
09 Oct '11

As explained by Avada Kedavra in his/her answer, these are some interesting differences:

  • supported operating systems: you can use the Microsoft SDK only on Windows, while open-source solutions can usually work on other operating systems too;
  • programming languages: you have a wider choice with open source, while Microsoft supports only C++ and C# (Visual Basic is no longer supported in SDK 2.0);
  • documentation and support: Microsoft offers a good forum and well-prepared documentation (with a large number of samples), but there are also several open-source solutions that are well documented;
  • license: Microsoft's SDK is more or less proprietary, while open source is more or less free. Consider also that open-source ideas are sometimes bought by big companies and turned into something less open. It will probably not happen in your case, but keep this possibility in mind.

In my personal opinion, the biggest difference between open source solutions and the Microsoft SDK is strictly related to the skeletal tracking algorithm.

While depth and RGB data can be effectively provided by both the open/free APIs and the Microsoft SDK, implementing skeletal tracking capabilities is not just a matter of reverse engineering.

To implement such an algorithm, developers must have strong competencies in pattern recognition and machine learning, and I am absolutely sure that this kind of knowledge is available in the open source community. But the implementation of skeletal tracking is based on a "trained" algorithm, which requires a large number of experiments to collect a very large amount of data. This data is then used to "train" the algorithm to recognize the skeletal joints.

Obtaining a sufficient amount of data, and also cleaning it and using it properly, requires a great deal of time and money. Microsoft's researchers and developers are in the best position to work on this kind of thing, simply because it is their job.

In my previous experience, I noticed that open source solutions provide good skeletal tracking capabilities, but they are not at the level of what Microsoft offers with its SDK.

Remember also that the Microsoft SDK provides many additional features, such as face recognition or joint orientation, and several controls that are very useful if you want to quickly build a nice GUI.

So, my suggestion is: if you are working on a project in which you only need depth and/or RGB data, or if you need a programming language that is not supported by the Microsoft SDK, then you should choose an open source solution; otherwise, the Microsoft SDK is the best option.

+5
May 23 '15

I would highly recommend the Cinder framework (libcinder.org).

It supports both OpenNI and Kinect development if you use C++. It now supports Kinect SDK 1.7 and OpenNI 2 through these CinderBlocks:

MS Kinect SDK 1.7 (stable) https://github.com/BanTheRewind/Cinder-MsKinect

OpenNI 2 / NITE 2.2 (alpha) https://github.com/wieden-kennedy/Cinder-OpenNI

Both can perform skeletal tracking out of the box, and OpenNI is able to track up to six skeletons at a time. OpenNI 2 is rapidly gaining on the Kinect SDK, although the new Kinect will probably change that when it arrives next month. However, the basic principles are unlikely to change.

The main drawback of the initial release of OpenNI was that user recognition required a full-body activation pose, which was a deal breaker for many applications - however, this seems to have been resolved in newer versions, and OpenNI 2 also supports reliable hand tracking at close range, although a focus gesture is still required. If you are running on Mac or Linux, it is pretty much your only choice.

+3
02 Oct '13 at 1:04


