3-D algorithms combine with Kinect to track objects using continuous point cloud data
During an internship at robot hardware and software developer Willow Garage (Menlo Park, CA, USA), Ryohei Ueda of the JSK laboratory at the University of Tokyo (Tokyo, Japan) developed 3-D algorithms that, used with the Microsoft Kinect, help computer systems track objects in a scene in real time.
Tracking 3-D objects in continuous point cloud data sequences is an important research topic for mobile robots -- it allows them to monitor their environment, make decisions, and adapt their motions as a scene changes.
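To make the idea of tracking across a sequence of point cloud frames concrete, here is a deliberately simplified sketch in NumPy. It follows an object by keeping only the points of each new frame near the last known centroid and re-estimating the centroid from them. This is an illustration of the frame-to-frame tracking loop, not Ueda's algorithm or PCL code -- PCL's tracking module uses far more robust techniques (e.g. particle filtering over an object model), and the function name and parameters below are invented for this example.

```python
import numpy as np

def track_centroid_step(prev_centroid, frame_points, radius):
    """One step of a naive 3-D tracker.

    prev_centroid: (3,) last known object position.
    frame_points: (N, 3) XYZ points of the new frame.
    radius: gating distance; points farther away are treated as background.
    """
    # Keep only points within `radius` of the previous estimate.
    dists = np.linalg.norm(frame_points - prev_centroid, axis=1)
    nearby = frame_points[dists < radius]
    if nearby.size == 0:
        # Object not seen this frame; hold the last estimate.
        return prev_centroid
    # New estimate is the centroid of the gated points.
    return nearby.mean(axis=0)
```

Running such a step on every incoming frame yields a continuously updated object position, which is the kind of signal a mobile robot can feed into its motion planning.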
Ueda developed the algorithms for the Point Cloud Library (PCL) -- a project where software developers contribute algorithms for 3-D point cloud processing.
Aside from Ueda's work, the PCL contains numerous algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. These algorithms can be used to filter noisy data, stitch 3-D point clouds together, segment relevant parts of a scene, extract key points and compute descriptors that recognize objects by their geometric appearance, and reconstruct surfaces from point clouds and visualize them.
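As a flavor of what the filtering algorithms do, the sketch below implements the voxel-grid downsampling idea in plain NumPy: space is divided into small cubes ("voxels") and every group of points falling in the same cube is replaced by its centroid, shrinking a dense, noisy cloud to a manageable size. This is only a minimal illustration of the concept -- PCL's own implementation is in C++ (e.g. its VoxelGrid filter), and the function name and parameters here are chosen for the example.

```python
import numpy as np

def voxel_grid_downsample(points, leaf_size):
    """Downsample a point cloud by averaging points within each voxel.

    points: (N, 3) array of XYZ coordinates.
    leaf_size: edge length of each cubic voxel.
    """
    # Map each point to the integer index of the voxel containing it.
    voxel_idx = np.floor(points / leaf_size).astype(np.int64)
    # Group points sharing a voxel; `inverse` maps each point to its group.
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True)
    # Accumulate point coordinates per voxel, then divide to get centroids.
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    centroids /= counts[:, None]
    return centroids
```

Downsampling like this is typically the first stage of a point cloud pipeline, since later steps such as registration and segmentation run much faster on the reduced cloud.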
The PCL runs on many operating systems, with prebuilt binaries available for Linux, Windows, and Mac OS X. It is open-source software released under the BSD license and is free for commercial and research use.
The PCL project is financially supported by Willow Garage, NVIDIA, Google, Toyota, Trimble, Urban Robotics, and Honda Research Institute.
-- by Dave Wilson, Senior Editor, Vision Systems Design