MIT robot navigates using Microsoft’s Kinect

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL; Cambridge, MA, USA) have developed a robot that uses Microsoft’s Kinect to navigate through its surroundings.

While a large amount of research has been devoted to building one-off maps that robots can use to navigate around an area, such systems cannot adjust to changes in the surroundings over time.

The MIT approach, based on a technique called Simultaneous Localization and Mapping (SLAM), will allow robots to constantly update a map as they learn new information over time.

The MIT team has previously tested the approach on robots equipped with expensive laser scanners. In a paper to be presented this May at the International Conference on Robotics and Automation (St. Paul, MN), the researchers now show how a robot can locate itself in such a map using only a Kinect camera.

As the robot travels through an unexplored area, the Kinect sensor’s visible-light video camera and infrared depth sensor scan the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system compares the features of the new image it has created (details such as the edges of walls, for example) with all the previous images it has taken until it finds a match.
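The matching step described above can be sketched in miniature. The article does not specify the actual features or matching algorithm the MIT system uses, so the descriptors, names, and distance threshold below are purely illustrative: each previously visited scene is summarized as a feature vector, and a new view is matched by nearest-neighbor search, with "no match" meaning the robot is somewhere new.

```python
import numpy as np

def best_match(new_desc, stored_descs, threshold=0.25):
    """Compare a new scene descriptor against all stored ones.
    Returns the index of the closest stored scene, or None if no
    stored scene is similar enough (i.e. the area is new)."""
    if not stored_descs:
        return None
    dists = [np.linalg.norm(new_desc - d) for d in stored_descs]
    i = int(np.argmin(dists))
    return i if dists[i] < threshold else None

# Toy descriptors: each scene summarized as a small vector
# (edge orientations, depth statistics, etc. -- illustrative only).
stored = [np.array([0.9, 0.1, 0.3]), np.array([0.2, 0.8, 0.5])]
revisit = np.array([0.88, 0.12, 0.31])   # close to scene 0
print(best_match(revisit, stored))       # -> 0
```

A real system would use richer image features and a more scalable search structure, but the principle is the same: recognize a place by finding the stored view it most resembles.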

At the same time, the system constantly estimates the robot’s motion using on-board sensors that measure how far its wheels have rotated. By combining this motion data with the visual information, it can determine where within the building the robot is positioned, eliminating errors that would creep in if it relied on the wheel sensors alone.
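The idea of blending a drift-prone odometry estimate with a vision-based correction can be illustrated with a deliberately simplified sketch. A real SLAM system would use a probabilistic estimator such as a Kalman or particle filter; the linear blend and weighting factor below are assumptions chosen only to show the fusion concept.

```python
def fuse(odom_pose, visual_pose, alpha=0.7):
    """Blend a wheel-odometry pose estimate with a vision-based one.
    alpha weights the odometry; (1 - alpha) weights the visual fix.
    A stand-in for the probabilistic filtering a real SLAM system uses."""
    return tuple(alpha * o + (1 - alpha) * v
                 for o, v in zip(odom_pose, visual_pose))

# Wheel odometry has drifted 0.5 m in x; the visual match pulls the
# estimate back toward the true position.
print(fuse((10.5, 3.0), (10.0, 3.0)))
```

The key point is that neither source is trusted outright: odometry drifts without bound, while visual matching is only available when a known scene is recognized, so each corrects the other.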

Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene.
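The map-update step can be sketched as a simple merge: features already associated with a scene are kept, and anything newly observed on a revisit is added. The dictionary-of-feature-sets representation below is a hypothetical stand-in for the system’s actual 3-D map structure.

```python
def update_map(world_map, scene_id, new_features):
    """Merge features observed on a revisit into the stored scene:
    keep everything already known and add anything new."""
    world_map[scene_id] = world_map.get(scene_id, set()) | set(new_features)
    return world_map

m = {"room_a": {"wall_edge_1", "door"}}
update_map(m, "room_a", {"door", "chair"})   # 'chair' is newly seen
print(sorted(m["room_a"]))                   # -> ['chair', 'door', 'wall_edge_1']
```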

-- by Dave Wilson, Senior Editor, Vision Systems Design
