Snapshots

April 1, 2007
Computer model mimics brain; Vision keeps humans out of the robot zone

Computer model mimics brain

Scientists at the Massachusetts Institute of Technology (MIT; Cambridge, MA, USA; cbcl.mit.edu) have applied a computer model of how the brain processes visual information to a complex, real-world task: recognizing the objects in a street scene. “People have been talking about computers imitating the brain for a long time,” said Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences and a member of the McGovern Institute for Brain Research at MIT. “That was Alan Turing’s original motivation in the 1940s. But in the last 50 years, computer science and artificial intelligence have developed independently of neuroscience.”

Image courtesy of Stanley Bileschi, McGovern Institute for Brain Research at MIT

Compared with traditional computer-vision systems, the biological model is versatile. Traditional systems are engineered for specific object classes; systems built to detect faces or recognize textures, for example, are poor at detecting cars. In the biological model, the same algorithm can learn to detect widely different types of objects. Poggio and his colleagues chose street-scene recognition as an example because it involves a restricted set of object categories and has practical social applications. Near-term applications include population surveillance and driver assistance; eventually, applications could include visual search engines, biomedical image analysis, and robots with realistic vision.

The model takes unlabeled digital photographs from the street-scene database as input (top row) and generates automatic annotations (bottom row). Orange bounding boxes mark pedestrians (“ped”) and cars (“car”); the system would also have detected bicycles had any been present. Sky, buildings, trees, and road are color-coded (blue, brown, green, and gray). In one false detection (right), a construction sign was mistaken for a pedestrian.
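
The article stops short of describing the model’s internals, but Poggio’s group has elsewhere described an architecture that alternates template-matching (“S”) layers with max-pooling (“C”) layers. The Python sketch below is a minimal, hypothetical illustration of that idea, not the MIT code; the filter design, pooling size, and nearest-centroid classifier are all assumptions made for brevity. Its point is the one made above: the same feature pipeline serves every object class, so retargeting the detector means changing only the training examples.

import numpy as np

def oriented_filters(size=7, n_orient=4):
    """Bank of oriented edge detectors, a crude stand-in for the
    Gabor-tuned 'S1' cells in cortical models of early vision."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        u = xs * np.cos(theta) + ys * np.sin(theta)  # distance along the edge normal
        g = u * np.exp(-(xs**2 + ys**2) / (2.0 * (half / 2.0) ** 2))
        bank.append(g / np.abs(g).sum())
    return bank

def features(image, bank, pool=4):
    """Filter ('S' stage), rectify, then locally max-pool ('C' stage)
    for tolerance to small shifts; this S/C alternation is the key idea."""
    out = []
    for f in bank:
        fh, fw = f.shape
        h, w = image.shape
        resp = np.empty((h - fh + 1, w - fw + 1))
        for i in range(resp.shape[0]):  # plain 2-D correlation, kept explicit for clarity
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(image[i:i + fh, j:j + fw] * f)
        resp = np.abs(resp)
        ph, pw = resp.shape[0] // pool, resp.shape[1] // pool
        pooled = resp[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
        out.append(pooled.ravel())
    return np.concatenate(out)

def train(examples, bank):
    """One feature centroid per class: the pipeline is identical whether the
    labels are 'car', 'pedestrian', or 'tree'; only the training data change."""
    return {label: np.mean([features(img, bank) for img in imgs], axis=0)
            for label, imgs in examples.items()}

def classify(image, centroids, bank):
    f = features(image, bank)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))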

Vision keeps humans out of the robot zone

Production robots in automotive manufacturing require elaborate safety systems to protect people who might accidentally stray into their path. A production robot’s working envelope can encompass as much as 70 m³, and workers who penetrate this space run a high risk of injury. Robot workstations must therefore be secured by extensive measures: barriers that physically block access to the danger area, door contact switches, light barriers, laser scanners, and pressure mats that trigger an immediate emergency stop if someone crosses the barrier or sets foot in the wrong place. Optical devices such as light barriers and laser scanners cannot monitor volumes; at best, they can cover a plane.


Under production conditions at its Sindelfingen plant, DaimlerChrysler (Stuttgart, Germany; www.daimlerchrysler.com) is testing a vision-based concept called SafetyEYE to monitor a robot’s radius of action. Researchers at the company’s Ulm Research Center provided the algorithms needed to process the images. SafetyEYE uses double stereoscopic vision based on three CMOS image sensors to determine the coordinates of any object in the monitored area. The robot’s radius of action, and therefore the size and shape of the danger zone in this workstation, is visualized by colored spatial segments superimposed on the video image. The algorithms search for any changes in pixel values from one image to the next, changes that correspond to the movement of an object. If such movement is detected within the protection zone, the safety system sounds an alarm and stops or slows the robot.
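
The article describes the motion logic only at a high level: compare successive images pixel by pixel and react when change falls inside a protection zone. The Python sketch below illustrates that logic in miniature; the threshold values, the rectangular zone, and the single grayscale view are illustrative assumptions (the real system, per the description above, works from stereoscopic depth, so its zones are spatial segments rather than flat image regions).

import numpy as np

def motion_in_zone(prev_frame, frame, zone_mask, diff_thresh=25, min_pixels=50):
    """Return True if enough pixels changed inside the protection zone.
    prev_frame, frame: 2-D uint8 grayscale images of equal size.
    zone_mask: boolean array, True where the protection zone lies.
    diff_thresh, min_pixels: assumed tuning values, not SafetyEYE's."""
    # Widen to int16 first so the subtraction cannot wrap around at 0/255.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > diff_thresh
    return np.count_nonzero(changed & zone_mask) >= min_pixels

# Example: a person-sized blob appears inside the zone between two frames.
h, w = 240, 320
zone = np.zeros((h, w), dtype=bool)
zone[60:180, 100:220] = True                  # assumed rectangular danger zone

frame0 = np.full((h, w), 128, dtype=np.uint8)
frame1 = frame0.copy()
frame1[90:150, 130:170] = 40                  # simulated intruding object

if motion_in_zone(frame0, frame1, zone):
    print("Protection zone breached: slow or stop the robot")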
