Snapshots
Robot fetches objects using lasers and vision
A team of researchers led by Charlie Kemp, director of the Center for Healthcare Robotics in the Health Systems Institute at the Georgia Institute of Technology and Emory University (Atlanta, GA, USA), has found a way to instruct a robot to find and deliver an item it may never have seen before using a more direct manner of communication: a laser pointer.
El-E (Ellie), a robot designed to help users with limited mobility perform everyday tasks, autonomously moves to an item selected with a green laser pointer, picks up the item, and delivers it to the user, to another person, or to a selected location such as a table. El-E, named for its ability to elevate its arm and for the arm's resemblance to an elephant trunk, finds objects using an omnidirectional camera system and can grasp and deliver several types of household items, including towels, pill bottles, and telephones, from floors or tables.
Imaging software relies on fewer pixels
It takes surprisingly few pixels of information to identify the subject of an image, a team at the Massachusetts Institute of Technology (MIT; Cambridge, MA, USA) has found. The discovery could lead to advances in the automated identification of online images and, ultimately, provide a basis for computers to see as humans do. Antonio Torralba, assistant professor in the MIT Computer Science and Artificial Intelligence Laboratory, and colleagues have been trying to determine the smallest amount of information, that is, the shortest numerical representation, that can be derived from an image and still provide a useful indication of its content.
“We’re trying to find very short codes for images,” says Torralba, “so that if two images have a similar sequence [of numbers], they are probably similar—composed of roughly the same object, in roughly the same configuration.” If one image has been identified with a caption or title, then other images that match its numerical code would likely show the same object (such as a car, or person) and so the name associated with one picture can be transferred to the others.
To find out how little image information is needed to recognize the subject of a picture, the researchers reduced images to lower and lower resolution, testing how many images at each level people could identify. "We can recognize what is in images, even if the resolution is very low, because we know so much about images," Torralba says. "The amount of information you need to identify most images is about 32 by 32." By contrast, even the small "thumbnail" images shown in a Google search are typically 100 by 100 pixels.
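The core idea, that a drastically reduced image can serve as a short numerical code for matching, can be illustrated with a simple sketch. This is not the MIT team's actual algorithm (their codes are far more compact); it is a hypothetical example, assuming plain grayscale images held as nested lists, that downsamples each image to 32 by 32 by block averaging and then compares the resulting vectors by Euclidean distance:

```python
def downsample(image, size=32):
    """Block-average a square grayscale image (list of lists) to size x size."""
    n = len(image)
    block = n // size  # assumes n is a multiple of size, for simplicity
    code = []
    for r in range(size):
        for c in range(size):
            total = 0
            for i in range(r * block, (r + 1) * block):
                for j in range(c * block, (c + 1) * block):
                    total += image[i][j]
            code.append(total / (block * block))
    return code

def distance(code_a, code_b):
    """Euclidean distance between two image codes."""
    return sum((a - b) ** 2 for a, b in zip(code_a, code_b)) ** 0.5

# Three synthetic 128x128 "scenes": a bright-left image, a near copy of it,
# and a bright-right image.
bright_left  = [[255 if j < 64 else 0 for j in range(128)] for _ in range(128)]
near_copy    = [[250 if j < 64 else 5 for j in range(128)] for _ in range(128)]
bright_right = [[0 if j < 64 else 255 for j in range(128)] for _ in range(128)]

a, b, c = (downsample(img) for img in (bright_left, near_copy, bright_right))
assert distance(a, b) < distance(a, c)  # the near copy is the closer match
```

In this toy setup, images of roughly the same scene produce nearby codes, so a caption attached to one image could be propagated to its nearest neighbors, the transfer scheme Torralba describes above.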
Testing for GigE Vision compatibility
The GigE Vision and GenICam standards have been in place for well over a year and provide an opportunity to seamlessly integrate vision systems into a network environment using industry-standard Ethernet components such as cables and switches. Traffic monitoring and control, for example, is one vision application area that can benefit from these standards.
However, it has become apparent that not all Ethernet network systems are created equal, and careful consideration needs to be given to the quality of the components used in the network to ensure compatibility with vision.
In response, Stemmer Imaging Group (Puchheim, Germany; www.stemmer-imaging-group.com) has introduced a GigE Vision evaluation service. Using industry-standard Ethernet test equipment, an entire network or sections of a network can be evaluated for suitability for vision applications. This includes evaluating transmission capabilities for both copper and fiber cable, as well as identifying any breaks in the cable. Switch capability can also be tested. In addition, all products in the company's portfolio are evaluated to ensure that Cat 6 cable really does conform to the Cat 6 specification and that other network components are truly vision-compliant.
Microsoft announces Robot Developer Studio
At the recent RoboBusiness Conference and Exposition in Pittsburgh, PA, Microsoft released the first community technology preview of Microsoft Robotics Developer Studio 2008, the new version of its robotics programming platform. Microsoft Robotics Developer Studio 2008 contains improvements in its runtime performance, distributed computational capabilities, and tools. Scheduled for release later this year, the first preview of the product is now available for evaluation and testing by developers, customers, and partners.
The platform is a Windows-based environment that can be used by academics, hobbyists, and commercial developers to create a variety of robotic programs and testing scenarios. Previous versions of the software gained widespread support throughout the robotics industry, with more than 200,000 copies downloaded and more than 50 companies pledging their support by joining the Microsoft Robotics Supporting Partner Program. For more information, go to www.microsoft.com/robotics.