Vision: An essential ingredient for autonomy

April 14, 2016
In last week's column, I used two case studies, robotic vacuum cleaners and drones, to illustrate the perhaps surprising power consumption savings that can be obtained by adding computer vision capabilities to a system design. In focusing on power savings, however, I wanted to make sure that a bigger-picture point wasn't lost—the necessity of vision processing for robust autonomy. 

Take robotic vacuum cleaners. Early-generation units included pressure sensors that rerouted a robot in response to detected collisions with walls and other objects, infrared "cliff" detectors to prevent the robot from tumbling down steps, and external infrared and RF "beacons" that allowed the robot to be contained in a defined space and guided it back to its charging dock if the battery ran low while it was in a different room. I owned several of these robotic vacuum cleaners, and their performance was passable at best.

The collision detectors didn't prevent noisy (and potentially wall-marring) impacts, and the random-direction response often resulted in the unit getting stuck under a piece of furniture, for example. The infrared detectors ceased to work reliably once they got dirty... which was pretty much a given with a vacuum cleaner! The standalone beacons were a hassle to set up, move, and pick up afterwards, not to mention suffering from drained batteries at inconvenient times. And once the robot recharged itself, it didn't remember which rooms (and areas of rooms) it had already traversed.

The built-in vision processing in the latest-generation models solves all of these issues. The cameras sit high enough above the floor that they aren't obscured by dust, and they can see objects in the robot's path before it hits them. Because the units store a "map" of where they've already been, along with the location of the charging dock, they become highly efficient autonomous travelers and cleaners. Enthusiastic reviews suggest that such products robustly deliver on their promises.
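
To make the "map" idea concrete, here is a minimal sketch in Python of the kind of coverage grid such a robot might maintain. The class and method names here are invented for illustration; actual products build their maps with visual SLAM and use far more sophisticated planners.

    # Hypothetical coverage map: a grid of visited cells plus the dock
    # location. Real robots localize with visual SLAM; this models only
    # the bookkeeping that makes systematic (non-random) cleaning possible.
    class CoverageMap:
        def __init__(self, width_cells, height_cells, dock_cell):
            self.width = width_cells
            self.height = height_cells
            self.dock = dock_cell    # (x, y) of the charging dock
            self.visited = set()     # cells already cleaned

        def mark_visited(self, cell):
            self.visited.add(cell)

        def next_unvisited(self):
            # Scan row by row for the first uncleaned cell. A real planner
            # would pick the nearest reachable cell instead.
            for y in range(self.height):
                for x in range(self.width):
                    if (x, y) not in self.visited:
                        return (x, y)
            return None              # whole floor covered

    # Usage: "clean" a 3x3 room, then head home to the dock.
    room = CoverageMap(3, 3, dock_cell=(0, 0))
    cell = room.next_unvisited()
    while cell is not None:
        room.mark_visited(cell)      # vacuum this cell
        cell = room.next_unvisited()
    print("Coverage complete; returning to dock at", room.dock)

The point of the sketch: with a persistent map, the robot never re-covers ground after a recharge, which is exactly what the early sensor-only units couldn't manage.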

Or take ADAS (advanced driver assistance systems), along with self-driving vehicles, which I discussed two weeks ago. Supplemental sensor technologies such as radar and LIDAR (light detection and ranging) will likely be included alongside cameras in future vehicles, since computer vision doesn't do very well after dark or in poor weather. But only vision enables a vehicle to discern the content of a traffic sign, for example, or to uniquely identify the person approaching a door or sitting behind the wheel. Vision also greatly simplifies the task of distinguishing between, say, a pedestrian and a tree trunk.
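
A toy example illustrates why the sensors are complementary rather than redundant. The sketch below (in Python, with invented function and parameter names; it does not represent any production ADAS stack) fuses one radar return with one camera classification: radar supplies reliable range regardless of lighting or weather, while only the camera can say what the object actually is.

    # Hypothetical camera/radar fusion for a single detected object.
    # Radar gives range in any conditions; vision gives identity when
    # it can see. Neither sensor alone suffices.
    def fuse(radar_range_m, camera_label, camera_confidence):
        if camera_label is None or camera_confidence < 0.5:
            # Camera blinded (night, fog): fall back to radar-only caution.
            return {"range_m": radar_range_m, "label": "unknown obstacle"}
        # Camera sees clearly: keep radar's range, adopt vision's identity.
        return {"range_m": radar_range_m, "label": camera_label}

    # Daytime: vision distinguishes a pedestrian from a tree trunk.
    print(fuse(22.0, "pedestrian", 0.93))
    # Night or fog: radar still reports an obstacle at a known range.
    print(fuse(22.0, None, 0.0))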

Radar and LIDAR would both be cost-prohibitive, not to mention too heavy and bulky, to include in an autonomous drone intended for consumers (larger, bigger-budget unmanned aerial vehicles tailored for law enforcement and military applications are a different matter). Consumer drones are also unlikely to be flown in heavy rain, thick fog, or other challenging weather. New products such as DJI's Phantom 4, outfitted with vision processors, are therefore able to leverage the on-board cameras (already present to capture in-flight video footage) for collision avoidance, as well as to autonomously follow a mountain biker, skier, or snowboarder, for example.
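
As a rough illustration of how such a "follow me" feature can work, here is a hypothetical Python sketch of a proportional controller that keeps a tracked subject's bounding box centered in the frame and at a constant apparent size. All names and gains are invented; DJI's actual tracking and control pipeline is not public.

    # Hypothetical follow-me control from one video frame's tracker output.
    def follow_subject(box_center_x, box_width, frame_width=1280,
                       target_width=300, k_yaw=0.002, k_speed=0.01):
        # Steer toward the subject: a positive error means the subject is
        # right of frame center, so yaw right.
        yaw_rate = k_yaw * (box_center_x - frame_width / 2)
        # Hold distance: a smaller-than-target box means the subject is
        # pulling away, so speed up.
        forward_speed = k_speed * (target_width - box_width)
        return yaw_rate, forward_speed

    # Subject drifting left and pulling ahead: yaw left, accelerate.
    print(follow_subject(box_center_x=500, box_width=250))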

Deploying fully autonomous vehicles widely is going to take a while, given the complexity of the application and the associated technological and regulatory challenges. In the meantime, though, plenty of semi-autonomous vehicles, as well as fully autonomous devices such as the ones I've already mentioned, are on sale today. A few other examples: in its warehouses, Amazon has deployed 15,000 Kiva mobile robots to speed the collection of items for customer orders, and in a few Silicon Valley hotels, robots from Savioke deliver snacks and sundries to guests. So while it may be a few years before you can use a self-driving car for your daily commute, today you can buy a car that provides significant driving assistance, as well as an autonomous robot to keep your floors clean, shoot aerial video, bring a snack to your hotel room, or assemble your Amazon.com order.

Enabling autonomy is one of the most important applications of computer vision, and autonomous devices are becoming a big market for vision technology. Reflecting this importance, we've made autonomy a major theme of next month's Embedded Vision Summit, which takes place May 2-4 in Santa Clara, California. It's the only event in the world focused entirely on building better products using computer vision, and this year's program includes several presentations on various aspects of enabling autonomy. Register now, as space is limited and seats are filling up!

Regards,

Brian Dipert
Editor-in-Chief, Embedded Vision Alliance
[email protected]
