Although some of the earliest work in machine-vision-system development was in robotic guidance, guidance has remained a relatively small but growing segment of the machine-vision market. Nello Zuech, industry analyst and president of Vision Systems International (Yardley, PA, USA), estimates that in 2002 the North American market for machine-vision-based robot guidance was about $30 million for hardware and software, or about 2.5% of the total North American machine-vision market; including complete engineering services, the total robot-guidance market was upward of $90 million.
Part of the problem has been that robotic guidance is a difficult application, requiring constant calibration to marry two separate coordinate systems: that of the vision system and that of the robot. This feat takes processing power, which has become an abundant resource as PC processing speeds climb. Robotic guidance does not stop with the PC, however; other technology developments have also played a role. For robot-guidance applications to multiply, cameras first had to shrink so they do not add unnecessary mass to the moving robotic arm; lighting systems had to be powerful enough to handle the movement of a robotic cell; fixtures had to be more flexible to cut hard-tooling costs; and, of course, software had to truly enter the third dimension to guide a robot to the location and orientation of a part.
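Marrying the two coordinate systems reduces, at its simplest, to applying a calibrated transform to every point the camera reports. A minimal 2-D sketch follows; the rotation angle and translation here are made-up assumptions, and a real calibration routine would solve for them from known targets:

```python
import numpy as np

# Hypothetical 2-D rigid transform (rotation + translation) mapping
# vision-frame coordinates (mm) into the robot's base frame.
theta = np.deg2rad(30.0)          # assumed rotation between the two frames
t = np.array([100.0, 250.0])      # assumed translation (mm)

# Homogeneous transform combining the rotation and translation
T = np.array([
    [np.cos(theta), -np.sin(theta), t[0]],
    [np.sin(theta),  np.cos(theta), t[1]],
    [0.0,            0.0,           1.0],
])

def vision_to_robot(p):
    """Map a point measured in the vision frame into the robot frame."""
    x, y = p
    return (T @ np.array([x, y, 1.0]))[:2]

# The vision-frame origin lands at the translation offset in the robot frame
print(vision_to_robot((0.0, 0.0)))
```

In practice, calibration software estimates these parameters automatically from fiducial targets, and 3-D guidance extends the same idea to a full six-degree-of-freedom pose.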
Like so many vision applications, robot guidance is easy to describe but difficult to accomplish. To help develop the technology, the Automated Imaging Association (AIA; Ann Arbor, MI, USA; www.machinevisiononline.org) and sister organization Robotic Industries Association (RIA) held their first conference on Machine Vision for Robot Guidance at the Sheraton Detroit Novi Hotel, October 8 and 9.
Small but focused
"We attracted about 70 attendees, plus 25 tabletop exhibits," said Jeff Burnstein, executive director of the AIA. "We wanted to help educate people about how to successfully use machine vision for robot guidance. Case studies of successful applications were highlighted, along with a background on the key vision technologies and a look at cutting-edge technologies that may impact future robot-guidance applications."
The first day of the workshop was divided into morning and afternoon tutorials. Valerie Bolhouse, an automation systems specialist at Ford Motor Company (Detroit, MI, USA), first introduced attendees to the technologies and terminologies they needed to know. Speaking to a room full of engineers who were pondering the value of deploying a vision-based robotic system, Bolhouse gave an engaging talk that began with how the laws of physics impact vision-system design and proceeded to clarify the basics of image processing. On the last topic she noted that most of the techniques are in the public domain and that vendors really differentiate themselves only through implementation.
Among the many lessons that Ford has learned while implementing vision-guided robotics is that there are many dead ends where the application is not right for a vision system. However, a good application provides great savings and efficiencies, although system performance will never be 100%, and it's important to pay attention to how rejects or failures are handled. "The ability to store images of rejected parts for later analysis is imperative to fine-tuning performance," Bolhouse said. "Especially since the vision system will be blamed for downtime!"
In the USA, the RIA breaks robotic applications into eight areas: spot welding (27%), arc welding (19%), dispensing/coating (7%), light material handling (13%), heavy material handling (23%), material removal (3%), light assembly (6%), and heavy assembly (2%). Of these applications, only spot welding is not promising as a growth area for vision-guided robotic systems.
In the afternoon, John Merva, vice president of sales and marketing at Advanced Illumination (Rochester, VT, USA; www.advancedillumination.com), discussed the importance of lighting, emphasizing the value of optimizing the front-end lighting design to create contrast in the image. John Stack, president of Edmund Industrial Optics (Barrington, NJ, USA; www.edmundoptics.com) and current president of the AIA, followed with a presentation on machine-vision optics. Like Merva, he stressed the importance of achieving contrast to improve image quality, a relatively easy-to-achieve aspect of optical design that is, unfortunately, often placed behind the quest for better resolution.
Stack's recipe for laying out an optical system was simple: define your mechanical constraints (how much space is available), define your fundamental parameters, lay out a straight-line imaging system, place the illumination, and then evaluate whether the system will fit in the space available. If it does not fit, then bend the lightpath with optics—and there are many options for fitting a system into a constrained space.
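As a back-of-the-envelope illustration of the "fundamental parameters" step in Stack's recipe, the required lens focal length can be estimated from the sensor size, the field of view, and the working distance. This is a first-order pinhole approximation, and the numbers below are hypothetical:

```python
def focal_length_mm(sensor_mm, fov_mm, working_distance_mm):
    """First-order pinhole estimate: f ~ sensor * WD / FOV.

    Valid when the working distance is much larger than the focal
    length; a precise layout would use the full thin-lens equations.
    """
    return sensor_mm * working_distance_mm / fov_mm

# Example: a 1/3-inch sensor (4.8 mm wide) imaging a 100 mm field
# of view from 300 mm away
f = focal_length_mm(4.8, 100.0, 300.0)
print(round(f, 1))  # 14.4 -> pick a nearby stock lens (e.g., 16 mm) and recheck the FOV
```

Only after these parameters are fixed does it make sense to evaluate whether the straight-line system fits the available space or must be folded with additional optics.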
Moderator David Dechow, president of Aptura Machine Vision Solutions (Wixom, MI, USA), opened the second day of the workshop with a focus on case studies. Adil Shafi, president of Shafi Inc. (Kensington, CT, USA; www.shafiinc.com), first gave an overview of 2- and 3-D technology for vision-guided robots and the case for implementation. The combination of vision and robotics can provide a return on investment in less than a year, allow the flexibility to run different products, and, if properly designed, provide a design life of 10 years, according to Shafi. However, vision has a tough reputation among workers on the plant floor. As a result, it's important to know the limits of the system; have a clearly defined role for the robot; provide an easy, nontechnical control interface; and ensure extensive worker training.
The morning wrapped up with three case studies, highlighted by automation-specialist Frank Masler's presentation on 3-D vision guidance to automate engine-head handling at Ford Motor Company. Later, Stephen Wienand from ISRA Vision Systems (Lansing, MI, USA; www.isravision.com) discussed the importance of automated on-the-fly calibration to synchronize the vision-system and robot coordinate systems, especially in tough applications such as depalletizing mixed loads.
Jack Justice, market segment manager at Motoman (West Carrollton, OH, USA; www.motoman.com), continued the afternoon briefing by showing how to use encoders on conveyors and material-handling systems in conjunction with vision systems to allow robots to pick moving parts regardless of position or orientation. John Chouinard, North American sales manager at RVSI/NER (Weare, NH, USA; www.nerlite.com), expanded on the previous lighting tutorial by showing ways to use simple ray-tracing methods to beat ambient-light problems in robotic-guidance applications and illustrating the importance of camera integration times when using high-intensity strobe illumination.
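The encoder approach to picking moving parts can be sketched simply: the vision system stamps each part with the encoder count at image capture, and the robot offsets the measured position by the belt travel since then. A minimal illustration, in which the encoder resolution is an assumed value:

```python
MM_PER_COUNT = 0.05  # assumed encoder resolution: conveyor travel per count

def part_position_now(x_at_capture_mm, counts_at_capture, counts_now):
    """Predict a part's current downstream position from encoder counts.

    The vision system records the part's position and the encoder count
    at the moment of image capture; the robot adds the belt travel
    accumulated since that moment.
    """
    travel_mm = (counts_now - counts_at_capture) * MM_PER_COUNT
    return x_at_capture_mm + travel_mm

# A part seen at 120 mm when the encoder read 10,000 counts, picked
# when the encoder reads 14,000 counts
print(part_position_now(120.0, 10_000, 14_000))  # 320.0
```

Because the offset comes from the encoder rather than from re-imaging the part, the robot can intercept parts at belt speeds faster than the vision system's update rate.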
The workshop concluded with presentations on the future of robotic guidance, including a discussion of calibration-free robotic control by Gary McMurray, senior research engineer at Georgia Tech (Atlanta, GA, USA), and visual line tracking by Donald Demotte, staff engineer for controller products at Fanuc Robotics (Rochester Hills, MI, USA; www.fanucrobotics.com). McMurray defined calibration-free vision-guided robotic control as visual servoing, which uses real-time vision feedback to accurately position the end-effector of a robot relative to a set of target features. The goal is to enable robots to function in an unstructured environment with minimal human intervention. Such advances bode well for increased growth in vision-guided robotic applications.
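Visual servoing closes the loop directly on image features rather than on a calibrated world position. A toy proportional controller conveys the idea; the gain and the direct pixel-to-motion mapping are simplifying assumptions, since a real implementation works through the camera's image Jacobian:

```python
import numpy as np

def servo_step(feature_px, target_px, gain=0.1):
    """One image-based visual-servoing update (illustrative only).

    Drives the image-plane error toward zero with a proportional law;
    a real controller maps the error through the image Jacobian to
    obtain end-effector velocities.
    """
    error = np.asarray(target_px, float) - np.asarray(feature_px, float)
    return gain * error   # commanded correction for this control cycle

# Each cycle, re-image the feature and step toward the target; the
# error shrinks by the gain factor every iteration
pos = np.array([100.0, 40.0])
target = np.array([320.0, 240.0])
for _ in range(50):
    pos += servo_step(pos, target)
print(pos)  # converges toward the target pixel location
```

Because the feedback is re-measured every cycle, such a controller tolerates drift in the camera-to-robot calibration, which is exactly the "calibration-free" property McMurray described.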