Anyone who has been involved with computer vision for a number of years will testify that much work needs to be done before machine-vision systems can emulate the power of the human visual system. Despite hardware advances in multicore CPUs, DSPs, GPUs, and FPGAs, researchers are still far from modeling how the human brain perceives and understands the visual world.
Yet many machine-vision systems require only simple measurement tasks to be performed. It is often not necessary to perform sophisticated image-processing functions to analyze image features. In these cases, simpler algorithms such as edge detection, histogram analysis, thresholding, and blob analysis can be used to perform a desired image-analysis function.
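To make that concrete, the following Python/OpenCV fragment is a minimal sketch of two of those operators, thresholding followed by blob analysis; the file name part.png is a hypothetical stand-in for a captured frame.

```python
import cv2

# Load a grayscale image of the part under inspection
# ("part.png" is a hypothetical file name).
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks a global threshold from the image histogram,
# separating the part from the background without manual tuning.
_, binary = cv2.threshold(image, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Blob analysis: find the connected regions and measure each one.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    area = cv2.contourArea(contour)
    x, y, w, h = cv2.boundingRect(contour)
    print(f"blob at ({x}, {y}): area={area:.0f}, size={w}x{h}")
```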
The premise that relatively few algorithms may be necessary for an effective machine-vision system raises the question of why these systems are not more intelligent and why they cannot be deployed more rapidly.
Indeed, a quick survey of the Automated Imaging Association’s web site, where end users pose questions to vendors of vision equipment, highlights the demand for such systems. Many of the questions posed are vague and must be investigated further by the system developer. The reason for this is simple: no software or hardware is currently available that allows the system integrator to automatically understand image features and assess whether certain measurement functions can be performed.
The number of machine-vision systems deployed illustrates that much can be accomplished with the correct use of lighting, optics, cameras, frame grabbers, and software. But because all these applications are different, system developers must tailor their hardware and software to meet the demands of each application.
When an application demands a single image-analysis function such as barcode reading, point products are available to perform the task. When multiple facets of an object such as its dimensions, position, color, defects, and barcode need to be inspected, the task becomes more challenging.
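To illustrate how little code such a point task can take, the sketch below decodes a QR code, used here as a stand-in for a barcode since OpenCV’s core API ships a QR detector; the file name is hypothetical.

```python
import cv2

# Decode a code on a label ("label.png" is a hypothetical file name).
# A QR code stands in for a barcode here; 1-D barcodes need a
# separate library or an OpenCV contrib module.
image = cv2.imread("label.png")
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(image)
print(f"decoded: {data}" if data else "no code found")
```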
Tailoring vision systems to perform more complex tasks is the added value that system integrators bring to their customers. Many are reluctant to divulge which algorithms they use, because many inspection tasks, the analysis of color among them, can be performed in a number of ways using different color spaces and statistical methods.
Most color-analysis algorithms of this kind are well documented. Still, end users who simply need a system to measure a specific color often do not have the time, willingness, or money to investigate whether one color-analysis method will perform better than another. They would rather have a machine-vision system optimized to perform the task.
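As one example of such a documented method, the sketch below measures a region’s mean color in the CIE L*a*b* space, where Euclidean distance roughly tracks perceived color difference. The tolerance, file name, and reference values are illustrative assumptions, and OpenCV’s 8-bit L*a*b* scaling makes the distance only a rough proxy for Delta E.

```python
import cv2
import numpy as np

def color_matches(bgr_roi, reference_lab, tolerance=10.0):
    """Return True if the region's mean L*a*b* color lies within
    `tolerance` (a rough Delta E) of the reference color."""
    lab = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(mean_lab - reference_lab) <= tolerance

# Hypothetical usage: check a cropped region against a golden sample.
roi = cv2.imread("sample_roi.png")            # assumed file name
reference = np.array([180.0, 130.0, 140.0])   # assumed target color
print(color_matches(roi, reference))
```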
To ease system setup and programming, OEM vendors must consider making their products more intelligent. It is perhaps not necessary in many cases to study how the human brain perceives objects, colors, or defects. For example, a system could be programmed to perform color image analysis in multiple color spaces.
By running a number of different algorithms automatically, such a system could use a simple GUI to indicate the best fit for the user’s application. The concept could be extended to automatically locate multiple regions of interest within an image and report their color, any barcodes present, and potential defects, as in the sketch below.
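Here is a minimal sketch of that idea, assuming the user can supply a few labeled good and bad sample images: score each candidate color space by how cleanly it separates the two classes, then report the winner. The scoring heuristic is an illustrative assumption, not an established product feature.

```python
import cv2
import numpy as np

# Candidate color spaces to try automatically (OpenCV conversion codes).
COLOR_SPACES = {
    "RGB": cv2.COLOR_BGR2RGB,
    "HSV": cv2.COLOR_BGR2HSV,
    "Lab": cv2.COLOR_BGR2LAB,
    "YCrCb": cv2.COLOR_BGR2YCrCb,
}

def mean_color(bgr_image, code):
    """Mean pixel value of an image in the given color space."""
    converted = cv2.cvtColor(bgr_image, code).astype(np.float32)
    return converted.reshape(-1, 3).mean(axis=0)

def best_color_space(good_samples, bad_samples):
    """Score each space by how far apart the known-good and known-bad
    sample images sit relative to their spread; return the winner."""
    scores = {}
    for name, code in COLOR_SPACES.items():
        good = np.array([mean_color(img, code) for img in good_samples])
        bad = np.array([mean_color(img, code) for img in bad_samples])
        separation = np.linalg.norm(good.mean(axis=0) - bad.mean(axis=0))
        spread = good.std(axis=0).mean() + bad.std(axis=0).mean() + 1e-6
        scores[name] = separation / spread
    return max(scores, key=scores.get), scores
```

A GUI layered over such a routine would only need to display the ranked scores and preselect the winning space.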
Performing more of these tasks automatically would diminish the role of the machine-vision software developer. But given the rapid advances in FPGA designs, multicore processors, and low-cost memory, system integrators and end users may indeed see these approaches embedded in smarter machine-vision systems. Such developments will represent a first step toward building even more sophisticated systems that more closely emulate human visual perception and understanding.