Software makes the imaging difference
George Kotelly, Executive Editor
Whereas machine-vision and imaging hardware captures images, specialized software customizes them. The need for customization arises because OEMs, designers, and developers must often meet customer-specific imaging-application requirements.
For example, fuzzy-logic-based software has been implemented in a customized pharmaceutical vision-inspection system, says contributing editor John Haystead, to simultaneously count and identify tablets or capsules prior to bottling. This software runs on the system's host processor, provides detailed imaging information, and saves costs by eliminating the need for a dedicated image processor (see p. 36).
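For readers who want a feel for the approach, the following minimal sketch (not the system described in the article) classifies and counts hypothetical blob areas with triangular fuzzy-membership functions in Python; all sizes and thresholds are assumed:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical blob areas (pixels) measured from a segmented bottling-line image.
blob_areas = np.array([410, 395, 830, 402, 845, 415, 820])

# Assumed membership functions for the "tablet" and "capsule" size classes.
tablet_grade = triangular(blob_areas, 300, 400, 500)
capsule_grade = triangular(blob_areas, 700, 830, 950)

# Each blob is assigned to the class with the higher membership grade, then counted.
labels = np.where(tablet_grade >= capsule_grade, "tablet", "capsule")
print(f"tablets: {(labels == 'tablet').sum()}, capsules: {(labels == 'capsule').sum()}")
```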
Special imaging software for a remote-sensing satellite equipped with synthetic-aperture radar is enabling the generation of digital elevation models that measure ground-surface movements. According to contributing editor Shari Worthington, a major oil company is applying this software's semiautomatic processing tools to measure the subsidence of land surfaces over oil-well fields (see p. 42).
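The underlying idea can be reduced to a simple sketch: once two elevation models of the same area are in hand, their difference gives the surface movement. The data below are synthetic stand-ins, not the company's measurements:

```python
import numpy as np

# Hypothetical digital elevation models (meters) of the same oil-field area,
# derived from SAR passes taken months apart; real DEMs would come from
# interferometric processing of the satellite data.
dem_before = np.random.default_rng(0).normal(120.0, 2.0, size=(256, 256))
dem_after = dem_before - 0.05  # assume a uniform 5 cm of subsidence for illustration

# Ground-surface movement is the elevation difference between acquisitions.
subsidence = dem_before - dem_after
print(f"mean subsidence over the field: {subsidence.mean() * 100:.1f} cm")
```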
An image-processing technique called deconvolution has proven successful at compensating for blurring in 3-D microscopy by applying point-spread functions. Contributing editor Lawrence Curran reports that a new software tool applies 3-D blind-deconvolution algorithms that automatically remove haze, blur, and noise from 3-D micrographs (see p. 26).
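The tool itself is not shown here; as a simpler stand-in, the sketch below applies non-blind Richardson-Lucy deconvolution with a known Gaussian point-spread function using scikit-image. A blind algorithm would instead estimate the PSF from the data; the volume and PSF here are synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Hypothetical 3-D "micrograph": two bright points blurred by a Gaussian PSF plus noise.
rng = np.random.default_rng(1)
volume = np.zeros((32, 64, 64))
volume[16, 20, 20] = volume[16, 40, 44] = 1.0
blurred = gaussian_filter(volume, sigma=2) + rng.normal(0, 1e-4, volume.shape)

# Matching Gaussian PSF; a blind algorithm would estimate this from the image itself.
psf = np.zeros((17, 17, 17))
psf[8, 8, 8] = 1.0
psf = gaussian_filter(psf, sigma=2)
psf /= psf.sum()

# Iterative Richardson-Lucy deconvolution sharpens the blurred volume.
restored = richardson_lucy(np.clip(blurred, 0, None), psf, 20)
print(restored.shape, restored.max())
```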
Image-enhancement algorithms are those most commonly applied by vision-system designers, says Peter Eggleston. In the first part of a two-part series, he explains how these algorithms can improve image quality to facilitate interpretation or inspection (see p. 23).
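As a small taste of such algorithms, the sketch below applies two common enhancements, contrast stretching and histogram equalization, to a hypothetical low-contrast image using scikit-image; the image and percentile limits are assumptions for illustration:

```python
import numpy as np
from skimage import exposure

# Hypothetical low-contrast 8-bit inspection image.
rng = np.random.default_rng(2)
image = rng.integers(100, 140, size=(128, 128)).astype(np.uint8)

# Contrast stretching: map the 2nd-98th percentile range onto the full gray scale.
p2, p98 = np.percentile(image, (2, 98))
stretched = exposure.rescale_intensity(image, in_range=(p2, p98))

# Histogram equalization: redistribute gray levels to flatten the histogram.
equalized = exposure.equalize_hist(image)

print(image.min(), image.max(), "->", stretched.min(), stretched.max())
```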
Recent improvements in optical-correlation techniques, as spotlighted by editor-at-large Andy Wilson, have helped developers build prototype systems that recognize targets by matching input images against target images, using various peak-detection, filtering, morphological, and hit/miss-transform operators (see p. 50).
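Optical correlators perform this matching in hardware; a digital sketch nevertheless shows the correlation and peak-detection steps. The scene, target, and embedding location below are hypothetical:

```python
import numpy as np
from scipy.signal import correlate2d

# Hypothetical input scene containing a small target patch.
rng = np.random.default_rng(3)
scene = rng.normal(0, 0.1, size=(64, 64))
target = rng.normal(0, 1.0, size=(8, 8))
scene[30:38, 20:28] += target  # embed the target at row 30, column 20

# Digital stand-in for an optical correlator: cross-correlate scene and target,
# then detect the correlation peak to locate the recognized target.
corr = correlate2d(scene, target, mode="valid")
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("target detected at", peak)  # expected near (30, 20)
```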
Before software can be implemented, an image must first be captured by a CCD camera. Although available CCD cameras can be broadly classified as linear, time-delay-integration, and area-array devices, the CCDs used in them come from a range of manufacturers. Therefore, says Andy Wilson, cameras should be compared in terms of quantum efficiency, resolution, noise, and dark count (see p. 56).
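To make those figures of merit concrete, the back-of-the-envelope sketch below estimates a camera's signal-to-noise ratio from an assumed operating point; every number here is hypothetical:

```python
import math

# Hypothetical CCD operating point, for comparing cameras on paper.
photons = 5000      # photons per pixel per exposure
qe = 0.60           # quantum efficiency
dark_rate = 20      # dark signal, electrons/pixel/second (assumed)
exposure_s = 0.1    # exposure time, seconds
read_noise = 10     # read noise, electrons rms

signal = photons * qe
dark = dark_rate * exposure_s
# Shot noise on the signal and dark charge follows Poisson statistics;
# read noise adds in quadrature.
noise = math.sqrt(signal + dark + read_noise ** 2)
print(f"SNR = {signal / noise:.1f}")
```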