In optical surface inspection for industrial quality assurance, there is an increasing trend towards gathering more information from the scanned surface, a trend that has been met by higher line-scan rates and higher-resolution sensors. In the future, it will be necessary to inspect a greater variety of surface properties; with this richer data, classification algorithms will become more discriminative and thus more robust.
Today, a number of different methods exist to obtain large amounts of disparate data from a scanned scene. Systems that sort plastic waste, for example, may use multispectral imaging. To capture the shape of objects on a production line or in bin-picking applications, line-scan or 3D cameras can be used. Other techniques, such as acquiring direction-dependent reflection properties, can also serve material classification. Common to all of these is the multi-dimensional nature of the image data that must be acquired.
In web inspection applications developed for 3D surface analysis, multi-line-scan technology will offer new capabilities. In conjunction with computational imaging techniques, these include correction of optical aberrations, noise reduction, adaptive time delay integration (TDI), and dynamic range enhancement through multiple exposures.
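The TDI principle mentioned above can be illustrated with a minimal sketch: the same surface line is imaged by several sensor lines in succession, and averaging the registered exposures reduces noise roughly by the square root of the number of stages. All signal values and noise levels below are illustrative assumptions, not parameters of any particular sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth intensity profile of one surface line.
true_line = np.sin(np.linspace(0, 2 * np.pi, 256)) * 0.4 + 0.5

def acquire(noise_sigma=0.1):
    """One noisy line-scan exposure of the same surface line."""
    return true_line + rng.normal(0.0, noise_sigma, true_line.shape)

# TDI: average N registered exposures of the same line; noise drops ~sqrt(N).
n_stages = 16
tdi_line = np.mean([acquire() for _ in range(n_stages)], axis=0)

single_err = np.std(acquire() - true_line)
tdi_err = np.std(tdi_line - true_line)
print(f"single-exposure noise: {single_err:.3f}, "
      f"{n_stages}-stage TDI noise: {tdi_err:.3f}")
```

In a real multi-line-scan system the "adaptive" part lies in choosing the number of stages and the line registration at runtime; this sketch only shows why the accumulation pays off.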
Extending such multi-line-scan systems with multispectral imaging techniques will allow the sensors to serve as measuring devices for material properties. Similarly, multi-polarization imaging with multi-line-scan technology will enable the inspection of glossy and transparent objects.
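One standard way such polarization data is exploited, sketched below under idealized assumptions, is to compute the degree of linear polarization (DoLP) from four polarizer orientations via the Stokes parameters; specular (glossy) reflections yield high DoLP, matte surfaces low DoLP. The function name and input convention are illustrative, not taken from any specific system.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP from intensities behind 0/45/90/135-degree polarizers.

    Stokes parameters: s0 = total intensity, s1 and s2 = linear
    polarization components. DoLP = sqrt(s1^2 + s2^2) / s0.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)

# Fully polarized glossy highlight vs. unpolarized matte surface.
glossy = degree_of_linear_polarization(1.0, 0.5, 0.0, 0.5)
matte = degree_of_linear_polarization(0.5, 0.5, 0.5, 0.5)
```

The same function applies unchanged to whole image arrays, which is how a multi-polarization line-scan system would use it per scan line.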
In the future, single-sensor systems with flexibly addressable pixels will enable a variety of applications by allowing regions of pixels to be addressed as required, even with different exposure values. This concept is similar to that already used by FPGA manufacturers, whose circuits can be reconfigured as necessary. It will also be possible to change the functionality of such sensors dynamically to adapt to changes in the scene or in the illumination.
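A per-region exposure configuration of this kind might be described as follows; this is a hypothetical sketch with an idealized linear sensor model, and all field names and units are assumptions for illustration.

```python
import numpy as np

# Hypothetical region descriptors: each addressable pixel region
# carries its own exposure value (names and units are illustrative).
regions = [
    {"rows": slice(0, 2), "exposure_us": 50},    # bright, glossy area
    {"rows": slice(2, 4), "exposure_us": 400},   # dark, matte area
]

def read_out(scene, regions):
    """Simulated read-out: each region integrates with its own exposure.

    Signal is modeled as scaling linearly with integration time.
    """
    frame = np.zeros_like(scene, dtype=float)
    for r in regions:
        frame[r["rows"]] = scene[r["rows"]] * r["exposure_us"]
    return frame

scene = np.full((4, 8), 1.0)   # uniform test scene
frame = read_out(scene, regions)
```

Adapting to scene or illumination changes would then amount to rewriting the `regions` list between frames, analogous to reconfiguring an FPGA.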
By exploiting the redundancy of image data gathered with more pixels, more views, and multiple exposure times, image processing sub-tasks can then be shifted flexibly between hardware and software to optimize the overall performance of an imaging system. This allows the maximum amount of information to be acquired from a scene, enabling more reliable object classification.
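Exploiting redundant exposures can be sketched with a simple exposure-fusion step: frames taken at different exposure times are normalized by their integration time and merged, with saturated pixels excluded, so that dark regions come from the long exposure and bright regions from the short one. This assumes an idealized linear sensor; the function and its parameters are illustrative.

```python
import numpy as np

def fuse_exposures(frames, exposure_times, saturation=1.0):
    """Merge differently exposed frames into one radiance estimate.

    Each frame is divided by its exposure time; clipped (saturated)
    pixels are ignored, extending the usable dynamic range.
    """
    acc = np.zeros_like(frames[0], dtype=float)
    weight = np.zeros_like(frames[0], dtype=float)
    for frame, t in zip(frames, exposure_times):
        valid = frame < saturation          # ignore clipped pixels
        acc += np.where(valid, frame / t, 0.0)
        weight += valid
    return acc / np.maximum(weight, 1)

radiance = np.array([0.02, 0.2, 2.0])   # true scene radiance (dark..bright)
times = [1.0, 0.1]                      # long and short exposure
frames = [np.clip(radiance * t, 0.0, 1.0) for t in times]
est = fuse_exposures(frames, times)
```

In the hardware/software split discussed above, such a fusion could run on the sensor's FPGA for throughput or in software for flexibility, without changing the result.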
Ernst Bodenstorfer is a Scientist with the Austrian Institute of Technology (AIT; Vienna, Austria; www.ait.ac.at)