Modern imaging optics are complex, containing up to two dozen optical elements to compensate for geometric distortion, spherical and chromatic aberration, and coma. Such complex lenses are large, heavy, and expensive, however, and in many imaging systems the need to reduce costs mandates the use of simpler lenses.
Although such lenses exhibit optical aberrations, their effects can be reduced using sophisticated image processing techniques. This is the idea behind a new mathematical technique proposed by Dr. Felix Heide of the University of British Columbia (Vancouver, BC, Canada; www.cs.ubc.ca) that was presented at this year's SIGGRAPH conference in Anaheim, CA.
To demonstrate the effects of this technique, Dr. Heide and his colleagues developed a single-element lens mounted in a barrel on a 10Mpixel EOS 40D camera from Canon (Melville, NY, USA; www.usa.canon.com). Images captured using this simple lens exhibit severe artifacts, the most prominent of which is image blur. Indeed, the corresponding point spread functions (the measure of the lens's optical response) are not only large but also differ across the RGB color channels.
To recover the image, each of the color channels could be deconvolved independently with its point spread function. However, this technique results in images that suffer from severe chromatic artifacts such as ringing and color blur.
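The naive per-channel approach can be sketched as a Wiener (regularized frequency-domain) deconvolution. The code below is a minimal illustration, not the authors' implementation; the function name and parameters are assumptions for the example:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Naive frequency-domain (Wiener) deconvolution of one color channel.

    blurred: 2-D array holding a single blurred color channel.
    psf: 2-D point spread function (smaller than the image; padded here).
    noise_power: regularization constant; too small a value amplifies
    noise and produces the ringing artifacts described in the text.
    """
    # Pad the PSF to the image size and center it at the origin so that
    # multiplication in the frequency domain equals circular convolution.
    psf_padded = np.zeros_like(blurred, dtype=float)
    psf_padded[:psf.shape[0], :psf.shape[1]] = psf
    psf_padded = np.roll(psf_padded,
                         (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                         axis=(0, 1))

    B = np.fft.fft2(blurred)
    K = np.fft.fft2(psf_padded)
    # Wiener filter: K* / (|K|^2 + noise_power)
    restored = np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + noise_power))
    return np.real(restored)
```

Deconvolving each RGB channel independently this way, with a different PSF per channel, leaves edges in slightly different positions in each channel, which is the source of the color fringing the authors set out to eliminate.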
Rather than use such techniques, Dr. Heide and his colleagues have developed a method that uses cross-channel (RGB) information to enforce that edges appear at the same position in all three color channels, effectively eliminating these color artifacts.
This reconstruction is posed as an optimization task that, in effect, uses spatial frequencies present in one channel to reconstruct damped parts of the spectrum in another. Once such images have been captured, they can be deconvolved with the point spread function of the lens.
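A cross-channel prior of this kind can be illustrated by a penalty that is zero when two channels' gradients are parallel, i.e. when their edges coincide. The sketch below only evaluates such a penalty; the actual optimizer in the paper minimizes a data-fitting term plus priors of this general form, and the function shown is an assumption for illustration:

```python
import numpy as np

def cross_channel_penalty(img):
    """Cross-channel gradient-consistency penalty (illustrative sketch).

    img: H x W x 3 array. For each pair of channels (l, m) this sums
    |grad(i_l) * i_m - grad(i_m) * i_l|, which vanishes when the two
    channels' gradients are parallel, i.e. edges sit at the same
    positions in both channels.
    """
    grads = [np.gradient(img[..., c]) for c in range(3)]  # (dy, dx) per channel
    penalty = 0.0
    for l in range(3):
        for m in range(l + 1, 3):
            for axis in range(2):  # y- and x-gradients
                penalty += np.abs(grads[l][axis] * img[..., m]
                                  - grads[m][axis] * img[..., l]).sum()
    return penalty
```

For an image whose channels are scaled copies of each other (edges aligned) the penalty is exactly zero; shifting the edge in one channel makes it strictly positive, which is what drives edges back into alignment during optimization.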
However, since these point spread functions vary spatially across the image, a number of point spread functions must be computed across the field. To accomplish this, a calibration pattern consisting of white-noise patches is used.
Using the simple lens, a blurred image of the pattern is acquired and the patches extracted. A sharp, in-focus image is also captured with the lens stopped down to a small aperture. By comparing the two, the amount of blur in each region of the image can be estimated and used to reconstruct images from the RGB color channels using patch-wise deconvolution.
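One way to estimate a local PSF from such a sharp/blurred patch pair is regularized frequency-domain division, which works well precisely because the white-noise calibration pattern has a flat spectrum. This is a hedged sketch, not the paper's exact calibration procedure; the function name and parameters are assumptions:

```python
import numpy as np

def estimate_psf(sharp, blurred, psf_size=9, eps=1e-3):
    """Estimate a local PSF from a sharp/blurred patch pair (sketch).

    sharp: patch captured with the lens stopped down (treated as ground truth).
    blurred: the same patch captured at the working aperture.
    Solves blurred = psf (*) sharp in the frequency domain with Tikhonov
    regularization eps; one PSF is estimated per image region because the
    blur varies across the field.
    """
    S = np.fft.fft2(sharp)
    B = np.fft.fft2(blurred)
    # Regularized division: K = B S* / (|S|^2 + eps)
    K = B * np.conj(S) / (np.abs(S) ** 2 + eps)
    psf = np.real(np.fft.ifft2(K))
    psf = np.fft.fftshift(psf)  # move the kernel's center to the array center
    # Crop to psf_size x psf_size around the center and renormalize.
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    h = psf_size // 2
    psf = psf[cy - h:cy + h + 1, cx - h:cx + h + 1]
    psf = np.clip(psf, 0, None)
    return psf / psf.sum()
```

Repeating this over a grid of patches yields the spatially varying set of PSFs that the patch-wise deconvolution step then uses.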
To demonstrate the effectiveness of this technique, Dr. Heide used a single plano-convex lens with a focal length of 130mm and an aperture of f/4.5 fitted to the 10Mpixel Canon EOS 40D camera. As can be seen from Figure 1, the method is capable of producing high-quality digital photographs with very simple lens designs. Furthermore, while a commercially available camera lens such as the Canon 28-105mm zoom, set to a 105mm focal length and an aperture of f/4.5, exhibited reduced aberrations compared with the uncorrected simple lens, the software removed the residual blurring effects.
While using such simple lenses may reduce the cost of imaging systems, processing 10Mpixel images with such techniques is compute-intensive. Running the algorithm on a single core of a 2.4GHz Intel Core2 Quad CPU with 4GBytes of RAM takes approximately 18s. While such techniques may not be useful in high-speed machine-vision tasks, they may find use in low-cost digital imaging systems. More information, including open-source code, the original SIGGRAPH paper, and numerous test images, can be found at: http://bit.ly/11baPQF