Hyperspectral system picks pickles
Pickles are made from cucumbers, and consumers expect pickles of consistent quality, free of the internal cavities or damage that can result from harvesting or shipping. Light-based methods for assessing vegetable quality have been developed, but their execution time makes them impractical on a processing line. To speed inspection, researchers at Michigan State University (East Lansing, MI, USA; www.msu.edu) are developing a hyperspectral imaging system to check the quality of pickling cucumbers. They have experimented with different wavelength ranges and image-processing algorithms to find the best approach.
In one version, a visible and near-IR transmittance imager covers the 450-1000-nm region; it includes a high-performance back-illuminated CCD and control unit from Hamamatsu (Bridgewater, NJ, USA; www.hamamatsu.com), a zoom lens, an imaging spectrograph from Spectral Imaging (Oulu, Finland; www.specim.fi), and a tungsten halogen fiberoptic light source from Fiberoptics Technology (Pomfret, CT, USA; www.fiberoptix.com). A second setup, which captures hyperspectral images in the 900-1700-nm range, operates in reflection mode and includes an InGaAs-based camera from SUI, Goodrich Corp. (Princeton, NJ, USA; www.sensorsinc.com). Two classification algorithms were tested: partial-least-squares discriminant analysis and hyperspectral image thresholding. Both were run in Matlab from The MathWorks (Natick, MA, USA; www.mathworks.com), along with a partial-least-squares plug-in for Matlab from Eigenvector Research (Wenatchee, WA, USA; www.eigenvector.com).
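At its core, partial-least-squares discriminant analysis amounts to regressing class labels on spectra and thresholding the predicted score. The following Python sketch builds a one-component PLS model by hand on synthetic spectra; all numbers and the class separation are hypothetical, not the researchers' data or code:

```python
import numpy as np

# Synthetic training spectra (hypothetical): bruised tissue transmits
# less light than normal tissue across the measured bands.
rng = np.random.default_rng(0)
n_bands = 50
normal = rng.normal(0.8, 0.05, size=(100, n_bands))
bruised = rng.normal(0.6, 0.05, size=(100, n_bands))
X = np.vstack([normal, bruised])
y = np.array([0.0] * 100 + [1.0] * 100)   # 0 = normal, 1 = bruised

# One-component PLS1: the weight vector is the covariance direction
# between the centered spectra and the centered labels.
Xm, ym = X.mean(axis=0), y.mean()
Xc, yc = X - Xm, y - ym
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                       # latent scores
b = (t @ yc) / (t @ t)           # regression coefficient on the score

def predict(spectra):
    """Continuous PLS score; threshold at 0.5 to classify."""
    return b * ((spectra - Xm) @ w) + ym

test = np.vstack([rng.normal(0.8, 0.05, (1, n_bands)),
                  rng.normal(0.6, 0.05, (1, n_bands))])
labels = (predict(test) > 0.5).astype(int)
print(labels)  # [0 1]: first spectrum classified normal, second bruised
```

In practice more latent components would be extracted, but with well-separated classes a single component already discriminates.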
The average reflectance of bruised areas was lower than that of normal areas over all but the 1400-1550-nm portion of the test spectral region; the difference was greatest in the 950-1350-nm region. This spectral difference decreased over time, sometimes disappearing entirely after six days, which underscores the need for prompt testing.
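The thresholding approach can be sketched along these lines: average each pixel's reflectance over the most discriminative band (950-1350 nm, where bruised tissue reflects least) and flag pixels below a cutoff. The hypercube, bruise location, and cutoff below are all hypothetical, for illustration only:

```python
import numpy as np

# Wavelength axis for the reflectance-mode (900-1700-nm) setup.
wavelengths = np.linspace(900, 1700, 160)
band = (wavelengths >= 950) & (wavelengths <= 1350)

# Synthetic 64 x 64-pixel hypercube with a simulated 10 x 10 bruise
# whose reflectance is depressed across all bands.
rng = np.random.default_rng(1)
cube = rng.normal(0.7, 0.02, size=(64, 64, 160))
cube[20:30, 20:30, :] -= 0.15

band_mean = cube[:, :, band].mean(axis=2)   # mean reflectance per pixel
bruise_mask = band_mean < 0.62              # hypothetical cutoff
print(bruise_mask.sum())                    # 100 pixels flagged
```

Averaging over the 950-1350-nm bands suppresses per-band noise, so even a fixed cutoff separates the depressed bruise region cleanly.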
Single-pixel imaging compresses information
Researchers at Rice University (Houston, TX, USA; www.rice.edu) have combined a MEMS array with a single optical sensor to create an image/video camera that incorporates compressed sensing. “White noise is the key,” says Richard Baraniuk, the Victor E. Cameron professor of electrical and computer engineering at Rice. “Thanks to some deep new mathematics, we’re able to get a useful, coherent image out of the randomly scattered measurements.”
The prototype camera uses a digital micromirror device (DMD) from Texas Instruments (Dallas, TX, USA; www.ti.com) and a single photodiode. The object of interest is focused upon the DMD, which has a pseudorandom pattern mapped onto it. The micromirrors can tilt by ±12° about the plane of the chip; the white parts of the pattern indicate mirrors tilted to +12° and the black parts mirrors tilted to -12°. The light reflected from the white/black areas of the pattern is collected on a photodiode. Every pseudorandom pattern yields one coefficient (a photovoltage), and, using these coefficients and the random seed, an image can be reconstructed.
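A toy model of this measurement process (an illustrative sketch, not Rice's actual pipeline): treat each pseudorandom DMD pattern as a row of +1/-1 values, so each photodiode reading is the inner product of that pattern with the flattened scene, and recover a sparse scene from fewer readings than pixels. The scene, sizes, and the use of orthogonal matching pursuit as the solver are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 256                          # pixels in the (flattened) scene
m = 64                           # measurements: far fewer than n
x = np.zeros(n)                  # a sparse synthetic "scene"
x[[10, 100, 200]] = [1.5, 2.0, 1.0]

# Each row is one pseudorandom mirror pattern (+1/-1 tilt states);
# each entry of y is one photovoltage.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x

# Minimal orthogonal matching pursuit: greedily pick the pattern column
# most correlated with the residual, then refit by least squares.
support, r = [], y.copy()
for _ in range(3):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.allclose(x_hat, x))     # scene recovered from m << n samples
```

The point mirrors Baraniuk's remark: because the patterns are random (white-noise-like), 64 scalar readings suffice here to reconstruct a 256-pixel scene exactly, provided the scene is sparse.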
Baraniuk says, “The beauty of compressed sensing lies in the fact that we measure (sample) the image/video fewer times than the number of actual pixels. This can significantly reduce the computation required for image/video acquisition and encoding.” It currently takes about five minutes to take a picture with the prototype camera, and only stationary objects have been photographed. Initial efforts are aimed at developing the camera for scientific applications where digital photography is unavailable, such as terahertz imaging, although imaging for the consumer market may be possible.