Vision system grades poultry carcasses on-line
From 1965 to 1993, the number of chickens slaughtered at federally inspected establishments increased from 2.8 billion to 7.5 billion. To handle this increase, slaughter-plant production lines run at up to 91 birds per minute and require three inspectors per line to manually and visually inspect every chicken carcass.
Such manual inspection is labor-intensive, prone to human error, and a limiting factor in increasing production throughput. Because of this, the US Department of Agriculture (USDA) is developing an automated machine-vision system to inspect poultry to federal standards. Yud-Ren Chen, a research leader with the Instrumentation and Sensing Laboratory of the USDA's Beltsville Agricultural Research Center (Beltsville, MD), has developed a system that uses color cameras to image the exterior of each bird and image-processing and neural-network software to analyze and classify each one (see photo).
"Chicken carcasses must be inspected and removed from the production line for a number of reasons," says Chen. These include symptoms of septicemia, brusing, and tumor. "While the skin of septicemic carcasses shows a red-bluish discoloration, tumorous carcasses show swollen or enlarged tissue," he says. "The advantage of using multispectral imaging for classification is that the whole body surface, size, and shape can be imaged and measured," says Chen.
Chen designed a multispectral imaging system using off-the-shelf cameras, frame grabbers, and lighting equipment. To image both the front and rear of the carcasses, he chose four TM-9701 progressive-scan CCD cameras from Pulnix (Sunnyvale, CA) fitted with C-mount lenses from Schneider (Kreuznach, Germany); two of the four cameras were equipped with 540-nm filters and the other two with 700-nm filters from Omega Optical (Brattleboro, VT). "Spectral images captured at 540 and 700 nm can be used to separate unwholesome carcasses from wholesome carcasses based on the spectral image intensity and the intensity distribution of Fourier power spectra of each carcass image," says Chen.
Inspection process
To perform this inspection, images are digitized into a 120-MHz Pentium PC using an AM-CLR PCI-based frame grabber from Imaging Technology (Bedford, MA). Custom image-processing software written in Microsoft C++ then analyzes the images.
In operation, original images of 768 × 484-pixel resolution are averaged to reduce them to 254 × 240 pixels. From this reduced image, a 64 × 64-pixel subimage is extracted for feature analysis. "At 540 nm," says Chen, "the reflected pixel intensity of normal carcasses is much higher than the intensity of abnormal ones." At 700 nm, the reflected intensity of abnormal carcasses is higher than it is at 540 nm.
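As a rough sketch of this reduction step, the C++ fragment below block-averages a row-major grayscale image and extracts a 64 x 64 window for feature analysis. The function names and the averaging factors are illustrative assumptions; the article does not describe the USDA code itself.

#include <cstddef>
#include <vector>

// Average non-overlapping fx-by-fy blocks of a row-major grayscale
// image (the actual averaging factors used at Beltsville are assumed).
std::vector<float> blockAverage(const std::vector<float>& src,
                                std::size_t w, std::size_t h,
                                std::size_t fx, std::size_t fy)
{
    const std::size_t ow = w / fx, oh = h / fy;
    std::vector<float> dst(ow * oh);
    for (std::size_t y = 0; y < oh; ++y)
        for (std::size_t x = 0; x < ow; ++x) {
            float sum = 0.0f;
            for (std::size_t dy = 0; dy < fy; ++dy)
                for (std::size_t dx = 0; dx < fx; ++dx)
                    sum += src[(y * fy + dy) * w + (x * fx + dx)];
            dst[y * ow + x] = sum / static_cast<float>(fx * fy);
        }
    return dst;
}

// Copy a 64 x 64 window at offset (ox, oy) out of the reduced image.
std::vector<float> extractSubimage(const std::vector<float>& img,
                                   std::size_t w,
                                   std::size_t ox, std::size_t oy)
{
    std::vector<float> sub(64 * 64);
    for (std::size_t y = 0; y < 64; ++y)
        for (std::size_t x = 0; x < 64; ++x)
            sub[y * 64 + x] = img[(oy + y) * w + (ox + x)];
    return sub;
}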
Even though reflected intensity can discriminate between carcasses, the results depend on both ambient-light variations and the orientation of the bird. Consequently, Chen and his colleagues performed texture analysis by computing the Fourier power spectrum of each image. Because this analysis works on the frequency distribution of the gray-level image, it is less susceptible to variations in light intensity and carcass orientation.
To compute the Fourier power spectra, the same 64 × 64 subimage is acquired at 540 and 700 nm. The two-dimensional image array is converted into a one-dimensional array of numbers, and an FFT is used to compute the power spectrum. "For wholesome carcasses," says Chen, "the power spectrum is spread around the x axis and concentrated around horizontal lines." For tumorous carcasses, however, the power spectrum is concentrated near the origin.
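The texture measure can be sketched the same way: flatten the subimage into a one-dimensional array of 4096 samples and keep the squared magnitudes of its FFT. The recursive radix-2 FFT below is a textbook implementation standing in for whatever routine the Beltsville software actually uses.

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Textbook recursive radix-2 FFT; the input length must be a power of
// two (4096 for a flattened 64 x 64 subimage).
void fft(std::vector<std::complex<double>>& a)
{
    const std::size_t n = a.size();
    if (n <= 1) return;
    std::vector<std::complex<double>> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even);
    fft(odd);
    const double pi = std::acos(-1.0);
    for (std::size_t k = 0; k < n / 2; ++k) {
        const auto t = std::polar(1.0, -2.0 * pi * double(k) / double(n)) * odd[k];
        a[k]         = even[k] + t;
        a[k + n / 2] = even[k] - t;
    }
}

// Power spectrum |F(k)|^2 of a flattened subimage.
std::vector<double> powerSpectrum(const std::vector<float>& sub)
{
    std::vector<std::complex<double>> a(sub.begin(), sub.end());
    fft(a);
    std::vector<double> p(a.size());
    for (std::size_t k = 0; k < a.size(); ++k)
        p[k] = std::norm(a[k]);   // squared magnitude
    return p;
}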
Even though the intensity and FFT methods can each discriminate between different types of carcasses, neither method alone is robust, says Chen. While measuring the intensity of the image can identify systemic diseases, it is not as useful as the FFT at finding local abnormalities. Therefore, Chen turned to neural-network training to identify both local and systemic anomalies.
To classify poultry carcasses using both spectral and FFT data, Chen turned to NeuralWorks Professional II from NeuralWare (Pittsburgh, PA). However, because the number of neural-network input nodes was limited, the 64 × 64-pixel subimage was further reduced to 16 × 16 pixels (256 values).
The neural network has an input layer with 256 nodes per camera image, a hidden layer with 16 nodes, and an output layer with two nodes. During training, classification errors are propagated back through the network to update the connection weights.
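Beyond the layer sizes, the article gives no details of the NeuralWorks configuration. A minimal sketch of a 256-16-2 network with a single backpropagation step, assuming sigmoid units, a mean-squared-error objective, and an arbitrary learning rate, might look like this:

#include <cmath>
#include <vector>

struct Net {
    std::vector<float> w1, b1;   // 16 x 256 weights, 16 biases
    std::vector<float> w2, b2;   // 2 x 16 weights, 2 biases
    Net() : w1(16 * 256, 0.01f), b1(16, 0.0f),
            w2(2 * 16, 0.01f),  b2(2, 0.0f) {}
};

static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// One forward pass followed by one weight update (learning rate eta).
// x holds the 256 pixel values of the 16 x 16 reduced subimage; t is
// the 2-entry target (e.g., wholesome vs. unwholesome).
void trainStep(Net& n, const std::vector<float>& x,
               const std::vector<float>& t, float eta)
{
    std::vector<float> h(16), y(2);
    for (int j = 0; j < 16; ++j) {                 // hidden layer
        float s = n.b1[j];
        for (int i = 0; i < 256; ++i) s += n.w1[j * 256 + i] * x[i];
        h[j] = sigmoid(s);
    }
    for (int k = 0; k < 2; ++k) {                  // output layer
        float s = n.b2[k];
        for (int j = 0; j < 16; ++j) s += n.w2[k * 16 + j] * h[j];
        y[k] = sigmoid(s);
    }
    std::vector<float> dy(2), dh(16, 0.0f);
    for (int k = 0; k < 2; ++k)                    // output error
        dy[k] = (y[k] - t[k]) * y[k] * (1.0f - y[k]);
    for (int j = 0; j < 16; ++j) {                 // propagate back
        for (int k = 0; k < 2; ++k) dh[j] += dy[k] * n.w2[k * 16 + j];
        dh[j] *= h[j] * (1.0f - h[j]);
    }
    for (int k = 0; k < 2; ++k) {                  // update output layer
        for (int j = 0; j < 16; ++j) n.w2[k * 16 + j] -= eta * dy[k] * h[j];
        n.b2[k] -= eta * dy[k];
    }
    for (int j = 0; j < 16; ++j) {                 // update hidden layer
        for (int i = 0; i < 256; ++i) n.w1[j * 256 + i] -= eta * dh[j] * x[i];
        n.b1[j] -= eta * dh[j];
    }
}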
"Neural-network models perform very well for classifying chicken carcasses," says Chen. When spectral-image pixel intensities were used as input, the accuracy of classification varied from 76% to 93%. However, when image-pixel intensity data of combined 540- and 700-nm wavelengths are used, the accuracy increases to 93%. In contrast, when combined 540- and 700-nm FFT image-intensity data are used, the accuracy of neural classification decreased. "In this case," says Chen, "the best accuracy was 83%. However, when 700-nm-wavelength image data were used as input, the accuracy of the neural classifier rises to 90%." ©