Andrew Wilson, Editor
To increase the quality of their products while reducing waste and manual labor costs, manufacturers in the agricultural and packaging industries are leveraging machine-vision systems. Compared with manual inspection, such systems offer these vendors a cost-effective means to evaluate the quality of food products while ensuring a high level of consistency.
Today, different types of machine-vision systems perform these inspection tasks. For high-speed inspection and sorting of recently harvested products such as potatoes, linescan camera-based systems are most commonly deployed. In less demanding applications, harvested or baked products can be evaluated using color systems that employ area-array or multispectral cameras. To ensure the correct portions of foods such as meat or fish are properly packaged, structured light systems may evaluate the products' volume prior to automatic slicing.
Once products are packaged, machine-vision systems can validate sizes, defects, color, and barcodes to ensure package consistency. Although the hardware on which the systems are based may be different, developers often use off-the-shelf software packages to process and analyze captured images. Using these packages with their associated graphical user interfaces, engineers and integrators can rapidly develop machine-vision systems with a minimum level of coding.
The requirements of the task to be performed must be carefully considered before choosing a software package. Luckily, many of the low-level functions required by machine-vision systems have already been incorporated into these packages. For example, preprocessing functions such as lens distortion correction, geometric calibration, Bayer interpolation, and numerous filters for noise reduction are now commonplace in most manufacturers' software toolsets.
To extract quantitative information from captured image data, higher-level image segmentation algorithms are often used. These include methods such as thresholding, histogram analysis, edge-based segmentation, and region-based segmentation. While simple thresholding classifies regions within an image based on their intensity values, histogram-based methods can locate specific clusters of color or image intensity within an image. Edge-detection algorithms can detect discontinuities within images based on parameters such as intensity, color, or texture. Pixels of similar intensities, colors, or textures can also be grouped together into regions.
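As a concrete illustration of the simplest of these techniques, the sketch below implements fixed-level thresholding and an intensity histogram in plain NumPy. It is illustrative only, and the function names are invented; it is not taken from any vendor's toolset.

```python
import numpy as np

def threshold_segment(image, level):
    """Classify pixels as foreground (True) or background by intensity."""
    return image > level

def intensity_histogram(image, bins=256):
    """Histogram of 8-bit pixel intensities, useful for locating clusters."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist

# Synthetic 8-bit image: a bright 4 x 4 object on a dark background
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
mask = threshold_segment(img, 128)   # 16 foreground pixels
```

In practice the threshold level would be chosen from the histogram itself (e.g., between the background and object intensity clusters) rather than fixed in advance.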
Using these segmentation techniques, image features can be represented by boundaries within images, then used to determine characteristics such as the size or shape of an object. Similarly, sets of regions can be used to analyze specific defects or image texture within the object.
To measure features within segmented regions or boundaries, most off-the-shelf software packages offer different types of tools for measurement and feature extraction. These include caliper-based measurement, blob analysis, morphological operators, and color analysis tools. In many applications, multiple algorithms are often combined to discern multiple features of a specific product. Color image analysis is also now commonplace in software packages.
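Blob analysis ultimately rests on connected-component labeling of a binary mask. The following is a minimal 4-connected labeling sketch in Python/NumPy (a breadth-first flood fill) with a pixel-count area measurement; commercial packages use far faster algorithms, and the function names here are invented for illustration.

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labeling of a binary mask (BFS flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # pixel already belongs to a blob
        count += 1
        labels[sy, sx] = count
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count

def blob_area(labels, idx):
    """Area (pixel count) of one labeled blob."""
    return int((labels == idx).sum())
```

From the labeled image, per-blob features such as area, centroid, or bounding box can then be measured independently for each object.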
For example, Iris Vision has developed a system that analyzes carrots before they are automatically cut. Prewashed carrots between 150 and 300 mm in length, at arbitrary orientations, are transported along a conveyor.
After the system captures grayscale images of the carrots, images are thresholded. Next, the CVB Blob tool within the Common Vision Blox toolkit from Stemmer Imaging determines the centroid and moments of inertia. This information is used to calculate the orientation of the carrot. Then, a region of interest (ROI) is defined perpendicular to the longitudinal axis and the CVB Edge tool determines edge pairs. By measuring how rapidly the carrot decreases in thickness on both sides of the thickest point, the system can locate the point at which the carrot should be cut.
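The centroid-and-moments step can be sketched as follows. This is not Stemmer Imaging's CVB Blob implementation; it simply shows how an object's centroid and principal-axis orientation follow from the first and second central moments of a binary mask.

```python
import numpy as np

def centroid_and_orientation(mask):
    """Centroid and principal-axis angle of a binary object, computed from
    first and second central moments (the blob's moments of inertia)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # centroid (first moments)
    mu20 = ((xs - cx) ** 2).mean()                # second central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Angle of the principal axis relative to the x axis
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cy, cx), angle

# Horizontal bar: its principal axis should come out at ~0 rad
mask = np.zeros((20, 20), dtype=bool)
mask[9:11, 2:18] = True
(cy, cx), theta = centroid_and_orientation(mask)
```

Given the orientation, a region of interest can then be rotated or defined perpendicular to the long axis, as in the carrot system described above.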
Many machine-vision systems for use in the food industry operate on grayscale images, but others require the use of color image analysis. This was the task faced by engineers at Orus Integration, a company tasked with developing a blueberry sorting system capable of processing 30,000 lb of fruit per hour. Interestingly, rather than use linescan cameras, Orus opted for five color Marlin 1394 cameras from Allied Vision Technologies (AVT) interfaced to three PCs fitted with three Meteor-II/1394 adapter cards from Matrox Imaging. The vision system captures images as blueberries are thrown from the end of a vibrating conveyor at a rate of 600 ft/min.
To identify the berries, a blob analysis module in the Matrox Imaging Library (MIL) software analyzes each blob according to its average hue, average red intensity, size, and roundness. Based on these measurements, ice chunks, twigs, or insects can be rejected. Unripe, overripe, or other types of fruit such as cranberries are also detected based on their color values or size.
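A rule-based accept/reject pass over such blob measurements might look like the sketch below. All thresholds (the hue band, area band, and roundness limit) are invented for illustration and are not Orus Integration's actual values.

```python
def classify_blob(avg_hue, area, roundness):
    """Accept or reject a blob from simple feature tests (illustrative
    thresholds only). Roundness here means 4*pi*area / perimeter**2,
    which is 1.0 for a perfect circle."""
    if not 200 <= avg_hue <= 260:    # hypothetical "blue" hue band (degrees)
        return "reject: color"       # e.g., twigs, insects, unripe fruit
    if not 50 <= area <= 500:        # hypothetical pixel-area band
        return "reject: size"        # e.g., ice chunks, cranberries
    if roundness < 0.8:              # hypothetical shape limit
        return "reject: shape"
    return "accept"
```

Real systems would tune such bands from labeled samples and typically combine several color channels (hue plus red intensity, as above) rather than a single value.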
The emergence of low-cost, 3-D structured light inspection systems has led software vendors to incorporate 3-D image analysis tools into their offerings. Tordivel has used its Scorpion Vision Software to develop a combined 2-D and 3-D image-processing system. The 3-D part is based on a structured light system that measures the thickness profile (see Fig. 1). The 2-D part measures minimum and maximum diameter with 0.1-mm resolution and checks the perimeter for edge defects. Intended to reduce wasted product and improve quality, the system is capable of inspecting 7200 pizzas/hr as they travel along a conveyor at 0.5–1.0 m/sec, and it handles different sizes through a flexible recipe feature. After captured image data are processed, the computed 3-D data are used to measure the height of the products in different regions of the surface. Should a pizza not meet certain criteria such as height, width, or shape, it can be diverted from the production line. The same system has also been deployed to count and sort a flow of 40,000 tortillas/hr.
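Assuming the computed 3-D data take the form of a height map, a region-based acceptance check might be sketched as below. The function, regions, and limits are hypothetical and do not reflect Tordivel's Scorpion logic.

```python
import numpy as np

def product_ok(height_map, regions, min_h, max_h):
    """Accept/reject sketch: the mean height of every checked surface
    region must fall within [min_h, max_h] (illustrative only)."""
    for (y0, y1, x0, x1) in regions:
        mean_h = height_map[y0:y1, x0:x1].mean()
        if not min_h <= mean_h <= max_h:
            return False
    return True

# Hypothetical 10 x 10 height map (mm) with two checked regions
heights = np.full((10, 10), 20.0)
regions = [(0, 5, 0, 5), (5, 10, 5, 10)]
```

A product failing any region's check would then trigger the diverter described above.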
Using structured light-based systems in conjunction with visible light-based systems is especially helpful if the size and weight of food products need to be determined. Machine Vision Technology has developed a system to measure the size and weight of biscuits as they pass along a production line at the rate of 3600 biscuits/min (see Fig. 2).
In the system, three scA1390 scout Gigabit Ethernet cameras from Basler capture visible images of the biscuits from above. Employing structured laser light, a Basler scA640-120gm camera operating in partial scan mode images the laser line profile of the biscuit. Within the HALCON imaging environment from MVTec Software, images from the three scA1390 scout cameras are then used to determine the length and width of the biscuit to within ±0.17 mm. By reconstructing a 3-D image from the scA640-120gm camera's images, HALCON also determines the height of the biscuit to within ±0.17 mm.
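The underlying geometry of laser-line height measurement can be sketched with simple triangulation. Assuming a camera looking straight down and a laser projected at a known angle from vertical, a raised surface shifts the imaged line sideways in proportion to its height. The function below is an illustrative model, not MVTec HALCON's sheet-of-light implementation.

```python
import math

def height_from_offset(pixel_offset, mm_per_pixel, laser_angle_deg):
    """Laser-triangulation sketch: a surface raised by height h shifts the
    imaged laser line sideways by h * tan(laser_angle_deg), so the height
    follows by inverting that relation (all parameters hypothetical)."""
    lateral_mm = pixel_offset * mm_per_pixel
    return lateral_mm / math.tan(math.radians(laser_angle_deg))
```

Real systems recover these relationships through calibration rather than from nominal angles, but the proportionality is the same.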
Once a specific analysis has been performed on these objects, the results must be classified so that the machine-vision system can reject potentially defective foods based on the analyzed data. Again, a number of algorithms exist to perform supervised, semisupervised, and unsupervised classification or learning. In many machine-vision applications, a supervised approach is used.
In a supervised approach, systems are trained from sample images that have been judged for quality manually by a human expert. At the University of Lincoln, Tom Duckett, PhD, and his colleagues have developed a prototype computer vision system that can identify sub-standard potatoes. Initial input from a human expert on a sample batch is used to classify the potatoes based on any blemishes or diseases that may be found (see "Potato industry reaps benefits of computer vision").
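A minimal example of such supervised training is a nearest-centroid classifier: the expert labels a batch of feature vectors, the system stores the mean vector of each class, and new samples are assigned to the nearest class mean. The feature values and class names below are invented for illustration and are not the University of Lincoln system's actual features.

```python
import numpy as np

def train_centroids(features, labels):
    """Nearest-centroid training: store the mean feature vector of each
    expert-assigned class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(centroids, x):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical 2-D feature vectors (e.g., blemish area, darkness)
# labeled by a human expert
features = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]])
labels = np.array(["good", "blemished", "blemished", "blemished"][:0] +
                  ["good", "good", "blemished", "blemished"])
centroids = train_centroids(features, labels)
```

Production systems typically use richer classifiers (support vector machines, neural networks), but the training principle of learning from expert-labeled samples is the same.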
After packaging, supervised vision-based inspection systems can be used to verify the correct location and orientation of labels. At the Kitt Green, UK facility of H.J. Heinz, V-viz Ltd. has deployed a system capable of label inspection on canned food at a rate of 650 cans/min (see Fig. 3).
To begin system training, the operator first acquires images of labels using Visionscape software from Microscan. At full operating speed, a can is presented for inspection every 75 msec; in this time the vision system acquires an image of the can, transfers it to a PC, analyzes the image for the correct orientation and location of the label, and triggers a pneumatic rejector should the can be deemed unacceptable.
Vision Software Vendors
To find more vision software vendors, please visit the machine-vision software section of the Vision Systems Design Buyer's Guide.