ENVIRONMENT/AGRICULTURE: Autonomous vehicle images apple orchard yields

Oct. 1, 2012

In work funded by the USDA Specialty Crop Research Initiative (www.nifa.usda.gov), engineers led by Qi Wang, PhD, from the Robotics Institute at Carnegie Mellon University (www.cmu.edu) have developed a computer vision-based system that can accurately estimate the yields from orchards of apple trees.

Two D300 cameras from Nikon (www.nikon.com) fitted with wide-angle lenses were fixed to an aluminum bar about 0.28 m apart to form a stereo pair, after which they were mounted at the rear of an autonomous vehicle (see figure) to capture images of the fruit. To reduce the variance of natural illumination and allow the system to operate at night, the trees were illuminated with two AlienBees ABR800 flash units from Paul C. Buff (www.paulcbuff.com).
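
As a rough illustration of what the stereo pair makes possible, the Python sketch below recovers depth from the disparity between the two views. The 0.28-m baseline comes from the setup described above; the focal length in pixels is a hypothetical calibration value, not a figure from the article.

    # Minimal sketch of depth recovery from the stereo pair. The 0.28 m
    # baseline is from the article; the focal length in pixels is an
    # assumed calibration value.
    BASELINE_M = 0.28      # distance between the two cameras
    FOCAL_PX = 4200.0      # hypothetical focal length in pixels

    def depth_from_disparity(disparity_px: float) -> float:
        """Distance to a point given its disparity between the two views."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return FOCAL_PX * BASELINE_M / disparity_px

    # An apple whose center shifts 120 px between the left and right
    # images would lie roughly 9.8 m from the rig under these assumptions.
    print(depth_from_disparity(120.0))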

The autonomous vehicle travels along the orchard aisles at a preset constant speed of 0.25 m/sec, following the fruit tree rows. As it does so, the system scans both sides of each tree row. Sequentially acquired images provide multiple views of every tree from different perspectives, reducing the occlusion of apples by foliage and branches.
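
The capture geometry can be illustrated with a little arithmetic. The 0.25 m/sec speed is from the article; the trigger interval below is an assumed figure, chosen only to show how each tree ends up in several consecutive frames.

    # Along-track spacing between successive captures. The speed is from
    # the article; the capture interval is a hypothetical value.
    SPEED_M_PER_S = 0.25
    CAPTURE_INTERVAL_S = 2.0   # assumed trigger period

    spacing_m = SPEED_M_PER_S * CAPTURE_INTERVAL_S
    print(f"Images are taken every {spacing_m:.2f} m along the row")
    # With trees planted a few meters apart, each tree therefore appears
    # in several consecutive frames, which is what reduces occlusion.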

Online software developed in Python from the Python Software Foundation (www.python.org) controls the image acquisition process. Once acquired, the data are processed offline by software developed in MATLAB from The MathWorks (www.mathworks.com), which detects the apples, locates them across sequentially acquired images, counts them, and estimates the crop yield.
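
The article does not publish the MATLAB code, but the overall shape of the offline chain might be sketched as follows in Python; every helper here is a hypothetical placeholder standing in for the stages named above, not the authors' implementation.

    # Structural sketch of the offline processing chain. The real code was
    # written in MATLAB; this Python outline only names the stages, and
    # the helper bodies are placeholders.
    def detect_apple_pixels(image):
        return []            # placeholder: HSV / saturation segmentation

    def localize_apples(pixels, pose):
        return []            # placeholder: stereo triangulation + vehicle pose

    def deduplicate(sightings):
        return sightings     # placeholder: merge repeat sightings of one fruit

    def process_run(images, poses):
        """Detect, localize, count, and total the apples over a run of images."""
        sightings = []
        for image, pose in zip(images, poses):
            pixels = detect_apple_pixels(image)
            sightings.extend(localize_apples(pixels, pose))
        return len(deduplicate(sightings))   # apple count feeds the yield estimate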

To detect the pixels in an image that represent a red apple, the algorithm first takes a 1072 × 712-pixel color image and removes the lens distortion. It then analyzes the hue, saturation, and value of the pixels in the HSV color space to determine which of them represent a red apple.
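
A minimal sketch of such a red-apple segmentation step, written here with OpenCV, might look as follows. The exact thresholds are not published, so the hue, saturation, and value bounds below are illustrative guesses, and camera_matrix and dist_coeffs stand for a prior calibration of the cameras.

    import cv2

    # Hedged sketch of the red-apple segmentation step; all numeric
    # bounds are illustrative assumptions, not the authors' values.
    def detect_red_apples(bgr_image, camera_matrix, dist_coeffs):
        # Remove lens distortion from the 1072 x 712 color image.
        undistorted = cv2.undistort(bgr_image, camera_matrix, dist_coeffs)
        hsv = cv2.cvtColor(undistorted, cv2.COLOR_BGR2HSV)
        # Red hue wraps around 0 in OpenCV's 0-179 hue scale, so two
        # bands are needed; the saturation and value floors reject dark
        # or washed-out pixels.
        lower = cv2.inRange(hsv, (0, 100, 60), (10, 255, 255))
        upper = cv2.inRange(hsv, (170, 100, 60), (179, 255, 255))
        return cv2.bitwise_or(lower, upper)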

Although green apples and foliage are both green, the pixels representing the apples have a more strongly saturated color. Hence, the apple pixels can be separated from those of the leaves by thresholding the saturation of each pixel's color.
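
A saturation-based separation of this kind could be expressed as below; the hue band and the minimum saturation are assumed values, not the authors' thresholds.

    import cv2

    # Sketch of the saturation segmentation for green apples. Fruit and
    # leaves share similar hues, but the fruit is more strongly saturated,
    # so a minimum-saturation bound rejects most foliage. The bounds are
    # illustrative assumptions.
    def green_apple_candidates(hsv_image, min_saturation=120):
        # OpenCV hue runs 0-179; roughly 35-85 covers green.
        return cv2.inRange(hsv_image, (35, min_saturation, 0), (85, 255, 255))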

Most foliage pixels are removed by this saturation segmentation. However, the central parts of most apples are removed as well, because the camera flashes generate specular reflections at their centers. The algorithm therefore detects the specular highlights separately and combines those pixels with the ones obtained from the saturation segmentation, recovering the complete set of green apple pixels.
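
One way to realize that combination is sketched below: specular highlights are bright but nearly colorless (high value, low saturation), so they can be masked separately and merged with the saturation mask. All thresholds here are assumptions.

    import cv2

    # Sketch of recovering the flash highlights at the apple centers and
    # merging them with the saturation mask; thresholds are illustrative.
    def green_apple_mask(hsv_image):
        # Strongly saturated green pixels (apple bodies).
        saturated = cv2.inRange(hsv_image, (35, 120, 0), (85, 255, 255))
        # Bright, nearly colorless pixels (flash highlights at apple centers).
        specular = cv2.inRange(hsv_image, (0, 0, 230), (179, 60, 255))
        return cv2.bitwise_or(saturated, specular)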

To count the apples, the software first determines the average diameter of the apples in the images. It does so by calculating the eccentricity of each detected region and applying a threshold to select regions that are relatively round, and hence likely to contain a single apple. Image regions that contain two or more touching apples are then segmented into individual fruit.
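
A rough rendering of that counting logic, using scikit-image region properties, might read as follows; the eccentricity threshold and the area-ratio rule for touching apples are illustrative choices, not the authors' published method.

    import numpy as np
    from skimage.measure import label, regionprops

    # Sketch of the counting step: round regions (low eccentricity) are
    # taken as single apples and used to estimate a typical fruit area;
    # larger blobs are then assumed to hold several touching apples.
    # The 0.6 threshold is an assumed value.
    def count_apples(mask, eccentricity_threshold=0.6):
        regions = regionprops(label(mask > 0))
        single = [r for r in regions if r.eccentricity < eccentricity_threshold]
        if not single:
            return len(regions)
        mean_area = np.mean([r.area for r in single])
        # Count each region as roughly (region area / single-apple area) fruit.
        return int(sum(max(1, round(r.area / mean_area)) for r in regions))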

Because multiple images of each apple tree are captured, the system must avoid overestimating the number of apples present. An onboard POS LV inertial/GPS positioning system from Applanix (www.applanix.com) computes the coordinates of the vehicle and registers them with the stereo images from the cameras. Software then calculates the global coordinates of each apple and eliminates any duplicate sightings.
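
The deduplication step might be sketched as below, with a simplified 2D pose transform standing in for the full inertial/GPS solution; the merge radius of 5 cm, roughly one fruit radius, is an assumed value.

    import numpy as np

    # Sketch of duplicate elimination. Each apple sighting is expressed in
    # global coordinates (a simplified 2D pose transform here), and
    # sightings closer together than one fruit radius are merged.
    def to_global(xy_vehicle, vehicle_xy, vehicle_heading_rad):
        c, s = np.cos(vehicle_heading_rad), np.sin(vehicle_heading_rad)
        rot = np.array([[c, -s], [s, c]])
        return rot @ np.asarray(xy_vehicle) + np.asarray(vehicle_xy)

    def deduplicate(points, radius=0.05):
        kept = []
        for p in points:
            p = np.asarray(p, dtype=float)
            # Keep a sighting only if no kept apple lies within the radius.
            if all(np.linalg.norm(p - q) > radius for q in kept):
                kept.append(p)
        return kept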

The system was deployed at the Sunrise Orchard of Washington State University in Rock Island, WA, in September 2011. Results from the test showed that the system worked most effectively in red apple orchards that had been thinned. In a group of thinned red apple trees, the system predicted the yield to within 3.2%. By calibrating the data against a sample of human measurements, the researchers predicted the crop yield to within 1.2%.
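
The article does not detail the calibration procedure, but a simple ratio correction against a hand-counted sample would look like this; all numbers below are invented purely for illustration.

    # Sketch of a ratio calibration: the machine count on a small
    # hand-counted sample gives a correction factor that is applied to
    # the full orchard estimate. All figures here are made up.
    hand_count = 412          # apples counted by people on the sample trees
    machine_count = 391       # system's count on the same trees
    orchard_estimate = 52000  # system's raw count for the whole block

    calibrated = orchard_estimate * hand_count / machine_count
    print(f"Calibrated yield estimate: {calibrated:.0f} apples")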
