Vision-guided robot automates vegetation analysis

To reduce the amount of herbicide used in automated agricultural systems, it is important to correctly identify a multitude of plants and weeds. As a result, autonomous vision-guided robots developed for this purpose must robustly identify plants and weeds in unpredictable and often nonuniform lighting conditions.

At the University of Illinois at Urbana-Champaign (Urbana, IL, USA) and United States Department of Agriculture (USDA; Wooster, OH, USA), Dr. Hongyoung Jeon and his colleagues (Drs. Lei Tian and Heping Zhu) have developed a machine-vision-based system that uses adaptive image segmentation and neural networks to identify vegetation varieties.

The system is mounted on a 3-AT skid-steering robot from Adept MobileRobots (Amherst, NH, USA) and captures stereo images with a stereo camera from Videre Design (Menlo Park, CA, USA). The camera is equipped with a C-mount lens with a 6-mm focal length from Kowa Optimed (Torrance, CA, USA) and fitted with a polarizing filter from Sony (Tokyo, Japan) to reduce specular reflectance caused by outdoor illumination.

Positioned approximately 0.6 m above the ground and angled downward at 20°, the camera images a trapezoidal ground area of approximately 768 × 572 mm at a resolution of approximately 2.4 mm/pixel. Captured images are then transferred over the stereo camera's FireWire interface to a host PC on the robot.
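
As a quick sanity check on the stated optics figures, dividing the ground footprint by the ~2.4 mm/pixel resolution gives the working image size in pixels; the resulting ~320 × 240 image mode is an inference, not something the article states.

```python
# Back-of-the-envelope check of the optics figures quoted above.
footprint_mm = (768, 572)   # trapezoidal ground coverage (from the article)
mm_per_pixel = 2.4          # approximate ground resolution (from the article)
pixels = tuple(round(d / mm_per_pixel) for d in footprint_mm)
print(pixels)  # (320, 238), consistent with a ~320 x 240 stereo image mode
```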

After two sets of images are captured by the system at different stages of plant growth and under different illumination conditions, each image is processed using algorithms written in MATLAB from The MathWorks (Natick, MA, USA).

Each captured RGB image is converted to a normalized excessive green (NEG) channel, NEG = 2.8·g/(r + g + b) − r/(r + g + b) − b/(r + g + b), which emphasizes the green channel. The NEG pixel values are then converted to integer values, and the variance of each image's histogram distribution is used to segment the plants from the soil. Finally, a 3 × 3 median filter is applied to each segmented image to eliminate random noise.
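
The segmentation pipeline can be sketched in Python with NumPy under two assumptions the article leaves open: that the variance-of-histogram step is an Otsu-style between-class-variance threshold, and that a 3 × 3 median filter on a binary mask amounts to a majority vote over the neighborhood.

```python
import numpy as np

def neg_channel(rgb):
    """Normalized excessive green: NEG = (2.8*g - r - b) / (r + g + b)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = np.maximum(r + g + b, 1.0)  # avoid division by zero on black pixels
    return (2.8 * g - r - b) / total

def variance_threshold(values, bins=256):
    """Otsu-style threshold: maximize between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                       # class-0 weight per candidate split
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def median3x3(mask):
    """3x3 median filter on a binary mask = majority vote of the neighborhood."""
    padded = np.pad(mask.astype(np.uint8), 1)
    h, w = mask.shape
    votes = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return votes >= 5

def segment_plants(rgb):
    neg = neg_channel(rgb)
    return median3x3(neg > variance_threshold(neg))
```

On a test image with greenish plant pixels against brownish soil, the NEG values of the two populations separate widely, so the variance-based threshold falls between them and the mask isolates the vegetation.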

To distinguish weeds from crop plants, Jeon and his colleagues used MATLAB's Neural Network Toolbox to develop an identification model. Before this neural network could be used, however, it first had to be trained to recognize a number of different species of corn and weed plants. Training images were captured using both the machine-vision system and a PowerShot SD110 camera from Canon (Lake Success, NY, USA).

Before training, these images were preprocessed to measure specific morphological features of each plant. After the perimeter, inner area, width, and height of a plant were measured, these measurements were converted to five normalized features (height/width, height/perimeter, perimeter/area, width/area, and height/area) to minimize the influence of each plant's image size. These normalized features were then used to train the neural network.
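
The five ratio features can be computed from a segmented binary blob along the following lines. This is a sketch: the article does not say how the perimeter was measured, so here it is taken as the count of foreground pixels with a 4-connected background neighbor, one common convention.

```python
import numpy as np

def shape_features(mask):
    """Five normalized shape features from a binary plant mask (bool array)."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    area = int(mask.sum())
    # Perimeter here = foreground pixels with a 4-connected background
    # neighbor (an assumption; the article does not state its convention).
    p = np.pad(mask, 1)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = area - int(interior.sum())
    return {
        "height/width": height / width,
        "height/perimeter": height / perimeter,
        "perimeter/area": perimeter / area,
        "width/area": width / area,
        "height/area": height / area,
    }
```

For a solid 4 × 6 rectangle, for example, the area is 24, the boundary-pixel perimeter is 16, and the five ratios follow directly; because every feature is a ratio, uniformly scaling the blob leaves the aspect-style features unchanged, which is the size-invariance the authors were after.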

In initial testing, the neural network correctly identified approximately 72% of the corn plants within the images. To improve this result, two criteria were applied to the network's output.
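
To illustrate the identification step itself (not the authors' actual network, which was built with MATLAB's Neural Network Toolbox), the sketch below trains a single logistic unit on five-feature vectors. All of the data, including the class means and spreads, is synthetic and invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five normalized shape features
# (height/width, height/perimeter, perimeter/area, width/area, height/area).
# The class separation here is invented purely for this demo.
n = 200
corn = rng.normal([3.0, 0.25, 0.30, 0.010, 0.030], 0.05, size=(n, 5))
weed = rng.normal([1.0, 0.10, 0.50, 0.025, 0.025], 0.05, size=(n, 5))
X = np.vstack([corn, weed])
y = np.array([1] * n + [0] * n)  # 1 = corn, 0 = weed

# A single logistic unit trained by gradient descent, standing in for the
# article's neural network.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30.0, 30.0)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

accuracy = float(np.mean(((X @ w + b) > 0) == y))
```

On well-separated synthetic data a single unit suffices; the authors' real task, with overlapping corn and weed shapes under field conditions, is what motivated a full multilayer network and the post-processing criteria described next.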

First, plants at the edges of the image, whose morphological features are necessarily incomplete, were excluded from the identification process. Second, a maximum weed size of 300 pixels was set to limit the size of detected weeds. With these refinements, the accuracy of the system increased to approximately 94%.
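
The two criteria can be sketched as a post-filter over detected blobs. The region format and the choice to re-label oversize "weed" blobs as corn are assumptions; the article says only that edge plants were excluded and that a 300-pixel maximum weed size was set.

```python
def refine_identification(regions, image_shape, max_weed_px=300):
    """Apply the two post-classification criteria to detected blobs.

    Each region is a dict with 'bbox' = (ymin, xmin, ymax, xmax), 'area' in
    pixels, and a 'label' of 'corn' or 'weed' (a hypothetical format).
    """
    h, w = image_shape
    refined = []
    for reg in regions:
        ymin, xmin, ymax, xmax = reg["bbox"]
        # Criterion 1: blobs touching the image border have incomplete
        # morphological features, so exclude them from identification.
        if ymin == 0 or xmin == 0 or ymax >= h - 1 or xmax >= w - 1:
            continue
        # Criterion 2 (one reading of the 300-pixel maximum weed size):
        # a 'weed' blob larger than the limit is re-labelled as corn.
        label = reg["label"]
        if label == "weed" and reg["area"] > max_weed_px:
            label = "corn"
        refined.append({**reg, "label": label})
    return refined
```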

-- By Andy Wilson, Vision Systems Design
