Vision-guided robot automates vegetation analysis

To reduce the amount of herbicide used in automated agricultural systems, it is important to correctly identify a multitude of plants and weeds. As a result, autonomous vision-guided robots developed for this purpose must robustly identify plants and weeds in unpredictable and often nonuniform lighting conditions.

At the University of Illinois at Urbana-Champaign (Urbana, IL, USA) and United States Department of Agriculture (USDA; Wooster, OH, USA), Dr. Hongyoung Jeon and his colleagues (Drs. Lei Tian and Heping Zhu) have developed a machine-vision-based system that uses adaptive image segmentation and neural networks to identify vegetation varieties.

Stereo images are captured using a stereo camera from Videre Design (Menlo Park, CA, USA) mounted on a 3-AT skid-steer robot from Adept MobileRobots (Amherst, NH, USA). The camera is equipped with a 6-mm-focal-length C-mount lens from Kowa Optimed (Torrance, CA, USA) and fitted with a polarizing filter from Sony (Tokyo, Japan) to reduce specular reflectance caused by outdoor illumination.

Positioned approximately 0.6 m above the ground and angled at 20°, the system images a trapezoidal ground area of 768 × 572 mm at a resolution of approximately 2.4 mm/pixel. Captured images are then transferred over the stereo camera’s FireWire interface to a host PC on the robot.

After two sets of images are captured by the system at different stages of plant growth and under varying illumination conditions, each image is processed using algorithms developed in MATLAB from The MathWorks (Natick, MA, USA).

Each captured RGB image is converted to a normalized excessive green (NEG) channel, represented by NEG = 2.8·g/(r + g + b) − r/(r + g + b) − b/(r + g + b), to emphasize the green channel. The NEG pixel values are then converted to integer values, and the variance of each image's histogram distribution is used to segment the plants from the soil. To eliminate random noise, a 3 × 3 median filter is applied to each segmented image.
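The NEG conversion and variance-based histogram segmentation described above might be sketched as follows. This is an illustrative reconstruction, not the authors' MATLAB code: the `neg_channel` weighting follows the article's formula, while `segment_plants` stands in for the variance-based thresholding with Otsu's method (which minimizes within-class histogram variance), an assumption on my part.

```python
import numpy as np

def neg_channel(rgb):
    """Normalized excessive green (NEG) channel.

    rgb: float array of shape (H, W, 3) with channels r, g, b.
    Each channel is normalized by the sum r + g + b, and the
    green term is weighted by 2.8 as in the article's formula.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    return 2.8 * (g / total) - (r / total) - (b / total)

def segment_plants(neg, n_bins=256):
    """Threshold the NEG image by minimizing the within-class
    variance of its histogram (Otsu's method), standing in for
    the article's variance-based histogram segmentation."""
    # Scale NEG values to integer bins, as the article describes.
    lo, hi = neg.min(), neg.max()
    ints = np.round((neg - lo) / (hi - lo) * (n_bins - 1)).astype(int)
    hist = np.bincount(ints.ravel(), minlength=n_bins).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(n_bins)
    best_t, best_var = 0, np.inf
    for t in range(1, n_bins):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * prob[:t]).sum() / w0
        m1 = (levels[t:] * prob[t:]).sum() / w1
        v0 = ((levels[:t] - m0) ** 2 * prob[:t]).sum() / w0
        v1 = ((levels[t:] - m1) ** 2 * prob[t:]).sum() / w1
        within = w0 * v0 + w1 * v1  # within-class variance
        if within < best_var:
            best_var, best_t = within, t
    return ints >= best_t  # True = plant (high NEG), False = soil
```

A 3 × 3 median filter (e.g. `scipy.ndimage.median_filter`) would then be applied to the resulting binary mask to suppress isolated noise pixels.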

To distinguish weeds from crop plants, Jeon and his colleagues used MATLAB’s Neural Network Toolbox to develop an identification model. Before this neural network could be used, however, it was first trained to recognize corn plants and a number of weed species. Training images were captured using both the machine-vision system and an SD-110 PowerShot camera from Canon (Lake Success, NY, USA).

Before training, these images were pre-processed to measure specific morphological features of the plants within them. After each plant's perimeter, inner area, width, and height were measured, the measurements were converted to five normalized features (height/width, height/perimeter, perimeter/area, width/area, and height/area) to minimize the influence of each plant's image size. These normalized features were then used to train the neural network.
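Computing the five size-normalized features from the raw measurements is straightforward; a minimal sketch, with the function name and dictionary representation my own rather than the authors':

```python
def normalized_features(perimeter, area, width, height):
    """The five size-normalized shape features described in the
    article, computed from a plant's measured perimeter, inner
    area, width, and height (all in pixel units)."""
    return {
        "height/width": height / width,
        "height/perimeter": height / perimeter,
        "perimeter/area": perimeter / area,
        "width/area": width / area,
        "height/area": height / area,
    }
```

Because every feature is a ratio of two measurements that both scale with apparent plant size, the feature vector is largely invariant to how large a plant appears in the image, which is the normalization the article describes.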

In initial testing, the neural network identified approximately 72% of the corn plants within the images. Two criteria were then applied to improve the identification results.

First, plants at the edges of the image, whose morphological features were incomplete, were excluded from the identification process. Second, a maximum size of 300 pixels was set for detected weeds. With these improvements, the accuracy of the system increased to approximately 94%.
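The two post-processing criteria might be applied to the network's per-region output as below. This is a hedged sketch: the region representation, function name, and the choice to reclassify oversized "weed" regions as crop (rather than simply discard them) are assumptions, since the article does not specify how the 300-pixel limit was enforced.

```python
def filter_detections(regions, img_w, img_h, max_weed_px=300):
    """Apply the article's two post-processing criteria:
    (1) exclude regions touching the image border, whose
        morphological features are incomplete;
    (2) enforce a 300-pixel maximum weed size -- here, any larger
        'weed' region is relabeled as crop (an assumption).
    `regions`: list of dicts with 'bbox' (x0, y0, x1, y1),
    'area' (pixel count), and 'label' -- a hypothetical format.
    """
    kept = []
    for r in regions:
        x0, y0, x1, y1 = r["bbox"]
        # Criterion 1: drop regions cut off at the image edge.
        if x0 == 0 or y0 == 0 or x1 == img_w - 1 or y1 == img_h - 1:
            continue
        # Criterion 2: oversized "weeds" are treated as crop plants.
        if r["label"] == "weed" and r["area"] > max_weed_px:
            r = {**r, "label": "crop"}
        kept.append(r)
    return kept
```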

-- By Andy Wilson, Vision Systems Design
