3D imaging: 3D vision harvests agricultural products

July 2, 2017
Using a combination of 3D laser scanners, robotics, image processing and deep learning software, Capture Automation is developing systems to automate the harvesting of broccoli heads.  

There is a growing demand from supermarkets to provide products with the correct weight, size, shape and quality. To meet these demands, automated harvesting systems must inspect products such as potatoes, carrots and broccoli for these features as well as characterizing any defects and disease.

At present, many harvesting methods are performed manually. Harvesting a large amount of product and then sorting those that are suitable for sale, however, may result in wastage since a supermarket may only require products of a particular weight.

To overcome this, the sorting task can be performed in the field so that the product can be graded before it is harvested, thus improving yield. Automated harvesting systems using machine vision can perform this task. 2D vision systems that employ cameras mounted in front of a tractor are, however, subject to ambient lighting conditions.

Developing algorithms to detect, identify and size such products is complex. Typically, when a product must be located within an image, edge-based detection methods are used. Under different ambient lighting conditions, shadow effects cause such 2D methods to fail. However, with 3D imaging, the edges of products are defined within a volumetric representation, so uniform lighting is not required and vegetables such as lettuce or broccoli heads can be picked by shape and size.

Since 2D systems are also calibrated to a particular plane, products of variable height make 2D image processing for such tasks more complex. However, with a pre-calibrated 3D imaging system, the angle and tilt of the product can be analyzed so that, before the product is picked, the robot can be fed the correct angular coordinates and the product will not be damaged.
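
As an illustration of how tilt might be derived from pre-calibrated 3D data, the sketch below fits a plane to the points on top of a detected head and converts the surface slope into approach angles for the robot. This is not Capture Automation's code; the point format and angle conventions are assumptions.

```python
# Minimal sketch: estimate the tilt of a broccoli head from its 3D points so a
# robot could be fed angular coordinates for a non-damaging approach.
import numpy as np

def estimate_tilt(points_xyz: np.ndarray) -> tuple:
    """Fit a plane z = a*x + b*y + c to the points on top of the head and
    return (tilt_about_x, tilt_about_y) in degrees. Axis conventions assumed."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
    tilt_x = float(np.degrees(np.arctan(b)))  # slope along y -> rotation about x
    tilt_y = float(np.degrees(np.arctan(a)))  # slope along x -> rotation about y
    return tilt_x, tilt_y
```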

The height map generated by the Gocator is rendered as a grayscale image (left) in which the lighter the pixel, the closer the broccoli head is to the camera, and vice-versa.

Currently, such 3D automated harvesting systems are being developed by Capture Automation (Hove, East Sussex, England; www.captureautomation.co.uk). Using a combination of 3D laser scanners, robotics, image processing and deep learning software, the company is automating the harvesting of broccoli heads.

One of the advantages of 3D laser scanners is that they can be used continuously to capture images that can be processed on-the-fly. Using an encoder mounted on the tractor, positional coordinates can be sent to the robot so that it can track each product as it moves under the robot head and then pick it. To gain such accurate information from a moving system, an accurate encoder must be used. In the first prototype of the system, a rotary encoder wheel was positioned at the front of the tractor.
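
A minimal sketch of the encoder arithmetic involved is shown below, assuming a quadrature encoder wheel whose counts-per-revolution and circumference are invented for illustration; the article does not give these figures.

```python
# Sketch only: convert encoder counts into distance travelled so each laser
# profile, and each detected head, can be tracked in the direction of travel.
COUNTS_PER_REV = 2048           # assumed counts per encoder wheel revolution
WHEEL_CIRCUMFERENCE_MM = 500.0  # assumed effective wheel circumference

def encoder_to_mm(counts: int) -> float:
    """Distance travelled in millimeters for a given encoder count."""
    return counts / COUNTS_PER_REV * WHEEL_CIRCUMFERENCE_MM

def travel_since(counts_then: int, counts_now: int) -> float:
    """How far the tractor has moved since a head was detected, in mm,
    used to update the head's position relative to the robot."""
    return encoder_to_mm(counts_now - counts_then)
```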

Unfortunately, in rainy weather, such rotary encoder wheels may slip, rendering the calibration of the system ineffective. To overcome this, a spike-based encoder was developed that provides more accurate positional information.

The harvester can then be used to select broccoli heads according to their weight and size and place each type of head into a different bin. The system can also report which heads have been left in the ground and which may be suitable for picking later. While it takes many months to grow broccoli, there is only an approximately three-day window in which it is neither under- nor over-ripe, so gathering such data is important.

To perform the 3D scanning in the robot harvester, a Gocator scanner from LMI Technologies (Burnaby, BC, Canada; www.lmi3d.com) is mounted onto the front of the tractor and interfaced to a host PC. One of the advantages of using such a scanner is that multiple scanners can be used to cover a large field of view.

Unlike custom laser/camera triangulation systems that need to be calibrated to ensure correct measurements are made, the Gocator is pre-calibrated, allowing the system to immediately provide measurements with millimeter accuracy. Data from the encoder provides the tractor's speed in millimeters/second and allows the captured image data to be used to size the crop and to supply the correct position and depth information to the robot, which can then track each broccoli head and pick it at the correct time.
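
The sketch below illustrates how encoder-derived speed could be combined with a scanner-to-robot offset to decide when a detected head will pass under the picking arm. The distance and counts-to-millimeter values are assumptions, not figures from the system.

```python
# Sketch only: predict when a detected head reaches the robot's pick point.
MM_PER_COUNT = 0.25            # assumed millimeters of travel per encoder count
SCANNER_TO_ROBOT_MM = 1200.0   # assumed offset from laser line to picking arm

def tractor_speed_mm_per_s(delta_counts: int, delta_t_s: float) -> float:
    """Tractor speed from the encoder, in millimeters per second."""
    return delta_counts * MM_PER_COUNT / delta_t_s

def time_until_pick(travelled_since_detection_mm: float,
                    speed_mm_per_s: float) -> float:
    """Seconds until the detected head passes under the picking arm."""
    remaining = max(SCANNER_TO_ROBOT_MM - travelled_since_detection_mm, 0.0)
    return remaining / max(speed_mm_per_s, 1e-6)
```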

Using a standard software image stream such as the GenICam transport layer is useful since it provides flexibility in choosing which software performs image analysis. After images are captured, Sherlock software from Teledyne DALSA (Waterloo, ON, Canada; www.teledynedalsa.com) is used to perform 2D and 3D image processing. To detect the broccoli heads, a custom algorithm was used as a plug-in with the Sherlock software.

The height map from the Gocator is rendered as a grayscale image in which the lighter the pixel, the closer the broccoli head is to the camera, and vice-versa. Because this reduces some of the scaling issues found with 2D algorithms, broccoli heads appear at a similar size. Thus, for classification purposes, there is no need to train on very large or very small broccoli heads, which would increase the error rate since leaves or weeds might be detected instead.
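
As a rough illustration of the height-to-grayscale mapping described above, the following sketch normalizes a height map into an 8-bit image in which taller (closer) points are lighter. Invalid-point handling and value ranges are assumptions.

```python
# Sketch only: render a height map (mm above a reference plane) as grayscale,
# with points closer to the downward-facing scanner shown lighter.
import numpy as np

def height_map_to_gray(height_mm: np.ndarray) -> np.ndarray:
    valid = np.isfinite(height_mm)               # drop missing range data
    gray = np.zeros(height_mm.shape, dtype=np.uint8)
    if not valid.any():
        return gray
    lo, hi = height_mm[valid].min(), height_mm[valid].max()
    gray[valid] = ((height_mm[valid] - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)
    return gray
```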

With harvesting systems such as this, fast image processing is required since classification needs to be performed quickly. If the operator is not driving the tractor in a perfectly straight line, for example, the camera will identify a broccoli head in a certain position but, by the time the robot reaches it, the picking arm could be out of position.

Using 2D image processing algorithms, the broccoli heads can be located, and 3D algorithms can then identify the center of each head. To identify these heads, deep learning techniques are applied using the Polimago pattern matching tool, part of the Common Vision Blox (CVB) software from Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de).
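
The Polimago classifier itself is proprietary, so as a generic stand-in the sketch below proposes head candidates with simple 2D blob detection on the grayscale height map and then refines each center from the 3D data by taking the tallest point in the candidate region; thresholds and minimum sizes are assumptions.

```python
# Sketch only: 2D candidate detection followed by 3D center refinement.
import cv2
import numpy as np

def find_head_centers(gray: np.ndarray, height_mm: np.ndarray,
                      min_area_px: int = 500) -> list:
    # 2D step: threshold the grayscale height map and extract blobs as candidates.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    centers = []
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area_px:
            continue
        # 3D step: the tallest point in the blob approximates the crown of the
        # head, which is where the picking head should be centered.
        ys, xs = np.where(labels == i)
        k = int(np.argmax(height_mm[ys, xs]))
        centers.append((int(xs[k]), int(ys[k])))
    return centers
```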

Typically, working with organic products such as broccoli is difficult. With computer vision, many different variables must be trained into the system, e.g., broccoli heads that are very large or very small and that vary in shape and texture.

Since each broccoli head looks slightly different, the system needs to be trained to identify whether they are perfectly circular or slightly misshapen. One of the biggest challenges in such image identification is separating the broccoli head from the leaf, since the leaf will often blend into the head. Thus, many different images must be used to train the system, a process that involves driving the tractor-based system around the field to identify different types of broccoli heads as they are growing.

Once the system has identified a broccoli head, it needs to be graded by size. Unfortunately, this is not an easy task since the head may be partially covered by leaves, so the leaves need to be segmented from the broccoli heads. Using the 3D images, segmentation by texture can separate the leaves from the broccoli heads. The result is an image that contains only data about the broccoli heads, so that a measurement of the diameter of each head can be performed.
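
The article does not describe the texture measure used, so the sketch below uses local height roughness as a plausible substitute: broccoli florets give a rougher surface than flat leaves, and the remaining pixels are used to approximate the head diameter. Window size, threshold and millimeter-per-pixel scale are all assumptions.

```python
# Sketch only: leaf/head separation by height texture, then diameter estimate.
import numpy as np
from scipy.ndimage import generic_filter

MM_PER_PIXEL = 0.8  # assumed lateral resolution of the pre-calibrated scanner

def head_diameter_mm(height_mm: np.ndarray,
                     roughness_thresh: float = 1.5) -> float:
    # Texture measure: local standard deviation of height in a 5x5 window.
    roughness = generic_filter(height_mm, np.std, size=5)
    head_mask = roughness > roughness_thresh       # rough = floret texture
    ys, xs = np.nonzero(head_mask)
    if xs.size == 0:
        return 0.0
    # Approximate the head's extent and report it as a diameter in millimeters.
    extent_px = max(xs.max() - xs.min(), ys.max() - ys.min())
    return extent_px * MM_PER_PIXEL
```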

Using a graphical user interface (GUI) on the PC, the operator can then select which size of broccoli head is picked. After the broccoli heads are correctly identified, their positional information is sent from the tractor's on-board PC to a robot from Fanuc (Oshino, Japan; www.fanuc.com) fitted with a custom-built picking head. Those that are not picked can then be identified and logged for later analysis.
