Researchers develop vision-guided robot system for cucumber harvesting

Feb. 21, 2018
A group of international researchers is developing and testing a lightweight, dual-arm, vision-guided robot for the automated harvesting of cucumbers in Germany.

The "Cucumber Gathering – Green Field Experiments," or "CATCH" project team includes the Fraunhofer Institute for Production Systems and Design Technology IPK, the Leibniz Institute for Agricultural Engineering and Bioeconomy in Germany and the CSIC-UPM Centre for Automation and Robotics (CAR) in Spain. In Germany, explains Fraunhofer IPK, cucumbers that are to be used for pickles are harvested by hand with the aid of "cucumber flyers," which are farm vehicles with wing-like attachments, on which seasonal workers lie on their stomachs and pluck ripe cucumbers.

This is both a labor-intensive and uneconomical operation, and many of Germany’s agricultural regions consequently face an uncertain future—so much so that cucumber farming has already begun relocating to Eastern Europe and India, according to Fraunhofer IPK. Because of this, the international team of researchers is studying the potential for automated cucumber harvests in the CATCH project, for which it is looking to develop and test a dual-arm robot system consisting of inexpensive lightweight modules. The ultimate goal, according to the team, is for the robot to be a viable option for automated cucumber farming and other agricultural applications.

A project such as this faces numerous challenges, including the fact that a cucumber-harvesting robot must identify green objects camouflaged by green surroundings. Additionally, cucumbers are randomly distributed throughout a field and some are concealed by vegetation, while varying light conditions make the task all the more difficult.

Dr. Roemi Fernandez Saavedra, a researcher at the CSIC-UPM Centre for Automation and Robotics (CAR), the project partner responsible for the vision system, explains that two vision system options are being considered. The first pairs a color CCD camera with a Time of Flight (ToF) camera. Reflectance measurements in the visible region provided by the progressive-scan color camera serve as the basic input for detecting areas of interest, while the ToF 3D camera simultaneously supplies fast acquisition of accurate distances and intensity images of targets, enabling cucumbers to be localized in coordinate space once the color and range information are registered.
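
For illustration, the sketch below shows one generic way to register ToF range data with a color image: the 3D points are transformed into the color camera's frame and projected through its intrinsics so that each point picks up an RGB value. It is a minimal example under assumed, pre-calibrated intrinsics and extrinsics; all function and variable names are illustrative and not taken from the CATCH project.

```python
import numpy as np

def register_tof_to_color(xyz_tof, T_color_from_tof, K_color, color_img):
    """Project ToF 3D points into the color image to attach RGB to each point.

    xyz_tof          : (N, 3) array of 3D points in the ToF camera frame (meters)
    T_color_from_tof : (4, 4) homogeneous transform from ToF to color camera frame
    K_color          : (3, 3) intrinsic matrix of the color camera
    color_img        : (H, W, 3) color image
    Returns an (N, 6) array of [x, y, z, r, g, b] for points that land in the image.
    """
    # Transform the points into the color camera frame.
    pts_h = np.hstack([xyz_tof, np.ones((xyz_tof.shape[0], 1))])
    xyz_color = (T_color_from_tof @ pts_h.T).T[:, :3]

    # Keep only points in front of the color camera.
    xyz_color = xyz_color[xyz_color[:, 2] > 0]

    # Pinhole projection onto the color image plane.
    uv = (K_color @ xyz_color.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Discard projections that fall outside the image.
    h, w = color_img.shape[:2]
    u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    rgb = color_img[v[valid], u[valid]]
    return np.hstack([xyz_color[valid], rgb.astype(float)])
```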

A Prosilica GC2450 camera from Allied Vision was used as the color camera in this option. This GigE Vision color camera features the 5 MPixel Sony ICX625 CCD image sensor and delivers up to 15 fps at 2448 × 2050 pixels resolution. The ToF camera provides a depth map and an amplitude image at a resolution of 176 × 144 pixels with 16-bit floating-point precision and a maximum frame rate of 54 fps, as well as x, y and z coordinates for each pixel in the depth map, according to Dr. Fernandez. The detection range of this device goes from 0.1 m to 5.0 m, and its field of view is 69° (h) × 56° (v).
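
Although the ToF camera already delivers x, y and z coordinates per pixel, the sketch below shows how a depth map at the quoted resolution and field of view could be turned into a point cloud under a simple pinhole assumption; a real deployment would use the camera's calibrated intrinsics instead. The function name and defaults are illustrative only.

```python
import numpy as np

def depth_to_point_cloud(depth, fov_h_deg=69.0, fov_v_deg=56.0):
    """Convert a ToF depth map (z in meters) into an (H, W, 3) array of x, y, z.

    Assumes a simple pinhole model whose focal lengths are derived from the
    stated field of view; calibrated intrinsics should be preferred when available.
    """
    h, w = depth.shape
    fx = (w / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)
    fy = (h / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0

    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])

# Example at the quoted ToF resolution (176 x 144 pixels): a flat scene 1.5 m away.
cloud = depth_to_point_cloud(np.full((144, 176), 1.5))
```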

The second system being considered, according to Fernandez, consists of the monochrome version of the Prosilica camera combined with a custom-made filter wheel and a servomotor that accurately positions the wheel, allowing up to five optical filters to be interchanged. This set-up enables the analysis of spectral information, providing additional capabilities such as the early detection of diseases. Both vision systems use the now-discontinued Mesa Imaging SwissRanger SR4000 3D Time of Flight camera.
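
The acquisition pattern for such a filter-wheel setup might look like the sketch below: rotate the wheel to each filter, let the servo settle, grab a monochrome frame, and stack the frames into a spectral cube. The camera and filter-wheel objects and the listed wavelengths are hypothetical placeholders, since the article does not specify the actual drivers or filter set.

```python
import time
import numpy as np

# Illustrative band set; the project does not state which five filters are used.
FILTER_BANDS_NM = [480, 550, 670, 730, 800]

def capture_multispectral_cube(camera, filter_wheel, settle_time_s=0.2):
    """Capture one monochrome frame per optical filter and stack them into a cube.

    `camera` and `filter_wheel` are hypothetical driver objects standing in for
    the Prosilica camera and the custom servo-driven wheel; only the overall
    acquisition pattern is meant to be illustrative.
    """
    bands = []
    for position, wavelength_nm in enumerate(FILTER_BANDS_NM):
        filter_wheel.move_to(position)      # rotate the wheel to the next filter
        time.sleep(settle_time_s)           # wait for the servo to settle
        frame = camera.grab_frame()         # 2D monochrome image for this band
        bands.append(np.asarray(frame, dtype=np.float32))
    # (H, W, number_of_bands) spectral cube, one slice per filter
    return np.dstack(bands)
```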

Fernandez also stressed that other color or monochrome cameras, as well as other Time of Flight cameras, could be used for this application in place of the Prosilica and MESA cameras. The key issue, she explained, is the combination of data (color plus 3D point cloud, or multispectral bands plus 3D point cloud) and the processing algorithms designed to work with these inputs.

In terms of which vision system may be utilized for a particular application, Fernandez explained further: "The first set-up is simpler and provides a faster acquisition. The input data provided by this system is enough for cucumber detection. The second set-up can provide additional capabilities, such as early detection of disease on the crops, thanks to the multispectral information. Nevertheless, the acquisition time with this system is slower, since it is necessary to move the filter wheel to take images in different spectral bands."

Combining these vision systems with intelligent image processing techniques will enable the system to locate cucumbers and guide the robot’s gripper arms to pluck them, according to Fraunhofer IPK. The vision system should ensure that the robot detects and locates approximately 95% of cucumbers, with the ultimate goal of advancing the technology to the point where the robot picks all ripe cucumbers so that new ones can grow.
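
As a rough idea of the green-on-green problem, a generic color-index baseline such as Excess Green (ExG = 2g − r − b) separates vegetation from soil but not cucumbers from leaves, which is why color must be combined with the registered range data and further processing. The snippet below is only that generic baseline, not the CATCH detection pipeline.

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Segment vegetation-like pixels with the Excess Green index (ExG = 2g - r - b).

    A generic color-index baseline, not the project's detection algorithm;
    telling cucumbers apart from leaves additionally requires shape, texture,
    and the registered range data described above.
    """
    rgb = rgb.astype(np.float32)
    total = rgb.sum(axis=2) + 1e-6          # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b
    return exg > threshold                  # boolean mask of vegetation-like pixels
```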

Fraunhofer IPK developed the robot arms with five degrees of freedom on the basis of hardware modules developed by igus GmbH in Cologne, Germany. The team is tasked with developing three gripper prototypes: a gripper based on vacuum technology, a set of bionic gripper jaws (Fin Ray®), and a customized "cucumber hand" based on OpenBionics robot hands. The team is drawing on insights from a previous European research project in which it developed a dual-arm robot control system with efficient task-oriented programming for Workerbot I, a humanoid robot capable of industrial assembly.

Project experts from IPK are reportedly working on the system so that it can plan, program and control the behavior of robots harvesting cucumbers. These pre-programmed behavioral patterns will reportedly enable the robot to search for cucumbers as a person would. Dr. Dragoljub Surdilovic, a scientist at Fraunhofer IPK, explains further: "The robot can, for example, push leaves to the side using symmetrical or asymmetrical movements, or congruent and incongruent movements. As a result, it can automatically change directions on the fly to approach and then grasp a cucumber."
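
A toy sketch of how such pre-programmed behaviors might be sequenced is shown below as a simple state machine (search, push leaves aside, approach, grasp). It is purely illustrative and assumes made-up state names and conditions; the IPK planner is a task-oriented control system and far richer than this.

```python
from enum import Enum, auto

class HarvestState(Enum):
    SEARCH = auto()
    PUSH_LEAVES = auto()
    APPROACH = auto()
    GRASP = auto()

def next_state(state, cucumber_visible, cucumber_reachable):
    """Toy behavior sequencing for a single arm (illustrative only)."""
    if state is HarvestState.SEARCH:
        # If nothing is visible, move foliage aside; otherwise move toward the target.
        return HarvestState.APPROACH if cucumber_visible else HarvestState.PUSH_LEAVES
    if state is HarvestState.PUSH_LEAVES:
        return HarvestState.SEARCH           # re-scan after moving leaves aside
    if state is HarvestState.APPROACH:
        return HarvestState.GRASP if cucumber_reachable else HarvestState.APPROACH
    return HarvestState.SEARCH               # after a grasp, resume searching
```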

The ultimate goal of the researchers is to create an "intelligent control system capable of making judgment calls: assigning a certain task to a certain gripper arm, monitoring cucumber picking and dealing with exceptions."

This past July, the Leibniz Institute for Agricultural Engineering and Bioeconomy used various types of cucumbers to conduct initial field testing of the robot system at its test site. The institute also tested harvesting new types of cucumbers with distinguishing features that make them easier to pick. This first round of testing validated basic functionality, and since the fall of 2017, project partners have been conducting additional tests in a Leibniz Institute greenhouse.

Fraunhofer IPK notes that the team is especially eager to see the extent to which interference or malfunctions affect the efficiency and robustness of the system. Once testing is complete, the team and its project partners will work to make the system commercially viable. Companies, cucumber farmers and agricultural associations have already expressed considerable interest in the dual-arm robot.

Additionally, the CATCH project was unveiled to the general public at Agritechnica, the world’s leading trade fair for agricultural technology, in November 2017. The German Agricultural Society (DLG e.V.) exhibited the robot at its Agritechnica booth, reportedly eliciting enthusiastic feedback from agricultural specialists and numerous companies.

View a Fraunhofer IPK press release on the project.


About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
