While robots are good at performing repetitive tasks within a controlled environment, working in a changing environment - or performing tasks that vary from time to time - has traditionally been very challenging.
"Designing a robot that can pick a number of different semi-structured objects such as fruit for example, requires many tasks to be performed - from recognizing the correct objects, calculating which order to pick them in, planning how to grip them and calculating how best to lift and place them," says Chris Roberts, head of industrial robotics at Cambridge Consultants (Cambridge, England;www.cambridgeconsultants.com).
To show how a low-cost system can be built to perform these tasks, Roberts and his colleagues have built a technology demonstrator designed to highlight the talents of the more than 500 engineers the company employs in Cambridge, England and Boston, MA, USA.
In a demonstration of the system at last year's Electronic Design Show in Coventry, England, Roberts and his colleagues showed how the system could be used to identify different types of fruit, such as apples, bananas and oranges, and then pick and place them depending upon their color. A video of the system in operation can be seen at http://bit.ly/1MhVbv4.
To locate the position of the randomly placed fruit, the system uses a Kinect sensor from Microsoft (Redmond, WA, USA; www.microsoft.com) mounted above a five-axis robot from ST Robotics (Cambridge, England; www.strobotics.com). The sensor incorporates both a 640 x 480 RGB camera and an IR emitter that projects an infrared speckle pattern onto the scene. The projected pattern is then captured by an infrared camera in the sensor and, by comparing the returned pattern to reference patterns collected at known distances from the camera, a depth map of the scene is computed.
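In principle, this reduces to a triangulation between the IR emitter and the IR camera. The Python sketch below illustrates the idea; the baseline, focal length, reference distance and sign convention are illustrative assumptions, not the Kinect's actual calibration or internal algorithm.

```python
import numpy as np

# Illustrative sketch only -- not the Kinect's internal algorithm.
# Baseline, focal length and reference distance are assumed values.
BASELINE_M = 0.075    # emitter-to-camera baseline (assumed)
FOCAL_PX = 580.0      # IR camera focal length in pixels (assumed)
REF_DIST_M = 1.0      # distance of the stored reference pattern (assumed)

def depth_from_disparity(disparity_px):
    """Triangulate depth from the pixel shift between the observed
    speckle pattern and the reference pattern captured at REF_DIST_M.

    Sign convention (assumed): positive disparity means the surface
    lies closer to the camera than the reference plane."""
    return 1.0 / (1.0 / REF_DIST_M + disparity_px / (FOCAL_PX * BASELINE_M))

# Applied element-wise, a disparity map becomes a depth map:
disparity_map = np.zeros((480, 640), dtype=np.float32)  # placeholder data
depth_map = depth_from_disparity(disparity_map)
```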
Both color and depth information is transferred over a USB interface to a laptop PC using the Image Acquisition Toolbox from MathWorks (Natick, MA, USA; www.mathworks.com). Here, image features such as the shape, color and size of objects within the field of view are computed, as well as gradient features from the depth map, using both MATLAB image processing tools and custom algorithms developed by Cambridge Consultants.
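The actual algorithms are proprietary, but the kinds of features described - shape, color, size and depth gradients - can be sketched in a few lines of Python with OpenCV. In the sketch below, the saturation-threshold segmentation and the blob-size cutoff are assumptions made purely for illustration.

```python
import cv2
import numpy as np

def extract_features(rgb, depth):
    """Compute per-blob shape, color, size and depth-gradient features.
    Segmentation by saturation threshold is an assumed stand-in for the
    real (unpublished) algorithms."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    mask = (hsv[:, :, 1] > 60).astype(np.uint8)       # crude fruit-vs-table mask
    # Gradient features from the depth map (Sobel magnitude).
    gx = cv2.Sobel(depth.astype(np.float32), cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth.astype(np.float32), cv2.CV_32F, 0, 1)
    grad = np.hypot(gx, gy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)                     # size
        if area < 500:                                # ignore speckle-sized blobs
            continue
        perim = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / perim ** 2   # shape: 1.0 = perfect circle
        blob = np.zeros(mask.shape, np.uint8)
        cv2.drawContours(blob, [c], -1, 1, -1)
        mean_hue = hsv[:, :, 0][blob == 1].mean()     # color
        mean_grad = grad[blob == 1].mean()            # depth-gradient feature
        features.append(dict(area=area, circularity=circularity,
                             hue=mean_hue, depth_grad=mean_grad, contour=c))
    return features
```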
Analyzed results from both the color image and depth map are then combined to determine the type and most likely position of the objects within the field of view. Once these 3D coordinates are known, the best trajectory for the robot gripper must be planned to avoid any possible collisions.
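Continuing that sketch, the 2D features and the depth map can be fused into an object label and a 3D pick point. The hue bands, the shape rule and the camera intrinsics below are illustrative assumptions rather than the published system.

```python
import cv2

FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0   # assumed camera intrinsics

def classify_and_locate(features, depth):
    """Fuse image features with the depth map: label each blob and
    back-project its centroid to a 3D point in camera coordinates."""
    objects = []
    for f in features:
        if f["circularity"] < 0.6:
            label = "banana"          # elongated blob
        elif 5 <= f["hue"] <= 22:
            label = "orange"          # orange hues on OpenCV's 0-179 scale
        else:
            label = "apple"           # red (wrap-around) or green hues
        m = cv2.moments(f["contour"])
        u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]   # pixel centroid
        z = float(depth[int(v), int(u)])                  # depth at centroid
        # Back-project through a pinhole model to camera coordinates.
        x, y = (u - CX) * z / FX, (v - CY) * z / FY
        objects.append((label, (x, y, z)))
    return objects
```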
"Trajectory planning software written in Python running on the PC then computes the optimum movement of the robot's gripper arm from the top of the scene," says Roberts. This data is then transferred from the PC to the host controller of the five-axis robot.
Because of the irregular shape of the fruit, Cambridge Consultants needed to develop a custom gripper to pick and place it. Using five articulated fingers and a fixed central vacuum bellows, the gripper approaches the object of interest and the center bellows is used to make contact. Once contact has been made, a pressure sensor actuates a vacuum pump attached to the bellows, sealing the contact between the object and the gripper. The five fingers are then actuated to grip the object in a similar manner.
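The grip sequence lends itself to a simple state machine. The sketch below runs against a hypothetical Gripper hardware-abstraction class; the method names and the pressure threshold are invented for illustration, since the actual control electronics are not documented.

```python
import time

def grip(gripper, pressure_threshold=5.0):
    """Contact-then-seal grip sequence (hypothetical Gripper interface)."""
    gripper.extend_bellows()
    # Wait for the pressure sensor to register contact with the object.
    while gripper.read_pressure() < pressure_threshold:
        time.sleep(0.01)
    gripper.vacuum_on()       # seal the bellows against the object
    gripper.close_fingers()   # then close the five fingers around it
```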
Once picked, the fruit can also be sorted by color so that, for example, red apples can be separated from green apples. Although the system remains a technology demonstrator, it highlights how such vision-guided robots can be used to pick disparately shaped and colored parts.
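That final color sort can be as simple as a threshold on the mean hue already computed for each blob; the hue split and the bin coordinates below are, again, illustrative assumptions.

```python
# Assumed drop-off positions for the two bins, in robot coordinates.
RED_BIN, GREEN_BIN = (0.40, 0.10, 0.0), (0.40, -0.10, 0.0)

def apple_bin(mean_hue):
    """Choose a bin from mean hue. OpenCV hue runs 0-179, and red
    wraps around both ends of the scale."""
    return RED_BIN if (mean_hue < 25 or mean_hue > 160) else GREEN_BIN
```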