Two stationary Ensenso cameras are mounted in each cell, allowing object detection and part placement to run in parallel. When multiple cameras are used, the N10 software merges their data into a single 3D point cloud. The software also controls the CMOS sensors and the random-pattern projector and handles the capture and pre-processing of the 3D data, optimizing both frame rate and image quality. In addition, a calibration plate mounted on the robot gripper enables camera-to-robot calibration: the software uses the plate to calculate the camera's mounting position, so the 3D data is represented directly in the robot's coordinate system.
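The result of such a camera-to-robot calibration is a rigid transform (a rotation and a translation) that maps camera-frame points into the robot's base frame. A minimal sketch of applying that transform to a point cloud, assuming a NumPy representation and hypothetical names (the actual transform would come from the calibration software, not be hand-written):

```python
import numpy as np

def camera_to_robot(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud from the camera frame into the robot
    base frame via the calibrated rigid transform p_robot = R @ p_cam + t."""
    return points_cam @ R.T + t

# Illustrative values: a camera rotated 90 degrees about Z relative to the
# robot base and mounted 1 m above the base-frame origin.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 1.0])

cloud_cam = np.array([[0.1, 0.0, 0.5]])   # one point seen by the camera
cloud_robot = camera_to_robot(cloud_cam, R, t)
```

Once every camera's cloud is expressed in this common robot frame, merging the clouds into a single data set is a simple concatenation.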
Images captured by the cameras are analyzed with MVTec's HALCON 11 software. From these images, together with CAD data for the cell, the robot, and the robot gripper, a collision-free robot path is generated and transferred to the robot controller for execution. If the robot detects a failure during bin picking, it navigates independently out of the container and tries to pick the part at another location. A programmable logic controller monitors the entire process and tells the machine vision system which type of part to pick, when, and from which bin. With this vision system, the robot cells achieve cycle times of less than 10 seconds.
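The recovery behavior described above — retract from the container on a failed pick and retry at another location — amounts to a simple attempt loop. A sketch under assumed names (the `try_pick` and `retract` callables stand in for the real controller interface, which the article does not detail):

```python
def pick_with_retry(candidates, try_pick, retract, max_attempts=3):
    """Attempt grasp candidates in order. On a failed pick, retract to a
    safe pose outside the bin and try the next candidate pose."""
    for pose in candidates[:max_attempts]:
        if try_pick(pose):       # execute the collision-free path, close gripper
            return pose          # success: report the pose that worked
        retract()                # failure: navigate independently out of the bin
    return None                  # all attempts exhausted

# Stubbed demo: the first candidate fails, the second succeeds.
attempts = []
result = pick_with_retry(
    candidates=["pose_A", "pose_B", "pose_C"],
    try_pick=lambda p: (attempts.append(p), p == "pose_B")[1],
    retract=lambda: None,
)
```

In the cell described here, the candidate poses would come from the HALCON analysis of the point cloud, and the PLC would decide which part type the loop runs for.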
View the IDS case study.
Share your vision-related news by contacting James Carroll, Senior Web Editor, Vision Systems Design