Vision System Enables Robotic Picking at Industrial Bakery

Aug. 19, 2022
Challenges including the products’ irregular shapes and sizes, as well as transparent plastic packaging that caused low contrast and partial reflections, called for a solution incorporating machine vision.

A Finland-based industrial bakery needed to replace a manual operation deemed inefficient and fallible. The operation entailed transferring packaged baked goods from a conveyor into bins for transport. The bakery enlisted KINE (Turku, Finland), a Finnish provider of custom turnkey robotics solutions, to develop an automated solution. The solution called for a conveyor tracking system with on-the-fly robotic picking of bags of bread loaves and rolls. There were various challenges to overcome, including the products’ irregular shapes and sizes as well as transparent plastic packaging causing low contrast and partial reflections, which made it difficult to detect the product using optical sensors. KINE quickly determined it needed a solution that incorporated machine vision.

The Process

Baked goods travel along a conveyor belt and must be picked on the fly from the moving conveyor. Before the upgrade to the vision system, two human operators manually removed bread packages and deposited them into bins. The bread came from the packaging machine onto a rotating table where an operator was positioned; workers picked the baked goods off, placed them into bins, and manually stacked the bins onto pallets for later transport.

The system KINE designed determines the position and orientation of the packages of bread, transmitting this information to the robot controller as coordinates for picking up the product. The products are identified and picked up by a robot and placed into the bins for shipping to the final customer. A time-of-flight (ToF) camera allows the machine to “see through” the semi- or fully transparent packaging to determine the overall 3D shape of the baked goods. This method allows the vision system to confirm that it is the correct item and to position and orient the robot to pick the product at the correct position and angle without damaging the bread or its packaging.

The vision system determines the position of the product; the “tracking” is the compensation for the distance the baked good has traveled since being imaged by the ToF sensor and located (x, y coordinates) by the vision software. The robot’s software handles this distance compensation: a rotary encoder follows conveyor motion, and the software translates the position and angle information into real-world units offset by the distance traveled, so the robot can accurately pick the baked good further down the conveyor line.
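The distance compensation described above can be sketched in a few lines. This is a minimal illustration, assuming the conveyor runs along the x axis; the function and parameter names are hypothetical, not KINE's or Stäubli's actual API.

```python
def picked_position(x_img_mm, y_img_mm, enc_at_image, enc_at_pick,
                    mm_per_count):
    """Translate a position located in the camera image into the
    position where the robot must pick, compensating for the distance
    the conveyor has moved since the image was acquired.

    Assumes the conveyor moves along the +x axis. All names here are
    illustrative.
    """
    # Encoder counts elapsed since imaging, converted to millimeters.
    travel_mm = (enc_at_pick - enc_at_image) * mm_per_count
    return (x_img_mm + travel_mm, y_img_mm)

# A bag located at x = 120 mm in the image; by pick time the encoder
# has advanced 4000 counts at 0.1 mm/count, i.e. 400 mm downstream.
print(picked_position(120.0, 55.0, 1000, 5000, 0.1))
```

The key point is that the offset is derived from encoder counts, not elapsed time, so it stays correct even if the conveyor speed varies.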

The vision system uses software-based 3D and 2D tools to segment the baked goods (in effect, the foreground) from the background conveyor and to determine position and orientation. It does not count whether a package contains the appropriate number of rolls.
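As a rough illustration of that segmentation step, the sketch below thresholds a height map (a depth map referenced to the belt plane) to separate packages from the conveyor, then estimates position and orientation from image moments. This is a generic NumPy stand-in under stated assumptions, not the Matrox tools the system actually uses.

```python
import numpy as np

def segment_foreground(height_map, tol_mm=5.0):
    """Separate packages (foreground) from the conveyor belt
    (background) by thresholding height above the belt plane."""
    return height_map > tol_mm

def blob_pose(mask):
    """Estimate the position (centroid) and orientation (principal
    axis, in degrees) of the foreground pixels via image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # The major eigenvector of the pixel covariance gives the long
    # axis of the package.
    cov = np.cov(np.vstack([xs - cx, ys - cy]))
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return (cx, cy), angle

# Synthetic example: a 30 mm tall bag occupying a rectangle.
height = np.zeros((40, 60))
height[10:20, 15:45] = 30.0
pose = blob_pose(segment_foreground(height))
print(pose)
```

A real system would also reject blobs whose area or height falls outside the recipe limits before reporting a pose.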

The Turnkey System

When KINE recognized the need for machine vision in the new system, it enlisted the help of OEM Finland Oy (Turku, Finland), a provider of industrial components in Europe and a distribution partner of Matrox Imaging (Dorval, QC, Canada). Matrox Imaging was recently acquired by Zebra Technologies (Lincolnshire, IL, USA).

To initiate the inspection system, a B&R (Mississauga, ON, Canada) X20CP1585 PLC, which communicates with the vision program via User Datagram Protocol (UDP), triggers a Basler AG (Ahrensburg, Germany) Blaze 101 3D ToF camera via a photodetector. The camera provides real-time 3D measurements at up to 30 fps. A Stäubli (Horgen, Switzerland) TS2-60 SCARA robot handles the picking and placing and communicates over TCP with Stäubli VALtrack software for conveyor tracking and picking. A B&R industrial panel PC serves as the platform running the Matrox Design Assistant X® runtime environment and the associated vision program.

For each product, the system verifies that it matches the configured limits, determines the exact position and angle of the product on the conveyor, and communicates that information to the robot so it picks the correct product from the conveyor.

Matrox Design Assistant X analyzes the camera data and provides the central interface to the vision system. The camera records the surfaces as a point cloud with more than 300,000 XYZ coordinates. The software converts the point cloud into a depth map, which 2D vision tools analyze to determine the grip points for the robot. The robot needs to get a good grip on the plastic bags without causing any damage to the delicate bread or its packaging.
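The point-cloud-to-depth-map conversion can be pictured as rasterizing XYZ points onto a 2D grid. The sketch below is a simplified stand-in for what Matrox Design Assistant X does internally; the cell size, grid shape, and function names are all assumptions for illustration.

```python
import numpy as np

def point_cloud_to_depth_map(points, cell_mm=2.0, shape=(240, 320)):
    """Rasterize an Nx3 XYZ point cloud (millimeters) into a 2D depth
    map. Each cell keeps the highest Z seen there (the top surface of
    the bag); cells with no points stay 0."""
    depth = np.zeros(shape)
    cols = (points[:, 0] / cell_mm).astype(int)
    rows = (points[:, 1] / cell_mm).astype(int)
    ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
    # np.maximum.at applies an unbuffered per-cell maximum, so
    # multiple points landing in one cell keep only the tallest.
    np.maximum.at(depth, (rows[ok], cols[ok]), points[ok, 2])
    return depth

# Three points: two share a cell (heights 5 and 8 mm), one elsewhere.
pts = np.array([[10.0, 10.0, 5.0],
                [10.0, 10.0, 8.0],
                [100.0, 50.0, 3.0]])
depth = point_cloud_to_depth_map(pts)
```

Once flattened this way, ordinary 2D tools (blob analysis, edge finding) can locate grip points far more cheaply than working on the raw cloud.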

The Matrox Design Assistant X runtime environment is used on the touch panel PC, and the machine vision application is developed with Matrox Design Assistant X. The system uses Matrox Design Assistant X recipes (i.e., job handling) to switch to handling another product when a different baked good is produced, packaged, and ready to be sorted for shipping. Recipes are used to configure select flowchart steps so that a flowchart can inspect different product variants. This allows for the use of a single project to inspect the different types of breads with their unique characteristics. Recipes can be created and configured in Matrox Design Assistant X at both design time and runtime.
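Conceptually, a recipe is a named set of per-product parameters that the same flowchart consumes. The toy lookup below illustrates the idea only; the real parameters live in Matrox Design Assistant X recipes, and every name and value here is hypothetical.

```python
# Hypothetical recipe table; product IDs, limits, and grip styles are
# invented for illustration.
RECIPES = {
    "rolls_6pack": {"min_area_mm2": 9000, "max_height_mm": 90,
                    "grip": "wide"},
    "loaf_sliced": {"min_area_mm2": 15000, "max_height_mm": 120,
                    "grip": "narrow"},
}

def load_recipe(product_id):
    """Return the inspection parameters for the product currently in
    production; fail loudly if no recipe has been configured."""
    try:
        return RECIPES[product_id]
    except KeyError:
        raise ValueError(f"no recipe configured for {product_id!r}")

print(load_recipe("loaf_sliced")["grip"])
```

Switching products then means switching one lookup key rather than deploying a different inspection project, which is the benefit the article describes.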

The main purpose of the robot is to replace the human operators. There is no vision “on” the robot; rather, the robot is part of the overall vision system, and it uses the information provided to it by the system to guide its movement. The robot and the rest of the system are installed after the packaging machine at the end of the production line and serve as the final quality control assessment before the baked goods exit the production process.

The vision system, working in conjunction with Matrox Design Assistant X, also streamlines the identification of non-conforming products. For example, breads that make it down the line without packaging, breads that are mispackaged, or breads that should be sliced but aren’t bypass the robot and are automatically diverted into a rejection bin.

After passing through the inspection system, the collected packages are automatically stacked and then placed onto pallets for distribution.

Implementation Challenges

Although the main challenge for this application was solved by using vision to distinguish packaging and contents, implementing the system itself posed challenges. For example, the transparent bag is not seen by the camera, and the actual product height/arrangement inside the plastic bag is not relevant, so using 3D information to determine pick height was not an option. What is relevant is the total volume of the plastic bag (product plus air) so the robot can get a good grip and not damage the plastic bag. The recipe-based solution afforded by Matrox Design Assistant X was more reliable, as it allowed for use of a single project to inspect the different types of bread with their unique characteristics.

Another challenge occurred because of communication delays with the PLC trigger, which made synchronization difficult. KINE could not use physical I/O to relay the trigger to the machine vision system. The camera does not have an I/O trigger option, and KINE used a third-party panel PC, so communication between the trigger sensor and the Matrox Design Assistant X runtime involves a PLC and network communication, in this case UDP. The network communication is not real time, which causes a variable delay. Another variable delay resulted from the camera’s internal processing and its integration with the Matrox Design Assistant X runtime; camera processing time varies from one acquisition to another by up to 50 ms. KINE overcame these challenges by using the trigger signal from the photodetector to determine the distance traveled on the conveyor. The same trigger starts the camera’s image acquisition. The robot uses an encoder to track conveyor movement after the trigger event, and the machine vision software then determines the transverse position and orientation before performing the analysis.
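One way to make the downstream math immune to those variable network delays is to latch the encoder count at the trigger instant and carry it inside the UDP datagram, so position is computed from conveyor distance rather than arrival time. The sketch below shows such an exchange under an assumed datagram layout (trigger ID plus encoder count); the actual PLC protocol is not published, so everything here is illustrative.

```python
import socket
import struct

# Hypothetical wire format: little-endian 32-bit trigger ID followed
# by a 64-bit encoder count latched at the photodetector edge.
TRIGGER_FMT = "<Iq"

def send_trigger(sock, addr, trigger_id, encoder_count):
    """Send one trigger datagram. UDP is fire-and-forget, so the
    receiver must tolerate loss and variable latency."""
    sock.sendto(struct.pack(TRIGGER_FMT, trigger_id, encoder_count), addr)

def parse_trigger(datagram):
    """Decode (trigger_id, encoder_count) from a received datagram."""
    return struct.unpack(TRIGGER_FMT, datagram)

# Loopback demonstration of one trigger round trip.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_trigger(tx, rx.getsockname(), 7, 123456)
data, _ = rx.recvfrom(64)
print(parse_trigger(data))
tx.close()
rx.close()
```

Because the encoder count travels with the message, a 50 ms delivery jitter shifts only when the vision program reacts, not where it believes the package is.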

KINE and its customer are very pleased with the final vision system, which delivers a lower error rate than the prior manual handling along with a higher throughput of goods. With the new vision system, the robot can handle between 25 and 30 packages per minute.

About the Author

Chris Mc Loone | Editor in Chief

Former Editor in Chief Chris Mc Loone joined the Vision Systems Design team as editor in chief in 2021. Chris has been in B2B media for over 25 years. During his tenure at VSD, he covered machine vision and imaging from numerous angles, including application stories, technology trends, industry news, market updates, and new products.
