Two intertwined themes—deep learning and embedded vision—bubbled up in conversations, product demonstrations and classes throughout Automate 2023 in Detroit.
Those weren’t the only topics covered, of course, but they were among the dominant subjects in a show—put together by the staff at the Association for Advancing Automation (A3)—exploring the latest developments in machine vision, robotics, motion control and industrial AI.
Held May 22-25, the event included a large show floor with more than 600 exhibitors, classes, networking events, and a startup challenge.
For the staff members at Vision Systems Design, the show kicked off early Monday morning when we announced the 2023 Vision Systems Design Innovators Awards Honorees.
Based on impartial ratings from a panel of 11 judges, honorees were chosen in four categories: bronze, silver, gold, and platinum. The judges evaluated each submission based on originality; innovation; impact on designers, systems integrators, or end users; and whether it fulfills a new market need, leverages a novel technology, and/or increases productivity.
There were three platinum honorees this year: Smart Vision Lights (Norton Shores, MI, USA; www.smartvisionlights.com) with its DoAll Light for Robotic Inspection, Radiant Vision Systems (Redmond, WA, USA; www.radiantvisionsystems.com) with its XRE Lens, and LUCID Vision Labs (Richmond, BC, Canada; www.thinklucid.com) with its Atlas10 Camera with RDMA for Optimal 10GigE Image Transfer.
Learn about all honorees and their products and technologies here.
After the awards presentation, Vision Systems Design’s staff members—there were quite a few of us there representing both editorial and sales—fanned out across the show floor and classrooms.
Deep Learning in Embedded Applications
Deep learning is enabling the inspection of products and surfaces in situations in which traditional machine vision methods have struggled. Jim Witherspoon, product manager at Zebra Technologies (Lincolnshire, IL, USA; www.zebra.com), presented an educational session on “Deep Learning at the Edge.” He described these scenarios as those in which “you don’t know what your defect is, and you don’t know the location.” One example would be a scratch on a car door that occurs during production, he says.
Roger Altendorf, product marketing manager for Baumer Optronic GmbH (Radeberg, Germany; www.baumer.com), says deep learning models can complete complex inspection tasks, such as determining fill levels in bottles, identifying broken pieces, or counting the number of items.
How much of this work is occurring at the edge?
Witherspoon says the current trend is to build deep learning models—including annotating images and then training algorithms—on large servers located in the cloud. Once the model is built, the integrator or end user will deploy it on the factory floor using smaller edge machines. At this point, the model is optimized to take in images, process them and produce results.
Michael Cyros, vice president of sales and support Americas and president of Euresys Inc. (Seraing, Belgium; www.euresys.com), agrees, saying, “a lot of these embedded compute engines are optimized to execute. They are not so well optimized to do the training. Training is done on a bigger, beefier PC with a really expensive GPU card.”
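The cloud-to-edge split Witherspoon and Cyros describe can be sketched in miniature. The example below is a deliberately simplified, library-free illustration, not anyone's actual product: the "training" step (which in practice would run on a GPU server with a framework such as PyTorch or TensorFlow) learns a single threshold from labeled image scores, the model is serialized, and the "edge" side only loads it and classifies. All function names and data values here are hypothetical.

```python
import json

# Toy stand-in for the cloud/edge workflow described above: a "model" is
# trained on a big machine, exported, then loaded on an edge device that
# only runs inference. The defect-score data is illustrative.

def train_defect_threshold(labeled_scores):
    """'Training' (cloud side): find the score threshold separating
    good parts from defective ones."""
    good = [s for s, label in labeled_scores if label == "good"]
    bad = [s for s, label in labeled_scores if label == "defect"]
    # Midpoint between the highest-scoring good part and the
    # lowest-scoring defective part.
    return (max(good) + min(bad)) / 2

def export_model(threshold):
    """Serialize the trained parameter for deployment to the factory floor."""
    return json.dumps({"threshold": threshold})

def edge_inference(model_blob, score):
    """Edge side: load the optimized model and classify one image score.
    No training logic lives here -- the edge machine only executes."""
    model = json.loads(model_blob)
    return "defect" if score >= model["threshold"] else "good"

# Training happens once, on the "server":
data = [(0.1, "good"), (0.2, "good"), (0.8, "defect"), (0.9, "defect")]
blob = export_model(train_defect_threshold(data))

# Inference happens repeatedly, on the "edge":
print(edge_inference(blob, 0.85))  # -> defect
print(edge_inference(blob, 0.15))  # -> good
```

In a real pipeline the serialized artifact would be a full network (for example, an ONNX or TensorRT export) rather than a JSON threshold, but the division of labor is the same: the heavy, GPU-bound training stays in the cloud, and the compact, execution-optimized model ships to the edge device.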
Overall, the trend is for embedded computers to become more powerful without becoming larger, explains Tina Liu, assistant sales director at Cincoze (New Taipei City, Taiwan; www.cincoze.com). “We want to keep the compact size; we don’t want to make it larger. For example, the latest generation Intel 12th generation CPU is 20% larger. We try to keep the same size but incorporate the larger CPU. It is a little bit challenging,” she says.
Smart Cameras at the Edge
Smart cameras—which include an imager, I/O and processor—are another form of edge device. Altendorf says these cameras can be deployed on the factory floor for inspection applications involving an AI model.
Witherspoon adds that these types of cameras would typically be used for “single-point inspections, at a normal speed and simple deep learning.” That’s because you want to limit the amount of processing—and, therefore, heat generation—done in proximity to where the images are acquired.
Whatever phase of the AI/deep learning process show attendees were interested in pursuing—such as model development or execution—companies with associated software and hardware products were on hand at Automate 2023, with booth displays and staff ready to help.