Machine vision drives flexible automation
A discussion with Andre By, Automation Engineering
VSD: We last spoke two years ago. What important developments have you noted in machine-vision technology and products since then?
By: Several come to mind. One is that the adoption of multiple available network approaches for camera communication has eliminated the need for traditional image-acquisition hardware in many machine-vision applications. Image-analysis algorithms, software libraries, and tools also continue to improve, as do configurable vision-system development platforms. We are also seeing price points for megapixel-resolution cameras decrease, making them practical for more demanding imaging applications.
VSD: What market changes are driving these developments? How are the developments driving the markets?
By: Machine vision has always been a big enabling technology for flexible automation, and it is now an even more significant tool in manufacturing as product development and market window time frames become continually shorter. We also see more advanced image acquisition being introduced using higher-quality camera modules for applications such as cell phones, automotive, and medical imaging. The tighter manufacturing tolerances required to satisfy these higher performance requirements mean we are using active alignment of optics to the image sensors instead of the passive alignment methods that were more typically employed for early-generation products.
AEi shipped more than three dozen of our CMAT automated camera module align-and-test stations to China in the last nine or so months, employing exactly this type of active alignment. It certainly wasn't to save on labor but instead to make much better, more consistent products with higher yield (see figure).
VSD: How has the advent of low-cost smart cameras impacted the system-integration business?
By: These continue to be quite effective tools, especially for low-end to moderately challenging machine-vision applications. The setup and programming of these smart cameras also continue to improve, as do the on-board vision-processing capabilities of platforms such as the VC40XX series from Vision Components [Ettlingen, Germany; www.vision-components.com]. However, the more demanding and complex machine-vision applications we typically encounter at AEi require very tight integration, with machine motion planning and control being vision-results-driven in real time.
Our deployed automation systems typically use two to four cameras with anywhere from 3 to 24 axes of servo motion, and we find it more technically effective to use an integrated PC control solution that ties this all together seamlessly. That's why tools such as our FlexAuto and National Instruments [Austin, TX, USA; www.ni.com] LabVIEW have been developed for machine control where vision and motion need a high level of integration.
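To make the idea of vision-results-driven motion concrete, here is a minimal closed-loop active-alignment sketch. Everything in it is a hypothetical illustration: the class names, the simulated camera, and the proportional correction are assumptions for exposition, not the FlexAuto or LabVIEW interface.

```python
# Hypothetical sketch of a vision-results-driven motion loop:
# each camera measurement drives a servo correction until the
# measured offset is within tolerance. All names are invented.

from dataclasses import dataclass


@dataclass
class Offset:
    dx: float  # lateral error, e.g. micrometers
    dy: float


class ServoAxis:
    """Minimal stand-in for one axis of servo motion."""

    def __init__(self) -> None:
        self.position = 0.0

    def move_relative(self, delta: float) -> None:
        self.position += delta


class SimulatedCamera:
    """Stand-in for image acquisition plus analysis: reports the
    offset between the current stage position and an ideal target."""

    def __init__(self, target: Offset, x: ServoAxis, y: ServoAxis) -> None:
        self.target, self.x, self.y = target, x, y

    def measure_offset(self) -> Offset:
        return Offset(self.target.dx - self.x.position,
                      self.target.dy - self.y.position)


def align(cam: SimulatedCamera, x_axis: ServoAxis, y_axis: ServoAxis,
          tol: float = 0.5, gain: float = 0.8, max_iter: int = 20) -> bool:
    """Closed-loop active alignment: measure, correct, repeat."""
    for _ in range(max_iter):
        err = cam.measure_offset()
        if abs(err.dx) < tol and abs(err.dy) < tol:
            return True
        # Proportional correction toward the measured target.
        x_axis.move_relative(gain * err.dx)
        y_axis.move_relative(gain * err.dy)
    return False
```

In a real station the simulated camera would be replaced by frame grabbing and alignment algorithms, and the loop timing would be governed by the real-time control layer.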
VSD: How will the impact of Microsoft’s .NET framework and the soon-to-be-released Vista operating system affect system designs of the future?
VSD: What developments do you see occurring in deterministic Ethernet protocols such as EtherCAT, Ethernet-IP with determinism, and the isochronous real-time ProfiNET-IRT?
By: These and other deterministic networks offer the advantages of distributed I/O for the manufacturing environment but with high predictability and minimal latencies. That can be quite important to overall controls robustness and reliability. Distributed I/O in any event offers higher modularity and flexibility, especially if it is open architecture.
VSD: Object-oriented programming tools are now allowing more sophisticated machine-vision software to be deployed. What do you see as the main advances in this area?
By: One area of advance that I think is important is providing the benefits of object-oriented programming without a massive overhead hit. This has been an ongoing obstacle in effectively deploying applications with object-oriented principles and tools. Other key advances and innovations are in development environments that encourage the software designer or programmer to associate software objects with real-world elements that are intuitive to the user. This can lead to much more effective and easy-to-customize configurable machine control and machine-vision platforms. Defining objects such as “camera,” “machine I/O point,” “motion stage,” “alarm event,” and so forth when setting up machine-vision applications makes deploying these much easier and faster.
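The mapping of software objects onto intuitive real-world machine elements can be sketched as follows. The class names and structure here are illustrative assumptions, not any particular vendor's platform.

```python
# Illustrative sketch only: machine-vision application elements
# ("camera", "motion stage", "alarm event") modeled as objects
# and assembled into a named configuration. Names are invented.

class Camera:
    def __init__(self, name: str, resolution: tuple[int, int]) -> None:
        self.name = name
        self.resolution = resolution


class MotionStage:
    def __init__(self, name: str, axes: int) -> None:
        self.name = name
        self.axes = axes


class AlarmEvent:
    def __init__(self, name: str, message: str) -> None:
        self.name = name
        self.message = message


class MachineConfig:
    """A vision application assembled from named real-world elements."""

    def __init__(self) -> None:
        self.elements: dict[str, object] = {}

    def add(self, element) -> None:
        self.elements[element.name] = element

    def __getitem__(self, name: str):
        return self.elements[name]


config = MachineConfig()
config.add(Camera("inspect_cam", resolution=(1280, 1024)))
config.add(MotionStage("xy_stage", axes=2))
config.add(AlarmEvent("align_fail", "Active alignment did not converge"))
```

Because each object corresponds to something the user can point to on the machine, customizing the configuration stays intuitive as the application grows.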
VSD: Lighting plays a critical role in machine-vision systems development. What significant developments are you seeing in this area?
By: There are all kinds of exciting new options out there that can offer very bright and increasingly uniform lighting. Even the best machine-vision analysis algorithms are helped by improved uniformity in illumination.
VSD: Specifically, could you discuss LED, halogen, and fiberoptic lighting and which choices prove best for a specific application?
By: We find that LED lighting and fiberoptic lighting driven by halogen sources continue to be important tools in our toolbox. Recent advances in LEDs, and in particular organic LEDs, are very exciting because these lights have very long lives and great power efficiency. However, we have also been doing more projects recently with cold fluorescent and electroluminescent backlighting with very good results.
ANDRE BY is chief technology officer of Automation Engineering (Wilmington, MA, USA; www.aeiboston.com), which he founded in 1990. He has a master’s degree in mechanical engineering from the Massachusetts Institute of Technology and ran the automation systems group at the NREC Division of Ingersoll-Rand. Editor in chief Conard Holton talked to him about trends in automation.