Machine vision system counts, identifies and verifies highly-regulated objects

May 1, 2018

Consumer-driven advances in camera technology are pushing quality and precision improvements through the broader supply chain. One result is that commercial-off-the-shelf (COTS) camera sensors have never been more powerful or more affordable, and they are catching the attention of manufacturers and their equipment suppliers focused on industrial automation.

COTS and open source have dramatically shifted the playing field, opening up opportunities for nearly any original equipment manufacturer (OEM) in any industry to improve equipment design, device control, and operational outcomes while improving accuracy and quality and reducing operating costs.

Using commercial-off-the-shelf cameras and open-source software, engineers can achieve the level of performance most manufacturers would expect from an industrial vision system but at a much lower cost.

For example, when a maker of high-speed object counters needed to improve its equipment, engineers at the company determined that deploying a conventional, general-purpose machine vision system from a smart camera supplier, at a cost of $5,000 to $10,000 per unit plus software licensing fees, would be cost-prohibitive for their high-volume application.

That’s why, to achieve significant savings, they teamed with engineers at EmbedTek (Waukesha, WI, USA; www.embedtek.net) to develop a machine vision system using COTS camera sensors and components, along with open source software (Figure 1).

“A high-speed camera that can capture 240 frames per second used to be available only in industrial-grade form, tied to a four-figure price tag. But now the same technology is available off-the-shelf for hundreds of dollars,” explains Kent Tabor, President and CTO of EmbedTek. “We leverage COTS camera sensors and components in conjunction with our established product line of computers and integrated displays, and have been able to change a year-long research and development process into a 6-8 week rapid prototyping process that solves customer design challenges faster.”

At a cost of just hundreds of dollars per unit, and a few thousand more for the complete application with no software licensing fees, the next-generation counting solution not only saves the OEM money but also builds in upgrade and migration plans from the start, through a lifecycle management program that accommodates the entire supply chain.

Formerly, the object counter relied on a laser, using the interruption of the beam to detect when an object was present for counting. A drawback of this approach, however, was that two objects interrupting the beam at the same time were counted as one. So, to ensure count accuracy, operators had to hand-feed objects into the machine.

A vision system immediately came to mind as a solution to increase speed, but the variability in size, shape, and rotation of the objects as they fell threatened to compromise accuracy. However, by using COTS camera sensors from suppliers such as ON Semiconductor (Phoenix, AZ, USA; www.onsemi.com), Sony (Tokyo, Japan; www.sony-semicon.co.jp), STMicroelectronics (Geneva, Switzerland; www.st.com), and Teledyne e2v (Chelmsford, UK; www.e2v.com), along with LEDs from various suppliers and COTS lens and filter components from vendors such as Navitar (Rochester, NY, USA; https://navitar.com), the EmbedTek team, after a few rounds of trial and error using open-source software, was able to design and optimize an entirely new environment inside the counter.

In each image, the lower left frame is the raw image seen by the camera. The upper left frame is the raw image with red lines encircling the objects being counted. The middle top frame is a binary view of the objects. The upper right frame is the color-coded view. The lower right frame is the outline of the objects.

Transitioning to a camera sensor not only improves object counting, according to Tabor, but also makes it possible to identify and verify a variety of the highly regulated objects being counted, by storing visual evidence of what was dispensed (Figure 2).

During operation, the process starts with an area for the operator to fill the counter, and then a feed mechanism manages the speed at which the objects are dropped through. The feed mechanism also triggers a motion-activated camera and lighting system.

Polarization filters and strobe lighting synchronize with the camera frame rate to ensure brightness and clarity, which is necessary to distinguish objects that present themselves in a wide variety of orientations. Once the image is captured, the software interprets it and algorithms eliminate inaccuracies.

“Our customer can now provide their customers who rely on quick, accurate, and verified counting with a solution that not only ensures quality but can record each counting session in case they ever need to audit inventory or production processes,” Tabor explains. “This vision system and specialized lighting environment make a next-generation technology possible without compromising manufacturing costs or drastically increasing the end-user investment.”

One open-source platform that EmbedTek engineers favor is the Open Source Computer Vision Library (OpenCV; http://opencv.org), an open-source computer vision and machine learning software library that offers C/C++, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS, and Android operating systems.

“OpenCV was built for computational adeptness with multi-core processing. Its heterogeneous compute platform allows it to be used for everything from graphic design to military operations and robotics,” says Tabor. “About 80 percent of the work is done for us with open source software. As the community continues to grow and develop, it provides more mechanisms that give us, engineers, access to better algorithms, faster performance, and more tools at our disposal. Then we spend the remaining 20 percent refactoring, redesigning, and honing different aspects to customize the software for our use.”

About the Author

John Lewis | Editor in Chief

John brings technical, industry, and journalistic experience, most recently more than 13 years of progressive content development at Cognex Corporation, where his articles on machine vision were published in dozens of trade journals. Before Cognex, Lewis was a technical editor for Design News, the world's leading engineering magazine, covering automation, machine vision, and other engineering topics beginning in 1996.

B.Sc., University of Massachusetts, Lowell
