Q&A Series: Industry veteran John Merva on what has shaped machine vision

Nov. 15, 2021
As part of Vision Systems Design’s 25th anniversary, industry veteran John Merva looks back at the last 25 years in machine vision and ahead to what comes next.

Vision Systems Design celebrates its 25th anniversary in 2021. Talk about your history in the machine vision market.

Machine vision interested me before usable technology existed. As a college engineering student, I attended General Motors (GM) Institute and was a co-op student at GM. I saw a need for machine vision back in the early 1970s and was fascinated by its potential, but of course, there was no technology available to implement it. As time went on, I followed the technology, and in the early 1980s, as we began hearing stories that the capability existed, I got much more interested and investigated further. I was at Packard Electric, and I spent some time with the folks at the GM Tech Center, where they were doing a lot of work on machine vision, and also at Delco Electronics, which was one of the first GM divisions to implement vision in any serious way. We actually installed our first successful system at Packard in early 1984. It was a custom-engineered system, created in collaboration with the GM Tech Center, built on a Versabus architecture with a Tech Center-designed array processor and line scan cameras. We were making flexible printed circuits that were manufactured in web form. We used the vision system to measure the alignment of two webs and then generate correction signals back to the two axes of control in the system to ensure proper alignment. That was my baptism into machine vision.
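
For readers unfamiliar with that kind of vision-in-the-loop setup, the sketch below illustrates the basic idea: measure the offset between the two webs from the camera data and feed a proportional correction back to the two control axes. This is a hypothetical Python illustration, not the original Versabus code; the function name, units, and gain value are assumptions.

```python
def web_alignment_corrections(edge_web_a_mm: float, edge_web_b_mm: float,
                              gain: float = 0.5) -> tuple[float, float]:
    """Compute correction commands for the two control axes from measured web edges.

    edge_web_a_mm / edge_web_b_mm: lateral edge positions (mm) of each web,
    as measured from the line scan camera images. Returns one correction (mm)
    per axis that drives the misalignment toward zero.
    """
    error_mm = edge_web_a_mm - edge_web_b_mm   # misalignment between the webs
    correction = gain * error_mm               # proportional control term
    # Split the correction across the two axes, moving them in opposite directions.
    return -correction / 2.0, +correction / 2.0
```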

In 1986, I was offered a position at Itran. Having earned an MBA, I saw the opportunity in machine vision startups and was excited about it. As Director of Technical Marketing, I ran the application engineering group and provided technical support to the sales and product development teams. We then formed an installation group for pharmaceutical character verification systems, ensuring that the proper labels went on the right containers and that the date and lot codes were properly formed and legible. This met a need for installed, operational systems in pharmaceutical packaging. We installed over 100 of these systems as the pharmaceutical market responded to the FDA’s increased requirements for label verification.

When Acuity Imaging was formed by the merger of Automatix and Itran, I moved to a project management role in which I was responsible for OCV, OCR, and 2D code products. We were soon purchased by RVSI, which then purchased the International Data Matrix Corporation. I was Director of Operations in that group for a time while we integrated their operation into RVSI. We went on to acquire Northeast Robotics (NER), an LED lighting company, because it was clear that one of the biggest keys to reading direct part marks (marks made on a part with a laser or with pin stamp embossing) was unique lighting. LED lighting was the focus of NER at that point in time, so RVSI acquired them. After negotiating the acquisition, I was assigned to run the company, where we subsequently developed a complete and very successful product line. I was a coinventor on a number of patents, and we grew Nerlite from a startup to a $6 million/year business over the next four years, enabling many new applications for RVSI and Acuity Systems. That was my move into the lighting world. My colleagues Marcel Laflamme and Allen Burns and I created machine vision lighting training classes, which were offered at AIA conferences and customer seminars. I still see the basic concepts and some slides from those original presentations in use by others today.

I moved on to work at Advanced Illumination as Executive Vice President of Sales and at CCSA Lighting as Vice President of Sales and Operations. I was familiar with Gardasoft because we had been selling and using Gardasoft controllers back at NER. The primary reason was that, as a lighting company, you had to focus on lighting design and performance. Lighting control was always very important, but it was always the neglected product line in a lighting company, so we used Gardasoft to provide the solutions that required very precise pulse control and current drive. Gardasoft had all of that capability. I knew Gardasoft founder Peter Bhagat from those days, and I liked the product line and the company. When Gardasoft decided to grow its business here in North America, I joined the company to help accomplish that. As time went on, Gardasoft was sold to Optex, and Optex subsequently purchased CCSA, which led to Gardasoft being integrated into CCSA’s operations this year.

What are some of the most interesting and notable machine vision advancements you’ve seen since you entered the space?

Image analysis software based on neural nets/machine learning is clearly the most important algorithmic advancement in machine vision. Until these algorithms matured, all analysis was based on “something somewhere” not being to specification. By this, I mean that only targeted features in specific areas of a part, such as dimensions and specific flaws, could be inspected. Machine learning allows “anything anywhere” inspections, in which nonspecific differences in a part can be recognized and flagged. These algorithms target anything that might be different, a perception skill previously limited to human decision making.
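
To make the distinction concrete, here is a minimal sketch in Python with NumPy (not from the interview) contrasting a targeted “something somewhere” rule on a fixed region with an “anything anywhere” check against a reference model learned from good parts. All function names, regions, and thresholds are hypothetical.

```python
import numpy as np

# "Something somewhere": a targeted rule inspects one known feature in a fixed ROI.
def targeted_check(image, roi=(slice(100, 140), slice(200, 260)),
                   min_mean=0.40, max_mean=0.60):
    """Pass only if the mean intensity of a specific region is within spec."""
    region_mean = float(image[roi].mean())
    return min_mean <= region_mean <= max_mean

# "Anything anywhere": a simple model learned from good parts flags any deviation.
def fit_reference(good_images):
    """Learn a per-pixel mean and spread from known-good sample images."""
    stack = np.stack(good_images).astype(float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def anomaly_check(image, mean, std, z_limit=4.0):
    """Pass only if no pixel deviates strongly from the learned reference."""
    z = np.abs(image.astype(float) - mean) / std
    return bool(z.max() < z_limit)
```

A real system would use a trained neural network rather than per-pixel statistics, but the workflow (learn from good parts, then flag anything that deviates) is the same.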

It’s important to note that these algorithms can now run on affordable machine vision systems thanks to advancements in processor power and the cost reductions driven by consumer electronics such as phones and handheld devices. The machine vision industry did not have the commercial size to drive these developments, but it was able to take advantage of components developed on the financial strength of those markets.

What sort of advancements are you most looking forward to, looking toward the future?

A number of years ago, a vendor was reportedly developing a true multispectral camera chip. The chip could be tuned electronically, in real time, to change its sensitivity to different wavelengths of light. This would allow different parts of the spectrum to be analyzed very quickly and without moving physical filters in front of the chip. Real-time spectral selectivity would be a very powerful feature.

Do you have any notable Vision Systems Design-related stories to share from over the years?

I really enjoyed when Andy Wilson had a back page column, which always highlighted an important concept or situation in the machine vision market in a lighthearted way. Andy is missed by all who knew him.

Is there a trend or product in the next few years that you see as “the next big thing?”

Clearly, the big thing for the future is embedded vision: the integration of multiple components into single, small packages that function more as appliances than as machine vision systems added onto other existing systems. Just this weekend, I had a key duplicated. The salesperson used a machine that imaged the existing key, processed the image, indicated which key blank to choose, and then set up the correct profile to cut into the new key based on the image. These are the kinds of systems we will see more of in the future.

Do you have any general comments about the machine vision/imaging market?

I think machine learning is enabling a lot. At Itran, we had an AI product in the early 1990s called Merlin, which did a lot of the things that current machine learning systems do. However, we had challenges back then. For those systems to work well, they need a lot of memory that’s fast and inexpensive, and they need powerful processors. The development of computer processors and cell phone CPUs has really helped the machine vision industry. The vision industry has not been big enough by itself to justify the investments that have gone into developing those processors and memory systems and making them small and compact. However, since those processors were created for other applications, the vision industry has been able to take advantage of that hardware to build faster, more capable systems with much more memory, which can be used to refine the models in machine learning systems as their reference models become more complete.

Of course, the creation of standards has been very important. This was an initiative I championed while chair of the AIA Board, and it resulted in today’s global standards efforts such as G3, GenICam, GigE Vision, and others. In the early days, one company might take one approach to solve a need, and another company would take a different path. The market would decide which it preferred; the approaches that weren’t liked as well would fall by the wayside, and the companies that had put time and effort into them would have to abandon those features and products. Laying down standards in advance of development has allowed all the companies in the industry to “pull on their oars” in the same direction rather than working at cross purposes. It has allowed development to happen faster and more economically, and it has been good for users, who have seen product developments arrive much more quickly and at much more affordable pricing than when different companies were pursuing different solutions.
