Flexibility Pushes Vision Beyond the Factory Floor

Feb. 15, 2022
We are going to see technologies moving us away from the assumptions of a well-illuminated, well-presented scene captured in a frame. It is this flexibility that will push vision out of the factory and into the real world.

Predicting technological trends in any industry is hard, and it is particularly hard in machine vision, which depends on progress in so many different fields: optics, lighting, computer science, cameras, and a host of peripheral technologies ranging from cabling and wireless to encoders. Given this, it sometimes helps to focus on end-user needs and on the test cases whose solution would constitute a fundamental breakthrough.

Flexibility

These test cases cluster around a theme that generally falls under the term flexibility. Right now, vision systems fail when some parameter or environmental condition changes in a way the system cannot dynamically adapt to. So, one factor determining which emerging technologies get adopted will be their ability to deliver this flexibility.

Another driver is that businesses like simple solutions with strong business cases. If we accept the premise that vision is fundamentally like human eyes and will solve problems outside the factory floor the same way it solves them on the factory floor—catching mistakes, preventing accidents, and tracking items—then that points us toward certain problems in the marketplace that emerging technologies will likely evolve to address.

Note that I’ve left out guidance, which is one of the main things people use their eyes for. Guidance brings in the problem of robot dexterity, which still lags far behind the remarkable capability of the human hand, so those applications remain difficult even on the factory floor, with its constrained environments and budgets of $100K+. Guidance applications out in the real world are tougher still. Fully flexible automated guidance is probably a long way off, and a simple business case for it is even further.

Tracking and watching for the mistake or the accident, the space where factory applications are more mature, is where errors are costly. That is the space that will get investment. And the greater the frequency of costly errors and the larger the addressable market, the greater the drive for technological adoption.

Outside the Factory

Outside the factory floor, it can be difficult to find the volume to justify the investment, so solutions have to be cheaper, quicker to install, and closer to maintenance-free. A factory can make $1 million worth of product in a few hours, so a solution that prevents a few hours of scrap can cost $200,000, and it’s likely worth spending six months installing it. A restaurant may bring in $10,000 in a few hours during busy times; for this market, a day of labor to install the solution may sink the ROI. Similarly, bringing power to the system for an agricultural application may also sink the ROI. New hidden maintenance costs (e.g., keeping a lens clean) can be problematic as well. Solutions at these price points need unit volumes, and at these unit volumes, the solutions cannot require much end-user maintenance or involvement. Too much end-user involvement gets referred to as “babysitting” the solution, which can be an accurate description of what’s involved. This brings us back to more flexibility for the solution to handle its environment.
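The break-even arithmetic above can be sketched in a few lines. All figures beyond the article’s $1M/$200K/$10K illustration are hypothetical placeholders:

```python
# Back-of-envelope install economics (all numbers illustrative). A solution
# pays off quickly when its total cost is small relative to the losses it
# prevents in one period of operation.

def payback_periods(system_cost, install_cost, savings_per_period):
    """How many periods of prevented losses recoup the total cost."""
    return (system_cost + install_cost) / savings_per_period

# Factory: a $200K system plus a long install, against a few hours of
# prevented scrap worth $1M -- recouped in a fraction of one incident.
print(payback_periods(200_000, 100_000, 1_000_000))  # 0.3

# Restaurant: even a cheap $2K system with a $1K install, against much
# smaller stakes per period (hypothetical $500 of prevented loss).
print(payback_periods(2_000, 1_000, 500))  # 6.0
```

The asymmetry is the point: at small revenue scales, install labor and maintenance dominate the equation, which is why those costs, not sensor performance, often decide viability outside the factory.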

New Technology Drivers

Instead of enhancing current technologies, the need for flexibility is driving us toward new technologies. As a company that deploys these solutions, we’ve learned that while a new processor is great, a good eyelid, ideally with eyelashes, would be game-changing: it would keep the lens clean in real-world environmental conditions without compressed air. A traditional lens is always a droplet of water or a speck of dust away from needing a person to clean it. With that solved, we could just install systems, and the end users could use them. That’s not to say a new processor doesn’t matter, but what would really be game-changing is a processor with power draw low enough to run the system from a solar cell. That enables faster installs and means solutions are not contingent on a reliable power hookup.

What’s been very promising on this front is the large sensor makers pivoting to address the autonomous vehicle market. Previously, machine vision relied on the DSLR and broadcast markets for its higher-end sensors and on the security and cell phone markets for the other enabling technologies. In these markets, the end user “babysits” the technology, either by carrying it around and keeping it clean or by being the security guard who watches the security system.

The car market is moving away from this scenario because the end user expects the car to put up with the outdoors year-round. Further requirements include operating in real time in highly variable conditions without anyone setting parameters. Most directly, cameras in a car must handle highly variable light without changing settings like exposure. This is leading to innovations like the newer pixel architectures for 120-decibel single-frame HDR.

Similarly, the very low-light single-photon avalanche diode (SPAD) sensors now being released help cameras handle environments where it’s unrealistic to fill the field of view with light all the time. Event-based sensors also offer a form of continuous, per-pixel readout of brightness changes, doing away with the notion of a frame. In more traditional camera markets, an end user pointed the camera, waited for the exposure to adjust, and then captured the frame. The security market started a migration away from this. The car market will accelerate that shift, as no end user will want to adjust their cameras before driving off.
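As a rough illustration of the frameless idea (my simplification, not any particular vendor’s pipeline), an event-based pixel can be modeled as emitting an event each time its log-intensity drifts by more than a contrast threshold since the last event:

```python
import math

# Toy model of one event-camera pixel: instead of sampling a frame, emit
# (time, polarity) events whenever log-intensity moves a threshold away
# from the level at the last event. Static scenes produce no data at all.
def events_for_pixel(intensities, threshold=0.2):
    events = []
    ref = math.log(intensities[0])  # log level at the last emitted event
    for t, i in enumerate(intensities[1:], start=1):
        delta = math.log(i) - ref
        while abs(delta) >= threshold:
            polarity = 1 if delta > 0 else -1  # ON (+1) or OFF (-1) event
            events.append((t, polarity))
            ref += polarity * threshold
            delta = math.log(i) - ref
    return events

# A brightening step yields ON events, a darkening step yields OFF events,
# and the unchanged samples in between yield nothing.
print(events_for_pixel([100, 100, 150, 150, 80]))
```

The contrast-threshold-on-log-intensity behavior is the standard abstraction for these sensors; the readout is sparse and asynchronous, which is exactly what removes the well-lit, well-framed-scene assumption.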

From the perspective of the integrator trying to solve end users’ problems, these innovations give us the flexibility to make the technology work better in the real world. We are going to see technologies moving us away from the assumptions of a well-illuminated, well-presented scene captured in a frame. It is this flexibility that will push vision out of the factory and into the real world. To get there, someone will also need to build an “eyelid,” or something else that lets a camera system pass the test cases an eyelid solves.
