On Vision: Did We Celebrate Autonomous Quality Inspection Too Soon?

Feb. 3, 2023

In many industries, advocates of artificial intelligence (AI) and autonomous technology are quick to promise sweeping transformation and fully autonomous solutions. However, these optimists often promise more than they can deliver and soon find that the engineering challenges are greater than they first realized.

With the sudden influx of claims about AI and autonomous solutions, quality managers have been left with bold promises of fully autonomous quality inspection—the expectation of a system that can operate flawlessly without an operator guiding its setup at every step of the installation.

The Complexity Minefield

The challenge of introducing autonomous machine vision is best examined through the lens of a given use case. Let’s use a driving analogy. You can easily have a vehicle that drives without a human driver in control if the challenge is to slowly drive along a straight line in a closed environment. Autonomous construction vehicles that move material from one area of a quarry to another come to mind. If, however, you are trying to develop a passenger vehicle that can navigate a complex urban environment, the task is completely different. The same applies to machine vision technology: whether or not we can easily automate something depends largely on the use case and the level of complexity it involves.

There are two areas where we might expect different levels of complexity: image capture and defect inspection. Image capture complexity refers to the difficulty of capturing an image, which is influenced by factors such as the lighting profile, camera setup and requirements, and integration effort. This turned out to be much trickier than many of us anticipated a few years ago. For some applications, where the complexity of capturing an image is low, a straightforward smart sensor with a simple camera and plain white lighting is enough. In other instances, where a special illumination profile other than white light or an HDR camera is required, we might classify the use case as high on the scale of image acquisition complexity.
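
As a rough illustration of how these acquisition factors might be weighed, the sketch below scores a setup's image-acquisition complexity. The field names, weights, and thresholds are illustrative assumptions, not an established metric.

```python
from dataclasses import dataclass

# Hypothetical sketch: scoring image-acquisition complexity from setup parameters.
# The fields and weights are illustrative assumptions, not a standard metric.

@dataclass
class AcquisitionSetup:
    illumination: str        # e.g. "white", "infrared", "structured"
    hdr_required: bool       # multiple exposures fused into one image
    custom_optics: bool      # non-standard lenses, filters, polarizers
    integration_effort: int  # 1 (plug-and-play) .. 3 (custom mechanics/triggering)

def acquisition_complexity(setup: AcquisitionSetup) -> str:
    score = 0
    score += 0 if setup.illumination == "white" else 2
    score += 2 if setup.hdr_required else 0
    score += 1 if setup.custom_optics else 0
    score += setup.integration_effort - 1
    return "low" if score <= 1 else "medium" if score <= 3 else "high"

print(acquisition_complexity(AcquisitionSetup("white", False, False, 1)))   # low
print(acquisition_complexity(AcquisitionSetup("infrared", True, True, 3)))  # high
```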

The second major challenge is defect inspection complexity. Some defects are much harder to spot or categorize than others. For a simple use case, where the complexity of defect inspection can be classified as low, a defect might be caught by, for example, a presence check or a component-polarity check. In contrast, detecting a minor scratch on a metal surface after grinding is not an easy task and can be classified as highly complex in comparison. Another example of high defect-inspection complexity is a use case that requires detecting each unique defect type (class) and applying special criteria, such as size or tolerance.
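
To make the class-and-tolerance idea concrete, here is a minimal sketch of applying per-class acceptance criteria to detected defects. The defect classes, tolerances, and data layout are hypothetical and for illustration only.

```python
# Hypothetical sketch: applying per-class acceptance criteria to detected defects.
# Class names and tolerances are illustrative assumptions, not real inspection limits.

TOLERANCES_MM = {
    "scratch": 0.5,   # scratches longer than 0.5 mm are rejected
    "dent":    1.0,
    "stain":   2.0,   # smaller stains count as "permissible defects"
}

def classify_part(defects: list[dict]) -> str:
    """defects: [{'class': 'scratch', 'size_mm': 0.8}, ...]"""
    for d in defects:
        limit = TOLERANCES_MM.get(d["class"])
        if limit is None or d["size_mm"] > limit:
            return "reject"   # unknown class or out-of-tolerance defect
    return "accept"           # no defects, or only permissible ones

print(classify_part([{"class": "stain", "size_mm": 1.5}]))    # accept
print(classify_part([{"class": "scratch", "size_mm": 0.8}]))  # reject
```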

How Many Images Do You Need?

Autonomous quality inspection solutions that promise to point and shoot, with a camera and integrated software trained on just a small sample of around 50 good images, are very ambitious. However, this might be achievable in use cases where both acquisition and defect inspection complexity are low. In a simple acquisition setup with consistent product texture and deterministic defects, it can be realistic.
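
One way a small sample of only good images can work, assuming stable acquisition and well-aligned parts, is a simple golden-template comparison. The sketch below is an illustrative assumption of that approach, not a description of any vendor's product, and the thresholds are arbitrary.

```python
import numpy as np

# Hypothetical sketch: a "golden template" check trained only on good images.
# Works only when acquisition is stable and parts are well aligned; the
# 4-standard-deviation and 50-pixel thresholds are illustrative assumptions.

def fit_template(good_images: np.ndarray):
    """good_images: (N, H, W) float array of aligned, normalized good samples."""
    mean = good_images.mean(axis=0)
    std = good_images.std(axis=0) + 1e-6    # avoid division by zero
    return mean, std

def is_anomalous(image: np.ndarray, mean, std, z_thresh=4.0, area_thresh=50) -> bool:
    z = np.abs(image - mean) / std          # per-pixel deviation from the template
    return int((z > z_thresh).sum()) > area_thresh

good = np.random.rand(50, 64, 64)           # stand-in for ~50 captured good images
mean, std = fit_template(good)
print(is_anomalous(good[0], mean, std))     # expected False for a good sample
```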

However, in cases with many surface nuances, highly complex defects with specific tolerances, and “permissible defects” that are accepted, the only way to train a model is by tagging large amounts of data. Expecting autonomous machine vision to self-learn such sophisticated use cases is simply not going to work.

Although there is no escaping the need for large numbers of images in complex use cases, greater automation still holds the key to improving quality inspection. A hybrid approach uses sophisticated AI models to automate the learning process while allowing the user to provide feedback when needed to steer it in the right direction. The challenge is to minimize the amount of guidance, not to eliminate it. The premise is that the user holds a high level of knowledge about the target product and uses it to guide the learning process, with automation handling the rest.
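
In practice, this hybrid loop resembles a human-in-the-loop or active-learning setup: the model labels what it is confident about and routes uncertain images to the operator. The sketch below illustrates that routing; the function names and the 0.9 confidence threshold are assumptions made for the example.

```python
# Hypothetical sketch of the hybrid approach: the model labels what it is
# confident about and routes low-confidence images to the operator for feedback.

def hybrid_labeling(images, model_predict, ask_operator, confidence_threshold=0.9):
    """model_predict(img) -> (label, confidence); ask_operator(img) -> label."""
    labeled, sent_to_operator = [], 0
    for img in images:
        label, confidence = model_predict(img)
        if confidence < confidence_threshold:
            label = ask_operator(img)   # operator guidance only where needed
            sent_to_operator += 1
        labeled.append((img, label))
    # retraining on `labeled` would close the loop; omitted here for brevity
    return labeled, sent_to_operator

# Example with stub functions standing in for a real model and operator UI:
preds = iter([("ok", 0.97), ("scratch", 0.55), ("ok", 0.92)])
labeled, asked = hybrid_labeling(
    images=["img_a", "img_b", "img_c"],
    model_predict=lambda img: next(preds),
    ask_operator=lambda img: "scratch",   # operator confirms the defect
)
print(asked)   # 1 image needed operator feedback
```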

About the Author

Zohar Kantor

Zohar Kantor is Vice President of Sales for Lean.AI (Tel Aviv, Israel).
