MACHINE-VISION SOFTWARE: Software targets high-speed pattern-matching applications

Feb. 1, 2010
One of the most important functions of any machine-vision system is pattern matching. In vision-guided robotics, for example, pattern-matching algorithms are widely used in pick-and-place systems where camera-guided robots need to pick and then place randomly oriented parts. Indeed, so important is pattern matching that specific algorithms to perform the task have been the subject of multiple lawsuits from software manufacturers.

For years, the most popular method of performing pattern matching was normalized grayscale correlation (NGC). NGC returns a score indicating the confidence of a match between a trained template and an unknown object, but its accuracy degrades whenever the object is scaled, rotated, or skewed, or when lighting varies.
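In outline, the NGC scoring step computes a normalized cross-correlation between a trained template and an equally sized image window. The following minimal NumPy sketch is our own illustration, with illustrative names, and does not reflect any particular vendor's implementation:

```python
import numpy as np

def ncc_score(template: np.ndarray, window: np.ndarray) -> float:
    """Normalized cross-correlation between a template and an
    equally sized image window; returns a score in [-1, 1],
    where 1 indicates a perfect match."""
    t = template.astype(np.float64) - template.mean()
    w = window.astype(np.float64) - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    if denom == 0.0:
        return 0.0  # flat region: no meaningful correlation
    return float((t * w).sum() / denom)
```

Because both inputs are mean-subtracted and normalized, the score tolerates uniform brightness and contrast changes, but nothing in the formulation compensates for rotation, scale, or skew, which is why the score collapses under those distortions.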

Because of these limitations, many software vendors have employed geometric pattern-matching techniques in their latest generation of software. Rather than correlating a grayscale template pixel by pixel, these methods extract edge features from objects, compute their gradient direction and position, and build a database of vectors and angles that describes the object.

When the reference object is then compared with an unknown object, the two descriptions are matched to find the best fit. The result is a pattern-matching algorithm that is, to some degree, invariant to translation, rotation, scaling, and skew.
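A simplified illustration of the model-building step, again our own sketch rather than any vendor's code, extracts edge points with a Sobel gradient and stores their offsets from the model centroid together with their gradient angles:

```python
import numpy as np
from scipy import ndimage

def edge_model(image: np.ndarray, mag_thresh: float = 50.0):
    """Build a simple geometric model of an object: for each strong
    edge pixel, record its offset from the model centroid and the
    gradient angle at that pixel."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > mag_thresh)
    angles = np.arctan2(gy[ys, xs], gx[ys, xs])
    cx, cy = xs.mean(), ys.mean()
    # Offsets from the centroid make the model translation-invariant;
    # rotation and scale are handled by transforming these vectors
    # during matching.
    offsets = np.column_stack([xs - cx, ys - cy])
    return offsets, angles
```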

“Despite proving useful in applications where the edge features of an object are fairly well defined,” says Dr. Pedram Azad, president of Keyetech (Karlsruhe, Germany; www.keyetech.de), “such methods often fail when objects lack contours and edges.” In those cases, texture-based pattern matching can be used either as a standalone method or in conjunction with edge-based pattern matching to recognize the identity and compute the full pose of an object.

Texture-based recognition

In developing Keyetech’s texture-based recognition software, Azad leveraged his open-source computer vision library called the Integrating Vision Toolkit (IVT; ivt.sourceforge.net), originally developed at the Karlsruhe Institute of Technology (Karlsruhe, Germany; www.kit.edu), which is now maintained in cooperation with Keyetech. The toolkit is a computer vision library suitable for various applications in research and industry, in particular for robotics and automation.

Invariant to rotation, translation, occlusion, scaling, and perspective skew, Keyetech’s texture-based pattern-matching software computes feature vectors that indicate position, scale, and orientation of key points within an image.

With the so-called Keyetech Performance Primitives (KPP), Keyetech offers optimized implementations of time-critical image-processing algorithms, exploiting MMX/SSE technology and multicore CPUs as well as GPU processing. The KPP routines can be used either automatically via the IVT or executed explicitly via a general interface. Building on the IVT, the company also offers geometric pattern-matching software (Keyetech Edge-based Recognizer) and, for detection of objects lacking distinctive edge features, its Keyetech Texture-based Recognizer.

“While many different methods have been published for texture-based object recognition,” says Azad, “perhaps the most groundbreaking is the work originally performed by David G. Lowe of the Computer Science Department at the University of British Columbia” (Vancouver, BC, Canada; www.cs.ubc.ca). In his definitive paper entitled “Distinctive Image Features from Scale-Invariant Keypoints,” Lowe discusses what is now commonly known as the scale-invariant feature transform (SIFT).

In this method, specific locations within an image that are invariant to scale changes are found by analyzing a so-called scale space representation. To transform the original image into scale space, it is convolved with several Gaussian kernels for subsequent computation of difference of Gaussian images. Then, local maxima and minima in scale space are detected by comparing each point with all its neighbors. For each detected point, the local neighborhood is represented by a compact descriptor specific to the SIFT algorithm.
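As a rough illustration of the scale-space step, the sketch below, our own simplification covering a single octave with hand-picked sigma values, blurs the image at successive scales, subtracts neighboring blurred images to form difference-of-Gaussian layers, and keeps points that are extrema over their 3 × 3 × 3 scale-space neighborhood:

```python
import numpy as np
from scipy import ndimage

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.1), thresh=3.0):
    """Difference-of-Gaussian extrema detection for one octave:
    blur at successive scales, subtract adjacent blurred images,
    then keep points that are maxima or minima over their 3x3x3
    neighborhood across space and scale."""
    img = image.astype(np.float64)
    blurred = [ndimage.gaussian_filter(img, s) for s in sigmas]
    dog = np.stack([b1 - b0 for b0, b1 in zip(blurred, blurred[1:])])
    # A voxel equal to the max (or min) of its 3x3x3 neighborhood
    # is a local extremum in scale space.
    maxf = ndimage.maximum_filter(dog, size=3)
    minf = ndimage.minimum_filter(dog, size=3)
    extrema = ((dog == maxf) | (dog == minf)) & (np.abs(dog) > thresh)
    scale_idx, ys, xs = np.nonzero(extrema)
    return list(zip(xs, ys, scale_idx))
```

Even this cut-down version makes the cost visible: every octave requires several full-image Gaussian convolutions plus a neighborhood comparison at every pixel and scale, which is the expense Azad's approach avoids.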

“Such a scale space analysis,” says Azad, “is a very time-consuming process, which is not suitable for high-speed object recognition and pose estimation.” In developing Keyetech’s texture-based recognition software, Azad therefore replaced the scale space analysis with a fast corner detector and an efficient multiscale extension, as described in his paper “Combining Harris Interest Points and the SIFT Descriptor for Fast Scale-Invariant Object Recognition” and in his book Visual Perception for Manipulation and Imitation in Humanoid Robots (Springer, Berlin, Germany; November 2009, Vol. 4).
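The Harris detector that Azad combines with the SIFT descriptor computes a corner response from the local image structure tensor. The sketch below is our own minimal rendering of the standard Harris response, not code from Keyetech or the IVT:

```python
import numpy as np
from scipy import ndimage

def harris_corners(image, sigma=1.0, k=0.04, thresh=1e6):
    """Standard Harris corner response R = det(M) - k * trace(M)^2,
    where M is the Gaussian-smoothed structure tensor built from
    image gradients. Returns (x, y) locations of response peaks."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    ixx = ndimage.gaussian_filter(gx * gx, sigma)
    iyy = ndimage.gaussian_filter(gy * gy, sigma)
    ixy = ndimage.gaussian_filter(gx * gy, sigma)
    r = (ixx * iyy - ixy ** 2) - k * (ixx + iyy) ** 2
    # Non-maximum suppression: keep only local peaks above threshold.
    peaks = (r == ndimage.maximum_filter(r, size=5)) & (r > thresh)
    ys, xs = np.nonzero(peaks)
    return list(zip(xs, ys))
```

A single pass of gradient filtering and smoothing is far cheaper than building a full Gaussian pyramid, which is the source of the speedup; the multiscale extension then recovers scale invariance at selected levels rather than over a dense scale space.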

Feature matching

While the extraction of SIFT features—or similar features, such as Speeded Up Robust Features (SURF)—takes approximately 150–400 ms for images measuring 640 × 480 pixels, the Keyetech Texture-based Recognizer is claimed to perform feature extraction within 10–15 ms. Using the Keyetech Performance Primitives, the subsequent feature matching can be performed within less than 5 ms by exploiting GPU processing, so that the complete task of object recognition and pose estimation takes approximately 15–20 ms. CPU-only versions are also available, with typical total runtimes of approximately 30–50 ms using a heuristic search for feature matching.
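Feature matching itself typically pairs each descriptor with its nearest neighbor among the reference descriptors, accepting a pair only when it passes the distance-ratio test proposed in Lowe's paper. The following brute-force CPU sketch is our own unoptimized illustration; Keyetech's GPU and heuristic-search implementations will differ:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbor matching with Lowe's ratio test:
    accept a match only if the best distance is clearly smaller than
    the second best. desc_a, desc_b are (N, D) float arrays with at
    least two rows in desc_b."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test rejects ambiguous correspondences, so the surviving matches are reliable enough for the subsequent pose-estimation step.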

To promote the use of the company’s performance primitives, geometric pattern-matching, and texture-based recognition software, Keyetech is offering free demonstration packages in the form of executables that can be downloaded from the company’s web site. After evaluation, OEMs can purchase each individual software package for €250 per single license.
