Demands intensify for industry benchmarks

Recently, an avid Vision Systems Design reader, a vendor of image-processing equipment, called to talk about what he considered a major machine-vision industry problem. With all the PCI-based frame grabbers, image processors, and display controllers on the market, he asked, how can developers expect to judge the performance of their products and systems without benchmarks? This is a very good question and one that continually plagues machine-vision and imaging vendors and users alike.

Mar 1st, 1998


Andy Wilson Editor at Large

andyw@pennwell.com


Vendors of OEM image-processing equipment should have a standard image benchmark against which their equipment could be evaluated. Faced with no industry-accepted standards, many potential users of machine-vision systems often send sample parts to machine-vision vendors. They rely on the vendors' expertise as to whether an imaging inspection can be performed at all, and at what speed and price.

Like most aspects of machine-vision technology, developing what may appear to be a simple benchmark is surprisingly complex. Certainly, many image-processing board vendors do supply benchmarks. Unfortunately, these tests often consist of how fast a processor can perform a 3 × 3 convolution, a 16-bit fast Fourier transform, a histogram equalization, or a polynomial warp.
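A low-level benchmark of the kind described above might be sketched as follows. This is an illustrative modern reconstruction, not a vendor test: the Gaussian kernel, image size, and timing method are assumptions, and real board benchmarks of the era ran in dedicated hardware rather than on a host CPU.

```python
import time
import numpy as np

# Illustrative low-level benchmark: time a 3 x 3 convolution on a
# 512 x 512 8-bit image. Kernel choice (Gaussian blur) is an assumption.
image = np.random.randint(0, 256, (512, 512)).astype(np.float64)
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float64) / 16.0

def convolve3x3(img, k):
    # Pad the image by one pixel, then sum the nine shifted, weighted copies.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return out

start = time.perf_counter()
result = convolve3x3(image, kernel)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"3x3 convolution on 512x512: {elapsed_ms:.2f} ms")
```

As the column argues, a number like this says how fast the arithmetic runs, but nothing about whether the board can solve an inspection task.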

Although such benchmarks are useful in comparing how products process certain image-processing functions, their only real value seems to be in helping marketing managers influence end users. The reason for this is that these benchmarks do little to solve the problem of grading a machine-vision system in a real-world application.

At the end of the day, all that most end users want to know, for example, is how fast numbers can be identified and read on pharmaceutical bottles, how large a hole is in a piece of metal, or whether the red color remains consistent on a production run of 10,000 items. Telling end users that your board can perform a complex Fourier transform on a 512 × 512 image in 10 ms generally does not relate to their particular requirements.

Industry standards needed

What is needed is an industry-standard image-processing benchmark, similar to a television test pattern or chart, that can provide an overall figure of board performance such as an "Image Integer" ranking or an "Image Floating-Point-Operation" rating. Such tests could evaluate real images for monochrome and color test-pattern features.

Rather than specify how fast a board or system can perform erosion, image magnification, or pixel counting, tests could be developed that determine how fast a board can distinguish colors or the number and width of lines; recognize characters; or measure certain areas. Several benchmarks could then be combined to provide the potential purchaser with an overall real-world measurement gauge that would offer a means of differentiating the performance of similar products. If only a few large image-processing hardware and software vendors were prepared to agree on the format of such benchmarks, the machine-vision industry would have a standard that is long overdue and very much in demand.
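The proposal above, several task-level benchmarks combined into one overall gauge, could be sketched along the following lines. The task names, timings, and weights here are entirely hypothetical; the point is only the shape of such a composite score.

```python
# Hypothetical composite "real-world" score combining task-level benchmark
# timings, as the column proposes. All names, weights, and times are
# illustrative assumptions, not measurements of any real product.
task_results_ms = {
    "read_characters": 42.0,     # e.g., reading numbers on a bottle label
    "measure_hole_width": 18.5,  # e.g., gauging a hole in a metal part
    "classify_color": 9.0,       # e.g., checking color consistency
}
weights = {
    "read_characters": 0.5,
    "measure_hole_width": 0.3,
    "classify_color": 0.2,
}

# Higher is better: each task contributes its throughput (tasks per
# second), scaled by a weight reflecting its importance to the application.
score = sum(w * (1000.0 / task_results_ms[t]) for t, w in weights.items())
print(f"Composite vision score: {score:.1f} weighted tasks/sec")
```

A buyer comparing two boards would then weight the tasks to match their own application mix, which is exactly the real-world differentiation the column calls for.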

Vision Systems Design would publicize these benchmarks with the aid of industry organizations and groups. We invite machine-vision vendors to send us suggestions for standards, and we will publish the best versions in a future issue.
