MACHINE-VISION SOFTWARE: Consortium rises to challenge of benchmarking software

June 1, 2012
Despite the numerous machine-vision software packages now available, there is still no means to properly benchmark their performance.

In 2008, Vision Systems Design asked software vendors for their ideas of what features should be included in such a benchmark (see "Setting the Standard," Vision Systems Design, November 2008).

While some questioned the need for a benchmark, others pointed out the difficulties of implementing such an idea. Taking up the challenge, Wolfgang Eckstein, managing director of MVTec Software (Munich, Germany), proposed a number of benchmarks, each consisting of a set of image sequences in which the influence of a "defect" was continuously increased. The quality of a specific software package would then be judged by the number of images it could process correctly (see "Toward a Machine-Vision Benchmark," Vision Systems Design, May 2009).

Now, the Datalogic (Bologna, Italy) and System (Fiorano Modenese, Italy; www.system-group.it) groups have partnered with the University of Bologna (Bologna, Italy; www.deis.unibo.it) and the T3LAB Consortium (Bologna, Italy; www.t3lab.it) to produce a benchmark that evaluates the capabilities of commercial machine-vision software libraries as part of the VIALAB research project (www.progetti.t3lab.it/vialab/).

One of the benchmarks being proposed by the VIALAB research project is that of camera calibration using a 2-D image calibration target.

At the VISION 2011 trade show held in Stuttgart, the team, headed by VIALAB manager Claudio Salati, met with vendors of machine-vision libraries to discuss the principles, procedures, and goals of the proposed software benchmark (see "Project seeks to institute a benchmark for machine vision").

In April 2012, VIALAB released a draft definition of the first set of image-processing tasks required by the proposed benchmark (http://bit.ly/HCJnEW). These consist of five tasks, or "challenges": one is focused on 2-D camera calibration, while the other four concern object detection and localization in image space.

To benchmark the quality of camera calibration algorithms, machine-vision software vendors will be provided with sets of images of specific 2-D calibration targets; an additional set of verification images of a checkerboard will be used for assessment. For all of these images, the real-world positions of key points have been independently measured. The accuracy of each algorithm is evaluated by estimating forward- and back-projection errors.
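
The draft defines the exact scoring procedure; purely as a minimal sketch of what a forward-projection error measurement can look like, the following function computes an RMS error with OpenCV's calib3d module. The function and variable names are illustrative assumptions, not part of the VIALAB API.

    // Minimal sketch: RMS forward-projection (world-to-image) error for a
    // calibration result. All names are illustrative, not the VIALAB API.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    double forwardProjectionRms(const std::vector<cv::Point3f>& worldPts,
                                const std::vector<cv::Point2f>& imagePts,
                                const cv::Mat& K, const cv::Mat& distCoeffs,
                                const cv::Mat& rvec, const cv::Mat& tvec)
    {
        // Project the independently measured world points into the image
        // using the calibration under test.
        std::vector<cv::Point2f> projected;
        cv::projectPoints(worldPts, rvec, tvec, K, distCoeffs, projected);

        // Compare against the measured image positions of the key points.
        double sumSq = 0.0;
        for (size_t i = 0; i < imagePts.size(); ++i) {
            double d = cv::norm(imagePts[i] - projected[i]);
            sumSq += d * d;
        }
        return std::sqrt(sumSq / imagePts.size());
    }

The back-projection error runs in the opposite direction, mapping image points back onto the calibration plane and comparing them against the independently measured real-world coordinates.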

Similarly, four other challenges have been created to benchmark procedures for object detection and localization, from both orthogonal and perspective camera views, and for textured and nontextured objects.
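
The draft specifies how each detection result must be reported; as a hypothetical illustration of one perspective-localization step only, OpenCV's solvePnP can recover an object's pose from matched model and image points (the function name and signature below are assumptions for this sketch, not taken from the draft).

    // Illustrative sketch: recover the pose of an object with known 3-D
    // model points under a perspective view. Not part of the VIALAB API.
    #include <opencv2/calib3d.hpp>
    #include <vector>

    bool estimatePose(const std::vector<cv::Point3f>& modelPts,  // object frame
                      const std::vector<cv::Point2f>& imagePts,  // matched 2-D points
                      const cv::Mat& K, const cv::Mat& distCoeffs,
                      cv::Mat& rvec, cv::Mat& tvec)              // pose output
    {
        // Rotation and translation of the object in the camera frame.
        return cv::solvePnP(modelPts, imagePts, K, distCoeffs, rvec, tvec);
    }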

Solutions are tested with different sets of images, both real and synthetically generated, produced under a variety of noise conditions. Evaluation is carried out by estimating the accuracy and robustness of each procedure and by measuring execution speed using timestamps.
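
As an indication of how timestamp-based speed measurement can be paired with an accuracy check, the hypothetical harness below times a single detection call and records its localization error. The detector callable and all names are assumptions for this sketch; the framework's real instrumentation may differ.

    // Hypothetical measurement harness: wall-clock time and pixel error
    // for one detection call against ground truth.
    #include <chrono>
    #include <opencv2/core.hpp>

    template <typename Detector>
    double timeDetectionMs(Detector&& detect, const cv::Mat& image,
                           const cv::Point2f& groundTruth, double& errorPx)
    {
        auto t0 = std::chrono::steady_clock::now();
        cv::Point2f found = detect(image);          // library call under test
        auto t1 = std::chrono::steady_clock::now();

        errorPx = cv::norm(found - groundTruth);    // localization accuracy
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }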

To allow software developers to implement and test the software required for these benchmarks, VIALAB researchers provide the source code of the benchmark execution framework, the application programming interface (API) that defines each challenge, and an example solution based on the OpenCV library. The benchmark framework was built for the Microsoft Windows environment with Microsoft Visual Studio 2010. A development dataset with associated ground truth is also provided.
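
The actual API is defined in the framework sources; as a rough sketch of what an OpenCV-based solution to a detection challenge could look like, a normalized cross-correlation detector might be wrapped as follows. The class, struct, and member names are invented for illustration and do not come from the VIALAB distribution.

    // Hypothetical example solution using OpenCV template matching; the
    // real challenge interface is defined by the VIALAB framework itself.
    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    struct Detection {
        cv::Point2f position;  // detected object centre, in pixels
        double score;          // matching confidence
    };

    class TemplateDetector {
    public:
        explicit TemplateDetector(const cv::Mat& model) : model_(model) {}

        Detection detect(const cv::Mat& image) const {
            // Normalized cross-correlation response over the whole image.
            cv::Mat response;
            cv::matchTemplate(image, model_, response, cv::TM_CCOEFF_NORMED);

            // Best match location and score.
            double minVal, maxVal;
            cv::Point minLoc, maxLoc;
            cv::minMaxLoc(response, &minVal, &maxVal, &minLoc, &maxLoc);

            // Report the model centre rather than the top-left match corner.
            Detection d;
            d.position = cv::Point2f(maxLoc.x + model_.cols / 2.0f,
                                     maxLoc.y + model_.rows / 2.0f);
            d.score = maxVal;
            return d;
        }

    private:
        cv::Mat model_;
    };

A real entry would replace the matchTemplate call with the vendor library's own detection routine while keeping to the challenge API, so that the framework can score all libraries identically.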

At present, the proposed benchmarks can be downloaded from the VIALAB web site. According to Salati, researchers have finalized the datasets that will be used in each of the benchmark challenges, and final versions of the benchmark descriptions are now available.

For software libraries whose providers do not wish to participate in the project, VIALAB's developers will run the benchmarks independently. Before disclosure, the results collected for each library will be discussed with the corresponding vendor. Final results are expected to be published by the end of November 2012.
