Towards intelligent imaging

Feb. 1, 2015

Product differentiation has always been a key factor for OEM manufacturers of machine vision components such as image sensors, cameras, software, lighting, or cabling products. With the advent of low-cost, high-speed microprocessors and memory devices, for example, many camera manufacturers developed "smart cameras" that off-load image processing tasks from the host computer, thus lowering system costs.

Today, this trend is continuing with lighting controller suppliers such as Gardasoft introducing devices that can be embedded into other manufacturers' LED lighting components. Such a controller can recognize the type of light being used and its electrical parameters and, better still, present camera triggering, exposure time, and lighting control functions on a single display. The result: it is now easier for developers to diagnose camera and lighting timing issues.

For programmers, the introduction of the GenICam standard has meant that, no matter which camera-to-computer interface is used, camera control can now be achieved through a generic programming interface. While such developments could, in some sense, allow these products to be termed "smart," and have indeed alleviated many of the problems associated with developing machine vision systems, intelligent systems are still in their infancy.
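To make that concrete, here is a minimal sketch of generic camera control through GenICam in Python. The open-source harvesters package, the GenTL producer path, the device index, and the feature values are all assumptions added for illustration (harvesters method names have also shifted between releases; this follows the 1.x API). Only the feature names themselves come from the GenICam SFNC, which is what lets the same code drive any compliant camera regardless of the interface.

```python
# A minimal sketch, not from the column: generic camera control via GenICam,
# using the open-source "harvesters" package as a GenTL consumer.
# The producer path, device index, and values below are placeholders.
from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/your/GenTL_producer.cti')  # vendor-supplied GenTL producer
h.update()                                      # enumerate attached cameras

ia = h.create_image_acquirer(0)                 # first camera found (harvesters 1.x API)
nm = ia.remote_device.node_map                  # GenApi node map exposed by the camera

# SFNC-standardized feature names work across vendors and interfaces:
nm.ExposureTime.value = 5000.0                  # microseconds
nm.TriggerMode.value = 'On'
nm.TriggerSource.value = 'Line1'                # hardware trigger, e.g. from a lighting controller

ia.start_acquisition()
with ia.fetch_buffer() as buffer:               # grab one triggered frame
    frame = buffer.payload.components[0]
    print(frame.width, frame.height)
ia.stop_acquisition()

ia.destroy()
h.reset()
```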

Although systems integrators can use smart cameras, lighting components and APIs to develop their systems, the task of choosing the correct type of OEM components is of paramount importance. Once such components have been chosen, however, the systems integrator is still faced with the task of optimizing the contrast of an object or part to extract the correct features. This, of course, will depend on numerous factors such as lighting intensity, camera exposure time and the ability of the software to discern specific features.

Today, a number of software companies provide easy-to-use tools to perform specific measurements of parts. However, once these tools are set up, no "middleware" exists to automatically test these measurements under different lighting conditions and camera exposure times to ensure that the best-contrast image and the most accurate results are obtained.

Just as intelligence can be added to lighting, cameras, cables and other vision components, adding intelligence to vision software would relieve the developer of testing numerous illumination and camera parameters. By automatically cycling through a number of different combinations, such middleware could save developers this task, reducing system development times and lowering the cost of machine vision systems.
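To illustrate what such middleware might look like, the sketch below sweeps combinations of lighting intensity and exposure time, scores each captured image by RMS contrast, and returns the best settings. The grab_image, set_exposure_us and set_lighting_intensity callables are hypothetical stand-ins for whatever camera and lighting-controller APIs are in use; nothing here describes an existing product.

```python
import itertools
import numpy as np

def rms_contrast(image):
    """Score an 8-bit grayscale image by its RMS contrast (std. dev. of normalized pixels)."""
    pixels = image.astype(np.float64) / 255.0
    return pixels.std()

def optimize_acquisition(grab_image, set_exposure_us, set_lighting_intensity,
                         exposures_us, intensities):
    """Sweep exposure/lighting combinations and return the best-contrast settings.

    grab_image, set_exposure_us and set_lighting_intensity are hypothetical
    callables wrapping the camera and lighting controller; only the sweep
    and scoring logic is shown here.
    """
    best = None
    for exposure, intensity in itertools.product(exposures_us, intensities):
        set_exposure_us(exposure)
        set_lighting_intensity(intensity)
        score = rms_contrast(grab_image())
        if best is None or score > best[0]:
            best = (score, exposure, intensity)
    return best  # (contrast score, exposure in microseconds, lighting intensity)
```

A production version would also rerun the configured measurement tools at each setting and fold their reported accuracy into the score, rather than relying on image contrast alone.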

Andy Wilson, Editor in Chief
About the Author

Andy Wilson | Founding Editor

Founding editor of Vision Systems Design. Industry authority and author of thousands of technical articles on image processing, machine vision, and computer science.

B.Sc., Warwick University

Tel: 603-891-9115
Fax: 603-891-9297
