by Andy Wilson, editor
Many years ago, before the advent of personal computers, journalists used typewriters to compose their musings. After an article was typed, it was proofread and then sent to be typeset on a typesetting machine that produced long strips of type known as galleys.
The galleys were returned to the publishing company, where they were pasted onto boards, photographed, and the negative images stripped into pages along with negative images of any pictures that accompanied the article. These were then sent to the printer, made into forms that consisted of multiple pages, and set on a print drum for final printing.
With the introduction of the personal computer and direct-to-plate printing systems, this process has been automated, making typesetting companies, prepress houses, and film technology obsolete. But the journey from there to here was not as easy as some may think.
For a number of years the typewriter coexisted with the personal computer, as users found it difficult, if not impossible, to perform tasks such as printing labels on dot-matrix printers. With the introduction of graphical user interfaces, a plethora of software standards, and hardware products such as laser printers, these problems no longer exist in today's business world.
Unfortunately, the same cannot be said for the machine-vision industry. Like the publishing days of old, the machine-vision industry is replete with products ranging from lighting and illumination sources to cameras and software that provide single-point solutions for specific machine-vision applications.
While researching this month's article on machine-vision lighting (see "Spanning the Spectrum," June 2009, pp. 27–31), for example, I discovered a number of companies that have produced LED light sources claimed to replace older products that use halogen lamps. In their literature these manufacturers promote the products' long lifetimes, high luminous intensity, and reduced running costs.
Take one look at the spectral characteristics of these illumination systems, however, and you realize that the spectral output and linearity (light output as a function of the intensity setting) of LED light sources are far different from those of sources that use halogen lamps. Thus, microscopy users, who have for a number of years used halogen illumination to capture their images, will not obtain comparable images by simply replacing the halogen light source with one based on LED technology.
Before deploying an LED system, the user may need to optically filter the light source in an attempt to match the spectral output of the older halogen lamp. Even then, increasing the light intensity of the LED source may not provide the same results as the halogen lamp.
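To see why a simple lamp swap fails, it helps to compare the two spectral shapes directly. The following sketch is purely illustrative: it approximates the halogen lamp as a 3200 K blackbody (Planck's law) and models a white LED as a blue emitter plus a phosphor band, both hypothetical parameter choices rather than measured data. It then computes the per-wavelength filter transmission that would be needed to reshape the LED output toward the halogen curve.

```python
import math

def planck(wl_nm, temp_k=3200.0):
    """Blackbody spectral radiance (arbitrary units) at wavelength wl_nm."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    wl = wl_nm * 1e-9
    return (2 * h * c**2 / wl**5) / (math.exp(h * c / (wl * k * temp_k)) - 1)

def led_white(wl_nm):
    """Toy white-LED spectrum: 450-nm blue pump plus a broad phosphor band."""
    blue = math.exp(-((wl_nm - 450) / 15) ** 2)
    phosphor = 0.6 * math.exp(-((wl_nm - 560) / 60) ** 2)
    return blue + phosphor

wavelengths = list(range(400, 701, 10))  # visible band, 10-nm steps
hal = [planck(w) for w in wavelengths]
led = [led_white(w) for w in wavelengths]

# Normalize each spectrum to its own peak for shape comparison.
hal = [v / max(hal) for v in hal]
led = [v / max(led) for v in led]

# Filter transmission (0..1) needed to reshape the LED toward the halogen
# curve; wavelengths where the LED is already too weak are left unfiltered.
filt = [min(1.0, h / l) if l > 1e-6 else 1.0 for h, l in zip(hal, led)]
```

Running this shows the core of the problem: the hypothetical filter must strongly attenuate the LED's blue peak while passing the red end untouched, and no passive filter can add the deep-red energy the halogen lamp delivers but the LED lacks.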
System integrators face similar challenges in every other aspect of machine vision. Every year, new camera standards are introduced, each of which offers its own unique benefits. For example, while Camera Link is fast and deterministic, GigE Vision products can extend camera-to-computer distances to as much as 100 m.
Once again, the system integrator must determine which camera interface best suits the application, as well as factors such as the type of sensor used, the lens mount options, and the published specifications of the camera.
In developing new systems, of course, the problem of accommodating older technologies may not matter. If a problem can be solved using new products or technologies, then they will obviously be adopted. However, where backward compatibility with existing installed systems is important, a complete understanding of the specifications of OEM replacement products becomes essential.
Indeed, just as the typewriter and the dot-matrix printer peacefully coexisted for a number of years in every publisher's office, so too will older and newer technologies continue to coexist in the next generation of machine-vision systems on the factory floor. This prospect is likely to continue until future hardware and software standards are developed to unify the choice of components for any specific machine-vision application.