Machine design simplifies system integration
Faced with developing machine-vision systems for high-speed area and linescan inspection, many integrators turn to OEM suppliers of off-the-shelf lighting, cameras, frame grabbers, and software. While selecting such components may be relatively easy, integrating them is difficult, often consuming so many hours of development that integration costs exceed the cost of the components themselves.
Recognizing this, LMI Technologies (Delta, BC, Canada; www.lmitechnologies.com) has developed a modular, extensible architecture known as FireSync that allows these components to be easily integrated into a scalable machine-vision system. LMI’s system consists of a number of OEM-style components including lighting, cameras, embedded sensor processors, machine-vision software, and industrial PCs that can be configured in a number of different ways.
But it is not so much the range of components offered as part of the system's building blocks as it is the architecture of the system itself that supports rapid system integration. Within this architecture lie a number of machine-vision firsts, including the use of the 2.3-Gbit/s SERDES chipset from National Semiconductor (Santa Clara, CA, USA; www.national.com), which allows images from up to four CMOS-based cameras to stream to the M200 sensor controller at speeds of up to 320 Mbytes/s. "To synchronize multiple cameras using one or more sensor controllers," says Terry Arden, chief technology officer of LMI, "a master synchronization controller provides time and encoder stamps, trigger logic, strobe outputs, I/O control, laser safety, and power distribution across these multiple sensors, allowing components and image-processing functions to be synchronized to within a microsecond."
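The idea of a master controller broadcasting time and encoder stamps so every component can tie its events to a shared timebase can be sketched as follows. This is an illustrative model only; the record fields (`time_us`, `encoder_count`) and the `align` helper are assumptions for the sketch, not LMI's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class SyncStamp:
    """Hypothetical record a master sync controller might broadcast."""
    time_us: int        # master clock time, in microseconds
    encoder_count: int  # shaft-encoder position at that instant

def align(frame_time_us: int, stamps: list[SyncStamp]) -> SyncStamp:
    """Attribute a frame's capture time to the nearest broadcast stamp."""
    return min(stamps, key=lambda s: abs(s.time_us - frame_time_us))

# A frame captured at t = 1040 µs is matched to the stamp issued at t = 1000 µs,
# so its pixel data can be associated with the encoder position at that moment.
stamps = [SyncStamp(0, 0), SyncStamp(1000, 52), SyncStamp(2000, 104)]
nearest = align(1040, stamps)
```

Because every camera, light, and DSP sees the same stamps, events on different devices can be correlated after the fact without a shared physical trigger line to each one.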
With synchronization support, FireSync timing, sequencing, and reduction algorithms embedded in the Xilinx Spartan FPGA of the M200 sensor controller support precise control over camera, light, and DSP functions. Once camera video is deserialized, image data are transferred to 256 Mbytes of host memory under control of a DM648 DSP from Texas Instruments (TI; Dallas, TX, USA; www.ti.com). “Because the DM648 features an on-chip cross-point switch, image data from multiple sources can be transferred directly to host memory,” says Arden. Any of the four image data streams can then be compressed through processing by the DSP and transferred to PC-based systems over Gigabit Ethernet for further distributed reduction.
By embedding the same processing modules within each PC and sensor controller, the distributed architecture of the FireSync system allows image-processing functions to be performed across a network of processing modules. In addition, each M200 sensor controller can control a number of different lighting components including LEDs and laser illumination systems. M200 sensor controllers include local I/O to support direct triggered inputs and strobed outputs for stand-alone smart-camera functionality.
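The distributed model described above, in which the same processing modules run on either a sensor controller or a PC and work is routed between them, can be caricatured in a few lines. The stage names, the `PIPELINE` table, and the `run` dispatcher are hypothetical illustrations of the concept, not FireSync's API.

```python
# Toy sketch of a distributed pipeline: each stage is tagged with the node
# class it runs on ("sensor" or "pc"); a real system would ship the data
# across Gigabit Ethernet between stages instead of calling them in-process.
def reduce_on_sensor(image):
    # On-sensor data reduction (here, a 2:1 column downsample) shrinks
    # the stream before it crosses the network.
    return [row[::2] for row in image]

def measure_on_pc(image):
    # Downstream measurement on an embedded PC (here, a toy intensity sum).
    return sum(sum(row) for row in image)

PIPELINE = [("sensor", reduce_on_sensor), ("pc", measure_on_pc)]

def run(image):
    data = image
    for node, stage in PIPELINE:
        data = stage(data)  # in a distributed system: dispatch to `node`
    return data
```

The point of the uniform module interface is that the routing table, not the module code, decides where each stage executes, so load can be shifted between sensor controllers and PCs without rewriting the algorithms.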
Taking an integrated, modular approach in the FireSync design has allowed LMI to offer a one-stop, integrated development software environment called FireSync Studio. Studio allows developers to set up key timing, synchronization, and network relationships; visualize the status of all major components of the system; carry out data capture and playback; introduce custom C-based processing code; program sensor controllers and embedded PCs; simulate algorithm behavior; and deploy system configurations and executables. Studio also gives developers the ability to route image data across the network so that image-processing functions can run on sensor controllers and/or embedded PCs; this routing management is what lets application software rapidly build distributed vision systems.
“FireSync Studio is the central integrated development experience for algorithm, sensor, and network development,” says Arden. “Working with Microsoft Visual Studio and TI compiler tools has allowed us to develop an environment for timing configuration, network routing, data visualization, and parallel debug and verification of machine-vision systems. It’s the ultimate application for scalable machine-vision design,” he says.
To illustrate the power of the system, LMI chose Vision 2007 in Stuttgart, Germany, to show a web-inspection system that uses two CMOS area imagers placed next to a low-duty-cycle LED light source to capture and display a color image from a drum rotating at 3500 rpm for a total effective line rate of 90 kHz. “Instead of using linescan cameras with costly optics and a light source running at 100% duty cycle, where cameras are physically far away and lights are placed just above the drum, two CMOS area cameras and a white LED light source were mounted together 400 mm from the drum surface” (see figure). This configuration enables integration of cameras, lights, and controller into a single sensor package. Scanning a wider web is easily supported by using many such integrated sensors connected to the FireSync master synchronization controller and additional embedded PCs.
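The numbers quoted for the demonstration are easy to sanity-check. A short back-of-envelope calculation (the relationship between rpm and line rate is simple arithmetic; only the interpretation as lines per revolution is our own framing):

```python
# Figures from the Vision 2007 web-inspection demo: drum at 3500 rpm,
# effective line rate of 90 kHz.
rpm = 3500
line_rate_hz = 90_000

revs_per_s = rpm / 60                      # ≈ 58.3 drum revolutions per second
lines_per_rev = line_rate_hz / revs_per_s  # lines captured per revolution
print(round(lines_per_rev))                # ≈ 1543
```

So at 90 kHz the system resolves roughly 1543 lines around the drum circumference on every revolution, which is the kind of resolution normally associated with a dedicated linescan camera.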
To reconstruct the image, a stitching algorithm programmed into the DSP of the M200 sensor controller joins both halves of the captured image sequence. The mosaic image data are transferred over Gigabit Ethernet to an embedded PC, where flat-field correction and Bayer decoding are carried out and the final composite image is displayed on a monitor. According to Arden, this function will be offered as a standard library with FireSync Studio in the future. In addition, LMI expects to port other image-processing software, such as its HexSight geometric pattern-matching and metrology software, to the FireSync system in 2008.
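The flat-field correction step mentioned above is a standard operation: each pixel is dark-subtracted, divided by the per-pixel gain measured from a uniformly lit "flat" reference, and rescaled to preserve overall brightness. The sketch below shows the textbook formulation on a 1-D row of pixels; it is not LMI's implementation.

```python
def flat_field(raw, dark, flat):
    """Textbook flat-field correction: (raw - dark) / (flat - dark),
    rescaled by the mean gain so overall brightness is preserved."""
    n = len(raw)
    gain = [flat[i] - dark[i] for i in range(n)]  # per-pixel response
    mean_gain = sum(gain) / n
    return [(raw[i] - dark[i]) * mean_gain / gain[i] for i in range(n)]

# Example row: three pixels with uneven illumination/response.
raw  = [110, 95, 130]   # scene image
dark = [10, 10, 10]     # dark frame (shutter closed)
flat = [210, 110, 260]  # flat frame (uniform illumination)
corrected = flat_field(raw, dark, flat)
```

In the demo system this correction removes the non-uniform illumination of the low-duty-cycle LED source across the two camera fields of view before Bayer decoding produces the final color image.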