High-speed camera leverages telecom technology
In high-speed imaging applications, the large amounts of data generated by megapixel CMOS imagers can create a bottleneck.
This has led to a number of different designs, including high-speed cameras that use high-speed interfaces such as Camera Link to transfer image data to memory-intensive frame grabbers that either store the data or transfer the information to disk using high-speed interfaces such as Fibre Channel (see Vision Systems Design, December 2004, p. 39).
Vision Research's latest Phantom 9.0 camera can transmit image data over an OC-192 VSR interface known as Image3 at speeds up to 15 Gbits/s.
“In many applications, however,” says Frank Mazella, chief information officer with Vision Research (Wayne, NJ, USA; www.visionresearch.com), “systems need to be rugged, have no moving parts, and store large amounts of uncompressed data. In these systems, solid-state memory must be used to capture megapixel image data for periods of between 10 and 15 minutes.” Such applications may include monitoring a missile or rocket launch from lift-off to the time the spacecraft reaches the edge of the atmosphere or even creating a special effect for the entertainment industry.
Vision Research engineers understood that image data from the company's Phantom 9.0, a 1600 × 1200 × 10-bit CMOS camera, needed to be transferred to hundreds of gigabytes of off-camera image memory. Rather than use traditional machine-vision camera interfaces, Vision Research turned to OC-192 very-short-reach (VSR) technology, a telecommunications interface originally developed as a low-cost alternative to expensive serial technology for interconnecting optical network elements that reside within the same central office.
After 1600 × 1200 × 8-bit image data are transferred to between 1.5 and 12 Gbytes of on-board SDRAM, information is multiplexed as a 16-bit, 622-Mbit/s LVDS signal that is then mapped onto 10 parallel channels. Two additional channels are also generated. The first is a 'protection' channel created by performing an XOR operation on the 10 data channels; if any of the ten channels fails, its data can be reconstructed at the receiver from the protection channel and the surviving channels. The final channel carries a set of cyclic redundancy checks that produce a checksum for each of the other 11 channels, allowing the receiver to determine whether any errors occurred during transmission and to correct them.
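The single-XOR protection scheme described above is the same parity idea used in RAID-level striping. A minimal sketch (an illustration of the principle, not Vision Research's actual implementation) shows how one parity channel lets the receiver rebuild any one failed data channel:

```python
# Sketch of XOR "protection" channel recovery (illustrative only; channel
# words here are arbitrary 16-bit values, not real Phantom 9.0 data).

def make_protection(channels):
    """XOR all data-channel words together to form the parity word."""
    parity = 0
    for word in channels:
        parity ^= word
    return parity

def recover(channels, failed_index, parity):
    """Rebuild the failed channel's word by XOR-ing the parity word
    with the words from the surviving channels."""
    word = parity
    for i, w in enumerate(channels):
        if i != failed_index:
            word ^= w
    return word

# Ten data channels (hypothetical 16-bit words).
data = [0x3A5F, 0x1F00, 0x2BC3, 0x0810, 0x3FF1,
        0x0002, 0x1234, 0x2AAB, 0x1550, 0x0F0E]
parity = make_protection(data)

# Simulate losing channel 4 and recovering it from the other eleven signals.
assert recover(data, 4, parity) == data[4]
```

Because XOR parity can reconstruct only one missing channel, the separate CRC channel matters: it identifies *which* channel is in error so the parity recovery can be applied to the right one.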
An array of twelve 850-nm-wavelength lasers, each operating at 1.25 Gbits/s, allows up to 15 Gbits/s to be transmitted between the camera and the host at distances of up to 400 m. To store data off-camera, Vision Research has developed its Image3, a microprocessor-controlled, Linux-based memory subsystem containing 80 Gbytes of DDRAM. At the Image3, the 12 parallel optical signals are re-converted to a 16-bit-wide data bus operating at 622 Mbits/s and demultiplexed into memory.
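The numbers above can be checked with simple multiplication (a back-of-envelope sketch that ignores line-coding and framing overhead):

```python
# Link-rate arithmetic from the figures quoted in the article.

# 16-bit LVDS bus at 622 Mbits/s per line:
payload_gbps = 16 * 622 / 1000
print(payload_gbps)   # 9.952 Gbits/s -- essentially the OC-192 line rate

# Twelve lasers at 1.25 Gbits/s each:
raw_link_gbps = 12 * 1.25
print(raw_link_gbps)  # 15.0 Gbits/s raw optical capacity
```

The gap between the 15-Gbit/s raw optical capacity and the roughly 10-Gbit/s payload leaves room for the protection and CRC channels and for per-laser coding overhead.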
“Often,” says Mazella, “memory requirements dictate that more than 80 Gbytes of memory are required, such as in military applications. In these cases, it is possible to configure up to four Image3 subsystems together to provide a maximum storage capacity of 320 Gbytes of image memory.” At present, the Phantom 9.0 camera can only transmit image data directly as a data stream to the Image3 storage system.
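A rough capacity calculation puts those figures in context (the frame rates below are hypothetical examples for illustration, not specifications from the article):

```python
# Storage arithmetic for four chained Image3 subsystems, assuming
# 1600 x 1200 frames at 8 bits (1 byte) per pixel.

frame_bytes = 1600 * 1200        # ~1.92 Mbytes per frame
capacity_bytes = 4 * 80e9        # 320 Gbytes total
frames = capacity_bytes / frame_bytes

# Recording duration in minutes at a few example frame rates.
for fps in (100, 500, 1000):
    print(fps, "fps:", round(frames / fps / 60, 1), "min")
```

At moderate frame rates the 320-Gbyte configuration holds tens of minutes of uncompressed megapixel imagery, consistent with the 10-to-15-minute recording windows Mazella cites; at full high-speed rates the window shrinks to a few minutes.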
In the future, however, the system will be upgraded to allow the host computer to interface directly to the camera using a true OC-192 transmit/receive implementation. Addressing such applications with solid-state memory, however, is not inexpensive.