By Philip Colet
Frame grabbers—devices that interface between a sensing device (typically a camera) and a computer—have existed since the late 1970s. However, early frame grabbers and current products share little beyond the name. In the early days of computing, frame grabbers were very expensive devices that interfaced to standard video sources such as RS-170 or CCIR. Often, they could acquire a complete video image only after capturing multiple frames of data; in other words, no moving-image capture was possible.
During the past five years, a dramatic increase in the number of suppliers, interface standards, and capabilities has made the task of selecting the right frame grabber much more complex. The wrong choice can lead to significant delays in system deployment and lagging system performance. Making the right choice means understanding the varying capabilities and price points of the many products on the market, as well as the application's requirements and how those requirements relate to product performance.
Bus architecture refers to how the frame grabber physically connects to the primary computer to provide power, control, and data interchange. Current bus standards commonly used in machine-vision applications include the PCI specification (PCI-32, PCI-64, and PCI-X) and the VME specification.
Since it was the first of these standards to reach the market and because it meets the needs of roughly 60% of current applications, PCI-32 is the most popular of these buses. PCI-32 delivers a maximum data-transfer rate of 120 Mbytes/s—more than a twentyfold improvement over the previously predominant standard, the ISA bus, which had a maximum data-transfer rate of 5 Mbytes/s. Newer buses that offer much greater data-transfer rates when used with high-speed cameras are based on extensions of the PCI-32 standard and include PCI-64 (480 Mbytes/s of transfer bandwidth) and PCI-X (up to 1 Gbyte/s). Both of the latter buses are currently available but command higher prices because of their higher performance.
The right bus depends on the application's need for speed and on two corresponding hardware factors: the speed of the camera being used and the number of frame grabbers installed in the system. A 450-Mbyte/s CMOS camera will automatically require the use of a higher-speed bus interface. But what if you have several slower (for example, 60-Mbyte/s) cameras inspecting multiple angles? Just two such cameras already demand PCI-32's full 120-Mbyte/s peak, so system designers would be advised to move to a bus with higher bandwidth to ensure a more reliable transfer of image data into host memory.
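The arithmetic above can be sketched as a simple headroom check. This is an editorial illustration, not a vendor tool: the bus figures are the article's, and the 80% practical-loading ceiling (mentioned later in the discussion of bus loading) is an assumption.

```python
# Hypothetical sketch: does a set of cameras fit within a bus's usable
# bandwidth? Peak bus rates are the article's figures; the 80% practical
# loading ceiling is an assumption.
BUS_BANDWIDTH_MB_S = {"PCI-32": 120, "PCI-64": 480, "PCI-X": 1000}

def fits_on_bus(camera_rates_mb_s, bus, max_loading=0.8):
    """Return (total demand, usable bandwidth, fits?) for the given bus."""
    demand = sum(camera_rates_mb_s)
    usable = BUS_BANDWIDTH_MB_S[bus] * max_loading
    return demand, usable, demand <= usable

# Two 60-Mbyte/s cameras on PCI-32: 120 Mbytes/s demanded vs 96 usable.
print(fits_on_bus([60, 60], "PCI-32"))   # (120, 96.0, False)
print(fits_on_bus([60, 60], "PCI-64"))   # (120, 384.0, True)
```

Under these assumptions, even two modest cameras exceed what PCI-32 can sustain, which is the article's point about moving to PCI-64 or PCI-X.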
Another variant in frame-grabber design is the use of onboard memory. At the extreme ends of the spectrum, frame grabbers can have no memory or be equipped with gigabytes of onboard memory. However, the majority include between 16 and 64 Mbytes of onboard memory. The difference once again comes down to reliably transferring images. In the case of frame grabbers with no onboard memory, image data are immediately transferred onto the system bus and into host memory. In situations where the host bus is loaded with other data—say the 80% maximum loading that can occur when multiple frame grabbers are used or when images are transferred to and from a hard drive—frame grabbers with no onboard memory run a high risk of losing valuable images.
Onboard-memory versions of the boards have the capacity to archive image data in memory to ensure that no data are lost. This means that, if the host PCI bus is not immediately available, image data can be stored temporarily until the bus is available. To select the correct amount of memory, designers should factor in the resolution and data rates of the cameras, as well as expected bus loading and some safety factor for worst-case scenarios. While onboard memory provides higher reliability, frame grabbers equipped with it are also somewhat more expensive than those without.
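The sizing rule described above—resolution, data rate, expected stall, plus a safety factor—can be written out as a back-of-the-envelope calculation. The camera geometry, stall duration, and 2x safety factor below are illustrative assumptions, not vendor recommendations.

```python
# Hypothetical sizing sketch: onboard memory needed to ride out a
# worst-case bus stall. All numbers in the example are assumptions.
def onboard_memory_mb(width_px, height_px, bytes_per_px,
                      frames_per_s, worst_stall_s, safety=2.0):
    """Buffer size (Mbytes) to absorb worst_stall_s of frames, with margin."""
    frame_mb = width_px * height_px * bytes_per_px / 1e6
    return frame_mb * frames_per_s * worst_stall_s * safety

# 1024 x 1024, 8-bit camera at 30 frames/s, buffering a 0.5-s bus stall:
print(round(onboard_memory_mb(1024, 1024, 1, 30, 0.5), 1))   # 31.5
```

Under these assumptions the result lands inside the 16–64-Mbyte range the article says most boards ship with, which is consistent with its guidance.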
The physical interface between the camera and the board also presents a number of choices for system designers. While interface selection is governed by the exact camera model chosen (with the camera choice itself being determined by such characteristics as resolution, sensitivity, size, and speed of the sensor in the camera), system designers should be aware of the options available.
Camera interfaces can be divided into two general categories: analog and digital. Analog interfaces adhere to RS-170 and CCIR voltage specs; however, the timing specifications will vary greatly from model to model. The physical connection, which is performed through standard video cable, is of less concern to system designers.
Several digital interface standards are currently available. Taking them chronologically, the features and benefits of each interface are
RS-422: One of the first standards to be introduced, the goal of this differential voltage interface was to provide camera and frame-grabber vendors with standards for voltage levels and connector types. Although voltage standards were adopted, connector types were not, leading to an enormous investment on the part of vendors and users alike in custom cabling.
LVDS: A variant of the RS-422 standard, low-voltage differential standard (LVDS) increased the data throughput and cable lengths possible because of its higher noise immunity. However, like RS-422, LVDS suffered from a plethora of cable/connector types.
FireWire: Also known by the IEEE nomenclature "1394," FireWire was originally created as a peripheral-interface bus for connecting printers, human-machine interfaces, and archiving systems to the Apple computer. During the late 1990s, Sony adopted FireWire as a digital video interface standard on both its broadcast and industrial camera lines. Several extensions to the original specification (1394a and 1394b) have adapted the standard for machine-vision applications. Currently, 1394 is still targeted at lower-end multimedia applications that do not require real-time imaging performance.
Camera Link: Based on the Channel Link standard established by National Semiconductor (Santa Clara, CA, USA; www.national.com), Camera Link was the first interface that successfully defined and established signal, connector, and cable standards. Camera Link delivers several distinct advantages over any other standard, including the highest feasible bandwidth, scalable performance, and secure industrial cables. Camera Link has been widely adopted by both camera and frame grabber manufacturers, and has proven beneficial to suppliers and users alike in the form of lower support costs and faster system integration. And, compared to FireWire and GigE, Camera Link is the only deterministic standard. Camera Link delivers a reliable mechanism to control the acquisition process from the time the trigger occurs to the time that an image is transferred to system memory.
GigE (Gigabit Ethernet): The latest of the standards to be introduced, GigE is still in the process of being defined. Based on 1000-Mbit/s Ethernet, GigE provides about 108 Mbytes/s of usable serial bandwidth (compared with more than 500 Mbytes/s for Camera Link). The biggest advantage of GigE is cable length: runs of 100 m are standard, and they can be extended to more than 1000 m with the use of standard routers or switches.
Which standard is best? Since it offers the highest bandwidth and secure industrial connectors and has been widely adopted throughout the machine-vision industry, Camera Link is certainly a good choice. When coupled with vendor technologies such as Coreco Imaging's Trigger-to-Image Reliability framework, Camera Link also offers a highly deterministic and reliable transfer protocol. GigE is advantageous in low-bandwidth, long-distance (greater than 10 m) applications, while FireWire offers advantages when many low-bandwidth cameras are required (noncontinuous acquisition from each camera).
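The selection guidance above can be condensed into a small decision sketch. The bandwidth and distance thresholds come from the article (roughly 108 Mbytes/s for GigE, 500 Mbytes/s and about 10 m of reach for Camera Link); the decision order and the 40-Mbyte/s cutoff for "low-bandwidth" FireWire cameras are editorial assumptions, not an industry rule.

```python
# Hypothetical decision sketch mapping the article's criteria (bandwidth
# need, cable length, camera count) to an interface suggestion.
def suggest_interface(bandwidth_mb_s, cable_m, many_low_bw_cameras=False):
    if cable_m > 10:
        # Beyond Camera Link's reach; GigE if the bandwidth fits.
        return "GigE" if bandwidth_mb_s <= 108 else "no single standard fits"
    if many_low_bw_cameras and bandwidth_mb_s <= 40:
        # FireWire suits many low-bandwidth, noncontinuous cameras.
        return "FireWire"
    if bandwidth_mb_s <= 500:
        return "Camera Link"
    return "no single standard fits"

print(suggest_interface(300, 5))    # Camera Link
print(suggest_interface(50, 80))    # GigE
```

This is only a first-pass filter; as the article notes, the camera model itself (resolution, sensitivity, sensor speed) ultimately governs the interface choice.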
INS AND OUTS OF I/O
Another benefit of frame grabbers is integrated I/O capabilities, as the ability to control the acquisition process is a basic requirement of many machine-vision systems. Essentially, a sensor device establishes that a part is in place to be imaged (referred to as a "trigger event"), which is followed by a signal to fire the strobe, expose the camera, and read the data. Depending on the exact physical configuration, variable delays exist between the time the part is detected and the correct time to expose the camera. Until recently, system designers were forced to use independent I/O cards to sense the part and control the strobe/camera signal sequence, which required a time-consuming and costly extra level of integration and programming.
Analog and digital frame grabbers designed for machine-vision applications (unlike FireWire, GigE, or multimedia frame grabbers) incorporate sophisticated levels of onboard I/O. Because developers traditionally had to use separate I/O cards, onboard I/O reduces the number of computer slots used, integration time, and the number of software interfaces. In addition, onboard timing controllers deterministically control the sequence of events, which is especially important in applications where the speed of the material-handling equipment is not constant.
PHILIP COLET is vice president of sales and marketing at Coreco Imaging, St-Laurent, QC, Canada; www.corecoimaging.com.