In the future, system integrators will be able to choose from a range of PCI-based products with a variety of on-board processing capabilities.
By Andrew Wilson, Editor
There were three apparent trends at Vision 2005, held in Stuttgart, Germany, in November. The first was that the emergence of the PCI Express bus as the next-generation switched-fabric interconnect was as slow as had been predicted (see Vision Systems Design, August 2005, p. 37). Of the companies present at Vision 2005, only Dalsa Coreco and dPict Imaging announced frame-grabber products for PCI Express (see “PCI Express frame grabbers debut at Vision 2005,” p. 22). Of the frame grabbers that were exhibited, however, perhaps the most interesting was from Silicon Software, a company that showed how its latest generation of high-level graphical software tools can be used to program the FPGAs on its Camera Link frame grabber (see “VisualApplets make FPGA programming a snap,” p. 48).
If those trends were muted, however, one was unmistakable: the plethora of Gigabit Ethernet (GigE) cameras, including those from Dalsa, Imperx, JAI Pulnix, and Toshiba Teli (see Fig. 1). To speed time to market, all of these companies leveraged the work already done by Pleora Technologies in the development of GigE Internet Protocol (IP) engines.
JAI Pulnix and Pleora developed a custom board set based on Pleora’s iPORT PT1000-VB in-camera engine for the JAI Pulnix TM-4100GE, 2k × 2k, 15-frame/s monochrome, progressive-scan CCD camera. JAI Pulnix also used the PT1000-VB in the development of its megapixel CCD cameras. For both Toshiba Teli and Dalsa, Pleora supplied its IP engine in the form of intellectual property buried inside a gate array.
In addition to leveraging off-the-shelf technology to speed time to market, the camera vendors are taking advantage of the nearly three years of work that Pleora and others have put into the development of the forthcoming Automated Imaging Association GigE Vision standard, designed specifically for machine vision (see Vision Systems Design, June 2005, p. 43). The standard defines, among other things, how to control GigE cameras and provides a mechanism for cameras to send image and other data to a host.
Pleora’s iPORT IP engines convert imaging and video data into IP packets for GigE transport to PCs and provide serial or parallel control channels for cameras and other equipment. The purpose-built FPGA in the product performs a TCP/IP offload function with no embedded OS, improving performance and easing in-camera integration.
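Conceptually, an in-camera IP engine must slice each frame into packet-sized payloads, tag them so the host can put the frame back together, and stream them out. The sketch below illustrates that idea in Python; the payload size, header fields, and function names are illustrative assumptions, not the actual iPORT or GigE Vision packet format.

```python
# Illustrative sketch of frame packetization for GigE transport.
# PAYLOAD_SIZE, the header fields, and the function names are
# assumptions for illustration, not the real GigE Vision format.

PAYLOAD_SIZE = 1440  # bytes of image data per packet (fits a standard Ethernet frame)

def packetize(frame_bytes, frame_id):
    """Return a list of (header, payload) tuples covering the whole frame."""
    packets = []
    for offset in range(0, len(frame_bytes), PAYLOAD_SIZE):
        payload = frame_bytes[offset:offset + PAYLOAD_SIZE]
        header = {"frame_id": frame_id,                   # which frame this belongs to
                  "packet_id": offset // PAYLOAD_SIZE,    # position within the frame
                  "length": len(payload)}
        packets.append((header, payload))
    return packets

def reassemble(packets):
    """Host-side inverse: order packets by id and concatenate payloads."""
    ordered = sorted(packets, key=lambda p: p[0]["packet_id"])
    return b"".join(payload for _, payload in ordered)
```

The point of doing this in a dedicated FPGA rather than an OS network stack is that the slicing and header generation happen at line rate with deterministic timing.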
Getting around the OS
“One of the problems with many Ethernet video-delivery solutions is that they use general-purpose software, such as the Windows or Linux IP stacks and commercial network-interface-card (NIC) drivers, at the camera and PC to receive and/or transmit data. These general-purpose, OS-based solutions are designed to deal with a host of different applications, as well as competing demands on processing time. The result is too much latency and unpredictability for real-time vision systems,” says George Chamberlain, president of Pleora.
To overcome this problem at the PC end, Pleora’s iPORT solution uses a high-performance driver on a standard NIC that routes data to PC memory without involving the computer’s OS. Other companies are also starting to offer intelligent programmable TCP/IP offload engines. At Vision 2005, for example, Dalsa Coreco announced its X64-GigE series, which bundles the company’s X64 frame-grabber technology in a stand-alone box to capture images from a multitude of cameras and transfer them over standard GigE links to the PC (see Fig. 2). According to Dalsa Coreco, the X64-GigE series is suited to machine-vision applications in which the host computer cannot be located near the camera, distributed processing is desired, or data concentration is necessary.
Philip Colet, vice president of sales and marketing, says that the first product in the series will be the X64-CL GigE Lite, which will use the company’s hardware and software networking technology. The X64-CL GigE Lite provides a single Base Camera Link input along with a single 1000-Mbit/s Ethernet connection using Ethernet and UDP/IP protocols. In operation, a single Camera Link camera attached to the X64-CL GigE Lite can serve more than one host computer at the same time through a network hub. Similarly, multiple X64-CL GigE Lites can be connected to a single host computer through a multiport hub or a router.
Tattile also recognizes the growing interest in GigE cameras and interfaces. At Vision 2005, the company showed its latest TAG server, a product that interfaces a number of different camera formats to GigE networks.
“By offering different models with either two Camera Link interfaces, two GigE channels, or four LVDS channels,” says Robert Fenwick Smith, managing director, “we can offer the system integrator the benefit of interfacing multiple cameras to GigE networks.” To avoid the latency and unpredictability of off-the-shelf NICs, the TAG server series incorporates an Intel XScale host processor, two FPGA front ends, and four Blackfin DSPs, each with dedicated RAM. To program the interface, system developers can use Antares Explorer, a graphical development environment with an optimized vision library, or Tattile’s standard C-based image library.
Unlike Dalsa and Tattile, GigaLinx offers a number of boards built from a set of hardware building blocks, called vAtoms, each of which performs a different machine-vision task. The board set communicates over a GigE switched-network architecture that provides DSP- or FPGA-based image processing, image capture, and memory capabilities. The vAtom Krypton is a dual-DSP board that features two autonomous TI C6414 DSPs running at up to 720 MHz, connected to a switched GigE network through two GigE channels.
System scaling is achieved by stacking vAtom Krypton boards together. The vAtom Krypton can be used with other GigaLinx vAtoms, such as the vAtom Argon Camera Link-to-Gigabit Ethernet converter, the vAtom Neon memory board, or the vAtom Cobalt FPGA processing board, or as a stand-alone hardware engine for offloading processing from the PC (see Fig. 3). Connection to PC-based systems is made through a standard Gigabit Ethernet NIC.
Other frame-grabber-board vendors are also looking to exploit the trend toward GigE cameras. According to Joseph Sgro, president of Alacron, his company is currently developing a four-channel version of a GigE interface board for the PCI bus. Alacron’s Fast-X GigE series accepts a range of inputs, from Full Camera Link to three independent Base Camera Link, analog, UXGA, and LVDS sources, and transfers data to and from four on-board standard GigE links. Captured video streams from a mix of Camera Link, analog, UXGA, LVDS, and GigE cameras can be moved to host PC memory, distributed over the same four on-board GigE links to other PCs, or sent directly to hard-disk storage via on-board SATA controllers.
“The current line speed of GigE devices limits data bandwidth to less than 100 Mbytes/s, that is, to Base Camera Link speed,” says Sgro. “Thus, GigE cameras with higher resolutions or higher frame rates cannot use single GigE connections for real-time data capture.” Practically all PCs have one or two GigE interfaces, but they cannot sustain continuous line-speed data transfers without impacting the host CPU’s performance.
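Sgro’s figure is easy to check with back-of-the-envelope arithmetic: raw GigE line rate is 1 Gbit/s (125 Mbytes/s), and after framing and protocol overhead roughly 100 Mbytes/s of image payload remains. A short sketch, with illustrative function and constant names:

```python
# Back-of-the-envelope check of the GigE bandwidth limit quoted above.
# The names here are illustrative; the 100-Mbytes/s figure is Sgro's estimate.

def camera_data_rate(width, height, bytes_per_pixel, fps):
    """Sustained image data rate in Mbytes/s (1 Mbyte = 1e6 bytes)."""
    return width * height * bytes_per_pixel * fps / 1e6

GIGE_USABLE_MBYTES_S = 100.0  # approximate usable GigE payload bandwidth

# The 2k x 2k, 15-frame/s monochrome camera mentioned earlier fits on one link:
mono_2k = camera_data_rate(2048, 2048, 1, 15)         # about 62.9 Mbytes/s
fits_single_link = mono_2k <= GIGE_USABLE_MBYTES_S    # True

# The same sensor at 30 frames/s would exceed a single link:
mono_2k_fast = camera_data_rate(2048, 2048, 1, 30)    # about 125.8 Mbytes/s
needs_more_links = mono_2k_fast > GIGE_USABLE_MBYTES_S  # True
```

This is why higher-resolution or faster cameras need multiple GigE links, or a faster interconnect, for real-time capture.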
TCP offload engines incorporated in GigE interfaces help to reduce this load but do not eliminate it. More advanced GigE adapters include embedded processors that support remote direct memory access (RDMA) for high-data-rate transfers. These embedded processors unwrap incoming data and deposit it directly into system memory, or take data from system memory and stream it to a remote client. “When an add-in computer board or stand-alone box is equipped with an embedded processor to handle RDMA data streaming, the GigE adapter becomes similar to a frame grabber, getting large streams of structured video data in and out of the PC’s system memory,” Sgro adds.
For its part, Matrox Imaging is developing a multiport GigE board with host offload for tasks such as image processing, for introduction later this year. Within the same timeframe, the company will also announce a C-based FPGA developer’s toolkit for customizing the FPGAs used on both the company’s Odyssey Xpro+ vision processor and its Solios frame grabber.
More interesting, however, may be the move toward fiberoptic interfaces. Already, Pleora Technologies offers versions of its IP engines for delivering image data over fiber-based GigE connections. Pleora’s iPORT FB1000-CL and iPORT FB1000-ST IP engines stream image data directly onto fiber cable, eliminating the need for copper-to-fiber converters (see Fig. 4). “These products meet market demand for systems based on optical fiber,” says Chamberlain.
While the FB1000-CL interfaces directly to Base-configuration Camera Link cameras, the FB1000-ST interfaces to raw digital data. To date, no vendors have incorporated the product into their cameras. But this is likely to change. When it does, expect frame-grabber vendors to follow suit, offering PCI-based plug-in boards that allow fiber-based cameras to directly interface to PCs.
VisualApplets make FPGA programming a snap
Although almost every camera and frame grabber uses field-programmable gate arrays (FPGAs) to perform camera setup and embedded image-processing functions such as Bayer color conversion, the ability to program these devices is often hidden from the user. From a system integrator’s perspective, however, leveraging the power of such devices can be very beneficial, since dataflow-like algorithms can often run at speeds hundreds of times faster than on any host PC.
One of the drawbacks of using these devices for image processing and machine vision has been the need for the developer to understand hardware-description languages such as VHDL, which, for nonprogrammers, makes using the devices both expensive and slow. Of course, companies have attempted to create high-level, dataflow-like languages that speed the development of FPGA-based systems. One of the first of these was Datacube, which introduced its Visual CHIP Studio more than two years ago (see Vision Systems Design, September 2004, p. 87).
At the recent Vision show (Stuttgart, Germany; November 2005), Silicon Software (Mannheim, Germany; www.silicon-software.com) introduced a graphically oriented interface called VisualApplets that lets developers program machine-vision and image-processing functions on the company’s Xilinx (San Jose, CA, USA; www.xilinx.com) Spartan IIe-based microEnable III PCI Camera Link frame grabber. “VisualApplets is a tool for hardware programming of FPGAs based on graphical dataflows,” says Klaus-Henning Noffz, managing director of Silicon Software (see figure). “These dataflows are arranged using a combination of operators and filter modules and are compiled to a loadable hardware applet.” The accompanying operator libraries contain operators for pixel manipulation, logical operators for classification tasks, and more complex modules for color processing and image compression.
Using these libraries, a number of image-processing functions such as look-up tables, thresholding, binarization, and counter functions can be intuitively placed on-screen in a pipelined fashion. Without any additional programming, the finished program is converted to functional blocks within the microEnable III’s FPGA. “No programming is necessary to control the synchronization or timing of the dataflow within the FPGA,” says Noffz. “The developer simply controls the complexity of the processing by allocating different processing resources or functions from the library.”
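In software terms, such a graphical dataflow amounts to composing per-pixel operators into a pipeline. The sketch below models that idea with NumPy; the stage names and the set of operators are illustrative, not Silicon Software’s actual library, and on the FPGA the stages run concurrently on streaming pixels rather than sequentially on whole frames.

```python
import numpy as np

# Conceptual model of a dataflow pipeline: each operator is a stage,
# and composing stages mirrors wiring modules together on screen.
# Names and operators are illustrative, not the VisualApplets library.

def lookup_table(lut):
    """Per-pixel remap through a 256-entry lookup table."""
    return lambda img: lut[img]

def threshold(level):
    """Binarize: pixels at or above `level` become 255, the rest 0."""
    return lambda img: (img >= level).astype(np.uint8) * 255

def pipeline(*stages):
    """Chain stages so each one feeds the next, like a wired dataflow."""
    def run(img):
        for stage in stages:
            img = stage(img)
        return img
    return run

# Invert the image with a LUT, then binarize at 128:
invert = np.arange(256, dtype=np.uint8)[::-1]
process = pipeline(lookup_table(invert), threshold(128))

frame = np.zeros((4, 4), dtype=np.uint8)  # all-black test frame
result = process(frame)                   # inverted to 255, then kept by threshold
```

The appeal of the graphical approach is that this composition is drawn rather than written, and the tool handles the pipeline synchronization that the software model above glosses over.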
Silicon Software’s synthesis and Xilinx’s place and route software transparently convert the hardware design into an FPGA layout and, once installed, the system developer’s program is automatically integrated, configured, and executed. After this process is complete, a hardware applet that describes the image-processing functions performed in the dataflow diagram is created. This can then be loaded into the company’s microDisplay viewer and camera-configuration software.
At Vision 2005, Silicon Software demonstrated the power of the software by implementing a sum-of-absolute-differences (SAD) algorithm to compute image motion. Interestingly, this is the same algorithm used by Focus Robotics (Hudson, NH, USA; www.focusrobotics.com) to compute depth perception from two independent cameras (see Vision Systems Design, August 2005, p. 23).
In the dataflow architecture developed by Silicon Software for motion analysis, data from a single Camera Link camera is split into two adjacent buffers. Every other frame from the second image buffer is then removed, and the two data paths are synchronized so that each pixel in both images is correctly registered. After synchronization, the two images are subtracted and the absolute value of the result determined. “This,” says Noffz, “will result in an image that visually shows the motion between the image frames.” The result is then transferred to the PC and displayed as a new image.
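The frame-differencing step of that dataflow is simple to model in software. The sketch below (NumPy, with illustrative names) subtracts two successive frames and takes the absolute value, producing an image that is nonzero only where something moved; the frame grabber performs the same per-pixel arithmetic in the FPGA at line rate.

```python
import numpy as np

# Software model of the motion-analysis step described above:
# abs(frame_n - frame_n-1) lights up only the pixels that changed.

def motion_image(prev_frame, curr_frame):
    """Absolute difference of two 8-bit frames, computed in int16 to
    avoid uint8 wraparound, then returned as uint8."""
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

# A bright square "moves" one pixel to the right between frames:
a = np.zeros((8, 8), dtype=np.uint8)
b = np.zeros((8, 8), dtype=np.uint8)
a[2:5, 2:5] = 200
b[2:5, 3:6] = 200

motion = motion_image(a, b)  # nonzero only at the square's leading and trailing edges
```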
“To develop such an image-processing function using VisualApplets would take an integrator approximately five minutes,” says Noffz, “greatly reducing FPGA development time over traditional VHDL programming methods.” Priced at around 5500 euros for the microEnable III and the VisualApplets software together, the package currently runs under Windows 2000/XP; a device driver for Windows XP64 is under development. - AW