MACHINE-VISION SOFTWARE: Graphical software tools target embedded applications

Aug. 1, 2010
In many machine-vision and image-processing applications, increasing the speed of point or neighborhood operators can be most efficiently accomplished using FPGAs. Many camera manufacturers have taken advantage of this fact, embedding image-preprocessing functions such as flat-field correction and Bayer interpolation into their cameras. This functionality is also offered by vendors of camera interface boards to allow tasks such as color conversion and image filtering to be performed before images are transferred to host memory.
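
As a concrete illustration of why such preprocessing maps well onto per-pixel hardware, the sketch below shows flat-field correction expressed as a pure point operation in NumPy. It is a minimal host-side sketch only; the array names, the gain formulation, and the synthetic data are illustrative assumptions, not the implementation used by any particular camera or frame grabber vendor.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Illustrative flat-field correction: a pure point (per-pixel) operation.

    raw  -- image to correct
    dark -- dark-frame reference (sensor offset)
    flat -- flat-field reference captured under uniform illumination
    """
    # Per-pixel gain that equalizes sensor/optics response.
    gain = np.mean(flat - dark) / (flat - dark)
    # Each output pixel depends only on the same input pixel, which is why
    # operators like this pipeline so efficiently in an FPGA.
    return (raw - dark) * gain

# Example with synthetic data
rng = np.random.default_rng(0)
dark = np.full((480, 640), 8.0)
flat = 180.0 + 20.0 * rng.random((480, 640))
raw = 100.0 + 30.0 * rng.random((480, 640))
corrected = flat_field_correct(raw, dark, flat)
```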

Despite these developments, it remains difficult for system developers to add their own intellectual property (IP) to cameras or frame grabber boards, because relatively few companies offer development tools capable of performing such tasks. Embedded system designers are often forced to build their own hardware and image-processing software and to use development tools from FPGA vendors such as Xilinx (www.xilinx.com) and Altera (www.altera.com; both of San Jose, CA, USA) to perform specialized tasks.

At the same time, most software vendors have recognized the need to offer their customers high-level image-processing functions such as blob analysis and geometric pattern matching. This has resulted in the emergence of graphical programming interfaces such as LabView from National Instruments (Austin, TX, USA; www.ni.com), which ease the task of programming machine-vision systems.

Recognizing the frustrations of embedded system designers and the advantages presented by graphical programming interfaces, third-party software vendors have embarked on providing software packages that allow FPGAs to be programmed at a high level.

Five years ago, Silicon Software (Mannheim, Germany; www.silicon-software.com) introduced a graphically oriented interface called VisualApplets that lets developers program machine-vision and image-processing functions on the company’s Xilinx-based microEnable series of PCIe Camera Link and GigE Vision camera interface boards (see “VisualApplets make FPGA programming a snap,” sidebar to “GigE standard looks for frame grabber support” in Vision Systems Design, January 2006).

At AUTOMATICA 2010, held in Munich in June, the company announced that it plans to expand these software offerings, allowing embedded system developers to port VisualApplets functions to new hardware platforms.

Developers must first generate a hardware specification of their camera and FPGA system as an XML file. From this specification, the company’s eVA CoreGen core generator creates VHDL or Verilog code (see figure). An eVA installer then embeds the resulting netlist, place-and-route constraints, DLLs, and hardware description into VisualApplets. Visual programming can then be accomplished using the graphical tools within VisualApplets, allowing, for example, smart camera vendors to more easily develop FPGA-based code for Xilinx-based cameras.
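
The schema of the eVA hardware specification is not documented in this article. Purely as a hypothetical sketch of the kind of information such an XML description might carry (FPGA device, camera interface, memory resources), the Python snippet below builds a small XML file with ElementTree; every element and attribute name here is invented for illustration and is not the actual eVA format.

```python
import xml.etree.ElementTree as ET

# Hypothetical example only: element and attribute names are invented to
# suggest what a camera/FPGA hardware description might contain.
# They are not taken from the eVA specification schema.
platform = ET.Element("platform", name="example_smart_camera")
ET.SubElement(platform, "fpga", family="xilinx-spartan3", device="xc3s4000")
ET.SubElement(platform, "camera_interface", type="CameraLink",
              taps="2", bits_per_pixel="8")
ET.SubElement(platform, "memory", kind="DDR", size_mb="128")

ET.ElementTree(platform).write("hardware_spec.xml",
                               xml_declaration=True, encoding="utf-8")
```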

“While such functions include image merging, Bayer interpolation, and flat-field correction,” explains Michael Noffz, head of marketing with Silicon Software, “we recognized that some developers needed more sophisticated algorithms that combine the lower-level functions found in VisualApplets.” Because of this, the company has also introduced a series of Smart Applets, such as adaptive binarization and blob classification, that reduce the development time needed to deploy such algorithms.
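
As a general technique, adaptive binarization thresholds each pixel against a statistic of its local neighborhood rather than a single global value. The following NumPy/SciPy sketch illustrates that idea only; the window size, offset, and local-mean formulation are assumptions, and this is not Silicon Software’s Smart Applet implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_binarize(image, window=31, offset=5.0):
    """Threshold each pixel against its local mean (generic adaptive binarization).

    window -- side length of the square neighborhood
    offset -- bias subtracted from the local mean to suppress noise
    """
    img = image.astype(np.float32)
    local_mean = uniform_filter(img, size=window)   # neighborhood operator
    return (img > local_mean - offset).astype(np.uint8) * 255
```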

Of course, Silicon Software is not the only company to recognize the benefits of an FPGA-based graphical programming language. At The Vision Show, held in May in Boston, Adsys Controls (Irvine, CA, USA; www.adsyscontrols.com) announced an FPGA image-processing toolkit the company has developed for NI’s PXI-based FlexRIO FPGA module, which debuted at NIWeek 2009 (see “Third parties add vision modules to PXI systems,” Vision Systems Design, October 2009).

At The Vision Show, Adsys Controls showed how images from a GigE camera captured with its ProLight CLG-1 GigE Vision adapter module for the FlexRIO FPGA could be processed in real time using the image-processing toolkit within LabView FPGA.

“Initially,” says Brent Bergan, embedded products manager with Adsys, “these functions will include filters, color-decoding algorithms such as Bayer conversion, and 1-D and 2-D FFTs.” To develop these algorithms, Adsys is using both Simulink from The MathWorks (Natick, MA, USA; www.mathworks.com) and VHDL code developed in-house. This code is then brought into LabView FPGA’s graphical environment using NI’s Component-Level Intellectual Property (CLIP) Node.
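
To illustrate the kind of frequency-domain processing the quote refers to, the sketch below applies a 2-D FFT, a Gaussian low-pass mask, and an inverse FFT in NumPy. It is a host-side reference of the general technique under assumed parameters (the sigma value and mask shape are arbitrary), not Adsys Controls’ LabView FPGA implementation.

```python
import numpy as np

def fft_lowpass(image, sigma=20.0):
    """Generic 2-D FFT low-pass filter (host-side reference sketch)."""
    f = np.fft.fftshift(np.fft.fft2(image))          # forward 2-D FFT, DC centered
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2   # squared distance from DC
    mask = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```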
