Sensors, embedded processors, and digital I/O capability allow smart cameras to be used as machine-vision systems
When building a machine-vision system, system integrators are faced with a number of choices, all at different price/performance levels. In applications such as machine monitoring and surveillance, CCD- and CMOS-based cameras with little or no processing capability can be used. In more sophisticated machine-vision applications, an FPGA can perform pipelined image-processing tasks such as Bayer interpolation, flat-field correction, and image enhancement.
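Flat-field correction, one of the pipelined tasks mentioned above, is a simple per-pixel operation. The sketch below is illustrative only (it is not any vendor's API and operates on flattened pixel lists): the raw frame is corrected using dark- and flat-field reference frames so that fixed sensor and illumination non-uniformity divides out.

```python
def flat_field_correct(raw, dark, flat):
    """Correct a raw frame using dark- and flat-field reference frames.

    corrected = (raw - dark) / (flat - dark), rescaled by the mean gain
    so a uniformly lit scene maps back to its original intensity level.
    All frames are flattened lists of pixel values of equal length.
    """
    gain = [f - d for f, d in zip(flat, dark)]
    mean_gain = sum(gain) / len(gain)
    return [
        (r - d) * mean_gain / g if g != 0 else 0.0
        for r, d, g in zip(raw, dark, gain)
    ]
```

Because each output pixel depends only on the corresponding input pixels, the operation maps naturally onto an FPGA pipeline.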
Indeed, the flexibility of these devices is now being augmented by both FPGA vendors themselves and third parties. While Xilinx and Altera, for example, offer 2-D DCTs for their FPGAs, other vendors such as Oki Electric are now offering face-detection algorithms as hardware IP. While FPGAs allow camera developers to offload parallel image-processing functions from a host CPU to the camera, their programmability is generally not offered to developers of machine-vision systems.
While FPGAs are well suited to point and windowing operations, they are less suitable for global operations such as image segmentation, where iterative image-processing techniques must be performed. Realizing this, many camera vendors also incorporate von Neumann-style processors, in the form of CPUs and DSPs, in their designs. Inherently pipelined algorithms are then delegated to the FPGA, while global operations are performed on the CPU or DSP.
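To illustrate why such global operations map poorly onto a fixed pipeline, consider the classic isodata thresholding algorithm, which must revisit the entire image an unknown number of times before the threshold stabilizes. This is a minimal sketch for illustration, not production segmentation code:

```python
def isodata_threshold(pixels, tol=0.5):
    """Iteratively refine a global threshold.

    Split the pixels at threshold t, recompute t as the midpoint of the
    two class means, and repeat until the threshold changes by less
    than tol -- a data-dependent loop a CPU or DSP handles easily.
    """
    t = sum(pixels) / len(pixels)
    while True:
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        if not lo or not hi:
            return t
        new_t = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```

The data-dependent iteration count is exactly what a fixed-latency FPGA pipeline cannot easily accommodate.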
For the system developer, the addition of programmable processors allows those familiar with high-level programming languages to port their code from a PC to the smart camera. For the camera vendor, embedded processors allow in-house graphical-user-interface-based software and third-party machine-vision packages to run autonomously on the camera. Both factors bring added benefit to the system integrator.
In its Iris E-Series smart cameras, for example, Matrox offers the Matrox Design Assistant, a flowchart-based integrated development environment (IDE) that allows developers to configure and deploy machine-vision applications without programming. The development environment provides a set of image-analysis and measurement tools, as well as I/O and communication tools. Application development with Matrox Design Assistant follows a visual, step-by-step approach, in which each step is taken from an existing toolbox and set up through a configuration pane. Image processing and analysis, communication, and control operations within the camera are all performed by an embedded Intel Celeron processor.
While companies such as Matrox supply their own development tools, others such as Sony offer smart cameras that can be configured with third-party tools (see Fig. 1). To perform image-processing functions, developers can choose from a variety of third-party software such as NI Vision Builder from National Instruments, Sapera and Sherlock software from Dalsa, VisionPro from Cognex, and Halcon from MVTec Software.
Because the company’s XCI-V3 and XCI-V3/XPE cameras are PC-based, image data can be processed within the camera and the results transmitted to a PC over a network. While the XCI-V3 camera runs Linux, the XCI-V3/XPE features Windows XP Embedded. To process images, both cameras include an AMD Geode GX533 processor, 256 Mbytes of SDRAM, CompactFlash memory, Ethernet, USB, a serial port, and a monitor output.
While many established companies such as Leutron Vision, PPT Vision, and Siemens Energy and Automation also offer smart cameras, other less well-known companies are looking to enter the market. This month, for example, Axtel Machine Vision introduced its Machine Vision Express (MVE) software package, which includes three development platforms: a graphical user interface (GUI) and Python and C++ scripting (see p. 15, this issue). In September, the company will also introduce a range of PC-based cameras running Windows CE, each with digital I/O, a LAN interface, and a hard drive, that directly support the software.
Such smart cameras are proving their worth by replacing combinations of cameras, frame grabbers, and PCs running machine-vision software. In France, for example, system integrator Acyrus has used an mvBlueLYNX intelligent camera from Matrix Vision to automate the process of linking garments (see p. 20, this issue). Based on a traditional sewing machine, the system can be programmed for several linking step sizes, recalculating each linking-step position as the linking process occurs and ensuring that the sewing needle punches each link precisely.
Lighting and illumination
Using software-development tools, many smart cameras can be customized to perform specific machine-vision functions through the vendor’s or third-party GUI software. However, it takes more than an embedded sensor, FPGA, and CPU or DSP to make today’s cameras smart. One of the most important factors in developing any machine-vision application is choosing the correct type of illumination.
Because of this, many smart-camera vendors have incorporated lighting into their cameras, most often in the form of LED ringlights. Brightfield illumination may be useful in a number of applications such as reading barcodes; however, it may not be ideal for machine-vision applications that require darkfield illumination to highlight, for example, scratches on a shiny metallic part. In these applications, it may be necessary to disable the ringlight and use the camera’s digital I/O to trigger other types of illumination.
Today, a number of companies including Cognex and National Instruments offer cameras with built-in illumination (see Vision Systems Design, December 2007, p. 39). Often billed as smart image sensors, these devices are supplied with intuitive GUIs that can be rapidly set up for inspection tasks such as barcode reading and parts-presence applications (see Fig. 2).
At the Eastec trade show (May 2008; West Springfield, MA, USA), Averna Vision & Robotics showed an automated system that combines a robot and a smart camera to automate the deburring and inspection of airfoil blades (see Integration Insights, p. 31, this issue). After deburring the part, the robot presents it to a National Instruments LabVIEW-enabled NI 1722 smart camera. However, rather than using the camera’s embedded ringlight, the system first illuminates the part with a red LED ringlight from Boréal Vision mounted on the front of the smart camera. Once the part is positioned within 6 in. of the camera, this brightfield illumination provides uniform lighting of the sample, and the camera captures an image of the part.
To properly emulate a machine-vision system, such cameras must also perform machine-vision functions autonomously and output images over an analog, digital, or Ethernet-based interface. In addition, the results of machine-vision functions, in the form of pass/fail decisions, must be output over the camera’s digital I/O interface. Indeed, much of the functionality of smart cameras lies in their ability to act as both a vision system and a machine controller.
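As a rough illustration of this dual vision-system/machine-controller role, the loop below couples a trivial inspection result to digital output lines. `DigitalIO`, `run_inspection`, and the mean-intensity check are hypothetical stand-ins for illustration, not a real camera API:

```python
class DigitalIO:
    """Minimal stand-in for a smart camera's digital output lines."""

    def __init__(self):
        self.lines = {}

    def set_line(self, name, state):
        # On real hardware this would drive an opto-isolated output.
        self.lines[name] = state


def run_inspection(frame, io, threshold=128):
    """Inspect one frame and assert a pass or fail output line.

    The mean-intensity check is a placeholder for whatever measurement
    the application actually requires.
    """
    mean = sum(frame) / len(frame)
    passed = mean >= threshold
    io.set_line("PASS", passed)
    io.set_line("FAIL", not passed)
    return passed
```

The point is structural: the same device that computes the result also drives the control signal, with no PC in the loop.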
In the design of its PicSight P34B GigE Vision-compatible camera, for example, Leutron Vision has incorporated a Sony ICX098 696 × 494 CCD imager, a 32-bit RISC processor, and an Ethernet interface. To trigger the camera, developers can use its digital I/O lines and control any lighting using the camera’s built-in digital strobe lines. For specific machine-vision tasks, the company offers an image-acquisition API as well as third-party software support.
Like Leutron Vision, Vision Components offers a range of smart cameras. At VISION 2007 (Stuttgart, Germany), the company showed a color version of its VC4465, a 1-GHz DSP-based camera that features a 768 × 582 CCD, RS-232 and Ethernet interfaces, an encoder interface, and an external trigger input. For machine-vision tasks, the camera also provides a direct video output, four digital PLC inputs, and four digital outputs. Vision Components’ own VCLIB and COLORLIB software libraries are used to program the camera.
In evaluating a smart camera for any application, properly specifying the machine-vision task and the speed required to accomplish it is of prime importance. In some applications, such as color part detection, measuring a change in image contrast may be all that is needed. Such functions are often embedded in the camera and callable from a GUI. Since many machine-vision tasks require only this functionality, products such as the PresencePLUS P4 color series from Banner Engineering or the ZFV-C smart color sensors from Omron may be all that is required to perform image-inspection tasks (see Fig. 3).
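One common contrast measure that such embedded functions might compute is RMS contrast: the standard deviation of pixel intensities normalized by their mean. The sketch below is illustrative only and makes no claim about any particular product's internals:

```python
import math


def rms_contrast(pixels):
    """Return the RMS contrast of a flattened list of pixel intensities.

    Defined as the standard deviation of the intensities divided by
    their mean; a perfectly flat image has zero contrast.
    """
    mean = sum(pixels) / len(pixels)
    if mean == 0:
        return 0.0
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return math.sqrt(var) / mean
```

Comparing this single number against a trained reference is cheap enough to run at line rate on even a modest embedded processor.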
FIGURE 3. In parts-presence applications, products such as the PresencePLUS P4 Color Series from Banner Engineering may be all that is required to perform these image-inspection tasks.
Simple inspection tasks can be performed within the camera at very high inspection rates, but other, more complex applications may require an FPGA/CPU combination to perform functions such as edge detection and geometric pattern matching. In these cases, high-level programs running under Windows or Linux are more often used.
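Edge detection itself is straightforward to sketch. The following deliberately unoptimized pure-Python Sobel gradient-magnitude filter is for illustration only, and hints at why vendors move such neighborhood operations into an FPGA or an optimized library:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with 3 x 3 Sobel kernels.

    img is a list of rows of grayscale values; the result excludes the
    one-pixel border where the kernel does not fit.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            row.append((gx * gx + gy * gy) ** 0.5)
        out.append(row)
    return out
```

Because every output pixel needs nine multiply-accumulates per kernel, the operation is a natural fit for the pipelined hardware discussed earlier, while an interpreted loop like this one would never keep up at camera frame rates.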
Here again, the nature of the operating system itself plays a major role in determining how quickly results can be obtained. Because of this, judging a camera on its hardware specification alone may be next to impossible; only evaluating its algorithms in situ provides a proper measure of how a particular smart camera will perform.