By Andrew Wilson, Editor
When designing machine-vision systems, developers inevitably must trade off hardware and software costs against the specifications of the machine-vision application and project time constraints. To meet application requirements during software design, they often must write their own image-processing functions using C-callable image-processing routines available from third parties and hardware vendors, working within integrated development environments such as Microsoft Visual Studio. Because these packages are relatively inexpensive, developers can reduce their overall system costs at the expense of longer development times.
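As a rough illustration of what such hand-coded functions can look like, the following minimal sketch, which is not drawn from any vendor's library, implements a 3 x 3 smoothing convolution over an 8-bit grayscale image:

    /* A minimal sketch, not drawn from any vendor's library, of the kind of
     * routine a developer might hand-code: a 3 x 3 box-filter smoothing
     * convolution over an 8-bit grayscale image. Border pixels are copied
     * through unchanged for brevity. */
    #include <string.h>

    void box_filter_3x3(const unsigned char *src, unsigned char *dst,
                        int width, int height)
    {
        memcpy(dst, src, (size_t)width * (size_t)height);  /* preserve borders */

        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += src[(y + dy) * width + (x + dx)];
                dst[y * width + x] = (unsigned char)(sum / 9);
            }
        }
    }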
C-based packages
Currently, many C-based image-processing packages are available for host-based PCs running Windows, Linux, or UNIX operating systems. Many of these packages, such as XmegaWave (amiserver.dis.ulpgc.es/xmwgus/), ImageLib (www.dip.ee.uct.ac.za/~brendt/srcdist/), and CLIP (www.cs.huji.ac.il/~moshe/clip/man.html), were developed by universities and are freely available on the Web. A complete list of these packages can be found at www.mathtools.net/C++/Image_Processing. Since many of these software packages are offered as noncommercial products, developers opting to use them should be aware of the limited support available.
Developers weighing frame-grabber-based designs should carefully evaluate the machine-vision and image-processing primitives and drivers supplied with each software product. Whereas many frame-grabber vendors supply drivers for third-party software packages, others bundle their own programming libraries or third-party software with their products. For example, BitFlow (Woburn, MA) offers drivers for off-the-shelf image-processing packages such as Common Vision Blox (CVB) from Stemmer Imaging (Puchheim, Germany), Halcon from MVTec (Munich, Germany), and Image Pro Plus from Media Cybernetics (Silver Spring, MD).
Because these packages are also supported by several hardware companies, systems developers can choose from a number of different hardware and software configurations. Recently, BitFlow announced its own offering, Image Warp, an image-editing, processing, and analysis package that combines a graphic development environment, an image-analysis toolset, and programming development techniques.
Integral Technologies (Indianapolis, IN) also supports Image Pro and Halcon, and its own IVL programming library is provided as a conventional C-callable interface supplied as a dynamic-link library. The software, which was introduced at the Vision Show West (San Jose, CA; November 2002), features image-processing primitives such as color correction, spatial-domain convolution, blob analysis, and gray-scale pattern matching. Although such primitives may not perform every function that developers require, combinations of them can be built into many applications relatively inexpensively.
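To show how a handful of such primitives can be chained into a working inspection step, here is a minimal, self-contained sketch; the routines below are illustrative stand-ins written for this article, not calls into IVL or any other vendor's library:

    #include <stddef.h>

    /* Illustrative stand-ins for library primitives: a fixed threshold and a
     * foreground-pixel count. Not calls into IVL or any other vendor's API. */
    static void threshold_u8(const unsigned char *src, unsigned char *dst,
                             size_t n, unsigned char level)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = (src[i] >= level) ? 255 : 0;
    }

    static size_t count_foreground(const unsigned char *binary, size_t n)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            if (binary[i])
                count++;
        return count;
    }

    /* A crude presence check: the part passes if the segmented region covers
     * at least min_pixels of the image. */
    int part_present(const unsigned char *image, unsigned char *work,
                     size_t width, size_t height, size_t min_pixels)
    {
        size_t n = width * height;

        threshold_u8(image, work, n, 128);            /* segment the part */
        return count_foreground(work, n) >= min_pixels;
    }

A real application would substitute the library's optimized primitives and add steps such as calibration and blob analysis, but the overall structure, a short pipeline of C calls, stays much the same.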
GUI-based design
To reduce the programming development times associated with callable image-processing functions, many hardware vendors offer packages that provide graphically oriented machine-vision and image-processing software. Packages such as MIL from Matrox (Dorval, QC, Canada), DT Vision Foundry from Data Translation (Marlboro, MA), and Sherlock from ipd (Billerica, MA) all combine visual programming environments and imaging libraries to support each company's hardware offerings.
Machine-vision system vendors also offer graphical user interface (GUI) based machine-vision system software to support their products. At the Vision Show West, PPT Vision (Eden Prairie, MN), for example, showed its latest machine-vision system, called Impact, that incorporates a digital frame grabber, an image processor, an I/O controller, and software (see Vision Systems Design, November 2002, p. 39).
Using the company's Inspection Builder software, Impact provides a GUI-based method of developing algorithms that include blob analysis, OCR/OCV, morphology, and image calibration in a drag-and-drop environment (see Fig. 1). The system presents systems integrators with an interesting decision: whether it is more cost-effective to purchase cameras, frame grabbers, and image-processing software separately, to opt for a complete system, or, for that matter, to choose a smart camera.
It's not just system vendors that have recognized this trend. A number of companies now offer sophisticated point-and-click-based software for use with their smart sensor products. With these products, system developers can create image-processing programs on PC-based systems and, by downloading the application to the smart sensor, create embedded machine-vision systems.
Smart trends
Recognizing this trend, DVT (Norcross, GA) offers a number of Smart Sensors that use the company's GUI-based FrameWork software. By using this software, developers can design machine-vision applications on PCs using the FrameWork user interface. Once developed, applications can be downloaded to the company's range of Smart Sensors to perform stand-alone parts inspection, compute results, and communicate them over an Ethernet network.
For those developers who prefer to wait until the code is written and the application is proven before choosing an image-processing hardware platform, some vendors offer PC-development platforms and smart cameras that allow the same software to run in either environment. Such an approach offers developers a flexible and cost-effective way of deploying machine-vision systems.
At both the recent VISION 2002 Show in Stuttgart, Germany, and the Vision Show West, Stemmer Imaging, JAI (Glostrup, Denmark), and Asentics (Siegen, Germany) jointly announced ThinkEye, a PowerPC-based smart camera designed around Stemmer's CVB running under the OSEK operating system (www.osek-vdx.org). Using CVB software on a PC, programmers can develop applications and then, depending on the system performance required, adapt the code to run either on the ThinkEye camera or on a standard platform. Therefore, the same software can solve many application problems using the same development environment. To program the camera-based system, developers use Stemmer's recently introduced iTuition GUI, which links CVB functions graphically on an ergonomically designed user interface (see Fig. 2).
FIGURE 2. Using Stemmer Imaging's Common Vision Blox software on a PC, programmers can develop applications and then, depending on the required system performance, adapt the code to run either on a JAI ThinkEye camera or on a PC. Therefore, the same software can solve many application requirements using the same development environment. To program the camera systems, developers use Stemmer's iTuition GUI, which links the CVB functions graphically on an ergonomically designed user interface.
Allied Vision Technologies (AVT; Stadtroda, Germany) follows a similar concept in its latest generation of TriMedia-based smart cameras, the AVT Genius series. These FireWire-compatible cameras use a 640 × 480 × 8-bit CMOS sensor, on-board SDRAM, digital I/O, and a VGA display output. To program the camera, AVT has developed Rhapsody, a software library for both PCs and Genius cameras that presents the same application programming interface to call identical functions. As a result, programming for either a PC-based system or a camera becomes transparent. Rhapsody also allows other off-the-shelf vision packages, such as the MVTec Halcon image library, to be used in conjunction with both a PC and a smart camera.
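The general idea behind such target-independent libraries can be sketched in a few lines of C. The names and structure below are invented for illustration and do not represent Rhapsody's actual interface; the point is simply that one body of application code is rebuilt for either target:

    /* A hypothetical illustration, with invented names, of one API serving
     * both a host PC and an embedded camera processor: the application is
     * written once and rebuilt for either target. */
    #include <stdio.h>

    #ifdef TARGET_CAMERA
    static const char *target_name = "smart camera";
    #else
    static const char *target_name = "host PC";
    #endif

    /* On a PC this call might read a frame over FireWire; on the camera it
     * would read the sensor directly. Stubbed out here. */
    static int acquire_frame(unsigned char *buf, int width, int height)
    {
        (void)buf; (void)width; (void)height;
        return 0;
    }

    /* Placeholder for whatever measurement the application performs. */
    static int measure_part(const unsigned char *buf, int width, int height)
    {
        (void)buf; (void)width; (void)height;
        return 1;                      /* pretend the part passed */
    }

    int main(void)
    {
        static unsigned char frame[640 * 480];

        acquire_frame(frame, 640, 480);
        printf("Running on %s: part %s\n", target_name,
               measure_part(frame, 640, 480) ? "passed" : "failed");
        return 0;
    }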
At the VISION 2002 Show, AVT showed how the new Genius camera could host a complete machine-vision system built around a number of cameras, including the company's ARM-7-based Dolphin camera and machine-vision cameras from Sony (Park Ridge, NJ). In this way, the Genius camera can act as a host interface to other FireWire cameras, an embedded machine-vision system, or an I/O controller (see Fig. 3).
Interestingly, Stemmer and AVT have used off-the-shelf components, processors, real-time operating systems, and software to lower the time to market of their products and, in the process, lower the cost of embedded machine-vision systems. These products offer users an easy way to perform several machine-vision operations at low cost.
With the increase in processing capability of PCs, RISC processors, and DSPs; faster DDR memory; and highly integrated gate arrays, developers may soon be able to run different software packages and environments embedded in smart cameras.
Vision appliances
Unlike these types of smart-camera products, the latest "vision appliances" from ipd (Billerica, MA), the intelligent products division of Coreco Imaging (St. Laurent, QC, Canada), aim at offering "out-of-the-box" solutions. These appliances hide the complexity of even GUI-based programming behind an intuitive interface to solve specific vision applications. At the Vision Show West, ipd showed its initial offering, iGauge, an embedded stand-alone image-processing system tethered to a remote camera.
The iGauge features an easy-to-use, point-and-click interface that allows users to implement gauging applications quickly. The appliance guides developers through a series of setup screens that permit previously configured solutions to be loaded, sensor focus to be adjusted, brightness and contrast to be calibrated, and gauging tasks to be defined by making measurements on automatically extracted edge points that become visible as the mouse is moved across an image (see Fig. 4).
FIGURE 4. The iGauge appliance features an easy-to-use, point-and-click interface that allows users to implement gauging applications quickly. During operation, the sensor is programmed by guiding the user through a series of setup screens that allow a previously configured solution to be loaded, sensor focus to be adjusted, brightness and contrast to be calibrated, and gauging tasks to be defined by making measurements on automatically extracted edge points that become visible as the mouse is moved across an image.
Pass, fail, and warning tolerances for each measurement can be loaded and decisions made via a simple table based on user-defined inspection criteria. On the company's Web site, an alternate version of the system is shown as a fully programmable smart camera. Future products will be targeted at such markets as label inspection and OCR. However, whether this price/performance/product mix is viable in today's low-cost machine-vision market remains to be seen.
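A minimal sketch in C of such a tolerance table, with invented field names and values, shows how each measurement can be graded and the worst individual grade promoted to the overall decision:

    /* A hypothetical tolerance table: each measurement is graded pass, warning,
     * or fail against user-defined bands, and the worst grade becomes the
     * overall result. Field names and values are invented for illustration. */
    #include <stdio.h>

    enum result { PASS = 0, WARN = 1, FAIL = 2 };

    struct tolerance {
        const char *name;
        double nominal;      /* target value                                 */
        double pass_band;    /* |measured - nominal| <= pass_band -> pass    */
        double warn_band;    /* |measured - nominal| <= warn_band -> warning */
    };

    static enum result grade(const struct tolerance *t, double measured)
    {
        double err = measured - t->nominal;
        if (err < 0)
            err = -err;
        if (err <= t->pass_band)
            return PASS;
        if (err <= t->warn_band)
            return WARN;
        return FAIL;
    }

    int main(void)
    {
        static const struct tolerance table[] = {
            { "hole diameter (mm)", 5.00, 0.05, 0.10 },
            { "edge length (mm)",  25.00, 0.20, 0.40 },
        };
        double measured[] = { 5.03, 25.45 };   /* example gauging results */
        enum result worst = PASS;

        for (int i = 0; i < 2; i++) {
            enum result r = grade(&table[i], measured[i]);
            if (r > worst)
                worst = r;
            printf("%s: %s\n", table[i].name,
                   r == PASS ? "pass" : r == WARN ? "warning" : "fail");
        }
        printf("overall: %s\n",
               worst == PASS ? "pass" : worst == WARN ? "warning" : "fail");
        return 0;
    }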
Will factory-automation developers be prepared to pay $4000 for a system that performs a single function? Or will they opt for more-programmable GUI-based systems, such as PPT's Impact, that cost a few thousand dollars more? Or is the future destined to be in the hands of lower-cost GUI-based smart cameras that perform multiple functions? At present, lower pricing appears to be attracting a stronger market following.