Smart cameras simplify systems integration tasks

Sept. 1, 2018
Combining lighting, imagers, software and I/O, smart cameras are easing the task of machine vision system design.


Some complex machine vision tasks may require sophisticated lighting and lighting placement, customized software, multi-megapixel or high-speed cameras, and elaborate I/O. In less complex applications, however, such as color analysis, barcode reading, verification, presence/absence and flaw detection, systems developers can leverage the power of smart sensors and smart cameras that integrate lighting, image sensing, vision software and I/O capability into a single unit.

In such products, images are not transferred to a PC but are analyzed by the processor embedded in the camera. This saves systems developers time and money, since no frame grabber or host PC is required in the final imaging station and only the results of image analysis are transferred to a host PLC or PC.
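This division of labor, in which images stay on the camera and only compact results travel to the PLC or PC, can be sketched in plain Python. This is an illustrative sketch only; the function and field names are hypothetical and do not correspond to any vendor's API:

```python
# Hypothetical sketch of on-camera analysis: the embedded processor
# inspects each frame and transmits only a small result record,
# never the full image.

def analyze_frame(pixels, threshold=128):
    """Simple presence/absence check: is mean brightness above a threshold?
    `pixels` is a 2D list of 8-bit grayscale values."""
    total = sum(sum(row) for row in pixels)
    count = sum(len(row) for row in pixels)
    mean = total / count
    # Only this compact dict would be sent to the host PLC/PC.
    return {"present": mean > threshold, "mean_brightness": round(mean, 1)}

dark_frame = [[10, 12], [9, 11]]       # empty fixture
bright_frame = [[200, 210], [190, 205]]  # part in place
print(analyze_frame(dark_frame))    # part absent
print(analyze_frame(bright_frame))  # part present
```

A real smart camera would run a loop like this on every trigger and push the result over its fieldbus or Ethernet I/O, but the principle is the same: bandwidth and host-side hardware are only needed for results, not raw images.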

Furthermore, such vision sensors and smart cameras reduce development effort because they can be configured using off-the-shelf software. Better still, because of their I/O capability, some of these products can easily be reconfigured with additional I/O and display capability.

Although such imaging devices are now offered by a variety of vendors, not all are created equal; they can be broadly grouped into several categories. First are what are generally termed vision sensors, typically offered by companies that allow only their own proprietary software to be used with the devices. With vision sensors, systems developers can configure a machine vision application, often from a graphical user interface (GUI)-based menu supplied by the vision sensor vendor.

While at first this might seem limiting, it allows such vendors to maintain greater control over their hardware/software offerings, making customer support easier. Better still, many vision sensor functions, such as focusing, can be performed automatically, while others, such as gain, exposure time and illumination, can be adjusted with one-touch controls.

Finally, companies that supply these vision sensors tailor their software around a selection of the most commonly used machine vision tools, performing tasks such as presence/absence detection, color analysis, part measurement and optical character recognition and verification (OCR/OCV).

Sensors and cameras

Many vendors now offer such vision sensors, the most well-known being Baumer (Frauenfeld, Switzerland; www.baumer.com), Banner Engineering (Minneapolis, MN, USA; www.bannerengineering.com), Cognex (Natick, MA, USA; www.cognex.com), Datalogic (Bologna, Italy; www.datalogic.com), Keyence (Osaka, Japan; www.keyence.com) and Omron (Kyoto, Japan; www.ia.omron.com).

Each company supplies its vision sensors with its own GUI-based software so that they can be set up without requiring any programming by the developer. These include the VeriSens Application Suite for Baumer's VeriSens vision sensors (Figure 1); a scroll-down menu interface for Banner Engineering's iVu TG, which is programmed using a separate remote display; Cognex's EasyBuilder interface for its In-Sight 2000 vision sensors; and the DataVS2 GUI-based software for Datalogic's DataVS2 sensor. Keyence also offers a GUI-based interface for its IV Series vision sensors, while Omron employs its Sysmac Studio software with its latest FQ-M Series vision sensor, designed for motion tracking of parts.

Figure 1: Companies that supply vision sensors offer GUI-based software so they can be set up without requiring any programming by the developer. These include the VeriSens Application Suite from Baumer.

With the understanding that such products may be deployed by those somewhat unfamiliar with vision, manufacturers such as Omron have produced intuitive guides, user manuals and application notes with which to support developers. For its FQ-M Series vision sensor, for example, the company has an online description of the fundamentals of vision sensors (http://bit.ly/VSD-OMR) and an application note showing how to interface the sensor to a robotic system (http://bit.ly/VSD-FQM).

Added functionality

Today, the distinction between a vision sensor and a smart camera is somewhat blurred, since a high-end vision sensor may appear similar to a smart camera in terms of processing power, triggering, I/O and networking capability. Generally, however, manufacturers of smart cameras offer products with higher-performance processors and with both area- and line-scan imagers, making such smart cameras useful in both part inspection and web inspection applications.

Coupled with more sophisticated machine vision software, smart cameras can be configured to perform more complex tasks such as geometric pattern matching and feature extraction, as well as presence/absence detection, color analysis, part measurement and OCR/OCV, often using GUI-based menus that require no programming.
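As a minimal illustration of one of the tools named above, a color-analysis check might classify a region of interest by its dominant channel. The sketch below uses only plain Python with hypothetical names; it stands in for what a smart camera's GUI-configured color tool computes internally, not any vendor's implementation:

```python
# Illustrative color-analysis tool: classify an RGB region by its
# dominant channel (e.g. to sort parts by cap color on a filling line).

def dominant_color(region):
    """`region` is a list of (r, g, b) tuples; returns 'red', 'green' or 'blue'
    according to which channel has the highest mean value."""
    n = len(region)
    means = [sum(px[ch] for px in region) / n for ch in range(3)]
    return ("red", "green", "blue")[means.index(max(means))]

# A small patch sampled from a predominantly red cap
cap = [(200, 30, 20), (210, 40, 25), (190, 35, 30)]
print(dominant_color(cap))  # red
```

In a smart camera, the operator typically just draws the region and picks a reference color in the GUI; logic of this kind then runs on every acquired frame.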

Like vision sensor vendors, many smart camera companies offer closed systems that can be used only with their own machine vision software. This eases technical support, since only a single software package needs to be supported.

Needless to say, to enhance their range of products, all the companies that offer vision sensors also offer smart cameras. Datalogic's P1-x Series, A Series and T Series of smart cameras, for example, are supplied with the company's IMPACT machine vision software. In some cases, however, a company may wish to employ its own proprietary code within such software packages.

In such cases, third-party or open-source code tailored to the application must be added. For this reason, a number of smart camera companies both provide their own software and allow third-party or open-source machine vision software to run on their products. Such smart cameras have evolved from companies that traditionally offered frame grabbers, embedded machine vision systems and machine vision software. By using cameras that offer this extra flexibility, systems developers can leverage the power of off-the-shelf, often GUI-driven software and, if necessary, add their own code or code from open-source or third-party vendors.

While supporting this software then remains the responsibility of the systems integrator, such approaches do allow proprietary products to be developed around standard off-the-shelf image processing packages. Matrox Imaging (Dorval, QC, Canada; www.matrox.com), National Instruments (Austin, TX, USA; www.ni.com), Omron Microscan Systems (Renton, WA, USA; www.microscan.com), Teledyne DALSA (Waterloo, ON, Canada; www.teledynedalsa.com) and Vision Components (Ettlingen, Germany; www.vision-components.com) all offer products that can be used with their own machine vision software and, in some cases, can be enhanced with proprietary algorithms.

For example, Matrox’s Iris GTR smart cameras can be used with the company’s Matrox Design Assistant, an extendable integrated development environment (IDE). Running Windows Embedded 7 or Linux, Matrox Iris GTR offers a PC-like environment for applications. Application development can be performed using the Matrox Imaging Library (MIL) software development kit (SDK) that supports C# and Visual Basic compilation and CPython scripting; code can then be executed from within a MIL-based application.

Similarly, the ISC Series of smart cameras from National Instruments is supplied with the company's Vision Builder for Automated Inspection (AI) software, and the cameras can be programmed with NI's LabVIEW Real-Time Module and Vision Development Module. Teledyne DALSA's latest smart camera series, the BOA, is offered with both the company's Sherlock and iNspect Express vision software, which allow custom in-line and background scripting.

Recognizing that smart cameras can alleviate many machine vision systems integration tasks, some companies leverage the power of third-party software packages and open-source code from a range of established machine vision software vendors.

For its mvBlueGEMINI smart camera, for example, Matrix Vision (Oppenweiler, Germany; www.matrix-vision.com) offers its mvIMPACT Configuration Studio menu-based software pre-installed. Based on the HALCON image processing library from MVTec (Munich, Germany; www.mvtec.com), applications can be developed using individual tools to, for example, acquire images and find objects within them.

Likewise, SynView software running on the CORESIGHT smart camera from New Electronic Technology (NET; Finning, Germany; www.net-gmbh.com) allows software packages such as Adaptive Vision Studio from Adaptive Vision (Gliwice, Poland; www.adaptive-vision.com), HALCON or NI's LabVIEW, and open-source packages such as OpenCV, to be deployed.
Alternatively, developers can choose cameras such as the NEON-1021 from ADLINK Technology (New Taipei City, Taiwan; www.adlinktech.com), which supports a number of software packages including Common Vision Blox from Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.com), VisionPro from Cognex, MIL from Matrox Imaging and Sherlock from Teledyne DALSA, as well as open-source packages such as OpenCV (https://opencv.org). The NEON-1021-M features MERLIC from MVTec, which allows integrators to create dedicated machine vision applications with no programming via a user-oriented, image-centered interface designed to simplify development of alignment, inspection, measurement and recognition applications.

Needless to say, any problems encountered in configuring such products with third-party software to perform specific image processing or machine vision tasks will fall more to the systems developer than to the camera or software supplier.

Another dimension

As well as products that support area- and line-scan sensor configurations, a new class of smart camera has emerged that allows three-dimensional measurements to be taken. In the past, this computationally intensive task was typically handled by stereoscopic, light-pattern-projection or time-of-flight (TOF) cameras that transferred image data to a PC, which generated the 3D point clouds and allowed measurements to be made on the data.
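The stereo technique mentioned above recovers depth from the disparity between matched points in the left and right views: for a calibrated pair with focal length f (in pixels) and baseline B, depth is Z = f·B/d for a disparity of d pixels. The sketch below uses illustrative numbers only, not the calibration of any particular camera:

```python
# Pinhole stereo depth model: Z = f * B / d.
# Larger disparity means the point is closer to the camera pair.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return depth in meters for a given pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 640 px, baseline = 5 cm
f, B = 640.0, 0.05
for d in (64.0, 32.0, 8.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(f, B, d):.2f} m")
```

The computationally expensive part of stereo vision is not this formula but finding the matching points that yield d for every pixel, which is exactly the work that smart 3D cameras now perform on board.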

Figure 2: Intel's RealSense D435 stereo camera uses active infrared (IR) to capture images across an 85.2° (H) x 58° (V) field of view (FOV) at distances of up to 10 m. An on-board RealSense D4 vision processor processes raw image data from the cameras and computes 3D depth maps.

Now, such tasks can be performed in smart 3D cameras that, like their 2D counterparts, remove the need for host PC-based processing. As an example of a smart stereo camera, the RealSense D435 from Intel (Santa Clara, CA, USA; www.intel.com) uses active infrared (IR) to capture images across an 85.2° (H) x 58° (V) field of view (FOV) at distances of up to 10 m. At the heart of the camera is Intel's own RealSense vision processor D4, which processes raw image data from the cameras and computes 3D depth maps without the need for a dedicated GPU or host processor. This data can then be transferred to a PC using the camera's USB3 interface (Figure 2).

Rather than using a stereo technique to generate 3D depth maps, the PhoXi family of 3D cameras from Photoneo (Bratislava, Slovakia; www.photoneo.com) uses laser pattern projection. In this method, a fringe pattern is created with several wave patterns of shifting phase relationship (see "Active pattern projection improves AOI 3D measurement accuracy," Vision Systems Design, February 2018; http://bit.ly/VSD-3DPS). In the design of its PhoXi family, Photoneo employs a Jetson processor from NVIDIA (Santa Clara, CA, USA; www.nvidia.com) to perform 3D imaging and transfers the resulting data over a GigE interface to a host computer (Figure 3).

The camera has been used by ROMI Industrial Systems (Trnava, Slovakia; www.romi-is.com) in a bin picking application that uses Photoneo's localization software development kit (SDK) to find part positions and Photoneo's path planner to guide a UR5 robot from Universal Robots (Odense, Denmark; www.universal-robots.com) to pick and place injection molded parts. A video of the system in operation can be found at: http://bit.ly/VSD-BNP.

Figure 3: ROMI Industrial Systems has used a 3D camera from Photoneo in a bin picking application to guide a UR5 robot from Universal Robots to pick and place injection molded parts.

Smart 3D cameras that use the time-of-flight principle are also available from companies such as Advanced Scientific Concepts (Santa Barbara, CA, USA; www.advancedscientificconcepts.com). In the company's TigerCub 3D, a 128 x 128-pixel focal plane array captures light as it is pulsed and reflected from an object. Because the light source and image acquisition are synchronized, distances can then be extracted from the returned data.
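The underlying arithmetic of pulsed time-of-flight is simple: the pulse travels to the target and back, so range equals the speed of light times the round-trip time, divided by two. The numbers below are illustrative only, not TigerCub specifications:

```python
# Pulsed time-of-flight ranging: range = c * t / 2,
# where t is the measured round-trip time of the light pulse.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Return target range in meters for a measured round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A target roughly 10 m away returns the pulse after about 66.7 ns
print(f"{tof_distance(66.7e-9):.2f} m")
```

The nanosecond timescales involved are why TOF cameras need tightly synchronized illumination and acquisition, and why each pixel of the focal plane array must effectively act as its own high-speed timer.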

Here again, an on-board processor performs the image processing, allowing the camera to output 3D point cloud and intensity information over an Ethernet interface. Whether based around area or line-scan sensors, or employing stereo, light pattern projection or TOF 3D imaging techniques, such smart cameras with embedded image processing software are saving systems developers time and money in configuring machine vision systems.

In the future, the rapid integration of even greater image capture, processing and display functionality by semiconductor vendors will result in even less expensive smart cameras appearing on the commercial market, resulting in an increasingly large number being deployed in machine vision and embedded vision applications.

Companies mentioned

Adaptive Vision

Gliwice, Poland

www.adaptive-vision.com

ADLINK Technology

New Taipei City, Taiwan

www.adlinktech.com

Advanced Scientific Concepts

Santa Barbara, CA, USA

www.advancedscientificconcepts.com

Banner Engineering

Minneapolis, MN, USA

www.bannerengineering.com

Baumer

Frauenfeld, Switzerland

www.baumer.com

Cognex

Natick, MA, USA

www.cognex.com

Datalogic

Bologna, Italy

www.datalogic.com

Intel

Santa Clara, CA, USA

www.intel.com

Keyence

Osaka, Japan

www.keyence.com

Matrix Vision

Oppenweiler, Germany

www.matrix-vision.com

Matrox Imaging

Dorval, QC, Canada

www.matrox.com

MVTec

Munich, Germany

www.mvtec.com

National Instruments

Austin, TX, USA

www.ni.com

New Electronic Technology (NET)

Finning, Germany

www.net-gmbh.com

NVIDIA

Santa Clara, CA, USA

www.nvidia.com

Omron

Kyoto, Japan

www.ia.omron.com

Omron Microscan Systems

Renton, WA, USA

www.microscan.com

OpenCV

https://opencv.org

Photoneo

Bratislava, Slovakia

www.photoneo.com

ROMI Industrial Systems

Trnava, Slovakia

www.romi-is.com

Stemmer Imaging

Puchheim, Germany

www.stemmer-imaging.com

Teledyne DALSA

Waterloo, ON, Canada

www.teledynedalsa.com

Universal Robots

Odense, Denmark

www.universal-robots.com

Vision Components

Ettlingen, Germany

www.vision-components.com

About the Author

Andy Wilson | Founding Editor

Founding editor of Vision Systems Design. Industry authority and author of thousands of technical articles on image processing, machine vision, and computer science.

B.Sc., Warwick University

Tel: 603-891-9115
Fax: 603-891-9297
