3D expands the dimensions of vision systems

Jan. 14, 2016
By lowering the cost of 3D components, OEM vendors are offering new opportunities for developers of vision systems

Historically, the first 3D vision systems were based on two cameras and triangulation techniques, but such systems now also include those based on single cameras, stereo cameras, structured light, pattern projection and time-of-flight imaging techniques. Although the concept of dual-camera stereo imaging is probably the most well-known of these, single-camera systems can also be used to render 3D images.

However, since the cameras used in dual-camera systems are often pre-calibrated, a more accurate rendering of 3D geometry can be achieved. In either case, generating a 3D image is accomplished by triangulation, a process that requires identifying pixels in a scene imaged from one perspective that correspond to the same point in the scene observed from another perspective.

This so-called correspondence problem can be solved using a number of different algorithms. For its range of Bumblebee stereo cameras, for example, Point Grey (Richmond, BC, Canada; www.ptgrey.com) offers PC-based software that first rectifies each pair of stereo images to remove lens distortion. A Laplacian of Gaussian filter is then used to detect edges within both images and a sum of absolute differences (SAD) algorithm is used to find corresponding pixels in the stereo image pair. Once these are identified, triangulation can be used to generate a disparity map of the scene (Figure 1).

Figure 1: For its range of Bumblebee stereo cameras, Point Grey offers PC-based software that uses a Laplacian of Gaussian filter to detect edges within both images and a sum of absolute differences (SAD) algorithm to find corresponding pixels in the stereo image pair. Once these are identified, triangulation is used to generate a disparity map.
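
To illustrate how such a rectify/filter/match pipeline produces a disparity map, the following sketch uses OpenCV's SAD-based block matcher; this is not Point Grey's software, and the file names, filter size and calibration values (f_px, baseline_m) are hypothetical.

```python
import cv2
import numpy as np

# Sketch of a SAD-based stereo pipeline on a pre-rectified image pair
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # assumed rectified right image

# Approximate a Laplacian of Gaussian by Gaussian smoothing followed by a Laplacian
def log_filter(img, ksize=5):
    blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)
    return cv2.Laplacian(blurred, cv2.CV_8U)

left_f, right_f = log_filter(left), log_filter(right)

# Block matching minimizes the sum of absolute differences over a sliding window;
# numDisparities sets the search range, blockSize the SAD window size
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left_f, right_f).astype(np.float32) / 16.0  # fixed-point to pixels

# With a calibrated focal length f (pixels) and baseline b (meters), triangulation
# gives depth Z = f * b / disparity for every pixel with a valid match
f_px, baseline_m = 700.0, 0.12  # hypothetical calibration values
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = f_px * baseline_m / disparity[valid]
```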

Single camera systems

Just as such disparity maps can be generated from dual-camera systems, single-camera systems can also be used to generate depth maps using a technique called structure from motion (SfM). Here, images captured by the camera from two different positions are compared and the correspondence between them found.

Once again, feature points between two (or more) images must first be found using feature detectors such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF) or optical flow techniques. Once features are identified, they can be tracked using algorithms such as the Lucas-Kanade tracker, and the extracted feature trajectories used to reconstruct both the camera motion and the features' 3D positions. Daniel Lélis Baggio and his colleagues show how this can be accomplished in OpenCV in "Exploring Structure from Motion Using OpenCV," Chapter 4 of the book "Mastering OpenCV with Practical Computer Vision Projects" (see http://bit.ly/1PCtHRP).
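
A minimal sketch of these tracking and reconstruction steps using OpenCV is shown below; it is not the book's code, and the frame file names and camera intrinsics (K) are hypothetical.

```python
import cv2
import numpy as np

# Two frames taken from different camera positions (hypothetical files)
frame0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect corner features in the first frame (SIFT or SURF could be used instead)
pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=500, qualityLevel=0.01, minDistance=7)

# Track the features into the second frame with pyramidal Lucas-Kanade optical flow
pts1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None)
good0 = pts0[status.ravel() == 1].reshape(-1, 2)
good1 = pts1[status.ravel() == 1].reshape(-1, 2)

# With known intrinsics K (hypothetical values), recover the relative camera motion
# from the epipolar geometry, then triangulate the tracked points into 3D
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, good0, good1, K, mask=mask)

P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P1 = K @ np.hstack([R, t])                          # second camera from recovered motion
points4d = cv2.triangulatePoints(P0, P1, good0.T, good1.T)
points3d = (points4d[:3] / points4d[3]).T           # homogeneous to Euclidean 3D points
```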

In his October 2015 webcast "Understanding applications and methods of 3D imaging," Dr. Daniel Lau, Professor of Electrical and Computer Engineering at the University of Kentucky (Lexington, KY, USA; www.uky.edu), showed how off-the-shelf software such as Pix4Dmapper Pro from Pix4D (Lausanne, Switzerland; www.pix4d.com) and PhotoScan from Agisoft (St. Petersburg, Russia; www.agisoft.com) can be used to generate 3D models from single cameras mounted to drones (http://bit.ly/1Yrzzk3).

Such software proves particularly useful in applications such as roof and power line inspection, where small-format microbolometer-based IR cameras such as the Vue Pro from FLIR Systems (Wilsonville, OR, USA; www.flir.com) can be mounted to drones. After the camera records image data to its on-board micro-SD card, the data can be processed to recreate a 3D model of the scene.

Shape from shading

Structure from motion is not the only technique that can be used to reconstruct a 3D image using a single camera. In shape from shading techniques, the object is illuminated from several different directions and the resulting images are captured by a single camera. From the differences in shading between these images, a surface map can be calculated that shows both the surface orientation and the texture of the illuminated object.
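
A minimal sketch of this idea, formulated as classic photometric stereo under the assumption of a Lambertian surface and known light directions (not any particular vendor's algorithm), is shown below.

```python
import numpy as np

def photometric_stereo(I, L):
    """Recover per-pixel surface normals and albedo from shading differences.

    I: stack of k grayscale images lit from different directions, shape (k, H, W)
    L: the k unit light-direction vectors, shape (k, 3)
    """
    k, H, W = I.shape
    pixels = I.reshape(k, -1)                             # (k, H*W) intensities
    G, _, _, _ = np.linalg.lstsq(L, pixels, rcond=None)   # least-squares solve of L @ G = I
    G = G.T.reshape(H, W, 3)                              # per-pixel normal scaled by albedo
    albedo = np.linalg.norm(G, axis=2)
    normals = G / np.maximum(albedo[..., None], 1e-8)     # unit surface normals
    return normals, albedo
```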

To date, a number of shape from shading systems have been developed by companies such as In-situ (Sauerlach, Germany; www.in-situ.de), Keyence (Osaka, Japan; www.keyence.com) and Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de). In the development of its DotScan system, which is used to evaluate the Braille dot patterns on pharmaceutical cartons, for example, In-situ uses four blue LED telecentric light sources placed above the object to illuminate the carton from four directions. A 1.3-Mpixel UI-1540SE-M USB uEye industrial camera from IDS Imaging Development Systems (Obersulm, Germany; www.ids-imaging.com) then captures individual images and shape from shading is used to check whether the raised dots have the correct shape (see "Machine vision checks Braille code on drug packages," Vision Systems Design, January 2010; http://bit.ly/1lg7e1t).

Keyence also uses this technique in its LumiTrax system, in which multiple images are taken with illumination from different directions and the changes in light intensity of each pixel among the different images are analyzed to split shapes and textures into separate images (Figure 2). A number of different application examples of the shape from shading technique can be found on the company's website at http://bit.ly/1kTrG8K.

Figure 2: In Keyence's LumiTrax system, multiple images are taken with illumination from different directions and the changes in light intensity of each pixel among the different images are analyzed to split shapes and textures into separate images.

For its part, Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) offers its Trevista shape from shading system that integrates a vision system, dome-shaped illumination system and industrial PC. Optical 3D shape measurement is performed on topographical relief images and texture images using software integrated into Sherlock from Teledyne DALSA (Waterloo, ON, Canada; www.teledynedalsa.com) and Stemmer Imaging's own Common Vision Blox (CVB).

Structured light

Just as stereo camera-based vision systems rely on triangulation to obtain 3D images, so too do structured laser-based configurations. In such systems, a narrow band of light is projected onto a 3D surface to produce a line of illumination that appears distorted when imaged from an observation perspective other than that of the laser. Analyzing the shape of these reflected lines then allows the object's surface shape to be geometrically reconstructed.
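
A simplified sketch of this geometry is shown below; it assumes the laser line appears as the brightest row in each image column and that the camera-to-laser angle and pixel scale are known (the parameter names and values here are hypothetical, not any vendor's calibration).

```python
import numpy as np

def profile_from_image(img, ref_row, mm_per_px, laser_angle_deg):
    """Convert one image of the projected laser line into a height profile.

    img: grayscale frame with the laser line visible; returns one height per column.
    """
    rows = np.argmax(img, axis=0).astype(np.float64)  # brightest row in each column
    displacement_px = rows - ref_row                  # line shift caused by object height
    # For a camera viewing the laser plane at angle theta, height ~= displacement / tan(theta)
    theta = np.deg2rad(laser_angle_deg)
    return displacement_px * mm_per_px / np.tan(theta)

# Sweeping the object (or the camera/laser pair) and stacking successive profiles
# yields the point cloud representing the object's surface.
```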

However, as Wallace Latimer, Product Line Manager, Machine Vision, Coherent (Santa Clara, CA, USA; www.coherent.com) points out, such laser line projection systems can be implemented in several different ways, each of which has its own unique characteristics, advantages and disadvantages (see "Understanding laser-based 3D triangulation methods," Vision Systems Design, June 2015; http://bit.ly/1IaG72x).

To generate this structured laser light, companies such as Coherent, Osela (Lachine, QC, Canada; www.osela.com), ProPhotonix (Salem, NH, USA; www.prophotonix.com) and Z-LASER (Freiburg, Germany; www.z-laser.com) all offer lasers in a number of different wavelengths and line widths. As the object moves through the field of view, or the camera/laser pair scans across the object, the captured images are used to generate a point cloud that represents the external surface of the object.

Lasers and cameras used in these systems can either be configured separately using off-the-shelf lasers and cameras or purchased as preconfigured, integrated systems such as the 3D Stinger scanner from Tordivel (Oslo, Norway; www.scorpionvision.com), the Gocator series of 3D scanners from LMI Technologies (Delta, BC, Canada; http://lmi3d.com) and the IVC-3D series from SICK (Waldkirch, Germany; www.sick.com). Such pre-configured systems save the systems integrator time since they are pre-calibrated by the manufacturer and offered with off-the-shelf 3D reconstruction software.

Low texture

While passive stereo vision is widely used in robotic applications, stereo matching will fail if areas of low texture are imaged. To overcome this, a number of manufacturers have developed products that project a structured laser pattern onto the object, thus providing reference points for the stereo cameras.

In this way, a more accurate 3D reconstruction can be accomplished. Here, the choice of which pattern to use depends on which provides the best correspondence between features of the two stereo images. In the Scorpion 3D Stinger RPP laser camera from Tordivel, both an IR (830nm) and a red (660nm) random pattern projection laser from Osela are used to illuminate the object, which is imaged by two XCG GigE cameras from Sony (Tokyo, Japan; https://pro.sony.com). This allows the system to capture both a 2D image and a 3D image set illuminated with the laser pattern.

IDS Imaging Development Systems uses the same approach in its Ensenso stereo 3D camera system, which has been deployed by bsAutomatisierung (Rosenfeld, Germany; http://bsautomatisierung.de) to automatically pick individual, randomly aligned parts from a container (see "3D vision system assists in robotic bin picking," Vision Systems Design; http://bit.ly/1qWBIUB).

Fringe projection

A variation of structured lighting known as digital fringe projection uses a projector to project a series of phase-shifted sinusoidal fringe patterns onto an object and a camera to image the reflected, distorted patterns. In a presentation entitled "High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques," Laura Ekstrand and her colleagues at the Machine Vision Laboratory in the Department of Mechanical Engineering at Iowa State University (Ames, IA, USA; www.iastate.edu) explain the theory behind digital fringe projection and how it can be used to generate 3D point clouds (http://bit.ly/21fZUnx).
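
The core of the method can be illustrated with the standard three-step phase-shifting relation (a textbook formulation, not the authors' specific code): from three fringe images shifted by 120 degrees, the wrapped phase at each pixel is recovered as shown below.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Recover the wrapped phase from three fringe images shifted by -120, 0 and +120 degrees."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# The wrapped phase (-pi..pi) is then unwrapped and, via calibration of the
# camera-projector pair, mapped to a depth value at every pixel to form a point cloud.
```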

Today, numerous products exist based on this technology, many of which use digital light projectors from Texas Instruments (TI; Dallas, TX, USA; www.ti.com) in their designs (see "Choosing a 3D vision system for automated robotics applications," Vision Systems Design, December 2014; http://bit.ly/1BUQaFw).

In its AreaScan3D camera, for example, VRmagic (Mannheim, Germany; www.vrmagic.com) uses TI's DLP pico projector and transfers image data from an embedded camera over an Ethernet interface (Figure 3). 3D image data can then be reconstructed using software such as Stemmer Imaging's CVB or HALCON from MVTec (Munich, Germany; www.mvtec.com).

Figure 3: In its AreaScan3D camera, VRmagic uses TI's DLP pico projector and transfers image data from an embedded camera over an Ethernet interface. 3D image data can then be reconstructed using software such as Stemmer Imaging's CVB or HALCON from MVTec.

Time of flight

3D time-of-flight (TOF) cameras illuminate the object to be imaged using a light source that is either pulsed or modulated as a continuous wave. In the pulsed approach, the short time interval measured between the emitted and reflected light is used to compute depth information. In continuous-wave systems, the phase shift between the emitted light and the reflected light is measured and the distance to the object then computed.
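
The underlying depth relations for both approaches are straightforward; the snippet below shows the textbook equations only, not any particular camera's processing chain.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pulsed_tof_distance(round_trip_time_s):
    # Pulsed mode: the light travels to the object and back, so distance = c * t / 2
    return C * round_trip_time_s / 2.0

def cw_tof_distance(phase_shift_rad, modulation_freq_hz):
    # Continuous-wave mode: the phase shift phi between emitted and reflected light
    # maps to distance as d = c * phi / (4 * pi * f_mod),
    # with an unambiguous range of c / (2 * f_mod)
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# For example, a 10ns round trip corresponds to about 1.5m, and a phase shift of
# pi/2 at a 30MHz modulation frequency corresponds to about 1.25m.
```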

Today, a number of companies have developed products based around these technologies that incorporate area array detectors, allowing both distance and image intensity information to be captured simultaneously. odos imaging (Edinburgh, Scotland; www.odos-imaging.com), for example, has chosen a pulsed technique in its real.iZ-1K-VS vision system, in which pulses of light reflected by objects within the scene are detected by a 1280 x 1024 CMOS imager and range and intensity information is captured. For its ToF-6m TOF camera, Basler (Ahrensburg, Germany; www.baslerweb.com) uses pulsed NIR LEDs to illuminate a scene. Reflected images are then captured using a 640 x 480 CCD imager from Panasonic (Osaka, Japan; www.panasonic.com), allowing both 2D images to be captured and depth information to be computed.

While odos imaging and Basler TOF cameras use pulsed illumination, companies such as PMD Technologies (Siegen, Germany; www.pmdtec.com) use continuous-wave (CW) techniques. Using PMD Technologies' PhotonICs 19k-S3 time-of-flight imager, Bluetechnix (Wien, Austria; www.bluetechnix.at) has developed the Argos 3D-P100, a TOF camera that illuminates a scene using IR light and captures 160 x 120-pixel depth map data at up to 160fps, delivering depth information and gray-value image data for each pixel simultaneously (Figure 4).

Figure 4: Bluetechnix's Argos 3D-P100 TOF camera illuminates a scene using IR light and captures 160 x 120-pixel depth map data at up to 160fps, delivering depth information and gray-value image data for each pixel simultaneously.

Companies mentioned

Agisoft
St. Petersburg, Russia
www.agisoft.com

Basler
Ahrensburg, Germany
www.baslerweb.com

bsAutomatisierung
Rosenfeld, Germany
http://bsautomatisierung.de

Bluetechnix
Wien, Austria
www.bluetechnix.at

Coherent
Santa Clara, CA, USA
www.coherent.com

IDS Imaging Development Systems
Obersulm, Germany
www.ids-imaging.com

In-situ
Sauerlach, Germany
www.in-situ.de

Iowa State University
Ames, IA
www.iastate.edu

Keyence
Osaka, Japan
www.keyence.com

LMI Technologies
Delta, BC, Canada
http://lmi3d.com

MVTec
Munich, Germany
www.mvtec.com

odos imaging
Edinburgh, Scotland
www.odos-imaging.com

Osela
Lachine, QC, Canada
www.osela.com

Panasonic
Osaka, Japan
www.panasonic.com

Pix4D
Lausanne, Switzerland
www.pix4d.com

Point Grey
Richmond, BC, Canada
www.ptgrey.com

ProPhotonix
Salem, NH, USA
www.prophotonix.com

PMD Technologies
Siegen, Germany
www.pmdtec.com

SICK
Waldkirch, Germany
www.sick.com

Sony
Tokyo, Japan
https://pro.sony.com

Stemmer Imaging
Puchheim, Germany
www.stemmer-imaging.de

Teledyne DALSA
Waterloo, ON, Canada
www.teledynedalsa.com

Texas Instruments (TI)
Dallas, TX, USA
www.ti.com

Tordivel
Oslo, Norway
www.scorpionvision.com

University of Kentucky
Lexington, KY, USA
www.uky.edu

VRmagic
Mannheim, Germany
www.vrmagic.com

Z-LASER
Freiburg, Germany
www.z-laser.com

About the Author

Andy Wilson | Founding Editor

Founding editor of Vision Systems Design. Industry authority and author of thousands of technical articles on image processing, machine vision, and computer science.

B.Sc., Warwick University

Tel: 603-891-9115
Fax: 603-891-9297
