High-speed Imaging: GPU processor extends dynamic range of high-speed images

In many vision applications such as industrial robotics and life sciences, it is important for cameras to image a scene that may range from very light to very dark.

To increase the dynamic range of images captured at high speeds, BitFlow has developed an algorithm that runs on NVIDIA graphics processors. (a) Original image; (b) HDR image after FPN removal and weighted averaging.

To image such scenes, the CCD and CMOS devices used in these cameras require a high dynamic range. A number of different techniques and imager architectures can be used to achieve this, including dynamic well capacity adjustment, multiple image capture, temporally varying exposure times, time-to-saturation architectures, logarithmic transfer functions, and local intensity adaptation (see "Dynamic Design", Vision Systems Design, October 2009; http://bit.ly/1lFaDBs).

Typical hardware implementations, such as those found in the latest-generation sCMOS 2.0 imagers from Fairchild Imaging (Milpitas, CA, USA; www.fairchildimaging.com), use dual amplifiers and analog-to-digital converters with independent gain settings to increase dynamic range. These imagers have found favor with numerous camera companies that offer products for the scientific markets (see "sCMOS cameras target scientific applications", Vision Systems Design, March 2014; http://bit.ly/1f6h9CW and "Low-light level cameras appear at Photonics West," Vision Systems Design, April 2014; http://bit.ly/1k6CNpE).

Of course, not all CMOS imagers incorporate such hardware, especially many high-speed imagers that use multiple-tap outputs. When the dynamic range of such cameras needs to be increased, other methods can be used, such as combining images captured with temporally varying exposure times. Indeed, this was the theme behind one of the demonstrations shown by BitFlow (Woburn, MA, USA; www.bitflow.com) at April's Vision show in Boston.

At its booth, BitFlow demonstrated the four-channel EoSens 4XP CoaXPress (CXP) camera from Mikrotron (Unterschleissheim, Germany; www.mikrotron.de) running at 284 fps. To transfer the 2036 x 1728 x 8-bit images to the host PC, the camera was interfaced to BitFlow's Cyton CXP4 PCI Express-based frame grabber, which dynamically cropped the frame to the demonstration monitor's 1920 x 1080 pixel (1080p) resolution. To create a high dynamic range image from a sequence of three images, several image processing algorithms were required, including fixed pattern noise (FPN) removal, image convolution and weighted image averaging.

"At data rates as fast as 284fps and images as large as 1920 x 1080 x 8-bits," says Jeremy Greene, Software Engineer with BitFlow, "accomplishing this in real-time on a host PC would be an impossible task." Instead, Greene leveraged the power of a PCI Express-based NVIDIA Quadro K5000 GPU board to perform this task.

To remove fixed pattern noise, a continuous sequence of images is first captured from the camera with the lens cap affixed and combined into a single frame on the GPU by a real-time running-average algorithm. This temporally constant non-uniformity map is then subtracted from successive images that are sequentially exposed at 0.5, 1.5 and 6ms and transferred to the graphics card.
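The FPN step can be sketched on the CPU as follows. This is an illustrative reference only, since BitFlow's production code runs as CUDA kernels on the GPU; the `FpnCorrector` name and incremental-averaging formula are assumptions, not the published implementation.

```cpp
#include <cstdint>
#include <vector>

// CPU reference sketch of the FPN-removal step (hypothetical; the real code is a CUDA kernel).
// A running average of dark frames (lens cap on) builds a noise map, which is then
// subtracted from every live frame.
struct FpnCorrector {
    std::vector<float> darkMap;      // temporally averaged fixed-pattern noise
    std::size_t framesAveraged = 0;

    // Fold one dark frame into the running average.
    void accumulateDarkFrame(const std::vector<std::uint8_t>& dark) {
        if (darkMap.empty()) darkMap.assign(dark.size(), 0.0f);
        ++framesAveraged;
        const float k = 1.0f / static_cast<float>(framesAveraged);
        for (std::size_t i = 0; i < dark.size(); ++i)
            darkMap[i] += (dark[i] - darkMap[i]) * k;   // incremental running average
    }

    // Subtract the noise map from a live frame, clamping at zero.
    void correct(std::vector<std::uint8_t>& frame) const {
        for (std::size_t i = 0; i < frame.size(); ++i) {
            float v = frame[i] - darkMap[i];
            frame[i] = static_cast<std::uint8_t>(v < 0.0f ? 0.0f : v);
        }
    }
};
```

Because the map is temporally constant, it needs to be built only once per camera configuration and can then be applied to every frame of all three exposures.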

Applying a Gaussian filter to each image exposed at 1.5ms blurs away any isolated bright or dark areas, leaving an image in which only contiguous bright or dark regions remain. "By analyzing the intensity of each pixel within this image, it is possible to determine how the shorter exposed image (at 0.5ms), the medium exposure image (at 1.5ms) and the longer exposed image (at 6ms) are to be combined," says Greene.

This weighted average results in an image of higher dynamic range: where pixels in the Gaussian-filtered image are very bright, more data from the shorter exposure is blended into the final image, and vice versa.
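A minimal sketch of that merge, assuming a simple quadratic weighting (the article does not specify the exact weighting function): bright guide pixels pull from the 0.5ms exposure, dark guide pixels pull from the 6ms exposure, and the 1.5ms frame fills the middle.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical weighted-average merge of the three exposures, guided by the
// Gaussian-blurred 1.5 ms frame (guide values in 0..255). The quadratic weights
// are an illustrative choice, not BitFlow's published formula.
std::vector<float> mergeExposures(const std::vector<float>& shortExp,  // 0.5 ms
                                  const std::vector<float>& midExp,    // 1.5 ms
                                  const std::vector<float>& longExp,   // 6 ms
                                  const std::vector<float>& guide)     // blurred midExp
{
    std::vector<float> out(guide.size());
    for (std::size_t i = 0; i < guide.size(); ++i) {
        float b = guide[i] / 255.0f;                // 0 = dark region, 1 = bright region
        float wShort = b * b;                       // bright -> favour short exposure
        float wLong  = (1.0f - b) * (1.0f - b);     // dark -> favour long exposure
        float wMid   = 1.0f - wShort - wLong;       // remainder from the medium exposure
        out[i] = wShort * shortExp[i] + wMid * midExp[i] + wLong * longExp[i];
    }
    return out;
}
```

The weights always sum to one, so a region that is neither fully bright nor fully dark receives a blend dominated by the medium exposure, which keeps transitions between regions smooth.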

Written in C++ in Microsoft's Visual Studio using NVIDIA's CUDA parallel computing architecture and GPUDirect for Video protocol, the algorithm was developed in under one week, according to Greene. When running on the GPU, image data can be captured and processed at 284fps, resulting in an output data rate of approximately 94fps. The results can then be displayed at 60fps on a high-resolution monitor. According to Reynold Dodson, President of BitFlow, the company is offering the software free of charge to customers of its frame grabber products.
