IMAGE PROCESSING: GPU processors speed time delay filtering
In applications such as medical imaging and surveillance it is often necessary to remove unwanted artifacts from a series of images. In security systems, for example, slowly moving background objects such as clouds or shadows can be removed from sequences of images to highlight the faster moving objects such as pedestrians or cars.
To accomplish this task, a method known as time-domain median filtering is applied to a sequence of images. In this method, each new incoming image, together with a number of previously acquired images, is used to calculate an updated median. With each newly acquired image, the oldest is deleted from the sliding image stack. Thus, the median-filtered background image varies continuously but more slowly than the individual incoming images. Performing a median filter over the whole stack for every new incoming image is computationally demanding, which justifies the use of graphics processing unit (GPU) power.
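The sliding-stack median described above can be sketched as follows. This is a minimal CPU illustration in NumPy, not Stemmer's GPU implementation; the stack depth of 9 is an arbitrary assumption for the example.

```python
import numpy as np
from collections import deque

def update_background(stack, new_frame, depth=9):
    """Append the newest frame, drop the oldest once the stack is full,
    and return the per-pixel median over the whole stack."""
    stack.append(new_frame)
    if len(stack) > depth:
        stack.popleft()  # the oldest image leaves the sliding stack
    return np.median(np.stack(stack), axis=0)

# Usage: feed a short sequence in which one frame flashes bright.
stack = deque()
frames = [np.full((4, 4), v, dtype=np.float32) for v in (10, 10, 10, 200, 10)]
for f in frames:
    background = update_background(stack, f)
# The single bright frame (200) does not pull the median away from 10,
# so the background estimate changes only slowly.
```

Because the median is recomputed per pixel over the stack for every frame, the workload maps naturally onto the many parallel processing elements of a GPU.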
“This median,” says Volker Gimple, image-processing group manager with Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.com), “yields an image that includes the very slowly moving components of the image.”
As this median filtering is taking place, every incoming frame in the pipeline is subtracted from the median image. This yields an image that separates the quickly moving components in the image from the slowly moving ones. Since this process is performed dynamically, any slow-moving change in the background is automatically updated, revealing the fast-moving objects.
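The subtraction step can be illustrated with a short sketch. The absolute difference and the threshold value of 25 are assumptions for the example; the article does not specify how the difference image is post-processed.

```python
import numpy as np

def foreground(frame, background, threshold=25):
    """Subtract the median background from the incoming frame;
    pixels that differ strongly belong to fast-moving objects."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255

background = np.full((4, 4), 50, dtype=np.uint8)  # slowly varying scene
frame = background.copy()
frame[1, 1] = 200                                 # a fast-moving object pixel
mask = foreground(frame, background)
# Only the object pixel survives the subtraction; the static
# background cancels out and is dynamically kept up to date.
```

Because the background estimate is itself refreshed with every incoming frame, slow scene changes such as drifting clouds never accumulate in the difference image.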
“Of course,” says Gimple, “should an object be moving relatively slowly through the series of images, the object may not be highlighted properly, since it may appear in a number of the frames over which the median filter is performed. Because of this, the number of frames with which to perform the median filter can be lowered in software to adjust the system for objects of varying speed. Similarly, if the object is moving very quickly, the number of these frames can be effectively increased.”
While this method of image filtering has been known for many years, it is only recently that such algorithms have been demonstrated on general-purpose computers with GPUs such as the GeForce series from Nvidia (Santa Clara, CA, USA; www.nvidia.com). Because the multiple processors on the Nvidia devices can be dynamically allocated to provide pipelined image-processing functions, they are ideally suited to these tasks.
More than two years ago, Gimple and his colleagues at Stemmer demonstrated how a PC-based system using an Nvidia 8800 graphics card could be used in conjunction with a Gigabit Ethernet CCIR camera to capture and then perform a Sobel filter on the captured images at 30 frames/s (see “Graphics architectures speed processing,” Vision Systems Design, September 2007).
Now, using Microsoft’s DirectX API and Microsoft High Level Shader Language compiler, Stemmer has incorporated time-domain median filtering into its Common Vision Blox software, allowing developers to run the program as a GPU-based application. In a recent demonstration of this technology, Stemmer showed how a TM-1327 1.4-Mpixel GigE camera from JAI (Copenhagen, Denmark; www.jai.com) interfaced to a PC could perform this task.
“As images are transferred at 30 frames/s into the host CPU memory,” says Gimple, “they are transferred over the PCI Express bus to the Nvidia graphics card. Before filtering, Bayer interpolation is performed on the images to render true color images.” A given number of frames can then be selected in software as the images pass through the graphics pipeline.
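To make the Bayer interpolation step concrete, here is a deliberately simplistic demosaic sketch. It assumes an RGGB mosaic and collapses each 2×2 cell into one RGB pixel, halving resolution; a production pipeline such as the one described here would interpolate at full resolution on the GPU, but the principle of recovering color from the single-channel sensor data is the same.

```python
import numpy as np

def demosaic_rggb(raw):
    """Minimal demosaic for an RGGB Bayer mosaic: each 2x2 cell
    (R G / G B) becomes one RGB pixel, with the two green samples
    averaged. Output resolution is half the input in each axis."""
    r  = raw[0::2, 0::2].astype(np.float32)  # red sites
    g1 = raw[0::2, 1::2].astype(np.float32)  # green sites, red rows
    g2 = raw[1::2, 0::2].astype(np.float32)  # green sites, blue rows
    b  = raw[1::2, 1::2].astype(np.float32)  # blue sites
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# One RGGB cell: R=100, G=60, G=40, B=20.
raw = np.array([[100, 60],
                [40,  20]], dtype=np.uint8)
rgb = demosaic_rggb(raw)  # one RGB pixel: (100, 50, 20)
```

On the GPU this per-pixel gather is an ideal shader workload, which is why the interpolation is done on the graphics card before the median filtering begins.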
After these images are selected, a median filter is performed and each incoming image is subtracted from the result, which is then displayed by the graphics processor. “Because the processor contains such a large number of pipelined processing elements,” says Gimple, “it is possible to adjust the number of frames being filtered depending on the requirements of the application.” In this way, the required difference image can be adjusted.