GPU speeds up Generalized Hough Transform

Researchers at KPIT Cummins Infosystems (Pune, India) have created a parallel implementation of the Generalized Hough Transform algorithm that can run on a GPU (graphics processing unit).


The Generalized Hough Transform (GHT) is a well-known algorithm for object detection. Introduced by Dana H. Ballard in 1981, it is a modification of the Hough Transform based on the principle of template matching.
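To illustrate the template-matching principle, here is a minimal sketch of the GHT's two phases: building an R-table from a template (displacements from each edge point to a reference point, indexed by quantized gradient direction) and voting in an accumulator over a target image. It assumes edge points and gradient directions have already been extracted; the function names are illustrative, not from the paper.

```python
import numpy as np
from collections import defaultdict

def build_r_table(edge_points, gradients, ref_point, n_bins=36):
    """R-table: quantized gradient direction -> list of displacement
    vectors from the edge point to the template's reference point."""
    table = defaultdict(list)
    for (x, y), g in zip(edge_points, gradients):
        b = int(g / (2 * np.pi) * n_bins) % n_bins
        table[b].append((ref_point[0] - x, ref_point[1] - y))
    return table

def ght_vote(edge_points, gradients, r_table, shape, n_bins=36):
    """Each target edge point votes for every candidate reference
    location consistent with its gradient direction; the accumulator
    peak marks the detected object's reference point."""
    acc = np.zeros(shape, dtype=np.int32)
    for (x, y), g in zip(edge_points, gradients):
        b = int(g / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in r_table.get(b, ()):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return acc
```

The inner voting loop over every (edge point, R-table entry) pair is what makes the algorithm memory- and compute-hungry, and also what makes it a natural candidate for parallelization.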

The merit of the algorithm is its ability to detect an object's location and pose accurately. However, the algorithm requires a great deal of memory and is computationally intensive. As a result, its use for object detection has been limited.

Seeking to resolve that issue, researchers at KPIT Cummins Infosystems (Pune, India) have created a parallel implementation of the GHT algorithm that can run on a GPU (graphics processing unit). The researchers claim that their implementation is 80 times faster than a CPU-based version.
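The details of KPIT Cummins' CUDA implementation are in the linked article. Purely as an illustration of why the GHT's voting stage maps well onto data-parallel hardware, the sketch below computes all votes at once with NumPy, where `np.add.at` stands in for the atomic additions a GPU kernel would perform on a shared accumulator. The single-bin R-table simplification (every displacement applied to every edge point) and the function name are assumptions for illustration only, not the researchers' method.

```python
import numpy as np

def vote_vectorized(points, displacements, shape):
    """Data-parallel GHT voting sketch: form every (edge point,
    displacement) candidate in one shot, then scatter-add into the
    accumulator. np.add.at handles colliding votes correctly, the
    role atomicAdd plays in a CUDA kernel."""
    # All candidate reference locations: shape (P, D, 2) -> (P*D, 2)
    cand = (points[:, None, :] + displacements[None, :, :]).reshape(-1, 2)
    # Keep only candidates inside the accumulator bounds
    in_bounds = ((cand[:, 0] >= 0) & (cand[:, 0] < shape[0]) &
                 (cand[:, 1] >= 0) & (cand[:, 1] < shape[1]))
    cand = cand[in_bounds]
    acc = np.zeros(shape, dtype=np.int64)
    np.add.at(acc, (cand[:, 0], cand[:, 1]), 1)
    return acc
```

Because every vote is independent, the work can be split across thousands of GPU threads, with synchronization needed only for the accumulator updates.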

A technical article entitled "A Fast and Accurate GHT Implementation on CUDA," which describes the work in detail, is available on KPIT Cummins' web site here.

Readers interested in accelerating image processing algorithms using GPUs can also find many other related articles on the High Performance Computing on Graphics Processing Units web site here.

Related articles from Vision Systems Design.

1. Motion estimation algorithm ported to GPU

Researchers from the Illinois Institute of Technology (Chicago, IL, USA) have taken a general purpose block-matching algorithm which is commonly used for motion estimation and ported it to run on multiple NVIDIA (Santa Clara, CA, USA) GPU cards using the Compute Unified Device Architecture (CUDA) computing engine.

2. GPU toolkit speeds MATLAB development

To speed the development time of programmers using MATLAB, AccelerEyes (Atlanta, GA, USA) has developed a GPU toolkit known as Jacket for MATLAB that allows M-code developers to port their code to CUDA and run it on any Tesla, Quadro, or GeForce graphics card from NVIDIA (Santa Clara, CA, USA).

3. Researchers compare multicore programming methodologies

Researchers at the Department of Computer and Information Science at Linköping University (Linköping, Sweden) have recently evaluated the effectiveness of OpenCL for programming multicore CPUs in a comparative case study with OpenMP and Intel Threading Building Blocks.

-- Dave Wilson, Senior Editor, Vision Systems Design
