GPUs to determine patient exposure to radiation from imaging

Sept. 28, 2012
Researchers at Rensselaer Polytechnic Institute (Troy, NY, USA) aim to harness the power of GPUs in commercial graphics cards to help determine a patient’s exposure to radiation from X-ray and CT imaging scans.

Rensselaer Professor X. George Xu is leading an interdisciplinary team of academic, medical, and industrial researchers developing techniques to calculate the radiation dose a patient will receive from a CT scan.

Funded by a $2.6m grant from the National Institute of Biomedical Imaging and Bioengineering (NIBIB; Bethesda, MD, USA), the research team aims to use NVIDIA video cards and parallel processing techniques to dramatically speed up radiation dose calculations.

"There is a high level of interest at the national level to quantify and reduce the amount of ionizing radiation involved in medical imaging. Our parallel computing method has the potential to be used in everyday clinical procedures, which would dramatically decrease the amount of radiation we receive from CT scans," says Xu.

Several national and international bodies have already called for the establishment of a centralized "dose registry" system that would track over time the number of CT scans a patient undergoes, and the radiation exposure resulting from those procedures.

Additional efforts by the radiology community call for new measures to avoid unjustified CT scans and to greatly reduce the radiation exposure for pediatric and pregnant patients. However, current software packages for determining and tracking CT doses are insufficient for such a critical task, Xu says.

To help solve this problem, Xu has spent nearly a decade developing software to calculate the exact amount of radiation a specific organ of a patient will receive from a CT scan. Running on a standard desktop computer, however, the software currently takes about 10 hours to perform the calculation and produce a result -- far too long to be practical in a clinical setting.

In the study funded by the NIBIB, Xu and the research team will design and test new simulation software that runs on the graphics processing units (GPUs) found in computer graphics cards, instead of running solely on the central processing units (CPUs) of a desktop computer.
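The reason GPUs suit this problem is that Monte Carlo dose calculation is embarrassingly parallel: each simulated photon history is independent of every other, so thousands can be tracked concurrently. The toy Python sketch below illustrates that structure only; it is not the Rensselaer team's software, and the one-dimensional geometry, attenuation coefficient, and function names are simplified assumptions made for illustration.

```python
# Illustrative sketch only: a toy Monte Carlo dose tally showing why
# per-photon independence maps well onto massively parallel hardware.
# This is NOT the software described in the article; the 1-D geometry
# and attenuation value are assumptions chosen for clarity.
import random

def simulate_photon(seed, mu=0.2, voxels=50):
    """Track one photon through a 1-D row of voxels.

    mu is an assumed attenuation coefficient per voxel; the photon
    deposits unit energy in the voxel where it is absorbed.
    """
    rng = random.Random(seed)
    # Sample the free path length from an exponential distribution,
    # the standard model for photon attenuation in a uniform medium.
    depth = int(rng.expovariate(mu))
    dose = [0.0] * voxels
    if depth < voxels:
        dose[depth] += 1.0  # energy deposited at the absorption site
    return dose

def tally(n_photons, voxels=50):
    """Sum the dose grid over many independent photon histories.

    Because no history depends on another, this loop is exactly the
    kind of work a GPU can spread across thousands of threads.
    """
    total = [0.0] * voxels
    for seed in range(n_photons):
        for i, d in enumerate(simulate_photon(seed, voxels=voxels)):
            total[i] += d
    return total

dose = tally(10_000)
```

On a CPU the histories run one after another; on a GPU each thread can own one history, which is the source of the hoped-for reduction from hours to under a minute described below.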

After developing and validating the software, the research team will integrate it with GE's LightSpeed CT scanners at Massachusetts General Hospital, which they hope will enable them to estimate patient radiation doses in less than a minute.

Here are recent articles on GPUs from Vision Systems Design that you might also find of interest:

1. Motion estimation algorithm ported to GPU

Motion estimation in an image sequence has many potential uses such as detecting and tracking moving objects. However, one potential drawback to its use is the computation time needed to run the motion estimation software.

2. GPU toolkit speeds MATLAB development

AccelerEyes (Atlanta, GA, USA) has developed a GPU toolkit known as Jacket for MATLAB that allows M-code developers to port their code to CUDA and run it on any Tesla, Quadro, or GeForce graphics card from Nvidia (Santa Clara, CA, USA).

3. Direct access to GPUs accelerates video-processing tasks

Historically, getting video into and out of the GPU from third-party hardware introduced unnecessary delays. Now, a solution to the problem has been developed by Nvidia (Santa Clara, CA, USA) in the form of an application programming interface (API) that allows third-party hardware to communicate directly with the company's GPUs.

-- Dave Wilson, Senior Editor, Vision Systems Design
