Researchers improve with robust processing

Apr 1st, 1998

By Richard Parker, Contributing Editor

Image reconstruction, the rebuilding of an image that has been torn apart or degraded by bits lost during the transmission of a compressed file, is being pursued intensively by industry researchers using the latest compute-intensive hardware boards, software algorithms, and combinations of both. Reconstruction is needed because skilled users must examine ever more intricate details of imaged objects, particularly in medical images of tumors and lesions.

These complicated images are often generated by confocal microscopy, x-rays, computed tomography, and magnetic-resonance imaging. Other applications for image reconstruction include paleontology (the study of fossils), weather forecasting, radar signal analysis, and adaptive optics.

Image reconstruction is highly computation-intensive and requires powerful array-processor and digital-signal-processor (DSP) systems, such as those based on the SHARC (Super Harvard Architecture Computer) processor from Analog Devices (Norwood, MA) and the TMS320C40 processor from Texas Instruments (Dallas, TX). These and similar, more powerful processors are making it easier for software developers to produce applicable algorithms, many of which are proprietary.

Noise factors

One medical application benefiting from image reconstruction is speckle-noise reduction in the ultrasound imaging of lesions and cancerous tumors. Speckle noise is the primary factor that limits the contrast resolution in diagnostic ultrasound imaging. It limits the detection of small low-contrast lesions and makes ultrasound images generally difficult for all but specialists to interpret.

Recognizing this fact, Lawrence J. Busse, a consultant and founder of L. J. B. Development (Fort Wright, KY), has developed a geometric filtering (GF) algorithm for speckle-noise reduction that allows general medical personnel to readily interpret ultrasound images without the need for a specialist. He developed the algorithm while working as director of research for Tetrad Corp. (Denver, CO), under a grant from the US National Cancer Institute.

The algorithm runs on an AL860 dual-processor board from Alacron Inc. (Nashua, NH) using a frame grabber and a personal computer. (Alacron now also offers more powerful boards based on the Analog Devices SHARC DSP.) Using commercially available signal-processing hardware, the algorithm processes and displays ultrasound image frames obtained from a Tetrad E/U2200 ultrasound imaging system at close to 2 frames/s. "With the powerful processor boards available today, I'm sure we can do a lot better in terms of frame rate," notes Busse.

The GF nonlinear iterative algorithm is also being used to reduce noise in binary images and synthetic-aperture-radar (SAR) images. However, when it was applied to images of anatomical features, the algorithm's effectiveness in reducing speckle noise was determined by the field of view of the scanned image, that is, by the spatial sampling rate of the image. Images scanned with a small field of view (that is, at a high spatial sampling rate) required many iterations of the algorithm, whereas images with a large field of view (that is, a low spatial sampling rate) appeared oversmoothed after only a single GF iteration.

Busse set about linking the spatial sampling rate to the actual spatial resolution in ultrasound data so as to obtain the highest spatial resolution with the fewest GF iterations. He developed a model of the lateral and axial resolutions as a function of transducer geometry, center frequency, and RF processing to determine the data and video sampling rates that should be applied before the GF stage. With this model, effective speckle-noise reduction was obtained with a single GF application (see Fig. 1).
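Busse's GF algorithm itself is proprietary, but the family it belongs to can be illustrated. The sketch below shows a simplified Crimmins-style geometric-filtering iteration that nudges each pixel toward its neighbors by at most one gray level per pass, shaving speckle spikes and filling speckle pits while largely preserving anatomy-scale structure. The function names are illustrative, borders are treated circularly for brevity, and this is not the published method.

```python
import numpy as np

def geometric_filter_pass(out, axis):
    """One simplified hulling pass along one axis: raise pixels sitting
    in a one-level pit and lower pixels sticking up as a one-level
    spike. Borders wrap around (np.roll) purely for brevity."""
    a = np.roll(out, 1, axis=axis)    # neighbor on one side
    b = np.roll(out, -1, axis=axis)   # neighbor on the other side
    pits = (a > out) & (b > out)      # pixel lies below both neighbors
    spikes = (a < out) & (b < out)    # pixel lies above both neighbors
    return out + pits.astype(np.int32) - spikes.astype(np.int32)

def geometric_filter(img, iterations=1):
    """Apply the pass along columns and rows for several iterations."""
    out = img.astype(np.int32)
    for _ in range(iterations):
        for axis in (0, 1):
            out = geometric_filter_pass(out, axis)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because each pass changes a pixel by at most one level, the iteration count directly controls the degree of smoothing, which is why matching it to the spatial sampling rate matters in practice.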

Recovering images

Medical and other complex images must often be compressed and transmitted at a low bit rate over long distances. However, such images often get corrupted by lost bits when they are decompressed at the receiving end. Using high-bit-rate compression can alleviate this problem, but it, in turn, requires the use of costly high-bandwidth transmission lines and image-storage systems. Ideally, the lowest bit rate possible is needed that retains as much image quality as possible.

To that end, researchers Tom Gilmore, K. Tuggle, and M. F. Chouikha of the department of electrical engineering at Howard University (Washington, DC), working with researcher Nigel Ziyad at NASA Goddard Space Flight Center (Greenbelt, MD), developed a principal-component-analysis (PCA) neural-net algorithm to compress, represent, and recover medical x-ray images. This algorithm reduces the dimensionality of an image's data set to just its "principal components" while still retaining a high degree of image quality.

During processing, the image's original data set is transformed into a new set of variables, with the first few components of the transformed data set retaining most of the variation present in the original data set. The image's peak signal-to-noise ratio is used to measure performance. A Hilbert transform is also applied after the PCA algorithm to further improve image quality (see Fig. 2).
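The Howard University neural-net implementation is not reproduced here, but the underlying dimensionality reduction can be sketched with a plain SVD-based PCA that keeps only the first k components of the image rows and scores the result by peak signal-to-noise ratio. The function names and the rows-as-samples convention are assumptions for illustration, not the researchers' design.

```python
import numpy as np

def pca_compress(img, k):
    """Reconstruct an image from only its first k principal components,
    treating each row as one sample."""
    X = img.astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # project onto the top-k components, then map back to pixel space
    recon = (U[:, :k] * s[:k]) @ Vt[:k] + mean
    return np.clip(recon, 0, 255)

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB, the article's quality metric."""
    mse = np.mean((orig.astype(np.float64) - recon) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)
```

Keeping fewer components gives higher compression at the cost of PSNR, which is exactly the trade-off the researchers tune.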

In another example, a 128 × 128-pixel image of a lung containing a tumor and image noise was reconstructed using the PCA algorithm and an 8 × 8-block Hilbert transform, again yielding higher image quality. Measurements revealed peak signal-to-noise ratios of 34.82 dB (versus 32.30 dB without the Hilbert transform) and 35.53 dB (versus 32.73 dB without it) for the two different images, respectively.

Researchers are currently investigating the use of a "lapped orthogonal transform" technique that would help eliminate "blocking" artifacts in an image. This unitary transform overlaps adjacent image blocks to reduce the artifacts that appear at block boundaries.

Packet loss

According to researchers, few techniques are available to obtain acceptable reversible (lossless) medical-image compression when data are transmitted in packet form over ATM networks. Consequently, researchers Alan Merriam, Albert Kellner, and Pamela Cosman at the department of electrical and computer engineering at the University of California at San Diego (La Jolla, CA) have proposed lossless image-compression schemes that can recover lost image bits over ATM networks. Their work was funded by the Focused Initiative on Photonics for Data Fusion Networks of the US Ballistic Missile Defense Organization.

According to the researchers, in most lossless image-compression algorithms, the loss of a single packet in transmission can preclude useful image reconstruction and therefore require retransmission of the lost packet or of the entire image file. This retransmission could increase total image-transmission latency and seriously reduce image quality. The researchers tried two image-compression approaches that were completely reversible when no packets were lost.

These methods produced a small reconstruction error in the case of lost packets. One method used linear prediction in the pixel domain to decorrelate the image, followed by Huffman coding of the residuals. The second method, which provided greater compression than the first, used a multiresolution sequential transform and subband coding to decorrelate the image. This method also used Huffman coding of the coefficients.
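The first method's decorrelation step can be sketched as follows, assuming the simplest possible predictor, each pixel predicted from its left neighbor (the researchers' actual predictor coefficients are not given here). The empirical entropy of the residuals stands in for the bit rate an ideal Huffman coder would approach.

```python
import numpy as np
from collections import Counter
from math import log2

def predict_residuals(img):
    """Pixel-domain linear prediction: predict each pixel from its left
    neighbor (first column is predicted as zero). The step is lossless:
    the decoder rebuilds the image with a running sum along each row."""
    img = img.astype(np.int16)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]
    return img - pred  # residuals cluster near zero on smooth images

def entropy_bits(values):
    """Empirical entropy in bits/symbol, the rate an ideal
    Huffman/arithmetic coder would approach."""
    counts = Counter(values.ravel().tolist())
    n = values.size
    return -sum(c / n * log2(c / n) for c in counts.values())
```

Because medical images are locally smooth, the residuals have far lower entropy than the raw pixels, which is where the compression gain comes from; a lost packet then perturbs only the residuals it carried.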

Both methods were tried on a 256 × 256-pixel (8 bits/pixel) magnetic-resonance scan of the brain in which bits were lost during transmission and then reconstructed. Results showed that the subband method did not perform as well as the linear-prediction method, yielding lower peak signal-to-noise ratios.

Eyeing results

Image reconstruction has also been applied to eye examinations. For example, primary open-angle glaucoma (POAG) is a chronic ophthalmic disease that causes progressive blindness. In the United States and United Kingdom, studies show that one out of every 200 persons suffers from POAG. The condition is caused by high intraocular pressure that destroys the optic-nerve axons responsible for carrying visual information. The damage is concentrated where the optic nerve emerges from the eye, a region known as the optic disk. The disk, shaped like a cup, deepens as POAG progresses.

Unfortunately, POAG presents no early warning symptoms. Standard clinical diagnostic methods include measurement of intraocular pressure, perimetry, and visual examination of the disk. All three methods have proven inadequate for early POAG diagnosis.

As a result, researchers Juan Pablo Duclos and Andres Guesalaga of the department of electrical engineering at the Pontifical Catholic University of Chile (Santiago, Chile) have proposed a stereoscopic technique for image reconstruction of the optic-disk structure and, therefore, improved POAG evaluation. Their algorithm detects corresponding points in CCD-camera images of an optic disk and then reconstructs the 3-D coordinates of the detected points. The algorithm ran on a 133-MHz Pentium-based PC and permitted the researchers to three-dimensionally visualize, manipulate, and precisely measure the optic disk in 45 s.
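The correspondence-detection stage is the hard part of such a system and is not reproduced here, but the final 3-D recovery step can be sketched as textbook stereo triangulation, assuming rectified cameras with focal length f and baseline b; the variable names are illustrative, not the researchers' notation.

```python
import numpy as np

def triangulate(xl, xr, y, f, b):
    """Recover the 3-D coordinates of one matched point seen by two
    rectified cameras separated by baseline b (focal length f, image
    coordinates xl, xr, y in the same units as f).
    Depth follows from the disparity d = xl - xr:  Z = f * b / d."""
    d = xl - xr
    Z = f * b / d          # depth: large disparity means a close point
    X = xl * Z / f         # back-project the left-image coordinates
    Y = y * Z / f
    return np.array([X, Y, Z])
```

Repeating this over every detected point pair yields the cloud of 3-D disk coordinates from which depth, and hence cupping, can be measured.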

In some applications, a largely hardware-based approach can be taken to image reconstruction. For example, adaptive-optics systems, which are often used to correct for optical distortion in an image of interest, usually require digital-signal-processing power that cannot be met by commercially available products. A simple example is a "smart" camera that provides an autofocus feature. A complex example is in astronomy, where atmospheric distortions are corrected for improved viewing of astronomical objects. Such optics processing involves "reconstructing" an image for correction purposes.

Adaptive optics

In an adaptive-optics application, researcher Robert J. Eager at Boeing's Rocketdyne Division (Canoga Park, CA) has developed a massively parallel DSP system architecture for reconstructing wavefronts obtained from a 941-channel adaptive-optics system. The architecture, called a real-time reconstructor (RTR), incorporates 1024 processing elements and supports real-time (1-kHz) wavefront reconstruction for an adaptive-optics system at the Starfire Optical Range, part of the Phillips Laboratory at Kirtland Air Force Base (Albuquerque, NM). Unlike most image-reconstruction systems, the RTR system works "live" via a 3.5-m telescope, not after an image is received and postprocessed.

The RTR system design requirements specified 16-bit integer precision of ALU, SHIFT, MAC, and I/O operations with a multiply accumulator size of 32 bits or more, conditional branching and ALU saturation, and user-system communications during runtime operation. The hardware also needed sufficient DSP local memory for 16 coefficient sets, programming by commercial C-compilers and assemblers, extensions to support different adaptive-optics system sizes and configurations, and packaging in a 19-in. rack enclosure using Eurocards.

After determining that available processing boards did not meet these requirements, Eager designed a system architecture that efficiently transfers data from the system's input ports to the target processor and from the target processor to the system's output ports. He inserted separate input, output, and control-loop pipelines to handle data flow through the DSP section. A multiple-DSP approach was developed that uses a single processing element per channel to minimize performance degradation caused by context-switching overhead. An Analog Devices ADSP2111-KS80 processor was chosen because it more closely met the design requirements than other DSPs.
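The RTR firmware itself is not published, but the per-channel computation the requirements describe, a 16-bit multiply-accumulate with a 32-bit accumulator and ALU saturation, amounts to one fixed-point dot product of a reconstructor-matrix row with the measured wavefront slopes. The sketch below models that arithmetic in Python with Q15 coefficients; it is an assumption-laden illustration, not Eager's implementation.

```python
import numpy as np

def reconstruct_channel(row, slopes):
    """One processing element's work: a fixed-point dot product of one
    reconstructor-matrix row (16-bit Q15 coefficients) with the measured
    slopes, accumulated in a wide register, then saturated back to the
    16-bit actuator-command range as the ALU saturation mode would."""
    acc = np.int64(0)                      # stand-in for the 32-bit MAC
    for c, s in zip(row, slopes):
        acc += np.int64(c) * np.int64(s)   # 16x16 -> 32-bit products
    acc >>= 15                             # undo the Q15 coefficient scale
    return int(np.clip(acc, -32768, 32767))  # ALU saturation

def reconstruct(matrix, slopes):
    """Full reconstructor: one output command per channel, so one
    processing element per matrix row, as in the RTR architecture."""
    return [reconstruct_channel(row, slopes) for row in matrix]
```

Assigning one element per row is what lets all 941 channels proceed in parallel with no context switching, since each element needs only its own coefficient row (one of the 16 stored coefficient sets) and the shared slope vector.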

FIGURE 1. A geometric filtering algorithm developed by Lawrence Busse while at Tetrad Corp. reduces speckle noise in the ultrasound medical images of internal human organs. In one example, an original image of a scanned abdomen shows the liver and its surrounding area (upper left). The same image is reconstructed after applying geometric filtering (lower left). In another example, an original image depicts a kidney (upper right). The same image is restored via geometric filtering (lower right).


FIGURE 2. At Howard University, a principal-component-analysis algorithm has been developed to recover lost compressed image bits and to reconstruct x-ray images. An original 128 × 128-pixel x-ray image of a rib cage (left) is reconstructed and further improved in quality after application of an 8 × 8-block Hilbert transform (right).
