Technology Trends: Wavelet algorithm targets embedded systems


Jan 1st, 2000

At present, a number of standards are available for image compression. These include discrete-cosine-transform-based methods as well as fractal and wavelet techniques. While many standards have found applications in specific markets (MJPEG in medical imaging, MPEG2 in video encoding), wavelet transforms have yet to find such a market, even though wavelet compression can in some cases yield an image one-quarter the size of a similar-quality JPEG image.


The "Café" image from the suite of JPEG2000 test images, encoded using a conventional nonoverlapping tiling algorithm (left) and the memory-scalable wavelet transform developed by the Motorola Research Center (right).

Traditionally, wavelet transforms operate on images stored in frame buffers or in host-PC image memory. For embedded systems, this increases both the amount of memory needed and the cost of image-compression systems. To reduce this memory requirement, the image to be compressed can be tiled; by processing tiles sequentially and independently, buffering can be reduced to the size of a single tile.
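The tiling scheme described above can be sketched as follows. This is a minimal illustration, not Motorola's algorithm: `process_tile` is a hypothetical stand-in for the transform-and-quantize stage, and the 128 x 128 tile size matches the article's example.

```python
import numpy as np

TILE = 128  # tile edge length in pixels, as in the article's example


def process_tile(tile: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder for the real wavelet transform and
    # quantization step; here it simply passes the tile through.
    return tile


def compress_tiled(image: np.ndarray) -> np.ndarray:
    h, w = image.shape
    out = np.empty_like(image)
    # Each tile is processed sequentially and independently, so peak
    # buffering is bounded by TILE * TILE samples rather than h * w.
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            block = image[y:y + TILE, x:x + TILE]
            out[y:y + TILE, x:x + TILE] = process_tile(block)
    return out
```

Because each tile is handled independently, only one tile's worth of samples must be buffered at a time, which is the memory saving the approach trades against boundary artifacts.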

"Although this solves the memory-requirement problem," says Igor Kharitonenko of the Motorola Research Center (Sydney, Australia), "the subsequent quantization of the wavelet coefficients introduces undesirable distortions that appear on the boundaries of the tiles." These distortions become visible at low bit rates after the compressed bitstream is decoded, as can be seen in the "Café" image from the JPEG2000 test images of the JPEG committee (Web: www.jpeg.org). This image was encoded using nonoverlapping tiling at 0.125 bits/pixel with a tile size of 128 x 128 pixels.
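The quantization step that causes these boundary distortions can be illustrated with a simple uniform scalar quantizer, a common textbook form (the article does not specify which quantizer the codec uses):

```python
import numpy as np


def quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    # Uniform scalar quantization: snap each wavelet coefficient to
    # the nearest multiple of the step size. Coarser steps (lower bit
    # rates) discard more precision, which is what makes the seams
    # between independently transformed tiles visible after decoding.
    return np.round(coeffs / step) * step
```

At a rate as low as 0.125 bits/pixel the step size is large, so coefficients near tile edges in adjacent tiles can quantize to noticeably different values, producing the visible seams.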

To overcome these artifacts, Kharitonenko and his colleagues have developed a memory-scalable wavelet transform that also operates in a block-based fashion but treats the boundary regions differently. "Because the transform we have developed requires only a single block-memory buffer and removes the necessity for random access to other blocks," says Kharitonenko, "both the memory and the computation time are reduced."
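To make the block-based operation concrete, the sketch below applies one level of a 2-D Haar wavelet transform to a single block held in memory. The Haar transform is used here only as the simplest illustrative wavelet; the article does not disclose Motorola's filter or its boundary treatment.

```python
import numpy as np


def haar_1d(v: np.ndarray) -> np.ndarray:
    # One Haar decomposition level: pairwise averages (low band)
    # followed by pairwise half-differences (high band).
    avg = (v[0::2] + v[1::2]) / 2.0
    diff = (v[0::2] - v[1::2]) / 2.0
    return np.concatenate([avg, diff])


def haar2d_block(block: np.ndarray) -> np.ndarray:
    # Transform a single block in place of a whole-image transform:
    # filter along rows, then along columns. Only this one block
    # needs to be buffered, mirroring the single-block-buffer idea.
    rows = np.apply_along_axis(haar_1d, 1, block)
    return np.apply_along_axis(haar_1d, 0, rows)
```

For a smooth (constant) block, all the energy lands in the low-low quadrant and the detail coefficients are zero, which is what makes wavelet coefficients so compressible after quantization.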

—ANDREW WILSON
