Imaging algorithms map out the moon

Jan. 3, 2012
Computer scientists at NASA's Ames Research Center (Moffett Field, CA, USA) have used image processing algorithms to transform legacy data from the Apollo Metric Camera onboard the Apollo 15, 16 and 17 spacecraft into an immersive 3-D mosaic map of a part of the moon.

The "Apollo Zone" Digital Image Mosaic (DIM) and Digital Terrain Model (DTM) maps cover about 18% of the lunar surface at a resolution of 98 ft (30 m) per pixel.

The maps are the result of three years of work by the Intelligent Robotics Group (IRG) at NASA Ames, and are available to view through the NASA Lunar Mapping and Modeling Portal (LMMP) and Google Moon feature in Google Earth.

The software has now been released in several open-source libraries including Ames Stereo Pipeline, Neo-Geography Toolkit, and NASA Vision Workbench.

The Apollo Zone project uses imagery recently scanned at NASA's Johnson Space Center in Houston, Texas, by a team from Arizona State University. The source images themselves are large -- 20,000 pixels by 20,000 pixels -- and the IRG aligned and processed more than 4000 of them using Ames' Pleiades supercomputer.

In the future, the team plans to expand the use of their algorithms to include imagery taken at angles, rather than just straight down at the surface. A technique called photoclinometry allows 3-D terrain to be reconstructed from a single 2-D image by exploiting the fact that surfaces sloping toward the sun appear brighter than those sloping away from it.
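The idea behind photoclinometry is straightforward to sketch. Below is a minimal one-dimensional illustration in Python, assuming a Lambertian (matte) surface with uniform albedo and a known sun elevation; the function name, synthetic terrain, and numbers are illustrative assumptions, not code from the IRG's released libraries.

```python
import numpy as np

# Minimal 1-D photoclinometry sketch (an illustration, not IRG's actual code).
# Assumptions: Lambertian surface with uniform albedo, sun in the plane of the
# profile at a known elevation e above the horizon. For a local slope angle a,
# the observed brightness is I = sin(e - a), so the slope can be read off as
# a = e - arcsin(I); integrating the recovered slopes gives relative height.

def photoclinometry_1d(intensity, sun_elevation_rad, dx=1.0):
    """Recover a relative height profile from brightness along one scanline."""
    i = np.clip(intensity, 0.0, 1.0)                 # keep arcsin in range
    slope_angle = sun_elevation_rad - np.arcsin(i)   # a = e - arcsin(I)
    slopes = np.tan(slope_angle)                     # rise over run per sample
    return np.cumsum(slopes) * dx                    # integrate slope -> height

# Synthetic round trip: a gentle hill lit by a sun 30 degrees above the horizon.
x = np.linspace(0.0, 100.0, 400)
height = 5.0 * np.exp(-((x - 50.0) / 15.0) ** 2)
sun = np.radians(30.0)
brightness = np.sin(sun - np.arctan(np.gradient(height, x)))  # forward model
recovered = photoclinometry_1d(brightness, sun, dx=x[1] - x[0])
print("max height error:",
      np.max(np.abs((recovered - recovered[0]) - (height - height[0]))))
```

Real lunar imagery complicates this picture considerably: albedo varies across the surface, shadows clip the brightness signal, and the sun is rarely aligned with the scan direction, which is why full photoclinometry pipelines solve for the terrain over the whole image rather than one line at a time.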

To view the maps, visit the LMMP site.

-- By Dave Wilson, Senior Editor, Vision Systems Design
