Using mathematical image processing, researchers at Harvard's School of Engineering and Applied Sciences have created a 3D image through a single lens, without moving the camera.
Principal investigator Kenneth B. Crozier, associate professor of natural sciences, explains in the Harvard press release that the team inferred how an image would look from a different angle by examining clues carried in the rays of light entering the camera.
At each pixel, the incoming light arrives from a particular angle, and that angle carries important information, says Crozier.
"Cameras have been developed with all kinds of new hardware—microlens arrays and absorbing masks—that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view," he said in the press release. "That's great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?"
Crozier and his team determined that the key was to infer the angle of the light at each pixel by taking two images from the same camera position, focused at different depths. From the differences between the two images, a computer can extract enough information to mathematically synthesize a new image from a "different angle." When this synthesized image is paired with one of the original two, it creates the impression of a 3D stereo image, a technique the team calls "light-field moment imaging."
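The idea described above can be sketched in code. The following is a hypothetical NumPy illustration, not the team's actual implementation: it treats the difference between the two differently focused images as an axial intensity derivative, solves a Poisson-type equation for a scalar potential whose gradient gives per-pixel angular moments (the average ray angle at each pixel), and then shifts pixels along those moments to render a virtual viewpoint. All function names, the `dz` and `shift_scale` parameters, and the FFT-based solver are assumptions made for the sake of the sketch.

```python
import numpy as np

def synthesize_view(img_a, img_b, dz=1.0, shift_scale=1.0):
    """Hypothetical sketch of view synthesis from two focal depths.

    img_a, img_b: 2D float arrays, same scene focused at two depths.
    Returns a crude rendering of the scene from a shifted viewpoint.
    """
    # Difference between the two focus settings approximates the
    # axial derivative of intensity.
    dI_dz = (img_b - img_a) / dz

    # Solve a Poisson equation for a scalar potential via FFT
    # (division by -|k|^2 in the frequency domain).
    h, w = img_a.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    k2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    k2[0, 0] = 1.0  # avoid divide-by-zero at the DC term
    phi = np.real(np.fft.ifft2(np.fft.fft2(dI_dz) / -k2))

    # Per-pixel angular moments: gradient of the potential,
    # normalized by local intensity.
    gy, gx = np.gradient(phi)
    I = np.maximum(img_a, 1e-6)
    mx, my = gx / I, gy / I

    # Shift each pixel along its moment to fake a new camera angle.
    # (A real renderer would interpolate; this scatter leaves holes.)
    ys, xs = np.indices((h, w))
    x_new = np.clip(np.round(xs + shift_scale * mx).astype(int), 0, w - 1)
    y_new = np.clip(np.round(ys + shift_scale * my).astype(int), 0, h - 1)
    view = np.zeros_like(img_a)
    view[y_new, x_new] = img_a[ys, xs]
    return view
```

Pairing the returned `view` with `img_a` side by side would give the kind of stereo impression the article describes, under the stated assumptions.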
This new method would serve a number of purposes and applications, including the creation of 3D images of translucent materials, such as biological tissues, which would allow biologists to study cell behavior more effectively.
Share your vision-related news by contacting James Carroll, Senior Web Editor, Vision Systems Design