3D Camera Without Lens Could Be Useful in Machine Vision

Nov. 14, 2022
The system overcomes the obstacles seen in other cameras without lenses, resulting in a lightweight and fast camera.

Researchers have developed an experimental camera without a traditional lens that captures 3D color images, even those hidden behind other objects, with a single exposure.

To accomplish this feat, researchers used computational imaging—a technique in which an image is formed by using algorithms that typically require a significant amount of computing. In these systems, the sensing system and algorithms are tightly integrated.

In the case of the imager discussed here, the researchers combined a thin microlens array made of a flexible polymer (in place of a traditional lens), a commercial sensor, and a deep learning neural network that reconstructs 3D scenes from 2D images.

The researchers addressed problems seen in other lensless cameras, such as extensive calibration of the point spread function (PSF) and the time-consuming calculations needed to reconstruct objects.

Other researchers have used deep learning models to address these issues; however, those models require extensive training and may not provide adequate image resolution. To solve these problems, the current group of researchers developed what they refer to as a physical neural network, which they say quickly learns how to reconstruct a 3D scene and provides good resolution.

“To the best of our knowledge, we are the first to demonstrate deep learning data-driven 3D photorealistic reconstruction without system calibration and initializations,” the researchers from the University of California, Davis (Davis, CA, USA; www.ucdavis.edu), Department of Electrical and Computer Engineering, write in a journal article (bit.ly/3Tl6xEJ) describing their work.

The camera could be useful in machine vision tasks such as inspection, the researchers write. Because the camera captures 3D information in a single exposure, there is no need to take multiple images, each focusing on a different depth, to capture 3D information.

This saves time in machine vision applications. “Once the 3D spatial information is recovered from the snapshot image, one could then use relevant computer vision algorithms to further extract various features that are required for specific machine vision applications,” explains Weijian Yang, Associate Professor in the Department of Electrical and Computer Engineering at the University of California, Davis.

Feng Tian, a doctoral student at the university and an author of the article, adds in a news release from Optica (Rochester, NY, USA; www.optica.org), publisher of the journal, that the camera also could “give robots 3D vision, which could help them navigate 3D space or enable complex tasks such as manipulation of fine objects.” 

The camera comprises a microlens array of 37 lens units that are each 3 mm in diameter. The researchers randomly distributed them within an aperture of 122 mm in diameter. They positioned the microlens array 15 mm from the surface of the image sensor—an IMX309BQJ from Sony (Tokyo, Japan; www.sony.com).
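As a rough illustration of that layout, here is a minimal Python sketch that randomly scatters non-overlapping lens centers inside a circular aperture using rejection sampling. It uses the dimensions quoted above (37 units, 3 mm each, 122 mm aperture); the placement algorithm itself is an assumption for illustration, not the authors' actual design procedure.

```python
import math
import random

random.seed(42)

N_LENSES = 37              # number of lens units (from the article)
LENS_DIAMETER = 3.0        # mm (from the article)
APERTURE_DIAMETER = 122.0  # mm (from the article)

def place_lenses(n, lens_d, aperture_d, max_tries=100_000):
    """Randomly place n non-overlapping lens centers fully inside the aperture."""
    r_max = (aperture_d - lens_d) / 2.0  # largest center radius keeping a lens inside
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        # Sample a candidate center uniformly over the disk of allowed centers
        r = r_max * math.sqrt(random.random())
        theta = 2.0 * math.pi * random.random()
        x, y = r * math.cos(theta), r * math.sin(theta)
        # Accept only if it does not overlap any previously placed lens
        if all(math.hypot(x - cx, y - cy) >= lens_d for cx, cy in centers):
            centers.append((x, y))
    return centers

layout = place_lenses(N_LENSES, LENS_DIAMETER, APERTURE_DIAMETER)
```

With these dimensions the aperture is loosely packed, so rejection sampling converges quickly; a random (rather than regular) arrangement helps ensure the lenslets' overlapping sub-images encode distinct perspectives.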

The units in the microlens array have different fields of view, allowing them to image objects from different perspectives. This enables the experimental camera system to reconstruct images of scenes that include partially hidden objects. “This is particularly helpful to image objects with complex structures, as one may not need to rotate the objects to take multiple images from different angles,” Yang says.

To train the deep learning neural network, the researchers displayed images on a monitor and used the lensless camera to capture them, then trained the network on paired sets of the original and captured images. “After training, the neural network learns the physical model of reconstruction, and could then reconstruct object scenes that it has not seen during training,” explains Yang. In effect, the network learns how to map information from the 2D images to reconstruct a 3D scene.
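The paired-data idea can be illustrated with a toy stand-in: treat the camera as an unknown forward operator, collect (scene, capture) pairs, fit a reconstruction operator from the pairs, and apply it to a capture the model has never seen. This NumPy sketch uses a simple linear model and least squares purely for illustration; the authors' actual system is a deep neural network, not a linear solver, and all sizes here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16-element "scenes" measured as 24-element "captures"
n_scene, n_capture, n_pairs = 16, 24, 200

# Unknown camera forward model (stands in for the lensless optics)
A_true = rng.normal(size=(n_capture, n_scene))

# Training pairs: known scenes and the captures they produce
scenes = rng.normal(size=(n_pairs, n_scene))
captures = scenes @ A_true.T

# "Training": fit a reconstruction operator W mapping capture -> scene,
# using only the paired data (no explicit knowledge of A_true)
W, *_ = np.linalg.lstsq(captures, scenes, rcond=None)

# Apply it to a scene never seen during training
test_scene = rng.normal(size=n_scene)
test_capture = test_scene @ A_true.T
recon = test_capture @ W
```

The point of the toy is the workflow: the reconstruction operator is learned entirely from input/output pairs, which is why no separate PSF calibration step is needed.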

To test the camera system's refocusing and 3D imaging capabilities, the researchers lined up small toy characters on a table, captured images through the microlens array, and then used the algorithms to reconstruct the group of objects at different distances. “The reconstructed object scene correctly focuses on the toy characters at the corresponding distance, and blurs those at the foreground and background,” the researchers write.

About the Author

Linda Wilson | Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
