Thermal Ranging Technique Delivers Detailed Images

Sept. 4, 2023
Researchers at Purdue University use thermal imaging and AI to help autonomous vehicles and robots perceive their surroundings at night.

Researchers at Purdue University have developed a technique that overcomes the “ghosting effect” common in thermal imaging, allowing machines to perceive texture and understand a scene even in dark, low-light, or foggy environments.

The Purdue (West Lafayette, IN, USA) researchers have dubbed their method heat-assisted detection and ranging, or HADAR, and they say it could help autonomous vehicles and robots navigate their surroundings more safely than existing approaches.

They note that other technologies have well-known shortcomings for this purpose. Cameras require illumination, making them impractical for navigating outdoors in darkness or other poor conditions. LiDAR, which does provide perception without illumination, is difficult to scale because of signal interference and eye-safety concerns.

In an article in Nature (bit.ly/3qIH0MP) explaining their work, the researchers wrote, “HADAR not only sees texture and depth through the darkness as if it were day but also perceives decluttered physical attributes beyond RGB or thermal vision, paving the way to fully passive and physics-aware machine perception.”

Fanglin Bao, research scientist at Purdue and lead author, adds, “Visible photons during the day carry information about the physical world to our eyes, giving us a sense of reality. On the other hand, the night-time invisible photons can only be deciphered by AI algorithms and advanced sensors. We have proved for the first time that broad daylight and pitch-dark night are strikingly equivalent.”

“Furthermore, HADAR can extract unique temperature and material information about the scene,” he says.

Currently, LiDAR (in combination with sonar, radar, and cameras) is favored for vehicle and robot navigation because thermal imaging typically produces low-contrast images.

“Objects and their environment constantly emit and scatter thermal radiation, leading to textureless images famously known as the ‘ghosting effect,’” Bao explains. “Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost. This loss of information, texture and features is a roadblock for machine perception using heat radiation.”

How HADAR Overcomes the “Ghosting Effect” 

HADAR solves this problem by recovering texture from the scattered heat signals. “We use thermal physics and machine learning combined with spectral resolution in thermal images to achieve this goal,” says Zubin Jacob, associate professor of electrical and computer engineering at Purdue University and another author of the study.

How does HADAR do this?

“HADAR uses hyperspectral thermal imaging, which takes thermal images of the scene for hundreds of different colors in the thermal infrared. These frequencies/colors are invisible to the human eye,” Jacob says.

To accomplish this, the researchers built an imager, HADAR prototype-1, to acquire the images. The imager comprises an A325sc thermal camera from Teledyne FLIR (Wilsonville, OR, USA) and 10 thermal infrared filters from Spectrogon (Taby, Sweden).

Explaining the imaging process, Bao says, “We first use the thermal camera to take images of the scene (car+person+cardboard) for each filter. Different filters have different transmittance spectra, and combined together, they give us the spectral resolution of the scene. The spectral resolution is crucial to HADAR.”
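To make the acquisition step concrete, here is a minimal Python sketch of how per-filter thermal frames might be stacked into a hyperspectral cube. The filter count and frame size follow the prototype description above; the placeholder loader and random data are assumptions for illustration, not code from the study.

```python
import numpy as np

# Illustrative sketch: stack one calibrated thermal frame per bandpass filter
# into a hyperspectral cube of shape (H, W, N_FILTERS).
N_FILTERS = 10        # the prototype uses 10 thermal infrared filters
H, W = 240, 320       # A325sc-class frame size (assumed here)

def load_filter_frame(index: int) -> np.ndarray:
    """Placeholder loader: return one thermal frame (H x W) taken through
    filter `index`, in radiance units. Replace with real camera I/O."""
    rng = np.random.default_rng(index)
    return rng.uniform(0.0, 1.0, size=(H, W))

# Each slice is the scene viewed through one filter with its own transmittance
# spectrum; combined, the slices give the spectral resolution HADAR relies on.
cube = np.stack([load_filter_frame(i) for i in range(N_FILTERS)], axis=-1)
print(cube.shape)  # (240, 320, 10)
```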

The researchers then use a decomposition algorithm called TeX to extract information about the temperature (T), material (e), and texture (X) of each object in the image data.
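The article does not spell out the TeX algorithm, which the paper describes as combining thermal physics with machine learning. Purely as a loose illustration of the underlying physics, the sketch below fits a simple graybody model, Planck's law scaled by a constant emissivity, to a single pixel's spectrum to recover a temperature and an emissivity. It ignores the scattered texture term X, uses assumed band centers in the 8-14 µm window, and is not the authors' TeX decomposition.

```python
import numpy as np
from scipy.optimize import curve_fit

# Physical constants (SI units)
H_PLANCK = 6.626e-34   # Planck constant, J*s
C_LIGHT = 2.998e8      # speed of light, m/s
K_BOLTZ = 1.381e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance at the given wavelength and temperature."""
    a = 2.0 * H_PLANCK * C_LIGHT**2 / wavelength_m**5
    b = H_PLANCK * C_LIGHT / (wavelength_m * K_BOLTZ * temp_k)
    return a / np.expm1(b)

def graybody(wavelength_m, temp_k, emissivity):
    """Graybody model: a constant emissivity scaling a blackbody spectrum."""
    return emissivity * planck(wavelength_m, temp_k)

# Assumed band centers for the ten filters, spanning the 8-14 micron window.
bands = np.linspace(8e-6, 14e-6, 10)

def fit_pixel(measured_radiance):
    """Fit temperature (K) and emissivity for one pixel's measured spectrum."""
    popt, _ = curve_fit(graybody, bands, measured_radiance,
                        p0=(300.0, 0.9), bounds=([200.0, 0.0], [400.0, 1.0]))
    return popt  # (temperature_k, emissivity)

# Example: synthesize a pixel at 305 K with emissivity 0.85 and recover it.
print(fit_pixel(graybody(bands, 305.0, 0.85)))  # approximately [305.0, 0.85]
```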

The researchers display these three attributes in the HSV color space, a representation of color in which “H means Hue. S means Saturation. V means Value (or Brightness),” Bao says. HSV is an alternative to the more common RGB, he adds.
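As a rough illustration of this visualization step, the sketch below normalizes three attribute maps and treats them as the hue, saturation, and value channels of an image. Which attribute drives which channel is an assumption of this sketch, not a mapping reported by the researchers.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def tex_vision(temperature, emissivity, texture):
    """Map the three TeX attributes onto HSV channels and return an RGB image.
    The channel assignment below is illustrative only."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    hsv = np.stack([norm(emissivity),    # hue        <- material signature
                    norm(temperature),   # saturation <- temperature
                    norm(texture)],      # value      <- recovered texture
                   axis=-1)
    return hsv_to_rgb(hsv)               # ready for display with imshow()

# Example with random 64 x 64 attribute maps standing in for TeX output:
rng = np.random.default_rng(0)
rgb = tex_vision(rng.random((64, 64)), rng.random((64, 64)), rng.random((64, 64)))
print(rgb.shape)  # (64, 64, 3)
```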

This process creates the TeX vision images. “HADAR then performs object detection and ranging based on the TeX vision images” to create the maps that autonomous vehicles and robots use to navigate their surroundings, Bao says.
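The article does not name the detection network HADAR uses. The sketch below only illustrates the general idea that a three-channel TeX vision image can be passed to an off-the-shelf detector, here torchvision's Faster R-CNN, exactly like an ordinary RGB frame; the ranging step is omitted.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Illustrative only: any standard detector can consume a 3-channel TeX vision
# image in place of an RGB photograph.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

tex_image = torch.rand(3, 480, 640)        # stand-in for one TeX vision frame
with torch.no_grad():
    detections = model([tex_image])[0]     # dict with boxes, labels, scores
print(detections["boxes"].shape)
```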

Results of a Comparison Study

To evaluate HADAR’s performance and compare it with both RGB optical imaging and a LiDAR point cloud, the researchers used an outdoor scene at night with a car, a human, and a real-scale cardboard cutout of Albert Einstein. RGB optical imaging and LiDAR could not distinguish between the human and the Einstein cutout. LiDAR also could not see the car at night.

“Beyond our major engineering advance, we have made a foray into the foundations of perception,” Bao explains.

About the Author

Linda Wilson | Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
