Scalable Aerial Vision System Could Enhance Environmental Monitoring and Hazard Response

University of Minnesota researchers create a scalable drone swarm system that uses high-resolution cameras and neural networks to analyze smoke plume dynamics.
Sept. 16, 2025
4 min read

What You Will Learn

  • The system employs a manager drone coordinating four worker drones equipped with high-resolution cameras and Nvidia Jetson computers for synchronized data collection.
  • Neural radiance fields (NeRF) enable detailed 3D reconstruction of smoke plumes from multi-angle images, capturing their evolution over time.
  • Cost-effective at approximately $1,000 per drone, the system could be a scalable alternative to expensive satellite and LiDAR technologies.

To detect, track, and analyze wildfire smoke plumes, researchers at the University of Minnesota (Minneapolis, MN, USA) have developed a five-camera embedded vision system mounted on autonomous aerial drones operating as a swarm.

The setup allows them to capture multi-angle images of smoke plumes, enabling 3D image reconstruction of those plumes using neural radiance fields (NeRF), a neural network-based method for building 3D scenes from a set of 2D images.  

The researchers say their system provides on-site data about the dynamic behavior of smoke particles—an important input for improving prediction algorithms for smoke plume dispersion, fire spread, and pollutant emissions. The system could be "a practical and scalable solution for both real-time hazard response and long-term environmental monitoring," they write in an article published in the journal Science of the Total Environment.

Their model also could be applied to other outdoor atmospheric hazards such as snowstorms, sandstorms, and volcanic eruptions.

Components of the Aerial Embedded Vision System

The multi-drone machine vision system includes these components:

  • A manager drone and four worker drones built on Holybro (Hong Kong) S500 V2 quadcopter frames using Holybro Pixhawk 6C flight controllers running ArduPilot, open-source vehicle autonomy software. 
  • A 12-MPixel USB camera from ArduCam (Nanjing, China) mounted on each drone using a three-axis gimbal.
  • A Jetson Orin Nano computer from Nvidia (Santa Clara, CA, USA) on the manager drone, while each of the four worker drones carries an Nvidia Jetson Nano computer.

Gathering and Processing Image Data 

The drones are interconnected via an outdoor Wi-Fi network with a maximum speed of 1,775 Mbps and a coverage range of 200-300 m in the 5 GHz band. The drones run MAVROS, part of the Robot Operating System (ROS). MAVROS facilitates communication between each drone's onboard computer and flight controller, as well as among drones, enabling autonomous swarm operations.
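The coordination pattern—a manager broadcasting a shared capture schedule that each worker expands into its own frame timestamps—can be sketched as follows. This is a minimal illustration over loopback UDP, not the authors' implementation (they use MAVROS/ROS messaging); all message field names are hypothetical, and the 0.125 s interval corresponds to the 8 fps capture rate reported later in the article.

```python
import json
import socket

def make_trigger(start_time: float, interval_s: float, n_frames: int) -> bytes:
    """Encode a synchronized-capture command as a manager might broadcast it.

    Hypothetical message format, for illustration only."""
    return json.dumps({
        "cmd": "capture",
        "start": start_time,     # shared wall-clock start for all workers
        "interval": interval_s,  # seconds between frames (0.125 s = 8 fps)
        "frames": n_frames,
    }).encode()

def parse_trigger(payload: bytes) -> list[float]:
    """Worker side: expand the trigger into this drone's capture timestamps."""
    msg = json.loads(payload)
    return [msg["start"] + k * msg["interval"] for k in range(msg["frames"])]

# Round-trip over loopback UDP to mimic manager -> worker delivery.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(make_trigger(100.0, 0.125, 4), recv.getsockname())
times = parse_trigger(recv.recv(1024))
send.close()
recv.close()
print(times)  # [100.0, 100.125, 100.25, 100.375]
```

Because every worker derives its timestamps from the same broadcast message, all cameras fire on a common clock, which is what makes the multi-angle images usable for 3D reconstruction.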

As directed by the manager drone, the worker drones fly in a synchronized manner and capture images at specific time intervals. The images are compiled and fed into the NeRF model, which outputs a point cloud. The researchers then process the point cloud to remove background noise and segment the smoke plume in 3D, using a combination of two machine learning models: Gaussian Naive Bayes and YOLOv8 (You Only Look Once, version 8).
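The Gaussian Naive Bayes stage of that denoise-and-segment step can be sketched as a point-wise classifier over the NeRF point cloud. The sketch below uses synthetic data and assumed features (per-point position and RGB color); the paper does not publish its exact feature set, so treat this only as an illustration of the technique.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-in for a NeRF point cloud: XYZ position plus RGB color.
# "Smoke" points are grayish and elevated; "background" points are
# green-brown and near the ground. Distributions are purely illustrative.
smoke = np.hstack([rng.normal([0, 0, 6], 1.5, (500, 3)),
                   rng.normal([0.7, 0.7, 0.7], 0.05, (500, 3))])
ground = np.hstack([rng.normal([0, 0, 0.5], 1.0, (500, 3)),
                    rng.normal([0.3, 0.5, 0.2], 0.05, (500, 3))])
points = np.vstack([smoke, ground])
labels = np.array([1] * 500 + [0] * 500)  # 1 = smoke, 0 = background

# Fit Gaussian Naive Bayes on the 6-D features, then keep only the
# points it classifies as smoke, discarding background noise.
clf = GaussianNB().fit(points, labels)
mask = clf.predict(points) == 1
plume = points[mask]
print(plume.shape[0], "of", points.shape[0], "points kept as smoke")
```

In the published pipeline this per-point filtering is combined with YOLOv8 detections, so the 2D detector localizes the plume while the Bayes classifier cleans the 3D reconstruction.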

The entire process is repeated for each time interval.

Related: Delivering High-Resolution Machine Vision for Enterprise Drone Applications

Taken together, the succession of reconstructed models captures a plume’s “evolution over time, showcasing its growth, directional shifts, and eventual dissipation,” they write.

Validating the 3D Reconstruction Model

But does the 3D reconstruction model work in the field? To answer that question, the researchers conducted two tests.

In the first test, designed to measure accuracy, they deployed the drones at an altitude of 10 m over a Ford F-150 pickup truck to capture images. The resulting point cloud had an error rate of 1.8% with a standard deviation of about 0.98%.

In the second test, designed to measure effectiveness, they used two smoke generators to produce smoke plumes reaching about 10 m in height and 1-10 m in width. Each worker drone completed a full circular circuit in 32 seconds, recording 260 images at 8 fps, and flew five data collection circuits in all. After gathering and compiling the images, the researchers completed the reconstruction process, confirming that it captures changes in the plume's dynamics over time.
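The reported numbers are internally consistent: 8 fps sustained over a 32-second circuit yields roughly 256 frames, in line with the ~260 images per circuit. A minimal sketch of such a fixed circular circuit is below; the waypoint helper, radius, and waypoint count are assumptions for illustration, since the paper does not publish its flight-planning code.

```python
import math

def circuit_waypoints(center_xy, radius_m, n_points, altitude_m):
    """Evenly spaced waypoints on one circular data-collection circuit
    around the plume. Hypothetical helper for illustration."""
    cx, cy = center_xy
    return [(cx + radius_m * math.cos(2 * math.pi * k / n_points),
             cy + radius_m * math.sin(2 * math.pi * k / n_points),
             altitude_m)
            for k in range(n_points)]

# 8 fps over a 32 s circuit gives ~256 frames, consistent with the
# roughly 260 images per circuit reported in the field test.
fps, circuit_s = 8, 32
print(fps * circuit_s)  # 256

# 12 waypoints on a 15 m circle at 10 m altitude (assumed values).
wps = circuit_waypoints((0.0, 0.0), 15.0, 12, 10.0)
print(len(wps))  # 12
```

A fixed circuit like this is what the researchers plan to replace with adaptive flight paths in future work.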

“This approach allows for high-resolution data collection across large areas—at a lower cost than satellite-based tools,” explains Nikil Krishnakumar, a graduate research assistant with the Minnesota Robotics Institute at the University of Minnesota and first author of the paper.

Krishnakumar and other authors estimate that the drones cost approximately $1,000 each, presenting a “significantly more affordable alternative to high-resolution LiDAR or multispectral imaging systems.”

Related: Drone Detection System Uses AI to Fight Prison Contraband Smuggling

The researchers plan to continue refining their process by experimenting with such changes as adaptive, rather than fixed, flight circuits and other neural network methods of 3D image reconstruction.

References

Krishnakumar N, Sharma S, et al. 3D characterization of smoke plume dispersion using multi-view drone swarm. Science of the Total Environment, Vol. 980, June 2025. https://doi.org/10.1016/j.scitotenv.2025.179466

About the Author

Linda Wilson

Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
