Unique image processing algorithms increase drone reaction speeds

July 16, 2020
Event cameras enable fast image processing and nimble movement.

Driverless vehicles require quick reaction times. If self-driving cars cannot process changing road conditions and take correct action quickly enough to avoid accidents, automated transportation becomes difficult for passengers to trust.

Researchers at the Robotics and Perception Group at the University of Zurich (Zurich, Switzerland; www.uzh.ch/en) have taken a step toward advancing drone reaction times. In an experiment reported in the paper "Dynamic Obstacle Avoidance for Quadrotors with Event Cameras" (bit.ly/VSD-DGBL), researchers used event cameras and custom algorithms to enable quadcopter drones to achieve reaction speeds up to six times faster than drones equipped with traditional frame-based cameras.

Lead researcher Davide Scaramuzza explains that the researchers had search and rescue operations in mind when attempting to increase drone response times. The faster a drone can navigate, the better use it can make of limited battery life, and the longer it can spend searching for survivors after a natural disaster like an earthquake.

“However, by navigating fast, drones are also more exposed to the risk of colliding with obstacles, and even more if these are moving,” said Scaramuzza.

Event cameras feature image sensors with smart pixels that react only to changes in a scene. Image processing therefore takes less time than with traditional image sensors, and the researchers created object-detection algorithms to take full advantage of this speed.
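To illustrate the difference, consider a minimal sketch in Python (the Event class, field names, and thresholds below are illustrative assumptions, not the researchers' code): because the sensor reports only pixels that changed, downstream processing scales with scene motion rather than with the full sensor resolution.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One event-camera output: a single pixel reporting a brightness change."""
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (microsecond resolution is typical)
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def moving_pixels(events: list[Event], window_s: float, now: float) -> set[tuple[int, int]]:
    """Return the pixels that changed within the last window_s seconds.

    A frame camera would force a scan of every pixel in every frame; here
    the sensor has already discarded static regions, so only moving parts
    of the scene cost any processing time at all.
    """
    return {(e.x, e.y) for e in events if now - e.t <= window_s}

# Example: of three events, only the most recent one still counts as motion.
stream = [Event(10, 20, 0.001, +1), Event(11, 20, 0.002, -1), Event(200, 100, 0.050, +1)]
print(moving_pixels(stream, window_s=0.010, now=0.055))  # {(200, 100)}
```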

The drone used in the experiment has a 6-in. Lumenier (Sarasota, FL, USA; www.lumenier.com) QAV-RXL frame. At the end of each arm, a Cobra (Fengxian District, Shanghai, China; www.cobramotorsusa.com) CM2208-2000 brushless motor drives a 6-in., three-bladed propeller.

A Qualcomm (San Diego, CA, USA; www.qualcomm.com) Snapdragon Flight board provides monocular, vision-based state estimation using the Qualcomm Machine Vision SDK. An NVIDIA (Santa Clara, CA, USA; www.nvidia.com) Jetson TX2 paired with an AUVIDEA (Bavaria, Germany; www.auvidea.eu) J90 carrier board runs the rest of the software stack.

Two front-facing Insightness (Zurich, Switzerland; www.insightness.com) SEES 1 cameras, each with 320 x 240 resolution, connect to the Jetson TX2 via USB. Each camera features a horizontal FOV of 80°, which is narrow for obstacle avoidance applications, says Scaramuzza. His team therefore adopted a vertical rather than horizontal stereo setup, maximizing the overlap between the two cameras' FOVs while preserving a sufficient stereo baseline, enabling obstacle position estimates accurate to within 5 to 10 cm at distances up to 2 m from the cameras.
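A rough back-of-the-envelope shows why the baseline and range matter (a sketch in Python; the baseline and disparity-error values are illustrative assumptions, while the focal length follows from the stated 80° FOV across 320 pixels). Stereo depth obeys Z = f·B/d, so depth uncertainty grows with the square of the distance:

```python
import math

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, depth_m: float,
                disparity_err_px: float) -> float:
    """First-order depth uncertainty: dZ ~ Z**2 / (f * B) * d_err.

    Because the error grows quadratically with distance, accuracy is
    only quoted out to a couple of meters from the cameras.
    """
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

# Focal length in pixels from the stated specs: 80 deg FOV over 320 px width.
f_px = 160 / math.tan(math.radians(40))   # ~190.7 px

B_m = 0.15      # hypothetical vertical baseline, NOT a figure from the article
err_px = 0.5    # hypothetical sub-pixel disparity error

for Z in (1.0, 2.0):
    print(f"at {Z:.0f} m: depth error ~ {depth_error(f_px, B_m, Z, err_px):.3f} m")
```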

The first stage of the experiment tested the camera/algorithm combination with the cameras in a fixed position. The cameras detected a variety of objects of different shapes and sizes with 81% to 97% accuracy, depending on each object's size and the distance from which it was thrown. On average, the cameras took 3.5 ms to detect an object.

In the second stage of the experiment, the cameras were mounted on quadcopter drones that were flown both indoors and outdoors. If the image processing software knew the size of the incoming object in advance, the drone needed only one camera to dodge the object successfully. When the size of the incoming object was unknown, the drone required two cameras, providing stereoscopic vision, to avoid it.
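The single-camera case works because a known physical size pins down range under the pinhole camera model: if the object's real width is known, its apparent width in pixels yields its distance directly. Here is a sketch with hypothetical numbers (the 20 cm object and its 25-pixel apparent width are assumptions for illustration):

```python
import math

def range_from_known_size(object_width_m: float, apparent_width_px: float,
                          focal_px: float) -> float:
    """Pinhole model: Z = f * W / w.

    With the object's real width W known in advance, its apparent width w
    in a single camera's image is enough to recover the distance Z, which
    is why one event camera suffices when the object's size is known.
    """
    return focal_px * object_width_m / apparent_width_px

# Focal length in pixels from the stated 80 deg FOV over a 320 px image width.
focal_px = 160 / math.tan(math.radians(40))

# Hypothetical: a 20 cm object spanning 25 pixels sits about 1.5 m away.
print(f"{range_from_known_size(0.20, 25, focal_px):.2f} m")  # 1.53 m
```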

Even when the objects were thrown from a distance of 3 m and traveling at 10 m/s, the drone successfully dodged 90% of the time. The experimental results suggest that, given the right combination of hardware and software, the reaction speeds of autonomous vehicles can be increased well beyond current norms.
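Those figures imply a tight time budget. An object covering 3 m at 10 m/s allows 300 ms from throw to impact, of which detection consumes only 3.5 ms, leaving the rest for planning and the evasive maneuver itself. The arithmetic below uses only the numbers reported above:

```python
def time_to_impact(distance_m: float, speed_m_s: float) -> float:
    """Time before an object covers the given distance at constant speed."""
    return distance_m / speed_m_s

budget_s = time_to_impact(3.0, 10.0)   # 0.30 s from throw to impact
detection_s = 0.0035                   # average detection latency reported above

print(f"budget: {budget_s * 1000:.0f} ms, "
      f"detection: {detection_s * 1000:.1f} ms, "
      f"remaining for planning and motion: {(budget_s - detection_s) * 1000:.1f} ms")
```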

About the Author

Dennis Scimeca

Dennis Scimeca is a veteran technology journalist with expertise in interactive entertainment and virtual reality. At Vision Systems Design, Dennis covered machine vision and image processing with an eye toward leading-edge technologies and practical applications for making a better world. Currently, he is the senior editor for technology at IndustryWeek, a partner publication to Vision Systems Design. 
