AI helps drones dodge fast-moving objects


Drones capable of autonomous obstacle avoidance aren’t novel; DJI has featured them in its lineup for years. But what about quadcopters that can dodge fast-moving projectiles? That’s what scientists from the University of Maryland and ETH Zurich describe in a newly published paper (“EVDodge: Embodied AI For High-Speed Dodging On A Quadrotor Using Event Cameras”) on the preprint server arXiv.org. They claim that their philosophical “reimagining” of navigation stacks, dubbed Embodied AI, enables virtually any drone to avoid moving obstacles with no more than cameras and an onboard computer.

They intend to release the accompanying code and training data set after the paper is accepted for publication.

“We develop a purposive artificial intelligence-based formulation for the problem of general navigation,” wrote the paper’s coauthors. “Embodied AI … [is] AI design based on the knowledge of agent’s hardware limitations and timing/computation constraints … To our knowledge, this is the first deep learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor.”


So how does it work? The team’s setup comprises a front-facing event camera (an image sensor that reports per-pixel brightness changes with very low latency and high dynamic range) and a lower-resolution down-facing camera, plus sonar and an inertial measurement unit. Drawing on the collected data, a trio of “shallow” AI models (EVDeBlurNet, EVHomographyNet, and EVSegFlowNet) each performs a “modest” task: deblurring and denoising event frames, approximating background motion from the down-facing camera, and segmenting obstacles to compute their motion, respectively. Obstacles are tracked across five consecutive frames to estimate their trajectory or velocity; the system then classifies them by geometry as (1) a sphere with a known radius, (2) an object of unknown shape but known size, or (3) an unknown object with no prior knowledge, and instructs the drone to react accordingly.
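To make that data flow concrete, here is a minimal sketch of how such a pipeline might be wired together. The class, the callables passed into it, and all signatures are hypothetical stand-ins (the authors had not released their code at the time of writing); only the three network names and the five-frame tracking window come from the paper.

```python
# Hypothetical sketch of the EVDodge-style pipeline: three shallow networks
# feeding a simple five-frame trajectory estimator. Not the authors' code.
from collections import deque

import numpy as np


class DodgePipeline:
    """Chains the three 'shallow' networks and a constant-velocity estimator."""

    HISTORY = 5  # obstacles are tracked over five consecutive event frames

    def __init__(self, deblur_net, homography_net, segflow_net):
        self.deblur_net = deblur_net          # deblurs/denoises event frames
        self.homography_net = homography_net  # approximates background motion
        self.segflow_net = segflow_net        # segments obstacles and their flow
        self.centroids = deque(maxlen=self.HISTORY)

    def step(self, front_frame, down_frame, dt):
        """Process one pair of frames; return a velocity estimate or None."""
        clean = self.deblur_net(front_frame)
        ego_motion = self.homography_net(down_frame)      # background motion
        mask, flow = self.segflow_net(clean, ego_motion)  # obstacle segmentation

        # Track the obstacle centroid; bail out if nothing was segmented.
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        self.centroids.append(np.array([xs.mean(), ys.mean()]))
        if len(self.centroids) < self.HISTORY:
            return None  # not enough history yet to estimate a trajectory

        # Constant-velocity fit from first to last of the five tracked frames,
        # in pixels per second; a controller would map this to an evasive move
        # chosen by the obstacle's geometry class.
        return (self.centroids[-1] - self.centroids[0]) / (dt * (self.HISTORY - 1))
```

The split into three small single-purpose networks, rather than one end-to-end model, is what lets the whole stack fit the timing and compute budget of an onboard computer, which is the point of the Embodied AI framing.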

The team conducted tests with four different objects (a spherical ball, a toy car, a model airplane, and a Parrot Bebop 2 drone) either thrown or flown at the target Intel Aero quadcopter from distances upwards of 17 feet. (To enable “robust” estimation, carpets of different textures were laid on the ground to provide “strong contours” in event frames.) Over the course of more than 200 trials, the researchers achieved success rates between 70% and 86%, the latter with the toy car and model airplane. Additionally, they showed that their navigation stack could be adapted to a pursuit task (wherein the Aero followed the Bebop 2), which they say demonstrates its generalizability.

“[Our] philosophy was used to develop a method to dodge dynamic obstacles using only a monocular event camera and onboard sensing,” concluded the researchers, who leave to future work a more robust construction of event frames from the onboard cameras, which might further improve performance. For now, they say they are able to “successfully evaluate and demonstrate the proposed approach in many real-world experiments with obstacles of different shapes and sizes.”


