New York State creates 50-mile corridor for drone technology testing

Nov. 15, 2019
The corridor will provide companies with the space to test UAS and UTM technologies in real world settings to collect data that will help inform the industry and regulators.

In this week’s roundup from the Association for Unmanned Vehicle Systems International, which highlights some of the latest news and headlines in unmanned vehicles and robotics: the State of New York established a first-of-its-kind space for drone testing and regulatory studies, UAS training for marine mammal disentanglements expanded to Scotland, and MIT engineers developed technology that allows delivery vehicles to find front doors without maps of the property.

New York Gov. Cuomo announces completion of 50-mile drone corridor

New York Gov. Andrew M. Cuomo has announced the completion of New York’s 50-mile unmanned traffic management (UTM) drone corridor, which runs from Central New York to the Mohawk Valley.

Considered the first of its kind in the nation, the corridor will provide companies with the space to test UAS and UTM technologies in real world settings to collect data that will help inform the industry and regulators, and push society towards regular commercial drone use.

“The completion of the 50-mile drone corridor is a groundbreaking achievement that caps a key strategy laid out in our CNY Rising plan to make Central New York and the Mohawk Valley a global center for UAS testing and innovation,” says New York Gov. Andrew M. Cuomo.

The announcement of the corridor’s completion comes less than a week after the announcement that the FAA granted the New York UAS Test Site at Griffiss International Airport in Rome, New York, approval to fly UAS beyond visual line of sight (BVLOS) within the first segment of the corridor.

Considered the first “true” BVLOS authority granted to an FAA-designated test site, the approval will allow UAS testing to be conducted without the need for ground-based observers.

To date, Northeast UAS Airspace Integration Research (NUAIR) and the New York UAS Test Site have flown more than 2,500 test flights, all of which required multiple people in the field to maintain visual line of sight with the aircraft.

With this “true” BVLOS approval, those field observers will no longer be required, as NUAIR and the Test Site have demonstrated to the FAA that they can conduct BVLOS operations safely through a combination of the proper safety measures and technologies.

“The ability to fly with this new authority will help develop and advance many aspects of an air traffic management system for unmanned aircraft,” says NUAIR CEO Michael Hertzendorf.

“In order for us to fully employ, operate and unlock the true potential of unmanned systems and achieve a reality where drones are conducting routine missions such as inspecting power lines, protecting critical infrastructure, or delivering medical supplies, we need to ensure the proper safety elements are in place. This authority greatly enhances our ability to test towards that end state.”

Oceans Unmanned, partners expand drone training to Scotland

In an effort to provide UAS support for marine mammal disentanglement response efforts in Scotland, Oceans Unmanned Inc. (OU), the Scottish Entanglement Alliance (SEA) and DARTdrones have expanded their freeFLY initiative into the country.

Through the freeFLY program, launched in 2018, networks of local volunteer drone operators are provided with equipment and hands-on training so they are available to support regional response groups.

OU notes that, over the course of a two-day session it hosted, UAS operators received initial flight training and advanced instruction in safely launching, operating, and recovering drones from small boats and support vessels.

“Based on the available data, the rate of entanglements and range of species impacted appear to be increasing in Scottish waters,” says Ellie MacLennan, coordinator of the SEA project.

“The addition of aerial imagery from on-scene, vessel-launched drones will provide improved situational awareness and increased safety for both the animal and responders.”

One of SEA’s goals is to improve reporting rates of marine animal entanglements. SEA also wants to provide fishermen with opportunities to get involved in entanglement research and disentanglement efforts through workshops and training courses.

The entities note that the freeFLY training was part of a larger workshop that focused on “encouraging better reporting of entanglements, widening Scotland’s existing entanglement response network, and sharing insights to better understand, mitigate and respond to incidents.”

“This event was a great opportunity to work with both SEA and the IWC and hopefully begin a long-term partnership,” says Brian Taggart, chief pilot for Oceans Unmanned.

“We were able to donate complete drone equipment sets, safety gear, and provide a significant amount of on-the-water training for the response teams.”

MIT engineers develop technique that allows robots to find front door without having to map an area in advance

MIT engineers have developed a navigation method that helps last-mile delivery vehicles find the front door without having to map an area in advance.

With the approach developed by MIT engineers, a robot would use clues in its environment to plan out a route to its destination, which wouldn't be described as coordinates on a map, but instead could be described in general semantic terms such as “front door” or “garage.”

So, when a robot is tasked with delivering a package to someone’s front door, it would need only a brief exploration of the property to identify its target. The robot also wouldn’t have to rely on maps of specific residences.

“We wouldn’t want to have to make a map of every building that we’d need to visit,” says Michael Everett, a graduate student in MIT’s Department of Mechanical Engineering.

“With this technique, we hope to drop a robot at the end of any driveway and have it find a door.”

MIT notes that researchers have spent recent years introducing robotic systems to natural, semantic language, training them to recognize objects by their semantic labels so that they can visually process, for example, a door as a door, as opposed to a solid, rectangular obstacle.

“Now we have an ability to give robots a sense of what things are, in real-time,” Everett says.

Everett, along with the co-authors of the paper detailing the results of this research, Jonathan How, professor of aeronautics and astronautics at MIT, and Justin Miller of the Ford Motor Company, are using similar semantic techniques as a launch point for their new navigation approach, which utilizes pre-existing algorithms that extract features from visual data to generate a new map of the same scene, represented as semantic clues, or context.

In their case, the researchers built a map of the environment as the robot moved around using an algorithm called semantic SLAM (Simultaneous Localization and Mapping), as well as the semantic labels of each object and a depth image.

The researchers note that other semantic algorithms have enabled robots to recognize and map objects in their environment for what they are, but those algorithms haven’t allowed a robot, while navigating a new environment, to make real-time decisions about the most efficient path to a semantic destination such as a “front door.”

“Before, exploring was just, plop a robot down and say ‘go,’ and it will move around and eventually get there, but it will be slow,” How says.

The researchers sought to speed up a robot’s path-planning through a semantic, context-colored world, so they developed a new “cost-to-go estimator” algorithm that converts a semantic map created by preexisting SLAM algorithms into a second map, which represents the likelihood of any given location being close to the goal.

“This was inspired by image-to-image translation, where you take a picture of a cat and make it look like a dog,” Everett says.

“The same type of idea happens here where you take one image that looks like a map of the world, and turn it into this other image that looks like the map of the world but now is colored based on how close different points of the map are to the end goal.”

The cost-to-go map is colorized in grayscale: darker regions represent locations far from the goal, and lighter regions represent areas close to it. A sidewalk, coded yellow in the semantic map, might therefore be translated by the cost-to-go algorithm as a darker region than a driveway, which grows progressively lighter as it approaches the front door, the lightest region in the new map.
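The grayscale colorization described above can be sketched in a few lines. This is only an illustrative stand-in: the researchers learn the cost-to-go map with an image-to-image translation network, whereas the toy below simply uses Euclidean distance to the goal cell, and the map layout and label codes are invented for the example.

```python
import numpy as np

# Toy 5x5 semantic map (label codes are placeholders, not the paper's):
# 0 = grass, 1 = sidewalk, 2 = driveway, 3 = front door
semantic_map = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 2, 0, 0],
    [0, 0, 2, 0, 0],
    [0, 0, 3, 0, 0],
])

goal = tuple(np.argwhere(semantic_map == 3)[0])  # front-door cell

# Grayscale cost-to-go: 1.0 (lightest) at the goal, darker with distance.
rows, cols = np.indices(semantic_map.shape)
dist = np.hypot(rows - goal[0], cols - goal[1])
cost_to_go = 1.0 - dist / dist.max()

print(np.round(cost_to_go, 2))
```

The driveway cells nearest the door come out lightest, and the far corners of the yard come out darkest, mirroring the shading the article describes.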

The researchers trained the new algorithm on satellite images from Bing Maps covering 77 houses in one urban and three suburban neighborhoods. For each satellite image, semantic labels and colors were assigned to context features in a typical front yard, such as grey for a front door, blue for a driveway, and green for a hedge. According to the researchers, the system converted each semantic map into a cost-to-go map and plotted the most efficient path to the end goal by following the lighter regions in the map.
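The “follow the lighter regions” step can be pictured as a greedy walk over the cost-to-go grid. The sketch below is a toy illustration under that assumption, not the paper’s planner; the grid values are hand-made for the example.

```python
# Hand-made cost-to-go grid (illustrative values only): lighter = higher.
grid = [
    [0.1, 0.2, 0.3],
    [0.2, 0.4, 0.6],
    [0.3, 0.6, 1.0],  # goal (brightest cell) at (2, 2)
]

def follow_lighter(grid, start):
    """Greedily step to the lightest neighbor until no neighbor is lighter."""
    path = [start]
    r, c = start
    while True:
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < len(grid)
                     and 0 <= c + dc < len(grid[0])]
        best = max(neighbors, key=lambda rc: grid[rc[0]][rc[1]])
        if grid[best[0]][best[1]] <= grid[r][c]:
            return path  # local maximum reached (here, the goal)
        r, c = best
        path.append(best)

print(follow_lighter(grid, (0, 0)))  # → [(0, 0), (1, 1), (2, 2)]
```

A real planner would have to cope with local maxima and obstacles; the point here is only how a brightness gradient alone can pull a robot toward the goal.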

During the training process, the researchers also applied masks to each image to mimic the partial view that a robot’s camera would likely have as it makes its way through a yard.

“Part of the trick to our approach was [giving the system] lots of partial images,” How explains. “So it really had to figure out how all this stuff was interrelated. That’s part of what makes this work robustly.”

The researchers also tested their approach in a simulation of an image of an entirely new house, outside of the training dataset. First, they used the preexisting SLAM algorithm to generate a semantic map. Then, they applied their new cost-to-go estimator to generate a second map, and path to a goal, which in this case was the front door.

According to researchers, their new cost-to-go technique found the front door 189 percent faster than classical navigation algorithms. Everett notes that the results showcase how robots can use context to efficiently locate a goal even in “unfamiliar, unmapped environments.”

“Even if a robot is delivering a package to an environment it’s never been to, there might be clues that will be the same as other places it’s seen,” Everett says. “So the world may be laid out a little differently, but there’s probably some things in common.”

Share your vision-related news by contacting Dennis Scimeca, Associate Editor, Vision Systems Design

