Unmanned systems at CES 2019: Autonomous cars, motorcycles, and underwater vehicles

Jan. 18, 2019

In this roundup from the Association for Unmanned Vehicle Systems International (AUVSI), which highlights some of the latest news and headlines in unmanned vehicles and robotics, the focus is on autonomous vehicle technology at CES 2019, including cars, motorcycles, flying cars, and underwater vehicles. Additionally, learn about researchers at Purdue University's School of Electrical and Computer Engineering who are developing integrative language and vision software that could potentially enable an autonomous robot not only to interact with people in different environments, but also to accomplish navigational goals.

Autonomous tech moves up, down and underwater at CES 2019

Drones and self-driving cars continue to be all the rage at CES, the former Consumer Electronics Show, but autonomous technology is also making its way into things such as motorcycles with self-driving capabilities, flying cars and even underwater drones.

BMW made a splash at the show with its iNEXT vehicle, which aims to answer the question about what a vehicle interior can look like when the car no longer has to be driven by a human.

"The interior can be a place for relaxation, interaction, entertainment, or concentration, as preferred," the company says. "It is more like a comfortable and fashionably furnished 'living space' on wheels — a new 'favorite space.'"

The controls are out of sight and only appear when a person needs to help operate the vehicle, which BMW dubs "shy tech."

BMW also highlighted its self-driving chops with an R 1200 GS motorcycle that could drive itself. As demonstrated on an outside lot at the event, the R 1200 GS could steer and stop itself while its operator just enjoyed the ride.

The technology isn’t intended to replace actual riders, just to make them better, BMW says.

"Development of this test vehicle will provide valuable insights into riding dynamics, which can then be used to help the rider recognize dangerous situations and master difficult driving maneuvers," the company says.

Big automakers aren’t the only ones getting into the self-driving act, as there is still room for small companies to gain a foothold. BMW’s iNEXT concept uses lidar and computer vision systems from Innoviz Technologies, a three-year-old Israeli startup staffed mostly by engineers with military backgrounds.

Innoviz’ lidar is a "small, solid-state solution," just a few inches on each side and capable of being mass produced so it’s affordable, says Omer Keilaf, the company’s CEO and cofounder.

Going by the SAE standards for vehicle automation, for level 3, conditional automation, "you need only one," Keilaf says. "If you’re talking about level 4 or 5 [where the car can drive itself all or some of the time], you probably need several around the car to cover 360, but it’s cheaper than the spinners [spinning lidars, often mounted on the roof]."
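As a rough illustration of Keilaf's point about covering 360 degrees with fixed units, the short sketch below estimates how many solid-state sensors a full surround view might take. The 120-degree field of view and 10 degrees of seam overlap are hypothetical numbers for illustration only, not Innoviz specifications.

import math

def lidar_units_for_full_coverage(horizontal_fov_deg: float, overlap_deg: float = 10.0) -> int:
    """Estimate how many fixed, solid-state lidar units are needed to cover
    360 degrees around a vehicle, assuming each unit contributes its
    horizontal field of view minus a small overlap with its neighbors."""
    effective_fov = horizontal_fov_deg - overlap_deg
    if effective_fov <= 0:
        raise ValueError("Overlap must be smaller than the field of view")
    return math.ceil(360.0 / effective_fov)

# Hypothetical numbers: a 120-degree unit with 10 degrees of overlap per seam
# suggests about four units for surround coverage, versus one roof-mounted spinner.
print(lidar_units_for_full_coverage(120.0))  # -> 4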

"BMW is a very tough customer, very demanding," he says.

"We developed three chips, all the optics, all the mechanics, and the computer vision, that’s a lot to do in a short period of time, automotive grade. Thank God we’re not in medical. That’s the only industry I think is worse."

Flying cars

Bell turned heads with the debut of its Bell Nexus concept, a five-seat flying taxi styled like an upsized drone, complete with six ducted-fan rotors. It uses a turboshaft engine to power a generator that in turn distributes power to the motors.

"This is our solution for urban air mobility," says Chad Stecker, the program manager. "We’re targeting a 150-mile range at 150 miles per hour."

The Nexus is aimed at point-to-point urban flying, similar to concepts from Airbus, Uber and others. It takes off vertically and then the fans tilt to provide forward thrust.
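Taken together, Stecker's targets of a 150-mile range at 150 miles per hour imply roughly an hour of cruise endurance. The quick sketch below simply restates that arithmetic from the figures quoted here; it is not a Bell performance model.

def cruise_endurance_hours(range_miles: float, cruise_speed_mph: float) -> float:
    """Endurance implied by a stated range at a stated cruise speed,
    ignoring hover, climb, descent, and reserve requirements."""
    return range_miles / cruise_speed_mph

# Figures quoted for the Bell Nexus concept: 150 miles at 150 mph.
print(cruise_endurance_hours(150.0, 150.0))  # -> 1.0 hour of cruise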

"We’re actively developing the prototype aircraft," Stecker says. "We talked about having a certified air vehicle entry into service in the mid 2020s, targeting 2025 right now."

The prototype aircraft will be fully autonomous, he says.

Under the sea

Another trend that has been picking up at CES in recent years is that of underwater drones, or personal ROVs (remotely operated vehicles). These are essentially smaller, cheaper versions of vehicles that have been used for years by the oil and gas industry and others for underwater exploration.

A couple of years ago, only one or two such systems were on display. This year, numerous systems took up a sizable chunk of the show’s drone hall.

One of the pioneers of such systems is Sublue, which sells WhiteShark Max, an optionally tethered underwater drone with six motors.

The company started out doing underwater inspection, anti-explosive work, and other commercial jobs. Two years ago it turned its attention to the consumer market and first drew notice by building small motors that pull swimmers through the water, enabling them to stay under longer.

"It’s quite a big value if we can utilize some of this robotic technology, including dynamic motion control, or visual AI. That’s why we are coming into this market, and we look forward to bringing some change to this very old and conventional market," says Sublue Vice President Jun Li.

Li says competition in the market is good because it will help boost awareness of the state of the oceans, which are warming and increasingly polluted.

Next year, the company plans to have an autonomous ROV that can follow a diver around, something that existed only in prototype form at this year’s show.

PowerVision Technology Group unveils new unmanned water technology

During CES 2019, UAS, robotics, and data technology company PowerVision Technology Group announced its new water drone offerings, which include the PowerDolphin, PowerRay, and PowerSeeker.

Each offering is equipped with its own unique capabilities.

Capable of capturing 4K photography and video, the PowerDolphin offers both intelligent fishing and water-mapping functions. With its external mounting equipment, the platform can find fish, lure them, and perform troll fishing; it can also directly tow hooks and lure fish to any desired location.

Additionally, the PowerDolphin’s front nose houses a 4K camera with 220 degrees of dual-joint rotation, allowing it to capture photos or video both above and below the water.

"Whether it is thrilling water sports, an exciting bait dropping process, or the magnificent sea scenery below, all can be glanced at from the first-person view perspective," PowerVision says of its PowerDolphin.

Images captured by the PowerDolphin can be sent in real time, using ultra-long-range wireless 1080P image transmission, to Vision+, PowerVision’s dedicated multi-product app. Users can flip the camera preview and recording interface with a single click in Vision+.

The PowerDolphin comes in three packages (Standard, Explorer, and Wizard models), which are expected to be ready for shipment by the end of the first quarter of 2019.

"The PowerDolphin is a new lifestyle robot that is not only suitable for water sports, photography, and fishing," says Wally Zheng, founder and CEO of PowerVision.

"It is a product that will help a scientist explore marine life in many different ways. From ocean reefs to animal observation, the PowerDolphin is a multipurpose tool."

The PowerSeeker intelligent fish finder helps anglers accurately pinpoint active fishing spots, providing real-time fish detection down to 131 feet underwater.

Thanks to an intelligent sonar device and a GPS waypoint function, it can also draw underwater topographic maps, which the company says is a first in the marine drone industry.
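The article does not describe PowerVision's mapping pipeline, but the general technique of building a rough bathymetric map from paired GPS fixes and sonar depth readings can be sketched as follows. The grid resolution and the sample format are assumptions made purely for illustration.

from collections import defaultdict

def build_depth_grid(samples, cell_size_deg=0.0001):
    """Bin (latitude, longitude, depth_m) sonar samples into a coarse grid and
    average the depths per cell, yielding a simple underwater topographic map.
    cell_size_deg is an assumed resolution, roughly 10 m at mid-latitudes."""
    cells = defaultdict(list)
    for lat, lon, depth_m in samples:
        key = (round(lat / cell_size_deg), round(lon / cell_size_deg))
        cells[key].append(depth_m)
    return {key: sum(depths) / len(depths) for key, depths in cells.items()}

# Hypothetical samples: GPS waypoint positions paired with sonar depth readings.
samples = [
    (40.00010, -86.00010, 3.2),
    (40.00011, -86.00012, 3.4),
    (40.00050, -86.00050, 7.9),
]
for (row, col), depth in sorted(build_depth_grid(samples).items()):
    print(f"cell ({row}, {col}): mean depth {depth:.1f} m")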

Purdue University researchers developing autonomous robot capable of interacting with humans

Researchers at Purdue University's School of Electrical and Computer Engineering are developing integrative language and vision software that could potentially enable an autonomous robot to not only interact with people in different environments, but also accomplish navigational goals.

Led by Associate Professor Jeffrey Mark Siskind, the research team—which also includes doctoral candidates Thomas Ilyevsky and Jared Johansen—is developing a robot named Hosh that can integrate graphic and language data into its navigational process in order to locate a specific place or person.

Hosh is being developed thanks to a grant funded by the National Science Foundation’s National Robotics Initiative.

​"The project’s overall goal is to tell the robot to find a particular person, room or building and have the robot interact with ordinary, untrained people to ask in natural language for directions toward a particular place," Siskind explains.

"To accomplish this task, the robot must operate safely in people’s presence, encourage them to provide directions and use their information to find the goal."

Among many possibilities, the researchers believe that Hosh could help self-driving cars communicate with passengers and pedestrians, or help complete small-scale tasks in a business place such as delivering mail.

After receiving a task to locate a specific room, building or individual in a known or unknown location, Hosh will unite novel language and visual processing so that it can navigate the environment, ask for directions, request doors to be opened or elevator buttons pushed and reach its goal.

In order to give the robot "common sense knowledge," the researchers are developing high-level software. Common sense knowledge, the researchers note, is the ability to understand objects and environments with human-level intuition. With this knowledge, Hosh would be able to recognize navigational conventions.

An example of this is Hosh incorporating both spoken statements and physical gestures into its navigation process.

"The robot needs human level intuition in order to understand navigational conventions," Ilyevsky says. "This is where common sense knowledge comes in. The robot should know that odd and even numbered rooms sit across from each other in a hallway or that Room 317 should be on the building’s third floor."

The researchers will develop integrative natural language processing and computer vision software to build the robot’s common sense knowledge. Typically, they note, natural language processing enables a robot to communicate with people while computer vision software enables it to navigate its environment; in this project, the two are being advanced to inform each other as the robot moves.
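The navigational conventions Ilyevsky mentions lend themselves to explicit encoding. The minimal sketch below is not the Purdue team's software; it only shows how the two heuristics quoted above, the leading digit indicating the floor and odd and even rooms facing each other across a hallway, might be expressed as a simple rule.

def infer_room_location(room_number: int) -> dict:
    """Encode two common-sense conventions cited by the researchers: the
    leading digit of a room number usually indicates the floor, and odd and
    even rooms usually sit across from each other in a hallway."""
    floor = int(str(room_number)[0]) if room_number >= 100 else 1
    side = "odd side of the hallway" if room_number % 2 else "even side of the hallway"
    return {"room": room_number, "expected_floor": floor, "expected_side": side}

# Example from the article: Room 317 should be on the building's third floor.
print(infer_room_location(317))
# {'room': 317, 'expected_floor': 3, 'expected_side': 'odd side of the hallway'}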

"The robot needs to understand language in a visual context and vision in a language context," Siskind says. "For example, while locating a specific person, the robot might receive information in a comment or physical gesture and must understand both within the context of its navigational goals."

As the technology advances, the researchers expect to send the robot on autonomous missions of increasing difficulty. Initially, the robot will learn to navigate indoors on a single floor; then, to move to other floors and buildings, it will ask people to operate the elevator or open doors for it.

The researchers hope to reach the point where they can begin conducting outdoor missions in the spring.

"We expect this technology to be really big, because the industry of autonomous robots and self-driving cars is becoming very big," Siskind says. "The technology could be adapted into self-driving cars, allowing the cars to ask for directions or passengers to request a specific destination, just like human drivers do."
