Depth Cameras Can Fill LiDAR’s Autonomous Vehicle Blind Spots—Here’s How

Aug. 1, 2022
Many developers of autonomous vehicles believe LiDAR is a key enabling technology. But LiDAR doesn't see everything.

David Chen, co-founder and chief technical officer of Orbbec (Troy, MI, USA; www.orbbec3d.com)

According to an eyewear industry research organization, 73% of all drivers in the United States require some amount of vision correction to drive safely. Yet for autonomous vehicles—the anticipated future of personal transportation—that number is currently stuck at 100%.

Despite all their advanced technology, autonomous vehicles (AVs), in some important ways, are driving blind. LiDAR (Light Detection and Ranging), a method in which invisible pulsed laser light is bounced off objects to determine their range, receives a lot of attention as an enabling technology for AVs. While not entirely new, LiDAR is considered by many to be pivotal to the practical realization of self-driving cars.

LiDAR provides simultaneous localization and mapping (SLAM) capability, solving the seemingly impossible challenge of mapping an unknown environment, and the vehicle's place within it, in near real time, even as the vehicle moves at high speed. With SLAM, AVs will have the long-distance information they need to operate. But there are other challenges to overcome.
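To make the mapping half of that idea concrete, here is a toy Python sketch. It assumes a pose estimate is already available (in a real SLAM system the pose and the map are estimated jointly) and simply transforms each range scan into a shared world frame; every number and function name is illustrative, not drawn from any production AV stack.

```python
import numpy as np

def scan_to_world(pose, ranges, angles):
    """Transform a 2D range scan into world coordinates.

    pose   -- (x, y, heading) of the vehicle in the world frame
    ranges -- distances returned by the sensor, in meters
    angles -- beam angles relative to the vehicle, in radians
    """
    x, y, theta = pose
    # Beam endpoints in the vehicle frame.
    px = ranges * np.cos(angles)
    py = ranges * np.sin(angles)
    # Rotate by the heading and translate into the world frame.
    wx = x + px * np.cos(theta) - py * np.sin(theta)
    wy = y + px * np.sin(theta) + py * np.cos(theta)
    return np.column_stack([wx, wy])

# Accumulate a simple point-cloud map as the vehicle moves.
world_map = []
poses = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.05)]   # illustrative odometry
for pose in poses:
    angles = np.linspace(-np.pi / 4, np.pi / 4, 91)
    ranges = np.full_like(angles, 10.0)        # stand-in for real returns
    world_map.append(scan_to_world(pose, ranges, angles))
world_map = np.vstack(world_map)
print(world_map.shape)  # (182, 2) points in the shared map
```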

Proximity Issues

Although LiDAR is effective at detecting objects up to several hundred meters away, identifying objects at close range, just a few meters out, is nearly impossible. When microseconds count before a collision, LiDAR cannot tell whether the object about to be struck is a person or a trash can. Furthermore, LiDAR has security vulnerabilities yet to be countered, including spoofing attacks in which lasers fool LiDAR systems into reporting objects as closer or farther than they actually are.

Another issue involves activity inside the vehicle, particularly during the interim period before full autonomy is realized. AVs are clever—but not yet clever enough to overcome challenges posed by humans within the vehicle.

For some years to come, drivers will need to be ready to take over the vehicle in response to changing conditions or situations. Once AVs reach SAE Level 3 or higher, for example, it will be easy for drivers to take too much for granted, perhaps even fall asleep. Yet with Level 3 systems, the car will sometimes require the driver to reacquire control. Without a way to detect when drivers are deeply distracted or incapacitated, the risk rises dangerously.

Finally, some manufacturers, including Tesla and Toyota, believe that the cost of LiDAR eliminates it from serious contention. Although cost is a valid sticking point, scores of LiDAR companies around the world are working to make such systems practical from both the engineering and business perspectives.

Filling the Gap

LiDAR stands a good chance of finding a defining place in the AV market. But many believe that depth vision systems, integrated alongside LiDAR, are well suited to removing the remaining blind spots inside and outside the vehicle. If they are right, 3D imaging will be the last piece needed to make autonomous driving a reality on the world's streets and highways.

Depth cameras use RGBD (color plus depth) technology. They are typically implemented with a dual-camera pair that provides stereo vision, enabling depth perception of the surrounding area, including the position, distance, and speed of nearby objects, plus an RGB (color) camera that adds texture.
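The geometry behind stereo depth is compact enough to show directly. In the hedged sketch below, depth is recovered from disparity as Z = f * B / d; the focal length and baseline are made-up values for illustration, and a real system would obtain its disparity map from calibrated, rectified image pairs (for example, with OpenCV's stereo block-matching routines).

```python
import numpy as np

# Illustrative camera parameters (assumed, not from any specific product).
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.075    # distance between the two cameras, in meters

def disparity_to_depth(disparity_px):
    """Convert a disparity map (pixels) to metric depth: Z = f * B / d."""
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0  # zero disparity means no match / infinite range
    depth[valid] = FOCAL_PX * BASELINE_M / disparity_px[valid]
    return depth

# A synthetic disparity map standing in for real stereo-matcher output.
disparity = np.array([[35.0, 17.5], [7.0, 0.0]])
print(disparity_to_depth(disparity))
# 35 px -> 1.5 m, 17.5 px -> 3.0 m, 7 px -> 7.5 m, 0 px -> inf
```

Note how depth resolution degrades with range: a one-pixel disparity error matters far more at 7.5 m than at 1.5 m, which is one reason stereo depth excels close in while LiDAR owns the long range.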

RGBD technology delivers a level of close-range performance that LiDAR cannot match. LiDAR focuses on long distances, usually a target range of about 300 meters, but with a comparatively sparse point cloud. RGBD sensors, on the other hand, cover close range at much higher density within their sensing area. This difference allows them to detect very small objects on the road, such as an animal darting across the street. With RGBD technology, cameras can recognize and differentiate objects the instant they enter the field of view. Combining an image chip with thousands of receiving elements, RGBD systems can capture a scene and read objects and their positions far more quickly than humans can, even in complete darkness.
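The density difference is easy to quantify with rough, assumed numbers. The back-of-the-envelope sketch below compares how many measurement points each sensor lands on a small object at short range; the specifications are illustrative, not vendor figures.

```python
import math

# Illustrative specs (assumptions for comparison, not vendor figures).
LIDAR_H_RES_DEG = 0.2          # LiDAR horizontal angular resolution
LIDAR_V_RES_DEG = 0.4          # LiDAR vertical beam spacing
CAM_FOV_DEG = (60.0, 45.0)     # depth-camera field of view (h, v)
CAM_RES = (640, 480)           # depth-camera resolution (w, h)

def points_on_target(width_m, height_m, range_m):
    """Rough count of measurement points each sensor puts on a target."""
    # Angular size of the target as seen from the sensor.
    w_deg = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    h_deg = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    lidar = (w_deg / LIDAR_H_RES_DEG) * (h_deg / LIDAR_V_RES_DEG)
    cam = (w_deg * CAM_RES[0] / CAM_FOV_DEG[0]) * \
          (h_deg * CAM_RES[1] / CAM_FOV_DEG[1])
    return lidar, cam

lidar_pts, cam_pts = points_on_target(0.3, 0.3, 5.0)  # cat-sized object at 5 m
print(f"LiDAR: ~{lidar_pts:.0f} points, depth camera: ~{cam_pts:.0f} points")
```

With these assumed specs, a cat-sized object five meters ahead collects roughly 150 LiDAR returns but well over a thousand depth-camera pixels, which is the margin that makes small-object classification practical at close range.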

Fast and Intelligent

Precise detection of the immediate vicinity is critical for successful AV development. Deep learning can teach 3D systems to sense and recognize objects with a high degree of fidelity. While LiDAR handles SLAM and navigation, depth cameras can ensure obstacle perception and identification.
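As a hedged illustration of that pairing, the sketch below runs an off-the-shelf 2D detector from torchvision over the color channel of an RGBD frame. A production AV perception stack would instead use models trained on fused depth-and-color data, and the random tensor here merely stands in for a real capture.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Load a pretrained general-purpose detector (trained on COCO classes).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A random tensor standing in for the RGB channels of an RGBD frame,
# shaped (channels, height, width) with values in [0, 1].
rgb = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([rgb])[0]

# Keep only confident detections; each comes with a box and a class label.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])
```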

3D provides the intelligent close-in vision that LiDAR lacks. It can distinguish a motorcycle from a deer, or a pedestrian from someone on a scooter or skateboard. It can even tell the difference among a dog, a raccoon, an opossum, and a rock.

Within the car itself, RGBD sensors can provide the safeguards AVs need. Because they can recognize and learn objects, RGBD sensors can tell whether a driver's head is nodding or has been turned away from the road ahead for extended periods. This capability will be critical until autonomous driving reaches full, safe maturity, a process that will likely take years, if not decades.
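One simple way such monitoring might be operationalized is to watch head pitch over time. The fragment below is a hypothetical sketch: estimate_head_pitch stands in for whatever head-pose model an RGBD pipeline provides, and the thresholds are illustrative.

```python
import random
from collections import deque

def estimate_head_pitch(depth_frame):
    """Stand-in for a real RGBD head-pose estimator (hypothetical)."""
    return random.uniform(-40.0, 10.0)  # pitch in degrees; negative = nodding down

NOD_THRESHOLD_DEG = -25.0   # illustrative: head tilted well below the road
WINDOW_FRAMES = 30          # roughly one second at 30 fps

recent_pitch = deque(maxlen=WINDOW_FRAMES)

def driver_may_be_drowsy(depth_frame):
    """Flag the driver if the head stays nodded down for a full window."""
    recent_pitch.append(estimate_head_pitch(depth_frame))
    return (len(recent_pitch) == WINDOW_FRAMES
            and all(p < NOD_THRESHOLD_DEG for p in recent_pitch))

for frame in range(90):                 # stand-in for a live depth stream
    if driver_may_be_drowsy(None):
        print(f"frame {frame}: alert, possible drowsiness")
```

Requiring the condition to hold for a full window, rather than a single frame, keeps an ordinary downward glance from triggering a handover alarm.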

Dual Systems in Development

Many systems developers, including those in the AV industry, are hard at work integrating LiDAR and depth cameras. In the automotive realm, Mobileye (Jerusalem; www.mobileye.com), an Intel (Santa Clara, CA, USA; www.intel.com) subsidiary, is combining LiDAR and cameras to build a near/far sensory environment that is both redundant and complementary. Others are also furthering the intelligent integration of depth cameras and LiDAR.
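A minimal version of that near/far hand-off can be sketched in a few lines. The fragment below is an assumption-laden illustration, not Mobileye's method: it keeps depth-camera points at close range and LiDAR points beyond a cutoff, merging the two clouds, which are presumed already calibrated and time-aligned into a shared vehicle frame.

```python
import numpy as np

HANDOFF_RANGE_M = 20.0  # illustrative cutoff between near and far sensing

def fuse_clouds(depth_cam_pts, lidar_pts):
    """Merge two point clouds already expressed in the vehicle frame.

    Depth-camera points are trusted at close range, LiDAR beyond the
    cutoff. Each input is an (N, 3) array of x, y, z values in meters.
    """
    cam_range = np.linalg.norm(depth_cam_pts, axis=1)
    lidar_range = np.linalg.norm(lidar_pts, axis=1)
    near = depth_cam_pts[cam_range <= HANDOFF_RANGE_M]
    far = lidar_pts[lidar_range > HANDOFF_RANGE_M]
    return np.vstack([near, far])

# Synthetic stand-ins for calibrated, time-aligned sensor outputs.
cam = np.random.uniform(-15, 15, size=(1000, 3))
lidar = np.random.uniform(-200, 200, size=(1000, 3))
fused = fuse_clouds(cam, lidar)
print(fused.shape)
```

A real system would blend the two sources in an overlap band rather than switching at a hard cutoff, precisely so that each sensor's weaknesses are covered by the other's strengths.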

To do their work, RGBD sensors must be properly placed. For low-speed, small-sized vehicles, two front-facing and two side-facing (one on each side) depth cameras are a common setup. Larger vehicles, or those for more specialized applications, may need additional units.
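Such a layout can be captured as a simple table of mounting poses. The sketch below encodes the four-camera arrangement as extrinsics in the vehicle frame; every position and angle here is hypothetical.

```python
import math

# Hypothetical mounting poses for a small, low-speed vehicle:
# (x, y, z) offsets in meters from the vehicle origin, plus yaw in radians.
CAMERA_RIG = {
    "front_left":  {"xyz": (2.0,  0.4, 0.8), "yaw": math.radians(10)},
    "front_right": {"xyz": (2.0, -0.4, 0.8), "yaw": math.radians(-10)},
    "side_left":   {"xyz": (1.0,  0.9, 0.8), "yaw": math.radians(90)},
    "side_right":  {"xyz": (1.0, -0.9, 0.8), "yaw": math.radians(-90)},
}

for name, mount in CAMERA_RIG.items():
    x, y, z = mount["xyz"]
    print(f"{name}: at ({x:.1f}, {y:.1f}, {z:.1f}) m, "
          f"yaw {math.degrees(mount['yaw']):+.0f} deg")
```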

As with most automotive technology, performance, reliability, cost, and production repeatability are defining factors. At mass-production quantities, depth cameras become both practical and highly cost-effective. Developers should look for supply partners experienced in mass production; a proven ability to produce a million or more units will ensure both affordability and capacity. Even more important, especially at the early stages, is a supplier with broad experience in associated technologies. A partner that understands stereo vision, indirect Time-of-Flight (iToF), and structured light will be able to help developers tackle extreme or unexpected scenarios.

Success Around the Corner

The ultimate success of autonomous vehicles will depend on reliable, tested, cost-effective, and foolproof vision systems. The safest solution that solves all these needs is a combination of LiDAR and 3D. Together they can instantly recognize a vehicle’s position, along with objects in its vicinity, to execute maneuvers in the fractions of a second necessary to avoid difficulties or even disaster. At the same time, they will ensure the readiness of occupants to take control, should that be necessary. For the time being, AVs continue to have vision problems—but together, 3D and LiDAR are the far-sighted solution.
