ADAS: A compelling case study of computer vision success

March 30, 2016
The fully autonomous vehicle trials underway at established automobile companies such as Ford and GM, along with upstarts such as Baidu and Google (and, persistent rumors suggest, Apple), may capture the bulk of popular attention. But their precursor, ADAS (advanced driver assistance systems), is more quietly but quite rapidly becoming a huge technology success story.

Only a few years ago, high-end brands like Mercedes-Benz offered ADAS features such as collision avoidance solely in their premium luxury models. Today, ADAS is mainstream. Honda, for example, now offers extensive ADAS capabilities even in its ~$20,000 entry-level Civic, and within the year, Toyota plans to make autonomous braking a standard feature across its product line.

Although the ADAS implementations used in high-end vehicles may combine data generated by multiple sensor technologies, camera-based approaches are increasingly common across the board and are likely to be the sole sensing approach used in entry-level and even mainstream models. Why? As Strategy Analytics analyst Ian Riches observed in a recent article:

  • Some ADAS functions, such as lane departure warning, traffic sign recognition, and pedestrian detection, either require a camera or cannot currently be done as effectively using another method (a minimal lane-detection sketch follows this list).
  • Cameras are now price- and performance-competitive in applications traditionally associated with radar or LIDAR (light detection and ranging) sensors, such as adaptive cruise control and automatic emergency braking.
  • For many ADAS functions, a combined radar-plus-camera platform offers the optimal solution (a toy fusion example also follows this list). For lower-end, cost-sensitive products, however, Strategy Analytics believes that the camera-only approach is far more likely to be successful than a radar-only alternative.
  • Cameras are becoming an integral part of parking-assistance solutions, including high-end surround view systems that include four or more cameras per vehicle.
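
To make the camera-only case concrete, below is a minimal sketch of one such function, lane departure warning, written in Python with OpenCV. It is an illustration only: the Canny thresholds, Hough parameters, region-of-interest geometry, pixel-offset tolerance, and the "dashcam.mp4" input are all assumed values, not figures from any production ADAS system.

    import cv2
    import numpy as np

    def lane_offset(frame):
        """Estimate how far the detected lane center sits from the image center."""
        h, w = frame.shape[:2]
        edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)

        # Keep only a trapezoidal region of road ahead of the vehicle.
        mask = np.zeros_like(edges)
        roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                         (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
        cv2.fillPoly(mask, roi, 255)
        edges = cv2.bitwise_and(edges, mask)

        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=40, maxLineGap=100)
        if lines is None:
            return None

        # Extrapolate each segment to the bottom of the frame and average
        # the crossing points as a crude lane-center estimate.
        xs = []
        for x1, y1, x2, y2 in lines[:, 0]:
            if y1 != y2:  # skip near-horizontal segments
                xs.append(x1 + (x2 - x1) / (y2 - y1) * (h - y1))
        return float(np.mean(xs)) - w / 2 if xs else None

    cap = cv2.VideoCapture("dashcam.mp4")  # hypothetical input clip
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        offset = lane_offset(frame)
        if offset is not None and abs(offset) > 80:  # assumed tolerance, in pixels
            print("Lane departure warning: offset %.0f px" % offset)

A production system would track markings over time and reason in road coordinates rather than raw pixels; the point here is simply that the entire function can be built from a single camera feed.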
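
And to suggest why a combined radar-plus-camera platform can beat either sensor alone, here is a toy fusion of two noisy range estimates via inverse-variance weighting; the noise standard deviations are assumptions chosen for illustration, not sensor specifications.

    def fuse_ranges(radar_m, camera_m, radar_sigma=0.5, camera_sigma=2.0):
        """Combine two noisy range estimates of the same target.

        Inverse-variance weighting: the less noisy sensor gets more weight,
        and the fused variance is lower than either input's.
        """
        wr, wc = 1.0 / radar_sigma ** 2, 1.0 / camera_sigma ** 2
        fused = (wr * radar_m + wc * camera_m) / (wr + wc)
        return fused, (wr + wc) ** -0.5

    # Radar reports 42.0 m, camera reports 44.5 m; the fused estimate
    # lands near the (assumed) more precise radar reading.
    print(fuse_ranges(42.0, 44.5))

The fused estimate carries lower variance than either sensor alone, which is the statistical core of the combined-platform argument; real implementations use Kalman-style filters over tracked objects.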

Riches' comments focus on ADAS implementations involving cameras mounted on the vehicle exterior. Additional opportunities come from cameras located in the vehicle interior: to monitor the driver for slumber, distraction, or an unstable emotional state, for example; to enable gesture control of the vehicle's infotainment system without requiring the driver to redirect attention from the road ahead; or to tailor vehicle attributes to whoever is sitting in the driver and passenger seats. And the ultimate aim of these various ADAS features is to largely or completely free the vehicle's occupants from the driving task.
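
As a small illustration of the interior-monitoring idea, the sketch below (again Python with OpenCV) flags possible slumber when the in-cabin camera stops detecting the driver's eyes for a run of consecutive frames. It uses OpenCV's stock Haar cascades; the camera index and frame-count threshold are assumptions, and production driver-monitoring systems rely on far more robust gaze and head-pose models.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)  # in-cabin camera, assumed at device index 0
    eyes_missing = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found_eyes = False
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            upper_face = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
            if len(eye_cascade.detectMultiScale(upper_face, 1.1, 5)) > 0:
                found_eyes = True
        eyes_missing = 0 if found_eyes else eyes_missing + 1
        if eyes_missing > 30:  # roughly one second at 30 fps; assumed threshold
            print("Driver alert: eyes not detected")

Counting consecutive eye-free frames rather than reacting to a single miss keeps the heuristic from firing on blinks or momentary detector dropouts.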

ADAS is a key focus area for many of the Member companies of the Embedded Vision Alliance. As such, you'll find plenty of content on the Alliance website devoted to the topic: dozens of technical presentations and demonstration videos, for example, along with an abundance of technical articles.

ADAS (and autonomous vehicle) technology will also be extensively showcased at the Alliance's upcoming Embedded Vision Summit conference. Highlights include the following presentations:

  • "Using Vision to Enable Autonomous Land, Sea and Air Vehicles," a keynote from Larry Matthies, Senior Scientist at the NASA Jet Propulsion Laboratory
  • "Making Existing Cars Smart Via Embedded Vision and Deep Learning" from Stefan Heck, CEO and Co-founder of Nauto
  • "Computer Vision in Cars: Status, Challenges, and Trends" from Marco Jacobs, Vice President of Marketing at videantis, and
  • "Sensing Technologies for the Autonomous Vehicle" from Tom Wilson, ADAS Product Line Manager at NXP Semiconductors

The Embedded Vision Summit, an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software, takes place in Santa Clara, California, May 2-4, 2016. Register now, as space is limited and seats are filling up!

Regards,

Brian Dipert
Editor-in-Chief, Embedded Vision Alliance
[email protected]
