New algorithms and tools accelerate embedded vision adoption

May 1, 2019
Since we started the Embedded Vision Alliance in 2011, there has been an unprecedented growth in investment, innovation, and use of practical vision technology across a broad range of markets. To help understand technology choices and trends, the Embedded Vision Alliance conducts an annual survey of product developers.


In the most recent iteration of this survey, completed in November 2018, 93% of respondents reported that they expect an increase in their organization’s vision-related activity over the coming year (61% expect a large increase). This increase would not be possible without extensive work on new algorithms and development tools to speed adoption of vision-based systems.

Three fundamental factors are driving the proliferation of visual perception. It increasingly:

  • Works well enough for diverse, real-world applications
  • Can be deployed at low cost and with low power consumption
  • Is usable by non-specialists

Traditionally, computer vision applications have relied on highly specialized algorithms painstakingly designed for each specific application and use case. This made developing vision-based systems difficult, expensive, and time-consuming, which significantly slowed their adoption.

However, there has been a democratization of computer vision. By that we mean it’s becoming much easier to develop computer vision-based algorithms, systems and applications, as well as to deploy these solutions at scale – enabling many more developers and organizations to incorporate vision into their systems.

Deep learning is one of the drivers of this trend. Because of the generality of deep learning algorithms, there’s less of a need to develop specialized algorithms. Instead, developer focus can shift to selecting among available algorithms, and then to obtaining the necessary quantities of training data.

Deep neural networks (DNNs) have transformed computer vision, delivering superior results on tasks such as recognizing objects, localizing objects within a frame and determining which pixels belong to which object. Even problems previously considered solved with conventional techniques are now finding better solutions using deep learning techniques.
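To make the generality point concrete: the convolutional layers at the heart of these vision networks all build on the same simple, reusable operation, sliding a small learned kernel over an image. A minimal NumPy sketch of that operation (function names here are illustrative, not from any particular library):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation), the core op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value is the kernel's weighted sum over one patch
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy image: left half dark, right half bright
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A classic hand-designed vertical-edge detector (Sobel x kernel) --
# in a DNN, kernels like this are learned from data instead
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

response = conv2d(image, kernel)
print(response)  # strong response at the dark-to-bright edge
```

The contrast with the traditional approach is visible in the kernel itself: where engineers once chose filters like this one by hand for each task, a DNN learns thousands of them automatically from training data.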

As a result, computer vision developers are increasingly adopting deep learning techniques. In the Alliance's most recent survey, 59% of vision system developers reported that they are already using DNNs (up from 34% two years ago). Another 28% are planning to use DNNs for visual intelligence in the near future.

Another critical factor in simplifying computer vision development and deployment is the rise of cloud computing and much better development tools. For example, rather than spending days or weeks installing and configuring development tools, today engineers can get instant access to pre-configured development environments in the cloud. Likewise, when large amounts of compute power are required to train or validate a neural network algorithm, this compute power can be quickly and economically obtained in the cloud.

Cloud computing offers an easy path for initial deployment of many vision-based systems, even in cases where developers will ultimately switch to edge-based computing to reduce costs. Our most recent survey found that 75% of respondents using deep neural networks for visual understanding in their products deploy those neural networks at the edge, while 42% use the cloud. These figures total more than 100% because some survey respondents use both approaches.

The world of practical computer vision is changing very fast – opening up many exciting technical and business opportunities. You can learn about the latest developments in computer vision at the Embedded Vision Summit, May 20-23, 2019, in Santa Clara, California. The event attracts a global audience of more than one thousand people who are developing and using computer vision technology.

Jeff Bier
Founder, Embedded Vision Alliance
President, BDTI

About the Author

Jeff Bier | Founder, Embedded Vision Alliance

Jeff Bier is the founder of the Embedded Vision Alliance, a partnership of 90+ technology companies that works to enable the widespread use of practical computer vision. The Alliance’s annual conference, the Embedded Vision Summit (May 20-23, 2019 in Santa Clara, California) is the preeminent event where engineers, product designers, and business people gather to make vision-based products a reality.

When not running the Alliance, Jeff is the president of BDTI, an engineering services firm. For over 25 years, BDTI has helped hundreds of companies select the right technologies and develop optimized, custom algorithms and software for demanding applications in audio, video, machine learning, and computer vision. If you are choosing between processor options for your next design, need a custom algorithm to solve a unique visual perception problem, or need to fit demanding algorithms into a small cost/size/power envelope, BDTI can help.

https://www.embedded-vision.com/
