Embedded vision: Why take it to the cloud if you can use the edge?

Dec. 21, 2020
This guest blog from Vlad Kardashov, Vice-President of Engineering, iENSO, looks at the benefits of deploying embedded vision applications on the edge.
Artificial intelligence and machine learning algorithms add complexity to embedded vision systems, leaving designers to decide whether to deploy such systems at the edge or in the cloud.

After trying various combinations of edge and/or cloud processing in past projects, I believe that the heavy lifting should be done at the edge. The most obvious and valuable advantage is that it drastically reduces the latency that comes with sending large volumes of data to the cloud for processing.

Whether it is an industrial application, a connected autonomous vehicle, or some other consumer technology, as long as there is sufficient computing power on the edge, it's always more efficient, secure, and cost-effective to run algorithms there – as close as possible to where the image and data originate.
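The latency argument can be made concrete with a rough back-of-envelope comparison. All figures below are illustrative assumptions (frame size, uplink bandwidth, round-trip time, inference times), not measurements from any particular system:

```python
# Back-of-envelope latency comparison: edge vs. cloud inference.
# All numbers are illustrative assumptions, not measurements.

def cloud_latency_ms(frame_kb, uplink_mbps, rtt_ms, cloud_infer_ms):
    """Latency to analyze one frame in the cloud: network round trip
    plus upload time plus cloud-side inference."""
    transfer_ms = frame_kb * 8 / (uplink_mbps * 1000) * 1000
    return rtt_ms + transfer_ms + cloud_infer_ms

def edge_latency_ms(edge_infer_ms):
    """Latency to analyze one frame on-device: no network hop at all."""
    return edge_infer_ms

# Hypothetical scenario: 200 KB compressed frame, 10 Mbps uplink,
# 50 ms network RTT, 10 ms cloud inference, 40 ms edge inference.
cloud = cloud_latency_ms(frame_kb=200, uplink_mbps=10, rtt_ms=50, cloud_infer_ms=10)
edge = edge_latency_ms(edge_infer_ms=40)
print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")  # cloud: 220 ms, edge: 40 ms
```

Even with a cloud accelerator that runs the model faster than the device can, the network round trip and upload time dominate in this sketch, which is why per-frame processing close to the sensor tends to win whenever the edge silicon is capable enough.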

Technology becoming edge-centric

Having enough processing power on the edge was not always a given. In fact, many applications were forced to process in the cloud because of cost and capability limitations of underlying chipset technologies.

However, in the past three years, substantial changes at the intersection of vision and AI have taken place. With increased demand for edge processing coming from product developers—such as camera makers, analytics developers, security companies and so on—many chipset providers have answered the challenge and, in just a few years, released hardware capable of much more powerful processing.

The speed of innovation has been quite impressive. In fact, almost every System on Chip (SoC) released today for cameras and vision systems has a GPU on board, making it straightforward to run AI-based analytics directly on the device.

Consumer demand for vision in 2020 and onward

For all its negatives, the pandemic has also served to accelerate edge AI innovation. Many processes that once depended on in-person presence have moved to remote operation, which means remote vision as well. Demand for vision systems has grown across a wide variety of applications: surveillance, health and safety inspection, medical equipment, research, remote education, and many more.

At this point, the hardware has caught up to the point where video can be processed through AI in real time. Quite a few interesting platforms have emerged that can compete with cloud-based systems and make it possible for offline systems to run AI-based analytics at the edge.

With the combination of higher consumer demand and stronger technological capability, I believe that over the next three years processing at the edge for embedded vision will rapidly evolve. Highly established brands and completely new start-ups are jumping into the industry with fantastic product ideas.

Competitive edge in product development goes to collaborators

With engineering and commercialization service companies like iENSO available to help, it doesn't matter whether a company already has an experienced in-house team. Embedded vision remains a complex and quickly evolving field, which means maintaining that skillset in-house is something even many established big names choose not to do.

For product companies new to the embedded vision space, particularly smaller or earlier-stage ones, it's a big leap to hire a full team of 20 engineers and build that capability in-house. It's not just a matter of the cost, but also the challenge and delay of finding qualified individuals.

Product companies of all sizes prefer to focus on their core product technology and competitive secret sauce while finding a collaboration partner with the expertise to augment their internal team and develop the AI and related vision system for them. This creates a big opportunity for edge AI, from both a technical and a collaboration perspective, across the entire industry ecosystem.

With demand for more powerful offline processing of vision data on the rise, new technology platforms with multi-core CPUs, embedded GPUs, and all the related development tools, and exciting collaborations between product companies and embedded vision engineering experts, the future of vision is on the edge.

Vlad Kardashov, Vice-President of Engineering, iENSO       
