Embedded Vision Alliance announces Vision Product of the Year Awards
At the Embedded Vision Summit 2018, the Embedded Vision Alliance announced the winners of its Vision Product of the Year Awards, which recognize the innovation and achievement of the industry’s leading technology, service, and end-product companies that are enabling the next generation of practical applications for computer vision.
Awards are given in eight categories: processors; software and algorithms; cameras, camera modules, and sensors; developer tools; artificial intelligence technology; automotive solutions; cloud technologies; and end products. Winners were selected by an independent panel of judges that included Steve Glaser, Managing Director of Copia Growth Advisors and Executive-in-Residence at Plug and Play Technology Incubator; Dr. Chris Rowen, CEO of Babblabs and IEEE Fellow; and Peter Shannon, Managing Director of Levitate Capital.
"Computer vision has become a powerful, practical technology serving many large, growing markets. As a result, we are seeing a dramatic acceleration in innovation in this space," said Jeff Bier, founder of the Embedded Vision Alliance. "We are pleased to introduce the Vision Product of the Year Awards to recognize the companies that are leading this trend."
The winning entries for the inaugural 2018 Vision Product of the Year Awards are:
- Best Processor: AImotive aiWare – An optimized hardware architecture for general artificial neural network acceleration.
- Best Camera: Intel RealSense Depth Cameras D415 and D435 – Both 3D cameras feature RealSense depth modules, a 1920 x 1080 OV2740 CMOS image sensor from OmniVision Technologies, a USB 3.0 Type-C interface, active infrared stereo depth technology, a depth stream output resolution of up to 1280 x 720, and a depth stream output frame rate of up to 90 fps (see the code sketch after this list).
- Best Software or Algorithm: MathWorks GPU Coder – GPU Coder software enables scientists and engineers to automatically generate optimized CUDA code from high-level functional descriptions in MATLAB for deep learning, embedded vision, and autonomous systems.
- Best Automotive Solution: Algolux CANA – CANA (camera-aware neural architecture) is a novel end-to-end deep neural network for more robust autonomous vision perception in challenging imaging conditions, such as low light, adverse weather, or the lens issues found in the real world.
- Best AI Technology: Morpho SoftNeuro – SoftNeuro, according to the company, is one of the world’s fastest deep learning inference engines; it operates in multiple environments, utilizing learning results that have been obtained through a variety of deep learning frameworks.
- Best Cloud Technology: Xilinx Machine Learning Suite – The Xilinx Machine Learning Suite provides tools for accelerating vision applications in the cloud. The key innovation, according to the company, is that it gives cloud users of machine learning inference an order-of-magnitude performance and cost advantage from optimized FPGA acceleration compared with GPUs.
- Best Developer Tools: AImotive aiSim – aiSim, according to the company, offers a complete, integrated simulation environment, fine-tuned for a vision-first approach to autonomous driving through photorealistic rendering, sensor modeling with importable calibration parameters, dataflow simulation and software-in-the-loop/hardware-in-the-loop testing.
- Best End Product: 8tree dentCHECK – dentCHECK is a 3D optical scanner that is a purpose-built tool for the aviation maintenance industry.
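For readers who want to experiment with the D415 or D435 mentioned above, the following is a minimal sketch of a depth-capture loop, assuming Intel’s librealsense SDK and its pyrealsense2 Python bindings are installed; the stream settings and pixel coordinates are illustrative choices, not specifications from the award entry.

```python
# Minimal depth-capture sketch for the RealSense D415/D435, assuming the
# pyrealsense2 bindings from Intel's librealsense SDK are available.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

# Request a depth stream. 1280 x 720 is the cameras' maximum depth resolution;
# the 90 fps maximum applies at lower resolutions, so 30 fps is used here.
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

pipeline.start(config)
try:
    for _ in range(30):                      # grab a short burst of frames
        frames = pipeline.wait_for_frames()  # blocks until a frameset arrives
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Distance, in meters, at the center pixel of the depth image.
        center_dist = depth.get_distance(640, 360)
        print(f"Center-pixel depth: {center_dist:.3f} m")
finally:
    pipeline.stop()
```

The active infrared stereo system delivers depth as 16-bit (Z16) values, which get_distance() converts to meters using the device’s depth scale.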
View more information on the awards.
James Carroll
Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.