Deep learning device from Intel enables artificial intelligence programming at the edge

July 20, 2017
Intel has announced the release of the Movidius Neural Compute Stick, a USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a range of host devices at the edge.

Designed for product developers, researchers, and makers, the Movidius Neural Compute Stick features the Myriad 2 vision processing unit (VPU), which contains hybrid processing elements including twelve 128-bit VLIW processors and two 32-bit RISC processors. The Caffe framework is supported on the device, which features a USB 3.0 Type A interface. Minimum requirements for a host machine are an x86_64 computer running Ubuntu 16.04 with 1 GB RAM and 4 GB of free storage space.

The device was designed to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor. Through software and hardware tools, suggests Intel, the Neural Compute Stick brings machine intelligence and AI out of the data centers and into end-user devices.

"The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device," said Remi El-Ouazzane, vice president and general manager of Movidius, an Intel company. "This enables a wide range of AI applications to be deployed offline."

With the Movidius Neural Compute Stick, users can do the following:

  • Compile: Automatically convert a trained Caffe-based convolutional neural network (CNN) into an embedded neural network optimized to run on the onboard Movidius Myriad 2 VPU.
  • Tune: Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
  • Accelerate: Unique to the Movidius Neural Compute Stick, the device can act as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.

Furthermore, the Neural Compute Stick comes with the Movidius Neural Compute software development kit, which enables deep learning developers to profile, tune, and deploy CNNs on low-power applications that require real-time processing.
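As a rough illustration of that profile-tune-deploy flow, the command-line sketch below uses the tool names (`mvNCCompile`, `mvNCProfile`, `mvNCCheck`) documented with Intel's Neural Compute SDK at the time; the model file names are placeholders, and exact flags may vary by SDK release:

```shell
# Compile a trained Caffe network (placeholder file names) into a
# graph file that runs on the Myriad 2 VPU; -s sets the number of
# SHAVE vector cores to use (up to 12).
mvNCCompile deploy.prototxt -w weights.caffemodel -s 12 -o graph

# Report layer-by-layer timing and bandwidth on the attached stick,
# the basis for the per-layer tuning described above.
mvNCProfile deploy.prototxt -w weights.caffemodel -s 12

# Compare on-device results against the host-side Caffe model to
# validate the accuracy of the optimized network.
mvNCCheck deploy.prototxt -w weights.caffemodel -s 12
```

Each step mirrors one of the Compile/Tune/Accelerate capabilities listed above; all three require the stick to be plugged into a supported host.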

The Movidius Neural Compute Stick is now available for purchase through select distributors at an MSRP of $79.

View more information on the Movidius Neural Compute Stick.

Share your vision-related news by contacting James Carroll, Senior Web Editor, Vision Systems Design


About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
