Next Microsoft HoloLens device will feature AI and deep learning capabilities
In the next version of its wearable, self-contained holographic computer, the HoloLens, Microsoft (Redmond, WA, USA; www.microsoft.com) will incorporate an artificial intelligence coprocessor that enables deep learning capabilities on the device.
First released in March 2016 as a development edition, the HoloLens is touted by Microsoft as a fully untethered holographic computer that delivers 3D holograms pinned to the real world around the user via cutting-edge optics and sensors. The optics include see-through holographic lenses, two HD 16:9 light engines, automatic pupillary distance calibration, and a holographic resolution of 2.3M total light points. Sensors on the device include head-tracking cameras, a Time-of-Flight (ToF)-based depth camera, a 2 MPixel photo/HD video camera, an inertial measurement unit, an infrared camera, four microphones, and an ambient light sensor.
Furthermore, HoloLens contains a custom multiprocessor called the Holographic Processing Unit (HPU), which processes data from the device's sensors. The second version of the HPU, currently under development, will add an AI coprocessor, Harry Shum, executive vice president of Microsoft's Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017 on July 26, 2017 in Honolulu, Hawaii. The chip will natively and flexibly implement deep neural networks and supports a variety of layer types.
At the event, Shum showed an early iteration of the second-generation HPU running live code that implements hand segmentation. The AI coprocessor is designed to run continuously in the next version of the HoloLens, powered by the device's battery.
"This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history," said Marc Pollefeys, Director of Science, HoloLens. "And this is the kind of thinking you need if you're going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we're excited to be advancing this frontier."
Autonomous vehicle technology company Nauto raises $159M
Nauto (Palo Alto, CA, USA; www.nauto.com), a startup company that develops artificial intelligence-based autonomous vehicle technology, has received $159 million in Series B funding led by SoftBank Group Corp. (Tokyo, Japan; https://www.softbank.jp/en/) and Greylock Partners (Menlo Park, CA, USA; www.greylock.com).
Other participants include previous strategic investors BMW iVentures (New York, NY, USA; www.bmwiventures.com), General Motors Ventures (Detroit, MI, USA; www.gmventures.com), Toyota AI Ventures (Los Altos, CA, USA; https://toyota-ai.ventures) and the venture unit of global financial services and insurance provider Allianz Group (Munich, Germany; www.allianz.com), and Series A investors Playground Global (Palo Alto, CA, USA; playground.global) and Draper Nexus (San Mateo, CA, USA; https://www.drapernexus.com), according to a press release.
Nauto's product is a flexible mounting solution that features a wide-angle exterior camera and wide-angle interior camera, GPS, LTE and wireless connections, LED and speakers for feedback, and night vision support. The platform utilizes some of the latest deep learning and computer vision algorithms and a smart cloud network informed by the accumulation of more than a million miles on urban streets and highways, according to the company.
Nauto is able to learn from other drivers, the road, and conditions around vehicles in the Nauto network. Fleets equipped with Nauto can automatically capture and upload video of significant events and insights in real time to help fleet managers improve overall driver performance and enhance the safety and efficiency of an entire fleet, according to the company. The platform also uses the VERA (Vision Enhanced Risk Assessment) scoring system, which provides a risk rating for the frequency and severity of distraction events.
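Nauto has not published the VERA formula, but a score combining the frequency and severity of distraction events, as described, can be sketched as follows. Everything here (the class, the weights, the 0-100 scale) is hypothetical and illustrative, not Nauto's actual method:

```python
from dataclasses import dataclass

@dataclass
class DistractionEvent:
    # Severity in [0, 1]: 0 = brief glance away, 1 = prolonged distraction.
    severity: float

def risk_score(events, miles_driven):
    """Toy risk rating combining frequency and severity of distraction
    events, scaled to 0-100. Not Nauto's actual VERA formula."""
    if miles_driven <= 0:
        return 0.0
    # Severity-weighted events per 100 miles, then capped at 100.
    weighted_rate = sum(e.severity for e in events) / miles_driven * 100.0
    return min(100.0, weighted_rate * 10.0)
```

A fleet manager could compare such scores across drivers or over time; the key design point, reflected above, is that many minor events and a few severe ones both push the rating up.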
Funds raised in this round will be used to fuel the company's growth and the deployment of its retrofit safety and networking system into more vehicles around the globe, as well as to support the expansion of the Nauto data platform in autonomous vehicle research and development across multiple automakers, noted the press release. The more Nauto units are deployed and the more miles Nauto-equipped vehicles accumulate, the more precise the network will become, the company suggests.
"SoftBank and Greylock, along with our key strategic partners, are turbo-charging Nauto's ability to make roads safer today and to create an onramp to autonomy for the near future," said Nauto founder and CEO Stefan Heck. "At a time when traffic fatalities are climbing and distracted driving causes more than half of all crashes, we're tackling that problem by putting Nauto's safety features into more commercial fleet vehicles - from trucks and vans to buses and passenger cars - to warn drivers and coach them on how to stay focused."
"And," he continued, "in pursuit of the profoundly transformational impact autonomous vehicle technology can have on business and society, we'll now more rapidly be able to gather the billions more miles of real driving experience and data required to get a precise understanding of how the best drivers behave behind the wheel."
SoftBank Group Corp. Chairman and CEO Masayoshi Son also commented: "While building an increasingly intelligent telematics business, Nauto is also generating a highly valuable dataset for autonomous driving, at massive scale that will help accelerate the development and adoption of safe, effective self-driving technology."
Infrared camera helps drones collect volcano gas
A team of researchers from the University of Mainz (Mainz, Germany; www.uni-mainz.de/eng/) used a pair of drones, one of which was equipped with an infrared camera, to collect samples of gas from inside Italy's Mount Etna volcano.
First, an Inspire 1 drone from DJI (Shenzhen, China; www.dji.com) was fitted with a Zenmuse XT thermal imaging camera to monitor temperatures and capture thermal footage. The Zenmuse XT is an infrared camera developed by FLIR (Wilsonville, OR, USA; www.flir.com) that is available with either a 640 x 512 or a 336 x 256 uncooled microbolometer array featuring a 17 μm pixel pitch and a spectral band of 7.5 to 13.5 μm. A second drone, a Matrice 600 Pro from DJI, was fitted with a multi-gas measurement box to analyze gas composition and volcano deformation.
Mount Etna erupts regularly, most recently in March 2017, injuring 10 people. Earlier eruptions in 2011 and 2012 produced ash columns that forced the closure of the local Catania airport on numerous occasions. In July 2011, lava endangered a tourist hub on the side of the volcano before it was successfully diverted. Eruptions can last for months at a time, with one spanning over 400 days between 2008 and 2009, according to International Business Times.
Over the course of six days, the drones were flown into the volcano's craters to learn more about them. In doing so, the researchers found that sulphur concentrations are much higher near vents. Additionally, the drones captured solids formed by sulphur reacting with water in the atmosphere, helping scientists better understand the chemical evolution of volcanic gas plumes. Using this information, the team aims to develop improved plans for evacuating residents should an eruption be expected.
"With thousands of people living in the vicinity of volcanoes, we wanted to better understand how they behave," said Professor Jonathan Castro of the University of Mainz. "Mount Etna has a long and frequent history of lava forming and eruptions. It's a perfect natural laboratory, but we need to analyze more thoroughly to protect the population in the area."
Castro added: "The academic world is reacting positively and more institutions around the world are relying on drones in their quest to predict eruptions. This brings us a step closer to potentially saving thousands of lives."
Deep learning device from Intel enables artificial intelligence programming at the edge
Intel (Santa Clara, CA, USA; www.intel.com) released the Movidius Neural Compute Stick, a deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a range of host devices.
The Movidius Neural Compute Stick features the Myriad 2 vision processing unit (VPU), which contains hybrid processing elements including twelve 128-bit VLIW processors and two 32-bit RISC processors. The device offers Caffe framework support and a USB 3.0 Type-A interface. Host minimum requirements are an x86_64 computer running Ubuntu 16.04 with 1 GB RAM and 4 GB of free storage space.
Designed to reduce development, tuning and deployment barriers, the device delivers deep neural network processing in a small form factor to bring machine intelligence and AI out of the data centers and into end-user devices at the edge.
"The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance - more than 100 gigaflops of performance within a 1W power envelope - to run real-time deep neural networks directly from the device," said Remi El-Ouazzane, vice president and general manager of Movidius (San Mateo, CA, USA; www.movidius.com), an Intel company. "This enables a wide range of AI applications to be deployed offline."
With the Movidius Neural Compute Stick, users can do the following:
- Compile: Automatically convert a trained Caffe-based convolutional neural network (CNN) into an embedded neural network optimized to run on the onboard Movidius Myriad 2 VPU.
- Tune: Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
- Accelerate: Unique to the Movidius Neural Compute Stick, the device can act as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.
Furthermore, the Neural Compute Stick comes with the Movidius Neural Compute software development kit, which enables deep learning developers to profile, tune, and deploy CNNs on low-power applications that require real-time processing.
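As a sketch of what that deployment step can look like, the code below assumes the NCSDK 1.x Python API (`mvnc`) and a network already compiled to a `graph` file; the preprocessing constants are placeholders for whatever the network was trained with, and the inference function naturally requires an attached Compute Stick:

```python
import numpy as np

def preprocess(frame):
    """Scale an HxWx3 uint8 frame to roughly [-1, 1] float16, the input
    format Myriad 2 graphs typically expect. The mean/scale values here
    are placeholders; use the ones your network was trained with."""
    return ((frame.astype(np.float32) - 127.5) * 0.007843).astype(np.float16)

def classify(graph_path, frame):
    """Run one inference on an attached Neural Compute Stick.
    Assumes the NCSDK 1.x Python API; requires the SDK and hardware."""
    # Imported here so the pure-NumPy parts of this module work without the SDK.
    from mvnc import mvncapi as mvnc
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No Neural Compute Stick found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    try:
        with open(graph_path, "rb") as f:
            graph = device.AllocateGraph(f.read())  # load the compiled CNN
        graph.LoadTensor(preprocess(frame), None)   # queue one input tensor
        output, _ = graph.GetResult()               # blocking read of the result
        graph.DeallocateGraph()
        return output
    finally:
        device.CloseDevice()
```

The host stays responsible only for I/O and pre/post-processing; the network itself executes on the stick, which is what makes offline, low-power edge inference possible.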