Deep learning-enabled video camera launched by Amazon

June 18, 2018
First announced by Amazon at the re:Invent conference in November, the Amazon Web Services (AWS) DeepLens video camera—which is designed to put deep learning technology in the hands of developers—is now shipping to customers.

DeepLens runs deep learning models directly on the device and is designed to provide developers with hands-on artificial intelligence technology. The device features a 4 MPixel camera that captures 1080P video, along with an Intel Atom processor that provides more than 100 GFLOPS of compute power, which AWS says is enough to run tens of frames of incoming video through on-board deep learning models every second.

AWS’ camera also has 8 GB of RAM, 16 GB of storage (expandable), an Intel Gen9 graphics engine, and WiFi, USB, and micro HDMI ports. AWS DeepLens runs the Ubuntu 16.04 OS and comes preloaded with AWS Greengrass Core and a device-optimized version of MXNet, with the flexibility to use other frameworks such as TensorFlow and Caffe. Additionally, the Intel clDNN library provides a set of deep learning primitives for computer vision and other AI workloads, according to Amazon.
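Because DeepLens ships with a device-optimized MXNet build, the core developer workflow amounts to grabbing camera frames and pushing them through a pretrained model on the local processor. The sketch below illustrates that loop using stock MXNet Gluon and OpenCV APIs rather than the DeepLens-specific SDK; the model choice (SqueezeNet) and the camera index are illustrative assumptions only.

```python
# Minimal sketch of frame-by-frame inference on a device-class CPU.
# Assumes stock MXNet Gluon and OpenCV APIs, not the DeepLens SDK.
import cv2
import mxnet as mx
from mxnet.gluon.model_zoo import vision

ctx = mx.cpu()                                         # Atom-class device: CPU context
net = vision.squeezenet1_1(pretrained=True, ctx=ctx)   # small pretrained classifier

cap = cv2.VideoCapture(0)                              # assumed index for the on-board camera
while True:
    ok, frame = cap.read()                             # grab one frame of incoming video
    if not ok:
        break
    # Preprocess: resize to the network's 224x224 input, BGR->RGB, NCHW layout, scale to [0, 1]
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = mx.nd.array(rgb, ctx=ctx).transpose((2, 0, 1)).expand_dims(axis=0) / 255.0
    probs = net(x).softmax()                           # forward pass through the model
    top = int(probs.argmax(axis=1).asscalar())         # most likely class index
    print("predicted class index:", top)
cap.release()
```

In an actual DeepLens project, inference code along these lines is packaged as an AWS Lambda function and deployed to the device through Greengrass, with results published back to the cloud.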

AWS DeepLens, according to the company, "allows developers of all skill levels to get started with deep learning in less than 10 minutes by providing sample projects with practical, hands-on examples which can start running with a single click."

The initial response to DeepLens was positive, according to Jeff Barr, Chief Evangelist for AWS.

"Educators, students, and developers signed up for hands-on sessions and started to build and train models right away," he wrote. "Their enthusiasm continued throughout the preview period and into this year’s AWS Summit season, where we did our best to provide all interested parties with access to devices, tools, and training."

DeepLens was also used in a few challenges and the HackTillDawn hackathon.

"I was fortunate enough to be able to attend the event and to help to choose the three winners. It was amazing to watch the teams, most with no previous machine learning or computer vision experience, dive right in and build interesting, sophisticated applications designed to enhance the attendee experience at large-scale music festivals," Barr wrote of HackTillDawn.

Additionally, AWS held the AWS DeepLens Challenge, which tasked participants with building machine learning projects that utilized DeepLens. The submissions, according to Barr, were as diverse as they were interesting, with applications designed for children, adults, and animals. Details on all of the submissions, including demo videos and source code, are available on the AWS Community Projects page.

"From what I can tell, DeepLens has proven itself as an excellent learning vehicle. While speaking to the attendees at HackTillDawn, I learned that many of them were eager to get some hands-on experience that they could use to broaden their skillsets and to help them to progress in their careers," wrote Barr.

View more information on DeepLens here.

About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
