Deep learning, currently a hot topic in the computer vision industry, is an area on which Google has placed strong emphasis. Over the past few years, Google has built two generations of large-scale computer systems for training neural networks, and has applied these systems to a wide variety of research problems that have traditionally been difficult for computers.
Google’s second-generation system, TensorFlow, was originally developed by researchers and engineers working on the Google Brain team within Google’s Machine Intelligence research organization. The system is designed to facilitate machine learning research and to make it "quick and easy to transition from research prototype to production system."
In his keynote from the May 2016 Embedded Vision Summit, "Large-Scale Deep Learning for Building Intelligent Computer Systems," Jeff Dean, Senior Fellow at Google, discusses how Google’s research group has used TensorFlow to make significant improvements to the state of the art in many areas. He also describes how dozens of different groups at Google use the system to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks.
Additionally, he touches on some of the ways Google trains large models quickly on large datasets, and discusses approaches for deploying machine learning models in environments ranging from large datacenters to mobile devices. He also covers how Google has applied this work to a variety of problems in its products, usually in close collaboration with other teams. The talk describes joint work with many people at Google.