Google has been working on facial and object recognition, and wants to bring that capability directly into a hand-held device. In the study of “deep learning” (also called machine learning or neural networks — see the video at the end of this article), Google, like most involved in this work, has depended on larger devices and cloud computing power. NVIDIA, for instance, has devoted a great deal of research to this area using its GPU processors. However, Google wants to get it down to the chip level, small enough to fit into an Android smartphone. Movidius has been working independently on its own chipset and software for smaller devices, and this past week it announced that it is working with Google to accelerate the adoption of deep learning within mobile devices. As part of the agreement, Google will source Movidius processors alongside the entire Movidius software development environment. In turn, Google will contribute to Movidius’ neural network technology roadmap.

This agreement enables Google to deploy its advanced neural computation engine on Movidius’ ultra-low-power platform, introducing a new way for machine intelligence to run locally on devices. Local computation allows data to stay on the device and to function without an internet connection and with fewer latency issues. This means future products can understand images and audio with incredible speed and accuracy, offering a more personal and contextualized computing experience.
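To make the idea of local computation concrete, here is a minimal sketch of the kind of math an on-device neural engine performs. This is purely illustrative — it is not Movidius’ actual API or Google’s model, just a tiny fully connected layer in NumPy showing that a prediction can be computed entirely on the device, with no cloud round-trip:

```python
import numpy as np

def relu(x):
    # Standard rectified-linear activation used in many neural networks.
    return np.maximum(x, 0.0)

def forward(features, weights, bias):
    """One dense layer with ReLU. Runs entirely locally: the input
    features never leave the device and no network call is made."""
    return relu(features @ weights + bias)

# Hypothetical numbers for illustration only: 128 image features,
# 10 candidate object classes.
rng = np.random.default_rng(0)
features = rng.standard_normal(128)        # e.g. features extracted from a photo
weights = rng.standard_normal((128, 10))   # learned class weights (made up here)
bias = np.zeros(10)

scores = forward(features, weights, bias)
predicted_class = int(np.argmax(scores))   # local prediction, zero network latency
```

On a phone, a chip like the Myriad 2 would run many such layers at very low power; the point of the sketch is simply that the data path — sensor, computation, result — stays on the device.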

“What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence, laying the groundwork for the next decade of how technology will enhance the way people interact with the world,” said Blaise Agüera y Arcas, head of Google’s machine intelligence group in Seattle. “By working with Movidius, we’re able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices.”

Google will utilize Movidius’ latest flagship chip – the MA2450. The MA2450 is the only commercial solution on the market today with the performance and power-efficiency to perform complex neural network computations in ultra-compact form factors. The MA2450 is the most powerful iteration of the Myriad 2 family of vision processors, providing a series of improvements over the first-generation Myriad 2 VPU announced last year, the MA2100.

“The technological advances Google has made in machine intelligence and neural networks are astounding. The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in,” said Remi El-Ouazzane, CEO, Movidius. “Movidius’ mission is to bring visual intelligence to devices so that they can understand the world in a more natural way. This partnership with Google will allow us to accelerate that vision in a tangible way.”

As the companies continue their collaboration, more details will become available.
Here’s a recent talk on machine learning by Oriol Vinyals of Google DeepMind, which explores how visual recognition became one of the major milestones in machine learning and the advancement of neural networks.

The Singularity is approaching: aren’t we all expecting our hand-held devices to recognize us by our faces already?