
According to USA Today, Movidius announced Thursday that it was teaming up with Google to speed up the adoption of a form of deep machine learning within smartphones.
Movidius stated that Google will combine its software development efforts with Movidius' processors to enable machine learning to run on devices without relying on the cloud.
Google's next-generation smartphones will soon understand and process what they see, much as humans do.
Google has been trying to change this cloud-dependent dynamic with Project Tango, a mobile device that can do real-time mapping and some object tracking while running off only a small battery. Using a mix of cameras and sensors, Movidius' technology in Project Tango allows devices to create three-dimensional maps of indoor spaces. Search "dog", for example, and the app will pull up all the photos of dogs a user has in their Google Photos library; search "Paris" and a user will see pictures of themselves posing in front of the Eiffel Tower.
"Instead of us adapting to computers and having to learn their language, computers are becoming more and more intelligent in the sense that they adapt to us", said Blaise Aguera y Arcas, head of the machine intelligence group at Google. Google, meanwhile, will aid Movidius' neural network technology roadmap.
Machine learning, artificial intelligence, neural networks, and related technology have become areas of intense interest for Google and its peers because these disciplines help mobile devices deal with sensory and navigation applications in the real world.
"Movidius' mission is to bring visual intelligence to devices so that they can understand the world in a more natural way", the company said.
By marrying sophisticated software algorithms to a powerful, purpose-built Vision Processing Unit (VPU), Movidius brings new levels of intelligence to smart devices and enables a new wave of intelligent and contextually aware devices, including drones and AR/VR headsets. Mountain View is working with the Silicon Valley chip designer to enable smartphones to perform heavy computation internally rather than relying on remote data centers.
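The shift described here, running inference on the handset itself rather than in a remote data center, amounts to evaluating a neural network entirely in local code. The toy sketch below illustrates the idea with a tiny two-layer network in plain Python; the weights, sizes, and class labels are invented for illustration and have nothing to do with Movidius' actual software stack or the MA2450.

```python
# Illustrative toy: on-device inference means the whole forward pass
# runs locally, with no network call to a cloud service. All weights
# and dimensions below are made up for demonstration purposes.
import math

def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # weights: one row of coefficients per output unit
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def softmax(x):
    m = max(x)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

def infer(pixels):
    # Hypothetical pretrained weights for a 4-pixel "image",
    # two hidden units, two output classes ("dog", "not dog").
    hidden = relu(dense(pixels, [[0.5, -0.2, 0.1, 0.3],
                                 [-0.4, 0.6, 0.2, -0.1]], [0.0, 0.1]))
    return softmax(dense(hidden, [[1.0, -1.0],
                                  [-1.0, 1.0]], [0.0, 0.0]))

probs = infer([0.9, 0.1, 0.8, 0.2])
```

In practice the appeal of a dedicated VPU is that this kind of arithmetic, multiplied across millions of weights, can be done within a phone's power budget instead of being shipped to a server.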
Movidius CEO El-Ouazzane noted that the company's vision processor is "defined to handle new workloads".
To put it simply, Google will place Movidius' MA2450 chip inside Android handsets.
"The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in", El-Ouazzane said.