Smartphones and Other Devices to Become Truly Smart
At the recent Embedded Vision Summit in Santa Clara, California, Qualcomm announced a deep-learning software development kit for devices powered by the Snapdragon 820 processor.
The toolkit is optimized specifically for mobile devices, allowing manufacturers and developers to run their trained neural network models directly on the hardware. We expect it to become available in the second half of 2016 on smartphones, security cameras, automobiles and drones that use the Snapdragon 820 processor.
The announcement will raise eyebrows given that this level of artificial intelligence has so far been largely exclusive to heavy-duty computing power in the cloud. Machine learning is essentially a process of pattern recognition. Humans learn by observing examples and, over time, appreciating their similarities and differences. Machine learning works in much the same way: the “training” of neural networks typically needs large volumes of data and analysis over a prolonged period, which is hugely computing-intensive. Nvidia has been a leading light in this field with its Tesla line of graphics processing units, suited specifically to artificial intelligence and built for high-performance workstations and servers.
Qualcomm’s announcement brings this capability to devices at the network edge, without the need for connectivity. Others such as Movidius with its Myriad 2 chip (see Drones Gain the Power of Sense) and MediaTek with its deep-learning platform have sought to do something similar, but have lacked Qualcomm’s scale and, crucially, its offline capability.
Uses for the software include common deep-learning user experiences such as scene detection, text recognition, object tracking and avoidance, gesture and face recognition, and natural language processing.
There is a growing tendency towards architectures that place aspects of machine learning in the cloud, in a gateway and in devices at the edge of the network. Much of the drive for this comes from the Internet of things, but there are also good reasons for doing it in smartphones. The kind of neural network training outlined above isn’t going to move to mobile devices. At present, machine learning for popular services such as Google Photos or Google Translate is done in the cloud, but the learned parameters can be downloaded onto the device, and a proportion of that work is set to move to the device, as the sketch below illustrates.
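A minimal sketch of that split, assuming a simple feed-forward classifier: the heavy training happens in the cloud, the resulting parameters are downloaded occasionally, and inference then runs locally with no connection required. The model name, URL and layer shapes here are illustrative only, not any vendor’s real service.

```python
# Sketch of "train in the cloud, infer on the device".
# The model file, URL and weight names are hypothetical placeholders.
import urllib.request
import numpy as np

MODEL_URL = "https://example.com/models/scene_classifier.npz"   # hypothetical
LOCAL_PATH = "scene_classifier.npz"

def fetch_parameters():
    """Download parameters that were learned in the cloud (done occasionally)."""
    urllib.request.urlretrieve(MODEL_URL, LOCAL_PATH)

def load_parameters():
    """Load the downloaded weights into memory on the device."""
    return np.load(LOCAL_PATH)

def infer(features, params):
    """Run a small feed-forward network locally; no connectivity needed."""
    hidden = np.maximum(0, features @ params["w1"] + params["b1"])   # ReLU layer
    logits = hidden @ params["w2"] + params["b2"]
    return int(np.argmax(logits))   # index of the predicted class
```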
For applications where latency is an issue, the ability to perform such functions on the device is crucial. This includes mission-critical applications such as remote medical diagnoses, and several examples are starting to appear in devices for smart home security. Similarly, applications shouldn’t have to rely on a persistent connection, given that one can’t always be guaranteed. In the Google Translate example, an app listening to ambient audio is better served by running machine learning on the device and relaying results to the cloud at intervals, a pattern sketched below.
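One way to read that pattern in code, under assumed names: classify each sample on the device immediately so the user sees no latency, queue the results, and relay the queue to the cloud only when a connection is available. The ingest endpoint is a placeholder, and a real implementation would add retries and batching.

```python
# Sketch of the hybrid pattern: act locally, relay to the cloud when possible.
import json
import queue
import urllib.request

pending = queue.Queue()   # results produced on the device while offline

def handle_sample(audio_chunk, classify):
    """Classify on the device immediately; queue the result for later upload."""
    result = classify(audio_chunk)   # local inference, low latency, works offline
    pending.put(result)
    return result                    # the app can act on it right away

def flush_to_cloud(endpoint="https://example.com/ingest"):   # hypothetical URL
    """Relay queued results at intervals, only when a connection is available."""
    while not pending.empty():
        payload = json.dumps(pending.get()).encode("utf-8")
        req = urllib.request.Request(endpoint, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # a real app would add retry and batching
```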
CCS Insight believes that machine learning on the device heralds a change in the design of applications using artificial intelligence. We predict the rise of hybrid apps that combine the convenience and speed of local processing with the power and ubiquity of the cloud. One challenge for developers will be juggling different on-device development kits for machine learning, depending on the chipset or operating system, and integrating them with their cloud-based systems, as the sketch below suggests.
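One common way to manage that fragmentation is to hide the chipset-specific kit and the cloud system behind a single interface, so application code stays the same whichever path executes the model. The backend classes below are stand-ins, not bindings to any real SDK.

```python
# Sketch: one interface over on-device and cloud inference; names are hypothetical.
from abc import ABC, abstractmethod
from typing import Any, Sequence

class InferenceBackend(ABC):
    """Common contract the app codes against, whatever runs the model."""
    @abstractmethod
    def classify(self, features: Sequence[float]) -> Any:
        ...

class OnDeviceBackend(InferenceBackend):
    """Stand-in for a chipset-specific kit; here just a trivial local rule."""
    def classify(self, features):
        return "local: " + ("high" if sum(features) > 1.0 else "low")

class CloudBackend(InferenceBackend):
    """Stand-in for the cloud-based system the device falls back to."""
    def classify(self, features):
        return "cloud: " + ("high" if sum(features) > 1.0 else "low")

def pick_backend(chipset_has_engine: bool) -> InferenceBackend:
    """Prefer the on-device engine where the chipset provides one."""
    return OnDeviceBackend() if chipset_has_engine else CloudBackend()

print(pick_backend(True).classify([0.4, 0.9]))   # runs on the "device" path
```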
According to Gary Brotman, director of product management at Qualcomm, the introduction of the Snapdragon Neural Processing Engine software development kit means that companies will begin to include deep learning in their products, in industries as diverse as automotive, the Internet of things, mobility (including drones) and security.
An initial implementation will come from Nauto, a start-up that makes smart dashboard cameras for companies responsible for fleets of vehicles. Its hardware will use deep learning to analyse the driving environment, provide evidence in the case of a crash, and even work with the driver to optimise their driving.
Qualcomm is extremely well positioned in mobile as more adjacent segments become intelligent and connected. Moreover, there is scope for Qualcomm to expand its role by providing value based on the data captured. Although it’s unlikely that Qualcomm would compete against giants such as Amazon Web Services, this could conceivably include basic analytics services.
Qualcomm has plenty of opportunity but with that comes challenges. We will soon be at a point where the device becomes proactive in seeking data. This raises the tricky area of permissions and the balance between informing the user by requesting access and disrupting the user experience. This is a nice problem to have as adoption grows, but Qualcomm will need to work closely with its partners to ensure that artificial intelligence delivers seamless yet permission-based value and avoids numerous potential pitfalls.