AI at the Network Edge Will Need More Than Custom Silicon
Qualcomm’s recent announcement of its Snapdragon 710 chipset once again puts the spotlight on artificial intelligence in smartphones and, more broadly, at the network edge. There’s been a tremendous amount of noise on this subject in the past 12 months, with Apple announcing a neural engine in the A11 Bionic system-on-chip in the iPhone X, HiSilicon unveiling a “neural processing unit” in its Kirin 970 processor, followed by MediaTek with its NeuroPilot AI technology and artificial intelligence processing unit in the Helio P60. More recently, Arm also introduced Project Trillium.
The flurry of announcements raises questions about what these solutions encompass and what they’re trying to achieve. There’s evidently a need for custom silicon for artificial intelligence, but workloads vary, and delivering silicon is only half the challenge. Ensuring developers can fully take advantage of the hardware is arguably the hardest part.
With artificial intelligence firmly at the peak of the hype curve, the industry must collectively ensure that these technologies deliver tangible benefits rather than empty claims of intelligence. This should be easy given that artificial intelligence isn’t a new phenomenon. What is new is the way solutions are being marketed expressly under the banner of artificial intelligence.
The advent of dedicated accelerators for artificial intelligence workloads is a mixed blessing. Even defining these is difficult because of architectural similarities to digital signal processors (DSPs). Artificial intelligence is becoming pervasive in smartphones, spanning everything from power management to predictive user interfaces, natural language processing, object detection, facial recognition… the list is endless. For these tasks to run efficiently, it isn’t realistic for them all to execute exclusively on the CPU, or even the graphics processing unit (GPU). Equally, developers need the tools to make the most of the resources available.
This is highly reminiscent of the early days of the smartphone CPU core wars. Adding more cores created little impact beyond marketing hype until developers began writing to those cores to create multithreaded apps.
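The multicore lesson can be made concrete with a toy sketch: extra cores only help once the software explicitly partitions its work across them. The example below is illustrative (Python for brevity; real mobile apps would use Java/Kotlin or C++) — the key point is the explicit chunking step, without which additional cores sit idle.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_range(bounds):
    """Sum the integers in [start, stop) -- a stand-in for any divisible workload."""
    start, stop = bounds
    return sum(range(start, stop))

def single_threaded(n):
    # One "core": the whole workload runs serially.
    return sum_range((0, n))

def multi_threaded(n, workers=4):
    # The app must partition the work itself -- this is the "writing to
    # those cores" step. Each worker gets an independent slice.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_range, chunks))
```

Both paths produce the same answer; the difference is that only the second can actually occupy more than one core.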
The approach taken by Qualcomm is noteworthy as it contrasts with that of Apple, HiSilicon and MediaTek, all of which are positioning a single, dedicated accelerator for artificial intelligence. Instead, Qualcomm is emphasizing its heterogeneous approach that comprises its Hexagon DSP, Adreno GPU and Kryo CPU. The Qualcomm AI Engine consists of these cores alongside software frameworks and tools to accelerate artificial intelligence app development using the platform.
This includes Qualcomm’s Snapdragon Neural Processing Engine development kit in addition to support for the Android Neural Networks API and Hexagon Neural Network toolkit. The platform can also run models trained in Caffe, Caffe2, TensorFlow, TensorFlow Lite and ONNX, allowing developers to deliver optimized experiences with a choice of three cores on Snapdragon, but without significant additional heavy lifting.
Although Qualcomm hasn’t yet created a custom artificial intelligence accelerator, its heterogeneous approach is designed to offer a range of options that can address a wide variety of tasks and workload requirements. This is a logical approach because no two workloads are the same — some will work more effectively on a GPU, others on a DSP. Nor is the strategy a rigid one. As workloads change and demands evolve, it’s reasonable to assume that Qualcomm and others will adapt their silicon strategy with more custom accelerators.
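The routing logic behind a heterogeneous platform can be sketched in a few lines. The snippet below is purely illustrative — the workload names and affinity scores are invented, and this is not Qualcomm’s actual scheduling logic — but it captures why matching each task to the core that handles it best beats forcing everything through a single accelerator.

```python
# Hypothetical relative efficiency of each core type per workload class.
# All names and numbers here are invented for illustration only.
BACKEND_AFFINITY = {
    "image_classification": {"cpu": 1.0, "gpu": 4.0, "dsp": 6.0},
    "speech_recognition":   {"cpu": 1.0, "gpu": 2.0, "dsp": 5.0},
    "ui_prediction":        {"cpu": 3.0, "gpu": 1.0, "dsp": 2.0},
}

def pick_backend(workload, available=("cpu", "gpu", "dsp")):
    """Route a workload to whichever available core scores highest for it."""
    scores = BACKEND_AFFINITY[workload]
    return max(available, key=lambda backend: scores[backend])
```

In this toy model, vision workloads land on the DSP, interface prediction stays on the CPU, and if a core is busy or absent the dispatcher simply falls back to the next-best option — the flexibility the article argues no single fixed accelerator can offer.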
Neither approach is right or wrong, particularly as artificial intelligence is still in its infancy. We can expect the network edge to become characterized by multiple devices and endpoints, all with varying and increasingly individual requirements when it comes to artificial intelligence. As workloads proliferate, the resources available will adjust in parallel. No single accelerator will fulfill all functions.
What’s essential today is that artificial intelligence silicon has purpose and can be easily exploited by developers. Choice, flexibility and performance are far more important than claims of a given core being “dedicated to artificial intelligence” or having an attention-grabbing brand name that conjures up visions of other-worldly intelligence. Success will be determined by flexibility, performance, developer commitment and the ability to adapt to the rapidly changing demands of artificial intelligence.
This article first appeared in FierceWireless on 24 May 2018.