5G networks founded on processing, intelligence and flexibility
Nvidia has been largely out of the mobile limelight ever since it abandoned its Tegra and Icera efforts in mobile chipsets. It was undoubtedly the right move: Intel has since withdrawn from the smartphone modem market, competition is intensifying and 5G has driven up cost and complexity.
However, MWC Americas saw Nvidia’s return to the main stage. Mobile networks are now as much a computing problem as a networking one, and the market has come to Nvidia. Gone are the days of Nvidia chasing, but never quite catching, the opportunity in mobile.
Nvidia made several announcements, spearheaded by its EGX Edge Supercomputing Platform. This is designed to take artificial intelligence (AI) workloads to the edge, improving performance, privacy, compliance and data sovereignty. By delivering EGX in combination with Kubernetes, Nvidia is offering a consistent, scalable platform for deployment on premises, at the network edge and in the cloud.
Nvidia has partnered with Red Hat to standardize and automate the deployment of the components needed to provision GPU-based Kubernetes systems. Significantly, the EGX software stack is further supported by partners including Canonical, Cisco Systems, Microsoft, Nutanix and VMware. Compliant hardware, which Nvidia is labelling “Nvidia GPU Cloud Ready for Edge”, will be available from some 14 manufacturers including Dell, Hewlett Packard Enterprise and Lenovo. All server offerings will be based on Nvidia’s T4 GPU.
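To make “GPU-based Kubernetes systems” concrete, the sketch below uses the official Kubernetes Python client to request a single GPU for a containerized inference workload through the nvidia.com/gpu resource exposed by Nvidia’s device plugin for Kubernetes. It is illustrative only: the image name and pod details are hypothetical placeholders, and it stands in for, rather than reproduces, the Nvidia and Red Hat provisioning tooling described above.

```python
# Minimal sketch (not Nvidia's EGX tooling): scheduling a GPU-accelerated
# container on a Kubernetes cluster whose nodes run Nvidia's device plugin,
# which exposes GPUs as the "nvidia.com/gpu" extended resource.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

# Hypothetical inference container; the image name is a placeholder.
container = client.V1Container(
    name="edge-inference",
    image="example.registry/edge-ai-inference:latest",
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-inference-demo"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

# The scheduler will place the pod only on a node with a free GPU.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```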
This is an impressive list of partners and delivers some much-needed consistency from the cloud to the edge. That Nvidia can muster such widespread and immediate support reflects its existing strength in data centre AI. With growing demand to move workloads closer to the source of the data, Nvidia is predictably extending its AI capability beyond the centralized cloud.
With EGX offering the same environment and tools from the cloud to the edge, it’s little surprise that it has broad support. Carrier involvement is less evident at this stage, although Nvidia also showcased carrier partnerships for its GeForce Now cloud gaming service, which represent a natural starting point for EGX. Nvidia will hope that what starts with cloud gaming quickly matures into other workloads as network operators establish their own edge cloud capabilities. In the meantime, the on-premises and hybrid cloud opportunity is where Nvidia will generate early momentum: BMW, Procter & Gamble and Samsung have all committed to support EGX.
The new platform is undoubtedly a threat to Intel, but certainly not an unexpected one. Intel’s strategy for AI and edge computing is built on a wide diversity of silicon options spanning Xeon, FPGAs, Movidius, ASICs, GPUs and even Atom chips, with tools such as OpenVINO, oneAPI and OpenNESS providing consistency for developers. It’s inescapable that Nvidia has grabbed the headlines in AI silicon, but GPU computing still requires a CPU. And as workloads proliferate and many of them move to the edge, approaches to AI acceleration will inevitably adapt in parallel.
Nvidia will further catch Intel’s attention with the announcement of an initiative with Ericsson to explore ways of virtualizing radio access networks using GPU computing. The move follows a host of partnerships and announcements by Intel and its carrier and infrastructure partners (see Instant Insight: Intel Data-Centric Innovation Day, 2019 and Intel’s Role in 5G: Beyond the Modem).
Nvidia’s progress in the data centre meant that diversification into the edge computing opportunities presented by 5G was inevitable. The market is moving toward both Nvidia and Intel as processing, intelligence and flexibility become cornerstones of next-generation 5G networks. It’s not a winner-takes-all scenario, but Intel needs to ensure that its strategy of choice, flexibility and scale doesn’t get lost as Nvidia sharpens the edge in its cloud story.