Edge Computing Is Not Plain Sailing — Cloud Can Help

When a yacht crosses the Atlantic, the ocean feels infinite. There are boundaries, but the crew can sail for most of the journey without thinking about them, focussing instead on the course, the conditions and how to sail efficiently.

Sailing on a local lake or reservoir is different. You can normally see the shore in all directions, so the crew needs to pay more attention to their surroundings and others on the lake. Sailing on a river is even more constrained, with smaller distances to the riverbank as well as many other people around.

Despite these different constraints, the sailing itself is done in the same way. The crew needs to take account of the local context, but doesn’t need to learn different sailing techniques for each waterway.

Software architects and developers in the enterprise and industrial world would like the same to be true for them as they work across the cloud, telecom networks, on-premises systems and other forms of edge computing.

Some parts of the analogy hold well. Use of public clouds is limited mostly by a customer’s budget, not by the computing resources. At the other end of the spectrum, working with data from a factory machine often takes place on a single, small server or PC-grade gateway with limited power consumption, computing horsepower and memory.

The factory machine is an example of edge computing, an umbrella term that includes:

  • Processes running on individual sensors and machines, known as edge devices
  • Gateways typically close to one or more machines
  • On-premises computing, where businesses own and host their own servers
  • Computing running on telecom servers in fixed or mobile networks, known as “telco edge”
  • Colocation facilities where businesses run servers in shared space nearby, referred to as colocated edge.

The software architecture for edge computing is generally different from that in the cloud. And it’s especially diverse in industrial machinery, which historically has been operated on closed, proprietary software stacks. This has been opening up for some time, and the market is shifting as suppliers look to address several sectors. But it’s still a sprawling and messy landscape, with many specialist options for applications, hypervisors, middleware, networking, operating systems and security. Developers often need to learn different skills and programming languages at various layers of the stack, making it harder to design a coherent system architecture — not easy water to sail in.

Cloud provider Amazon Web Services (AWS) was the first to tackle this issue when it launched the AWS Greengrass software in 2016. The product offered a subset of the firm’s cloud functions for Internet of Things (IoT) applications that could run on the Ubuntu operating system in small edge computing devices. AWS now also uses the product in some of its edge computing appliances, and other major cloud providers have since launched similar products supporting their own clouds.

This is suitable for customers who have gone all-in with a single cloud provider, but many look for an architecture that can support different clouds for greater independence and openness.

Beyond this, there’s increasing attention on enterprise computing that doesn’t use the public cloud, described by Intel as the distributed edge. The term sounds like another buzzword for all traditional computing; however, three new aspects are clear.

Firstly, corporate systems that use cloud, colocated edge, telco edge and on-premises computing will be based on a distributed architecture. This introduces different ways of doing the “plumbing” for the system — managing the connected devices, networking, software updates, security and so on — which will be easier if the architecture is common across all areas.
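To make the point concrete, here’s a minimal sketch, in Python, of what a common plumbing contract might look like. All the names (ComputeSite, CloudRegion, FactoryGateway) are hypothetical illustrations, not any vendor’s API; the idea is simply that one management routine can drive very different computing domains once they sit behind a shared interface.

```python
from abc import ABC, abstractmethod

class ComputeSite(ABC):
    """A common 'plumbing' contract across cloud, telco edge and on-premises.

    Hypothetical interface for illustration only.
    """

    @abstractmethod
    def apply_update(self, version: str) -> str:
        """Roll a software update out to this site; return the resulting status."""

class CloudRegion(ComputeSite):
    def apply_update(self, version: str) -> str:
        # In practice: call the cloud provider's deployment API.
        return f"cloud region updated to {version}"

class FactoryGateway(ComputeSite):
    def apply_update(self, version: str) -> str:
        # In practice: stage the image and apply it in a maintenance window.
        return f"gateway staged update {version} for next maintenance window"

def roll_out(sites: list[ComputeSite], version: str) -> list[str]:
    # The same update logic works for every domain behind the interface.
    return [site.apply_update(version) for site in sites]
```

With a shared contract like this, the update, monitoring and security machinery is written once rather than per domain, which is the practical payoff of a common architecture.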

Secondly, some applications will largely live in the distributed edge rather than the public cloud, such as 5G networks, augmented reality, autonomous vehicles and robotics. Other uses will span the cloud and the distributed edge alike: inferencing and machine learning, fully fledged IoT systems, digital twins and the metaverse. So, the distributed edge is becoming a first-class citizen of the architecture, not just a set of devices attached to the cloud.

Thirdly, the different computing domains are increasingly thought of as connected areas. There are good reasons for treating them as a continuum, with the option to run applications and workloads in any of them: business continuity, industrial-grade reliability, optimization of capital or operating expenditure, privacy management and lower latency. Again, these goals will be easier to achieve if the architecture is common.

We’ve previously covered some of Intel’s pioneering moves with processors for edge computing, OpenVINO processor abstraction software for machine learning and its Edge Software Hub, so it was great to catch up recently with Sachin Katti, chief technical officer and chief strategy officer of Intel’s Network and Edge Group (NEX), to discuss its approach to the distributed edge and its recent developments.

Mr Katti stressed Intel’s vision, which is in line with the sailing analogy above — to make development as common as possible across computing environments. System architects and developers should be able to build what they want without having to worry too much about the different parts of the system it runs on. They shouldn’t have to learn a suite of new skills and tools as they include different types of edge computing in their systems. Intel’s vision is also based on openness and remaining agnostic to cloud providers that could be linked into the system.

This vision has prompted the formation of NEX from the company’s previously separate IoT, cloud networking and telecom networking groups, a move that should enable better support for a broadening range of users and developers in a variety of companies. Many of the people involved will be operations engineers or managers, rather than traditional cloud developers.

Mr Katti also pointed out the growing breadth of Intel’s edge computing initiatives, ranging from hardware, up through the infrastructure layers, to commercial-grade machine learning software running on top, including:

  • New processors, including the 4th Gen Intel Xeon Scalable processor, set to be important for 5G core and virtual radio access networks, and the 12th Gen Intel Core system-on-chip for edge computing in IoT
  • FlexRAN, an open-source framework for 5G radio access networks, as well as tooling and APIs
  • Contributions to the Open Programmable Infrastructure project, run by the Linux Foundation to develop standardized architecture components and ways of interacting with them, including creating an Infrastructure Programmer Development Kit
  • Cloud Native Data Plane, a set of open-source user software libraries for handling the system data, which are a key part of Intel’s approach to interoperability with various clouds
  • Smart Edge, a lighter cloud software stack for multi-access edge devices, enabling users to deploy and manage edge zones. It includes pre-validated blueprints for major uses such as private 5G networks and virtual cloud
  • Edge Software Hub and the Developer Cloud, equipping developers to source fully fledged software suites addressing common uses, and to try them out on different hardware before implementing them
  • Intel Geti and OpenVINO, software suites to facilitate machine learning development and deployment for training and inference independent of the type of processor used.

Together, these tools let users deploy their systems across public, private and hybrid clouds.

Mr Katti also discussed where these developments will take the industry. He said the hardware for edge computing is shifting to commercial off-the-shelf systems, rather than highly specialist hardware, and that the lower layers of software are converging. Intel’s focus is on these aspects — the greatest diversity of edge software is in the upper layers, so this is where the greatest change will be seen.

If we’re moving to a world in which a cloud software architecture is the prevalent form of computing across cloud, telco cloud, on-premises and much of the distributed landscape of edge computing, then a lot of the software used at the edge will need to be redesigned to run properly in that environment. This is for technical reasons — distributed usage and good compatibility with the cloud architecture — as well as commercial reasons, so that suppliers have a strong and defensible position and don’t lose market share to the cloud players’ services.

Also, cloud providers boast high development speeds, with some releasing up to 120 features and new services each quarter. In the operations technology landscape, system architects are more conservative and don’t welcome this pace of development. So, edge computing tools from all providers and the open-source community will need to pace how they stay current, guided by customer requests for new features rather than by release velocity.

Lastly, once there’s a common cloud architecture spanning the different areas of computing, it will become easier for operations technology specialists to optimize their systems for different outcomes. For resilience, you might run automatic failover of workloads from a faulty edge device to elsewhere in the system. For lower operating costs, you might design the system around on-premises computing to minimize data transfer and cloud data storage. For fast response times and low latency, for example with machine learning inference, more computing could be done at the very edge of the system. You can dynamically run the workload in the optimal place based on current conditions.
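As a toy illustration of that kind of placement decision, here’s a short Python sketch. The locations, numbers and policy here are invented for the example, not drawn from any real product; it just shows how one policy function can route a workload for failover, latency or cost.

```python
from dataclasses import dataclass

@dataclass
class Location:
    """One place a workload could run. All values are illustrative."""
    name: str
    latency_ms: float      # round-trip time to the data source
    cost_per_hour: float   # operating cost of running the workload here
    healthy: bool = True

def place_workload(locations, max_latency_ms, optimize="latency"):
    """Pick where to run a workload given current conditions.

    Unhealthy sites are excluded (automatic failover); among those that
    meet the latency budget, choose the fastest or the cheapest.
    """
    candidates = [loc for loc in locations
                  if loc.healthy and loc.latency_ms <= max_latency_ms]
    if not candidates:
        raise RuntimeError("no location satisfies the constraints")
    key = ((lambda loc: loc.cost_per_hour) if optimize == "cost"
           else (lambda loc: loc.latency_ms))
    return min(candidates, key=key)

sites = [
    Location("edge-gateway", latency_ms=2, cost_per_hour=0.50),
    Location("on-prem-server", latency_ms=8, cost_per_hour=0.20),
    Location("public-cloud", latency_ms=45, cost_per_hour=0.10),
]

# Low-latency inference must stay near the machine...
fast = place_workload(sites, max_latency_ms=10, optimize="latency")
# ...while a batch job can chase the lowest cost within a loose budget.
cheap = place_workload(sites, max_latency_ms=100, optimize="cost")
```

The same function also captures failover: mark the edge gateway unhealthy and the latency-sensitive workload lands on the on-premises server instead — exactly the kind of decision that becomes routine once one architecture spans every domain.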

The distributed edge, as Intel calls it, includes a huge array of machinery serving enterprise and operations systems for all sectors. Equally, though, the benefits of reducing technical complexity and fragmentation are a prize worth reaching for. Bringing a common architecture to all this is an enormous task that will take many years. But the course is clear: suppliers strongly support the initiative, and many of the necessary tools and communities exist to help implement the change.