An Overview of Edge Computing
As new technologies are introduced to the market, the computing architecture that supports them will continue to change. Over the past two decades, we’ve seen the growth of the internet and the proliferation of cloud services that enable a huge range of digital experiences. The level of change has been transformative, but it is not enough to support many new applications that rely on low-latency, high-bandwidth data delivery to network clients.
The new computing paradigm set to change this dynamic is known as edge computing. By deploying significant compute resources “at the edge,” meaning closer to end users, operators can offload data-intensive tasks that require extremely low latency from the data center. This type of implementation has many benefits, but it changes the approach to system-level design and chipset selection. We’ll give an overview of current trends in edge computing system design and the approaches companies can take to support this growing shift in computing.
Network and Systems Architecture for Edge Computing
Many digitally-enabled experiences require compute that is not available on the end user’s device. Today, processing power in these applications is provided by the cloud, or at minimum by the application provider’s web server infrastructure. Newer experiences are expected to increase the level of compute required to serve end clients, which in turn increases network workloads and bandwidth requirements. Edge computing offers a solution to these challenges.
Edge computing requires deploying servers outside the data center so that they can provide the high-performance compute that supports applications for end users. More advanced applications and services, such as smart infrastructure, autonomous vehicles, AI-driven applications, and industrial/commercial systems, will require too much computing power to fit into a client device. With a connection to a nearby edge compute node, those computing requirements can be offloaded from the cloud and services can be provided with lower latency.
In a typical deployment, edge compute nodes sit between the end user’s device and the cloud within the larger network infrastructure. Services like 5G and fiber are the enabling technologies that provide connectivity between end users and edge nodes, where processing for applications and services is performed. Results are delivered to end users with lower latency and higher bandwidth than they could receive from cloud services.
The idea here is simple: high-compute, low-latency processing tasks can be handled by an edge application deployed on one or more edge nodes, while latency-tolerant supporting processing and storage tasks can be handled in the cloud.
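To make this split concrete, here is a minimal Python sketch of how a client-side dispatcher might route latency-critical tasks to a nearby edge node and latency-tolerant tasks to the cloud, based only on estimated propagation delay. The site names, distances, and latency budgets below are hypothetical assumptions for illustration, not figures from any real deployment.

```python
from dataclasses import dataclass

# One-way propagation delay in optical fiber is roughly 5 microseconds per km
# (light travels at about 200,000 km/s in glass).
US_PER_KM = 5.0

@dataclass
class ComputeSite:
    name: str
    distance_km: float  # approximate network path length to the client

    def round_trip_ms(self) -> float:
        # Propagation delay only; real systems add queuing and processing time.
        return 2 * self.distance_km * US_PER_KM / 1000.0

def choose_site(latency_budget_ms: float, edge: ComputeSite, cloud: ComputeSite) -> ComputeSite:
    """Send a task to the cloud when the latency budget allows; otherwise use the edge node."""
    return cloud if cloud.round_trip_ms() <= latency_budget_ms else edge

# Hypothetical sites: a metro edge point of presence and a distant cloud region.
edge_node = ComputeSite("metro-edge-pop", distance_km=40)
cloud_region = ComputeSite("regional-cloud-dc", distance_km=1500)

for task, budget_ms in [("video-frame-inference", 10.0), ("nightly-backup", 500.0)]:
    site = choose_site(budget_ms, edge_node, cloud_region)
    print(f"{task}: {budget_ms} ms budget -> {site.name} "
          f"(propagation RTT ~ {site.round_trip_ms():.1f} ms)")
```

Even this back-of-envelope model shows why distance matters: a cloud region 1,500 km away contributes roughly 15 ms of round-trip propagation delay before any processing happens, while a metro edge node 40 km away adds well under 1 ms.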
Edge Computing Applications
This addition to the traditional network architecture enables multiple applications that require high compute and low latency. Some examples include:
- Streaming services
- AI at the edge
- IIoT and automation
- Smart infrastructure
- Military
In data centers, the standard CPU + GPU architecture is used for general-purpose servers and some application-specific servers, while FPGAs have not traditionally been the chipset of choice for providing digitally-enabled services over the internet. Any of these processing modalities can be implemented in edge servers, depending on the requirements of the end application.
Here are some points to consider when selecting a processor architecture for use in an edge server (a rough selection sketch follows the list):
- CPUs: These provide general-purpose computing with the flexibility to run any application, but they are the least efficient option for dedicated high-compute functions.
- GPUs: These provide much greater processing power that can be dedicated to specific tasks, although they generate excess heat because they do not implement application-specific logic.
- FPGAs: These components do not provide general-purpose computing except on small logic blocks. However, they are reconfigurable and can dedicate significant resources to high-compute tasks.
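As promised above, here is a minimal sketch of how a design team might weigh these trade-offs when picking a processor architecture for an edge server. The workload profiles, attribute scales, and scores are illustrative assumptions only, not Cadence guidance or measured benchmarks.

```python
# Attribute scale is 0 (not needed / not provided) to 3 (critical / excellent).
WORKLOADS = {
    "general-purpose services":   dict(flexibility=3, parallelism=1, custom_logic=0),
    "AI inference at the edge":   dict(flexibility=1, parallelism=3, custom_logic=1),
    "packet/protocol processing": dict(flexibility=1, parallelism=1, custom_logic=3),
}

# How well each architecture serves each attribute, per the points above.
ARCHITECTURES = {
    "CPU":  dict(flexibility=3, parallelism=1, custom_logic=0),
    "GPU":  dict(flexibility=1, parallelism=3, custom_logic=1),
    "FPGA": dict(flexibility=1, parallelism=2, custom_logic=3),
}

def best_fit(workload: dict) -> str:
    """Return the architecture whose strengths best overlap the workload's needs."""
    def overlap(arch: dict) -> int:
        # Credit an architecture only up to what the workload actually needs.
        return sum(min(arch[k], need) for k, need in workload.items())
    return max(ARCHITECTURES, key=lambda name: overlap(ARCHITECTURES[name]))

for name, profile in WORKLOADS.items():
    print(f"{name}: suggested starting point -> {best_fit(profile)}")
```

A real selection would also weigh power budget, thermal limits, cost, and toolchain maturity, but the same pattern applies: match the architecture’s strengths to the workload’s dominant requirements rather than defaulting to any one chipset.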
What’s Next in Edge Computing?
First things first: edge data center systems and infrastructure need to grow and thrive. As available edge compute capacity and the market expand, we can expect the business models around edge computing to resemble those used by data center providers. According to estimates from PwC, the global market for edge data centers (both proprietary and multi-tenant data centers, or MTDCs) is expected to triple, reaching approximately $13.5 billion by 2024. The added value from new digital services enabled by this growth is much harder to estimate.
The complexity of edge computing systems requires a systems-level design approach with participation from electrical engineers, mechanical engineers, software developers, and of course PCB designers. When you need to implement low-power design techniques in your physical layout, use the complete set of system analysis tools from Cadence. Only Cadence offers a comprehensive set of circuit, IC, and PCB design tools for any application and any level of complexity.
Subscribe to our newsletter for the latest updates. If you’re looking to learn more about how Cadence has the solution for you, talk to us and our team of experts.