
Bring Intelligence to IoT With Edge Computing


The rapidly increasing adoption of the Internet of Things (IoT) across applications has driven up the number of connected devices, which in turn generate massive amounts of IoT-derived data. Because IoT devices have limited compute resources, storing and processing IoT data on the devices themselves is inefficient. Traditionally, cloud computing resources have been used to address these resource limitations in networked IoT devices.

However, this practice creates other challenges, most notably high latency in IoT applications. The current state of IoT networks does not scale and will not be capable of serving the massive number of IoT products expected in advanced applications. A new method for processing massive quantities of data from multiple devices is therefore needed.

Edge computing has emerged as a possible solution to this resource congestion problem by providing localized computing for networked devices residing at the edge of the network. While the cloud can still provide services requiring highly centralized compute resources, edge computing moves data processing or storage to the "edge" of the network, i.e., close to the locations of end users.

IoT + Edge Computing Networks

Within the IoT + edge computing networking architecture, edge compute nodes act as intermediaries between IoT devices and a cloud environment, or as the only network node with which an IoT device interacts. The edge compute network architecture used with IoT products depends on the role of the IoT device, its compute requirements, and the services being delivered to end users.

Traditionally, the network architecture supporting IoT devices exists at three levels:

  • Sensor nodes are deployed at the end points of the network and are generally the sources of the data that form the backbone of an IoT network.
  • IoT gateways gather and consolidate data from IoT devices before transmitting it to the cloud servers, and vice versa.
  • Cloud/backhaul network provides the bulk of compute required in an application and service delivery to end users.

Within this network architecture, cloud servers are connected to sensor nodes and the core network via IoT gateways. Typically, IoT gateways pre-process data in order to eliminate needless overhead and avoid redundancy. However, IoT gateways and edge compute nodes are very different: an IoT gateway provides none of the compute required in service delivery; it only relays data between cloud servers and end users. Edge compute nodes take the place of IoT gateways and provide significant compute outside the data center.
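As a sketch of this three-tier flow, the snippet below models a gateway that consolidates sensor readings and drops redundant samples before sending a single uplink message to the cloud. The class names and the deduplication rule are illustrative assumptions, not part of any real IoT framework:

```python
from statistics import mean

class SensorNode:
    """End node: produces raw readings."""
    def __init__(self, node_id):
        self.node_id = node_id

    def read(self, value):
        return {"node": self.node_id, "value": value}

class IoTGateway:
    """Consolidates readings and strips redundancy before the cloud uplink."""
    def __init__(self):
        self.buffer = []
        self.last_seen = {}

    def ingest(self, reading):
        # Pre-processing: drop a reading whose value is unchanged since
        # the node's previous report (needless overhead on the uplink).
        if self.last_seen.get(reading["node"]) == reading["value"]:
            return
        self.last_seen[reading["node"]] = reading["value"]
        self.buffer.append(reading)

    def flush_to_cloud(self):
        # One consolidated uplink message instead of per-reading traffic.
        if not self.buffer:
            return None
        summary = {"count": len(self.buffer),
                   "mean": mean(r["value"] for r in self.buffer)}
        self.buffer.clear()
        return summary

gw, node = IoTGateway(), SensorNode("t1")
for v in (21.0, 21.0, 22.5):   # the repeated 21.0 is filtered out
    gw.ingest(node.read(v))
print(gw.flush_to_cloud())     # {'count': 2, 'mean': 21.75}
```

An edge compute node would go further than this relay-and-filter role, running actual service logic on the buffered data before anything reaches the cloud.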


Network architecture used in edge computing.

Edge compute solves many of the problems involved in data transmission to and from the cloud, which consumes a significant amount of power and increases latency. Edge computing provides processing capabilities at the network’s edge, improving overall performance and security while reducing latency and cost. Edge and cloud are complementary; edge computing assures the continuity of services, while cloud computing administers the network and performs the most compute intensive tasks that require processing data from multiple edge nodes.
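This division of labor can be illustrated with a toy placement rule: latency-sensitive work stays on a nearby edge node, while tasks that fuse data from multiple edge nodes go to the cloud. The latency budget and task fields below are assumptions for illustration, not part of any edge standard:

```python
# Assumed budget for a round trip to a nearby edge node (hypothetical).
EDGE_LATENCY_BUDGET_MS = 50

def place_task(task):
    """Return 'edge' or 'cloud' for a task described by a dict."""
    # Fusing data from multiple edge nodes needs the centralized cloud.
    if task.get("needs_multi_node_data"):
        return "cloud"
    # Deadlines tighter than a cloud round trip must stay at the edge.
    if task["deadline_ms"] <= EDGE_LATENCY_BUDGET_MS:
        return "edge"
    return "cloud"

print(place_task({"deadline_ms": 20}))                                  # edge
print(place_task({"deadline_ms": 500, "needs_multi_node_data": True}))  # cloud
```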

Implementation of Edge Computing

Broadly speaking, implementation of edge computing can be done in three different ways:

  • Cloudlet deployment: A cloudlet is a small cluster of computers that acts as a miniature data center, with cloudlet nodes linked to IoT devices in the same geographic region.
  • Multiaccess edge computing: This network implementation is also known as mobile edge computing (MEC). This type of network enables mobile devices at the edge to access compute via an edge server, which can also route traffic to the cloud.
  • Fog computing: This decentralized system of computational nodes hosts end-user services between the cloud and end-users. Nodes are heterogeneous and the location of fog-computing nodes could be anywhere between the cloud and the end user.

In all of these edge computing paradigms, the goal is to process massive amounts of IoT data locally. For example, MEC improves cellular network services by reducing latency and increasing bandwidth. Before transmitting massive amounts of data to the cloud, an MEC server analyzes the data and provides context-aware services. Similarly, fog computing nodes are intended to be broadly accessible by any IoT device on the network.
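As a rough illustration of this local pre-analysis, an edge node might forward only a compact summary plus the anomalous raw values, rather than the full batch. The threshold and field names here are hypothetical:

```python
def analyze_at_edge(readings, threshold=80.0):
    """Analyze a batch locally; only the returned dict crosses the
    backhaul link to the cloud (threshold is an assumed example value)."""
    anomalies = [r for r in readings if r > threshold]
    return {"n": len(readings), "max": max(readings), "anomalies": anomalies}

batch = [40.2, 41.0, 95.3, 39.8]
print(analyze_at_edge(batch))   # {'n': 4, 'max': 95.3, 'anomalies': [95.3]}
```

The bandwidth saving scales with batch size: four raw readings in, one short summary out.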

To learn more about these implementations of edge computing in IoT networks, read the article from Hamdan et al.

The Hardware Perspective

IoT systems are usually battery-powered devices that send and receive data wirelessly. When edge computing enters the mix, the computational requirements at the nodes increase, and the need for energy-efficient devices arises. Most of the time, data is sent intermittently in small packets using low-power radio transceivers operating on sub-GHz standards or a 2.4 GHz standard.
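To illustrate how small such packets can be, the sketch below packs one sensor reading into a fixed six-byte payload. The field layout (node ID, sequence number, centi-degree temperature) is an assumption for illustration, not any radio standard:

```python
import struct

# Little-endian: uint16 node id, uint16 sequence number, int16 temp * 100.
PACKET_FMT = "<HHh"

def encode_reading(node_id, seq, temp_c):
    """Pack a reading into a compact fixed-size payload."""
    return struct.pack(PACKET_FMT, node_id, seq, round(temp_c * 100))

def decode_reading(payload):
    """Recover (node_id, seq, temp_c) from the payload."""
    node_id, seq, raw = struct.unpack(PACKET_FMT, payload)
    return node_id, seq, raw / 100

pkt = encode_reading(7, 42, 21.37)
print(len(pkt))              # 6 bytes on the air, not a verbose text blob
print(decode_reading(pkt))   # (7, 42, 21.37)
```

Keeping payloads this small is what lets duty-cycled, sub-GHz radios run for years on a battery.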

The list below summarizes some hardware requirements for IoT products that can interface with edge compute nodes:

  • Processor: low-power architecture required; FPGA or custom ASIC/SoC preferred
  • Wireless communications: sub-GHz, 2.4 GHz (Bluetooth, WiFi, etc.)
  • Wired communications: serial protocols, Ethernet, or fiber for advanced edge devices
  • Sensor interfaces: micro-electromechanical systems (MEMS) sensors preferred
  • Power management: on-board management and control over power to peripherals required

IoT nodes can consume significant power, and many modern MCU architectures cannot provide the required efficiency. In some cases, it is therefore better to use dedicated hardware such as ASICs instead of general-purpose MCUs, since ASICs consume less power. There is a catch, however: ASIC architectures are not flexible.

This problem can be overcome by using application-specific instruction set processors (ASIPs). These are semi-specialized processors meant to expedite repetitive application-specific processes with dedicated hardware. Flexible ASIPs are typically implemented in FPGAs, making them fully reconfigurable.

In a Nutshell

In closing, the goal in implementing compute for IoT products at the network edge is to decrease traffic flow and congestion in the network. Compared to traditional cloud services, edge computing reduces transmission latency between the nodes and the end users, resulting in faster response times for real-time IoT functionalities.

Also, by moving the communication and computational overhead from nodes with low battery resources to nodes with substantial power resources, the lifetime of an IoT ecosystem can be extended. Edge computing complements a number of other significant technologies as well, including hybrid cloud and 5G.

The complexity of IoT in edge computing networks requires a systems-level design approach with participation from electrical engineers, mechanical engineers, software developers, and PCB designers. When you need to implement low power design techniques in your physical layout, use the complete set of system analysis tools from Cadence. Only Cadence offers a comprehensive set of circuit, IC, and PCB design tools for any application and any level of complexity.

Subscribe to our newsletter for the latest updates. If you’re looking to learn more about how Cadence has the solution for you, talk to us and our team of experts.
