
The Concept of Portability in AI Apps and Hardware


AI is proliferating across industries, and embedded AI is emerging as a new computing approach that implements intelligence in edge devices and hardware systems without relying on an internet connection. Devices built with embedded AI capabilities can run local AI models and algorithms without sending data to the cloud, but the hardware architecture must be designed to support this computing approach with low latency and low power consumption.

Software portability is not a new concept to software developers, but the demand for AI on edge devices is now pushing portability down to the hardware level. While the benefits are compelling, developing portable embedded AI systems comes with unique challenges.

The Portability Challenge in Embedded AI

Use cases for embedded AI are often discussed in the context of remote or edge compute where internet access is unavailable, typically remote infrastructure, military, and aerospace. But as is often the case with advanced technologies, the use cases are rapidly expanding into consumer and industrial areas. For example, in smart home systems, embedded AI powers capabilities like speech-to-text, object recognition, and automated monitoring. In robotics and manufacturing, it enables processes like computer vision-guided quality inspection and predictive maintenance. In automotive, it supports advanced driver assistance systems and self-driving functionality.

In any of these areas, embedded AI requires local, real-time execution of AI algorithms rather than offloading processing to the cloud. As embedded AI applications diversify, there is a need to develop AI systems capable of running efficiently across different hardware environments. This is where the concept of software portability carries over into hardware development, as the hardware platform is the key to enabling portability across systems.

Why Hardware Portability Matters in Embedded AI

Portability refers to the ability of software systems and applications to run on different computing platforms without significant re-engineering. Ideally, the same would hold at the hardware level. However, when traditional computing architectures are applied to AI workloads, they quickly fail to deliver the required results.

Enabling real-time execution of AI algorithms in embedded environments is not a matter of throwing more GPUs at the problem. In the data center, the portability problem is essentially sidestepped by adding more GPUs to a stack of servers. In an embedded system, form factor and power consumption are the two limiting factors that rule out this approach.

GPUs have been the workhorse for the highly parallelized compute required in AI, but they can’t meet the form factor requirements of embedded AI systems.


For embedded AI development, portability offers two major advantages:

Flexibility - Portable hardware design allows the same AI stack to be deployed on different edge devices or hardware systems, making the design essentially vendor-agnostic across chipsets. This makes it easy to scale solutions or reuse embedded software in new deployments.

Future-proofing - Embedded systems often remain in use for years after deployment. A portable embedded system can support updates to the AI application, or the application can be ported onto newer hardware as it becomes available, without a major code overhaul.

Designing for Portable Embedded AI

Portable AI systems need to run on a chipset that enables AI compute at the edge without the form factor limits of GPUs. The best processor candidates are:

  • An AI accelerator chip

  • An FPGA with custom logic implementing AI compute

  • Heterogeneous components with an AI block/chiplet

These systems also adopt a modular hardware architecture where key system components are separated into distinct elements that can be swapped in and out. For example, when the AI accelerator is decoupled from the main processor and peripherals, the accelerator module can be upgraded or substituted as new options emerge without affecting the rest of the system.
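
As a software-side illustration of that decoupling, the minimal C++ sketch below hides two interchangeable accelerator modules behind one interface. The AiAccelerator, NpuAccelerator, and FpgaAccelerator names are hypothetical, not any vendor's API; the point is that application code depends only on the interface, so the module behind it can be swapped without touching the rest of the system.

    #include <iostream>
    #include <memory>
    #include <vector>

    // Hypothetical interface for a swappable AI accelerator module.
    // The rest of the system depends only on this contract.
    class AiAccelerator {
    public:
        virtual ~AiAccelerator() = default;
        virtual std::vector<float> infer(const std::vector<float>& input) = 0;
    };

    // One possible backend: an NPU-style accelerator (stub for illustration).
    class NpuAccelerator : public AiAccelerator {
    public:
        std::vector<float> infer(const std::vector<float>& input) override {
            // Real code would hand the buffer to the NPU driver here.
            return std::vector<float>(input.size(), 0.0f);
        }
    };

    // A drop-in replacement, e.g., an FPGA-based module.
    class FpgaAccelerator : public AiAccelerator {
    public:
        std::vector<float> infer(const std::vector<float>& input) override {
            // Real code would stream the buffer through FPGA logic here.
            return std::vector<float>(input.size(), 0.0f);
        }
    };

    int main() {
        // Swapping the module is a one-line change; the application code
        // calling infer() is unaffected.
        std::unique_ptr<AiAccelerator> accel = std::make_unique<NpuAccelerator>();
        std::vector<float> output = accel->infer({1.0f, 2.0f, 3.0f});
        std::cout << "Output size: " << output.size() << '\n';
    }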

Modularity requires standardized interfaces between system elements. Widely used interface protocols like PCIe, Ethernet, and USB enable interoperability between hardware components from different vendors.
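
As a small, concrete example of working against one of these standard interfaces, the sketch below enumerates PCIe devices through the Linux sysfs tree, assuming a Linux host where /sys/bus/pci/devices is available. A host system could use the vendor and device IDs discovered this way to load the right driver for whichever accelerator card currently occupies the slot.

    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    // List PCIe devices with their vendor/device IDs via Linux sysfs (C++17).
    int main() {
        namespace fs = std::filesystem;
        const fs::path pci_root{"/sys/bus/pci/devices"};
        if (!fs::exists(pci_root)) {
            std::cerr << "No PCI sysfs tree found on this host\n";
            return 1;
        }
        for (const auto& dev : fs::directory_iterator(pci_root)) {
            // Each device directory exposes its IDs as small text files.
            std::ifstream vendor_file(dev.path() / "vendor");
            std::ifstream device_file(dev.path() / "device");
            std::string vendor_id, device_id;
            std::getline(vendor_file, vendor_id);
            std::getline(device_file, device_id);
            std::cout << dev.path().filename().string()
                      << " vendor=" << vendor_id
                      << " device=" << device_id << '\n';
        }
        return 0;
    }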

Finally, abstraction layers in both hardware and software hide low-level hardware details from AI developers, so implementations do not have to be tailored to specific platforms. These layers help AI software remain hardware-agnostic and portable across system generations.
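
One common way to realize such an abstraction layer in software is a registry or factory that selects a compute backend at run time, so application code never names a specific chipset. The backend names and registry below are a hypothetical sketch of the pattern, not the API of any particular AI runtime.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // A backend is just a function that runs inference on a buffer.
    using Backend = std::function<std::vector<float>(const std::vector<float>&)>;

    // Registry mapping platform names to backends. On a real system each
    // entry would wrap a vendor runtime; here they are stubs.
    std::map<std::string, Backend> make_registry() {
        return {
            {"cpu",  [](const std::vector<float>& in) { return in; }},
            {"npu",  [](const std::vector<float>& in) { return std::vector<float>(in.size(), 0.0f); }},
            {"fpga", [](const std::vector<float>& in) { return std::vector<float>(in.size(), 1.0f); }},
        };
    }

    int main() {
        auto registry = make_registry();

        // The target could come from a config file or a device probe;
        // the application logic below it never changes.
        std::string target = "npu";
        auto it = registry.find(target);
        if (it == registry.end()) {
            throw std::runtime_error("no backend for " + target);
        }

        std::vector<float> result = it->second({0.5f, 0.25f});
        std::cout << target << " produced " << result.size() << " values\n";
        return 0;
    }

Because the target string can come from configuration or a device probe rather than being compiled in, porting the application to a new chipset reduces to registering one more backend entry.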

Moving Forward with Portable Embedded AI

As more semiconductor manufacturers develop IP and embedded compute products with AI-enabling architectures, hardware developers can build new systems that implement modularity and portability. This will be the key to enabling AI deployments in embedded environments across system architectures without relying on the cloud. While cloud-based AI will still support many use cases, embedded intelligence is essential for low-latency autonomous applications.

Hardware systems that implement embedded AI must be highly reliable at the circuit board level and package level. Make sure your design is evaluated with the best set of system analysis tools from Cadence. Only Cadence offers a comprehensive set of circuit, IC, and PCB design tools for any application and any level of complexity. Cadence PCB design products also integrate with a multiphysics field solver for thermal analysis, including verification of thermally sensitive chip and package designs.

Subscribe to our newsletter for the latest updates. If you’re looking to learn more about how Cadence has the solution for you, talk to our team of experts.
