
What Happens After Semiconductor Scaling Laws Are Broken?


About 15 years ago, everyone was predicting the end of Moore’s law around the 14 nm technology node. At the time, significant research focused on overcoming leakage and tunneling, on the use of unique materials in new chips, and on changes to the fundamental structure of transistors (FinFET, gate-all-around FET, etc.). It seems that every time the industry hits a roadblock, innovators come up with a new way to continue scaling into the next technology node.

If you look under the surface, there are multiple scaling laws dealing with computing power and processor performance; Moore’s law simply gets the most visibility. And just like Moore’s law, the empirical scaling laws describing the progress of classical computing are hitting their own limits. The next generation of computing platforms will have to overcome a total of three scaling laws that limit performance improvements, and it remains to be seen what factors will constrain future scaling efforts.

Three Current Scaling Laws

Semiconductor scaling is most often associated with device scaling (of individual transistors or complete logic circuits) or with scaling in terms of device density. The two are not always equivalent, and neither considers the system-level architecture implemented in device packaging and chipset architecture. The central idea in semiconductor scaling spans three different areas of microprocessor design:

  • Device density (number of transistors)
  • Parallelization (number of processors in parallel)
  • Computing power per unit power consumption

Readers will (or should) notice that the first scaling law above is actually Moore’s law. In fact, all three of these laws have been called out by leaders in the semiconductor industry and they have guided industry development milestones over the past several decades.

Amdahl’s Law

The first of the major scaling laws upending compute instruction execution is Amdahl’s law. This law concerns parallelization of processing and states that the speedup from adding processors is limited by the fraction of a program that must execute sequentially. Beyond a certain point, additional logical core parallelization will not continue to increase throughput.

This effect became clear even before Moore’s law reached its limit. It was noticed early on that the sequential instruction execution bottleneck is the main factor limiting AI compute in traditional processor architectures. Getting past the Amdahl’s law plateau requires eliminating sequential instructions in targeted applications like AI, an approach that will become much more common as time progresses.
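The limit Amdahl’s law describes can be sketched in a few lines of Python. The 95% parallel fraction below is an arbitrary example chosen for illustration, not a figure from any particular workload:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Theoretical speedup when `parallel_fraction` of a workload
    runs on n_cores in parallel and the rest executes sequentially."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# With 95% of the work parallelized, speedup can never exceed
# 1 / 0.05 = 20x, no matter how many cores are added.
for cores in (4, 64, 1024):
    print(f"{cores:>4} cores -> {amdahl_speedup(0.95, cores):.1f}x speedup")
```

Even at 1,024 cores, the speedup stays below the 20x ceiling set by the 5% serial fraction, which is why eliminating sequential instructions matters more than adding cores.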

Moore’s Law

Next, we arrive at the most well-known of the semiconductor scaling laws. At present, there does not seem to be any evidence of Moore’s law slowing down as the industry continues pushing device densities higher by decreasing transistor gate sizes. As of the end of 2022, the most advanced technology node in production is the 5 nm process, with the 3 nm process slated to enter production the following year. When this progress is compared with the historical march of transistor density scaling, it’s clear that Moore’s law is not slowing down.
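As a back-of-the-envelope check on the trend, Moore’s law can be expressed as a doubling of transistor count roughly every two years. The two-year period and the 1971 Intel 4004 baseline of about 2,300 transistors are the commonly cited figures, used here purely for illustration:

```python
def projected_transistors(base_count: float, base_year: int,
                          year: int, doubling_years: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Starting from the Intel 4004 (~2,300 transistors in 1971), a strict
# two-year doubling predicts tens of billions of transistors by the
# 2020s -- the same order of magnitude as today's largest chips.
print(f"{projected_transistors(2300, 1971, 2021):.2e}")
```

The fact that a 50-year extrapolation from a 1971 baseline still lands within an order of magnitude of real devices is exactly the kind of evidence behind the chart below.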


Transistor scaling since 1970. [Source: Wikimedia Commons]

Dennard Scaling

Finally, we have a physical scaling law that relates computing power to energy consumption during operation. Dennard scaling, also known as the MOSFET scaling law, states that the power density of a MOSFET stays constant as transistors get smaller. In other words, maintaining constant power density requires scaling down the voltage and current used in logic circuits in proportion as MOSFETs shrink.
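The constant-power-density claim follows directly from how each quantity scales. A minimal sketch of the arithmetic, where the scale factor k and the per-device power relation P ∝ V·I are the textbook idealization rather than data for any real process:

```python
def dennard_factors(k: float):
    """Relative changes when linear dimensions, voltage, and current
    all shrink by 1/k -- the idealized Dennard scaling recipe."""
    power_per_device = (1 / k) * (1 / k)  # P ~ V * I, each scaled by 1/k
    device_density = k * k                # devices per unit area ~ k^2
    power_density = power_per_device * device_density
    return power_per_device, device_density, power_density

# Halving every dimension (k = 2): each device uses 1/4 the power,
# 4x as many devices fit in the same area, and power density is unchanged.
print(dennard_factors(2))
```

The constant power density is what made it safe, for decades, to shrink transistors without melting the chip; once voltages could no longer be reduced in step with dimensions, that guarantee disappeared.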

The breakdown of this scaling law was instrumental in the move beyond the single-core x86 architecture. The limits of single-core computing became apparent once CPU clock speeds approached 3 GHz, and multiple logic cores were used to increase compute throughput instead. Eventually, Amdahl’s law takes over, as parallelization becomes limited by sequential instruction execution as described above.

How to Keep Scaling From Failing

As scaling has progressed, semiconductor manufacturers have repeatedly found ways to increase density at existing nodes, or to continuously reinvent transistor structures to get to the next node. The next generation of chips requires specialization, such as through the use of accelerator blocks in packages and highly customizable cores with an open standard like RISC-V.

Over the past 5 years, packaging has played a major role as an economic enabler that lets products benefit from scaling. As devices move to more advanced nodes, the capital investment required to produce more advanced products creates greater barriers to entry, so fewer companies have been able to take advantage of the most advanced technology nodes.

Heterogeneous integration is changing the economics of many products by enabling the most advanced portions of some chips to be manufactured at only the most advanced node. Other portions of the product might not need a 3 nm process; those functional blocks can work fine when produced at a less advanced node. These newer devices can take a domain-centric production approach that specializes chip/chiplet designs depending upon the end applications and the types of data being processed.

To help systems engineers keep up with the latest technology trends and include the latest capabilities in their products, designers can use the complete set of system analysis tools from Cadence. Only Cadence offers a comprehensive set of circuit, IC, and PCB design tools for any application and any level of complexity.

Subscribe to our newsletter for the latest updates. If you’re looking to learn more about how Cadence has the solution for you, talk to our team of experts.
