
Overview of the InfiniBand Protocol


Over 20 years ago, the InfiniBand protocol was invented in response to the growing demands of high-performance computing environments. The success of the InfiniBand physical layer and software layer standardization helped the protocol become one of the most popular interconnect technologies used in modern data centers. Like competing protocols used in switched fabric topologies, InfiniBand operates on point-to-point links with a standardized connector format.

With new designs targeting data center environments operating at ever faster speeds, the bandwidth in InfiniBand is also being pushed to its limit. With the PAM-4 signaling scheme adopted in recent generations, the current architecture works reliably up to 100 Gbps per lane. The data rate in the next generation of the InfiniBand interconnect architecture is still to be determined, but if the doubling trend continues, we will see the specification target 200 Gbps plus encoding overhead per Tx/Rx lane.

This article aims to give an overview of the InfiniBand protocol and its place in high-performance computing architecture. If you’re developing a new product targeting deployment in the data center, then InfiniBand may be one of the primary interconnect protocols you should consider.

Role of InfiniBand in Data Centers

While the data center environment has changed over time, interconnect technologies between servers, switches, and other infrastructure have continuously pushed data rates to higher levels. InfiniBand is one of the technologies that has followed this trend, with data rates doubling every few years. InfiniBand was originally developed in 1999, and at one point it was the most popular interconnect architecture. Today it competes with PCIe, Ethernet, Fibre Channel, and Omni-Path.

The InfiniBand standard defines multiple performance levels that are deployed in practice. The current set of standardized performance levels is shown in the table below.

Performance               | Effective throughput (without encoding) per lane | Modulation and encoding
Single data rate (SDR)    | 2 Gbps                                           | NRZ, 8b/10b
Double data rate (DDR)    | 4 Gbps                                           | NRZ, 8b/10b
Quad data rate (QDR)      | 8 Gbps                                           | NRZ, 8b/10b
Fourteen data rate (FDR)  | 13.64 Gbps                                       | NRZ, 64b/66b
Enhanced data rate (EDR)  | 25 Gbps                                          | NRZ, 64b/66b
High data rate (HDR)      | 50 Gbps                                          | PAM-4, 64b/66b
Next data rate (NDR)      | 100 Gbps                                         | PAM-4, 64b/66b

The next two performance generations are still to be determined, but their target effective throughputs per lane are 200 Gbps and 400 Gbps.
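To show where the effective throughput figures in the table come from, here is a minimal Python sketch that multiplies a raw per-lane signaling rate by the encoding efficiency (8/10 or 64/66). The raw signaling rates listed in the script are the commonly cited per-lane values for each generation and are included here as assumptions, not quotes from the specification; the nominal HDR and NDR figures also fold in FEC overhead, so this encoding-only arithmetic is exact only through EDR.

# Effective per-lane throughput = raw signaling rate x encoding efficiency.
# Raw signaling rates below are the commonly cited per-lane values (assumed here).

ENCODING_EFFICIENCY = {
    "8b/10b": 8 / 10,    # SDR, DDR, QDR
    "64b/66b": 64 / 66,  # FDR and later
}

# (generation, raw per-lane signaling rate in Gbaud, line encoding)
generations = [
    ("SDR", 2.5, "8b/10b"),
    ("DDR", 5.0, "8b/10b"),
    ("QDR", 10.0, "8b/10b"),
    ("FDR", 14.0625, "64b/66b"),
    ("EDR", 25.78125, "64b/66b"),
]

for name, raw_gbaud, encoding in generations:
    effective = raw_gbaud * ENCODING_EFFICIENCY[encoding]
    print(f"{name}: {raw_gbaud} Gbaud with {encoding} -> {effective:.2f} Gbps effective")

Running this reproduces the 2, 4, 8, 13.64, and 25 Gbps figures in the table above.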

Physical Layer

The physical layer implemented in the InfiniBand protocol is based on groups of differential pairs, similar to the earlier forms of Ethernet. The standard allows data transfer on up to 12 lanes per link, with each lane containing one Rx and one Tx differential pair. In other words, the physical layer for a single link could reach up to 24 differential pairs on a PCB. These pairs could be routed between chips on a board, between modules in the same server, or between servers.
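As a quick arithmetic sketch of what link width means for bandwidth and routing density, the snippet below scales the nominal per-lane rates from the table by the standard 1x, 4x, and 12x link widths and counts the differential pairs that must be routed (one Tx pair and one Rx pair per lane). The per-lane figures are the nominal values quoted above.

# Aggregate link throughput and differential pair count for standard link widths.
# Each lane carries one Tx and one Rx differential pair (2 pairs per lane).

LINK_WIDTHS = (1, 4, 12)                             # standard InfiniBand link widths
PER_LANE_GBPS = {"EDR": 25, "HDR": 50, "NDR": 100}   # nominal effective rates per lane

for generation, lane_rate in PER_LANE_GBPS.items():
    for width in LINK_WIDTHS:
        aggregate = lane_rate * width   # Gbps across the whole link
        diff_pairs = 2 * width          # Tx + Rx pairs to route on the PCB
        print(f"{generation} {width}x: {aggregate} Gbps, {diff_pairs} differential pairs")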

The physical layer in InfiniBand also uses a set of standardized connectors. Different connectors are available depending on lane count, performance level, and transmission media. These connectors can support passive copper, active copper, or fiber as the physical transmission media. A typical application with these standardized connectors is to provide interconnects between cards, such as between multiple NICs and a compute accelerator module (GPU, TPU, etc.).

InfiniBand interconnect solution between multiple servers (shown here via three NICs) and a compute module in a different server.

Other allowed connections include board-to-board links and cabled backplanes.

On a PCB, this means the protocol demands low-loss laminates at higher performance generations. Specialized low-loss, low-Dk FR4 laminates are available with loss tangents well below 0.01, or low-Dk PTFE laminates could be used as the interconnect substrate. At these low loss tangent values, the two main factors that will limit channel reach and skew are:

  • Skin effect and copper roughness
  • Fiber weave effect

The former can be reduced with low-roughness copper on the InfiniBand signal layers, and the latter can be reduced with spread glass laminates.
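To put those loss tangent numbers in perspective, the sketch below applies a common rule-of-thumb estimate of dielectric loss, roughly 2.3 · f(GHz) · Df · sqrt(Dk) dB per inch, to compare a standard FR4 against a low-loss laminate at the approximate Nyquist frequency of a 100 Gbps PAM-4 lane. The coefficient and the example material values are illustrative assumptions, not figures from the InfiniBand specification or any particular laminate datasheet.

import math

# Rule-of-thumb dielectric loss estimate (dB per inch), often used for quick
# stackup comparisons: alpha_d ~ 2.3 * f[GHz] * Df * sqrt(Dk).
# The coefficient and the material values below are illustrative assumptions.

def dielectric_loss_db_per_inch(f_ghz: float, dk: float, df: float) -> float:
    """Approximate dielectric loss per inch of trace at frequency f_ghz."""
    return 2.3 * f_ghz * df * math.sqrt(dk)

# Example materials (representative, hypothetical values).
materials = {
    "standard FR4":      {"dk": 4.4, "df": 0.020},
    "low-loss laminate": {"dk": 3.4, "df": 0.004},
}

# Approx. Nyquist frequency of a 100 Gbps PAM-4 lane (53.125 GBd symbol rate).
f_nyquist_ghz = 26.5625

for name, props in materials.items():
    loss = dielectric_loss_db_per_inch(f_nyquist_ghz, props["dk"], props["df"])
    print(f"{name}: ~{loss:.2f} dB/inch at {f_nyquist_ghz} GHz")

Even this rough estimate shows why a factor-of-several reduction in loss tangent translates directly into longer channel reach at PAM-4 Nyquist frequencies.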

Ethernet Over InfiniBand

Finally, because InfiniBand uses a Tx/Rx differential pair structure, the physical layer and interconnect components are compatible with Ethernet. An existing link in a board, card, or module could be used as an Ethernet link if the MAC can be instantiated in the deployed equipment. One example would be in an FPGA, where an instantiated bitstream could be updated to switch from InfiniBand to Ethernet, or vice versa, prior to deployment.
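As a small illustration of this dual-personality physical layer, many RDMA-capable adapters on Linux report the link layer each port is currently running (InfiniBand or Ethernet) through sysfs. The sketch below reads that per-port link_layer attribute; treat the exact sysfs layout and its availability as system-dependent rather than guaranteed.

from pathlib import Path

# On Linux, RDMA devices expose a per-port "link_layer" attribute under sysfs,
# typically reporting either "InfiniBand" or "Ethernet" for the same adapter
# family. The layout is system-dependent; this is an illustrative check only.

RDMA_SYSFS_ROOT = Path("/sys/class/infiniband")

def report_link_layers() -> None:
    if not RDMA_SYSFS_ROOT.exists():
        print("No RDMA devices found under", RDMA_SYSFS_ROOT)
        return
    for device in sorted(RDMA_SYSFS_ROOT.iterdir()):
        ports_dir = device / "ports"
        if not ports_dir.is_dir():
            continue
        for port in sorted(ports_dir.iterdir()):
            link_layer_file = port / "link_layer"
            if link_layer_file.exists():
                link_layer = link_layer_file.read_text().strip()
                print(f"{device.name} port {port.name}: {link_layer}")

if __name__ == "__main__":
    report_link_layers()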

Any system that aims to implement switched fabric topologies must be qualified to operate at the highest current data rates and beyond. Make sure you qualify your most advanced designs using the complete set of system analysis tools from Cadence. Only Cadence offers a comprehensive set of circuit, IC, and PCB design tools for any application and any level of complexity. Cadence PCB design products also integrate with a multiphysics field solver for thermal analysis, including verification of thermally sensitive chip and package designs.

Subscribe to our newsletter for the latest updates. If you’re looking to learn more about how Cadence has the solution for you, talk to our team of experts.
