Semiconductor Engineering Editor in Chief Ed Sperling recently sat down with Suresh Andani, Senior Director, Product Marketing and Business Development at Rambus, to discuss the evolution of PCIe and its latest iteration: PCIe 5.
As Andani notes, PCIe 5 and subsequent iterations of the PCIe standard will continue to be one of the “key interfaces” that enable high-speed computing and processing in the data center.
“The base nodes in data centers are servers. In these servers, you will typically find the central processing unit (CPU), whether it is Xeon from Intel, EPYC from AMD or P9/P10 from IBM,” he explains.
“PCIe basically connects the SmartNIC or the NIC from the top-of-rack switch to the CPU. It also connects the CPU to accelerators. In addition, more storage is moving away from SAS/SATA towards non-volatile memory express (NVMe) over PCIe. There are also a number of other interfaces moving towards PCI Express and migrating to PCIe 5.”
According to Andani, with the amount of data exploding and high-performance computing (HPC) workloads increasing, PCIe 4 – even at x16 widths – is no longer sufficient, especially for applications that are running in the cloud.
“Networking in the cloud is also moving from 100GbE to 400GbE for which you need PCIe 5. The bandwidth enabled by PCIe 4 is simply not enough,” he adds.
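A quick back-of-the-envelope calculation (an illustrative sketch, not part of the interview) shows why a 400GbE NIC outruns a PCIe 4 x16 link. The lane rates and 128b/130b encoding are standard PCIe figures; ignoring all other protocol overhead is a simplification:

```python
# Rough sanity check: can a PCIe 4 or PCIe 5 x16 link feed a 400GbE NIC?
# Assumptions (simplifications, not interview figures): 128b/130b encoding,
# no packet/protocol overhead beyond encoding.
ENCODING = 128 / 130   # PCIe 3.0 and later use 128b/130b encoding
LANES = 16             # x16 slot

def pcie_gb_per_s_per_direction(rate_gt_s, lanes=LANES):
    """Approximate usable bandwidth in GB/s, one direction."""
    return rate_gt_s * ENCODING / 8 * lanes   # GT/s -> GB/s per lane, times lane count

nic_400gbe_gb_s = 400 / 8   # 400 Gb/s Ethernet is roughly 50 GB/s per direction

for gen, rate in [("PCIe 4.0", 16.0), ("PCIe 5.0", 32.0)]:
    bw = pcie_gb_per_s_per_direction(rate)
    verdict = "covers" if bw >= nic_400gbe_gb_s else "falls short of"
    print(f"{gen} x16: ~{bw:.1f} GB/s per direction, {verdict} ~{nic_400gbe_gb_s:.0f} GB/s for 400GbE")
```

Under these assumptions, a PCIe 4 x16 link delivers roughly 31.5 GB/s per direction, well short of the roughly 50 GB/s a 400GbE port can demand, while a PCIe 5 x16 link delivers about 63 GB/s.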
As Andani observes, PCIe has steadily evolved from Gen 1 to Gen 5, with the Gen 1 specification, introduced in 2002, running at 2.5 gigatransfers per second (GT/s).
“2.5 GT/s was sufficient for the types of workloads that ran in those early days. PCIe 5 runs at 32 GT/s and its successor, PCIe 6, was recently announced,” he explains. “It is important to note that between 2010 and 2017, SerDes I/O technology quadrupled, while the PCIe specifications stagnated at PCIe 3 for almost seven years. This made PCIe a system bottleneck.”
To avoid protracted refresh cycles and reduce system bottlenecks, PCI-SIG has committed to a steady two-year cadence for releasing future PCIe specifications.
“PCIe 5 can take care of networking bandwidth all the way up to 400 gigabit Ethernet, solving a big problem that persisted through PCIe 4,” he states. “The PCIe 5 spec was finalized and released in May 2019 and the development of the PCIe 6 spec was announced in June 2019. It is expected that PCIe 6 will be finalized sometime in the 2021 timeframe. With PCIe moving at a two-year cadence, the interface will no longer be a system bottleneck.”
Andani also notes that high-performance workloads, such as genomics, high-frequency trading and video transcoding, are all increasing in sophistication and demanding ever-more parallel processing.
“The parallel processing that’s needed for all these datasets requires heterogeneous computing. Specifically, it requires massively parallel architectures like GPUs or FPGAs, which is why a lot of these workloads are being offloaded from the main CPU to a co-processor (which some people refer to as an accelerator), whether they are GPUs, FPGAs, or even ASICs. In turn, heterogeneous computing with accelerators is pushing high bandwidth demand for PCIe, the key interface between the CPU and the accelerators,” he adds.
To better understand the evolution of PCIe, Andani presents a chart that compares the lane speeds of PCIe 4 and PCIe 5. PCIe 4 tops out at 16 GT/s per lane, while PCIe 5 doubles that speed to 32 GT/s per lane.
“If you look at the chart, you can see the full duplex aggregate bandwidth in gigabytes per second in x1, x2, x4, x8 and x16 configurations.
“So, the maximum aggregate bandwidth that PCIe 4 was able to provide was 64 gigabytes per second (GB/s) full duplex. PCIe 5 doubles that to 128 GB/s, a speed increase that is very much needed in hyperscale data center architectures.”
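The chart itself is not reproduced here, but its aggregate full-duplex figures can be approximated with the common rule of thumb that GT/s divided by eight gives roughly GB/s per lane per direction. This sketch, like the interview's rounded numbers, ignores 128b/130b encoding overhead:

```python
# Approximate reconstruction of the bandwidth chart described above.
# Uses the rounded rule of thumb (GT/s / 8 = GB/s per lane per direction)
# and doubles for full duplex; 128b/130b encoding overhead is ignored,
# which is why the numbers match the commonly quoted 64/128 GB/s figures.
RATES_GT_S = {"PCIe 4.0": 16, "PCIe 5.0": 32}   # transfer rate per lane

for lanes in (1, 2, 4, 8, 16):
    cells = []
    for gen, rate in RATES_GT_S.items():
        duplex_gb_s = rate / 8 * lanes * 2   # per-direction GB/s, doubled for duplex
        cells.append(f"{gen}: {duplex_gb_s:.0f} GB/s")
    print(f"x{lanes:>2}  " + "   ".join(cells))
```

The x16 rows land on the 64 GB/s and 128 GB/s figures Andani cites.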
Indeed, sophisticated workloads continue to evolve along with an ever-increasing amount of data. This means bandwidth will need to double at a steady cadence, which is precisely why PCIe 6 will scale up to 64 GT/s. However, Andani emphasizes that the demand for bandwidth is insatiable.
“You will always need faster and faster interfaces as the workloads become more sophisticated,” he concludes.