“Modern CPUs rely on the following primary interconnect types: memory interconnects, primarily supported by DDR4 today; high-speed chip-to-chip cache-coherent interconnects, typically supported by proprietary standards; and low-speed links such as USB and SATA for low-level management and configuration,” Mathur explained.
“For almost everything else, there is PCIe. Easily scalable via multi-lane links, PCIe has always been backwards compatible and extensively supported by all modern OSs, software and drivers. Not surprisingly, PCIe has been widely deployed throughout the data center, enterprise and client PC markets.”
According to Mathur, the advent of cloud computing has demanded that data centers continuously increase compute power by adding faster CPUs with ever-increasing numbers of cores. Data centers have also adopted newer processing techniques with GPUs and accelerators to support emerging machine learning, artificial intelligence and deep learning workloads.
“These kinds of use-cases require higher performance processing [paired] with higher performance storage, all with minimal latency – a paradigm that demands interconnects to optimally feed processing capabilities,” he elaborated.
As Mathur notes, PCIe 1.0 supported a transfer rate of 2.5 Gbps per lane at launch. Subsequent upgrades, released approximately every 3-4 years, doubled bandwidth each time (PCIe 2.0 at 5 Gbps in 2006, and PCIe 3.0 at 8 Gbps in 2010). However, this cadence was abandoned following the rollout of PCIe 3.0. In fact, there was a 7-year gap before speeds reached 16 Gbps, with the PCIe 4.0 specification publicly released in 2017.
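The per-lane doubling described above can be sketched with a short calculation. Note that the headline rates are raw line rates; usable bandwidth also depends on line encoding, which changed from 8b/10b (Gen 1-2) to the more efficient 128b/130b (Gen 3 onward). The figures below are approximations and ignore protocol overhead.

```python
# Rough per-lane, per-direction bandwidth for each PCIe generation.
# Line rates are from the published specs; encoding efficiency reflects
# 8b/10b (Gen 1-2) vs. 128b/130b (Gen 3+). Protocol overhead is ignored.
GENERATIONS = {
    # name: (line rate in GT/s, encoding efficiency)
    "PCIe 1.0": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
    "PCIe 4.0": (16.0, 128 / 130),
}

def lane_bandwidth_gbps(line_rate_gt, efficiency):
    """Usable data rate per lane in Gb/s after encoding overhead."""
    return line_rate_gt * efficiency

for name, (rate, eff) in GENERATIONS.items():
    gbps = lane_bandwidth_gbps(rate, eff)
    print(f"{name}: {gbps:.2f} Gb/s per lane (~{gbps / 8:.2f} GB/s)")
```

This also shows why PCIe 3.0's jump from 5 to 8 GT/s still roughly doubled usable bandwidth: the switch to 128b/130b encoding cut overhead from 20% to under 2%.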
“Multiple CPU, storage, accelerator and network adaptor vendors have already developed PCIe 4.0 compatible products in anticipation of broad deployment,” he continued. “This ramp is imminent with the public release of the specification and extensive ecosystem support.”
According to Mathur, there are two immediate technologies that will benefit from the proliferation of PCIe 4.0: the adoption of next-generation NVMe storage technologies and the deployment of GPU/FPGA accelerators in the data center.
NVMe is a non-volatile memory interface standard that connects SSDs over PCIe. Emerging NVMe storage technologies are saturating existing PCIe 3.0 interfaces and will only achieve optimal performance with the increased bandwidth offered by PCIe 4.0. Meanwhile, accelerator hardware such as GPUs and FPGAs, which are already resorting to proprietary interconnects to address bandwidth bottlenecks, will be able to leverage a high-performance industry-standard interface, resulting in seamless compatibility across platforms.
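To put the saturation claim in concrete terms, a minimal sketch of the link math for a typical 4-lane NVMe SSD, assuming 128b/130b encoding and ignoring packet/protocol overhead:

```python
# Approximate usable throughput of a 4-lane NVMe link under PCIe 3.0
# vs. PCIe 4.0. Real-world throughput is lower once TLP headers and
# other protocol overheads are accounted for.
def link_gbps(line_rate_gt, lanes=4, efficiency=128 / 130):
    """Aggregate usable bandwidth in Gb/s for a multi-lane link."""
    return line_rate_gt * efficiency * lanes

gen3 = link_gbps(8.0)    # PCIe 3.0 x4
gen4 = link_gbps(16.0)   # PCIe 4.0 x4
print(f"PCIe 3.0 x4: ~{gen3 / 8:.1f} GB/s")
print(f"PCIe 4.0 x4: ~{gen4 / 8:.1f} GB/s")
```

An x4 link tops out near 4 GB/s on PCIe 3.0, a ceiling fast NAND and emerging storage-class memory can already reach; PCIe 4.0 roughly doubles that headroom without adding lanes.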
Moreover, complementary standards initiatives such as CCIX aim to add cache coherence capability, required for efficient accelerator offload, to the ubiquitous PCIe transport layer.
“PCI-SIG, the standards body that governs PCIe, is accelerating the development and release of the PCIe 5.0 specification, likely to address unabated market demand and the delay in ratifying PCIe 4.0,” Mathur concluded. “PCI-SIG has also developed a cabled technology called OCuLink to connect PCIe devices, thereby enabling new out-of-the-box compute and storage use-cases. Innovations such as these guarantee PCIe’s relevance to compute and storage infrastructure and cement its continuing role in defining the data center of the future.”