The PCI Express® (PCIe®) interface is the critical backbone that moves data at high bandwidth and low latency between compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. With the rapid rise in bandwidth demands of advanced workloads such as AI/ML training, PCIe 6.0 raises the signaling rate to 64 GT/s and introduces some of the biggest changes yet in the standard.
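To put the 64 GT/s figure in context, here is a minimal back-of-the-envelope sketch (not from the source article) of approximate one-direction x16 link bandwidth across recent PCIe generations. The encoding-efficiency values are the commonly cited ones, and FLIT/FEC framing overhead in PCIe 6.0 is ignored, so treat the results as rough upper bounds; all function and variable names are illustrative.

```python
# Back-of-the-envelope PCIe bandwidth per generation (illustrative only).
# Framing/FLIT overhead is ignored, so these are approximate upper bounds.

GENERATIONS = {
    # generation: (transfer rate in GT/s, encoding efficiency)
    "PCIe 3.0": (8,  128 / 130),   # 128b/130b encoding
    "PCIe 4.0": (16, 128 / 130),
    "PCIe 5.0": (32, 128 / 130),
    "PCIe 6.0": (64, 1.0),         # PAM4 + FLIT mode; FEC/CRC overhead not modeled
}

def link_bandwidth_gbytes(rate_gt_s: float, efficiency: float, lanes: int = 16) -> float:
    """Approximate one-direction link bandwidth in GB/s for a given lane count."""
    return rate_gt_s * efficiency * lanes / 8

if __name__ == "__main__":
    for gen, (rate, eff) in GENERATIONS.items():
        print(f"{gen}: {rate} GT/s -> ~{link_bandwidth_gbytes(rate, eff):.0f} GB/s per direction (x16)")
```

Under these assumptions, each generation roughly doubles the usable bandwidth, with PCIe 6.0 landing near 128 GB/s per direction on an x16 link.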
HBM3 Memory: Break Through to Greater Bandwidth
AI/ML’s demand for greater bandwidth is insatiable, driving rapid improvements in every aspect of computing hardware and software. HBM memory is the ideal solution for the high bandwidth requirements of AI/ML training, but it entails additional design considerations given its 2.5D architecture. Now we’re on the verge of a new generation of HBM that will raise memory bandwidth and capacity to new heights. Designers can realize new levels of performance with the HBM3-ready memory subsystem solution from Rambus.
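As a rough illustration of why HBM suits bandwidth-hungry AI/ML workloads, the sketch below estimates per-stack bandwidth from the 1024-bit HBM interface and a per-pin data rate. The per-pin rates shown are commonly quoted figures, not from the source; check device datasheets for real parts, and note that the helper name is an assumption.

```python
# Rough HBM stack bandwidth estimate (illustrative; consult datasheets for real devices).
# bandwidth (GB/s) = per-pin data rate (Gb/s) * interface width (bits) / 8

HBM_INTERFACE_WIDTH_BITS = 1024  # 1024-bit wide interface per HBM stack

def stack_bandwidth_gbs(pin_rate_gbps: float, width_bits: int = HBM_INTERFACE_WIDTH_BITS) -> float:
    """Approximate peak bandwidth of one HBM stack in GB/s."""
    return pin_rate_gbps * width_bits / 8

if __name__ == "__main__":
    for name, pin_rate in [("HBM2E @ 3.6 Gb/s/pin", 3.6), ("HBM3 @ 6.4 Gb/s/pin", 6.4)]:
        print(f"{name}: ~{stack_bandwidth_gbs(pin_rate):.0f} GB/s per stack")
```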
MIPI Drives Performance for Next-Generation Displays
MIPI® Alliance technology has helped enable the dramatic growth of the mobile phone market, and the function and capabilities of MIPI interface solutions have grown dramatically as well. MIPI DSI-2℠ has become the leading display interface across a growing range of products including smartphones, AR/VR, IoT appliances and ADAS/autonomous vehicles. As the application space has expanded, so too have the performance requirements. Learn how the MIPI DSI-2 interface and VESA® DSC visually lossless compression technologies can meet the challenges of next-generation displays.
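A quick payload estimate (not from the source article) shows why compression matters for next-generation displays: the example below compares the raw video payload of an assumed 4K, 120 Hz, 10-bit panel with the same stream under a commonly used ~3:1 VESA DSC configuration. Blanking and protocol overhead are ignored, and the numbers and helper name are illustrative assumptions.

```python
# Why visually lossless compression matters for high-end displays (illustrative only).
# Active-pixel payload = width * height * refresh rate * bits per pixel.

def display_payload_gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int) -> float:
    """Active-pixel video payload in Gb/s, ignoring blanking and protocol overhead."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

if __name__ == "__main__":
    raw = display_payload_gbps(3840, 2160, 120, 30)  # assumed 4K, 120 Hz, 10-bit RGB panel
    dsc = raw / 3                                    # VESA DSC commonly configured around 3:1
    print(f"Uncompressed: ~{raw:.1f} Gb/s")
    print(f"With ~3:1 DSC: ~{dsc:.1f} Gb/s")
```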
HBM2E Raises the Bar for Memory Bandwidth
Data Center Evolution: Accelerating Computing with PCI Express 5.0
The PCI Express® (PCIe) interface is the critical backbone that moves data at high bandwidth between various compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. The rise of cloud-based computing and hyperscale data centers, along with high-bandwidth applications like artificial intelligence (AI) and machine learning (ML), demands the new level of performance that PCI Express 5.0 delivers.
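As a concrete sanity check (my own illustration, not from the source), the sketch below asks whether a 400 Gigabit Ethernet NIC can be fed by a PCIe 5.0 x16 link, using the 32 GT/s rate and 128b/130b encoding and ignoring protocol overhead. The figures and names are illustrative assumptions.

```python
# Sanity check (illustrative): feeding a 400GbE NIC from a PCIe 5.0 x16 link.
# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding; protocol overhead ignored.

PCIE5_RATE_GT_S = 32
ENCODING_EFFICIENCY = 128 / 130

def pcie5_link_gbytes(lanes: int = 16) -> float:
    """Approximate one-direction PCIe 5.0 link bandwidth in GB/s."""
    return PCIE5_RATE_GT_S * ENCODING_EFFICIENCY * lanes / 8

if __name__ == "__main__":
    nic_gbytes = 400 / 8           # 400 Gb/s Ethernet payload expressed in GB/s
    link_gbytes = pcie5_link_gbytes()
    print(f"PCIe 5.0 x16: ~{link_gbytes:.0f} GB/s; 400GbE needs ~{nic_gbytes:.0f} GB/s "
          f"-> headroom ~{link_gbytes - nic_gbytes:.0f} GB/s")
```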
Data Center Evolution: From Pluggable to Co-Packaged Optics
Data traffic is growing at an exponential rate, driven by applications including 5G, AI/ML, video streaming, online gaming, ADAS and more. Handling this traffic are hyperscale data centers, which now number over 500 worldwide with a third as many in the pipeline. To scale computing resources to the growing data demands, hyperscale data centers deploy fiber optics throughout to provide the needed bandwidth and manage the power. Pluggable small form-factor optical modules have scaled optical connections from 50G to 400G over the past two decades. With the evolution to 800G Ethernet and beyond, new architectures including co-packaged optics can deliver the desired performance while keeping power consumption within the required envelope.
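For a sense of the lane math behind that scaling, the sketch below shows how per-lane rates compose into 400G and 800G Ethernet ports. The lane-rate combinations are common configurations rather than figures from the source, real modules follow specific MSAs, and the function name is an illustrative assumption.

```python
# Optical port lane math (illustrative): how per-lane rates compose into 400G/800G ports.

def lanes_needed(port_rate_gbps: int, lane_rate_gbps: int) -> int:
    """Number of lanes required for a port, assuming the rate divides evenly."""
    return port_rate_gbps // lane_rate_gbps

if __name__ == "__main__":
    configs = [
        ("400G over 50G lanes", 400, 50),    # e.g., 8 x 50G PAM4
        ("400G over 100G lanes", 400, 100),  # e.g., 4 x 100G
        ("800G over 100G lanes", 800, 100),  # e.g., 8 x 100G
    ]
    for name, port, lane in configs:
        print(f"{name}: {lanes_needed(port, lane)} lanes")
```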