Data traffic is growing at an exponential rate, driven by applications including 5G, AI/ML, video streaming, online gaming, ADAS and more. Handling this traffic are hyperscale data centers, which now number more than 500 worldwide, with a third as many again in the pipeline. To scale computing resources to growing data demands, hyperscale data centers deploy fiber optics throughout, providing the needed bandwidth while managing power. Pluggable small form-factor optical modules have scaled optical connections from 50G to 400G over the past two decades. With the evolution to 800G Ethernet and beyond, new architectures, including co-packaged optics, can deliver the desired performance while keeping power consumption within the required envelope.
Interface IP
From Data Center to End Device: AI/ML Inferencing with GDDR6
Created to support 3D gaming on consoles and PCs, GDDR memory packs performance that makes it an ideal solution for AI/ML inferencing. As inferencing migrates from the heart of the data center to the network edge, and ultimately to a broad range of AI-powered IoT devices, GDDR memory’s combination of high bandwidth, low latency, power efficiency and suitability for high-volume applications will be increasingly important. The latest iteration of the standard, GDDR6 memory, pushes data rates to 18 gigabits per second per pin and device bandwidths to 72 gigabytes per second.
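To make the bandwidth arithmetic behind those figures concrete, here is a minimal Python sketch. The 32-bit device interface width is a standard GDDR6 configuration assumed here for illustration, not stated in the abstract above.

```python
# A minimal sketch of the bandwidth arithmetic behind the figures above.
# Assumption: a 32-bit device interface (two 16-bit channels), the standard
# GDDR6 configuration; the 18 Gb/s per-pin rate is the one cited in the text.

def device_bandwidth_gbytes(per_pin_rate_gbps: float, interface_width_bits: int) -> float:
    """Peak device bandwidth in gigabytes per second."""
    return per_pin_rate_gbps * interface_width_bits / 8  # 8 bits per byte

# 18 Gb/s/pin x 32 pins = 576 Gb/s = 72 GB/s, matching the abstract.
print(device_bandwidth_gbytes(18, 32))  # -> 72.0
```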
HBM2E and GDDR6: Memory Solutions for AI
Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000, driving rapid improvements in every aspect of computing hardware and software. Meanwhile, AI inference is being deployed across the network edge and in a broad spectrum of IoT devices, including automotive/ADAS systems. Training and inference have unique requirements that can be served by tailored memory solutions. Learn how HBM2E and GDDR6 provide the high performance demanded by the next wave of AI applications.
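As a rough illustration of why the two memories suit different workloads, the sketch below compares the peak per-device bandwidth math. The HBM2E figures (a 1024-bit stack interface at roughly 3.2 Gb/s per pin) are typical published values assumed here, not taken from this text; the GDDR6 figures match the previous abstract.

```python
# A hedged comparison of the peak-bandwidth math behind the two memories.
# Assumptions: the HBM2E figures (1024-bit stack interface, ~3.2 Gb/s per pin)
# are typical published values, not taken from this text; the GDDR6 figures
# match the previous abstract.

MEMORIES = {
    # name: (per-pin rate in Gb/s, interface width in bits)
    "HBM2E": (3.2, 1024),  # stacked device: wide interface, modest per-pin rate
    "GDDR6": (18.0, 32),   # discrete device: narrow interface, fast per-pin rate
}

for name, (rate_gbps, width_bits) in MEMORIES.items():
    bandwidth_gbytes = rate_gbps * width_bits / 8
    print(f"{name}: {bandwidth_gbytes:.1f} GB/s per device")

# HBM2E: 409.6 GB/s per device -- suits bandwidth-hungry training
# GDDR6: 72.0 GB/s per device  -- suits cost- and power-sensitive inference
```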
The Rambus GDDR6 PHY IP Core
With GDDR6 PHYs providing maximum bandwidths of up to 64 GB/s, it is critical for ASIC designers to ensure that devices and systems aren’t compromised by signal integrity issues. This is precisely why the Rambus GDDR6 PHY engineering team makes extensive use of modeling and simulation tools, and provides highly programmable circuits, debug interfaces and utilities. The team comprises in-house experts who participate in all stages of the GDDR6 PHY design: package and PCB design experts and layout gurus, as well as signal integrity and power integrity specialists. The PHY itself, available on leading FinFET process nodes, leverages our system-aware design methodology to facilitate flexible product integration. Specifically, we provide full system signal and power integrity analysis to optimize performance and chip layout.
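For a flavor of the questions such signal integrity analysis answers, here is a deliberately simplified first-order budget in Python. The swing and loss numbers are illustrative assumptions only, not Rambus PHY specifications, and real analysis uses full channel models rather than a single attenuation figure.

```python
# A deliberately simplified first-order link budget, giving a flavor of the
# signal integrity questions the full analysis answers rigorously.
# Assumptions: the 1.2 V transmit swing and 6 dB channel loss are illustrative
# numbers only, not Rambus GDDR6 PHY specifications.

def received_swing_mv(tx_swing_mv: float, channel_loss_db: float) -> float:
    """Estimate signal swing after channel attenuation (single-tone model)."""
    return tx_swing_mv * 10 ** (-channel_loss_db / 20)

# 6 dB of loss roughly halves the voltage swing; equalization in the PHY
# must recover the remaining eye margin.
print(f"{received_swing_mv(1200, 6):.0f} mV")  # ~601 mV
```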
SerDes Signal Integrity Challenges at 28Gbps and Beyond
This paper discusses the challenges of designing high-speed SerDes at 28 Gbps and beyond, as well as the importance of detailed modeling and the design of highly programmable circuits and debug interfaces.
Analysis, Modeling and Characterization of Multi-Protocol High-Speed Serial Links
Improved analysis, modeling, characterization and correlation methods are presented for multi-protocol high-speed transceivers that use T-coils to extend transmitter and receiver bandwidth, transmitter FIR filters, and receiver CTLE and DFE equalizers. The key circuit blocks are measured and modeled using IBIS-AMI models, and overall system performance, including eye diagrams and BER curves, correlates well with on-die measurements. The paper discusses the procedure used to model, measure and verify that the high-speed transceivers meet standard specifications such as return loss, jitter tolerance and BER, and that the equalizer adaptation and CDR converge to optimize margins across various channels.
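As a minimal sketch of one equalization stage named above, the Python below applies a 3-tap transmitter FIR (pre-emphasis) to an NRZ symbol stream. The tap weights and bit pattern are illustrative assumptions, not values from the measured transceivers; a real flow would characterize this block with IBIS-AMI models end to end.

```python
# A minimal sketch of the transmitter FIR (pre-emphasis) stage named above.
# Assumptions: the 3-tap weights and the toy bit pattern are illustrative;
# a real flow would characterize this block with IBIS-AMI models instead.
import numpy as np

def tx_fir(symbols: np.ndarray, taps=(-0.1, 0.8, -0.1)) -> np.ndarray:
    """Apply a 3-tap (pre, main, post) feed-forward filter at the transmitter."""
    return np.convolve(symbols, taps, mode="same")

bits = np.array([1, 0, 1, 1, 0, 0, 1])
symbols = 2.0 * bits - 1.0  # map NRZ bits {0,1} to symbols {-1,+1}
shaped = tx_fir(symbols)

# The negative pre/post taps boost energy at transitions relative to long
# runs, pre-compensating the low-pass channel so the received eye opens.
print(np.round(shaped, 2))
```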