CXL 3.0 introduces several compelling new features to address the rapidly evolving demands of future data centers. A new device type, CXL Multi-Headed Devices, has been introduced to support simultaneous connection to multiple hosts. CXL Dynamic Capacity Device (DCD) capability simplifies migration of memory resources between hosts. New CXL Fabrics offer substantial scale and flexibility in architectural design. Danny Moore will discuss these important new developments in the CXL standard.
LPDDR5X: Delivering High Bandwidth and Power Efficiency
The bandwidth and low power characteristics of LPDDR make it an increasingly attractive choice of memory for applications in IoT, automotive, and edge computing. LPDDR5X takes performance to the next level with a data rate of up to 8.5 Gbps. Join Vinitha Seevaratnam to learn which applications can benefit from using LPDDR memory.
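The 8.5 Gbps pin rate above translates directly into channel bandwidth. A minimal back-of-envelope sketch (channel widths here are typical LPDDR5X configurations, not figures from the abstract):

```python
# LPDDR5X channel-bandwidth arithmetic. The 8.5 Gb/s per-pin data rate is
# from the abstract; the x16 and x64 widths are illustrative assumptions
# based on common LPDDR5X channel configurations.
PIN_RATE_GBPS = 8.5  # Gb/s per pin

def channel_bandwidth_gbs(width_bits):
    """Peak bandwidth of an LPDDR5X interface of the given width, in GB/s."""
    return PIN_RATE_GBPS * width_bits / 8  # 8 bits per byte

print(channel_bandwidth_gbs(16))  # 17.0 GB/s for a x16 channel
print(channel_bandwidth_gbs(64))  # 68.0 GB/s for a x64 interface
```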
System Level Design Considerations for PCIe 6.0
PCIe 6.0 offers many new and exciting features including a 64 GT/s data rate, PAM4 signaling, forward error correction, and a low power L0p mode. In this presentation, Lou Ternullo will walk you through the system design considerations you will need to know before getting started on your PCIe 6.0 design, including how to get the most out of your PCIe 6.0 devices.
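The 64 GT/s rate sets the raw link bandwidth. A rough sketch of what that means per link width (this ignores FLIT and FEC overhead, so deliverable payload throughput is somewhat lower):

```python
# Raw PCIe 6.0 throughput estimate. 64 GT/s per lane is from the spec; with
# PAM4, each transfer carries 2 bits, and 64 GT/s corresponds to 64 Gb/s of
# raw data per lane per direction. Protocol overhead is ignored here.
LANE_RATE_GBPS = 64  # raw Gb/s per lane per direction (PCIe 6.0)

def link_bandwidth_gbs(lanes):
    """Raw per-direction bandwidth of a PCIe 6.0 link, in GB/s."""
    return LANE_RATE_GBPS * lanes / 8  # 8 bits per byte

print(link_bandwidth_gbs(4))   # 32.0 GB/s for an x4 link
print(link_bandwidth_gbs(16))  # 128.0 GB/s for an x16 link
```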
Leveraging VESA Video Compression & MIPI DSI-2 for High-Performance Displays
Visually lossless video compression is essential for handling the growing bandwidth requirements of cutting-edge displays with higher resolutions, faster refresh rates, and greater pixel depths. This presentation will show designers how they can develop cutting-edge display products without compromising on display quality, battery life, or cost using a combination of VESA video compression and MIPI DSI-2 technology.
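To see why compression matters here, consider the raw pixel-data rate of a high-end panel. The sketch below uses an illustrative 4K/120 Hz/30 bpp display (not a figure from the abstract) and a 3:1 ratio, a typical visually lossless setting for VESA DSC:

```python
# Back-of-envelope display-link arithmetic. The resolution, refresh rate, and
# bit depth are illustrative assumptions; 3:1 is a commonly cited visually
# lossless compression ratio for VESA DSC. Blanking overhead is ignored.
def link_rate_gbps(width, height, refresh_hz, bits_per_pixel, compression=1.0):
    """Raw pixel-data rate in Gb/s, optionally after compression."""
    return width * height * refresh_hz * bits_per_pixel / compression / 1e9

uncompressed = link_rate_gbps(3840, 2160, 120, 30)
compressed = link_rate_gbps(3840, 2160, 120, 30, compression=3.0)
print(round(uncompressed, 1))  # ~29.9 Gb/s uncompressed
print(round(compressed, 1))    # ~10.0 Gb/s with 3:1 compression
```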
Meeting the Needs of Generative AI Training with HBM3
Generative AI training models are growing in both size and sophistication at a lightning pace, requiring more and more bandwidth. With its unique 2.5D/3D architecture, HBM3 can deliver terabytes per second of bandwidth at a system level. Join Frank Ferro to hear how HBM helps designers address the needs of state-of-the-art AI training models.
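The terabytes-per-second claim follows from simple arithmetic on the HBM3 interface. A minimal sketch, using the JEDEC HBM3 per-pin rate and interface width (the stack count is an illustrative assumption):

```python
# HBM3 bandwidth estimate. 6.4 Gb/s per pin and the 1024-bit stack interface
# are from the JEDEC HBM3 specification; the six-stack system below is an
# illustrative assumption, not a figure from the abstract.
PIN_RATE_GBPS = 6.4      # Gb/s per pin (HBM3 max)
INTERFACE_WIDTH = 1024   # bits per HBM3 stack

def stack_bandwidth_gbs():
    """Peak bandwidth of one HBM3 stack, in GB/s."""
    return PIN_RATE_GBPS * INTERFACE_WIDTH / 8

def system_bandwidth_tbs(num_stacks):
    """Aggregate bandwidth across stacks, in TB/s."""
    return stack_bandwidth_gbs() * num_stacks / 1000

print(stack_bandwidth_gbs())     # 819.2 GB/s per stack
print(system_bandwidth_tbs(6))   # ~4.9 TB/s with six stacks
```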
Powering AI/ML Inference with GDDR6 Memory
GDDR6 memory offers an impressive combination of high bandwidth, capacity, low latency, and power efficiency. Frank Ferro will discuss how these features make it the ideal memory choice for AI/ML inference at the edge and highlight some of the key design considerations you need to keep in mind when implementing GDDR6 memory at ultra-high data rates.
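The bandwidth side of that combination is easy to quantify. A sketch of GDDR6 device and board bandwidth (16 Gb/s per pin and the 32-bit device interface are standard GDDR6 figures; the device count is an illustrative assumption for an edge-inference card):

```python
# GDDR6 bandwidth arithmetic. 16 Gb/s per pin is a common high-end GDDR6
# data rate and 32 bits is the standard device interface width; the
# eight-device configuration is an illustrative assumption.
PIN_RATE_GBPS = 16   # Gb/s per pin
DEVICE_WIDTH = 32    # bits per GDDR6 device

def device_bandwidth_gbs():
    """Peak bandwidth of a single GDDR6 device, in GB/s."""
    return PIN_RATE_GBPS * DEVICE_WIDTH / 8

def board_bandwidth_gbs(devices):
    """Aggregate bandwidth across multiple GDDR6 devices, in GB/s."""
    return device_bandwidth_gbs() * devices

print(device_bandwidth_gbs())   # 64.0 GB/s per device
print(board_bandwidth_gbs(8))   # 512.0 GB/s with eight devices (256-bit bus)
```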