LPDDR5 Delivers High Bandwidth for a Growing Range of Applications
Initially designed for mobile phones and laptops, LPDDR has become an increasingly attractive memory choice for applications in IoT, automotive, edge computing and the data center thanks to its bandwidth and low-power characteristics. Fifth-generation LPDDR5 raises data rates to 6.4 Gbps and bandwidth to 25.6 GB/s for a x32 DRAM device. In this session, Rambus and its partners OpenFive and Avery Design Systems will discuss their high-performance, high-quality, configurable LPDDR5 solution.
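The 25.6 GB/s figure is simply the per-pin data rate multiplied by the device width, with no assumptions beyond the numbers above:

\[ \frac{6.4\ \text{Gbps} \times 32\ \text{bits}}{8\ \text{bits/byte}} = 25.6\ \text{GB/s} \]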
Accelerating Data Interconnects with PCI Express™ 6.0 & 5.0 Interface IP
The latest generation of PCI Express, PCIe™ 6.0, advances performance to 64 GT/s in support of advanced workloads and networking. In this presentation, interface technology expert Arjun Bangre will discuss the changes implemented in PCI Express 6.0, such as PAM4 signaling and low-latency forward error correction (FEC). He will also contrast PCIe 6.0 and 5.0 and explain how Rambus can support the PCIe interface your next design requires.
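As a rough point of reference (the x16 link width here is an illustrative assumption, not from the abstract), PAM4 encodes two bits per symbol, so PCIe 6.0 doubles the transfer rate of PCIe 5.0 at the same symbol rate, and the raw per-direction bandwidth of a x16 link works out to:

\[ 32\ \text{GBaud} \times 2\ \tfrac{\text{bits}}{\text{symbol}} = 64\ \text{GT/s}, \qquad \frac{64\ \text{GT/s} \times 16\ \text{lanes}}{8\ \text{bits/byte}} = 128\ \text{GB/s (raw, per direction)} \]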
GDDR6 Memory Enables High-Performance Inferencing
A rapid rise in the size and sophistication of inferencing models has necessitated increasingly powerful hardware deployed at the network edge and in endpoint devices. Keeping these inferencing processors and accelerators fed with data requires a state-of-the-art memory solution that delivers extremely high bandwidth. Frank Ferro will discuss the design and implementation considerations of GDDR6 memory subsystems to address the bandwidth needs of these next-generation inferencing engines.
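For a sense of the bandwidth involved (these figures are illustrative assumptions, not from the abstract), a single x32 GDDR6 device running at 16 Gb/s per pin would deliver on the order of:

\[ \frac{16\ \text{Gb/s} \times 32\ \text{pins}}{8\ \text{bits/byte}} = 64\ \text{GB/s} \]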
Implementing CXL™ 2.0 Interconnect Solutions
Compute Express Link™ (CXL) has evolved rapidly since its launch in 2019 and is slated for debut in the next generation of server platforms coming later this year. While it builds on the same physical layer as PCI Express, CXL implements unique features at the controller level to enable memory cache coherency between a host and multiple types of connected devices, including smart NICs, accelerators and memory expansion devices.
Memory Bandwidth for AI/ML Races Higher with HBM3
With the insatiable need for higher bandwidth in state-of-the-art AI/ML training and HPC, the HBM standard has been on a rapid pace of improvement. The newly standardized HBM3 generation doubles the data rate to 6.4 Gb/s, offering up to 819 GB/s of memory bandwidth between an accelerator and a single HBM3 DRAM device. Memory interface technology expert Frank Ferro will discuss how the Rambus 8.4 Gb/s HBM3 Memory Subsystem can provide the headroom and scalability needed for implementing state-of-the-art HBM designs.
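The 819 GB/s figure follows from the 1024-bit-wide interface defined by the HBM standard:

\[ \frac{6.4\ \text{Gb/s} \times 1024\ \text{bits}}{8\ \text{bits/byte}} = 819.2\ \text{GB/s} \]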