
HBM3 Controller

The Rambus HBM3 controller core is designed for applications requiring high memory bandwidth and low latency, including AI/ML, HPC, advanced data center workloads, and graphics. Combined with the integrated HBM3 PHY, it provides a complete HBM3 memory subsystem.

How the HBM3 Memory Subsystem works

HBM3 is a high-performance memory that features reduced power consumption and a small form factor. It combines 2.5D packaging with a wider interface at a lower clock speed (as compared to GDDR6) to deliver higher overall throughput with greater bandwidth-per-watt efficiency for AI/ML and high-performance computing (HPC) applications.
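The "wider interface at a lower clock speed" trade-off can be illustrated with a quick calculation. The GDDR6 and HBM3 figures below are illustrative assumptions chosen for the comparison, not values from this page:

```python
# Illustrative sketch of the wide-and-slow vs. narrow-and-fast trade-off.
# Assumed figures (not from the text): a single GDDR6 device with a
# 32-bit interface at 16 Gbps/pin, versus an HBM3 device with a
# 1024-bit interface at 6.4 Gbps/pin.
def peak_bandwidth_gbytes(width_bits: int, rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: interface width x per-pin data rate / 8."""
    return width_bits * rate_gbps / 8

gddr6_bw = peak_bandwidth_gbytes(32, 16.0)   # narrow, fast interface
hbm3_bw = peak_bandwidth_gbytes(1024, 6.4)   # wide, slower interface

print(f"GDDR6 device: {gddr6_bw:.1f} GB/s")  # 64.0 GB/s
print(f"HBM3 device:  {hbm3_bw:.1f} GB/s")   # 819.2 GB/s
```

Despite the much lower per-pin rate, the far wider interface gives the HBM3 device substantially higher aggregate bandwidth per device.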

The Rambus HBM3 memory subsystem supports data rates up to 8.4 Gbps per data pin. The interface features 16 independent channels, each containing 64 bits for a total data width of 1024 bits. At maximum data rate, this provides a total interface bandwidth of 1075.2 GB/s.
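The interface bandwidth figure above follows directly from the stated channel count, channel width, and per-pin data rate. A minimal sketch of the arithmetic:

```python
# Peak-bandwidth arithmetic for the HBM3 interface described above:
# 16 independent channels x 64 bits = 1024-bit interface,
# at up to 8.4 Gbps per data pin.
channels = 16
bits_per_channel = 64
max_data_rate_gbps = 8.4  # per data pin

interface_width_bits = channels * bits_per_channel        # 1024 bits
peak_bandwidth_gbps = interface_width_bits * max_data_rate_gbps
peak_bandwidth_gbytes = peak_bandwidth_gbps / 8           # gigabytes/s

print(f"Interface width: {interface_width_bits} bits")
print(f"Peak bandwidth:  {peak_bandwidth_gbytes:.1f} GB/s")  # 1075.2 GB/s
```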

The interface is designed for a 2.5D system with an interposer used for routing signals between the 3D DRAM stack and the memory subsystem on the SoC. This combination of signal density and stacked form factor requires special design consideration. In order to enable easy implementation and improved flexibility of design, Rambus performs complete signal and power integrity analysis on the entire 2.5D system to ensure that all signal, power and thermal requirements are met. 

HBM3 Memory Subsystem Example

The Rambus HBM3 memory subsystem supports HBM3 memory devices with DRAM stack heights of 2, 4, 8, 12, and 16, and densities of up to 32 Gb. The subsystem maximizes bandwidth and minimizes latency via Look-Ahead command processing.
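The supported stack heights and maximum die density together bound device capacity. A sketch of that arithmetic, assuming every die in the stack is a 32 Gb DRAM (an assumption for illustration; actual device configurations vary):

```python
# Capacity sketch for the supported stack heights, assuming each DRAM
# die in the stack has the maximum 32 Gb density mentioned in the text.
DIE_DENSITY_GBIT = 32

for stack_height in (2, 4, 8, 12, 16):
    capacity_gbit = stack_height * DIE_DENSITY_GBIT
    capacity_gbyte = capacity_gbit // 8
    print(f"{stack_height}-high stack: {capacity_gbit} Gb = {capacity_gbyte} GB")
# The 16-high stack at 32 Gb/die works out to 512 Gb, i.e. 64 GB per device.
```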

The Rambus HBM3 memory subsystem comprises an integrated HBM3 PHY and Controller. Alternatively, these cores can be licensed separately to be paired with 3rd-party HBM3 controller or PHY solutions.

HBM3 Memory: Break Through to Greater Bandwidth

AI/ML’s demands for greater bandwidth are insatiable, driving rapid improvements in every aspect of computing hardware and software. HBM memory is the ideal solution for the high bandwidth requirements of AI/ML training, but it entails additional design considerations given its 2.5D architecture. Now we’re on the verge of a new generation of HBM that will raise memory bandwidth and capacity to new heights. Designers can realize new levels of performance with the HBM3-ready memory subsystem solution from Rambus.

Solution Offerings

Protocol Compatibility

Protocol   Max. Data Rate (Gbps)   Application
HBM3       4.8, 5.6, 6.4, 8.4      AI/ML, HPC, Data Center, Graphics

Memory Systems for AI and Leading-Edge Applications

Thanks to rapid advancements in computing, neural networks are fueling tremendous growth in AI for a broad spectrum of applications. Learn about the memory architectures, and their relative advantages, at the heart of the AI revolution.