HBM2E Controller

The Northwest Logic HBM2E controller core is designed for applications requiring high memory throughput, including performance-intensive workloads in artificial intelligence (AI), high-performance computing (HPC), data center and graphics. Together with the co-verified Rambus HBM2E PHY, it comprises a complete HBM2E memory interface subsystem.

How the HBM2E Interface works

HBM2E is a high-performance memory that features reduced power consumption and a small form factor. It combines 2.5D packaging with a much wider interface running at a lower clock speed (as compared to DDR4) to deliver higher overall throughput and better bandwidth-per-watt efficiency for AI/ML and HPC applications.
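
To put the wide-but-slower tradeoff in rough numbers, here is a minimal sketch. The HBM2 figures (2.0 Gbps per pin, 1024-bit interface) come from this page; the 64-bit DDR4-3200 channel used for comparison is an assumption added for illustration only.

```python
# Rough throughput comparison: wide HBM2 interface vs. a single DDR4 channel.
# Assumed comparison point: a standard 64-bit DDR4-3200 channel (not from the source text).
hbm2_gbps_per_pin, hbm2_width_bits = 2.0, 1024   # HBM2 per-pin rate and bus width
ddr4_gbps_per_pin, ddr4_width_bits = 3.2, 64     # DDR4-3200, 64-bit channel (assumed)

hbm2_gbytes = hbm2_gbps_per_pin * hbm2_width_bits / 8   # 256.0 GB/s
ddr4_gbytes = ddr4_gbps_per_pin * ddr4_width_bits / 8   #  25.6 GB/s

print(f"HBM2: {hbm2_gbytes:.1f} GB/s   DDR4-3200: {ddr4_gbytes:.1f} GB/s   "
      f"ratio: {hbm2_gbytes / ddr4_gbytes:.0f}x")
```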

The Rambus HBM2E interface is fully compliant with the JEDEC HBM2E JESD235 standard. It supports data rates of up to 3.2 Gbps per data pin. The interface features 8 independent channels, each 128 bits wide, for a total data width of 1024 bits. The resulting bandwidth is 410 GB/s per stack, with each stack consisting of 2, 4, 8 or 12 DRAMs.
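
As a quick sanity check, the quoted per-stack bandwidth follows directly from these figures; the snippet below is an illustrative calculation only, not part of the Rambus deliverables.

```python
# Peak HBM2E bandwidth per stack from the parameters above.
data_rate_gbps = 3.2                        # Gbps per data pin
channels = 8                                # independent channels per stack
bits_per_channel = 128                      # data bits per channel
total_bits = channels * bits_per_channel    # 1024-bit interface

peak_gbytes_per_s = data_rate_gbps * total_bits / 8   # ~409.6 GB/s
print(f"{total_bits}-bit interface at {data_rate_gbps} Gbps -> "
      f"{peak_gbytes_per_s:.1f} GB/s per stack")
```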

The interface is designed for a 2.5D system with an interposer used for routing signals between the DRAM stack and the PHY on the SoC. This combination of signal density and stacked form factor requires special design consideration. To ease implementation and improve design flexibility, Rambus performs complete signal and power integrity analysis on the entire 2.5D system to ensure that all signal, power and thermal requirements are met.

HBM2E Memory Interface Subsystem Example

The Northwest Logic HBM2E controller supports both HBM2 and HBM2E devices with data rates of up to 3.2 Gbps per data pin. It supports all standard channel densities including 4, 6, 8, 12, 16 and 24 Gb. The controller maximizes memory bandwidth and minimizes latency via Look-Ahead command processing. The core is DFI compatible (with extensions added for HBM2E) and supports AXI, OCP or native interface to user logic.
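
For a rough sense of how these channel densities translate into stack capacity, the sketch below assumes the 8-channel stack organization described above; the helper function is a hypothetical illustration, not part of the controller.

```python
# Hypothetical sizing helper: per-channel density -> total stack capacity,
# assuming the 8-channel HBM2E stack organization described above.
def stack_capacity_gbytes(channel_density_gbit: float, channels: int = 8) -> float:
    return channel_density_gbit * channels / 8   # Gb -> GB

for density in (4, 6, 8, 12, 16, 24):            # supported channel densities (Gb)
    print(f"{density:>2} Gb/channel -> {stack_capacity_gbytes(density):.0f} GB per stack")
```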

The HBM2E PHY and Northwest Logic HBM2E controller used together comprise a complete HBM2E memory interface subsystem. Alternatively, these cores can be licensed separately to be paired with 3rd-party HBM2E controller or PHY solutions.

HBM2E and GDDR6: Memory Solutions for AI

Artificial Intelligence/Machine Learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000, driving rapid improvements in every aspect of computing hardware and software. Meanwhile, AI inference is being deployed across the network edge and in a broad spectrum of IoT devices, including automotive/ADAS. Training and inference have unique feature requirements that can be served by tailored memory solutions. Learn how HBM2E and GDDR6 provide the high performance demanded by the next wave of AI applications.

Solution Offerings

  • Co-verified with Rambus HBM2E PHY
  • Supports HBM2 and HBM2E devices
  • Supports all standard HBM2 channel densities (4, 6, 8, 12, 16, 24 Gb)
  • Supports up to 3.2 Gbps/pin
  • Handles two pseudo-channels with one controller or independently with two controllers
  • Queue-based interface optimizes performance and throughput
  • Maximizes memory bandwidth and minimizes latency via Look-Ahead command processing
  • Achieves high clock rates with minimal routing constraints
  • Full run-time configurable timing parameters and memory settings
  • DFI compatible (with extensions added for HBM2E)
  • Full set of Add-On cores available
  • Supports AXI, OCP or native interface to user logic
  • Delivered fully integrated and verified with target PHY

Deliverables:

  • Core (source code)
  • Testbench (source code)
  • Complete documentation
  • Expert technical support
  • Maintenance updates

Engineering Design Services:

  • Customization
  • SoC Integration

2.5D/3D Packaging Solutions for AI and HPC

For AI and HPC applications, HBM2E memory can deliver excellent bandwidth, capacity and latency in a very compact footprint thanks to its 2.5D/3D structure. The flip side is that this same structure increases design complexity and raises a new set of implementation considerations.

Protocol Compatibility

Protocol | Max. Data Rate (Gbps) | Application
HBM2E    | 3.2                   | AI/ML, HPC and Data Center
HBM2     | 2.0                   | AI/ML, HPC and Data Center

Memory Systems for AI and Leading-Edge Applications

Thanks to rapid advancements in computing, neural networks are fueling tremendous growth in AI for a broad spectrum of applications. Learn about the memory architectures, and their relative advantages, at the heart of the AI revolution.
