GDDR6 PHY

Designed for performance and power efficiency, the GDDR6 PHY enables applications requiring high memory throughput, including graphics, advanced driver assistance systems (ADAS), data centers and artificial intelligence (AI). Paired with the Northwest Logic GDDR6 Controller, it forms a complete GDDR6 memory interface subsystem.

How the GDDR6 Interface works

Originally designed for graphics applications, GDDR6 is a high-performance memory solution that can be used in a variety of compute-intensive applications.

The Rambus GDDR6 PHY is fully compliant with the JEDEC GDDR6 (JESD250) standard, supporting data rates of up to 18 Gbps per pin. The GDDR6 interface provides two channels, each 16 bits wide, for a total data width of 32 bits. At 18 Gbps per pin, the Rambus GDDR6 PHY delivers a bandwidth of 72 GB/s. The PHY is available in advanced FinFET nodes for leading-edge SoC integration. Rambus works directly with customers to provide full-system signal and power integrity analysis, creating an optimized chip layout. Customers receive a hard macro solution with a full suite of test software for quick turn-on, characterization and debug.
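The quoted bandwidth follows directly from the per-pin data rate and the interface width; a minimal sketch of the arithmetic (the function name is illustrative, not part of any Rambus deliverable):

```python
# Peak bandwidth of a GDDR6 interface: per-pin data rate times total data width.
# Figures from the text: 18 Gbps per pin, 2 channels x 16 bits = 32 data pins.

def gddr6_peak_bandwidth_gbytes(pin_rate_gbps: float, data_pins: int) -> float:
    """Peak bandwidth in GB/s = (Gbps per pin * number of data pins) / 8 bits per byte."""
    return pin_rate_gbps * data_pins / 8

# 18 Gbps/pin across the 32-bit interface -> 72 GB/s, matching the figure above.
assert gddr6_peak_bandwidth_gbytes(18, 32) == 72.0

# The lower speed bins offered for the PHY scale linearly:
for rate in (12, 14, 16, 18):
    print(f"{rate} Gbps/pin -> {gddr6_peak_bandwidth_gbytes(rate, 32):.0f} GB/s")
```

The same arithmetic gives 48, 56 and 64 GB/s for the 12, 14 and 16 Gbps speed bins listed under Solution Offerings.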

GDDR6 Memory Interface Subsystem

The Northwest Logic GDDR6 controller fully supports the bandwidth and dual-channel capabilities of the Rambus GDDR6 PHY. It maximizes memory bandwidth and minimizes latency via Look-Ahead command processing. The core is DFI-compatible (with extensions for GDDR6) and supports AXI, OCP, or a native interface to user logic.

Used together, the Rambus GDDR6 PHY and Northwest Logic GDDR6 controller form a complete GDDR6 memory interface subsystem. Alternatively, each core can be licensed separately and paired with a third-party GDDR6 controller or PHY solution.

From Data Center to End Device: AI/ML Inferencing with GDDR6

Created to support 3D gaming on consoles and PCs, GDDR memory delivers performance that makes it an ideal solution for AI/ML inferencing. As inferencing migrates from the heart of the data center to the network edge, and ultimately to a broad range of AI-powered IoT devices, GDDR memory’s combination of high bandwidth, low latency, power efficiency and suitability for high-volume applications will be increasingly important. The latest iteration of the standard, GDDR6, pushes data rates to 18 gigabits per second and device bandwidths to 72 gigabytes per second.

Solution Offerings

  • JESD250 Compliant
  • Available on TSMC 7nm process
  • Flexible delivery of IP core: works with ASIC/SoC layout requirements
  • Speed bins: 12 Gbps, 14 Gbps, 16 Gbps, 18 Gbps
  • 2 Channels
  • Support for GDDR6 SGRAM
  • DFI 3.1-style interface for easy integration with the memory controller
  • Memory controller or PHY can be ASIC interface master (PHY independent mode)
  • Selectable low-power operating states
  • Programmable Driver/Termination impedance value
  • Driver/Termination Impedance calibration
  • Built-in test support
  • Utilizes 13-layer metal stack
  • Register interface for state observation
  • LabStation™ software environment for system level bring-up, characterization, and validation
  • Fully-characterized hard macro (GDSII)
  • Complete design views:
    • Gate-level and IO models
    • Layout abstracts (.lef)
    • Timing models (.lib)
    • Verilog Behavior model
    • CDL netlists (.cdl)
    • GDSII layout
    • DRC & LVS reports
  • Full documentation:
    • Datasheet
    • Package design guidelines
    • ASIC/DFT manufacturing guidelines
    • Test and characterization user guide

Comprehensive Chip and System Design Reviews

  • Kickoff/Program Review
  • Floor plan Review
  • Test/Characterization Plan Review
  • Package Design Review
  • Board Design Review
  • Final Chip Integration Review
  • Bring-up and Test Review

Engineering Design Services

  • Package design
  • System board layout
  • Statistically-based signal and power integrity analysis

HBM2E and GDDR6: Memory Solutions for AI

Artificial Intelligence/Machine Learning (AI/ML) is growing at a lightning pace. In the past eight years, AI training capability has jumped by a factor of 300,000, driving rapid improvements in every aspect of computing hardware and software. Meanwhile, AI inference is being deployed across the network edge and in a broad spectrum of IoT devices, including automotive/ADAS. Training and inference have unique feature requirements that can be served by tailored memory solutions. Learn how HBM2E and GDDR6 provide the high performance demanded by the next wave of AI applications.
