Key Design Points To Consider: GDDR6 and HBM2 DRAMs


The industry is seeing continued growth in the need for high memory bandwidth to process and extract meaning from ever-increasing volumes of digital data. Much of the buzz centers on GDDR6 memory for the benefits it brings not just to graphics, but to applications like AI and self-driving cars that are important today and in the near future. However, talk still lingers in some OEM circles about whether to take the GDDR6 route or the HBM2 (High Bandwidth Memory) route in next-generation systems. Each has its own distinct benefits, as well as challenges, from the perspective of application needs and implementation difficulty.

Frank Ferro, Sr. Director, Product Marketing, for Rambus Memory & Interface Division, delineates the two from an applications perspective in a recent video.
In short, with GDDR6 memory systems, SoC designers can rest easy knowing they are compatible with today's standard packaging and printed circuit boards. Moreover, GDDR6 leverages the traditional unstacked DRAM die architectures that are common throughout the industry. However, GDDR6 memory systems can be tricky to implement: the high data rates mean SoC and board designers must grapple with more challenging signal integrity and cooling requirements to reach target performance with acceptable thermals.

(Image: GDDR6 and HBM2)

HBM2 memory systems, on the other hand, offer system designers two key advantages: the highest bandwidth per device, and the best power efficiency among high-performance memory systems.

While these benefits are desirable across the industry, the critical challenges to consider are the cost and increased design complexity of HBM2 memory systems. HBM2 uses stacked DRAM die, which are more difficult to manufacture and therefore more costly. At the system level, an additional silicon interposer is needed to provide electrical connectivity between the HBM2 stack and the SoC. The interposer shortens the communication path between the SoC and memory, but it adds cost and complexity to design and assemble the additional components, and designers must also manage the thermals and long-term reliability of the stacked assembly.

A simple example illustrates how these differences can impact system design. Fig. 1 shows two 256 GB/s memory systems, one using GDDR6 and one using HBM2. While both deliver 256 GB/s of memory bandwidth, they get there by different means: GDDR6 runs a relatively narrow interface at a high per-pin data rate of 16 Gbps, while HBM2 runs a much wider interface at just 2 Gbps per pin. Despite that wider interface, the SoC controller PHY of an HBM2 memory system is smaller in area and lower in power than that of a GDDR6 memory system.
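To make the arithmetic concrete, the short Python sketch below reproduces the bandwidth math behind Fig. 1. The 1024-bit width is the standard HBM2 stack interface; the 128-bit GDDR6 interface width (e.g., four x32 devices) is an assumption consistent with the stated 16 Gbps data rate, not a figure given in the article.

    # Peak bandwidth (GB/s) = per-pin data rate (Gb/s) * interface width (bits) / 8
    def bandwidth_gb_per_s(data_rate_gbps, width_bits):
        return data_rate_gbps * width_bits / 8

    # GDDR6: 16 Gb/s per pin; 128 data bits assumed (e.g., four x32 devices)
    print(bandwidth_gb_per_s(16, 128))    # 256.0 GB/s

    # HBM2: 2 Gb/s per pin across the standard 1024-bit stack interface
    print(bandwidth_gb_per_s(2, 1024))    # 256.0 GB/s

The same aggregate bandwidth thus comes from two very different trade-offs: fewer, faster pins for GDDR6 versus many slower pins for HBM2, which is exactly what drives the signal integrity and packaging differences discussed above.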

As the industry moves forward, it’s imperative for system and SoC design engineers to focus on the primary requirements for their systems, as well as the key specifications of the high-performance DRAMs. Both have a direct influence on DRAM choice and the corresponding memory controller and SoC controller PHY architectures.