Earlier this month, Semiconductor Engineering editor-in-chief Ed Sperling hosted an industry roundtable to discuss new DRAM options and considerations. Frank Ferro, our senior director of product management, represented Rambus, alongside participants from Cadence, Synopsys and Samsung Electronics.
As Ferro points out, the two primary challenges the memory industry continues to face are bandwidth and latency.
“These challenges really haven’t changed very much [over the years]. However, the key difference is that we’re now seeing a swing back from where we had plenty of bandwidth – and compute was the bottleneck – to where memory is the bottleneck again.”
This trend, says Ferro, has prompted the development of a range of memory technologies and approaches, including advanced packaging, in-memory computing, and near-memory computing.
“[For example], HBM basically came out of nowhere a couple years ago and now GDDR is showing up on everyone’s radar,” he explains. “We’re [currently] building physical layers based on [vendor] specs and JEDEC specs. Architecturally, we are looking at ways to innovate.”
According to Ferro, certain types of memory architectures are now being designed to match specific applications.
“In networking, you may see HBM doing packet buffering or DDRs doing some other kind of housekeeping function. The SoCs and the networking processors are starting to optimize memory subsystems based on the type of workloads they’re seeing,” he elaborates. “In AI, the training is moving toward HBM. Inference can’t afford HBM [in most cases], so they’re leaning toward GDDR6.”
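The bandwidth gap behind that training/inference split can be seen with back-of-the-envelope arithmetic: peak bandwidth is roughly interface width times per-pin data rate. The short sketch below uses representative per-pin rates for HBM2E and GDDR6 parts; the figures are illustrative, not numbers cited at the roundtable.

```python
# Rough peak-bandwidth estimate: width (bits) x data rate (Gbps/pin) / 8.
# The per-pin rates below are representative of HBM2E and GDDR6 parts;
# actual devices vary by vendor and speed grade.

def peak_gb_per_s(width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a single device or stack."""
    return width_bits * gbps_per_pin / 8

# One HBM2E stack: 1024-bit interface at ~3.2 Gbps/pin -> ~410 GB/s
print(f"HBM2E stack:  {peak_gb_per_s(1024, 3.2):6.1f} GB/s")

# One GDDR6 device: 32-bit interface at 16 Gbps/pin -> 64 GB/s
print(f"GDDR6 device: {peak_gb_per_s(32, 16.0):6.1f} GB/s")
```

A single HBM stack thus delivers several times the bandwidth of a single GDDR6 device, but it requires stacked die and a 2.5D interposer, which is where the cost pressure on inference designs comes from.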
System-level requirements, says Ferro, determine which memory technology is most appropriate, as performance trade-offs are made among power consumption, design complexity, and cost. Ferro also notes that the next generation of HBM is on its way, promising more bandwidth and better power efficiency. Indeed, AI is still in its infancy, and system architects continue to look for more bandwidth from the memory subsystem as data sets grow. This additional bandwidth and memory density will also enable the co-design of processors and memory systems for further performance optimization.
Moreover, says Ferro, foundries and semiconductor, DRAM, and IP companies are working to improve the overall cost of HBM implementation. HBM is still in the early stages of manufacturing, and cost reductions will come with volume and with manufacturing experience in both the 3D DRAM stacks and the 2.5D systems. There are also efforts to explore new interposer materials to reduce cost.
“The addition of both HBM and GDDR6 to the traditional DDR DRAM now gives designers more choices for systems requiring very high bandwidth. Although having additional choices for the memory subsystem means more evaluation work for system architects, it is ultimately very good for the industry as the need for more bandwidth has no end in sight,” he concludes.