“Although HBM provides a DDR3-like bit rate per pin (HBM1 = 1GHz, HBM2 = 2GHz), the standard more than compensates with its channels of 128 bits each,” Loren Shalinsky, a Strategic Development Director at Rambus, explained. “This results in an interface that can support 1024 bits x 2GHz = 2048 Gbits/sec (256GB/sec). The HBM stack (as defined in HBM2) can actually have up to 8 die in the stack. The standard doesn’t specify how many channels are on each die.”
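Shalinsky's arithmetic can be reproduced with a short sketch. The channel count of 8 is an assumption drawn from the quoted 1024-bit total (8 x 128 bits); the per-pin rates are the HBM1/HBM2 figures he cites:

```python
# Illustrative arithmetic for the HBM figures quoted above.
# Assumption: 8 channels x 128 bits = the 1024-bit interface Shalinsky cites.
CHANNELS = 8
BITS_PER_CHANNEL = 128
PIN_RATE_GBPS = {"HBM1": 1, "HBM2": 2}  # effective Gbit/s per pin, per the quote

def stack_bandwidth_gb_s(generation: str) -> float:
    """Aggregate bandwidth of one HBM stack in GB/s."""
    total_bits = CHANNELS * BITS_PER_CHANNEL           # 1024-bit interface
    gbits_per_sec = total_bits * PIN_RATE_GBPS[generation]
    return gbits_per_sec / 8                           # bits -> bytes

print(stack_bandwidth_gb_s("HBM2"))  # 1024 bits x 2GHz = 2048 Gbit/s = 256.0 GB/s
```

Running the HBM1 case the same way yields 128 GB/s per stack, which is why a multi-stack GPU can reach the aggregate figures discussed below.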
Indeed, as ExtremeTech’s Joel Hruska recently noted, future GPUs built with HBM might reach 512GB/s to 1TB/s of main memory bandwidth as compared to 336GB/s on the current Titan Black.
“HBM is the ‘middle’ option as far as cost and bandwidth — it’s not as cheap as Wide I/O, or as power efficient, but it’s explicitly designed for high performance GPU environments and still should be cheaper than HMC,” said Hruska.
IMAGE CREDIT: ExtremeTech (via JEDEC)
“HBM is explicitly designed for graphics, but it’s a specialized application of Wide I/O 2. Both AMD and Nvidia are adopting it for next-generation GPUs — Nvidia has stated they’ll use it for Pascal in 2016, while AMD is working on the tech but hasn’t yet publicly stated which GPUs will support it.”
According to Rambus VP of IP Strategy Stefan Tamme, Rambus has a long history of working with companies across multiple markets, including gaming, computing, and mobile, to improve memory performance, power efficiency, and capacity. In fact, most systems using advanced DRAM architectures today carry some form of Rambus DNA.
“Stacked memory architectures, such as HBM, represent a natural evolution for higher performance and capacity systems. We have participated in a number of initiatives over the years to develop enabling technologies that ease the transition,” Tamme concluded. “We look forward to continually developing comprehensive memory architecture solutions that take into account the evolving needs of an increasingly diverse marketplace.”