The once indefatigable Moore’s Law is beginning to slow, even as data, driven by a burgeoning Internet of Things (IoT), continues to increase exponentially. Consequently, a slew of new memory architectures, including those utilizing 2.5D and 3D packaging, are evolving to meet the demands of a new digital age.
Nevertheless, as Ed Sperling of Semiconductor Engineering recently pointed out, there are still more questions than answers about the future of memory, perhaps because DDR4 has no obvious successor.
“[It is unclear] which type of memories to use for what, how they should be packaged and used, and how those new memories will impact data storage further downstream at the disk level,” he explained. “What comes next may be a new memory type, or it may be a new architectural approach using the same technology [as DRAM].”
According to Frank Ferro, a senior director of product management at Rambus, potential directions for a beyond-DDR4 paradigm include leveraging existing memory system I/O and architectures to support higher frequencies and multiple memory types on the DDR channel.
“Next-generation memory needs to consider advanced I/O techniques, new data bus topologies, and the use of improved, lower-swing, power-efficient, single-ended signaling to reduce bottlenecks,” he said.
Rambus, notes Ferro, already has a prototype memory interface running at 6.4 Gbps with two DIMMs per channel, which is more than twice existing DDR4 data rates. “By doubling the speed of the memory interface and increasing DIMM performance, Rambus has demonstrated that there is still a strong roadmap for traditional DDR interfaces.”
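As a rough sanity check, the sketch below compares the prototype's per-pin rate against a mainstream DDR4 grade of the time. The DDR4-2400 rate and the standard 64-bit DIMM data bus are assumptions chosen for illustration, not figures from Rambus.

```python
# Back-of-envelope check of the "more than 2x DDR4" claim.
# Assumed (not from the article): mainstream DDR4 shipping at
# 2.4 Gbps per pin (DDR4-2400) and a standard 64-bit DIMM data bus.

DIMM_BUS_WIDTH_BITS = 64           # standard non-ECC DIMM data bus
DDR4_PIN_RATE_GBPS = 2.4           # assumed: DDR4-2400 per-pin rate
PROTOTYPE_PIN_RATE_GBPS = 6.4      # Rambus prototype, per the article

def dimm_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak per-DIMM bandwidth in GB/s: pin rate x bus width / 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

ddr4 = dimm_bandwidth_gb_s(DDR4_PIN_RATE_GBPS, DIMM_BUS_WIDTH_BITS)
proto = dimm_bandwidth_gb_s(PROTOTYPE_PIN_RATE_GBPS, DIMM_BUS_WIDTH_BITS)

print(f"DDR4-2400 DIMM:     {ddr4:.1f} GB/s")      # 19.2 GB/s
print(f"6.4 Gbps prototype: {proto:.1f} GB/s")     # 51.2 GB/s
print(f"Ratio:              {proto / ddr4:.2f}x")  # ~2.67x, i.e. more than 2x
```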
In addition, Rambus continues to participate actively in industry conversations about trends such as 2.5D/3D packaging and high bandwidth memory (HBM), the latter of which stacks up to eight DRAM dies.
“From our perspective, 2.5D and 3D packaging is primarily being driven by HBM, which is designed for use in server and network devices,” Ferro explained. “At this point in time, the cost-benefit of HBM varies based on specific use cases, such as those that demand higher DRAM density.”
In turn, says Ferro, HBM is driven by an insatiable need for more bandwidth, which it meets by bringing memory closer to the processor.
“The maximum speed for HBM is 2 Gbits/s per pin, for a total bandwidth of 256 Gbytes/s,” he confirmed. “And while the bit rate may be somewhat similar to DDR3 at 2.1 Gbps, the eight 128-bit channels give HBM approximately 15x more bandwidth.”
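Ferro's figures can be reproduced directly, as the minimal sketch below shows. The only number not taken from the quote is the 64-bit width assumed for the DDR3 comparison channel, which is the standard DIMM data bus.

```python
# Worked version of the HBM arithmetic quoted above: 8 channels,
# each 128 bits wide, at 2 Gbit/s per pin, compared against a single
# DDR3 channel at 2.1 Gbps (64-bit width assumed, standard DIMM bus).

HBM_CHANNELS = 8
HBM_CHANNEL_WIDTH_BITS = 128
HBM_PIN_RATE_GBPS = 2.0

DDR3_BUS_WIDTH_BITS = 64      # assumed: one standard 64-bit DDR3 channel
DDR3_PIN_RATE_GBPS = 2.1      # per the quote

hbm_gb_s = HBM_CHANNELS * HBM_CHANNEL_WIDTH_BITS * HBM_PIN_RATE_GBPS / 8
ddr3_gb_s = DDR3_BUS_WIDTH_BITS * DDR3_PIN_RATE_GBPS / 8

print(f"HBM stack bandwidth:    {hbm_gb_s:.0f} GB/s")    # 256 GB/s
print(f"DDR3 channel bandwidth: {ddr3_gb_s:.1f} GB/s")   # 16.8 GB/s
print(f"Ratio:                  {hbm_gb_s / ddr3_gb_s:.1f}x")  # ~15.2x
```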
As Ferro emphasizes, HBM design and implementation can also be challenging, as 2.5D packaging inevitably adds manufacturing complexity along with silicon interposer costs.
“There are numerous expensive components mounted to the interposer, such as the SoC and multiple HBM devices,” said Ferro. “Another significant challenge involves routing thousands of signals (data + control + power/ground) through the interposer to the SoC for each HBM device used. Therefore, good yield is certainly a critical factor in making the system cost effective.”
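For a sense of the scale behind that routing challenge, the estimate below tallies a per-stack signal count. Only the 8 x 128 data-pin figure follows from the channel configuration quoted earlier; the control/address and power/ground counts and the four-stack package are illustrative assumptions, not datasheet values.

```python
# Rough signal-count estimate behind the "thousands of signals" remark.
# Only the data-pin count follows from the article; the rest are
# illustrative assumptions, not figures from an HBM datasheet.

DATA_PINS_PER_STACK = 8 * 128       # 8 channels x 128 bits = 1024 data pins
CTRL_ADDR_PINS_PER_STACK = 300      # assumed: command/address, clocks, strobes
POWER_GROUND_PINS_PER_STACK = 400   # assumed: power and ground connections
HBM_STACKS = 4                      # assumed: multi-stack SoC package

per_stack = (DATA_PINS_PER_STACK + CTRL_ADDR_PINS_PER_STACK
             + POWER_GROUND_PINS_PER_STACK)
total = per_stack * HBM_STACKS

print(f"Signals per HBM stack: ~{per_stack}")  # ~1724
print(f"Interposer total:      ~{total}")      # ~6896 across 4 stacks
```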
Despite the above-mentioned challenges, says Ferro, HBM offers a number of distinct capabilities for a new digital age dominated by the IoT. These include moving memory closer to the CPU while increasing both density and bandwidth.
“In short,” Ferro added, “HBM takes advantage of existing technologies to create another tier of memory, thus bolstering the overall server memory architecture. At the same time, continued enhancements are needed to the underlying memory and system topology to provide even greater performance as we look out to 2019 and beyond.”