Rambus’ John Eble and Frank Ferro recently penned an article for Data Center Dynamics that explores how new memory technology can help bolster both bandwidth and capacity in the data center.
As the two note, DDR4 delivers up to 1.5x the performance of DDR3 while reducing power on the memory interface by an impressive 25%. In addition, DDR4 supports a maximum capacity of 512 GB per module and features more banks than its DDR3 predecessor, along with significantly smaller row sizes, faster bank cycling and an increased pin count (288) to support higher addressing capability.
“DDR4 supports stacked memory chips with up to 8 devices presenting a single signal load to memory controllers,” the two explained. “In fact, compared to DDR3, DDR4 offers the potential to double module density as well as speed, while lowering power consumption and extending battery life in future 64-bit tablets and smartphones.”
While DDR4 clocks in at 1.6Gbps to 3.2Gbps, DDR5 is slated to double the bandwidth, achieving speeds of 3.2Gbps to 6.4Gbps to meet the growing demands of data centers in the age of the Internet of Things (IoT). According to Eble and Ferro, while the DDR5 standards work is still in progress, early publicly released information shows a number of evolutionary improvements. These include higher density, a new command structure and new power saving features. DDR5 will also likely introduce signal equalization and error correction.
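Those data rates translate directly into peak per-module bandwidth. The quick calculation below is an illustrative sketch rather than anything from the article, and it assumes the standard 64-bit DIMM data path (which DDR5 reorganizes into two 32-bit subchannels without changing the total width per module):

```python
# Illustrative peak-bandwidth figures for the data rates quoted above.
# Assumes a 64-bit module data path; sustained bandwidth in practice is lower.

def dimm_peak_gb_per_s(data_rate_mts, width_bits=64):
    """Peak bandwidth in GB/s = data rate (MT/s) x bus width (bits) / 8 / 1000."""
    return data_rate_mts * width_bits / 8 / 1000

for name, rate in [("DDR3-2133", 2133), ("DDR4-3200", 3200), ("DDR5-6400", 6400)]:
    print(f"{name}: {dimm_peak_gb_per_s(rate):.1f} GB/s per module")
# DDR3-2133: ~17.1 GB/s, DDR4-3200: ~25.6 GB/s (~1.5x), DDR5-6400: ~51.2 GB/s (2x DDR4)
```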
In addition to discussing DDR4 and DDR5, Eble and Ferro touched on the introduction of high bandwidth memory (HBM), explaining how it represents another approach to increasing server memory performance.
“HBM bolsters local available memory by placing low-latency DRAM closer to the CPU. Moreover, HBM DRAM increases memory bandwidth by providing a very wide interface to the SoC of 1024 bits. This means the maximum speed for HBM2 is 2Gbps for a total bandwidth of 256GB/s,” Eble and Ferro elaborated. “Although the bit rate is similar to DDR3 at 2.1Gbps, the eight 128-bit channels provide HBM with approximately 15X more bandwidth. In addition, four HBM memory stacks (for example), each with 256GB/s in close proximity to the CPU, provide both a significant increase in memory density (up to 8GB per HBM stack) and bandwidth when compared with existing architectures.”
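The arithmetic behind those HBM numbers is straightforward. The sketch below simply reproduces it under the assumptions stated in the quote (a 1024-bit interface at 2Gbps per pin for HBM2, compared against a 64-bit DDR3-2133 module), purely as an illustration:

```python
# Illustrative reconstruction of the HBM2 bandwidth figures quoted above.

HBM2_WIDTH_BITS = 1024      # 8 channels x 128 bits per stack
HBM2_PIN_RATE_GBPS = 2.0    # maximum per-pin data rate cited for HBM2

stack_bw = HBM2_WIDTH_BITS * HBM2_PIN_RATE_GBPS / 8   # GB/s per stack
ddr3_bw = 64 * 2.133 / 8                               # GB/s for a 64-bit DDR3-2133 module

print(f"HBM2 stack: {stack_bw:.0f} GB/s")              # 256 GB/s
print(f"vs. DDR3 module: ~{stack_bw / ddr3_bw:.0f}x")  # ~15x
print(f"4 stacks: {4 * stack_bw:.0f} GB/s aggregate, up to {4 * 8} GB of capacity")
```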
Last but certainly not least, Eble and Ferro described how hybrid DIMM technologies such as NVDIMM (non-volatile DIMMs) are being deployed to address the insatiable demand for increased memory capacity and bandwidth. According to JEDEC, NVDIMM-P will enable new memory solutions optimized for cost, power usage and performance as a new high-capacity persistent memory module for computing systems.
“Persistent memory usage on the DIMM supports multiple use cases for hyperscale, high performance and high capacity data centers. These applications include latency reduction, power reduction, metadata storage, in-memory databases, software-defined server RAID, as well as reduced processing load during unexpected failures,” the two added. “[Moreover], NVDIMM-P enables fully accessible flash on the DIMM, allowing the system to leverage non-volatile memory as an additional high-speed memory bank. Meanwhile, NVDIMM-N is designed to enable flash as a persistent memory backup to the DRAM. In practical terms, this means DRAM data is stored locally on the flash, which creates persistence (in case of a power outage) while reducing CPU load.”
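As a conceptual illustration of the NVDIMM-N behavior described above, where DRAM contents are copied to on-module flash when power is lost and restored afterward, here is a minimal, purely hypothetical sketch. The class and method names are invented for illustration and do not correspond to any real NVDIMM interface; in a real module the backup is triggered in hardware by the module controller, not by application code.

```python
# Hypothetical sketch of the NVDIMM-N save/restore flow described above.
# All names are illustrative; real NVDIMM-N backup runs in hardware on power loss.

class NvdimmN:
    def __init__(self):
        self.dram = {}     # fast, volatile working memory
        self.flash = {}    # on-module flash used only as a backup target

    def write(self, addr, value):
        self.dram[addr] = value          # normal traffic goes to DRAM only

    def on_power_loss(self):
        self.flash = dict(self.dram)     # controller copies the DRAM image to flash
        self.dram.clear()                # DRAM contents vanish without power

    def on_power_restore(self):
        self.dram = dict(self.flash)     # image restored, data survives the outage

dimm = NvdimmN()
dimm.write(0x1000, "committed transaction log entry")
dimm.on_power_loss()
dimm.on_power_restore()
assert dimm.dram[0x1000] == "committed transaction log entry"
```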