Writing for DataCenter Dynamics, Scott Fulton notes that recent benchmark results indicate DDR4 is at least partially, and perhaps wholly, responsible for performance gains in low-end and mid-tier servers. At best, DDR4 may also mitigate performance drop-offs at the high end.
However, Fulton says, the industry is still in the early stages of adopting DDR4.
“As JEDEC’s members were well aware from the beginning, the adoption phase takes more than just a few months,” he explained.
“New compilers that enable the latest wave of processors to make better use of [Intel] Xeon’s and Xeon Phi’s vastly updated memory controllers may yet yield the benefits expected for the high-end.”
According to Fulton, the key advantage of DDR4 is that it enables significantly lower-power systems in data centers, even as compute power per cubic foot increases.
“Denser power tends to lead toward heat, and heat leads to meltdowns. For today, the thermal nightmare has been averted,” he added.
Loren Shalinsky, a Strategic Development Director at Rambus, expressed similar sentiments during a recent interview in Sunnyvale.
“The memory industry has traditionally emphasized reducing power consumption, while simultaneously encouraging the addition of more bandwidth and storage capacity,” Shalinsky told Rambus Press.
“Recently introduced DDR4 memory will ultimately supplant the DDR3 populating current data centers – effectively reducing DRAM power consumption by more than 35%. This will also lead to an overall data center power consumption reduction of almost 8%.”
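Shalinsky's two figures imply a rough DRAM share of total data-center power, which can be back-computed with a quick sketch. The assumption here (ours, not stated in the article) is that the ~8% facility-level saving comes entirely from the 35% DRAM power reduction:

```python
# Back-of-the-envelope check of the power figures quoted above.
dram_power_cut = 0.35   # DRAM power reduction, DDR3 -> DDR4 (quoted as "more than 35%")
facility_cut = 0.08     # resulting overall data-center reduction (quoted as "almost 8%")

# If the whole facility saving comes from DRAM alone, DRAM's share of
# total data-center power is the ratio of the two figures.
implied_dram_share = facility_cut / dram_power_cut
print(f"Implied DRAM share of data-center power: {implied_dram_share:.0%}")
```

Under that assumption, DRAM would account for roughly a quarter of data-center power draw, which is in the range commonly cited for memory-heavy server fleets.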
As Frank Ferro, a Senior Director of Product Management at Rambus, points out, the memory industry will continue to innovate by further shrinking power consumption in the data center.
For example, says Ferro, Rambus’ Beyond DDR demo silicon offers a 25% improvement in power efficiency while hitting data transfer rates of up to 6.4Gbps in a multi-rank, multi-DIMM configuration. That makes the memory interface three times faster than current DIMMs, which top out at 2.133Gbps, and twice the 3.2Gbps maximum speed specified for DDR4.
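The speed multiples quoted above follow directly from the three data rates; a minimal sketch checking the arithmetic:

```python
# Speed figures as quoted in the article (Gbps, per pin).
beyond_ddr = 6.4      # Rambus Beyond DDR demo silicon
current_dimm = 2.133  # top speed of current DIMMs
ddr4_max = 3.2        # maximum speed specified for DDR4

print(round(beyond_ddr / current_dimm, 1))  # roughly 3x current DIMMs
print(beyond_ddr / ddr4_max)                # 2x the DDR4 maximum
```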
“The 25% power savings can be attributed to several factors. Firstly, the low-swing signaling reduces the I/O power required on the interface,” Ferro explained.
“The design is also ‘asymmetric,’ meaning that complex timing and the equalization circuits are all implemented in the PHY, thus greatly simplifying the DRAM interface and reducing cost.”
According to Ferro, removing complex timing circuits such as PLLs and DLLs from the DRAM makes it extremely agile, enabling rapid entry into and exit from power-down mode. Because the memory controller originates all memory requests, it can implement an aggressive, granular DRAM power management scheme.
“As we look to server requirements over the next five years, it is estimated that the total memory bandwidth will need to increase ~33% per year to keep pace with processor improvements,” Ferro noted in a recent Semiconductor Engineering article.
“Given this projection, DRAM would have to achieve speeds of over 12Gbps in 2020! Although this is a 4X speed increase over the current DDR4 standard, the Rambus Beyond DDR4 silicon shows traditional DRAM signaling still has plenty of headroom for growth and that these speeds [within reasonable power envelopes] are possible.”
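Ferro's projection can be reproduced by compounding the 33% annual growth rate from the DDR4 maximum. The base year of 2015 is our assumption (the article does not state it), chosen because it makes the compounded figure line up with the quoted "over 12Gbps" and "4X" claims:

```python
# Sketch of the bandwidth projection quoted above.
base_rate = 3.2    # Gbps, DDR4 maximum per-pin rate
base_year = 2015   # assumed starting year (not stated in the article)
growth = 1.33      # ~33% bandwidth growth per year

rate_2020 = base_rate * growth ** (2020 - base_year)
print(f"Projected per-pin rate in 2020: {rate_2020:.1f} Gbps")
print(f"Multiple of the DDR4 maximum: {rate_2020 / base_rate:.1f}x")
```

Five years of 33% compound growth yields roughly 13 Gbps, consistent with the "over 12Gbps" and roughly 4X figures in the quote.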