Bob O’Donnell of TECHnalysis Research recently published a white paper describing the critical role server memory chipsets play in facilitating high-speed DDR4 designs.
“With the introduction of DDR4, server system designers can leverage DRAM that runs at speeds of 2,133 Mbps today, with future speeds running up to 3,200 Mbps,” he explained.
“This performance boost also comes with real challenges, however, because the move to higher speeds degrades electrical signal integrity, especially with multiple modules added to a system. In practical terms, this means it’s becoming harder to achieve higher capacities at higher speeds.”
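To put those data rates in perspective, here is a minimal sketch (in Python) of the peak per-channel bandwidth they imply, assuming a standard 64-bit-wide DDR4 channel. The figures are back-of-the-envelope arithmetic, not measurements from the white paper.

```python
# Back-of-the-envelope peak bandwidth for a standard 64-bit DDR4 channel.
# Data rates are per-pin transfer rates (MT/s), as quoted in the white paper.

CHANNEL_WIDTH_BITS = 64  # standard (non-ECC) DDR4 channel width

def peak_bandwidth_gbps(transfer_rate_mtps: float) -> float:
    """Peak channel bandwidth in GB/s for a given per-pin data rate in MT/s."""
    bytes_per_transfer = CHANNEL_WIDTH_BITS / 8
    return transfer_rate_mtps * 1e6 * bytes_per_transfer / 1e9

for rate in (2133, 2666, 2933, 3200):
    print(f"DDR4-{rate}: ~{peak_bandwidth_gbps(rate):.1f} GB/s per channel")

# DDR4-2133: ~17.1 GB/s per channel
# DDR4-3200: ~25.6 GB/s per channel
```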
To overcome this limitation, says O’Donnell, memory designers use specialized clocks and dedicated memory buffer chips integrated onto the DIMMs.
“These server memory buffer chipsets play a critical role in high-speed DDR4 designs,” he continued. “They allow server designers to maintain the high speeds that DDR4 offers, while also enabling the higher-capacity designs that today’s applications require.”
As O’Donnell notes, there are two categories of modern server DDR4 DIMMs. In a Registered DIMM (RDIMM), a Register Clock Driver (RCD) chip presents a single load for the clock and command/address signals of the entire DIMM on the bus connecting memory to the CPU. This reduces the impact on signal integrity compared with an unbuffered DIMM, where each individual DRAM chip places its own load on the clock and command/address signals.
On a Load Reduced DIMM (LRDIMM), dedicated Data Buffer (DB) chips—one for each byte lane of the data bus, in addition to the RCD on the module—reduce the effective load the DRAMs present on the data bus, enabling the use of higher-capacity DRAM configurations. Together, the RCD and DB chips constitute a complete server DIMM chipset.
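A rough way to see why buffering matters is to count the electrical loads the memory controller sees on the shared buses. The sketch below assumes a hypothetical two-rank module with 18 x4 DRAM chips per rank and the DDR4 arrangement of one RCD plus nine data buffers (one per byte lane); the numbers are illustrative assumptions, not figures from the white paper.

```python
# Illustrative load counts seen by the memory controller for one DIMM.
# Assumed module: 2 ranks, 18 x4 DRAM chips per rank (72-bit ECC width).

RANKS = 2
CHIPS_PER_RANK = 18
BYTE_LANES = 9          # 72-bit ECC bus = 9 byte lanes, one DB chip per lane

# Unbuffered DIMM: every DRAM chip loads the clock/command/address (CA) bus,
# and every rank loads each data (DQ) lane.
udimm_ca_loads = RANKS * CHIPS_PER_RANK
udimm_dq_loads_per_lane = RANKS

# RDIMM: the RCD presents a single load on the CA bus; DQ is still loaded per rank.
rdimm_ca_loads = 1
rdimm_dq_loads_per_lane = RANKS

# LRDIMM: RCD on the CA bus plus one DB per byte lane, so the controller sees
# a single load on each DQ lane regardless of rank count.
lrdimm_ca_loads = 1
lrdimm_dq_loads_per_lane = 1

print(f"UDIMM : CA loads={udimm_ca_loads}, DQ loads/lane={udimm_dq_loads_per_lane}")
print(f"RDIMM : CA loads={rdimm_ca_loads}, DQ loads/lane={rdimm_dq_loads_per_lane}")
print(f"LRDIMM: CA loads={lrdimm_ca_loads}, DQ loads/lane={lrdimm_dq_loads_per_lane}")
```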
“With the server DIMM chipset enabled, data is not actually sent straight to the CPU from memory, just as gasoline isn’t sent straight to a car’s engine from its fuel tank,” said O’Donnell. “A properly designed fuel injection system sends gas to the engine in exactly the right form, quantity and speed that it requires and, in an analogous way, memory buffers serve to regulate the delivery of raw data from memory into and out of the CPU.”
Notably, the placement of the data buffers on DDR4 LRDIMMs also plays a vital role in achieving better performance than DDR3 LRDIMMs.
“The big benefit is the reduced trace distance from each DRAM module to the memory bus and memory controller,” he explained. “While DDR3 LRDIMMs have a single centralized memory buffer that forces data to cross the distance of the DIMM module and back, DDR4 LRDIMMs have dedicated memory buffer chips located a very short trace line away from the data bus.”
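The timing impact of that shorter routing can be estimated from trace length alone. The sketch below uses a typical FR-4 propagation delay of roughly 6–7 ps per millimeter and assumed trace lengths (a few tens of millimeters of extra routing to reach a centralized DDR3-style buffer, versus a few millimeters to a distributed DDR4 data buffer). These lengths are illustrative assumptions, not measurements from the white paper.

```python
# Rough estimate of the extra signal flight time caused by routing data
# through a centralized buffer (DDR3 LRDIMM style) versus a distributed
# data buffer placed next to the connector (DDR4 LRDIMM style).
# All trace lengths below are illustrative assumptions.

PS_PER_MM = 6.7  # typical propagation delay on FR-4 PCB traces (~150-170 ps/inch)

def flight_time_ps(trace_mm: float) -> float:
    """One-way signal flight time in picoseconds for a given trace length."""
    return trace_mm * PS_PER_MM

centralized_extra_mm = 40.0   # assumed extra routing to a mid-module buffer
distributed_extra_mm = 5.0    # assumed DB placement close to the connector

extra_ps = flight_time_ps(centralized_extra_mm) - flight_time_ps(distributed_extra_mm)
print(f"Extra one-way flight time: ~{extra_ps:.0f} ps per hop")
# With these assumptions, a few hundred picoseconds per hop -- which, summed
# over inbound and outbound paths, approaches the nanosecond-scale savings
# described in the article.
```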
Real-world benefits include time savings measured in nanoseconds, as well as improved signal integrity thanks to shorter trace lines. Both translate into better performance for time-sensitive applications. As we’ve previously discussed on Rambus Press, performance is particularly important for today’s cloud-based services, advanced analytics tools and other big data applications, all of which are driving a higher set of expectations for servers.
“Throw in the looming prospect (and opportunity) of the Internet of Things (IoT) and the stage is set for a very challenging environment in today’s and tomorrow’s data centers and enterprise servers,” O’Donnell confirmed. “Many of these new applications leverage very large in-memory databases to meet the performance expectations of today’s increasingly connected, mobile world.”
To be sure, microseconds count when providing real-time analytics on millions of financial transactions or offering real-time language translation via cloud-based services. The solution, says O’Donnell, is a chipset like the new Rambus R+ DDR4 Server DIMM chipset, which reduces latency for time-sensitive applications and ensures the best possible performance in delivering data to and from the CPU. This is particularly true for large multi-core CPUs, which benefit from multiple dedicated channels of memory bandwidth.
“With the R+ DDR4 Server DIMM chipset, Rambus has chosen to enter the finished semiconductor market for the first time, offering its branded chips to DRAM and DIMM manufacturers such as Samsung, SK Hynix, Micron and more,” he added. “The new Rambus chips are DDR4 JEDEC-compliant, ensuring they will work with any standard server DDR4 DRAMs and function in any standard DDR4 server architecture. In fact, they surpass JEDEC’s reliability requirements. They work at 2,666 Mbps and already include built-in support for 2,933 Mbps, making them well-prepared for future memory innovations.”
Interested in learning more? You can check out our official R+ DDR4 Server DIMM chipset product page here and read the complete TECHnalysis white paper here.