Earlier this week, Rambus Chief Scientist Craig Hampel gave a keynote presentation at MemCon 2015 that explored the increasingly blurred lines between memory and storage.
As Hampel notes, devices used as memory are typically volatile, byte addressable and directly writable, with deterministic latency and endurance greater than 10^15 operations. In contrast, storage devices are non-volatile and block addressable, require an erase operation, exhibit varied latency and have endurance limits far lower than those of memory devices.
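To make the addressability distinction concrete, here is a minimal C sketch (not from the keynote): updating one byte in memory is a single store, while updating one byte on block storage means reading, modifying and rewriting an entire block. The 4 KB block size and file name are assumptions for illustration, and error handling is omitted.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096  /* assumed block granularity; real devices vary */

int main(void) {
    /* Memory: byte addressable and directly writable -- a single
       store updates exactly the bytes you touch, at a deterministic latency. */
    uint8_t *mem = malloc(BLOCK_SIZE);
    mem[37] = 0xAB;                    /* one byte, one write */

    /* Storage: block addressable -- changing one byte means reading the
       surrounding block, modifying the copy, and writing the whole block
       back, since the device erases/rewrites at block granularity. */
    int fd = open("data.bin", O_RDWR);
    uint8_t block[BLOCK_SIZE];
    pread(fd, block, BLOCK_SIZE, 0);   /* read the whole block */
    block[37] = 0xAB;                  /* modify one byte in the copy */
    pwrite(fd, block, BLOCK_SIZE, 0);  /* write the whole block back */

    close(fd);
    free(mem);
    return 0;
}
```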
The two also differ in how they are integrated into a system. Memory, says Hampel, is hardware device parallel, hardware state controlled and contextually unaware; the CPU waits for memory, and memory is not power coherent. Storage devices, by contrast, are abstracted and software state controlled, with the context of the data resident in the storage system. Storage is also power coherent, and the CPU typically context switches during a storage operation.
Lastly, memory supports a direct CPU interface, hundreds of GB/s per CPU, relatively few outstanding transactions and fixed scheduling. In contrast, storage features an abstracted interface, tens of GB/s per CPU, hundreds of outstanding transactions and split-transaction/dynamic scheduling.
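These interface differences show up directly in how software accesses the two. As a rough illustration (not from the talk), the sketch below contrasts a synchronous load, where the CPU simply stalls for a short, fixed latency, with POSIX asynchronous I/O, where many block requests remain outstanding while the OS schedules other work or context switches; the request count, sizes and file name are arbitrary.

```c
#include <aio.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define N_REQUESTS 8      /* storage tolerates many in-flight requests */
#define REQ_SIZE   4096

int main(void) {
    /* Memory: the CPU issues a load and waits until data returns;
       latency is deterministic, so there is little to overlap. */
    static uint64_t table[1024];
    uint64_t value = table[123];          /* synchronous, fixed-latency access */
    (void)value;

    /* Storage: latency is long and variable, so the interface is abstracted
       and split-transaction -- queue many requests, then do other work
       while they complete in the background. */
    int fd = open("data.bin", O_RDONLY);
    static uint8_t bufs[N_REQUESTS][REQ_SIZE];
    struct aiocb reqs[N_REQUESTS];
    memset(reqs, 0, sizeof(reqs));

    for (int i = 0; i < N_REQUESTS; i++) {
        reqs[i].aio_fildes = fd;
        reqs[i].aio_buf    = bufs[i];
        reqs[i].aio_nbytes = REQ_SIZE;
        reqs[i].aio_offset = (off_t)i * REQ_SIZE;
        aio_read(&reqs[i]);               /* outstanding transaction i */
    }

    /* ... the CPU is free to do other work here ... */

    for (int i = 0; i < N_REQUESTS; i++) {
        const struct aiocb *list[1] = { &reqs[i] };
        aio_suspend(list, 1, NULL);       /* block (context switch) until done */
        printf("request %d returned %zd bytes\n", i, aio_return(&reqs[i]));
    }
    close(fd);
    return 0;
}
```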
“The application view is probably the most definitive,” Hampel told conference attendees. “Memory is most often associated with partial and intermediate data, while storage is designated for complete and final data, as well as saved and persistent data.”
Despite the differences, says Hampel, the lines between memory and storage are beginning to blur. Indeed, data movement is increasingly the limiting factor for TCO, performance and power efficiency. An alternative paradigm could offload compute to storage that is data structure-aware and attached directly over a memory-like interface.
“Memory and storage will begin to share numerous characteristics at the boundary. As expected, there are numerous software and hardware opportunities for these properties,” he explained. “Memory interfaces (like DDR), with extensions for storage support, are the likely deployment for emerging storage-class memories.”
According to the chief scientist, future converged memory interface requirements include data rates of 6.4 Gbps and beyond, support for two DIMMs per channel (2DPC, with DRAM and SCM modules sharing the same channel), and the efficient allocation and scheduling of SCM and DRAM bandwidth. In addition, future converged standards should minimize latency and power while maintaining similar economics and infrastructure for low-risk industry adoption.
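For a rough sense of scale, assuming a conventional 64-bit DDR data bus (a channel width the keynote does not specify), a 6.4 Gbps per-pin rate works out to roughly 51.2 GB/s of peak bandwidth per channel, to be shared by the two modules in a 2DPC configuration:

```c
#include <stdio.h>

int main(void) {
    /* Assumed parameters -- only the per-pin rate comes from the keynote. */
    const double pin_rate_gbps    = 6.4;  /* per-pin data rate */
    const int    bus_width_bits   = 64;   /* standard (non-ECC) DDR channel width */
    const int    dimms_per_channel = 2;   /* 2DPC: DRAM + SCM sharing the channel */

    double channel_gb_per_s = pin_rate_gbps * bus_width_bits / 8.0;
    printf("Peak channel bandwidth: %.1f GB/s\n", channel_gb_per_s);
    printf("Shared by %d modules (DRAM and SCM) on the same channel\n",
           dimms_per_channel);
    return 0;
}
```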
“In this context, potential DDR5 directions could include revamped control buses that provide more general purpose control paths – while removing primary and secondary bottlenecks,” he continued. “Extended LRDIMM architecture would support higher data and control rates, as well as caching and mixed DRAM/SCM module types, along with address and data buffers to abstract memory types.”
Similarly, the chief scientist added, improved protocols could support storage-class memory over a memory-type bus and pipeline requests that mix minimum-latency DRAM accesses with the non-deterministic, varied latencies of SCM.
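One way to picture such a protocol is a tagged, split-transaction queue in which completions return out of order and are matched to their requests by tag, so fast DRAM-like reads are not stuck behind slow SCM reads. The toy model below is purely illustrative; the latencies, field names and structure are invented for the example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model: fast (DRAM-like) and slow (SCM-like) requests share one
   pipeline; completions are matched by tag rather than by issue order. */
typedef struct {
    uint8_t  tag;         /* identifies the request when data returns */
    bool     is_scm;      /* slow, variable-latency target */
    uint64_t addr;
    int      cycles_left; /* remaining latency (varied for SCM) */
} Request;

int main(void) {
    Request q[4] = {
        { .tag = 0, .is_scm = false, .addr = 0x1000, .cycles_left = 4  },
        { .tag = 1, .is_scm = true,  .addr = 0x2000, .cycles_left = 40 },
        { .tag = 2, .is_scm = false, .addr = 0x3000, .cycles_left = 4  },
        { .tag = 3, .is_scm = true,  .addr = 0x4000, .cycles_left = 25 },
    };
    int pending = 4;

    /* Advance "time"; whichever request finishes first completes first,
       so DRAM reads are not blocked behind slow SCM reads. */
    for (int cycle = 0; pending > 0; cycle++) {
        for (int i = 0; i < 4; i++) {
            if (q[i].cycles_left > 0 && --q[i].cycles_left == 0) {
                printf("cycle %2d: completion for tag %u (%s)\n",
                       cycle, q[i].tag, q[i].is_scm ? "SCM" : "DRAM");
                pending--;
            }
        }
    }
    return 0;
}
```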
“An upgraded data bus would enable lower swing and power efficient, single-ended signaling such as regulated LVSTL (NGS). It would also help facilitate new data bus topologies to improve data rates,” he concluded.