To many in the industry, system memory is viewed as little more than a silicon holding pen that temporarily stores program commands and data during execution. However, the dramatic growth of Big Data, driven by the burgeoning Internet of Things (IoT), has prompted a number of key industry players to re-examine the traditional role of memory in the data center.
Indeed, server utilization has dropped dramatically, especially in Big Data applications, where massive data sets that far exceed the size of main memory force frequent disk access. Accessing data on local disks or remote nodes can take many orders of magnitude longer than fetching it from local DRAM. Consequently, modern data-intensive applications are often severely bottlenecked by the movement of data to and from disks and across networks.
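To put rough numbers on that gap, the sketch below compares order-of-magnitude access latencies for the three tiers mentioned above. The figures are commonly cited ballpark values, not measurements from this article:

```python
# Illustrative access latencies (order-of-magnitude assumptions, not
# measurements from this article) for one data access at each tier.
LATENCY_NS = {
    "local DRAM": 100,                    # ~100 ns
    "local NVMe SSD": 100_000,            # ~100 us
    "remote node (network)": 1_000_000,   # ~1 ms round trip
}

dram = LATENCY_NS["local DRAM"]
for tier, ns in LATENCY_NS.items():
    # Show each tier's latency and its multiple of a DRAM access.
    print(f"{tier:>22}: {ns:>9,} ns  ({ns // dram:,}x DRAM)")
```

Even with these rough assumptions, a disk access costs on the order of a thousand DRAM accesses, and a network hop ten thousand, which is why data movement dominates the runtime of data-intensive workloads.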
Clearly, minimizing data transport improves both performance and power efficiency. Near Data Processing is emerging as a potential new paradigm for doing exactly that: flexible compute engines such as FPGAs are deployed next to (near) large amounts of data. In this configuration, customized processing moves directly to the data rather than the data moving to the processor, improving performance and reducing power consumption.
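The core idea can be illustrated with a toy filter pushdown, a minimal sketch of the principle rather than anything specific to the SDA platform: instead of shipping an entire data set to the host and filtering there, ship the predicate to the node holding the data and return only the matches.

```python
# Toy illustration of near-data processing as predicate pushdown.
# Both paths compute the same answer; they differ in how much data moves.

def filter_on_host(fetch_all, predicate):
    rows = fetch_all()                # moves the entire data set to the host
    return [r for r in rows if predicate(r)]

def filter_near_data(run_near_data, predicate):
    return run_near_data(predicate)   # moves only the (much smaller) result

# Toy "storage node" holding 1M rows; roughly 1% match the predicate.
DATA = list(range(1_000_000))

def pred(r):
    return r % 100 == 0

host_result = filter_on_host(lambda: DATA, pred)
near_result = filter_near_data(lambda p: [r for r in DATA if p(r)], pred)
assert host_result == near_result     # same answer, ~100x less data moved
```

In the host-side path, all one million rows cross the interconnect; in the near-data path, only the ten thousand matching rows do.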
Recently, Rambus revealed details of its Smart Data Acceleration (SDA) Research Program, which focuses on improving performance and power efficiency in servers and data centers. As part of this program, Rambus developed the SDA platform, which couples FPGAs to large capacities of DRAM, closely following the technical contours outlined above.
As a versatile sandbox, the SDA platform allows engineers to explore new near data processing architectures and paradigms that vary in the ways software, firmware, FPGAs and memory interact. The platform is also flexible enough to present itself to the rest of the system in various ways, including as an ultra-fast solid-state disk, a key-value store, and a large pool of memory, to name a few examples. Partners and customers can also use the platform to test fresh methods of optimizing and accelerating analytics for large data sets, including in-memory databases, financial services, ad serving, real-time risk analytics, imaging, transcoding and genome mapping.
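The idea of one pool of memory wearing several "personalities" can be sketched in miniature. Everything below is hypothetical, the article does not describe the SDA platform's actual API, but it shows how a single byte-addressable pool could back both a block-device view and a key-value view:

```python
# Hypothetical sketch: one memory pool exposed two ways. All class and
# method names are illustrative, not the SDA platform's API.
class MemoryPool:
    def __init__(self, size: int):
        self.buf = bytearray(size)

class BlockView:
    """Present the pool as a disk of fixed-size logical blocks."""
    def __init__(self, pool: MemoryPool, block_size: int = 4096):
        self.pool, self.bs = pool, block_size

    def write(self, lba: int, data: bytes) -> None:
        off = lba * self.bs
        self.pool.buf[off:off + len(data)] = data

    def read(self, lba: int, nblocks: int = 1) -> bytes:
        off = lba * self.bs
        return bytes(self.pool.buf[off:off + nblocks * self.bs])

class KVView:
    """Present a pool as a simple append-only key-value store."""
    def __init__(self, pool: MemoryPool):
        self.pool, self.index, self.cursor = pool, {}, 0

    def put(self, key: bytes, value: bytes) -> None:
        off = self.cursor
        self.pool.buf[off:off + len(value)] = value
        self.index[key] = (off, len(value))
        self.cursor += len(value)

    def get(self, key: bytes) -> bytes:
        off, n = self.index[key]
        return bytes(self.pool.buf[off:off + n])

disk = BlockView(MemoryPool(1 << 20))   # 1 MiB toy pool as a block device
disk.write(0, b"hello")
kv = KVView(MemoryPool(1 << 20))        # another pool as a key-value store
kv.put(b"k", b"v")
```

In a real device the views would be implemented in FPGA logic in front of the DRAM; here they are just two wrappers over the same bytearray.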
Initial testing of the SDA platform configured as an ultra-fast solid-state disk confirms that higher IOPS rates are achievable under certain workloads, with significantly reduced latency under load compared to state-of-the-art Enterprise NVMe SSDs. In 4KB random access tests, the SDA engine is capable of delivering over 1M IOPS with latency under load in the 10 μs to 60 μs range for both reads and writes, with additional headroom to achieve higher IOPS rates.
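A quick back-of-envelope check puts those quoted figures in perspective (assuming 4KB means 4 KiB per access):

```python
# Sanity-check the quoted figures: 1M 4 KiB IOPS in bytes per second,
# and (via Little's law) how many requests must be in flight to sustain
# that rate at the quoted per-request latencies.
iops = 1_000_000
block = 4 * 1024                    # 4 KiB per access (assumed)
bandwidth = iops * block            # bytes per second
print(f"{bandwidth / 1e9:.1f} GB/s")          # prints "4.1 GB/s"

for lat_us in (10, 60):
    in_flight = iops * lat_us * 1e-6          # Little's law: L = X * W
    print(f"at {lat_us} us latency: ~{in_flight:.0f} requests in flight")
```

So 1M 4 KiB IOPS corresponds to roughly 4 GB/s of data moved, and sustaining it at 10 μs to 60 μs per request requires only a modest queue depth of tens of outstanding requests.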
In November, Rambus confirmed that its SDA platform had been deployed at Los Alamos National Laboratory (LANL). The platform is being used to improve the performance of in-memory databases, graph analytics and other Big Data applications. Initial tests indicate the SDA platform’s performance is well matched to HPC interconnects, with the platform reducing data movement via efficient workflows.
In a broader sense, Rambus views the SDA Research Program and similar initiatives that emphasize NUMA (non-uniform memory access) characteristics as helping to evolve the von Neumann architecture from its nascent Cold War beginnings to a world dominated by the data-intensive IoT. We look forward to collaborating with our industry partners and customers on cutting-edge memory technologies and solutions for future servers and data centers.