Ed Sperling of Semiconductor Engineering recently noted that adding more cores to a processor doesn’t necessarily improve system performance. In fact, designing the wrong size or type of core may actually waste power.
“This has set the stage for a couple of broad shifts in the semiconductor industry,” Sperling explained. “Memory architectures can play an important role here. Most of the current approaches use on-chip SRAM and off-chip DRAM. But different packaging options, coupled with different memory architectures, can change the formula.”
According to Steven Woo, VP of Solutions Marketing at Rambus, memory is certainly capable of “rebalancing” system architectures and addressing bottlenecks.
“You can aggregate memory and make that available to a processor, which allows you to use fewer processors in a system,” Woo told the publication. “If you look at a data center, CPUs are often heavily underutilized, sometimes to the tune of 10% utilization. Multicore CPUs often have trouble getting enough memory capacity to keep the cores working, causing cores to be starved out. If you have enough memory capacity, it means you can improve CPU utilization to the point that you potentially need to purchase fewer CPUs.”
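A rough back-of-the-envelope model helps illustrate the point. The node counts, core counts, memory sizes and per-task requirements below are purely hypothetical (they are not drawn from Woo's comments), but they show how memory capacity, rather than compute, can end up dictating how many CPUs a data center has to buy.

```python
# Hypothetical sizing exercise: how many servers does a memory-bound
# workload need? All numbers below are illustrative assumptions.

CORES_PER_CPU = 32       # cores available per processor
TASKS_TO_RUN = 10_000    # concurrent tasks the cluster must hold
MEM_PER_TASK_GB = 4      # working set of a single task
CORES_PER_TASK = 0.25    # each task keeps a core busy only 25% of the time

def nodes_needed(mem_per_node_gb):
    # A node can host only as many tasks as fit in its memory...
    tasks_by_memory = mem_per_node_gb // MEM_PER_TASK_GB
    # ...and only as many as its cores can service.
    tasks_by_cores = int(CORES_PER_CPU / CORES_PER_TASK)
    tasks_per_node = min(tasks_by_memory, tasks_by_cores)
    nodes = -(-TASKS_TO_RUN // tasks_per_node)   # ceiling division
    utilization = (tasks_per_node * CORES_PER_TASK) / CORES_PER_CPU
    return nodes, utilization

for mem in (256, 1024):
    nodes, util = nodes_needed(mem)
    print(f"{mem:>5} GB/node -> {nodes} nodes, ~{util:.0%} core utilization")

# With 256 GB per node, the memory limit (64 tasks) binds long before the
# core limit (128 tasks), so cores sit about half idle and more CPUs must
# be bought. Quadrupling the memory lets the cores fill up and roughly
# halves the node count.
```

Under these assumed numbers, the memory-starved configuration needs 157 nodes at about 50% core utilization, while the memory-rich one needs 79 nodes with the cores fully occupied, which is the "purchase fewer CPUs" effect Woo describes.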
As Woo previously pointed out, the industry is experiencing a move towards near-data processing, where the data sets are so large that it’s actually cheaper to move the processing closer to the data than to move the data closer to the processing.
“You used to drag data to the processor, sometimes over long distances from storage arrays connected via slower networks. This works when there isn’t much data to be processed. But now that we’re processing data sets that are terabytes to petabytes in size, moving the data to the processing is a major bottleneck. It’s much more efficient to move the computation to the data,” he said. “Semantic awareness is another method that helps to minimize data movement, by allowing processing elements close to the data to understand the structure of that data and process it in a meaningful way without needing to move it to a server first.”
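As a minimal sketch of that idea (the record format, node interface and filter predicate here are invented for illustration and are not part of any Rambus product), the difference between the two approaches comes down to where the filter runs and how much data crosses the network as a result:

```python
# Illustrative contrast between "move the data" and "move the compute".
# StorageNode, its records and the predicate are hypothetical stand-ins.

class StorageNode:
    def __init__(self, records):
        self.records = records          # billions of rows in a real system

    def read_all(self):
        # Traditional path: ship every record over the network to the host.
        return list(self.records)

    def run_near_data(self, func):
        # Near-data path: ship the (tiny) function to the node and return
        # only its result, avoiding the bulk transfer entirely.
        return func(self.records)

def high_value(records):
    # A semantically aware filter: the node understands the record layout
    # well enough to evaluate the predicate itself.
    return [r for r in records if r["amount"] > 297_000]

node = StorageNode([{"id": i, "amount": i * 3} for i in range(100_000)])

# Host-side filtering pulls the whole data set; near-data filtering
# returns only the matching records.
pulled = [r for r in node.read_all() if r["amount"] > 297_000]
pushed = node.run_near_data(high_value)
assert pulled == pushed
print(f"moved to host: {len(node.read_all())} records "
      f"vs {len(pushed)} with near-data filtering")
```

The results are identical either way; what changes is that the near-data path moves a few hundred matching records instead of all 100,000, and that gap only widens as data sets grow to the terabyte and petabyte scale Woo mentions.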
This is precisely why Rambus’ Smart Data Acceleration (SDA) research platform focuses on architectures that offload computation to engines sitting closer to very large data sets, at multiple points in the memory and storage hierarchy. Potential use case scenarios include in-memory databases, real-time risk analytics, ad serving, neural imaging, transcoding and genome mapping.
Comprising software, firmware, FPGAs and significant amounts of memory, the platform operates as an effective test bed for new methods of optimizing and accelerating analytics in extremely large data sets. As such, the SDA’s versatile combination of hardware, software, firmware, drivers and bit files can be tailored to facilitate architectural exploration and optimization of specific applications.
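To make the offload pattern concrete, the fragment below sketches the kind of request a host might hand to a near-memory engine. It is purely hypothetical and is not the SDA programming interface; the request fields and the `submit` helper are invented for illustration.

```python
# Hypothetical offload descriptor -- NOT the actual SDA interface, just a
# sketch of a request a host could dispatch to a near-memory accelerator.

from dataclasses import dataclass

@dataclass
class OffloadRequest:
    operation: str         # e.g., "filter", "aggregate", "transcode"
    target_region: tuple   # (base_address, length) of the resident data set
    parameters: dict       # operation-specific arguments, e.g., a predicate

def submit(request: OffloadRequest):
    # In a real platform this would go through driver and firmware code that
    # programs the FPGA; here it simply echoes what would be dispatched.
    print(f"dispatching {request.operation} over "
          f"{request.target_region[1]} bytes")

submit(OffloadRequest("aggregate", (0x0, 64 << 30),
                      {"column": "risk_score", "op": "sum"}))
```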
Interested in learning more? You can check out our official Smart Data Acceleration page here.