Steven Woo, the vice president of systems and solutions and distinguished inventor in Rambus’ Office of the CTO, recently authored an article for Semiconductor Engineering that explores the data center in 2018 and beyond.
As Woo observes, there are a number of trends that continue to challenge the design of conventional von Neumann architectures, including the growing adoption of artificial intelligence (AI), machine learning, AR/VR, IoT, high-speed financial transactions, self-driving vehicles and blockchain/cryptocurrency mining.
“In 2018, we expect to see the continued deployment of FPGAs, GPUs and specialized silicon in the data center to address the needs of these applications,” he explains. “In addition, we anticipate the industry will focus increasing attention and resources on Post Moore-Era technologies such as cryogenic and quantum computing. Put simply, we expect continued industry attention to be focused on architectures and accelerators that address modern bottlenecks as the end of Moore’s Law looms large on the horizon.”
While the industry ramps up its focus on Post Moore-Era solutions, says Woo, work on advancing conventional architectures also continues.
“Modern data center applications continue to drive the need for increasing memory bandwidth and capacity that is not only accelerating the evolution and deployment of faster DDR memory, but also providing opportunities for high-performance memories such as HBM and GDDR,” he elaborates.
“In the near-term, DDR4 buffer chip adoption will continue to ramp as the industry collectively awaits the launch and deployment of DDR5. According to JEDEC, DDR5 memory will offer improved performance with greater power efficiency compared to previous generations of DDR. As planned, DDR5 will provide double the bandwidth and density over DDR4, along with delivering improved channel efficiency.”
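To put the headline bandwidth figure in rough numbers, the sketch below computes theoretical peak bandwidth for a single 64-bit DIMM from its transfer rate. The specific speed grades (DDR4-3200 and DDR5-6400) are illustrative assumptions, not figures from Woo's article, and real-world throughput depends on channel efficiency.

```python
# Rough peak-bandwidth comparison for a single 64-bit DIMM.
# Speed grades chosen for illustration only (DDR4-3200 vs. DDR5-6400);
# actual modules and sustained efficiencies vary.

def peak_bandwidth_gbps(transfers_per_sec: float, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s: transfers/s * bytes per transfer."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

ddr4 = peak_bandwidth_gbps(3200e6)   # DDR4-3200: ~25.6 GB/s
ddr5 = peak_bandwidth_gbps(6400e6)   # DDR5-6400: ~51.2 GB/s

print(f"DDR4-3200 peak: {ddr4:.1f} GB/s")
print(f"DDR5-6400 peak: {ddr5:.1f} GB/s ({ddr5 / ddr4:.1f}x)")
```

DDR5 also splits each DIMM into two independent subchannels, which is one source of the improved channel efficiency noted above.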
Concurrently, says Woo, the industry is exploring the most effective ways of deploying non-volatile memories and upcoming storage class memories in its relentless effort to improve performance, power efficiency and cost. For example, hybrid DIMM technologies such as NVDIMM-P are expected to enable new memory solutions optimized for cost, energy consumption and performance.
“NVDIMMs offer persistence, which can improve fault-tolerance and data integrity, while also potentially optimizing the performance of the memory hierarchy. Tasks where this technology shows promise include indexing, message queuing, logging, batch processing, on-line transactions and storage applications,” he continues. “NVDIMMs can potentially benefit huge in-memory computing tasks such as in-memory databases that are integral parts of search engines and hyper-scale computing applications.”
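As a rough illustration of how persistence at memory speed changes application design, the minimal sketch below memory-maps a log file that would live on a DAX-mounted NVDIMM region and flushes an in-place update. The mount path and record layout are hypothetical, and a production system would typically use a persistent-memory library such as PMDK for crash-consistent updates rather than raw mmap.

```python
# Minimal sketch: persisting a small record to a file backed by an
# NVDIMM (e.g., on a DAX-mounted filesystem). Path and record format
# are hypothetical; the file is assumed to already exist and be sized
# to at least LOG_SIZE bytes.
import mmap
import os
import struct

PMEM_PATH = "/mnt/pmem0/txn_log.bin"   # hypothetical DAX mount point
LOG_SIZE = 4096                         # one page of log records
RECORD_FMT = "<qq"                      # txn id, amount (8 bytes each)

def write_record(slot: int, txn_id: int, amount: int) -> None:
    """Write one 16-byte record into the mapped log and flush it."""
    record_size = struct.calcsize(RECORD_FMT)
    fd = os.open(PMEM_PATH, os.O_RDWR)
    try:
        with mmap.mmap(fd, LOG_SIZE) as log:
            start = slot * record_size
            log[start:start + record_size] = struct.pack(RECORD_FMT, txn_id, amount)
            log.flush()                 # push the update toward persistent media
    finally:
        os.close(fd)
```

Because the update survives a power loss once flushed, tasks such as transaction logging or index updates can skip a separate trip through the storage stack, which is the performance angle Woo describes.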
For many IoT applications, says Woo, we can expect to see fog and edge computing gaining mindshare in 2018 and beyond.
“These paradigms take a different approach to computing on large quantities of data by moving the processing closer to the data rather than the conventional method of moving data towards more centralized data centers,” he adds.
“By moving as much computing as is practical to the edge of the network (and closer to the devices that are generating data), problems associated with moving large amounts of data to centralized data centers that consume precious network bandwidth can be avoided, improving performance, cost and power efficiency.”
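A simple way to picture the bandwidth saving is an edge node that summarizes raw sensor samples locally and forwards only the aggregate upstream. The sketch below is a hypothetical illustration of that pattern, not a specific fog-computing framework.

```python
# Hypothetical illustration of edge-side aggregation: instead of shipping
# every raw sample to a central data center, the edge node reduces a
# window of readings to a small summary and sends only that upstream.
from statistics import mean

def summarize_window(samples: list[float]) -> dict:
    """Collapse a window of raw sensor readings into a compact summary."""
    return {
        "count": len(samples),
        "mean": mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

# One hour of 1 Hz temperature readings (3,600 floats) becomes a
# four-field summary -- a large reduction in data sent over the network.
raw_window = [20.0 + 0.01 * i for i in range(3600)]
print(summarize_window(raw_window))
```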
In conclusion, says Woo, the recent shifts toward more data-centric computing have driven the adoption of technologies that improve the memory hierarchy and that alleviate data movement bottlenecks.
“In 2018 and beyond, we will see a continued focus on memory hierarchies, as advances in computing are increasingly limited by the ability of memory systems to keep processing units fed with data. The potential for large-scale architecture changes and the increasing adoption of newer, non-von Neumann architectures are generating excitement in the industry, with memory systems once again taking center stage as a critical area of innovation,” he concludes.