Frank Ferro, a senior director of product management at Rambus, recently sat down with Ed Sperling of Semiconductor Engineering and other industry participants to discuss the slew of new memory initiatives and entrants.
According to Ferro, these initiatives were prompted by the need for a more efficient memory hierarchy, particularly with respect to latency.
“You have more bandwidth needs, but how do you get that bandwidth more efficiently? Everyone has been using DDR, and maybe getting HBM as another layer in the hierarchy,” he told Sperling. “Right now there’s a big gap with flash. There is a lot of activity trying to fill the gap between DDR and flash with RRAM or XPoint.”
As Ferro notes, the industry is also exploring various server architectures to help fill this gap.
“[This] gets more into the system challenges. There are all these multiple processors, and the question is how do we utilize memory more efficiently. That’s the big bottleneck right now,” he observed.
In addition, says Ferro, the industry is going to need some fast memory at the local level.
“At the extreme level, for an MCU you have a very small amount of ROM and RAM that you have to fit everything into,” he said. “The ability to expand that and not go off-chip will require SRAM. As you get bigger CPUs, that’s more about caches than SRAM.”
In terms of embedded DRAM, Ferro says the concept has been “kicking around” for a long time.
“There are technical advantages to embedded DRAM, but the economics don’t seem to work well. The size is too big and the cost is too high. If it’s vertically integrated, then embedded DRAM could work because you don’t necessarily care,” he concluded. “If I sell a chip that’s bigger than a competitor’s chip, I’m going to lose. But if it’s all vertical, maybe you can take advantage of power and performance savings with embedded DRAM. But we don’t see it.”
Note: The full text of “The Future of Memory” by Ed Sperling can be read on Semiconductor Engineering.