Memory and the IoT

This entry was posted on Wednesday, August 2nd, 2017.

Semiconductor Engineering’s Ed Sperling and Jeff Dorsch recently wrote an article about the challenges of chip design in the age of the IoT. As Sperling notes, these designs encompass sensors, various types of processors, a growing menu of on-chip and off-chip memory types, and a long list of I/O and interface IP, chips, and chiplets.

“There also are different approaches emerging for packaging these devices, including custom ASICs in the cloud, various SoCs, 2.5D chips for the networks and servers and fan-out wafer-level packaging for MEMS and sensor clusters,” he explained.

“In addition, there are safety and security considerations involved in developing chips that go into increasingly connected cars, medical devices and industrial control systems. That adds to the complexity and cost, as well as the time it takes to design, validate, verify and debug these devices.”

In terms of memory, say Sperling and Dorsch, while DRAM and SRAM remain the go-to technologies, improvements are slowing. Indeed, according to Rambus Chief Scientist Craig Hampel, DRAM scaling once provided a 35% improvement in cost per bit per generation, but by 2010 this figure had dropped to 25%. This has prompted chipmakers to eye a range of new memory types, including MRAM, phase-change memory and ReRAM, as well as load-reduced DIMM (LRDIMM), non-volatile DIMM (NVDIMM), storage-class memory DIMM (SCMDIMM) and caching DIMM.
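To see why the slowdown matters, it helps to remember that per-generation improvements compound. The sketch below is purely illustrative, using only the two figures quoted above (35% historically vs. 25% by 2010); the number of generations is an arbitrary assumption for the example.

```python
# Illustrative only: how the per-generation cost-per-bit improvement
# quoted above compounds over several process shrinks.

def cost_per_bit(start_cost, improvement_per_gen, generations):
    """Relative cost per bit after a number of process generations."""
    return start_cost * (1 - improvement_per_gen) ** generations

for gens in (1, 3, 5):
    fast = cost_per_bit(1.0, 0.35, gens)  # historical 35% per generation
    slow = cost_per_bit(1.0, 0.25, gens)  # post-2010 25% per generation
    print(f"After {gens} generation(s): 35%/gen -> {fast:.2f}, 25%/gen -> {slow:.2f}")
```

After three generations the gap is already substantial (roughly 0.27 vs. 0.42 of the starting cost per bit), which is why even a ten-point drop in the scaling rate pushes chipmakers toward alternative memories.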

“There are three essentials of a memory solution,” Hampel told Semiconductor Engineering. “The first is that it needs to satisfy functional needs of memory for block size and cost. Second, it is a ubiquitous interface. Wherever there is a hole, there are places you can put storage, but for some of the existing memory types the latency and block size are too high. The third thing is that you need software awareness to be able to take advantage of that memory.”

As we’ve previously discussed on Rambus Press, high-bandwidth memory (HBM) is another memory type that offers a number of distinct capabilities for a new digital age dominated by the IoT, including moving memory closer to the CPU while increasing both density and bandwidth. The maximum speed for HBM is 2Gbits/s per pin, for a total bandwidth of 256Gbytes/s. And while that per-pin bit rate is similar to DDR3 at 2.1Gbps, HBM’s eight 128-bit channels give it approximately 15x more bandwidth. In short, HBM takes advantage of existing technologies to create another tier of memory, thus bolstering the overall server memory architecture.
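The figures above are easy to verify with back-of-the-envelope arithmetic. This sketch uses only the numbers from the text, plus the assumption of a standard 64-bit DDR3 channel:

```python
# Sanity check of the HBM bandwidth figures quoted above.
# Assumption: DDR3 is compared on a single standard 64-bit channel.

def bandwidth_gbytes(bit_rate_gbps, bus_width_bits):
    """Peak bandwidth in Gbytes/s for a given per-pin bit rate and bus width."""
    return bit_rate_gbps * bus_width_bits / 8  # divide by 8 bits per byte

hbm = bandwidth_gbytes(2.0, 8 * 128)   # 8 channels x 128 bits at 2 Gbit/s per pin
ddr3 = bandwidth_gbytes(2.1, 64)       # one 64-bit DDR3 channel at 2.1 Gbit/s per pin

print(f"HBM:   {hbm:.0f} Gbytes/s")    # 256 Gbytes/s, matching the figure above
print(f"DDR3:  {ddr3:.1f} Gbytes/s")
print(f"Ratio: {hbm / ddr3:.1f}x")     # roughly 15x
```

The 1024 data pins (8 × 128) at 2Gbits/s each work out to exactly the 256Gbytes/s cited, and the ratio against a single DDR3 channel lands at the approximately 15x the article mentions.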

Concurrently, the industry is preparing for the launch of DDR5, which is slated to offer double the bandwidth and density over DDR4, along with delivering improved channel efficiency. These enhancements, combined with a more user-friendly interface for server and client platforms, will enable high performance and improved power management in a wide variety of applications.

Interested in learning more about chip design for the IoT? The full text of “What Does An IoT Chip Look Like?” by Ed Sperling and Jeff Dorsch is available on Semiconductor Engineering.