Ed Sperling of Semiconductor Engineering recently noted that rightsizing chip architecture has grown more complex. Essentially, rightsizing means targeting chips to specific application needs: delivering sufficient performance while minimizing power and cost.
“[Rightsizing] has been a topic of conversation across the semiconductor industry for years because as power becomes a bigger issue, improving the efficiency of designs by limiting the compute resources is a big opportunity,” he explained.
“[These days], there are more options to choose from, more potential bottlenecks, and many more choices about what process to use at what process node and for which markets and price points.”
As Sperling points out, compute cycles may be cheaper than in the past, but they still carry a measurable cost. For mobile devices, that cost shows up as shorter battery life; for data centers, it appears in the utility bills for powering and cooling server racks.
“This also accounts for why new memory types, such as Magnetoresistive RAM, ReRAM and 3D XPoint, are under development. All of [these] are an attempt to deal with similar issues from the memory side,” Sperling continued. “But rightsizing has become much more difficult than just pairing a processor’s frequency or size to a specific application or changing the memory size or type.”
Indeed, current considerations include whether processing occurs on-chip, at the edge of the network, or in the cloud. Companies may also examine which materials are used for the substrate and insulation, which process node works best for a particular application, and how various components are packaged.
According to Steve Woo, VP of Enterprise Solutions Technology at Rambus, processor speed has tracked Moore’s Law for several decades, while other systems and components haven’t kept pace. The resulting bottleneck, says Woo, limits access speeds to memory and the network. And while increasing the size and amount of memory has historically improved performance, that maxim may no longer hold true.
“If you use smaller memory chips, you can run that memory faster than if you’re adding an enormous memory system. In the past, the memory hierarchy was on-chip cache, off-chip cache in the same package, discrete DRAM, and solid state drives or other storage,” Woo explained. “We’re likely to see more levels added, which could include high-bandwidth memory (HBM) or Hybrid Memory Cube (HMC). So rather than just adding larger capacity memory, the metric may be [more] about power per bit or power/performance per dollar.”
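To make that shift in metrics concrete, here is a minimal sketch in Python comparing two memory configurations by power per bit and bandwidth per dollar rather than raw capacity. The configurations and every figure in them are hypothetical, chosen only to illustrate the kind of tradeoff Woo describes, not drawn from any real product:

```python
# Hypothetical comparison of two memory configurations, illustrating the
# idea that the deciding metric may be power per bit or performance per
# dollar rather than raw capacity. All numbers are made up for illustration.

from dataclasses import dataclass


@dataclass
class MemoryConfig:
    name: str
    capacity_gb: float    # total capacity in gigabytes
    bandwidth_gb_s: float # sustained bandwidth in GB/s
    power_watts: float    # operating power
    cost_dollars: float   # bill-of-materials cost

    def power_per_bit(self) -> float:
        """Watts consumed per bit of capacity (lower is better)."""
        return self.power_watts / (self.capacity_gb * 8e9)

    def perf_per_dollar(self) -> float:
        """Bandwidth delivered per dollar spent (higher is better)."""
        return self.bandwidth_gb_s / self.cost_dollars


# A small, fast HBM-style stack vs. a large, slower conventional DRAM array.
small_fast = MemoryConfig("small/fast", capacity_gb=16, bandwidth_gb_s=256,
                          power_watts=5, cost_dollars=120)
large_slow = MemoryConfig("large/slow", capacity_gb=128, bandwidth_gb_s=64,
                          power_watts=20, cost_dollars=200)

for cfg in (small_fast, large_slow):
    print(f"{cfg.name}: {cfg.power_per_bit():.2e} W/bit, "
          f"{cfg.perf_per_dollar():.2f} GB/s per dollar")
```

With these invented numbers, the smaller configuration wins on both metrics despite holding an eighth of the capacity, which is the essence of rightsizing the memory system rather than simply enlarging it.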
As Sperling notes, the standard rule of thumb was once straightforward: at each step up the memory hierarchy, latency dropped by a factor of 10, bandwidth increased by the same factor, and cost per bit rose.
“But there are so many kinds of memories being added into the mix that those kinds of tradeoffs are becoming harder to quantify,” he concluded.
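For a sense of what that old rule of thumb implied, the short sketch below applies the 10x-per-level scaling Sperling describes to a classic four-level hierarchy. The baseline latency and bandwidth figures are hypothetical; the point is only to show how cleanly the levels once separated, a separation that intermediate tiers such as HBM or HMC now blur:

```python
# Illustration of the classic rule of thumb: each step up the memory
# hierarchy cuts latency by ~10x and raises bandwidth by ~10x, while cost
# per bit rises. The starting figures below are hypothetical.

levels = ["storage (SSD)", "discrete DRAM", "off-chip cache", "on-chip cache"]

latency_ns = 100_000.0  # hypothetical latency at the bottom of the hierarchy
bandwidth_gb_s = 0.5    # hypothetical bandwidth at the bottom of the hierarchy

for level in levels:
    print(f"{level:>15}: ~{latency_ns:>10,.1f} ns latency, "
          f"~{bandwidth_gb_s:>8,.1f} GB/s bandwidth")
    latency_ns /= 10      # latency drops 10x per level
    bandwidth_gb_s *= 10  # bandwidth rises 10x per level

# New tiers such as HBM or HMC fall between these levels, so the clean 10x
# steps (and the accompanying cost-per-bit tradeoff) no longer line up neatly.
```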
Interested in learning more about rightsizing? The full text of “Rightsizing Challenges Grow” by Ed Sperling is available at Semiconductor Engineering.