Understanding peak power limitations in the data center
This entry was posted on Thursday, September 28th, 2017.
Semiconductor Engineering’s Ann Steffora Mutschler has written an article about how peak power poses a serious design challenge for chips, electronic systems and data centers. Peak power issues, says Mutschler, have become more significant as process nodes shrink: dielectrics and wires are thinner, and noise margins are lower.
Peak power, elaborates Mutschler, is the maximum power a device draws at any point in time, and it often occurs when the device switches from an “off” or “sleep” state to an “on” state. Peak power can also spike in a power domain that controls certain blocks with memories, such as when one or more applications are running compute-intensive workloads.
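The distinction between peak and average power can be made concrete with a short sketch. This is purely illustrative and not from the article; the power trace and all numbers are invented to show a typical wake-up transient.

```python
# Illustrative sketch (numbers are hypothetical): peak vs. average
# power computed from a sampled power trace of a block waking up.

def peak_and_average(samples_mw):
    """Return (peak, average) power in mW for a list of samples."""
    peak = max(samples_mw)
    avg = sum(samples_mw) / len(samples_mw)
    return peak, avg

# Sleep state, a wake-up spike, then settling to steady-state draw.
trace_mw = [2, 2, 2, 450, 320, 180, 150, 150, 150]

peak, avg = peak_and_average(trace_mw)
print(f"peak = {peak} mW, average = {avg:.0f} mW")
```

The wake-up transition dominates the peak even though the average stays far below it, which is why a design sized only for average power can still violate its peak-power budget.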
According to Mutschler, peak power issues are a direct result of growing design complexity, fixed or shrinking power budgets, and the need to process more data more quickly. For example, in mobile devices, more features require more circuitry to be turned off or put into sleep mode. In the data center, the sheer volume of data that requires rapid processing continues to grow. However, servers can only run so fast due to rack thermal limits.
“This is all directly related to power dissipation and density,” Frank Ferro, senior director of product management at Rambus, told Semiconductor Engineering. “If you look at SerDes, the first question customers ask is, ‘What is the power, performance and area (PPA)?’ And they get there with memory, as well, as you go up in speed. These companies may budget a certain percentage for memory, SerDes, the processor and the PHY. If you look at some of the networking chips, they give off a lot of heat. If the peak power is not within expectations, they knock you out of the running early.”
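The budgeting Ferro describes can be sketched in a few lines: a chip-level peak-power budget split by percentage across major blocks, with a check that each block's estimate stays within its slice. The block names follow his examples, but every number here is a hypothetical placeholder.

```python
# Hypothetical sketch of percentage-based peak-power budgeting.
# The total budget, shares, and estimates are all invented values.

CHIP_BUDGET_W = 20.0  # total peak-power budget for the chip

# Budgeted share per block, as fractions of the chip budget.
shares = {"memory": 0.25, "serdes": 0.20, "processor": 0.40, "phy": 0.15}

# Estimated peak power per block from early analysis (hypothetical).
estimates_w = {"memory": 4.6, "serdes": 4.5, "processor": 7.9, "phy": 2.8}

def over_budget(shares, estimates, total_w):
    """Return the blocks whose estimated peak exceeds their slice."""
    return [name for name, frac in shares.items()
            if estimates[name] > frac * total_w]

print(over_budget(shares, estimates_w, CHIP_BUDGET_W))  # -> ['serdes']
```

A block that blows its slice this early is exactly the "knocked out of the running" scenario Ferro mentions: the vendor either re-architects the block or loses the socket.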
As Ferro notes, scaling SerDes from 28nm processes to 14nm hasn't yielded a significant performance improvement. However, says Ferro, this is changing as designs blend digital and analog circuitry.
“You always need to look at the architecture for the digital side, because it’s easier to port, but it’s also a challenge to get the performance up,” he concluded. “But there’s also a challenge on the performance side. If you look at what’s been going on in Ethernet, it took seven years to move from 1 gigabit per second to 10 gigabits per second. Now, 28G is the workhorse and we’re seeing companies asking for 56 and 112. The rate of development is phenomenal.”
Interested in learning more? The full text of “Managing Peak Power” by Ann Steffora Mutschler is available here.