Semiconductor Engineering editor-in-chief Ed Sperling has written an article exploring the evolution of the data center in the context of the cloud. As Sperling notes, corporate data centers are notoriously reluctant to adopt new technology.
“There is too much at stake to make quick changes, which accounts for a number of failed semiconductor startups over the past decade with better ideas for more efficient processors, not to mention rapid consolidation in other areas,” he explained. “But as the amount of data increases, and the cost of processing that data decreases at a slower rate than the volume increases, the whole market has begun searching for new approaches.”
According to Sperling, this is nothing less than a “new wrinkle” in Moore’s Law. The last major shift in the data center paradigm came with the widespread adoption of virtualization in the early 2000s, which allowed companies to raise server utilization rates and cut the cost of powering and cooling server racks. This approach, says Sperling, let data centers shut off entire server racks when not in use, essentially applying the dark silicon concept at a macro level.
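As a rough illustration of why consolidation saves power, the sketch below packs virtual machine demands onto as few hosts as possible so that unused racks can be switched off. The host capacity, rack size and first-fit policy are illustrative assumptions, not a description of any particular hypervisor or data center.

```python
# A minimal sketch (hypothetical capacities) of the consolidation idea behind
# virtualization: pack VMs onto as few hosts as possible so entire racks of
# unused hosts can be powered down -- dark silicon applied at the rack level.

from typing import List

HOST_CAPACITY = 32      # assumed vCPUs per physical host
HOSTS_PER_RACK = 10     # assumed hosts per rack

def consolidate(vm_demands: List[int]) -> List[List[int]]:
    """First-fit-decreasing packing of VM vCPU demands onto hosts."""
    hosts: List[List[int]] = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= HOST_CAPACITY:
                host.append(demand)
                break
        else:
            hosts.append([demand])   # open a new host
    return hosts

vms = [4, 8, 2, 16, 4, 8, 2, 2, 4, 8]             # vCPU demand per VM
hosts = consolidate(vms)
racks_needed = -(-len(hosts) // HOSTS_PER_RACK)   # ceiling division
print(f"{len(hosts)} hosts in use -> {racks_needed} rack(s) powered on")
```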
“The next change involves adding far more granularity into the data center architecture,” he continued. “That means far more intelligent scheduling—both from a time and distance perspective—and partitioning jobs in software to improve efficiency. And it means new architectures everywhere, from the chip to the software stack to the servers themselves, as well as entirely new concepts for what constitutes a data center.”
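To make the “time and distance” idea concrete, here is a minimal sketch of a scheduler that scores candidate nodes by both estimated queueing delay and how far a job’s data would have to travel. The node list, weights and cost function are hypothetical, intended only to show the trade-off Sperling describes.

```python
# A hedged sketch of time- and distance-aware scheduling: candidate nodes are
# scored by how long the job would wait and how far its data must move.
# The weights and nodes are illustrative assumptions, not a real scheduler.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    queue_wait_s: float      # estimated wait before the job could start
    data_distance_km: float  # rough distance to where the job's data lives

def placement_cost(node: Node, w_time: float = 1.0, w_distance: float = 0.05) -> float:
    """Lower is better: weighted sum of queueing delay and data movement."""
    return w_time * node.queue_wait_s + w_distance * node.data_distance_km

candidates = [
    Node("rack-a-03", queue_wait_s=120.0, data_distance_km=0.1),
    Node("rack-b-17", queue_wait_s=10.0,  data_distance_km=800.0),
    Node("rack-c-05", queue_wait_s=45.0,  data_distance_km=5.0),
]

best = min(candidates, key=placement_cost)
print(f"Schedule job on {best.name}")
```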
Indeed, as Rambus VP of solutions marketing Steve Woo told Semiconductor Engineering, it is already quite “painful” for data centers to move data back and forth.
“Disk and networking performance are not as high as the flops and number of instructions per second of the CPUs,” he explained. “So, do you just network machines together? Or do you think about newer architectures?”
As Woo previously told Rambus Press, one of the biggest challenges the industry faces today is expanding memory and storage bandwidth and capacity in parallel to meet the growing demands of processors with increasing core counts.
“Each core requires a certain amount of memory capacity and bandwidth. The rapid growth in the number of modern CPU cores is putting pressure on memory and storage systems,” he said. “The flexibility of modular rack scale architectures that disaggregate major resources allows administrators to scale up or down appropriately to meet their needs.”
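A back-of-the-envelope sketch of this scaling pressure: if each core needs a fixed slice of memory capacity and bandwidth, total requirements grow linearly with core count. The per-core figures below are assumptions for illustration, not Rambus data.

```python
# Illustrative arithmetic for Woo's point that each core requires a certain
# amount of memory capacity and bandwidth. Per-core figures are assumptions.

GB_PER_CORE = 2.0        # assumed memory capacity needed per core (GB)
GBPS_PER_CORE = 4.0      # assumed memory bandwidth needed per core (GB/s)

def memory_requirements(core_count: int):
    """Return (capacity_GB, bandwidth_GBps) required to keep all cores fed."""
    return core_count * GB_PER_CORE, core_count * GBPS_PER_CORE

for cores in (16, 64, 128):
    cap, bw = memory_requirements(cores)
    print(f"{cores:>3} cores -> {cap:6.0f} GB capacity, {bw:6.0f} GB/s bandwidth")
```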
To be sure, data centers have traditionally focused on raw compute, causing power and cooling costs to skyrocket. An alternative paradigm, says Woo, is to continue the trend toward modular resources with varying levels of processing, memory, bandwidth, capacity and storage.
“Modular, disaggregated architectures allow resources to be provided and assigned as needed to meet the widely varying demands of modern workloads. For example, serving up basic webpages and streaming YouTube videos can be achieved on an individual server with modest compute and memory resources,” he continued. “However, heavy analytics jobs or scientific computing tasks that process gigabytes of user or machine data require much higher compute, memory and storage capabilities and often entail many resources working together.”
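The disaggregation Woo describes can be sketched as workloads drawing only what they need from shared pools of compute, memory and storage rather than claiming whole servers. The pool sizes and workload profiles below are invented for illustration.

```python
# A minimal sketch of disaggregated allocation: workloads reserve resources
# from shared pools instead of fixed servers. All numbers are made up.

pools = {"cores": 1024, "memory_gb": 8192, "storage_tb": 500}

workloads = {
    "web_frontend":  {"cores": 4,   "memory_gb": 16,   "storage_tb": 0.1},
    "analytics_job": {"cores": 256, "memory_gb": 4096, "storage_tb": 50},
}

def allocate(name: str, demand: dict) -> bool:
    """Reserve resources from the shared pools if every dimension fits."""
    if all(pools[r] >= need for r, need in demand.items()):
        for r, need in demand.items():
            pools[r] -= need
        print(f"{name}: allocated {demand}")
        return True
    print(f"{name}: insufficient pool capacity")
    return False

for name, demand in workloads.items():
    allocate(name, demand)
print("Remaining pools:", pools)
```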
As Woo emphasizes, a modular, disaggregated approach to data centers can help balance the ever-increasing requirements of Big Data and growing core counts against accelerating demands for memory and storage capacity and bandwidth.
“Without a parallel boost in bandwidth and capacity, data centers won’t be able to take advantage of increasing CPU core counts,” he added. “Simply put, the key to building the data center of the future is providing a healthy balance between compute, memory and storage. The flexibility afforded by disaggregating resources is a compelling way to meet the needs of modern workloads.”