Jeff Dorsch of Semiconductor Engineering recently noted that the various compute engines available on the market today each come with distinct advantages and drawbacks.
“[For example], CPUs offer high capacity at low latency. GPUs have the highest per-pin bandwidth. And FPGAs are designed to be very general,” writes Dorsch. “But each also has its limitations. CPUs require more integration at advanced process nodes, [while] GPUs are limited by the amount of memory that can be put on a chip.”
According to Steven Woo, VP of Systems and Solutions at Rambus, flexibility is only one of the advantages of FPGAs, as field programmable gate arrays can also be attached to the same types of memories as CPUs.
“It’s really a very flexible kind of chip. For a specific application or acceleration need, FPGAs can provide improved performance and better [energy] efficiency,” he told Semiconductor Engineering. “The ease of developing something quickly to test out new concepts makes FPGAs an ideal platform for innovation. [This is why] some design teams start with an FPGA, then turn it into an ASIC to get a hardened version of the logic they put into an FPGA. They start with an FPGA to see if that market grows. That could justify the cost of developing an ASIC.”
As Woo explains, GPUs are perhaps best suited for applications such as visualization, graphics processing, various types of scientific computations and machine learning.
“The combination of numerous parallel pipelines with high bandwidth memory makes GPUs the compute engine of choice for these types of applications,” he told Rambus Press. “For other types of workloads, FPGAs are actively carving out a place in modern and future data centers.”
In addition to offering versatility, says Woo, reprogrammable FPGAs can be loaded with a wide range of algorithms without the difficult and costly design process typically associated with ASICs. Moreover, the flexible nature of FPGAs allows the silicon to be easily reconfigured as application demands change.
“When paired with traditional CPUs, FPGAs are capable of providing application-specific hardware acceleration that can be updated over time,” he added. “Applications can also be partitioned into parts that run most efficiently on the CPU and other parts which run most efficiently on the FPGA.”
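The CPU/FPGA partitioning Woo describes can be illustrated with a minimal sketch. Note that this is purely conceptual: the `FpgaAccelerator` class and its `offload()` method are invented stand-ins for a real accelerator interface (actual deployments would use a vendor runtime such as OpenCL or an HLS-generated driver), while the branch-heavy filtering stays on the CPU.

```python
# Conceptual sketch of partitioning an application between a CPU path
# and a hypothetical FPGA accelerator. FpgaAccelerator is an invented
# stand-in, not a real API.

def cpu_branchy_filter(records):
    # Control-heavy, data-dependent logic: typically runs best on a CPU.
    return [r for r in records if r["keep"]]

class FpgaAccelerator:
    """Stand-in for a reconfigurable accelerator. In real hardware this
    kernel would be a bitstream loaded onto the FPGA fabric, and it
    could be swapped out as workloads change."""
    def offload(self, values):
        # Regular, data-parallel arithmetic: a natural fit for an FPGA kernel.
        return [v * v for v in values]

def run_pipeline(records, accel):
    kept = cpu_branchy_filter(records)                 # CPU handles branching
    return accel.offload([r["value"] for r in kept])   # accelerator handles math

if __name__ == "__main__":
    data = [{"keep": True, "value": 3},
            {"keep": False, "value": 4},
            {"keep": True, "value": 5}]
    print(run_pipeline(data, FpgaAccelerator()))  # [9, 25]
```

The key design point is the narrow interface between the two halves: because the accelerator sits behind a single `offload()` boundary, its implementation can be updated (or the FPGA reconfigured) without touching the CPU-side code.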
According to Woo, Intel’s recent acquisition of Altera offers an important proof point of the critical role FPGAs are already playing in shaping future computing platforms. Similarly, Microsoft’s Project Catapult highlights the crucial role FPGAs play in evolving data centers: the Bing search engine currently uses FPGAs to increase throughput, and their re-configurability allows Microsoft to match the changing needs of data center workloads.
In a broader sense, says Woo, the days of relying on Moore’s Law and Dennard Scaling to optimize performance and power efficiency have long since passed. Going forward, the industry must focus on advances in system architecture to drive significant improvements.
“We believe FPGAs will exist alongside other silicon, and will play an important role in helping to evolve computing platforms by enabling flexible acceleration and near data processing,” he concluded.