Brian Bailey of Semiconductor Engineering observes that systems on chip (SoCs) have long incorporated a wide range of processing elements, from general-purpose CPUs to DSPs, GPUs and custom processors highly optimized for specific tasks.
“When none of these options provides the necessary performance, or consumes too much power, custom hardware takes over. But there is one type of processing element that has rarely been used in a major SoC—the FPGA,” he explained. “Solutions implemented in FPGAs are often faster than any of the instruction-set processors. In most cases they complete a computation with lower total energy consumption.”
However, as Bailey points out, the overall power consumption of embedded FPGAs (as opposed to discrete FPGAs) is higher, and their performance lower, than that of custom hardware. In addition, FPGAs typically occupy significantly more silicon area than equivalent ASIC logic.
“In the past, several companies have attempted to pioneer the embedded FPGA space, but none have been successful,” he continued. “To understand why eFPGAs may succeed this time around requires an understanding of both the changes happening across the industry at large and within specific markets.”
Indeed, numerous markets have traditionally relied on Moore’s Law to deliver increasing levels of integration and lower power, but as scaling wanes, product cycles at the leading edge of the market are slowing.
“Networking and communications chips have long design cycles and are typically fabricated in advanced process nodes with $2 million to $5 million mask costs,” Geoffrey Tate, CEO of Flex Logix, told Semiconductor Engineering. “The problem with this is that standards such as protocols and packets are changing rapidly. It used to be that these chips would be redesigned every couple of years to keep up, which is an increasingly expensive proposition. In addition, data centers are pushing to make chips programmable so they can be upgraded in-system automatically, thereby improving the economics of data centers and enabling them to do their own customization and optimization for a competitive edge.”
Steven Woo, VP of Systems and Solutions at Rambus, expressed similar sentiments.
“Rising design and mask costs at smaller process geometries, coupled with increasing chip complexity, verification effort and embedded software development, make the economics of chip design difficult, especially for smaller markets,” he explained. “FPGA technology offers the potential to help address this by allowing multiple markets and applications to be served with a single chip.”
As Woo notes, there is a tradeoff between the flexibility afforded by FPGAs and the increased area overhead such versatility incurs.
“The key is whether or not critical metrics such as application performance, power and TCO justify the overhead of increased flexibility. The industry is still in the early days of understanding how to use FPGAs in environments like data centers, so the adoption of FPGAs in this market will depend greatly on how much applications can benefit from them. As Microsoft has demonstrated, there are already compelling reasons to adopt them for modern workloads,” he added.
Interested in learning more? The full text of “Embedded FPGAs Going Mainstream?” is available on Semiconductor Engineering. You can also check out our article archive on FPGAs.