Karl Freund of Moor Insights & Strategy recently penned an article for Forbes about Microsoft’s extensive deployment of FPGAs in the data center and beyond.
As Freund notes, Microsoft currently uses field programmable gate arrays to accelerate its Bing search engine (Project Catapult) along with its Azure cloud, which deploys at least one FPGA in each server, delivering over one “exa-op” (10^18, or a billion billion, operations per second) of total throughput across data centers in 15 countries.
“The real key to Microsoft’s heart is not just performance or power consumption. Microsoft points to the flexibility that FPGAs afford due to their inherent programmability,” writes Freund. “The ‘P’ in FPGA means programmable and therein may lay their most important value to Microsoft and in the data center in general. Once programmed, the FPGA hardware itself can be changed (reprogrammed) in the field (hence the ‘F’) to enable it to evolve with changes in the company’s business, science and underlying logic.”
According to Freund, Microsoft’s plans for FPGAs extend far and wide.
“Beyond Deep Learning acceleration, Microsoft is using FPGAs to accelerate networking and the complex software required to implement software-defined networks,” he adds.
Commenting on the above, Steven Woo, VP of Systems and Solutions at Rambus, notes that a number of major industry players have turned to FPGAs to accelerate a wide range of data-intensive tasks which historically have been distributed across multiple racks of servers.
“Aggregating numerous individual servers into a pool of processing units is a ‘one size fits all’ approach typically characterized by a relatively fixed amount of compute, memory, storage and I/O resources in each server,” he explains. “However, in practice, this paradigm suffers from an acute under-utilization of resources. This is because specific tasks may require a different amount of each resource during execution.”
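Woo’s point about under-utilization can be made concrete with a toy calculation. The sketch below models a fixed-configuration server and shows how a task that exhausts one resource strands the others. The server sizes and task demands are illustrative assumptions, not figures from the article.

```python
# Toy model of the "one size fits all" server pool described above.
# Each server bundles a fixed amount of every resource; a task that
# exhausts its dominant resource strands the remainder of the others.
# All numbers here are illustrative assumptions.

SERVER = {"cores": 32, "mem_gb": 256}

def stranded(task):
    """Return, per resource, the fraction of the server left unusable
    once the task's dominant resource is fully consumed."""
    # Fraction of the server each resource demand consumes
    fractions = {k: task[k] / SERVER[k] for k in SERVER}
    dominant = max(fractions.values())
    # Whatever the dominant resource pins down, the rest is stranded
    return {k: dominant - fractions[k] for k in fractions}

# A memory-heavy analytics task: only 4 cores, but 240 GB of DRAM
task = {"cores": 4, "mem_gb": 240}
print(stranded(task))  # over 80% of the server's cores sit idle
```

In this hypothetical case the memory demand pins down nearly the whole server while more than 80% of its cores go unused, which is the sort of mismatch that pooled, FPGA-assisted architectures aim to avoid.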
Moreover, says Woo, legacy server architectures can also contribute to low CPU utilization rates, high data-access latencies, reduced power efficiency and increased total cost of ownership (TCO). In contrast, versatile FPGAs allow companies to evolve a more modular, flexible and effective approach for data centers, acceleration, HPC and beyond. For example, Baidu engineers are using field programmable gate arrays to accelerate SQL queries, while DeePhi is turning to reconfigurable devices such as FPGAs for deep learning.
“Together with other silicon, such as GPU and CPUs, FPGAs will play an increasingly important role in helping to evolve computing platforms by enabling flexible acceleration and near data processing,” Woo concludes. “At Rambus, we look forward to collaborating with our industry partners and customers on cutting-edge memory technologies and solutions for future servers and data centers.”
Indeed, it should be noted that Rambus recently signed a license agreement with Xilinx that covers Rambus’ patented memory controller, SerDes and security technologies. In addition, the two companies agreed to evaluate potential collaboration on the use of Rambus’ CryptoManager platform, with Rambus also exploring the use of Xilinx FPGAs in its Smart Data Acceleration (SDA) research program.
As we’ve previously discussed on Rambus Press, the SDA research program focuses on architectures designed to offload computing closer to very large data sets at multiple points in the memory and storage hierarchy. Potential use case scenarios include big data analytics, real-time risk analytics, ad serving, neural imaging, transcoding and genome mapping. Comprising software, firmware, FPGAs and significant amounts of DRAM, the SDA platform operates as an effective test bed for exploring new methods of optimizing and accelerating analytics in extremely large data sets. As such, the SDA’s versatile combination of hardware, software, firmware, drivers and bit files can be precisely tweaked to facilitate architectural exploration of specific applications.
Put simply, the SDA – powered by an FPGA paired with 24 DIMMs – offers high memory densities linked to a flexible computing resource. Currently, SDA platform functionality targets accelerating and offloading tasks such as those found in Big Data analytics and in-memory database applications. The Smart Data Acceleration platform can also be made available over a network, where it would serve as a shared resource and offload agent in a more disaggregated scenario.
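The payoff of moving compute closer to large data sets, as the SDA program explores, is reduced data movement. The sketch below is not the SDA implementation; it is a minimal model, under assumed record sizes and selectivity, of how filtering near memory shrinks the traffic that must cross the interconnect compared with shipping every record to the host CPU.

```python
# Illustrative sketch (assumed parameters, not SDA internals):
# near-data processing applies a filter where the data lives, so only
# matching records cross the interconnect to the host.

RECORD_BYTES = 64
N_RECORDS = 10_000_000
SELECTIVITY = 0.01  # assume 1% of records match the query predicate

# Host-side filtering: every record is shipped to the CPU first
host_bytes = N_RECORDS * RECORD_BYTES

# Near-data filtering: the accelerator beside memory applies the
# predicate and returns only the matches
near_data_bytes = int(N_RECORDS * SELECTIVITY) * RECORD_BYTES

print(f"host-side filter: {host_bytes / 1e6:.0f} MB moved")
print(f"near-data filter: {near_data_bytes / 1e6:.0f} MB moved")
```

With these assumed numbers the near-data path moves 100x less data, which is why selective scans in Big Data analytics and in-memory databases are natural offload targets.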
Interested in learning more? You can check out our SDA research program article archive here.