Intel VP and General Manager Ron Kasabian recently described the extraction of meaningful information from raw data as a “key enabler” of the new digital service economy.
“In this new era, an organization’s competitive edge increasingly hinges on its ability to turn an avalanche of data into actionable insights that improve operations and guide the creation of essential new products and services,” he wrote in an official Intel blog post.
According to Kasabian, Big Data analytics isn’t limited to Web 2.0 businesses or high-tech powerhouses. Rather, the opportunity for pervasive analytics and insights spans virtually all industries—from healthcare to transportation, from banking to manufacturing.
“With powerful analytics solutions, physicians can diagnose illnesses faster and create personalized treatment plans,” Kasabian explained. “Retailers can better understand buying behaviors to stock up on the products people are most likely to need. Car manufacturers can use predictive failure analysis to make repairs proactively—before customers find themselves stuck on the side of the road.”
According to Kasabian, Intel’s new Xeon processor E7 v3 family is designed to accelerate real-time analytics on enormous datasets, ranging from multiple terabytes up to petabyte scale.
“With up to 20 percent more cores, threads, cache, and system bandwidth than previous-generation processors, the Intel Xeon processor E7 v3 family makes fast work of complex, high-volume transactions and queries,” he said. “In addition, we’ve added an expanded memory footprint to support in-memory analytics—one of the keys to gaining immediate insights from big data.”
According to Timothy Prickett Morgan of The Platform, Xeon E7 v3 processors are designed to work in the current Brickland server platforms, which debuted with the Ivy Bridge-EX Xeon E7 v2 processors back in February 2014. It should be noted that the Brickland platform also features Intel’s Jordan Creek scalable memory buffer chip.
“This memory chip and controller combination now supports DDR4 main memory, which clocks a little higher and yet burns a bit less power compared to DDR3 memory,” Morgan explained. “The memory controllers support the same three DIMMs per channel, and so a single socket supports up to 24 memory sticks – and an eight-socket system up to 12 TB of memory capacity using 64 GB memory modules. If customers were expecting a memory capacity boost with the Haswell-EX processors, they are not going to get it, but the memory does run faster if customers want to move to DDR4.”
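The capacity figures follow from straightforward multiplication. A minimal sketch of the math, where the channel-per-socket count is an assumption (four SMI links per socket, each buffer fanning out to two DDR channels) rather than a figure stated above:

```python
# Back-of-the-envelope Brickland memory capacity, using 3 DIMMs per channel
# and 64 GB modules as quoted above. CHANNELS_PER_SOCKET is an assumption.
DIMMS_PER_CHANNEL = 3
CHANNELS_PER_SOCKET = 8      # assumed: 4 SMI links x 2 DDR channels per buffer
DIMM_CAPACITY_GB = 64

dimms_per_socket = DIMMS_PER_CHANNEL * CHANNELS_PER_SOCKET       # 24 DIMMs
socket_capacity_tb = dimms_per_socket * DIMM_CAPACITY_GB / 1024  # 1.5 TB

for sockets in (1, 4, 8):
    total_dimms = sockets * dimms_per_socket
    total_tb = sockets * socket_capacity_tb
    print(f"{sockets}-socket system: {total_dimms} DIMMs, {total_tb:.1f} TB")
```

Under those assumptions, an eight-socket system lands at 192 DIMMs and 12 TB, matching the top-end capacity cited for the platform.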
As we’ve previously discussed on Rambus Press, DDR4 memory delivers a 40-50 percent increase in bandwidth, along with a 35 percent reduction in power consumption, compared to the DDR3 memory currently deployed in servers.
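To see where a gain of that size comes from, peak per-DIMM bandwidth can be estimated as the per-pin transfer rate times eight bytes (a 64-bit data bus). The specific speed grades below are assumptions chosen for illustration, not figures from the article:

```python
# Rough illustration of the DDR4-vs-DDR3 bandwidth claim.
# Peak bandwidth per DIMM = transfer rate (GT/s) x 8 bytes (64-bit bus).
def peak_bandwidth_gbs(transfer_rate_gts: float) -> float:
    return transfer_rate_gts * 8

ddr3 = peak_bandwidth_gbs(1.6)   # assumed DDR3-1600: 12.8 GB/s
ddr4 = peak_bandwidth_gbs(2.4)   # assumed DDR4-2400: 19.2 GB/s

gain_pct = (ddr4 / ddr3 - 1) * 100
print(f"DDR4 gain over DDR3: {gain_pct:.0f}%")  # 50% for these speed grades
```

Comparing a mid-range DDR4 grade against common server DDR3 in this way yields an uplift in the 40-50 percent range cited above.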
According to Frank Ferro, senior director of product management at Rambus, the industry will continue to innovate and look beyond the conventional DDR4 paradigm as mainstream adoption of the standard accelerates.
“At Rambus, our Beyond DDR4 demo silicon is already capable of hitting data transfer rates up to 6.4 Gbps in a multi-rank, multi-DIMM configuration – while achieving a 25 percent improvement in power efficiency,” said Ferro.
In practical terms, says Ferro, this means the memory interface is three times faster than current DIMMs topping out at 2.133 Gbps – and two times the maximum speed specified for DDR4 at 3.2 Gbps.
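The multiples check out against the per-pin data rates quoted:

```python
# Verifying the speed multiples cited above (all rates in Gbps per pin).
beyond_ddr4 = 6.4   # Rambus Beyond DDR4 demo silicon
ddr3_top = 2.133    # fastest mainstream DDR3 DIMMs (DDR3-2133)
ddr4_max = 3.2      # maximum speed specified for DDR4 (DDR4-3200)

print(round(beyond_ddr4 / ddr3_top, 1))  # 3.0 -> three times current DIMMs
print(round(beyond_ddr4 / ddr4_max, 1))  # 2.0 -> twice the DDR4 maximum
```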
“New Xeon processors, coupled with increased memory bandwidth and capacity, go a long way in accelerating real-time analytics for enormous datasets comprising petabytes of information,” he concluded.