A modular approach to Big Data
Driven by Big Data and new applications, modern servers and data centers are out of sync with current demands, as requirements for real-time access to large amounts of information continue to grow.
That is precisely why Rambus’ Smart Data Acceleration (SDA) research program focuses on architectures designed to move computing closer to very large data sets at multiple points in the memory and storage hierarchy.
Comprising software, firmware, FPGAs and large amounts of memory, the platform is designed to test new methods of optimizing and accelerating data analytics for extremely large data sets. Potential use case scenarios include real-time risk analytics, ad serving, neural imaging, transcoding and genome mapping.
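The specifics of the SDA hardware and software aren’t covered here, but the core idea of moving computation toward very large data sets can be sketched briefly. The Python toy below is purely illustrative and assumes a hypothetical NearDataAccelerator interface; it is not Rambus’s SDA API. The point is simply that pushing a filter-and-reduce operation down to where the data lives returns one small result instead of pulling every record across to the host.

```python
# Illustrative sketch only: a toy contrast between host-side processing and a
# hypothetical near-data offload. NearDataAccelerator and offload_filter_sum
# are invented for illustration and do not represent Rambus's SDA platform.

class NearDataAccelerator:
    """Pretend device holding a large data set close to memory/storage."""

    def __init__(self, records):
        self.records = records  # data resident "near" the accelerator

    def offload_filter_sum(self, threshold):
        # Filter and reduction run next to the data; only a single scalar
        # result crosses back to the host.
        return sum(r for r in self.records if r > threshold)


def host_side_filter_sum(device, threshold):
    # Conventional approach: every record moves to the host before being
    # filtered and reduced, consuming link bandwidth along the way.
    pulled = list(device.records)
    return sum(r for r in pulled if r > threshold)


if __name__ == "__main__":
    device = NearDataAccelerator(records=range(1_000_000))
    # Same numeric answer either way; the difference is how much data moves.
    assert host_side_filter_sum(device, 900_000) == device.offload_filter_sum(900_000)
```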
As we’ve previously discussed on Rambus Press, data centers have traditionally focused on raw compute, causing power and cooling costs to skyrocket. An alternative paradigm, says Rambus VP of solutions marketing Steve Woo, is to continue the trend toward modular resources with varying levels of processing, memory, bandwidth, capacity and storage.
“Modular, disaggregated architectures allow resources to be provided and assigned as needed to meet the widely varying demands of modern workloads. For example, serving up basic webpages and streaming YouTube videos can be achieved on an individual server with modest compute and memory resources,” he continued. “However, heavy analytics jobs or scientific computing tasks that process gigabytes of user or machine data require much higher compute, memory and storage capabilities and often entail many resources working together.”
As Woo emphasizes, a modular, disaggregated approach to data centers can help balance the ever-increasing requirements of Big Data and rising core counts against growing demands for memory, storage capacity and bandwidth.
“Without a parallel boost in bandwidth and capacity, data centers won’t be able to take advantage of increasing CPU core counts,” he added. “Simply put, the key to building the data center of the future is providing a healthy balance between compute, memory and storage. The flexibility afforded by disaggregating resources is a compelling way to meet the needs of modern workloads.”
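As a rough illustration of the disaggregated model Woo describes, the sketch below carves compute, memory and storage for each workload out of shared pools rather than fixed per-server configurations. The Pools class, its method and all of the numbers are hypothetical; this is a minimal sketch of the concept, not a real data-center scheduler.

```python
# Minimal, hypothetical sketch of disaggregated resource allocation: a light
# web-serving job and a heavy analytics job draw very different slices from
# the same shared pools. Class, method and numbers are invented for illustration.

from dataclasses import dataclass


@dataclass
class Pools:
    cpu_cores: int
    memory_gb: int
    storage_tb: int

    def allocate(self, cores, mem_gb, storage_tb):
        """Carve a workload-specific slice out of the shared pools."""
        if cores > self.cpu_cores or mem_gb > self.memory_gb or storage_tb > self.storage_tb:
            raise RuntimeError("insufficient disaggregated capacity")
        self.cpu_cores -= cores
        self.memory_gb -= mem_gb
        self.storage_tb -= storage_tb
        return {"cores": cores, "memory_gb": mem_gb, "storage_tb": storage_tb}


if __name__ == "__main__":
    pools = Pools(cpu_cores=1024, memory_gb=16_384, storage_tb=512)
    web_tier = pools.allocate(cores=2, mem_gb=8, storage_tb=0)          # modest web serving
    analytics = pools.allocate(cores=256, mem_gb=8_192, storage_tb=64)  # heavy analytics job
    print(web_tier, analytics, pools)
```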
Riding Big Data waves with the R+ DDR4 Server DIMM chipset
Semiconductor Engineering editor in chief Ed Sperling has confirmed that evolving business models, acquisitions, minority investments and increasing uncertainty are creating fundamental industry shifts.
“The announcement that Rambus is developing memory controller chips, expanding its business beyond just creating IP for the memory and security markets, is the latest in a stream of public disclosures and behind-the-scenes deals that have been underway for the past 18 months,” Sperling explained. “And while the Rambus move is significant by itself, in the context of all the other moves over the past 18 months it blends into a landscape of equally dramatic changes.”
Patrick Moorhead, president of Moor Insights and Strategy, told Semiconductor Engineering the industry has entered a verticalization phase.
“We see this cycle every 7 to 10 years, where the specialists realize that the sum of the parts has more value than the parts. This is more of a solutions-oriented approach, and you see this with companies like Apple, as well as with ARM and TSMC, where you can have your part based on what is a hard macro,” he confirmed. “ARM could produce chips if it wanted to. And MIPS started out with chips, then moved to IP, and now is back to chips. Really what this comes down to is control of the investment.”
Moorhead noted that with Rambus, there was a lot of pull from server OEMs.
“There was a demand for a new look and vision, and ironically it was the end customers driving this. And Rambus was already getting so far into debugging and fine-tuning that they were doing a lot of the work, anyway.”
Ely Tsern, VP of Rambus’ memory products group, expressed similar sentiments, pointing out that there are currently a number of key market trends underway.
“One is the Big Data wave, with an increase in memory, bandwidth and capacity, and there has been a big uptake in servers and data centers. The second is a transition from DDR3 to DDR4, which started last year and is seeing a rapid adoption curve,” he said. “The problem, though, is that DDR4 is really hard, and it’s designed to increase in speed every year. To make that work, you need buffer chips to see an increase in speed, and there are some new fundamental challenges in the technology.”
As Moorhead told EE Times earlier this week, that is precisely why Rambus’ server memory interface chipset will offer a higher level of performance and quality with an eye toward future memory speeds.
“DDR4 is very technically challenging, and in particular, server vendors and server memory providers need higher capacities with improved performance,” he added.
“[Plus], the most expensive item in Big Data applications is memory, so the price point is a lot higher than you would imagine… It’s not just one DIMM per server or one chip per DIMM. It could be 8-48 DIMMs per server and up to 9 chips, including buffers, per DIMM.”
As we’ve previously discussed on Rambus Press, the new RB26 DDR4 chipset offers industry-leading performance and margin, complying with the latest JEDEC spec at 2666 megabits per second (Mbps) and offering built-in support for 2933 Mbps. The chipset – which includes a roadmap with value-added features – is currently sampling to key customers and critical ecosystem partners.
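As a back-of-envelope check on what those data rates imply, the snippet below computes the peak theoretical bandwidth of a standard 64-bit-wide DDR4 DIMM at the per-pin rates quoted above. This calculation is an added illustration, not taken from the article, and it ignores ECC bits and real-world efficiency.

```python
# Back-of-envelope peak bandwidth for a standard 64-bit DDR4 DIMM at the
# per-pin data rates mentioned above. Ignores ECC (72-bit DIMMs) and the
# sustained-vs-peak gap; added here for illustration only.

BUS_WIDTH_BITS = 64  # data bus width of a standard (non-ECC) DDR4 DIMM


def peak_gb_per_s(data_rate_mbps_per_pin):
    # Mbps per pin * 64 pins -> Mb/s per DIMM; /8 -> MB/s; /1000 -> GB/s
    return data_rate_mbps_per_pin * BUS_WIDTH_BITS / 8 / 1000


for rate in (2666, 2933):
    print(f"DDR4-{rate}: ~{peak_gb_per_s(rate):.1f} GB/s per DIMM")
# DDR4-2666: ~21.3 GB/s per DIMM
# DDR4-2933: ~23.5 GB/s per DIMM
```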
Fighting cancer with Big Data
The California Initiative to Advance Precision Medicine, a public-private effort launched by Governor Edmund G. Brown Jr., has provided $1.2 million in funding for the California Kids Cancer Comparison project. Led by the UC Santa Cruz Genomics Institute, the project is one of two selected by the new initiative.
Essentially, the project will allow scientists at UC Santa Cruz to harness Big Data bioinformatics so doctors can more effectively identify potential treatments for California children with cancer who fail to respond to standard therapies.
According to David Haussler, professor of biomolecular engineering and scientific director of the Genomics Institute, the California Kids Cancer Comparison will enable clinicians to sort through a much larger pool of genetic data than has previously been available – including tumor sequencing data from children throughout California and around the world, as well as adults.
In addition, the framework will help patients, their advocates, clinicians and researchers upload, analyze and communicate relevant data via MedBook. Developed by Theodore Goldstein, a former Apple VP, the specialized social media platform is specifically designed to maintain privacy and security for patient data.
“Our goal is simple: Every kid in California with cancer who needs a second chance, should get a second chance,” Haussler told the UC Santa Cruz News Center. “Currently, too many kids are not getting the full advantage of complete genomic analysis. If the standard treatment does not cure a child with cancer, then we need to be doing all that we possibly can to use genomic analysis to come up with an alternate therapy.”
Haussler also told the Santa Cruz Sentinel that this project is a “milestone” for the Genomics Institute.
“Most hospitals are very confidential and clinical trials generally only publish summary results. But you have to gather a large enough sample size to see patterns and make a connection,” he explained. “We’ll be able to compare against the databases of these siloed institutions interactively and make recommendations in real time.”
As we’ve previously discussed on Rambus Press, Big Data is playing an increasingly major role in helping oncologists identify various risks, while improving care and treatment.
One such project recently highlighted by Bernard Marr of Forbes is the American Society of Clinical Oncology’s CancerLinQ. This initiative hopes to ultimately collate and analyze data from every cancer patient in the United States. Similarly, Flatiron Health recently launched the OncologyCloud, a Big Data program designed to collect data from medical records, doctor notes and billing information. Simply put, Flatiron collates and structures disparate streams into a relevant data flow that can be used for comparative analysis.
In addition to the above-mentioned initiatives, 14 cancer institutes across the United States and Canada have confirmed they will be using IBM’s Watson analytics engine to match cancer patients with the most appropriate treatments. According to Marr, Watson is even capable of recommending potential drugs that haven’t yet been tapped to treat cancer.
It should also be noted that there are a number of Big Data programs dedicated to researching and curing specific types of cancer. For example, the Dragon Master Foundation recently partnered with five U.S. pediatric hospitals to create a database of tissue samples taken from patients with rare childhood brain tumors.
“Just this year a study concluded that thanks to the advances in spotting and treating cancer, by 2050 no one under 80 will be dying from the disease,” Marr added. “Big Data-powered research and treatment programs will undoubtedly play a part in that victory, just as they continue to give us answers in every field of science.”