Written by Rambus Press
Frank Ferro, Senior Director Product Management at Rambus, and Shane Rau, Senior Research Executive at IDC, recently hosted a webinar exploring the role of tailored DRAM solutions in advancing artificial intelligence. Part one of this four-part series reviewed a range of topics including the interconnected system landscape, the impact of COVID-19 on the data center, and how AI is helping to make sense of the data deluge. This blog (part two) takes a closer look at how AI enables useful data processing, various examples of AI silicon, and the evolving role of DRAM in advancing artificial intelligence.
How AI Enables Useful Data Processing
According to Rau, the sheer amount of data that has been generated in recent years more than justifies the need for AI. As well, AI requires a significant amount of processing power to handle data complexity.
“When you have more distributed data across the landscape generated by different system types, you have different data types. Some of that data is more important than other data. For example, the non-entertainment imaging share of total data, think a lot of static images, is declining,” he explains.
“In contrast, entertainment remains a huge part of the data being created, and think a lot of moving images, think videos, think Netflix, where the data cannot be interrupted. No one wants the sound or the video of their Netflix movie to be interrupted. That is a form of critical data that cannot be interrupted. It is real-time data.”
As Rau emphasizes, AI needs to know how to prioritize data as well as process it.
“You have a combination of AI dealing with a lot of data, a lot of distributed data across that landscape, having to assess whether it is critical or not, and then how sophisticated that data type is. But immediately, the utility for AI is identifying the useful data and bringing that data to the surface for human attention,” he elaborates. “AI is stepping in between our data that we can no longer process because of the amounts and the sophistication of that data and doing the job, and also bringing the data to our attention so we can make good decisions.”
Examples of AI Silicon
As Rau observes, the industry will be working for the next decade or more to advance AI algorithms and processors.
“They will also be developing memory, memory capacity, and various memory types to adapt to the needs that AI will have across this period and across this whole data landscape,” he states.
To more clearly illustrate the market growth of various chips, Rau points to the slide below that aggregates different types of silicon including FPGAs and GPUs, along with specialized AI ASICs/ASSPs.
“You can see significant growth in devices like PCs, phones and tablets. Phones drive a lot of early AI data processing because many of the phone manufacturers, think Apple, think Huawei or their chip providers like Qualcomm, have AI-specific processing capabilities they can put in a phone,” he elaborates.
“This is critical for a phone: when it’s doing AI, it’s right in front of you and can process a lot of incoming data through your voice or the video that you’re creating. It can then determine what is important to process and what needs to move on further through edge infrastructure, communications infrastructure, and then into the data center.”
According to Rau, PCs, aided by GPUs, will also be processing AI applications and data, making PCs, phones, and tablets a huge driver of the AI data processing silicon opportunity. Networking infrastructure is “very small” at this point, says Rau, but it will also be processing AI. Specifically, packet processing, compression, and decompression of data as it moves through infrastructure will require AI.
“We have a silicon opportunity that’s pervasive across the AI landscape (or the system landscape) from the IoT endpoints into the data center and cloud. So, we have established the need for AI, the need for processing of AI, and the opportunity for the processing silicon to do AI,” he adds. “With these processors comes the need for memory and most often DRAM. TPUs, GPUs, FPGAs, and other data processing silicon types need DRAM attached to them to bring the data close to the processing so it can be done quickly, such as in real time processing of video.”
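To give a rough sense of why the data has to sit close to the processor, consider a back-of-the-envelope bandwidth estimate for real-time video inference. The resolution, frame rate, and per-frame model traffic below are illustrative assumptions for this sketch, not figures from the webinar.

```python
# Rough, illustrative estimate of the DRAM traffic generated by
# real-time video inference. All numbers are assumptions for this sketch.

def video_bandwidth_gbs(width, height, bytes_per_pixel, fps):
    """Raw pixel bandwidth of an uncompressed video stream in GB/s."""
    return width * height * bytes_per_pixel * fps / 1e9

# Assumed 4K stream: 3840 x 2160 pixels, 3 bytes/pixel (RGB), 60 frames/s.
stream_gbs = video_bandwidth_gbs(3840, 2160, 3, 60)   # ~1.5 GB/s of raw pixels

# Assumed model traffic: weights and activations re-read from DRAM each
# frame, say 2 GB of reads per frame (purely illustrative).
model_gbs = 2 * 60                                     # ~120 GB/s

total_gbs = stream_gbs + model_gbs
print(f"Approximate DRAM bandwidth needed: {total_gbs:.0f} GB/s")
```

Even with modest assumptions, the total lands well beyond what a single commodity memory channel delivers, which is why high-bandwidth DRAM is attached directly to the accelerator.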
The Evolving Role of DRAM in Advancing Artificial Intelligence
As Rau points out, DRAM is a core technology that has proven itself to be extremely adaptive over time, enabling it to support a wide range of system types.
“DRAM has adapted to the needs of new systems, starting with the 90s, when personal computing and PCs drove the need for processing and large amounts of memory to meet the general-purpose needs of PCs. When DRAM was what we called commodity DRAM, it was ubiquitous, but then as graphics, gaming, and other applications came in, DRAM adapted into more specialized forms for servers and for graphics, GDDR for example,” he explains.
“More recently, with the advent of smartphones, specialized low-power LPDRAM is used for those devices. As well, cloud servers sometimes use commodity DRAM or specialized DRAM in large quantities, and some memory modules even carry their own data buffer chips to put more intelligence next to the DRAM on the module, so that DRAM can make some decisions on its own and offload some functions from the CPU. In this way, DRAM is also adapting.”
In terms of the next decade, says Rau, DRAM will once again adapt to new applications across multiple verticals like automotive, video surveillance, and smart homes.
“These applications will need DRAM and processing and heavy amounts of AI. Think in the smart home, think of Alexa, for example, when you talk to Alexa right now, Alexa sends your requests back to the data center,” he elaborates. “In the future, we think that smart home systems will be more locally intelligent to process your request in real time. That means more memory, that means more DRAM.”
Although the smart home space is different than automotive, says Rau, vehicles will also need significant amounts of memory and processing to support AI systems with varying levels of autonomous driving capabilities. In terms of video surveillance, says Rau, AI processing is used to process the event, analyze its severity, and determine what will happen next.
“Will that event need to be reported through the video surveillance endpoint, through infrastructure, into the data center in the cloud for some form of more detailed analysis and response?”
The Ubiquity of AI and DRAM
According to Rau, AI will ultimately become ubiquitous in electronic systems.
“As human beings, we use those systems, conceivably seven billion human beings on the planet. There will be seven billion types of AI solutions – especially when you consider all the potential combinations of training versus inferencing, different processing types, the different DRAM types and capacities and configurations that will support that AI,” he explains. “With all of this, DRAM technology continues to adapt. We have GDDR6, which formerly was just graphics DRAM, now being applied in automotive, for example. And HBM, which is high bandwidth memory. Again, formerly just for graphics, but now applied to applications outside of graphics that require very high performance, low latency, as well as a non-proprietary, cost-effective form of performance. There are also memory buffers that go on modules that help the DRAM be more intelligent to respond and offload the needs of CPUs.”
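As a rough illustration of why GDDR6 and HBM appeal to these workloads, the sketch below compares peak per-device bandwidth using representative signaling rates and interface widths. These specific figures are assumptions chosen for the example, not numbers cited in the webinar.

```python
# Illustrative peak-bandwidth comparison for three DRAM types.
# Data rates and interface widths are representative values assumed
# for this sketch, not figures quoted from the webinar.

def peak_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin data rate times interface width."""
    return data_rate_gbps * bus_width_bits / 8

memories = {
    # name: (per-pin data rate in Gbps, interface width in bits)
    "DDR4 DIMM channel": (3.2, 64),     # ~25.6 GB/s
    "GDDR6 device":      (16.0, 32),    # ~64 GB/s per chip
    "HBM2E stack":       (3.2, 1024),   # ~410 GB/s per stack
}

for name, (rate, width) in memories.items():
    print(f"{name:18s} ~{peak_bandwidth_gbs(rate, width):6.1f} GB/s")
```

Placing several GDDR6 devices or HBM stacks around an accelerator multiplies these figures, which is what makes memory types once reserved for graphics attractive to AI workloads well beyond graphics.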
For the next decade and beyond, Rau concludes, DRAM will continue to adapt to new applications driven by AI and data processing.