Scaling DRAM Technology to Meet Future Demands: Challenges and Opportunities

Sunday, June 22, 2025, Tokyo, Japan
Half-day tutorial held in conjunction with ISCA 2025

Tutorial Abstract

Since the invention of the 1T1C bit cell more than 50 years ago, DRAMs have become the main memory of choice for processors in computer systems and many consumer electronics devices. As new computing paradigms have emerged, including 3D graphics, cloud computing, smartphones, and AI processing, specialized processors and DRAM memories have been developed that are optimized for these use cases. The same 1T1C DRAM bit cell is used in each of these applications, but the internal architecture and interfaces of the DRAMs supporting these markets are optimized in different ways, and the DRAMs are packaged differently to meet the needs of the system.

Across all markets, there is a relentless demand for higher performance and better power efficiency, as DRAM bandwidth can bottleneck application performance and interfaces to DRAMs can consume half of the SoC power. DRAMs are also being stressed by growing reliability concerns as they incorporate on-die ECC and mitigations for disturbance effects such as RowHammer and RowPress. As the momentum of AI continues to grow across markets (HPC, server, client, mobile, etc.), the design of efficient, performant, and reliable memory systems is becoming increasingly critical. AI models continue to grow, pushing the capacity and bandwidth requirements of DRAMs. Simply scaling with historical techniques will no longer achieve the required characteristics, due to physical challenges, limits of process scaling, and system architecture constraints including thermals and power delivery.

This tutorial will describe DRAM architecture in detail, covering the similarities and differences between different DRAM technologies. Standard scaling techniques will be highlighted along with challenges that the industry is currently facing. Input from industry experts will show the pros and cons of DRAM architecture choices, demonstrating the system impact and requirements for mainstream adoption. Future DRAM architectures will also be discussed.

Topics that will be covered

The tutorial will focus on DRAM architecture, specifically looking at design tradeoffs and subsequent impact to the overall system performance, power, cost and reliability. The tutorial will cover the following topics:

  • Background and History of DRAM markets. How they were historically defined, what changed, and the drivers of new technologies.
  • DRAM array architecture, internal data paths, shared structures, and interfaces.
  • The future of DRAM, including 3D DRAM cells.
  • Memory modules, including DIMMs, CAMMs, MRDIMMs, and CXL modules.
  • Capacity, power, reliability, and cost tradeoffs that motivate different DRAM architectures for different markets, including computing, mobile, graphics, and AI. This will include underlying core architecture, packaging, and system integration.
  • Power and energy differences between DRAM technologies.
  • Novel packaging techniques including stacking and 2.5D assembly.
  • Reliability, Availability, and Serviceability (RAS) techniques, including on-die error correction, system-level ECC, redundancy and repair, and RowHammer and RowPress mitigation.
  • Processing-in-memory techniques that have been implemented in DRAM silicon.
  • Memory controller architecture and design challenges with current and future DRAMs.
  • System performance considerations, including latency under load and the impact of core timings and core architecture on performance.
  • Industry adoption challenges facing new DRAM technologies and features.
  • Challenges for the future.

Organizers and Affiliations

Steven Woo is a Fellow and Distinguished Inventor at Rambus Inc., where he leads research in Rambus Labs on advanced memory systems for accelerators and computing infrastructure, and manages a team of senior architects. Since joining Rambus, Steve has worked in various roles leading architecture, technology, and performance analysis efforts, and in marketing and product planning roles leading strategy and customer programs.  He has more than 25 years of experience working on advanced memory systems and holds more than 100 US and international patents. Steve received his PhD and MS degrees in Electrical Engineering from Stanford University, and Master of Engineering and BS Engineering degrees from Harvey Mudd College.

Wendy Elsasser is a Technical Director of Research Science at Rambus. She works in the Rambus Labs R&D division investigating future system architecture and developing innovative solutions to address the impact on the memory sub-system. She has over 25 years of experience in industry, starting with semi-custom micro-controller design, test, and implementation. Over the last 20 years, her focus has been on memory sub-systems, primarily external DRAM.  Her experience includes DRAM controller architecture, design, and validation as well as active contributions to consortiums and standards bodies. Specifically, she was a leader in the Gen-Z consortium and JEDEC, helping to define future memory interfaces and DRAM standards. Her work has resulted in 15 patents.

Taeksang Song is a Corporate Vice President at Samsung Electronics, where he leads a team dedicated to pioneering cutting-edge technologies including CAMM, MRDIMM, CXL memory expanders, fabric-attached memory solutions, and processing near memory to meet the evolving demands of next-generation data-centric AI architectures. He has 20 years of professional experience in memory and sub-system architecture, interconnect protocols, system-on-chip design, and collaborating with CSPs to enable heterogeneous computing infrastructure. Prior to joining Samsung Electronics, he worked at Rambus Inc., Micron Technology, and SK hynix in lead architect roles for emerging memory controllers and systems. Taeksang received his PhD from KAIST, South Korea, in 2006. He has authored and co-authored over 20 technical papers and holds over 50 U.S. patents.
