Join us at the Taiwan OIP Ecosystem Forum! Our experts will present a session on GDDR Memory for High-Performance AI Inference.
To learn more and register, follow the link here: https://tsmc-signup.pl-marketing.biz/attendees/2025oip/tw/
Title: GDDR Memory for High-Performance AI Inference
Speakers: TBD
Abstract: The rapid rise in the size and sophistication of AI/ML inference models requires increasingly powerful hardware deployed at the network edge and in endpoint devices. AI/ML inference workloads for applications such as edge computing and Advanced Driver Assistance Systems (ADAS) demand high-bandwidth memory while keeping costs low. With per-pin data rates of over 20 Gbps, GDDR6 has been a good solution, providing an excellent combination of high bandwidth and cost efficiency.
As bandwidth requirements continue to grow, the latest-generation GDDR7, with speeds of 36 Gbps, will provide the additional bandwidth these systems need moving forward. Implementing high-speed memory interfaces, covering both the memory PHY and the controller, requires the performance and power efficiency of TSMC's advanced process nodes. This presentation will discuss how Rambus and Cadence worked together to develop an integrated memory subsystem, built on TSMC advanced nodes, that is deployed widely in end-customer systems. It will also cover the signal integrity challenges of implementing GDDR6 and GDDR7 at these high data rates.
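For a rough sense of what these per-pin rates mean at the device level, the short sketch below multiplies the quoted data rates by a device bus width. The 32-bit (x32) interface is an assumed typical configuration for illustration only; it is not a figure taken from the abstract.

```python
# Back-of-the-envelope peak bandwidth per GDDR device.
# Assumption: a x32 device interface (32 data pins), which is a common
# configuration but not stated in the abstract above.

def device_bandwidth_gbytes(per_pin_gbps: float, bus_width_bits: int = 32) -> float:
    """Peak per-device bandwidth in GB/s: per-pin rate * bus width / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

for name, rate in [("GDDR6 @ 20 Gbps", 20.0), ("GDDR7 @ 36 Gbps", 36.0)]:
    print(f"{name}: {device_bandwidth_gbytes(rate):.0f} GB/s per x32 device")

# Output under these assumptions:
#   GDDR6 @ 20 Gbps: 80 GB/s per x32 device
#   GDDR7 @ 36 Gbps: 144 GB/s per x32 device
```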