We’re excited to be participating in the TSMC China OIP Ecosystem Forum in Nanjing! Kai Zhao will be giving a presentation titled “GDDR Memory for High-Performance AI Inference.” Session and abstract details are below.
Topic: GDDR Memory for High-Performance AI Inference
Speaker: Kai Zhao
Abstract: The rapid rise in the size and sophistication of AI/ML inference models requires increasingly powerful hardware deployed at the network edge and in endpoint devices. AI/ML inference workloads for applications like edge computing and Advanced Driver Assistance Systems (ADAS) demand high-bandwidth memory while keeping costs low. With data rates of over 20 Gbps, GDDR6 has been a strong solution, providing an excellent combination of high bandwidth and cost efficiency.
Stay tuned for more event details! To learn more about the Open Innovation Platform Ecosystem Forum, visit: https://www.tsmc.com/static/english/campaign/oip2025/index.html
