Can’t make it in person to the TSMC OIP Ecosystem Forum? Join us for the Online VOD event! On November 27th, our experts will speak about the memory and interconnect technologies enabling AI. Abstract information is below.
Learn more here: https://tsmc-signup.pl-marketing.biz/attendees/2025oip/tw/
Title: Why PCIe and CXL are Critical Interconnects for the AI Era
Speaker: TBD
Abstract: In this presentation, we discuss the critical role of PCIe and CXL interconnects in enabling the AI-driven data centers of tomorrow. We explore the challenges of traditional server setups, such as inefficient resource allocation, over-provisioning, and data replication, and present disaggregation as a solution for improving cost-efficiency and performance. By leveraging PCIe and CXL interconnects, data centers can enable heterogeneous compute, memory sharing, and lower-latency communication across servers and racks. The presentation underscores the importance of these interconnect technologies in reducing latency, optimizing resource use, and lowering costs in AI-focused data centers.
Title: Unleashing the Performance of AI Training with HBM4
Speaker: TBD
Abstract: AI training models are growing in both size and sophistication at a breathtaking rate, requiring ever-greater bandwidth and capacity. With its unique 2.5D/3D architecture, HBM4 can deliver terabytes per second of bandwidth and unprecedented capacity in an extremely compact form factor. This presentation will take a deep dive into the mechanisms and trade-offs, such as single- and dual-controller implementations, used in modern HBM4 memory controller deployments.