Found 3548 Results

DDR5 PMIC5030 Product Brief

https://go.rambus.com/pmic5030-product-brief#new_tab

Download the product brief to see the specifications and features of the Rambus DDR5 PMIC5030.

Data Bus Inversion (DBI)

https://www.rambus.com/chip-interface-ip-glossary/dbi/

Data Bus Inversion (DBI) is a signal encoding technique used in high-speed digital interfaces to reduce power consumption and improve […]
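As a quick illustration of the technique the glossary entry describes, here is a minimal DC-style DBI sketch in Python: invert a byte when more than four of its eight bits sit at the power-consuming level (assumed here to be logic 0, as on a pseudo-open-drain bus), and signal the inversion on a separate DBI flag. This is a generic textbook sketch under those assumptions, not the Rambus or JEDEC implementation.

```python
def dbi_encode(byte: int) -> tuple[int, int]:
    """DC-style DBI sketch: if more than four of the eight data bits
    are 0 (assumed power-consuming level), transmit the inverted byte
    and assert the DBI flag so the receiver can undo the inversion."""
    zeros = 8 - bin(byte & 0xFF).count("1")
    if zeros > 4:
        return (~byte) & 0xFF, 1  # inverted data, DBI asserted
    return byte & 0xFF, 0         # data unchanged, DBI deasserted

def dbi_decode(byte: int, dbi: int) -> int:
    """Receiver side: restore the original data using the DBI flag."""
    return (~byte) & 0xFF if dbi else byte & 0xFF

# Worst-case pattern 0x00 would drive all eight lines to the costly
# level; with DBI it is sent as 0xFF plus the asserted flag.
encoded, flag = dbi_encode(0x00)
assert (encoded, flag) == (0xFF, 1)
assert dbi_decode(encoded, flag) == 0x00
```

With this scheme no more than four of the eight data lines ever drive the expensive level in a given beat, at the cost of one extra DBI signal per byte lane.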

CSI-2 (Camera Serial Interface 2)

https://www.rambus.com/chip-interface-ip-glossary/csi-2/

CSI-2, or Camera Serial Interface 2, is a high-speed serial interface standard developed by the MIPI Alliance for […]

Scaling AI Infrastructure with PCIe 7 and CXL 3

https://event.on24.com/wcc/r/5051048/9DDC583F5FEB476AB8DF5CF39F634FB7#new_tab

Interconnect technologies are key to scaling AI workloads across data center infrastructure. Learn how PCIe 7 and CXL 3 enable high-speed, low-latency connectivity for memory expansion and composable architectures in AI systems.

Memory IP for AI Accelerators: HBM4, LPDDR5, and GDDR7

https://event.on24.com/wcc/r/5051047/1951C72D6ABB0828FD16915C7DFF5279#new_tab

AI accelerators require high-performance memory IP to meet bandwidth, capacity, and latency requirements. This session dives into Rambus IP solutions for HBM4, LPDDR5, and GDDR7, highlighting their role in powering next-gen AI silicon.

How AI is Shaping the Memory Market

https://event.on24.com/wcc/r/5051045/2BE50C4E73B38BBEF853AFA6D1778604#new_tab

Join Rambus experts for a dynamic roundtable discussion on the latest trends in the memory market. Topics include AI-driven demand, enabling technologies, and the future of memory innovation across computing segments.

Memory Interface Chip Solutions for PC Clients and AI

https://event.on24.com/wcc/r/5051055/779B1F67589803A20D1004F3648D5F6C#new_tab

AI is increasingly moving to the edge, and PC clients are evolving to support intelligent applications. This session showcases Rambus memory chip solutions optimized for client platforms, enabling responsive AI experiences with performant memory architectures.

Memory Interface Chip Solutions for Servers and AI in the Data Center

https://event.on24.com/wcc/r/5051041/46A5495531C5222065C15E9B6488D667#new_tab

Explore Rambus memory chip solutions designed for server platforms and AI workloads in the data center. This session covers performance, power efficiency, and scalability features that meet the demands of next-generation AI training and inference environments.

35 Years of Memory Innovation for AI

https://event.on24.com/wcc/r/5047305/7731855E52B458BF97C0300896B86DEC#new_tab

In this keynote, Dr. Steve Woo reflects on the 35-year journey of Rambus and the evolution of memory technology that has culminated in today’s AI-driven computing landscape. From early innovations to modern high-bandwidth architectures, this session highlights how memory has become a foundational enabler of artificial intelligence.

High Bandwidth Memory (HBM): Everything You Need to Know

https://www.rambus.com/blogs/hbm3-everything-you-need-to-know/

[Updated on October 30, 2025] In an era where data-intensive applications, from AI and machine learning to high-performance computing (HPC) and gaming, are pushing the limits of traditional memory architectures, High Bandwidth Memory (HBM) has emerged as a high-performance, power-efficient solution. As industries demand faster, higher throughput processing, understanding HBM’s architecture, benefits, and evolving role […]