Found 407 Results

Rambus at DesignCon

https://www.rambus.com/designcon/

Rambus @ DesignCon 2025 Join Rambus for a day of technical sessions at DesignCon on January 29, 2025. Hear from our experts on the technologies that are set to shape the future of data centers and high-performance systems, and discover how our cutting-edge memory, interconnect and security IP enables today’s most challenging computing, edge, automotive […]

Rambus, VIAVI and Samtec Demonstrate CXL® over Optics PoC at Upcoming SC24

https://www.rambus.com/blogs/rambus-viavi-and-samtec-demonstrate-cxl-over-optics-poc-at-upcoming-sc24/

The disruption of GenAI over the last few years has forced system architects and hardware designers to rethink data center topologies. While AI model sizes and compute capability are growing exponentially, I/O throughput and memory access are growing only linearly. These trends create an unsustainable gap that needs to be addressed across the stack, starting […]

Ask the Experts: DDR5 MRDIMMs

https://www.rambus.com/blogs/ask-the-experts-ddr5-mrdimms/

John Eble, Vice President of Product Marketing for Memory Interface Chips at Rambus, recently shared the latest developments on the MRDIMM (Multiplexed Rank DIMM) DDR5 memory module architecture. This cutting-edge technology delivers significant advances in memory bandwidth and capacity to support compute-intensive workloads, including generative AI. What is MRDIMM? MRDIMM builds upon the existing DDR5 […]

Ask the Experts: AI at the Edge

https://www.rambus.com/blogs/ask-the-experts-ai-at-the-edge/

The rapid evolution of artificial intelligence (AI) is transforming edge computing, and Sharad Chole, Co-founder and Chief Scientist at Expedera, discusses the implications. Expedera, a neural network IP provider, focuses on neural processing units (NPUs) for edge devices, emphasizing low-power operation, bandwidth optimization, and cost efficiency. In our latest episode of Ask the Experts, Sharad […]

PCIe 6.2 Switch

https://www.rambus.com/interface-ip/pci-express/pcie6-2-switch/

The Rambus PCI Express® (PCIe®) 6.2 Switch is a customizable, multiport embedded switch for PCIe designed for ASIC implementations. It enables the connection of one upstream port and multiple downstream ports as a fully configurable interface subsystem. It is backward compatible with PCIe 5.0. How the PCIe 6.2 Switch […]

Rambus Unveils Industry-First Complete Chipsets for Next-Generation DDR5 MRDIMMs and RDIMMs to Deliver Breakthrough Performance for Data Center and AI

https://www.rambus.com/rambus-unveils-industry-first-complete-chipsets-for-next-generation-ddr5-mrdimms-and-rdimms-to-deliver-breakthrough-performance-for-data-center-and-ai/

Highlights:
- Introduces industry's first Gen5 DDR5 RCD for RDIMMs at 8,000 MT/s, MRCD and MDB chips for next-generation MRDIMMs at 12,800 MT/s, and a second-generation server PMIC to support both
- Incorporates advanced clocking, control, and power management features needed for higher-capacity and higher-bandwidth modules operating at 8,000 MT/s and above
- Feeds insatiable demand for […]
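As a rough sanity check on the quoted transfer rates, peak module bandwidth is simply transfers per second times the data-bus width. This is a minimal sketch, not a Rambus tool; it assumes the standard 64-bit DDR5 data bus (ECC bits excluded):

```python
def peak_bandwidth_gbs(mt_per_s: int, bus_bits: int = 64) -> float:
    """Peak module bandwidth in GB/s from a transfer rate in MT/s.

    Assumes a standard 64-bit DDR5 data bus (ECC excluded);
    MT/s x bytes-per-transfer / 1000 gives GB/s.
    """
    return mt_per_s * (bus_bits // 8) / 1000

print(peak_bandwidth_gbs(8000))   # RDIMM at 8,000 MT/s   -> 64.0 GB/s
print(peak_bandwidth_gbs(12800))  # MRDIMM at 12,800 MT/s -> 102.4 GB/s
```

By this measure, moving from an 8,000 MT/s RDIMM to a 12,800 MT/s MRDIMM raises peak per-module bandwidth by 60%.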

Verifying the next generation High Bandwidth Memory controllers for AI and HPC applications

https://event.on24.com/wcc/r/4719714/D55D39DCED7CBE83737212F4626F4C2E#new_tab

[Live on 10/30 @ 9am PT] High Bandwidth Memory (HBM) has revolutionized AI, machine learning, and High-Performance Computing by significantly increasing data transfer speeds and alleviating performance bottlenecks. The introduction of next-generation HBM4 is especially transformative, enabling faster training and execution of complex AI models. JEDEC has announced that the HBM4 specification is nearing finalization. In […]

Why Memory Matters for AI

https://go.rambus.com/why-memory-matters-for-ai#new_tab

In this roundtable discussion, memory experts Steve Woo, John Eble, and Nidish Kamath explore the critical role of memory in AI applications. They discuss how AI’s rapid evolution, especially with the growth of large language models, is driving the need for higher memory capacity, bandwidth, and power efficiency.

The Impact of AI 2.0 on Memory & Interconnect Technology

https://go.rambus.com/the-impact-of-ai-2-0-on-memory-and-interconnect-technology#new_tab

Steven Woo, Fellow and Distinguished Inventor at Rambus Labs, explores the transformative impact of AI 2.0 on memory and interconnect technology. He highlights how the rapid growth of AI models, now exceeding trillions of parameters, and the shift to multimodal systems have dramatically increased the demand for higher memory capacity and bandwidth.

HBM4 Controller Product Brief

https://go.rambus.com/hbm4-controller-product-brief#new_tab

The Rambus HBM4 Controller is designed to support customers with deploying a new generation of HBM memory for cutting-edge AI accelerators, graphics and high-performance computing (HPC) applications. The Rambus HBM4 Controller supports a data rate of 10 Gigabits per second (Gbps) and delivers a total memory bandwidth of 2,560 Gigabytes per second (GB/s) or 2.56 […]
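The bandwidth figure follows directly from the per-pin data rate and the interface width. A minimal sketch of that arithmetic, assuming the 2048-bit HBM4 interface width (an assumption from the JEDEC HBM4 direction, not stated in the snippet):

```python
def hbm_bandwidth_gbs(gbps_per_pin: float, width_bits: int = 2048) -> float:
    """Total device bandwidth in GB/s: per-pin rate (Gbps) x width / 8.

    width_bits=2048 assumes an HBM4-class interface; adjust for
    other HBM generations (e.g. 1024 bits for HBM3).
    """
    return gbps_per_pin * width_bits / 8

print(hbm_bandwidth_gbs(10.0))  # -> 2560.0 GB/s, i.e. 2.56 TB/s
```

At the quoted 10 Gbps data rate this reproduces the 2,560 GB/s (2.56 TB/s) figure in the product brief.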
