Found 407 Results
Rambus @ DesignCon 2025: Join Rambus for a day of technical sessions at DesignCon on January 29, 2025. Hear from our experts on the technologies that are set to shape the future of data centers and high-performance systems, and discover how our cutting-edge memory, interconnect and security IP enables today’s most challenging computing, edge, automotive […]
The disruption of GenAI over the last few years has forced system architects and hardware designers to rethink data center topologies. While AI model sizes and compute capability are growing exponentially, I/O throughput and memory access are growing linearly. These trends create an unsustainable gap that must be addressed across the stack, starting […]
John Eble, Vice President of Product Marketing for Memory Interface Chips at Rambus, recently shared the latest developments on the MRDIMM (Multiplexed Rank DIMM) DDR5 memory module architecture. This cutting-edge technology delivers significant advances in memory bandwidth and capacity to support compute-intensive workloads including generative AI. What is MRDIMM? MRDIMM builds upon the existing DDR5 […]
The rapid evolution of artificial intelligence (AI) is transforming edge computing, and Sharad Chole, Co-founder and Chief Scientist at Expedera, discusses the implications. Expedera, a neural network IP provider, focuses on neural processing units (NPUs) for edge devices, emphasizing low-power operation, optimized bandwidth, and cost efficiency. In our latest episode of Ask the Experts, Sharad […]
PCIe 6.2 Switch: The Rambus PCI Express® (PCIe®) 6.2 Switch is a customizable, multiport embedded switch for PCIe designed for ASIC implementations. It enables the connection of one upstream port and multiple downstream ports as a fully configurable interface subsystem. It is backward compatible with PCIe 5.0. How the PCIe 6.2 Switch […]
Highlights:
- Introduces industry’s first Gen5 DDR5 RCD for RDIMMs at 8,000 MT/s, MRCD and MDB chips for next-generation MRDIMMs at 12,800 MT/s, and a second-generation server PMIC to support both
- Incorporates advanced clocking, control, and power management features needed for higher-capacity and higher-bandwidth modules operating at 8,000 MT/s and above
- Feeds insatiable demand for […]
[Live on 10/30 @ 9am PT] High Bandwidth Memory (HBM) has revolutionized AI, machine learning, and high-performance computing by significantly increasing data transfer speeds and alleviating performance bottlenecks. The introduction of next-generation HBM4 is especially transformative, enabling faster training and execution of complex AI models. JEDEC has announced that the HBM4 specification is nearing finalization. In […]
In this roundtable discussion, memory experts Steve Woo, John Eble, and Nidish Kamath explore the critical role of memory in AI applications. They discuss how AI’s rapid evolution, especially with the growth of large language models, is driving the need for higher memory capacity, bandwidth, and power efficiency.
Steven Woo, Fellow and Distinguished Inventor at Rambus Labs, explores the transformative impact of AI 2.0 on memory and interconnect technology. He highlights how the rapid growth of AI models, exceeding trillions of parameters, and the shift to multimodal systems have dramatically increased the demand for higher memory capacity and bandwidth.
The Rambus HBM4 Controller is designed to support customers in deploying a new generation of HBM memory for cutting-edge AI accelerators, graphics and high-performance computing (HPC) applications. The Rambus HBM4 Controller supports a data rate of 10 Gigabits per second (Gbps) and delivers a total memory bandwidth of 2,560 Gigabytes per second (GB/s) or 2.56 […]
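The quoted bandwidth figure follows directly from the per-pin data rate and the interface width. The sketch below checks the arithmetic; it assumes (not stated in the snippet above) the 2048-bit-wide interface defined in the JEDEC HBM4 specification:

```python
# Back-of-the-envelope check of the quoted HBM4 bandwidth figure.
# Assumption: a 2048-bit HBM4 interface (per the JEDEC HBM4 spec);
# the 10 Gbps per-pin data rate is taken from the text above.

DATA_RATE_GBPS = 10     # per-pin data rate, gigabits per second
BUS_WIDTH_BITS = 2048   # assumed HBM4 interface width

# Total bandwidth = width * per-pin rate, converted from bits to bytes.
bandwidth_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8   # gigabytes per second
bandwidth_tbs = bandwidth_gbs / 1000                  # terabytes per second

print(f"{bandwidth_gbs:.0f} GB/s = {bandwidth_tbs:.2f} TB/s")
```

This reproduces the 2,560 GB/s (2.56 TB/s) figure from the snippet.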