John Eble, Vice President of Product Marketing for Memory Interface Chips at Rambus, recently shared the latest developments in the MRDIMM (Multiplexed Rank DIMM) DDR5 memory module architecture. This cutting-edge technology brings significant advances in memory bandwidth and capacity to support compute-intensive workloads, including generative AI. What is MRDIMM? MRDIMM builds upon the existing DDR5 […]
In our latest episode of Ask the Experts, Sharad Chole, Co-founder and Chief Scientist at Expedera, shares his insights on the challenges and opportunities of deploying AI inference workloads at the edge. Key topics include:
- The exponential growth in AI model complexity
- Overcoming challenges in edge AI: memory and bandwidth
- The road ahead for AI […]
The rapid evolution of artificial intelligence (AI) is transforming edge computing, and Sharad Chole, Co-founder and Chief Scientist at Expedera, discusses the implications. Expedera, a neural network IP provider, focuses on neural processing units (NPUs) for edge devices, emphasizing low-power operation, optimized bandwidth, and cost efficiency. In our latest episode of Ask the Experts, Sharad […]
PCIe 6.2 Switch — The Rambus PCI Express® (PCIe®) 6.2 Switch is a customizable, multiport embedded switch for PCIe designed for ASIC implementations. It enables the connection of one upstream port and multiple downstream ports as a fully configurable interface subsystem, and it is backward compatible with PCIe 5.0. How the PCIe 6.2 Switch […]
DDR5 Multiplexed Registering Clock Driver (MRCD) and Multiplexed Data Buffer (MDB) — Delivering industry-leading memory bandwidth and capacity. The Rambus DDR5 Multiplexed Registering Clock Driver (MRCD) and Multiplexed Data Buffer (MDB) enable industry-standard DDR5 Multiplexed Rank DIMMs (MRDIMMs) operating at data rates up to 12,800 MT/s. […]
Highlights:
- Introduces industry’s first Gen5 DDR5 RCD for RDIMMs at 8,000 MT/s, MRCD and MDB chips for next-generation MRDIMMs at 12,800 MT/s, and a second-generation server PMIC to support both
- Incorporates advanced clocking, control, and power management features needed for higher-capacity and higher-bandwidth modules operating at 8,000 MT/s and above
- Feeds insatiable demand for […]
[Live on 10/30 @ 9am PT] High Bandwidth Memory (HBM) has revolutionized AI, machine learning, and High-Performance Computing by significantly increasing data transfer speeds and alleviating performance bottlenecks. The introduction of next-generation HBM4 is especially transformative, enabling faster training and execution of complex AI models. JEDEC has announced that the HBM4 specification is nearing finalization. In […]
In this webinar, Nidish Kamath, Director of Product Management for Rambus Memory Controller IP, dives into the HBM, GDDR, and LPDDR solutions that address the requirements of AI training and inference workloads.
In this webinar, Carlos Weissenberg, Product Marketing Manager for Memory Interface Chips at Rambus, discusses the increasing demands for memory driven by AI and high-performance computing.
In this roundtable discussion, memory experts Steve Woo, John Eble, and Nidish Kamath explore the critical role of memory in AI applications. They discuss how AI’s rapid evolution, especially with the growth of large language models, is driving the need for higher memory capacity, bandwidth, and power efficiency.