The rapid evolution of artificial intelligence (AI) is transforming edge computing, and Sharad Chole, Co-founder and Chief Scientist at Expedera, discusses the implications. Expedera, a neural network IP provider, focuses on neural processing units (NPUs) for edge devices, emphasizing low-power operation, bandwidth optimization, and cost efficiency. In our latest episode of Ask the Experts, Sharad […]
PCIe 6.2 Switch: The Rambus PCI Express® (PCIe®) 6.2 Switch is a customizable, multiport embedded switch for PCIe designed for ASIC implementations. It enables the connection of one upstream port and multiple downstream ports as a fully configurable interface subsystem. It is backward compatible with PCIe 5.0. How the PCIe 6.2 Switch […]
DDR5 Multiplexed Registering Clock Driver (MRCD) and Multiplexed Data Buffer (MDB): Delivering industry-leading memory bandwidth and capacity. The Rambus DDR5 Multiplexed Registering Clock Driver (MRCD) and Multiplexed Data Buffer (MDB) enable industry-standard DDR5 Multiplexed Rank DIMMs (MRDIMMs) operating at data rates up to 12,800 MT/s. […]
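For a rough sense of what that transfer rate implies, the sketch below converts 12,800 MT/s into peak per-module bandwidth. It assumes a standard 64-bit DDR5 data path per DIMM with ECC bits excluded; that width is an assumption for illustration and is not stated in the excerpt above.

```python
# Rough peak-bandwidth estimate for a DDR5 MRDIMM at 12,800 MT/s.
# Assumption (not stated above): 64 data bits (8 bytes) per transfer, ECC excluded.

transfers_per_second = 12_800_000_000   # 12,800 MT/s, from the text
bytes_per_transfer = 64 // 8            # assumed 64-bit data width

peak_bandwidth_gb_s = transfers_per_second * bytes_per_transfer / 1e9
print(f"~{peak_bandwidth_gb_s:.1f} GB/s per MRDIMM")   # ~102.4 GB/s
```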
Highlights:
- Introduces industry’s first Gen5 DDR5 RCD for RDIMMs at 8,000 MT/s, MRCD and MDB chips for next-generation MRDIMMs at 12,800 MT/s, and a second-generation server PMIC to support both
- Incorporates advanced clocking, control, and power management features needed for higher-capacity and higher-bandwidth modules operating at 8,000 MT/s and above
- Feeds insatiable demand for […]
[Live on 10/30 @ 9am PT] High Bandwidth Memory (HBM) has revolutionized AI, machine learning, and high-performance computing by significantly increasing data transfer speeds and alleviating performance bottlenecks. The introduction of next-generation HBM4 is especially transformative, enabling faster training and execution of complex AI models. JEDEC has announced that the HBM4 specification is nearing finalization. In […]
In this webinar, join Nidish Kamath, Director of Product Management for Rambus Memory Controller IP, as he dives into the HBM, GDDR, and LPDDR solutions that address AI training and inference workload requirements.
In this webinar, Carlos Weissenberg, Product Marketing Manager for Memory Interface Chips at Rambus, discusses the increasing demands for memory driven by AI and high-performance computing.
In this roundtable discussion, memory experts Steve Woo, John Eble, and Nidish Kamath explore the critical role of memory in AI applications. They discuss how AI’s rapid evolution, especially with the growth of large language models, is driving the need for higher memory capacity, bandwidth, and power efficiency.
Steven Woo, Fellow and Distinguished Inventor at Rambus Labs, explores the transformative impact of AI 2.0 on memory and interconnect technology. He highlights how the rapid growth of AI models, now exceeding trillions of parameters, and the shift to multimodal systems have dramatically increased the demand for higher memory capacity and bandwidth.
The Rambus HBM4 Controller is designed to support customers deploying a new generation of HBM memory for cutting-edge AI accelerators, graphics, and high-performance computing (HPC) applications. The Rambus HBM4 Controller supports a data rate of 10 Gigabits per second (Gbps) and delivers a total memory bandwidth of 2,560 Gigabytes per second (GB/s) or 2.56 […]
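The quoted bandwidth follows directly from the per-pin data rate and the width of the HBM interface. The sketch below reproduces the arithmetic, assuming a 2048-bit HBM4 device interface; the width is an assumption for illustration and is not stated in the excerpt above.

```python
# Reproducing the quoted HBM4 bandwidth figure.
# Assumption (not stated above): a 2048-bit HBM4 device interface.

data_rate_gbps_per_pin = 10        # Gb/s per pin, from the text
interface_width_bits = 2048        # assumed HBM4 interface width

bandwidth_gb_s = data_rate_gbps_per_pin * interface_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")   # 2560 GB/s, i.e. 2.56 TB/s
```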