
Challenges and Opportunities of Deploying AI at the Edge with Expedera

https://www.rambus.com/blogs/ask-the-experts-ai-at-the-edge/

The rapid evolution of artificial intelligence (AI) is transforming edge computing, and Sharad Chole, Co-founder and Chief Scientist at Expedera, discusses the implications. Expedera, a neural network IP provider, focuses on neural processing units (NPUs) for edge devices, emphasizing low-power operation, bandwidth optimization, and cost efficiency. In our latest episode of Ask the Experts, Sharad […]

PCIe 6.2 Switch

https://www.rambus.com/interface-ip/pci-express/pcie6-2-switch/

The Rambus PCI Express® (PCIe®) 6.2 Switch is a customizable, multiport embedded switch for PCIe designed for ASIC implementations. It enables the connection of one upstream port and multiple downstream ports as a fully configurable interface subsystem. It is backward compatible with PCIe 5.0. How the PCIe 6.2 Switch […]

DDR5 Multiplexed Registering Clock Driver (MRCD) and Multiplexed Data Buffer (MDB)

https://www.rambus.com/memory-interface-chips/ddr5-dimm-chipset/ddr5-mrcd-and-mdb/

Delivering industry-leading memory bandwidth and capacity. The Rambus DDR5 Multiplexed Registering Clock Driver (MRCD) and Multiplexed Data Buffer (MDB) enable industry-standard DDR5 Multiplexed Rank DIMMs (MRDIMMs) operating at data rates up to 12,800 MT/s. […]

Rambus Unveils Industry-First Complete Chipsets for Next-Generation DDR5 MRDIMMs and RDIMMs to Deliver Breakthrough Performance for Data Center and AI

https://www.rambus.com/rambus-unveils-industry-first-complete-chipsets-for-next-generation-ddr5-mrdimms-and-rdimms-to-deliver-breakthrough-performance-for-data-center-and-ai/

Highlights:
- Introduces industry's first Gen5 DDR5 RCD for RDIMMs at 8,000 MT/s, MRCD and MDB chips for next-generation MRDIMMs at 12,800 MT/s, and a second-generation server PMIC to support both
- Incorporates advanced clocking, control, and power management features needed for higher capacity and bandwidth modules operating at 8,000 MT/s and above
- Feeds insatiable demand for […]

Verifying the next generation High Bandwidth Memory controllers for AI and HPC applications

https://event.on24.com/wcc/r/4719714/D55D39DCED7CBE83737212F4626F4C2E#new_tab

[Live on 10/30 @ 9am PT] High Bandwidth Memory (HBM) has revolutionized AI, machine learning, and high-performance computing by significantly increasing data transfer speeds and alleviating performance bottlenecks. The introduction of next-generation HBM4 is especially transformative, enabling faster training and execution of complex AI models. JEDEC has announced that the HBM4 specification is nearing finalization. In […]

From Training to Inference: HBM, GDDR & LPDDR Memory

https://go.rambus.com/from-training-to-inference-hbm-gddr-and-lpddr-memory#new_tab

Join Nidish Kamath, director of product management for Rambus Memory Controller IP, as he dives into the HBM, GDDR, and LPDDR solutions that address AI training and inference workload requirements in this webinar.

The Road Ahead for Main Memory in the Data Center

https://go.rambus.com/the-road-ahead-for-main-memory-in-the-data-center#new_tab

In this webinar, Carlos Weissenberg, Product Marketing Manager for Memory Interface Chips at Rambus, discusses the increasing demands for memory driven by AI and high-performance computing.

Why Memory Matters for AI

https://go.rambus.com/why-memory-matters-for-ai#new_tab

In this roundtable discussion, memory experts Steve Woo, John Eble, and Nidish Kamath explore the critical role of memory in AI applications. They discuss how AI’s rapid evolution, especially with the growth of large language models, is driving the need for higher memory capacity, bandwidth, and power efficiency.

The Impact of AI 2.0 on Memory & Interconnect Technology

https://go.rambus.com/the-impact-of-ai-2-0-on-memory-and-interconnect-technology#new_tab

Steven Woo, Fellow and Distinguished Inventor at Rambus Labs, explores the transformative impact of AI 2.0 on memory and interconnect technology. He highlights how the rapid growth of AI models, exceeding trillions of parameters, and the shift to multimodal systems have dramatically increased the demand for higher memory capacity and bandwidth.

HBM4 Controller Product Brief

https://go.rambus.com/hbm4-controller-product-brief#new_tab

The Rambus HBM4 Controller is designed to support customers with deploying a new generation of HBM memory for cutting-edge AI accelerators, graphics and high-performance computing (HPC) applications. The Rambus HBM4 Controller supports a data rate of 10 Gigabits per second (Gbps) and delivers a total memory bandwidth of 2,560 Gigabytes per second (GB/s) or 2.56 […]
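The quoted bandwidth follows directly from the per-pin data rate and the interface width. A minimal sanity check, assuming the 2048-bit device interface defined in the JEDEC HBM4 specification (the width is an assumption here; the 10 Gbps rate and 2,560 GB/s total are from the product brief):

```python
# Sanity-check the quoted HBM4 controller bandwidth figure.
# Assumption: 2048-bit HBM4 interface width (per JEDEC HBM4).
DATA_RATE_GBPS = 10      # per-pin data rate, gigabits per second (from the brief)
INTERFACE_WIDTH = 2048   # interface width in bits (assumed)

total_gb_per_s = DATA_RATE_GBPS * INTERFACE_WIDTH / 8  # bits -> bytes
print(total_gb_per_s)  # 2560.0, i.e. 2,560 GB/s or 2.56 TB/s
```

This matches the 2,560 GB/s (2.56 TB/s) figure stated in the brief.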
