Supercomputing 2023 brought together some of the brightest minds in the field of high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances in computing. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for their valuable perspectives on the […]
As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive […]
Highlights: Enhances Rambus high-performance memory IP portfolio for AI/ML and other advanced data center workloads Supports future evolution of HBM3 memory standard with up to 9.6 Gbps data rates Enables industry-leading memory throughput of over 1.2 TB/s SAN JOSE, Calif. – Oct. 25, 2023 – Rambus Inc. (NASDAQ: RMBS), a premier chip and silicon IP provider […]
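The throughput figures above follow directly from the data rate: each HBM3 stack presents a 1024-bit-wide interface, so a 9.6 Gb/s per-pin rate works out to roughly 1.2 TB/s. A quick sanity check of that arithmetic (a sketch assuming the standard 1024-bit HBM3 stack interface; not taken from the announcement itself):

```python
# Back-of-the-envelope HBM3 bandwidth check.
# Assumption: the standard 1024-bit (128-byte) wide HBM3 stack interface.
data_rate_gbps = 9.6          # per-pin data rate, Gb/s (from the release)
interface_width_bits = 1024   # bits per HBM3 stack (assumed, per the HBM3 standard)

bandwidth_gb_per_s = data_rate_gbps * interface_width_bits / 8
print(f"{bandwidth_gb_per_s:.1f} GB/s")  # 1228.8 GB/s, i.e. over 1.2 TB/s
```

The same arithmetic at lower supported data rates (e.g. 6.4 Gb/s) yields 819.2 GB/s, which is why the data-rate headroom in the controller matters.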
HBM3E / HBM3 Controller IP The Rambus HBM3E / HBM3 controller cores are designed for use in applications requiring high memory bandwidth and low latency, including AI/ML, HPC, advanced data center workloads and graphics. The HBM3E Memory Subsystem: HBM3E is a high-performance memory that features reduced power consumption […]
Generative AI and other advanced workloads bring even greater urgency to accelerate the power of computing. Training models are scaling by an incredible 10X per year, with the largest now , and are showing no sign of slowing. At the same time, AI inference is pushing out from the data center to millions and ultimately […]
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing views on the importance […]
Dr. Steven Woo, distinguished inventor and fellow at Rambus, will be moderating an upcoming panel at the AI Hardware Summit on Tuesday, September 12th, 2023 starting at 3:00pm PT at the Santa Clara Marriott. Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in […]
Introduction What’s new about PCI Express 5 (PCIe 5)? The latest PCI Express standard, PCIe 5, represents a doubling of speed over the PCIe 4.0 specification. We’re talking about 32 Gigatransfers per second (GT/s) vs. 16 GT/s, with an aggregate x16 link duplex bandwidth of almost 128 Gigabytes per second (GB/s). This speed boost is needed […]
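The "almost 128 GB/s" figure in that snippet can be reproduced with simple arithmetic. A rough sketch (assuming PCIe 5's 128b/130b line encoding and ignoring protocol overheads such as TLP/DLLP headers):

```python
# PCIe 5.0 x16 bandwidth sketch.
# Assumptions: 128b/130b line encoding; TLP/DLLP protocol overhead ignored.
gt_per_s = 32         # transfers per second per lane (GT/s)
lanes = 16            # x16 link
encoding = 128 / 130  # usable bits per transferred bit under 128b/130b

per_direction_gb_s = gt_per_s * lanes * encoding / 8
duplex_gb_s = 2 * per_direction_gb_s
print(f"{per_direction_gb_s:.1f} GB/s per direction, {duplex_gb_s:.1f} GB/s duplex")
# ~63.0 GB/s per direction, ~126.0 GB/s duplex -- hence "almost 128 GB/s"
```

Running the same numbers at PCIe 4.0's 16 GT/s halves both figures, which is the doubling the snippet describes.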
Generative AI training models are growing in both size and sophistication at a lightning pace, requiring more and more bandwidth. With its unique 2.5D/3D architecture, HBM3 can deliver Terabytes per second of bandwidth at a system level. Join Frank Ferro to hear how HBM helps designers address the needs of state-of-the-art AI training models.
HBM Controller IP Delivering ultra high-bandwidth, low-latency memory performance. Rambus High-Bandwidth Memory (HBM) 4E/4, 3E/3 and 2E/2 controller IP provides high-bandwidth, low-latency memory performance for AI/ML, graphics and HPC applications. Supported versions (by maximum controller data rate): HBM4E at 16 Gb/s, HBM4 […]
