Found 226 Results
[Last updated on: January 23, 2024] In this blog post, we take an in-depth look at Compute Express Link® (CXL®), an open-standard, cache-coherent interconnect between processors and accelerators, smart NICs, and memory devices. We explore how CXL can help data centers more efficiently handle the tremendous memory performance demands of generative AI and other advanced workloads. […]
By Steven Woo, Rambus Fellow. Supercomputing 2023 brought together some of the brightest minds in the field of high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for […]
[Updated December 7th, 2023] Post-Quantum Cryptography (PQC), also known as Quantum Safe Cryptography (QSC), refers to cryptographic algorithms designed to withstand attacks by quantum computers. Quantum computers will eventually become powerful enough to break public key-based cryptography, also known as asymmetric cryptography. Public key-based cryptography is used to protect everything from your online communications to […]
As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive […]
Highlights: Enhances Rambus high-performance memory IP portfolio for AI/ML and other advanced data center workloads; supports future evolution of the HBM3 memory standard with up to 9.6 Gbps data rates; enables industry-leading memory throughput of over 1.2 TB/s. SAN JOSE, Calif. – Oct. 25, 2023 – Rambus Inc. (NASDAQ: RMBS), a premier chip and silicon IP provider […]
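As a quick sanity check on the throughput figure above, the sketch below multiplies the 9.6 Gbps per-pin data rate quoted in the announcement by the standard 1024-bit HBM3 device interface width. This is a back-of-the-envelope calculation only; the variable names are illustrative and not taken from any Rambus documentation.

```python
# Back-of-the-envelope check of the quoted HBM3 controller throughput.
# Assumes the standard 1024-bit-wide HBM3 device interface; the 9.6 Gbps
# per-pin data rate is the figure quoted in the announcement above.

data_rate_gbps_per_pin = 9.6     # Gbps per data pin
interface_width_bits = 1024      # HBM3 interface width in bits (assumed standard width)

total_gbps = data_rate_gbps_per_pin * interface_width_bits   # aggregate bit rate
throughput_tb_per_s = total_gbps / 8 / 1000                  # bits -> bytes, GB -> TB

print(f"{throughput_tb_per_s:.2f} TB/s")   # ~1.23 TB/s, i.e. "over 1.2 TB/s"
```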
HBM3E / HBM3 Controller IP: The Rambus HBM3E / HBM3 controller cores are designed for use in applications requiring high memory bandwidth and low latency, including AI/ML, HPC, advanced data center workloads and graphics. The HBM3E Memory Subsystem: HBM3E is a high-performance memory that features reduced power consumption […]
Generative AI and other advanced workloads bring even greater urgency to accelerate the power of computing. Training models are scaling by an incredible 10X per year, with the largest now , and are showing no sign of slowing. At the same time, AI inference is pushing out from the data center to millions and ultimately […]
By Steven Woo, Rambus Fellow. Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought […]
Dr. Steven Woo, distinguished inventor and fellow at Rambus, will be moderating an upcoming panel at the AI Hardware Summit on Tuesday, September 12th, 2023, starting at 3:00pm PT at the Santa Clara Marriott. Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in […]
Introduction: What’s new about PCI Express 5 (PCIe 5)? The latest PCI Express standard, PCIe 5, represents a doubling of speed over the PCIe 4.0 specification. We’re talking about 32 Gigatransfers per second (GT/s) vs. 16 GT/s, with an aggregate x16 link duplex bandwidth of almost 128 Gigabytes per second (GB/s). This speed boost is needed […]
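The “almost 128 GB/s” figure can be checked the same way. The sketch below assumes PCIe 5.0’s 128b/130b line encoding and counts both directions of the full-duplex x16 link, ignoring packet and protocol overhead; the variable names are illustrative only.

```python
# Rough check of the "almost 128 GB/s" duplex bandwidth of a PCIe 5.0 x16 link.
# Assumes 128b/130b line encoding and counts both directions of the link;
# packet/protocol overhead beyond the encoding is ignored.

raw_rate_gt_per_s = 32.0          # GT/s per lane for PCIe 5.0
encoding_efficiency = 128 / 130   # 128b/130b line encoding
lanes = 16                        # x16 link
directions = 2                    # full duplex: transmit + receive

per_lane_gbps = raw_rate_gt_per_s * encoding_efficiency       # ~31.5 Gbps per lane
duplex_gb_per_s = per_lane_gbps * lanes * directions / 8      # bits -> bytes

print(f"{duplex_gb_per_s:.0f} GB/s")   # ~126 GB/s, i.e. "almost 128 GB/s"
```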