By Steven Woo, Rambus Fellow
Supercomputing 2023 brought together some of the brightest minds in the field of high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances in computing. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for […]
Today (Nov. 14th, 2023) the CXL™ Consortium announced the continued evolution of the Compute Express Link™ standard with the release of the 3.1 specification. CXL 3.1, backward compatible with all previous generations, improves fabric manageability, further optimizes resource utilization, enables trusted compute environments, extends memory sharing and pooling to avoid stranded memory, and facilitates memory […]
Delivered strong Q3 results with revenue and earnings above the midpoint of guidance
Generated $51.6 million in cash from operations and completed $100.0 million accelerated share repurchase program
Completed the sale of the PHY IP business, strengthening focus on chips and digital IP
Produced quarterly product revenue of $52.2 million driven by memory interface chips […]
As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive […]
Highlights:
Enhances Rambus high-performance memory IP portfolio for AI/ML and other advanced data center workloads
Supports future evolution of HBM3 memory standard with up to 9.6 Gbps data rates
Enables industry-leading memory throughput of over 1.2 TB/s
SAN JOSE, Calif. – Oct. 25, 2023 – Rambus Inc. (NASDAQ: RMBS), a premier chip and silicon IP provider […]
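The headline figures above are consistent with each other: at a 9.6 Gbps per-pin data rate, the standard 1024-bit HBM3 device interface (an assumption here; the interface width is not stated in the snippet) works out to roughly 1.2 TB/s of peak throughput:

```python
# Peak HBM3/HBM3E bandwidth estimate.
# Assumption: the JEDEC-standard 1024-bit device interface (not stated in the announcement).
data_rate_gbps = 9.6          # per-pin data rate in Gb/s, from the announcement
interface_width_bits = 1024   # assumed HBM3 device interface width

bandwidth_gbytes = data_rate_gbps * interface_width_bits / 8  # GB/s
print(f"peak throughput: {bandwidth_gbytes / 1000:.2f} TB/s")  # -> peak throughput: 1.23 TB/s
```

This back-of-the-envelope peak matches the "over 1.2 TB/s" claim; sustained throughput in a real controller depends on access patterns and refresh overhead.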
HBM3E / HBM3 Controller IP
The Rambus HBM3E / HBM3 controller cores are designed for use in applications requiring high memory bandwidth and low latency, including AI/ML, HPC, advanced data center workloads and graphics. The HBM3E Memory Subsystem: HBM3E is a high-performance memory that features reduced power consumption […]
By Steven Woo, Rambus Fellow
Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts that brought […]
Introduction What’s new about PCI Express 5 (PCIe 5)? The latest PCI Express standard, PCIe 5, represents a doubling of speed over the PCIe 4.0 specification. We’re talking about 32 Gigatransfers per second (GT/s) vs. 16 GT/s, with an aggregate x16 link duplex bandwidth of almost 128 Gigabytes per second (GB/s). This speed boost is needed […]
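The "almost 128 GB/s" figure can be derived from the per-lane rate. A minimal sketch, assuming the 128b/130b line encoding that PCI Express has used since the 3.0 generation:

```python
# PCIe 5.0 x16 aggregate bandwidth estimate.
gt_per_s = 32.0        # PCIe 5.0 raw transfer rate per lane (GT/s)
encoding = 128 / 130   # 128b/130b line encoding (used since PCIe 3.0)
lanes = 16

per_direction_gbytes = gt_per_s * encoding * lanes / 8   # GB/s in one direction
duplex_gbytes = 2 * per_direction_gbytes                 # both directions combined
print(f"x16 duplex: {duplex_gbytes:.1f} GB/s")           # -> x16 duplex: 126.0 GB/s
```

About 63 GB/s per direction, or roughly 126 GB/s aggregate duplex, which is why the article rounds to "almost 128 GB/s" (protocol overheads reduce usable throughput a little further).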
The bandwidth and low power characteristics of LPDDR make it an increasingly attractive choice of memory for applications in IoT, automotive, and edge computing. LPDDR5X takes performance to the next level with a data rate of up to 8.5 Gbps. Join Vinitha Seevaratnam to learn which applications can benefit from using LPDDR memory.
GDDR6 memory offers an impressive combination of bandwidth, capacity, latency and power. Frank Ferro will discuss how these features make it the ideal memory choice for AI/ML inference at the edge and highlight some of the key design considerations you need to keep in mind when implementing GDDR6 memory at ultra-high data rates.