Exploring Exascale Computing: Insights from SC23

https://www.rambus.com/blogs/exploring-exascale-computing-insights-from-sc23/

By Steven Woo, Rambus Fellow. Supercomputing 2023 brought together some of the brightest minds in high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for […]

CXL 3.1: What’s Next for CXL-based Memory in the Data Center

https://www.rambus.com/blogs/cxl-3-1-whats-next-for-cxl-based-memory-in-the-data-center/

Today (Nov. 14th, 2023) the CXL™ Consortium announced the continued evolution of the Compute Express Link™ standard with the release of the 3.1 specification. CXL 3.1, backward compatible with all previous generations, improves fabric manageability, further optimizes resource utilization, enables trusted compute environments, extends memory sharing and pooling to avoid stranded memory, and facilitates memory […]

Rambus Reports Third Quarter 2023 Financial Results

https://www.rambus.com/third-quarter-2023-financial-results/

Delivered strong Q3 results with revenue and earnings above the midpoint of guidance. Generated $51.6 million in cash from operations and completed a $100.0 million accelerated share repurchase program. Completed the sale of the PHY IP business, strengthening focus on chips and digital IP. Produced quarterly product revenue of $52.2 million driven by memory interface chips […]

Rambus HBM3 Controller IP Gives AI Training a New Boost

https://www.rambus.com/blogs/rambus-hbm3-controller-ip-gives-ai-training-a-new-boost/

As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive […]

Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP

https://www.rambus.com/rambus-boosts-ai-performance-with-9-6-gbps-hbm3-memory-controller-ip/

Highlights: Enhances the Rambus high-performance memory IP portfolio for AI/ML and other advanced data center workloads. Supports future evolution of the HBM3 memory standard with data rates of up to 9.6 Gbps. Enables industry-leading memory throughput of over 1.2 TB/s. SAN JOSE, Calif. – Oct. 25, 2023 – Rambus Inc. (NASDAQ: RMBS), a premier chip and silicon IP provider […]
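The 1.2 TB/s figure follows directly from the quoted 9.6 Gbps per-pin rate and HBM3's 1024-bit device interface (a property of the JEDEC HBM3 standard, not stated in the excerpt above); a quick back-of-the-envelope check in plain Python:

```python
# HBM3 per-device throughput at the quoted 9.6 Gb/s per-pin rate.
# The 1024-bit interface width comes from the HBM3 standard.
data_rate_gbps = 9.6   # per-pin data rate (Gb/s)
bus_width_bits = 1024  # HBM3 device interface width (bits)

bandwidth_GBps = data_rate_gbps * bus_width_bits / 8  # Gb/s -> GB/s
print(round(bandwidth_GBps, 1))         # -> 1228.8 GB/s
print(round(bandwidth_GBps / 1000, 2))  # -> 1.23 TB/s
```

This also matches the "over 1.23 TB/s" figure quoted for the HBM3 Memory Controller elsewhere on this page.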

HBM3E Controller

https://www.rambus.com/interface-ip/hbm/hbm3-controller/

The Rambus HBM3E / HBM3 controller cores are designed for use in applications requiring high memory bandwidth and low latency, including AI/ML, HPC, advanced data center workloads, and graphics. HBM3E is a high-performance memory that features reduced power consumption […]

Memory Key to Enabling AI: A Recap of AI Hardware Summit

https://www.rambus.com/blogs/focus-on-memory-at-ai-hardware-summit/

By Steven Woo, Rambus Fellow. Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought […]

PCI Express 5 vs. 4: What’s New? [Everything You Need to Know]

https://www.rambus.com/blogs/pci-express-5-vs-4/

Introduction What’s new about PCI Express 5 (PCIe 5)? The latest PCI Express standard, PCIe 5, doubles the speed of the PCIe 4.0 specification. We’re talking about 32 Gigatransfers per second (GT/s) vs. 16 GT/s, with an aggregate x16 link duplex bandwidth of almost 128 Gigabytes per second (GB/s). This speed boost is needed […]
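The quoted figures can be checked with simple arithmetic. The sketch below (plain Python) assumes the 128b/130b line encoding that PCIe has used since the 3.0 generation; the raw x16 duplex rate is exactly 128 GB/s, and the usable bandwidth after encoding overhead lands just below it, matching the "almost 128 GB/s" above.

```python
# Back-of-the-envelope PCIe 5.0 x16 bandwidth check.
# Assumes the standard 128b/130b line encoding (PCIe 3.0 and later).
RAW_RATE_GT_S = 32       # PCIe 5.0 per-lane rate; PCIe 4.0 is 16 GT/s
LANES = 16               # x16 link
ENCODING = 128 / 130     # usable fraction after 128b/130b encoding

per_direction_GBps = RAW_RATE_GT_S * LANES * ENCODING / 8  # Gb/s -> GB/s
duplex_GBps = 2 * per_direction_GBps                       # both directions

print(round(per_direction_GBps, 1))  # -> 63.0
print(round(duplex_GBps, 1))         # -> 126.0
```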

LPDDR5X: Delivering High Bandwidth and Power Efficiency

https://go.rambus.com/lpddr5x-delivering-high-bandwidth-and-power-efficiency#new_tab

The bandwidth and low power characteristics of LPDDR make it an increasingly attractive choice of memory for applications in IoT, automotive, and edge computing. LPDDR5X takes performance to the next level with a data rate of up to 8.5 Gbps. Join Vinitha Seevaratnam to learn which applications can benefit from using LPDDR memory.
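For a sense of what the 8.5 Gbps data rate means in bandwidth terms, here is a minimal sketch; the 16-bit channel width is standard for LPDDR5X, while the four-channel total is an illustrative configuration, not one quoted above:

```python
# LPDDR5X bandwidth sketch at the quoted 8.5 Gb/s per-pin rate.
data_rate_gbps = 8.5   # per-pin data rate (Gb/s)
channel_width = 16     # bits per LPDDR5X channel (standard)
channels = 4           # illustrative 4-channel (64-bit) configuration

per_channel_GBps = data_rate_gbps * channel_width / 8  # -> 17.0 GB/s
total_GBps = per_channel_GBps * channels               # -> 68.0 GB/s
print(per_channel_GBps, total_GBps)
```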

Powering AI/ML Inference with GDDR6 Memory

https://go.rambus.com/powering-ai-ml-inference-with-gddr6-memory#new_tab

GDDR6 memory offers an impressive combination of bandwidth, capacity, latency and power. Frank Ferro will discuss how these features make it the ideal memory choice for AI/ML inference at the edge and highlight some of the key design considerations you need to keep in mind when implementing GDDR6 memory at ultra-high data rates.
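As a rough sense of the bandwidth GDDR6 delivers, the sketch below assumes the standard 32-bit GDDR6 device interface and an illustrative 16 Gb/s per-pin rate (actual GDDR6 parts span a range of speeds); the eight-device board is likewise a hypothetical configuration:

```python
# GDDR6 per-device bandwidth at an illustrative 16 Gb/s per-pin rate.
data_rate_gbps = 16    # per-pin data rate (Gb/s); GDDR6 parts vary
device_width = 32      # bits per GDDR6 device interface (standard)

per_device_GBps = data_rate_gbps * device_width / 8  # -> 64.0 GB/s
# A hypothetical 256-bit board built from 8 such devices:
board_GBps = per_device_GBps * 8                     # -> 512.0 GB/s
print(per_device_GBps, board_GBps)
```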
