
Exploring Exascale Computing: Insights from SC23

https://www.rambus.com/blogs/exploring-exascale-computing-insights-from-sc23/

Supercomputing 2023 brought together some of the brightest minds in the field of high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances in computing. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for their valuable perspectives on the […]

Rambus HBM3 Controller IP Gives AI Training a New Boost

https://www.rambus.com/blogs/rambus-hbm3-controller-ip-gives-ai-training-a-new-boost/

As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive […]

Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP

https://www.rambus.com/rambus-boosts-ai-performance-with-9-6-gbps-hbm3-memory-controller-ip/

Highlights: Enhances Rambus high-performance memory IP portfolio for AI/ML and other advanced data center workloads Supports future evolution of HBM3 memory standard with up to 9.6 Gbps data rates Enables industry-leading memory throughput of over 1.2 TB/s  SAN JOSE, Calif. – Oct. 25, 2023 – Rambus Inc. (NASDAQ: RMBS), a premier chip and silicon IP provider […]
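The headline figures above follow directly from the HBM3 interface arithmetic: a standard HBM3 device has a 1024-bit data interface, so at 9.6 Gb/s per pin the raw throughput is 9.6 × 1024 / 8 bits-to-bytes ≈ 1.23 TB/s. A minimal back-of-the-envelope sketch (the pin rate and bus width come from the announcement and the HBM3 standard; the variable names are illustrative):

```python
# Peak HBM3 device bandwidth = per-pin data rate x interface width / 8 (bits -> bytes)
data_rate_gbps = 9.6      # per-pin data rate from the announcement (Gb/s)
bus_width_bits = 1024     # standard HBM3 device interface width

bw_gb_s = data_rate_gbps * bus_width_bits / 8   # 1228.8 GB/s
bw_tb_s = bw_gb_s / 1000                        # ~1.23 TB/s, "over 1.2 TB/s"
```

This is peak raw bandwidth; sustained throughput depends on controller efficiency and access patterns.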

HBM3E Controller

https://www.rambus.com/interface-ip/hbm/hbm3-controller/

The Rambus HBM3E / HBM3 controller cores are designed for use in applications requiring high memory bandwidth and low latency, including AI/ML, HPC, advanced data center workloads and graphics. The HBM3E Memory Subsystem: HBM3E is a high-performance memory that features reduced power consumption […]

Rambus Joins Arm Total Design

https://www.rambus.com/blogs/rambus-joins-arm-total-design/

Generative AI and other advanced workloads bring even greater urgency to accelerate the power of computing. Training models are scaling by an incredible 10X per year, with the largest now , and are showing no sign of slowing. At the same time, AI inference is pushing out from the data center to millions and ultimately […]

Memory Key to Enabling AI: A Recap of AI Hardware Summit

https://www.rambus.com/blogs/focus-on-memory-at-ai-hardware-summit/

Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing views on the importance […]

Rambus Moderates Panel on “Memory Challenges for Next-Generation AI/ML Computing” at AI Hardware Summit

https://www.rambus.com/blogs/rambus-moderates-panel-on-memory-challenges-for-next-generation-ai-ml-computing-at-ai-hardware-summit/

Dr. Steven Woo, distinguished inventor and fellow at Rambus, will be moderating an upcoming panel at the AI Hardware Summit on Tuesday, September 12th, 2023, starting at 3:00pm PT at the Santa Clara Marriott. Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in […]

PCI Express 5 vs. 4: What’s New? [Everything You Need to Know]

https://www.rambus.com/blogs/pci-express-5-vs-4/

Introduction What’s new about PCI Express 5 (PCIe 5)? The latest PCI Express standard, PCIe 5, represents a doubling of speed over the PCIe 4 specification. We’re talking about 32 Gigatransfers per second (GT/s) vs. 16 GT/s, with an aggregate x16 link duplex bandwidth of almost 128 Gigabytes per second (GB/s). This speed boost is needed […]
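The numbers in the excerpt can be checked with the standard PCIe bandwidth arithmetic: both PCIe 4 and PCIe 5 use 128b/130b encoding, so usable bytes per lane per direction are transfer rate × 128/130 ÷ 8. A small sketch under those assumptions (the helper function name is illustrative, not from any library):

```python
def pcie_bandwidth_gb_s(gt_per_s, lanes, encoding=128 / 130):
    """Per-direction usable bandwidth in GB/s for a PCIe link.

    transfers/s x encoding efficiency (128b/130b for PCIe 3/4/5) / 8 bits-per-byte,
    times the number of lanes.
    """
    return gt_per_s * encoding * lanes / 8

pcie4_x16 = pcie_bandwidth_gb_s(16, 16)   # ~31.5 GB/s per direction
pcie5_x16 = pcie_bandwidth_gb_s(32, 16)   # ~63.0 GB/s per direction
duplex    = 2 * pcie5_x16                 # ~126 GB/s, i.e. "almost 128 GB/s"
```

The doubling of the transfer rate (16 → 32 GT/s) carries straight through to the link bandwidth because the encoding overhead is unchanged between the two generations.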

Meeting the Needs of Generative AI Training with HBM3

https://www.rambus.com/meeting-the-needs-of-generative-ai-training-with-hbm3/

Generative AI training models are growing in both size and sophistication at a lightning pace, requiring more and more bandwidth. With its unique 2.5D/3D architecture, HBM3 can deliver Terabytes per second of bandwidth at a system level. Join Frank Ferro to hear how HBM helps designers address the needs of state-of-the-art AI training models.

HBM Controller IP

https://www.rambus.com/interface-ip/hbm/

HBM Controller IP: Delivering ultra high-bandwidth, low-latency memory performance. Rambus High-Bandwidth Memory (HBM) 4E/4, 3E/3 and 2E/2 controller IP provide high-bandwidth, low-latency memory performance for AI/ML, graphics and HPC applications. Version / Maximum Data Rate (Gb/s): HBM4E 16 […]
