
AI Accelerates HBM Momentum

https://www.rambus.com/blogs/ai-accelerates-hbm-momentum/

In a recent EE Times article, Gary Hilson notes that high bandwidth memory (HBM) deployments are becoming more mainstream due to the massive growth and diversity in artificial intelligence (AI) applications. “HBM is [now] less than niche. It’s even become less expensive, but it’s still a premium memory and requires expertise to implement,” writes Hilson. […]

How Rambus is Making Data Faster and Safer in 2022 and Beyond 

https://www.rambus.com/blogs/rambus-2021-wrapped/

Throughout 2021 and early 2022, Rambus has continued to make data faster and safer with the launch of key products, industry initiatives, and strategic partnerships. To address the insatiable demand for more bandwidth in the data center, we announced our 8.4 Gbps HBM3-Ready Memory Subsystem, confirmed the sampling of our DDR5 5600 MT/s 2nd-generation RCD chip, demonstrated our PCI Express® (PCIe) 5.0 digital controller IP on leading FPGA platforms, and unveiled our CXL Memory Interconnect Initiative. Looking ahead to 2022 and beyond, these products, initiatives, and partnerships will help power the […]

Side-channel attacks explained: everything you need to know

https://www.rambus.com/blogs/side-channel-attacks/

In this blog post, we take an in-depth look at the world of side-channel attacks. We describe how side-channel attacks work and detail some of the most common attack methodologies. We also explore differential power analysis (DPA), an extremely powerful side-channel attack capable of obtaining and analyzing statistical measurements across multiple operations. In addition, we […]
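
For a concrete sense of the statistical analysis DPA performs, here is a minimal difference-of-means sketch in Python/NumPy. It assumes an array of recorded power traces and the known plaintext bytes they correspond to, and, for brevity, it targets one bit of plaintext XOR key-guess instead of the S-box output a real attack would use; it illustrates the textbook idea, not the methodology described in the post.

    import numpy as np

    def dpa_difference_of_means(traces, plaintexts, target_bit=0):
        """Rank 8-bit key guesses by the height of their difference-of-means peak.

        traces:     (n_traces, n_samples) float array of power measurements
        plaintexts: (n_traces,) uint8 array of known input bytes
        """
        scores = np.zeros(256)
        for guess in range(256):
            # Hypothesized intermediate value under this key guess
            # (plaintext XOR guess; a stand-in for the S-box output a real attack targets).
            hyp = np.bitwise_xor(plaintexts, guess)
            bit = (hyp >> target_bit) & 1              # partition selector for each trace
            diff = traces[bit == 1].mean(axis=0) - traces[bit == 0].mean(axis=0)
            scores[guess] = np.abs(diff).max()         # height of the DPA peak
        return int(np.argmax(scores))                  # most likely key byte

Because XOR is linear, this toy selection function can only distinguish the targeted key bit; practical attacks partition on a nonlinear intermediate such as an S-box output so that every key-byte guess produces a distinct partition.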

Delivering Terabyte-Scale Bandwidth with HBM3-Ready Memory Subsystem

https://www.rambus.com/blogs/hbm3-memory-subsystem/

An exponential rise in data volume, together with the meteoric growth of advanced workloads like AI/ML training, requires constant innovation in all aspects of computing. Memory bandwidth is critical to unleashing the power of processors and accelerators, and the High Bandwidth Memory (HBM) standard has evolved rapidly to deliver the performance required by the most demanding applications. For current-generation HBM2E, Rambus […]

Rambus Advances AI/ML Performance with 8.4 Gbps HBM3-Ready Memory Subsystem

https://www.rambus.com/rambus-advances-ai-ml-performance-with-hbm3-ready-memory-subsystem/

Highlights:
- Provides HBM3-ready memory subsystem solution consisting of fully-integrated PHY and digital controller
- Supports data rates up to 8.4 Gigabits per second (Gbps), enabling terabyte-scale bandwidth accelerators for artificial intelligence/machine learning (AI/ML) and high-performance computing (HPC) applications
- Leverages market-leading HBM2/2E experience and installed base to speed implementation of customer designs using next-generation HBM3 memory

SAN JOSE, […]
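
For a rough sense of where “terabyte-scale” comes from, the back-of-the-envelope calculation below combines the announced 8.4 Gbps per-pin data rate with the standard 1024-bit HBM interface width. It is an estimate for orientation, not a figure from the Rambus announcement.

    # Approximate per-stack HBM3 bandwidth at an 8.4 Gbps per-pin data rate.
    # Assumes the standard 1024-bit HBM interface width (illustrative estimate only).
    data_rate_gbps = 8.4                 # Gbit/s per pin
    interface_width_bits = 1024          # bits per HBM3 stack
    bandwidth_gb_per_s = data_rate_gbps * interface_width_bits / 8
    print(f"{bandwidth_gb_per_s:.1f} GB/s per stack")   # ~1075.2 GB/s, i.e. just over 1 TB/s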

HBM2E data rate: Now up to 4 Gbps (20% Increase)

https://www.rambus.com/blogs/hbm2e-data-rate/

Joseph Rodriguez, senior product marketing engineer for IP cores at Rambus, has written an article for Semiconductor Engineering that explores the company’s recent achievement of reaching 4 gigabits per second (Gbps) data rate with its HBM2E memory interface. The milestone – which was demonstrated in silicon – required mastering substantial signal integrity and power integrity […]
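
Applying the same back-of-the-envelope arithmetic to the 4 Gbps HBM2E milestone (again assuming the standard 1024-bit HBM interface width; an estimate, not a number from the article):

    # Approximate per-stack HBM2E bandwidth at a 4 Gbps per-pin data rate.
    # Assumes the standard 1024-bit HBM interface width (illustrative estimate only).
    data_rate_gbps = 4.0                 # Gbit/s per pin
    interface_width_bits = 1024          # bits per HBM2E stack
    print(data_rate_gbps * interface_width_bits / 8, "GB/s per stack")   # 512.0 GB/s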

CXL Memory Initiative

https://www.rambus.com/cxl-memory-initiative/

CXL Memory Initiative: Enabling new memory tiers for breakthrough server performance. Data centers face three major memory challenges as roadblocks to greater performance and total cost of ownership (TCO). The first of these is the limitations of the current server memory hierarchy. There is a three […]

Avery Design Systems and Rambus Extend Memory Model and PCIe® VIP Collaboration

https://www.rambus.com/avery-design-systems-and-rambus-extend-memory-model-and-pcie-vip-collaboration/

Tewksbury, Mass. and San Jose, Calif. – May 19, 2021 – Avery Design Systems, a leader in functional verification solutions, and Rambus Inc. (NASDAQ: RMBS), a provider of industry-leading chips and silicon IP making data faster and safer, announced today that they are extending their long-term memory model and PCIe® Verification IP (VIP) collaboration. Rambus utilizes Avery’s high-quality, full-featured memory models to verify its memory […]

Stacking memory for AI/ML training with HBM2E

https://www.rambus.com/blogs/stacking-memory-for-ai-ml-training-with-hbm2e/

Frank Ferro, Senior Director of Product Management at Rambus, recently penned an article for Semiconductor Engineering that takes a closer look at high bandwidth memory (HBM) and 2.5D (stacking) architecture for AI/ML training. As Ferro notes, the impact of AI/ML grows daily, touching nearly every industry across the globe. “In marketing, healthcare, retail, transportation, manufacturing […]

Rambus Expands High-Performance Memory Subsystem Offerings with HBM2E Solution on Samsung 14/11nm

https://www.rambus.com/rambus-expands-high-performance-memory-subsystem-offerings-with-hbm2e-solution-on-samsung-14-11nm/

Highlights:
- Supports accelerators requiring terabyte-scale bandwidth for artificial intelligence/machine learning (AI/ML) training applications
- Fully-integrated HBM2E memory interface subsystem, consisting of verified PHY and controller, silicon proven on advanced Samsung 14/11nm FinFET process
- Backed by unrivaled system expertise supporting customers with interposer and package reference designs to speed time to market

SAN JOSE, Calif. – April 21, 2021 – Rambus […]