Found 388 Results

Power Management: A Key Enabler of Memory Performance

https://www.rambus.com/blogs/power-management-a-key-enabler-of-memory-performance/

In planning for DDR5, the industry laid out ambitious goals for memory bandwidth and capacity while aiming to keep power within the same envelope on a per-module basis. To achieve these goals, DDR5 required a smarter DIMM architecture: one that would embed more intelligence in the DIMM and increase its power efficiency. […]

Rambus Reports First Quarter 2024 Financial Results

https://www.rambus.com/first-quarter-2024-financial-results/

Delivered solid Q1 results and expanded leadership offerings for the data center; completed a $50.0 million accelerated share repurchase program; launched an industry-leading family of DDR5 PMICs for AI and traditional servers. SAN JOSE, Calif. – April 29, 2024 – Rambus Inc. (NASDAQ:RMBS), a provider of industry-leading chips and IP making data faster and safer, today reported […]

Rambus Expands Chipset for Advanced Data Center Memory Modules with DDR5 Server PMICs

https://www.rambus.com/rambus-expands-chipset-for-advanced-data-center-memory-modules-with-ddr5-server-pmics/

Highlights: Delivers an industry-leading DDR5 server PMIC for the highest-performance and highest-capacity memory modules required by AI and other advanced workloads; supports multiple generations of high-performance DDR5-based servers with a new family of PMICs; provides a complete memory interface chipset for DDR5 server memory modules, including RCD, PMIC, SPD Hub and Temperature Sensor ICs. SAN JOSE, […]

DDR5 vs DDR4 DRAM – All the Advantages & Design Challenges

https://www.rambus.com/blogs/get-ready-for-ddr5-dimm-chipsets/

[Last updated on: April 29, 2024] On July 14th, 2021, JEDEC announced the publication of the JESD79-5 DDR5 SDRAM standard, signaling the industry's transition to DDR5 server and client dual in-line memory modules (DIMMs). DDR5 memory brings a number of key performance gains to the table, as well as new design challenges. Computing system architects, designers, and […]

Rambus Advances AI 2.0 with GDDR7 Memory Controller IP

https://www.rambus.com/blogs/rambus-advances-ai-2-0-with-gddr7-memory-controller-ip/

As the latest addition to the Rambus portfolio of industry-leading interface and security digital IP for AI 2.0, the GDDR7 memory controller will provide the breakthrough memory throughput required by servers and clients in the next wave of AI inference. Memory Solutions for AI 2.0: AI 2.0 represents the revolutionary world of generative AI. AI […]

GDDR7 Controller

https://www.rambus.com/interface-ip/gddr/gddr7-controller/

The Rambus GDDR7 controller IP core is designed for use in applications requiring high memory throughput, including artificial intelligence/machine learning (AI/ML), graphics, and high-performance computing (HPC). It supports 160 Gigabytes per second (GB/s) of throughput for a GDDR7 memory device, enabling next-level performance for AI accelerators and GPUs using GDDR7 memory. […]
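
As a rough sanity check on the 160 GB/s figure, the short sketch below assumes a 32-bit-wide GDDR7 device running at 40 Gb/s per data pin; neither value appears in the excerpt above, so treat them as illustrative assumptions rather than the controller's stated configuration.

    # Back-of-envelope GDDR7 device throughput (assumed figures, not from the page above)
    data_pins = 32                             # assumed device interface width in bits
    gbps_per_pin = 40                          # assumed per-pin data rate in Gb/s
    device_gbps = data_pins * gbps_per_pin     # 1280 Gb/s aggregate
    device_gb_per_s = device_gbps / 8          # 160.0 GB/s, matching the quoted throughput
    print(device_gb_per_s)                     # -> 160.0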

[Infographic]: The Powerful Technologies that Enable Systems like ChatGPT to Thrive

https://www.rambus.com/blogs/infographic-the-powerful-technologies-that-enable-systems-like-chatgpt-to-thrive/

Generative AI has been making waves in the tech industry. Its capability to understand context and perform tasks like creating and summarizing content with astonishing accuracy in seconds showcases the cutting-edge potential that generative AI has to transform business processes. Have you ever thought about the technologies that enable generative AI, including ChatGPT and […]

HBM3E: Everything You Need to Know

https://www.rambus.com/blogs/hbm3-everything-you-need-to-know/

[Updated on February 15, 2024] AI training data sets continue to grow and require accelerators that support terabyte-scale bandwidth. Offering high memory bandwidth in a power-efficient solution, HBM3E has become a leading choice for AI training hardware. Find out why in our blog below. Table of Contents: What is HBM3E Memory? What is a 2.5D/3D […]
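
To make "terabyte-scale bandwidth" concrete, the sketch below works out a per-stack figure assuming an HBM3E interface that is 1024 bits wide and runs at 9.6 Gb/s per pin; these numbers are assumptions for illustration and are not stated in the excerpt above.

    # Back-of-envelope HBM3E per-stack bandwidth (assumed figures, not from the page above)
    interface_bits = 1024                       # assumed HBM3E stack interface width
    gbps_per_pin = 9.6                          # assumed per-pin data rate in Gb/s
    stack_gbps = interface_bits * gbps_per_pin  # 9830.4 Gb/s aggregate
    stack_tb_per_s = stack_gbps / 8 / 1000      # ~1.23 TB/s per stack
    print(round(stack_tb_per_s, 2))             # -> 1.23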

New CXL 3.1 Controller IP for Next-Generation Data Centers

https://www.rambus.com/blogs/new-cxl-3-1-controller-ip-for-next-generation-data-centers/

The AI boom is giving rise to profound changes in the data center; compute-intensive workloads are driving unprecedented demand for low-latency, high-bandwidth connectivity between CPUs, accelerators and storage. The Compute Express Link® (CXL®) interconnect offers new ways for data centers to enhance performance and efficiency. As data centers grapple with increasingly complex AI […]

Compute Express Link (CXL): All you need to know

https://www.rambus.com/blogs/compute-express-link/

[Last updated on: January 23, 2024] In this blog post, we take an in-depth look at Compute Express Link® (CXL®), an open standard cache-coherent interconnect between processors and accelerators, smart NICs, and memory devices. We explore how CXL can help data centers more efficiently handle the tremendous memory performance demands of generative AI and other advanced workloads. […]
