
Boosting Data Center Performance to the Next Level with PCIe 6.0 & CXL 3.0

https://www.rambus.com/blogs/boosting-data-center-performance-to-the-next-level-with-pcie-6-0-cxl-3-0/

2022 has seen major updates to two standards critical to the future evolution of the data center: PCI Express® (PCIe®) and Compute Express Link™ (CXL™). The two are interwoven, and in this blog, we’ll look at their relationship and the impact of the latest developments. Like many standards in the computing world, PCIe has proliferated far […]

Rambus Delivers PCIe 6.0 Interface Subsystem for High-Performance Data Center and AI SoCs

https://www.rambus.com/rambus-delivers-pcie6-interface-subsystem-for-high-performance-data-center-and-ai-socs/

Highlights:
- Delivers data rates of up to 64 GT/s for high-performance workloads
- Supports the full feature set of PCIe 6.0 with PHY support for CXL 3.0
- Offers a complete IP solution optimized for latency, power, and area
- Provides cutting-edge security to protect valuable data assets

SAN JOSE, Calif. – Oct. 24, 2022 – Rambus Inc. (NASDAQ: RMBS), […]
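
The headline 64 GT/s figure translates directly into link bandwidth once a lane count is chosen. The sketch below is a back-of-the-envelope illustration only, not part of the Rambus announcement; the x16 lane count and the decision to ignore flit/FEC and protocol overhead are assumptions made for the example.

```python
# Illustrative sketch (not Rambus code): peak raw throughput of a PCIe link
# from its per-lane transfer rate. PCIe 6.0 signals at 64 GT/s per lane using
# PAM4, so each transfer carries one bit per lane per direction; flit/FEC and
# protocol overhead are ignored here.

def pcie_raw_bandwidth_gb_per_s(transfer_rate_gt_s: float, lanes: int) -> float:
    """Raw per-direction bandwidth in GB/s, ignoring encoding overhead."""
    bits_per_second = transfer_rate_gt_s * 1e9 * lanes  # 1 bit per transfer per lane
    return bits_per_second / 8 / 1e9                     # convert to gigabytes per second

if __name__ == "__main__":
    # A hypothetical x16 PCIe 6.0 link: 64 GT/s * 16 lanes ~= 128 GB/s per direction
    print(pcie_raw_bandwidth_gb_per_s(64, 16))  # -> 128.0
```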

AI Hardware Summit Event Recap: Interview with Steven Woo

https://www.rambus.com/blogs/ai-hardware-summit-event-recap-interview-with-steven-woo/

The fifth annual AI Hardware Summit was back this month, and for the first time in a couple of years, it took place fully in-person in Santa Clara, California. The world’s leading experts in AI hardware came together over the course of three days to discuss some of the big challenges facing the industry, and […]

PCI Express Glossary

https://www.rambus.com/interface-ip/pci-express-glossary/

An A-to-Z glossary of PCI Express terms, beginning with ACS […]

CXL Glossary

https://www.rambus.com/interface-ip/cxl-glossary/

An A-to-Z glossary of CXL terms, beginning with A2F […]

Rambus Initiates Accelerated Share Repurchase Program

https://www.rambus.com/rambus-initiates-accelerated-share-repurchase-program-5/

SAN JOSE, Calif. – Sep. 12, 2022 – Rambus Inc. (NASDAQ: RMBS), a provider of industry-leading chips and silicon IP making data faster and safer, today announced that it initiated an accelerated share repurchase program with Wells Fargo Bank, National Association (“Wells Fargo Bank”), to repurchase an aggregate of approximately $100 million of its common stock, with an initial […]

High-Performance Memory: Ask Me Anything

https://go.rambus.com/high-performance-memory-ask-me-anything#new_tab

With high-performance memory experts covering architecture, chips and IP, we’re looking forward to your questions. Please join us for this Ask Me Anything session to get the latest on technology and trends for the world’s highest-bandwidth memory solutions, including DDR5, HBM3 and GDDR6.

High-Speed Interface Solutions for 5G

https://go.rambus.com/high-speed-interface-solutions-for-5g#new_tab

A rapid rise in the size and sophistication of inferencing models has necessitated increasingly powerful hardware deployed at the network edge and in endpoint devices. Keeping these inferencing processors and accelerators fed with data requires a state-of-the-art memory solution that delivers extremely high bandwidth. Frank Ferro will discuss the design and implementation considerations […]

LPDDR5 Delivers High Bandwidth for a Growing Range of Applications

https://go.rambus.com/lpddr5-delivers-high-bandwidth-for-a-growing-range-of-applications#new_tab

Initially designed for mobile phones and laptops, LPDDR offers bandwidth and low-power characteristics that make it an increasingly attractive memory choice for applications in IoT, automotive, edge computing and the data center. Fifth-generation LPDDR5 raises data rates to 6.4 Gbps and bandwidth to 25.6 GB/s for a x32 DRAM device. In this session, […]
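
The quoted figures are internally consistent: 6.4 Gbps on each of 32 data pins works out to 25.6 GB/s. A minimal sketch of that arithmetic follows; the function name and structure are illustrative only, not Rambus code.

```python
# Quick sanity check of the LPDDR5 numbers quoted above (illustrative only):
# a device running 6.4 Gbps per pin across a x32 interface.

def dram_peak_bandwidth_gb_per_s(rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak device bandwidth in GB/s: per-pin data rate * bus width / 8 bits per byte."""
    return rate_gbps_per_pin * bus_width_bits / 8

if __name__ == "__main__":
    print(dram_peak_bandwidth_gb_per_s(6.4, 32))  # -> 25.6, matching the text
```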

GDDR6 Memory Enables High-Performance Inferencing

https://go.rambus.com/gddr6-memory-enables-high-performance-inferencing#new_tab

A rapid rise in the size and sophistication of inferencing models has necessitated increasingly powerful hardware deployed at the network edge and in endpoint devices. Keeping these inferencing processors and accelerators fed with data requires a state-of-the-art memory solution that delivers extremely high bandwidth. Frank Ferro will discuss the design and implementation considerations […]
