Generative AI has been making waves in the tech industry. The capability to understand context and perform tasks like creating and summarizing content with astonishing accuracy in seconds showcases the cutting-edge potential that generative AI has to transform business processes. Have you ever thought about the technologies that enable generative AI, including ChatGPT and […]
[Updated on February 15, 2024] AI training data sets continue to grow and require accelerators that support terabyte-scale bandwidth. Offering high memory bandwidth in a power-efficient solution, HBM3E has become a leading choice for AI training hardware. Find out why in our blog below. Table of Contents: What is HBM3E Memory? What is a 2.5D/3D […]
The AI boom is giving rise to profound changes in the data center; compute-intensive workloads are driving an unprecedented demand for low latency, high-bandwidth connectivity between CPUs, accelerators and storage. The Compute Express Link® (CXL®) interconnect offers new ways for data centers to enhance performance and efficiency. As data centers grapple with increasingly complex AI […]
[Last updated on: January 23, 2024] In this blog post, we take an in-depth look at Compute Express Link® (CXL®), an open standard cache-coherent interconnect between processors and accelerators, smart NICs, and memory devices. We explore how CXL can help data centers more efficiently handle the tremendous memory performance demands of generative AI and other advanced workloads. […]
Rambus @ DesignCon 2024 Join Rambus for a day of technical sessions at DesignCon on January 31, 2024. Hear from our experts on the technologies that are set to shape the future of data centers and high-performance systems, and discover how our cutting-edge memory, interconnect and security IP enables today’s most challenging computing, edge, automotive […]
Highlights:
- Boosts data rate to 7200 MT/s for a 50% memory bandwidth increase over today’s Gen1 DDR5 devices
- Extends leadership in key memory interface chip solutions for server main memory
- Supports accelerated roadmap of server performance for generative AI and other advanced data center workloads
Rambus Gen4 DDR5 RCD: The Rambus Gen4 DDR5 RCD boosts […]
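The 50% figure follows directly from the data rates: a minimal arithmetic check, assuming Gen1 DDR5 devices run at 4800 MT/s (the initial JEDEC DDR5 speed grade; the snippet itself does not state the Gen1 rate).

```python
# Verify the claimed memory bandwidth increase of the Gen4 DDR5 RCD.
# Assumption: Gen1 DDR5 baseline is 4800 MT/s.
gen1_mts = 4800   # Gen1 DDR5 data rate (assumed baseline)
gen4_mts = 7200   # Gen4 DDR5 RCD data rate (from the announcement)

increase = (gen4_mts - gen1_mts) / gen1_mts
print(f"Bandwidth increase: {increase:.0%}")  # → Bandwidth increase: 50%
```

Since per-pin bandwidth scales linearly with transfer rate, the same 50% uplift applies to peak module bandwidth at a fixed bus width.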
We’re witnessing an unprecedented time for computing. Advanced data center workloads, with generative AI leading the pack, have set a blistering pace for hardware performance improvements. The platform vendors are responding with the most ambitious server roadmap ever seen. For example, the recently introduced 5th Gen Intel® Xeon® Processor came just a year after its […]
Last updated on: December 27, 2023 On July 14th, 2021, JEDEC announced the publication of the JESD79-5 DDR5 SDRAM standard, signaling the industry’s transition to DDR5 server and client dual-inline memory modules (DIMMs). DDR5 memory brings a number of key performance gains to the table, as well as new design challenges. Computing system architects, designers, and […]
By Steven Woo, Rambus Fellow Supercomputing 2023 brought together some of the brightest minds in the field of high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances in computing. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for […]
Today (Nov. 14th, 2023) the CXL™ Consortium announced the continued evolution of the Compute Express Link™ standard with the release of the 3.1 specification. CXL 3.1, backward compatible with all previous generations, improves fabric manageability, further optimizes resource utilization, enables trusted compute environments, extends memory sharing and pooling to avoid stranded memory, and facilitates memory […]