Rambus is, once again, leading the way in memory performance solutions with today’s announcement that the Rambus GDDR6 PHY now reaches performance of up to 24 Gigabits per Second (Gb/s), the industry’s highest data rate for GDDR6 memory interfaces!
AI/ML inference models are growing rapidly in both size and sophistication, driving the deployment of increasingly powerful hardware at the network edge and in endpoint devices. For inference, high memory throughput and low latency are critical. GDDR6 memory offers an impressive combination of bandwidth, capacity, latency and power that makes it ideal for these applications.
The GDDR6 interface supports two channels, each 16 bits wide, for a total data width of 32 bits. At speeds of up to 24 Gb/s per pin, the Rambus GDDR6 PHY delivers a maximum bandwidth of 96 GB/s. This represents a 50% increase in available bandwidth compared with the previous-generation 16G GDDR6 PHY.
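The arithmetic behind those figures can be sketched in a few lines; this is a back-of-the-envelope calculation using only the numbers quoted above (2 channels × 16 bits, data rates per pin), with the function name and parameters chosen here for illustration:

```python
def gddr6_bandwidth_gbps(data_rate_gbps_per_pin: float,
                         channels: int = 2,
                         bits_per_channel: int = 16) -> float:
    """Return peak GDDR6 interface bandwidth in gigabytes per second (GB/s)."""
    total_bits = channels * bits_per_channel         # 32-bit interface
    return data_rate_gbps_per_pin * total_bits / 8   # convert bits to bytes

print(gddr6_bandwidth_gbps(24))  # 24 Gb/s per pin -> 96.0 GB/s
print(gddr6_bandwidth_gbps(16))  # previous-gen 16 Gb/s -> 64.0 GB/s
```

The ratio of the two results (96/64 = 1.5) is where the 50% bandwidth increase comes from.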
Of course, hitting such high data rates also comes with challenges. Maintaining signal integrity (SI) at 24 Gb/s, particularly at lower voltages, requires significant expertise. Designers face tighter timing and voltage margins, and both the number of loss sources and the severity of their effects rise rapidly. This is where long-standing Rambus expertise in SI comes in, allowing customers to maintain the SI of their systems even at these new 24G data rates.
Check out our “From Data Center to End Device: AI/ML Inference with GDDR6” white paper for a detailed look at GDDR6 memory capabilities and discover why it is ideally suited to meet the challenges of AI inference applications.