As AI continues to grow in reach and complexity, the unrelenting demand for more memory requires the constant advancement of high-performance memory IP solutions. We’re pleased to announce that our HBM3 Memory Controller now enables an industry-leading memory throughput of over 1.23 Terabytes per second (TB/s) for training recommender systems, generative AI and other compute-intensive AI workloads.
According to OpenAI, the amount of compute used in the largest AI training runs has increased at a rate of roughly 10X per year since 2012, and this shows no signs of slowing down any time soon. The growth of AI training data sets is being driven by a number of factors: increasingly complex AI models, the vast amounts of data being produced and made available online, and the continued push for greater accuracy and robustness in AI models.
OpenAI’s very own ChatGPT, the most talked about large language model (LLM) of this year, is a great example to illustrate the growth of AI data sets. The GPT-3 model behind ChatGPT at its public launch in November 2022 was built using 175 billion parameters. GPT-4, released just a few months later, is reported to use upwards of 1.5 trillion parameters. This staggering jump illustrates just how quickly data sets and models are growing in such a short period of time.
As AI applications evolve and become more complex, more advanced models, larger data sets and massive data processing needs demand lower latency, higher bandwidth memory for training. Delivering the highest per-device bandwidth of any available memory, HBM3 has become the memory of choice for AI training hardware.
With its unique 2.5D/3D architecture, HBM memory offers significantly higher bandwidth than traditional DDR-based memories, resulting in the faster data access and processing vital for AI training tasks. HBM is also extremely power efficient thanks to its close proximity to the GPU/CPU, and its compact form factor offers many benefits for devices where space is at a premium.
The Rambus HBM3 Memory Controller delivers a market-leading data rate of 9.6 Gigabits per second (Gb/s), supporting the continued evolution of HBM3 beyond the top specification speed of 6.4 Gb/s. The interface features 16 independent channels, each 64 bits wide, for a total data width of 1024 bits. At the 9.6 Gb/s data rate, this provides a total interface bandwidth of 1228.8 GB/s, or in other words, over 1.23 Terabytes per second (TB/s) of memory throughput! HBM3 memory solutions are evolving in the market, and the Rambus HBM3 Memory Controller supports this trend as HBM scales to new performance levels.
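As a quick sanity check of the arithmetic behind that headline number, a few lines of Python reproduce the quoted figures from the interface parameters given above (the function name here is our own, purely for illustration):

```python
# Back-of-the-envelope HBM3 interface bandwidth, using the figures in the text:
# 16 independent channels x 64 bits each = 1024-bit total data width.

def hbm3_bandwidth_gbytes_per_s(data_rate_gbps: float,
                                channels: int = 16,
                                bits_per_channel: int = 64) -> float:
    """Total interface bandwidth in gigabytes per second (GB/s)."""
    total_bits = channels * bits_per_channel   # 1024-bit interface
    return data_rate_gbps * total_bits / 8     # convert bits to bytes

print(hbm3_bandwidth_gbytes_per_s(9.6))  # 1228.8 GB/s -> over 1.23 TB/s
print(hbm3_bandwidth_gbytes_per_s(6.4))  # 819.2 GB/s at the base HBM3 spec speed
```

The same formula shows why the jump from the 6.4 Gb/s specification speed to 9.6 Gb/s translates directly into a 50% bandwidth increase: data width is fixed, so throughput scales linearly with the per-pin data rate.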
Want to dive into some of the benefits of HBM3 memory in more detail? Check out our new “HBM3: Everything You Need to Know” blog.