Recently, Rambus Fellow and Distinguished Inventor Steve Woo had a web chat with Bill Wong, Technology Editor for Electronic Design, to discuss some of the latest hardware trends in AI/ML. This was part of an ongoing conversation Steve and Bill have had regarding leading-edge developments in the AI/ML revolution.
In the webcast, Steve discussed the dramatic growth we've witnessed in AI training capabilities and how that growth is outstripping the rate of processing improvement delivered by Moore's Law and Dennard scaling. At the same time, there's been an explosion of AI-powered IoT devices, and the rollout of 5G will only accelerate that trend.
AI-specific hardware has been a catalyst for this tremendous growth, but there are always bottlenecks that must be addressed. A poll of the audience found that memory bandwidth was the top area needing focus. Steve and Bill agreed, and explored how HBM2E and GDDR6 memory could help advance AI/ML to the next level.
Steve discussed how HBM2E delivers unsurpassed bandwidth and capacity in a very compact footprint, making it a great fit for AI/ML training deployments in heat- and space-constrained data centers. At the same time, the excellent performance and low latency of GDDR6 memory, built on time-tested manufacturing processes, make it an ideal choice for AI/ML inference, which is increasingly implemented in powerful edge and IoT devices such as ADAS systems in cars and trucks.
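To give a rough sense of the bandwidth gap between the two memory types, here is a back-of-the-envelope sketch. The configurations used (a 1024-bit HBM2E stack at 3.2 Gb/s per pin, a 32-bit GDDR6 device at 16 Gb/s per pin) are commonly cited figures, not numbers from the webcast; actual products vary.

```python
# Peak memory bandwidth from interface width and per-pin data rate.
# Figures below are illustrative assumptions based on common HBM2E
# and GDDR6 configurations, not specifics from the webcast.

def peak_bandwidth_gb_s(interface_width_bits: int, pin_rate_gb_s: float) -> float:
    """Peak bandwidth in GB/s = (width in bits * per-pin rate in Gb/s) / 8."""
    return interface_width_bits * pin_rate_gb_s / 8

# One HBM2E stack: wide-and-slow, 1024-bit interface at 3.2 Gb/s per pin.
hbm2e_stack = peak_bandwidth_gb_s(1024, 3.2)   # 409.6 GB/s per stack

# One GDDR6 device: narrow-and-fast, 32-bit interface at 16 Gb/s per pin.
gddr6_device = peak_bandwidth_gb_s(32, 16.0)   # 64.0 GB/s per device

print(f"HBM2E stack:  {hbm2e_stack:.1f} GB/s")
print(f"GDDR6 device: {gddr6_device:.1f} GB/s")
```

The contrast captures the trade-off Steve described: HBM2E reaches very high bandwidth per stack through an extremely wide interface (requiring 2.5D packaging with an interposer), while GDDR6 gets strong per-device bandwidth from fast pins on conventional PCB manufacturing, which keeps costs down for inference hardware.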
For a full replay of their web chat, please go here.