The latest enhancements to the HBM2 standard will clearly be appreciated by developers of memory-bandwidth-hungry ASICs. However, to add HBM2E support to their designs, they will also need an appropriate controller and physical interface. For many companies, developing such IP in-house does not make financial sense, so Rambus has designed a highly integrated HBM2E solution for licensing.
Interface IP
Memory For Advanced Designs
The DesignCon 2020 conference included many talks and exhibits with a storage and memory focus. Both Rambus and Teledyne LeCroy held tutorials on design and connectivity for leading-edge electronic components and systems, as well as on testing memory systems. This piece looks at material from the tutorials and exhibits that can inform us about disaggregated processing developments, high-speed chip-to-chip interfaces, and memory for AI applications.
Storage and Networking Bytes: PCIe5, OpenShift, and Veeam
Let’s start with PCIe 5.0, the spec for which was finalized in early 2019. Manufacturers are now getting revved up to produce PCIe 5.0 hardware in 2020, which will be a boon for data- and processor-hungry workloads like machine learning and AI, as well as high-performance computing (HPC) workloads that rely on GPUs, FPGAs, and ASICs to process data.
Accelerating AI And ML Applications With PCIe 5
The rapid adoption of sophisticated artificial intelligence/machine learning (AI/ML) applications and the shift to cloud-based workloads have significantly increased network traffic in recent years. Historically, the intensive use of virtualization ensured that server compute capacity adequately met the needs of heavy workloads. This was achieved by dividing or partitioning a single physical server into multiple virtual servers to intelligently extend and optimize utilization. However, this paradigm can no longer keep up with AI/ML applications and cloud-based workloads, which are quickly outpacing server compute capacity.
Memory subsystem solution for next-generation AI training chip
Rambus has announced that Enflame (Suiyuan) Technology has selected Rambus HBM2 PHY and Memory Controller IP for its next-generation AI training chip. Rambus memory interface IP enables the development of high-performance, next-generation hardware for leading-edge AI applications.
Top Tech Talks Of 2018
2018 shaped up to be a year of transition and inflection, sometimes in the same design. There were new opportunities in automotive, continued difficulties in scaling, and an explosion in AI and machine learning everywhere. Traffic numbers on stories give a snapshot of the most current trends, but with videos those trends are even more apparent because of the time viewers invest in watching them.