AI/ML’s demands for greater bandwidth are insatiable, driving rapid improvements in every aspect of computing hardware and software. HBM is the ideal solution for the high bandwidth requirements of AI/ML training, but it entails additional design considerations given its 2.5D architecture. Now we’re on the verge of a new generation of HBM that will raise memory bandwidth and capacity to new heights. Designers can realize new levels of performance with the HBM4-ready memory subsystem solution from Rambus.
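As a rough sketch of where that new generation lands, per-stack HBM bandwidth is just interface width times per-pin data rate. The specific HBM4 figures below (a 2048-bit interface at 8 Gb/s per pin, double the HBM3 width) reflect publicly reported JEDEC direction and are assumptions for illustration, not claims from this page.

```python
# Sketch: per-stack HBM bandwidth from interface width and per-pin rate.
# HBM4 figures (2048-bit, 8 Gb/s/pin) are assumptions for illustration.
def hbm_stack_tbytes(width_bits: int, pin_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in TB/s."""
    return width_bits * pin_gbps / 8 / 1000

print(hbm_stack_tbytes(1024, 6.4))  # HBM3-class stack: ~0.82 TB/s
print(hbm_stack_tbytes(2048, 8.0))  # assumed HBM4-class stack: ~2.05 TB/s
```

Doubling the interface width is what makes the 2.5D design considerations (interposer routing, signal integrity) even more central for HBM4.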
Papers
MACsec Fundamentals
For end-to-end security of data and devices, data must be secured both when it is at rest (stored on a connected device) and when it is in motion (communicated between connected devices). For data at rest, a hardware root of trust anchored in silicon provides the foundation upon which all device security is built. Similarly, MACsec security anchored in hardware at the foundational communication layer provides the basis of trust for data in motion.
Expanding Server Memory Capabilities with MRDIMM Technology
As per-socket compute density increases, the amount of directly accessible, low-latency memory bandwidth and capacity needed to adequately feed data to the multiple cores must scale accordingly. Scaling memory bandwidth by increasing raw DRAM component bandwidth or by increasing the number of memory channels has challenges and is reaching limits. JEDEC has unveiled the development of a new DIMM technology called Multiplexed Rank Dual Inline Memory Modules (MRDIMM) to address the bandwidth challenge.
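The core MRDIMM idea can be sketched numerically: a data buffer multiplexes two ranks so the host-side interface runs at twice the per-rank DRAM data rate. The 4400 MT/s per-rank figure below is an illustrative assumption (consistent with a first-generation 8800 MT/s MRDIMM target), not a number taken from this abstract.

```python
# Sketch of rank multiplexing: the module's data buffer interleaves two
# ranks, each running at a DRAM-friendly rate, onto one host interface
# at double that rate. Per-rank 4400 MT/s is an illustrative assumption.
def mrdimm_host_rate(per_rank_mts: int, ranks_muxed: int = 2) -> int:
    """Host-side data rate in MT/s for a multiplexed-rank module."""
    return per_rank_mts * ranks_muxed

print(mrdimm_host_rate(4400))  # two 4400 MT/s ranks -> 8800 MT/s at the host
```

This is why MRDIMM scales bandwidth without requiring the DRAM components themselves, or the channel count, to double.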
MIPI DSI-2 & VESA Video Compression Drive Performance for Next-Generation Displays
MIPI® Alliance technology has helped enable the dramatic growth of the mobile phone market. The function and capabilities of MIPI interface solutions have grown dramatically as well. MIPI DSI-2SM has become the leading display interface across a growing range of products including smartphones, AR/VR, IoT appliances and ADAS/autonomous vehicles. As the application space has expanded, so too have the performance requirements. Learn how the MIPI DSI-2 interface and VESA® DSC visually lossless compression technology can meet the challenges of next-generation displays.
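A quick way to see why compression matters here is to compare uncompressed display link bandwidth against the same stream compressed by DSC. The 4K/120 Hz panel and the 8 bpp DSC target (a typical 3:1 ratio from 24 bpp RGB) are illustrative assumptions, not figures from this abstract.

```python
# Sketch: uncompressed vs. DSC-compressed display link bandwidth.
# The 4K/120 panel and 8 bpp DSC target are illustrative assumptions.
def link_gbps(width: int, height: int, fps: int, bpp: int) -> float:
    """Required link bandwidth in Gb/s (blanking overhead ignored)."""
    return width * height * fps * bpp / 1e9

raw = link_gbps(3840, 2160, 120, 24)  # 24 bpp RGB, uncompressed
dsc = link_gbps(3840, 2160, 120, 8)   # DSC target of 8 bpp (3:1)
print(round(raw, 1), round(dsc, 1))   # ~23.9 vs ~8.0 Gb/s
```

The 3:1 reduction is what lets a DSI-2 link drive resolutions and refresh rates that would otherwise exceed its raw lane bandwidth.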
Supercharging AI Inference with GDDR7
A rapid rise in the size and sophistication of AI inference models requires increasingly powerful AI accelerators and GPUs deployed in edge servers and client PCs. GDDR7 memory offers an attractive combination of bandwidth, capacity, latency and power for these accelerators and processors. The Rambus GDDR7 Memory Controller IP offers industry-leading GDDR7 performance of up to 40 Gbps and 160 GB/s of available bandwidth per GDDR7 memory device.
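The two figures quoted above are consistent with each other, assuming the 32-bit interface width typical of GDDR7 devices (an assumption on my part, not stated in the abstract): 40 Gb/s per pin across 32 data pins yields 160 GB/s per device.

```python
# Sketch: how 40 Gb/s per pin yields the quoted 160 GB/s per device.
# The 32-bit device interface width is an assumption (typical of GDDR7).
def device_bandwidth_gbytes(data_rate_gbps: float,
                            interface_width_bits: int = 32) -> float:
    """Peak bandwidth of one memory device in GB/s."""
    return data_rate_gbps * interface_width_bits / 8

print(device_bandwidth_gbytes(40))  # 40 Gb/s x 32 pins / 8 = 160.0 GB/s
```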
Data Center Evolution: The Leap to 64 GT/s Signaling with PCI Express 6.1
The PCI Express® (PCIe®) interface is the critical backbone that moves data at high bandwidth and low latency between various compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. With the torrid rise in bandwidth demands of advanced workloads such as AI/ML training, PCIe 6.1 jumps signaling to 64 GT/s with some of the biggest changes yet in the standard.
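To put the 64 GT/s figure in bandwidth terms: with PAM4 signaling each transfer carries one bit per lane (two bits per 32 GBaud symbol), and FLIT mode avoids a legacy encoding tax, so raw bandwidth scales directly with lane count. The x16 link width below is an illustrative assumption.

```python
# Sketch: raw PCIe 6.x bandwidth from the 64 GT/s signaling rate.
# PAM4 carries 2 bits/symbol, so 64 GT/s arrives at 32 GBaud; FLIT mode
# means no 128b/130b-style encoding overhead. x16 width is an assumption.
def pcie6_bandwidth_gbytes(lanes: int = 16) -> float:
    """Raw one-direction bandwidth in GB/s (protocol overhead ignored)."""
    gt_per_s = 64  # transfers/s per lane; each transfer carries 1 bit
    return gt_per_s * lanes / 8

print(pcie6_bandwidth_gbytes(16))  # 128.0 GB/s per direction for x16
```

Bidirectionally, that x16 figure doubles to roughly 256 GB/s, which is the scale demanded by the AI/ML workloads the abstract describes.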