This IDC Technology Spotlight Study, sponsored by Rambus, examines the demands that diverse server workloads place on DRAM. The history of dynamic random-access memory (DRAM) is characterized by the technology's ability to adapt to the increasingly specialized real-time memory requirements of the applications that use it, and DRAM must adjust dynamically to the needs of these disparate workloads. The COVID-19 pandemic shifted the balance of average DRAM content per server across workloads. While workloads require servers to adapt to their specific needs, servers must be built on standardized, scalable technologies to remain affordable. Over time, DDR5 will be essential to meeting the average server's DRAM needs.
In response to an exponential growth in data, the industry is on the threshold of a groundbreaking architectural shift that will fundamentally change the performance, efficiency and cost of data centers around the globe. Server architecture, which has remained largely unchanged for decades, is taking a revolutionary step forward to address the growing demand for data and the voracious performance requirements of advanced workloads.
AI/ML is increasingly pervasive across all industries, driven by a massive wave of digitization. Data, the raw material of AI/ML and deep learning algorithms, is available in enormous quantities from all aspects of business operations. AI/ML promises great gains in responsiveness and adaptability in an ever-changing technology landscape, and industries are enthusiastically responding to that appeal. At the same time, the vast value created by AI/ML makes it an inviting target for adversaries who aim to compromise or steal it. Learn about the attack vectors against AI/ML and solutions for safeguarding its assets.
MIPI® Alliance technology has helped enable the dramatic growth of the mobile phone market, and the function and capabilities of MIPI interface solutions have grown dramatically as well. MIPI DSI-2℠ has become the leading display interface across a growing range of products, including smartphones, AR/VR, IoT appliances, and ADAS/autonomous vehicles. As the application space has expanded, so too have the performance requirements. Learn how the MIPI DSI-2 interface and VESA® DSC visually lossless compression technologies can meet the challenges of next-generation displays.
AI/ML training capabilities are growing at a rate of 10X per year, driving rapid improvements in every aspect of computing hardware and software. HBM2E memory is the ideal solution for the high bandwidth requirements of AI/ML training, but it entails additional design considerations given its 2.5D architecture. Designers can realize the full benefits of HBM2E memory with the silicon-proven memory subsystem solution from Rambus.
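To put HBM2E's bandwidth in perspective, here is a minimal worked example. The figures are from the public JEDEC HBM2E specification, not from this article: a 1024-bit-wide interface per stack at a standard 3.2 Gb/s per-pin data rate (implementations can run faster).

```python
# Peak bandwidth of a single HBM2E stack (JEDEC baseline figures).
data_rate_gbps = 3.2          # Gb/s per pin (standard HBM2E rate)
interface_width_bits = 1024   # bits per stack

# Convert aggregate bit rate to bytes per second.
bandwidth_gbs = data_rate_gbps * interface_width_bits / 8  # GB/s
print(f"{bandwidth_gbs:.0f} GB/s per stack")  # 410 GB/s
```

The very wide, relatively slow interface is what motivates the 2.5D design considerations the blurb mentions: routing 1024 data signals per stack requires a silicon interposer rather than a conventional PCB.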
The PCI Express® (PCIe) interface is the critical backbone that moves data at high bandwidth between compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. The rise of cloud-based computing and hyperscale data centers, along with high-bandwidth applications like artificial intelligence (AI) and machine learning (ML), requires the new level of performance delivered by PCI Express 5.0.
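As a rough sketch of what that new level of performance means in practice (figures from the PCIe 5.0 specification, not from this article): each lane signals at 32 GT/s with 128b/130b encoding, so a full x16 link delivers roughly 63 GB/s per direction, double the rate of PCIe 4.0.

```python
# Usable bandwidth of a PCIe 5.0 x16 link, per direction.
raw_rate_gt = 32.0    # GT/s per lane (PCIe 5.0 signaling rate)
encoding = 128 / 130  # 128b/130b line-code efficiency
lanes = 16

# Effective payload bit rate across all lanes, converted to bytes.
bandwidth_gbs = raw_rate_gt * encoding * lanes / 8  # GB/s per direction
print(f"~{bandwidth_gbs:.0f} GB/s per direction")  # ~63 GB/s
```

Because each PCIe generation doubles the per-lane rate, the same arithmetic with 16 GT/s reproduces the ~32 GB/s figure for a PCIe 4.0 x16 link.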