
Artificial Intelligence & Machine Learning

Speed and Security for the Artificial Intelligence & Machine Learning Revolution

Artificial intelligence (AI) and machine learning (ML) are at the heart of the latest virtuous cycle of computing. Enormous gains in computing power have made practical the neural networks underpinning the revolutionary strides being made in AI and ML. The explosion of AI and ML applications drives the creation of application-focused processors, which in turn take AI and ML to new levels of performance. Concurrently, the growth of enormous digital data sets, also enabled by advances in computing and networking, provides the vast training data on which ML depends.

At Rambus, we develop products that move and protect the data critical to the development and performance of AI and ML. We provide high-speed interfaces, security cores, and chips that optimize the computing and networking devices at the foundation of the AI and ML revolution.

Download the HBM2E and GDDR6: Memory Solutions for AI white paper

HBM2E and GDDR6: Memory Solutions for AI

Artificial intelligence/machine learning (AI/ML) growth proceeds at a lightning pace. In the past eight years, AI training capabilities have jumped by a factor of 300,000, driving rapid improvements in every aspect of computing hardware and software. Meanwhile, AI inference is being deployed across the network edge and in a broad spectrum of IoT devices, including automotive/ADAS systems. Training and inference have distinct requirements that can be served by tailored memory solutions. Learn how HBM2E and GDDR6 provide the high performance demanded by the next wave of AI applications.
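
As a rough illustration of why these two memory types complement training and inference, the back-of-the-envelope Python sketch below compares peak per-device bandwidth. The data rates and interface widths used (HBM2E at roughly 3.2 Gb/s per pin over a 1024-bit interface, GDDR6 at roughly 16 Gb/s per pin over a 32-bit interface) are commonly cited illustrative figures, not the specifications of any particular Rambus product.

```python
# Back-of-the-envelope peak bandwidth per memory device or stack.
# Data rates and interface widths are illustrative, commonly cited figures,
# not the specifications of any particular Rambus product.

def peak_bandwidth_gbytes_s(data_rate_gbit_per_pin: float, interface_width_bits: int) -> float:
    """Peak bandwidth in gigabytes per second for one device or stack."""
    return data_rate_gbit_per_pin * interface_width_bits / 8  # bits -> bytes

hbm2e_stack = peak_bandwidth_gbytes_s(3.2, 1024)   # ~410 GB/s per stack
gddr6_device = peak_bandwidth_gbytes_s(16.0, 32)   # ~64 GB/s per device

print(f"HBM2E stack:  {hbm2e_stack:.0f} GB/s")
print(f"GDDR6 device: {gddr6_device:.0f} GB/s")
```

The wide, stacked HBM2E interface delivers far more bandwidth per device, while GDDR6 reaches its bandwidth over a narrow, lower-cost interface, which is one reason training systems gravitate toward HBM2E and edge inference toward GDDR6.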
Real-time Insights with More, Faster Data

Speeding Neural Networks

Huge advances in parallel processing for neural networks power the great leaps realized in AI and ML. But the applications made possible by these advances only whet the appetite for even higher performance. At the hardware level, the bottleneck has moved from the processor core to the memory and chip-to-chip interfaces at the SoC boundary (see the roofline sketch after the list below). At Rambus, we’re pushing the envelope of neural network performance with memory and SerDes IP cores, as well as memory interface chips, to unleash the next generation of AI and ML hardware.

SerDes PHYs
Memory PHYs
Digital Controllers
Memory Interface Chips
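
To make the memory-bottleneck point concrete, here is a minimal roofline-style sketch in Python. The peak compute and bandwidth numbers are hypothetical placeholders, not the specifications of any real accelerator; the point is that a layer with low arithmetic intensity is capped by memory bandwidth long before it reaches peak compute.

```python
# Minimal roofline-style estimate of whether a workload is memory-bound.
# The peak compute and bandwidth figures are hypothetical placeholders.

def attainable_tflops(arithmetic_intensity_flops_per_byte: float,
                      peak_tflops: float,
                      peak_bandwidth_tbytes_s: float) -> float:
    """Roofline model: performance is capped by compute or by memory traffic."""
    memory_bound_cap = arithmetic_intensity_flops_per_byte * peak_bandwidth_tbytes_s
    return min(peak_tflops, memory_bound_cap)

# Example: a layer performing 2 FLOPs per byte moved to/from memory,
# on an accelerator with 100 TFLOP/s of compute and 1 TB/s of bandwidth.
perf = attainable_tflops(2.0, peak_tflops=100.0, peak_bandwidth_tbytes_s=1.0)
print(f"Attainable: {perf:.0f} TFLOP/s of a 100 TFLOP/s peak")  # memory-bound at 2
```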

Download AI Requires Tailored DRAM Solutions

AI Requires Tailored DRAM Solutions

DRAM has continuously adapted to the needs of each new wave of hardware spanning PCs, game consoles, mobile phones and cloud servers. Each generation of hardware required DRAM to hit new benchmarks in bandwidth, latency, power or capacity. Looking ahead, the 2020s will be the decade of artificial intelligence/machine learning (AI/ML) touching every industry and application space. For DRAM, AI/ML represents the biggest challenge yet with a list of requirements for “all of the above.” Learn about the DRAM solutions tailored to meet the needs of AI/ML.
Safeguarding the Most Valuable Currency: Data

Securing Training Data and Algorithms

Given the immense value of the data and algorithms powering AI and ML, safeguarding them against malicious attacks is mission-critical. Doing so requires a multi-tiered approach built on a foundation of hardware-level secure silicon (a minimal secure-boot sketch follows the list below). At Rambus, we’ve developed robust hardware-based security solutions that safeguard the SoCs at the heart of AI and ML computing systems. With secure silicon IP and provisioning services, Rambus is at the forefront of protecting advanced AI and ML processing.

Root of Trust Solutions
Crypto Accelerator Cores
Protocol Engines
Provisioning and Key Management
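
As one concrete example of what hardware-anchored security enables, the sketch below shows a secure-boot-style check: a firmware image is accepted only if its signature verifies against a public key provisioned into immutable storage. It is a minimal conceptual illustration using the open-source cryptography package, not the API of any Rambus product.

```python
# Minimal sketch of a secure-boot check: accept a firmware image only if its
# signature verifies against a public key provisioned into immutable storage.
# Conceptual illustration only; not the API of any Rambus product.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature


def verify_boot_image(image: bytes, signature: bytes, rom_public_key: bytes) -> bool:
    """Return True only if the image was signed by the provisioned key."""
    try:
        Ed25519PublicKey.from_public_bytes(rom_public_key).verify(signature, image)
        return True
    except InvalidSignature:
        return False
```

In a real device, both the verification key and the verification code live inside the hardware root of trust, so a compromised host processor cannot tamper with the check.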


The CryptoManager Root of Trust

Built around a custom RISC-V CPU, the Rambus CryptoManager Root of Trust (CMRT) is at the forefront of a new category of programmable hardware-based security cores. Siloed from the primary processor, it is designed to securely run sensitive code, processes, and algorithms. More specifically, the CMRT provides the primary processor with a full suite of security services, such as secure boot, runtime integrity, remote attestation, and broad crypto acceleration for symmetric and asymmetric algorithms.
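
To illustrate one of those services, remote attestation, here is a conceptual Python sketch of a measurement chain: each boot stage is hashed into a running register, and the final value can be signed and reported so a verifier can compare it against expected measurements. This models the general technique (as used in TPM-style PCRs), not the internal design of the CMRT.

```python
# Conceptual model of a measurement chain for remote attestation: each boot
# stage is hashed into a running register; the final value is reported (and
# signed) for a verifier to compare against known-good measurements.
# Illustrates the general technique, not the CMRT implementation.
import hashlib


def extend(register: bytes, stage_image: bytes) -> bytes:
    """Extend the measurement register with the hash of the next boot stage."""
    return hashlib.sha256(register + hashlib.sha256(stage_image).digest()).digest()

measurement = bytes(32)  # register starts at all zeros
for stage in (b"bootloader", b"kernel", b"application"):
    measurement = extend(measurement, stage)

print(measurement.hex())  # value a verifier would check (and the root of trust would sign)
```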

FREE Webinar: Understanding Fault Injection Attacks and Their Mitigation