Join us at D&R IP SoC Silicon Valley! Stop by Table 1 and chat with our experts about our Silicon IP offerings. We will also be presenting on three topics. Session information below.
From Monolithic SoCs to Chiplets: A New Hardware Security Paradigm
Speaker: Berardino Carnevale, Senior Technical Marketing and Product Manager at Rambus
Abstract: Chiplet‑based architectures are transforming SoC design, but they also upend long‑standing security assumptions. By disaggregating a monolithic die into multiple, often multi‑vendor chiplets, the implicit silicon trust boundary disappears, expanding the attack surface to include chiplet substitution, weak‑chiplet compromise, and exposed die‑to‑die interconnects. This presentation explores why traditional SoC security models fail in chiplet systems and introduces a system‑level security paradigm based on distributed trust with centralized authority.
AI Inference Needs a Mix-and-Match Memory Strategy
Speaker: Nidish Kamath, Director of Product Management, Silicon IP at Rambus
Abstract: AI inference spans diverse workloads, from low‑latency chat to long‑context reasoning and large‑scale recommendations, making single, monolithic accelerator and memory designs increasingly inefficient. This talk explains how inference naturally splits into prefill and decode stages with fundamentally different bottlenecks: prefill is compute‑bound, while decode is dominated by memory bandwidth and latency. By matching memory technologies to each stage (cost‑efficient GDDR or LPDDR for prefill, premium HBM for decode, and pooled memory for KV cache offload), operators can significantly reduce cost per token without sacrificing latency. The session outlines emerging disaggregated architectures for AI inference workloads.
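The stage‑to‑tier matching the abstract describes can be pictured in a few lines of code. The sketch below is purely illustrative and not from the talk: the function name, stage labels, and tier assignments are assumptions chosen to mirror the reasoning above (compute‑bound prefill on cost‑efficient memory, bandwidth‑bound decode on HBM, cold KV‑cache data in pooled memory).

```python
def memory_tier(stage: str) -> str:
    """Illustrative mapping of an inference stage to a memory tier.

    Prefill is compute-bound, so cheaper GDDR/LPDDR bandwidth suffices;
    decode is limited by memory bandwidth and latency, so it gets HBM;
    offloaded KV-cache entries favor capacity, so they go to pooled memory.
    """
    tiers = {
        "prefill": "GDDR/LPDDR",   # compute-bound: cost-efficient bandwidth
        "decode": "HBM",           # memory-bound: peak bandwidth, low latency
        "kv_offload": "pooled",    # cold KV cache: capacity over bandwidth
    }
    return tiers[stage]

for stage in ("prefill", "decode", "kv_offload"):
    print(f"{stage} -> {memory_tier(stage)}")
```

In a real deployment this routing decision is made by the serving stack when it disaggregates prefill and decode onto different accelerator pools; the dictionary here only captures the matching principle.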
Enabling Efficient Edge AI Inferencing Through Ecosystem Collaboration
Speakers: Nidish Kamath, Director of Product Management, Silicon IP at Rambus & Kevin Yee, Sr. Director of IP and Ecosystem Marketing, Samsung Foundry
Abstract: As artificial intelligence continues to move from centralized data centers to the edge, delivering real‑time inferencing under strict power, performance, and area constraints has become a defining challenge for next‑generation systems. Meeting this challenge requires more than isolated innovation—it demands deep collaboration across AI compute architectures, memory subsystems, and advanced semiconductor process technologies.
This presentation highlights a strategic collaboration between Rambus and Samsung Foundry that exemplifies how ecosystem alignment can unlock efficient, scalable edge AI platforms. The session will explore how ultra‑efficient AI inference processors are combined with Rambus’ silicon‑proven LPDDR5/LPDDR5X memory controller IP and Samsung Foundry’s leading‑edge logic process technologies to enable high‑performance, low‑power AI inferencing across a wide range of edge applications, including robotics, smart cameras, industrial automation, and edge servers.
