Like their terrestrial counterparts, space-based systems benefit from the greater computing power achieved through semiconductor scaling. However, chips for spacecraft must be radiation hardened (RH) to withstand the rigors of space, and considerable time and effort are required to develop and qualify rad-hardened devices on a given process node. The BAE Systems RH45® 45 nanometer (nm) node has long been the go-to solution for space-based computing, but the industry is now on the verge of […]
Download the product brief to see the specifications and features of the Rambus DDR5 PMIC5030.
Data Bus Inversion (DBI) is a signal encoding technique used in high-speed digital interfaces to reduce power consumption and improve signal integrity. DBI works by inverting the data bits when the number of bus lines that would toggle (transition from 0 to 1 or vice versa) relative to the previously transmitted word exceeds a predefined threshold, typically half the bus width. A control signal indicates whether inversion has occurred, allowing the receiver to correctly interpret the data.
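The transition-minimizing variant described above (often called DBI-AC) can be sketched as follows. This is a generic illustration, not the encoding of any specific memory or interface standard; the function names and the 8-bit bus width are assumptions for the example.

```python
def dbi_ac_encode(prev_encoded: int, data: int, width: int = 8) -> tuple[int, int]:
    """Encode one word with transition-minimizing Data Bus Inversion (DBI-AC).

    prev_encoded is the word previously driven on the bus (i.e., the last
    *encoded* value, since that is what the wires last carried).
    Returns (encoded_word, dbi_flag).
    """
    mask = (1 << width) - 1
    # Count how many bus lines would toggle if we sent the data as-is.
    transitions = bin((prev_encoded ^ data) & mask).count("1")
    if transitions > width // 2:
        # More than half the lines would toggle: invert and assert the flag.
        return (~data) & mask, 1
    return data & mask, 0


def dbi_decode(encoded: int, dbi_flag: int, width: int = 8) -> int:
    """Receiver side: undo the inversion if the DBI flag is set."""
    mask = (1 << width) - 1
    return (~encoded) & mask if dbi_flag else encoded & mask
```

For example, if the bus last carried 0x00 and the next word is 0xFF, all eight data lines would toggle; the encoder instead sends 0x00 with the DBI flag asserted, so only the single DBI line switches.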
CSI-2, or Camera Serial Interface 2, is a high-speed serial interface standard developed by the MIPI Alliance for connecting cameras to host processors in mobile and embedded systems. It is widely used in smartphones, tablets, automotive systems, drones, and industrial vision applications.
Interconnect technologies are key to scaling AI workloads across data center infrastructure. Learn how PCIe 7 and CXL 3 enable high-speed, low-latency connectivity for memory expansion and composable architectures in AI systems.
AI accelerators require high-performance memory IP to meet bandwidth, capacity and latency requirements. This session dives into Rambus IP solutions for HBM4, LPDDR5, and GDDR7, highlighting their role in powering next-gen AI silicon.
Join Rambus experts for a dynamic roundtable discussion on the latest trends in the memory market. Topics include AI-driven demand, enabling technologies, and the future of memory innovation across computing segments.
AI is increasingly moving to the edge, and PC clients are evolving to support intelligent applications. This session showcases Rambus memory chip solutions optimized for client platforms, enabling responsive AI experiences with performant memory architectures.
Explore Rambus memory chip solutions designed for server platforms and AI workloads in the data center. This session covers performance, power efficiency, and scalability features that meet the demands of next-generation AI training and inference environments.
In this keynote, Dr. Steve Woo reflects on the 35-year journey of Rambus and the evolution of memory technology that has culminated in today’s AI-driven computing landscape. From early innovations to modern high-bandwidth architectures, this session highlights how memory has become a foundational enabler of artificial intelligence.
