HBM4E Advances Bandwidth Performance for AI Training

Upcoming and On-demand Webinars at Rambus

Check out our library of upcoming and on-demand webinars, and hear from our experts about topics ranging from high-speed memory solutions and security IP to IoT and beyond.

Security Solutions for a World of IoT Devices

With the ‘Internet of Things’ (IoT) becoming ever more pervasive, an increasing number of connected things around us collect, handle and control sensitive data. The hacking of IoT devices can affect privacy, cause a loss of physical and information security, and impact availability of services. Connected devices significantly increase the attack surface of systems and networks, as they potentially provide hackers a local springboard into those systems. Mass-deployed connected devices have been used to mount Distributed Denial of Service (DDoS) attacks. IoT devices face a hard security challenge: high attack exposure combined with limited resources to protect themselves. This session will cover the tools and solutions provided by Rambus to help protect and harden resource-constrained devices against network-based attacks.

Watch Webinar »
Securing Data Center AI/ML Workloads Beyond Secure Boot and Authentication

With the rising value of AI/ML spanning training and inference models, data, and the AI hardware itself, the threats from adversaries are greater than ever. As such, a security strategy for AI/ML workloads and hardware needs to offer far more than secure boot and authentication. Rambus security expert Bart Stevens will discuss how a hardware root of trust can be the foundation for AI/ML security through defense in depth, partitioning of secure operations, and state-of-the-art protections against side-channel attacks.

Watch Webinar »
DDR5 Memory Enables Next-Generation Servers

An exponential rise in data volume and the rapid growth of advanced workloads like AI/ML training require constant innovation in all aspects of computing. Yet given the broad infrastructure implications, main memory technology changes infrequently, roughly once every six or seven years. The transition to DDR5 is a watershed industry event, as it will be the main memory solution in servers for the rest of this decade.

Watch Webinar »
MIPI® Sensor Solutions for Autonomous Driving

Sensors drive the algorithms that interpret data for ADAS and autonomous driving systems. High-bandwidth global shutter cameras, along with low-latency, low-power radar, lidar and sonar sensors, stream data over MIPI CSI-2® technologies to meet challenging design requirements. This session will discuss how highly configurable MIPI CSI-2 based PHY and controller sub-system solutions can be tailored to the needs of autonomous driving. Example customer use cases will illustrate the implementation of these MIPI CSI-2 solutions.

Watch Webinar »
High-Performance Memory: Ask Me Anything

With high-performance memory experts covering architecture, chips and IP, we’re looking forward to your questions. Please join us for this Ask Me Anything session to get the latest on technology and trends for the world’s highest bandwidth memory solutions including DDR5, HBM3 and GDDR6.

Watch Webinar »
LPDDR5 Delivers High Bandwidth for a Growing Range of Applications

Initially designed for mobile phones and laptops, LPDDR offers bandwidth and low-power characteristics that make it an increasingly attractive memory choice for applications in IoT, automotive, edge computing and the data center. Fifth-generation LPDDR5 raises data rates to 6.4 Gbps and bandwidth to 25.6 GB/s for a x32 DRAM device. In this session, Rambus and its partners OpenFive and Avery Design Systems will discuss their high-performance, high-quality, configurable LPDDR5 solution.
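The 25.6 GB/s figure follows directly from the quoted data rate and device width; a quick sanity check of the arithmetic (Python):

```python
# LPDDR5 per-device bandwidth check, using the figures quoted above.
data_rate_gbps = 6.4     # per-pin data rate, Gb/s
device_width_bits = 32   # x32 DRAM device

# Total bandwidth: per-pin rate times pin count, divided by 8 to go
# from gigabits to gigabytes per second.
bandwidth_gbs = data_rate_gbps * device_width_bits / 8
print(f"{bandwidth_gbs} GB/s")  # 25.6 GB/s
```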

Watch Webinar »
Accelerating Data Interconnects with PCI Express™ 6.0 & 5.0 Interface IP

The latest generation of PCI Express, PCIe™ 6.0, advances performance to 64 GT/s in support of advanced workloads and networking. In this presentation, interface technology expert Arjun Bangre will discuss the changes introduced in PCI Express 6.0, such as PAM4 signaling and low-latency forward error correction (FEC). He will also contrast PCIe 6.0 and 5.0 and explain how Rambus can support the PCIe interface your next design requires.
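To put the 64 GT/s rate in perspective, the sketch below computes raw link bandwidth; the x16 lane count and the decision to ignore encoding/FEC overhead are illustrative assumptions, not figures from the description above:

```python
# PCIe 6.0 raw link bandwidth sketch. Lane count and the
# ignore-overhead simplification are assumptions for illustration.
transfer_rate_gts = 64   # GT/s per lane (PCIe 6.0)
bits_per_transfer = 1    # one bit per transfer; PAM4 carries 2 bits
                         # per symbol, so the symbol rate is 32 Gbaud
lanes = 16               # common x16 slot width (assumed)

raw_gbs = transfer_rate_gts * bits_per_transfer * lanes / 8
print(f"{raw_gbs} GB/s per direction (raw)")  # 128.0 GB/s
```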

Watch Webinar »
GDDR6 Memory Enables High-Performance Inferencing

A rapid rise in the size and sophistication of inferencing models has necessitated increasingly powerful hardware deployed at the network edge and in endpoint devices. Keeping these inferencing processors and accelerators fed with data requires a state-of-the-art memory solution that delivers extremely high bandwidth. Frank Ferro will discuss the design and implementation considerations of GDDR6 memory subsystems to address the bandwidth needs of these next-generation inferencing engines.
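As a rough illustration of the per-device bandwidth GDDR6 can deliver, the sketch below assumes a 16 Gb/s per-pin data rate and a x32 device; neither figure comes from the description above:

```python
# GDDR6 per-device bandwidth sketch. The 16 Gb/s per-pin rate and
# 32-bit device width are illustrative assumptions, not figures
# from the webinar description.
data_rate_gbps = 16.0    # per-pin data rate, Gb/s (assumed)
device_width_bits = 32   # x32 GDDR6 device (assumed)

bandwidth_gbs = data_rate_gbps * device_width_bits / 8
print(f"{bandwidth_gbs} GB/s per device")  # 64.0 GB/s
```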

Watch Webinar »
Implementing CXL™ 2.0 Interconnect Solutions

Compute Express Link™ (CXL) has evolved rapidly since its launch in 2019 and is slated for debut in the next generation of server platforms arriving later this year. While it builds on the same physical layer as PCI Express, CXL implements unique features at the controller level to enable memory cache coherency between a host and multiple types of connected devices, including smart NICs, accelerators and memory expansion devices.

Watch Webinar »
Memory Bandwidth for AI/ML Races Higher with HBM3

With the insatiable need for higher bandwidth in state-of-the-art AI/ML training and HPC, the HBM standard has been on a rapid pace of improvement. The newly standardized HBM3 generation doubles the data rate to 6.4 Gb/s, offering up to 819 GB/s of memory bandwidth between an accelerator and a single HBM3 DRAM device. Memory interface technology expert Frank Ferro will discuss how the Rambus 8.4 Gb/s HBM3 Memory Subsystem can provide the headroom and scalability needed for implementing state-of-the-art HBM designs.
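The 819 GB/s figure follows from the 6.4 Gb/s data rate across the HBM stack's wide interface; the 1024-bit interface width below is the standard HBM stack width, not a figure stated in the description above:

```python
# HBM3 per-device bandwidth check. The 1024-bit interface width is
# the standard HBM stack width (an assumption; not quoted above).
data_rate_gbps = 6.4         # per-pin data rate, Gb/s (HBM3)
interface_width_bits = 1024  # bits per HBM3 stack (assumed)

bandwidth_gbs = data_rate_gbps * interface_width_bits / 8
print(f"{bandwidth_gbs} GB/s")  # 819.2 GB/s
```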

Watch Webinar »