Many-Channel AXI DMA

The Many-Channel AXI DMA (formerly vDMA-AXI) IP Core implements a highly efficient, configurable DMA engine engineered for Artificial Intelligence/Machine Learning (AI/ML)-optimized SoCs and FPGAs that power virtualized data centers.

How the Many-Channel AXI DMA Core Works

The core serves as a centralized DMA engine that moves data concurrently in any direction, making it particularly well suited to many-core SoCs such as AI/ML processors. Its architecture allows hundreds of independent, concurrent DMA channels to be distributed among a number of Virtual Machines (VMs) or host domains without sacrificing performance or resource utilization. The core is also optimized to deliver high throughput on small data packet transfers, a common weakness of traditional DMA engines.
Many-Channel AXI DMA Block Diagram

The Many-Channel AXI DMA core can be attached externally to the PCIe 5.0 Controller for a scalable, enterprise-class PCIe interface solution for compute, network, and storage SoCs.

Data Center Evolution: Accelerating Computing with PCI Express 5.0

The PCI Express® (PCIe) interface is the critical backbone that moves data at high bandwidth between compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. The rise of cloud computing and hyperscale data centers, together with high-bandwidth applications such as artificial intelligence (AI) and machine learning (ML), requires the new level of performance that PCI Express 5.0 delivers: a 32 GT/s per-lane data rate, double that of PCIe 4.0.
