A fundamental challenge in increasing DRAM bandwidth is raising the data transfer rate between the DRAM interface and the DRAM core. One possibility is to increase the frequency of the DRAM core to match that of the DRAM interface. However, this introduces additional circuit complexity, increases die size, and raises DRAM power consumption, resulting in higher manufacturing cost and lower yield. Core prefetch takes a different approach: it allows the DRAM core to run at a reduced speed compared to the DRAM interface. To match the bandwidth of the interface, each core access transfers multiple bits of data from the core, making up for the difference in transfer speeds. In this manner, core prefetch lets DRAM bandwidth increase even when the DRAM core is limited to operating at a lower speed.
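The relationship can be made concrete with a small sketch: the prefetch factor is simply the ratio of the interface data rate to the core access rate. The function name and the specific figures below are illustrative (in the style of a DDR3-1600 device with a 200 MHz core), not values taken from the text.

```python
def prefetch_factor(interface_mtps: float, core_mhz: float) -> float:
    """Bits each core access must supply per data pin so that the
    slower core can keep the faster interface fully occupied.
    interface_mtps: interface data rate in megatransfers per second.
    core_mhz: core access rate in MHz."""
    return interface_mtps / core_mhz

# A 200 MHz core feeding a 1600 MT/s interface needs an 8n prefetch:
# each core access fetches 8 bits per data pin.
print(prefetch_factor(1600, 200))  # → 8.0
```

Doubling the interface rate while holding the core rate fixed doubles the required prefetch width, which is why successive DDR generations widened the prefetch rather than speeding up the core.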
Core prefetch benefits many different groups by lowering the cost of achieving high DRAM bandwidths. DRAM manufacturers benefit from the higher yields brought about by running the core at a lower speed, increasing the number of good DRAM devices in a given manufacturing run. The ability to supply a given level of bandwidth with fewer DRAMs reduces the number of controller pins and packaging costs for controller designers. Fewer DRAMs also enable system integrators to decrease their bill-of-materials costs and allow for smaller system form factors. Finally, consumers benefit from the lower system costs that follow from higher DRAM yields, reduced packaging costs, and a reduced number of DRAMs required for a given performance level.