5G is hungrier for memory than previous generations, according to Gary Hilson, memory editor at EE Times. In a recent article, he pulled together an industry consensus on the types of memory being considered to meet 5G demands, along with their pros and cons.
Hilson writes that it makes sense when you think about how much computing power people are carrying around in their hands compared to even the early days of the BlackBerry. Mobile networks are just as much about transmitting 4K video as they are about talk and text.
He adds, “Connected devices not only include smartphones, but sensors, parking meters, smart cars, wearables, and utilities. Telecom infrastructure is now networking and compute infrastructure—flash and DRAM are supplanting SRAM and TCAM, and there might be room for emerging memories, too.”
Hilson explains that a high-speed TCAM (ternary content-addressable memory) can search its entire contents in a single clock cycle and is faster than RAM. It’s a mainstay of networking gear, such as high-performance routers and switches, used to speed up route lookup, packet classification, packet forwarding, and access-control-list-based commands.
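The wildcard-capable matching Hilson describes can be sketched in software. Below is a minimal, hypothetical Python model of TCAM route lookup; note that a real TCAM compares every entry in parallel in hardware within one clock cycle, while this loop only emulates the match logic:

```python
# Software sketch of TCAM matching (hypothetical example, not a real device model).
# Each entry stores a value, a mask (1 bits are "care" bits, 0 bits are
# wildcards), and a result. The first matching entry wins, so more-specific
# routes are installed first, giving longest-prefix-match behavior.

class TCAM:
    def __init__(self):
        self.entries = []  # list of (value, mask, result), in priority order

    def add(self, value: int, mask: int, result: str) -> None:
        """Install an entry; earlier entries have higher priority."""
        self.entries.append((value, mask, result))

    def lookup(self, key: int):
        """Return the result of the highest-priority entry matching `key`."""
        for value, mask, result in self.entries:
            if (key & mask) == (value & mask):  # compare only the "care" bits
                return result
        return None  # no entry matched

# Route lookup example: /24 prefix installed before the broader /16.
tcam = TCAM()
tcam.add(0xC0A80100, 0xFFFFFF00, "port 2")  # 192.168.1.0/24
tcam.add(0xC0A80000, 0xFFFF0000, "port 1")  # 192.168.0.0/16

print(tcam.lookup(0xC0A80105))  # 192.168.1.5   -> "port 2" (matches /24)
print(tcam.lookup(0xC0A8FF01))  # 192.168.255.1 -> "port 1" (falls back to /16)
```

The masked comparison is what makes the memory "ternary": each bit position is effectively 0, 1, or don't-care, which is why a single TCAM entry can cover an entire address range.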
However, there are downsides to TCAM. Hilson quotes Jim Handy, principal at Objective Analysis, as saying, “despite its longevity, it is ‘exotic’ in that it’s quite expensive and there are limited suppliers. But there’s solid payback from using them. They streamline the routers. They make them far faster with less other processing hardware.”
Hilson also writes about resistive RAM (ReRAM) and magneto-resistive RAM (MRAM) as possible contenders to meet 5G requirements. As he notes, ReRAM-based TCAM circuits can match the performance of CMOS-based SRAM circuits for multicore neuromorphic processor applications, despite performance and reliability tradeoffs. However, the issue with both ReRAM and the emerging MRAM is temperature: neither can handle high heat.
Hilson again quotes Handy saying, “Embedded MRAM has made some inroads. [However,] the telephone companies want both high and low temperature stuff.”
Another viewpoint in Hilson’s article comes from Virtium vice president Hiep Pham, who oversees the company’s networking endeavors. For the immediate future, the transition from 4G means continuing to support legacy equipment. “The goal for 5G is basically to have a faster speed, a high capacity, and low latency,” he said. “But 5G will go together with some applications.”
“This is where the lines get blurry,” Hilson reports. “The evolution from a telecom network to a technology platform means it’s not just about a better voice and data experience for handsets, but support for a wide range of edge computing and IoT scenarios, including industrial automation, as well as autonomous vehicles and smart cities.”
Hilson states that Pham believes the amount of artificial intelligence (AI) in the network will increase, starting in data centers but also distributed through the network, right to the edge. “The edge will have to handle some of the AI, even though it goes through 4G or 5G.” Pham said bandwidth limitations will remain, so the edge must be capable of storing and processing information.
Lastly, Ryan Baxter, senior director for data center at Micron, rounds out Hilson’s piece by saying, “The distributed nature of a 5G platform means there’s been a proliferation of different memory technologies being considered and not just standard DDR3 and DDR4.”
Baxter is quoted as saying that low-power double data rate (LPDDR) and graphics double data rate (GDDR) technologies are being looked at, and that such a broad spectrum of requirements is driving a fair amount of fragmentation, because 5G presents itself as a blend of telecom and compute networking.
“We’re seeing significantly larger memory footprints required to essentially support the compute elements of the 5G deployments,” he says in Hilson’s article.