More specifically, the analog-to-digital converter (ADC) and digital signal processor (DSP) architecture of Rambus’ 56G SerDes PHY is designed to meet the long-reach backplane requirements for the industry transition to 400 Gigabit Ethernet (GbE) applications. In practical terms, this means it can support scaling to speeds as fast as 112G, which are required in the networking and enterprise segments, such as enterprise server racks that are moving from 100G to 400G.
According to McGregor, as high-speed networking tracks upwards of 400 GbE, the challenge is not just speeding up the interfaces between network nodes, but also being able to distribute those nodes.
“We’re talking about speeds we only dreamed about a couple of years ago,” said McGregor, who also emphasized that existing network architectures aren’t built for the emerging applications that are driving the need for speed. These applications include wireless communications, IoT, artificial intelligence and deep learning, as well as autonomous vehicles, which will gather a great deal of data to be processed in the cloud.
“Any way you look at it, the amount of data these networks have to handle is going to grow exponentially over the next 20 years. Being able to go from 40G to 100G to 400G and beyond is really critical,” he stated.
With semiconductors, says McGregor, it’s gotten to the point where everything is being put on a single chip to bring components as close together as possible. For a data center, with all its storage, processing and compute elements, this scenario simply isn’t feasible.
“You have to have these high-speed interfaces,” he added.
As McGregor points out, the key to Rambus’ strategy has not just been memory, but the memory interface itself.
“They’ve tried to take that IP and extend it, not just to memory but to any interface,” he elaborated.
Partnering with Samsung, says McGregor, gives Rambus the advanced manufacturing capabilities for these types of high-speed interfaces. The TIRIAS Research analyst also noted that in the long term, applications such as AI will push the industry to adopt new architectures, as the current ones are limited both by physics and Moore’s Law. This includes Rambus’ efforts on memory subsystems cooled to cryogenic temperatures to support quantum computing.
Indeed, Rambus recently announced that it is expanding its collaboration with Microsoft researchers to develop prototype systems that optimize memory performance in cryogenic temperatures. Following the initial collaboration announced in December 2015, this new agreement extends joint efforts to enhance memory capabilities, reduce energy consumption and improve overall system performance.
The technologies being developed by the companies will improve energy efficiency for DRAM and logic operation at cryogenic temperatures, defined by the U.S. National Institute of Standards and Technology as below −180 °C (−292 °F, or 93.15 K) and ideal for high-performance supercomputers and quantum computers. Additionally, the technologies will enable high-speed SerDes links to operate efficiently in cryogenic and superconducting domains and allow new memory systems to function at these temperatures.
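The cryogenic threshold cited above can be checked with the standard temperature conversions; the following minimal Python sketch (the function and constant names are illustrative, not from Rambus or NIST) confirms that −180 °C, −292 °F and 93.15 K all describe the same boundary:

```python
def celsius_to_fahrenheit(c: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

def celsius_to_kelvin(c: float) -> float:
    """Convert a temperature from degrees Celsius to kelvin."""
    return c + 273.15

# Cryogenic boundary as defined by NIST: below -180 degrees Celsius.
CRYO_THRESHOLD_C = -180.0

print(celsius_to_fahrenheit(CRYO_THRESHOLD_C))  # -292.0
print(celsius_to_kelvin(CRYO_THRESHOLD_C))      # 93.15
```

All three figures in the article are therefore consistent: they are one threshold expressed in three scales.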