How Semiconductors Are Still Relevant in an Increasingly Digitized Age
This entry was posted on Friday, November 16th, 2018.
On November 27th, Senior Vice President of Global Marketing, Sales and Support, Mike Noonen, will keynote at the SemIsrael Expo 2018, speaking on how “Big Data Needs Secure Tiny Devices.”
Also on the agenda is Frank Ferro, Senior Director, Product Marketing, Memory & Interface Division, who will cover “Advanced Memory Interfaces for High-Performance Systems” in the IP & Cores track. You can visit the Rambus product expert team at booth #26 at SemIsrael.
Semiconductors are increasingly important to the future of technology. An integral part of our tools since the 1950s, they have powered every computerized device, providing the brains for most of our technology: from military hardware in the 1950s, to personal computers in the 1980s, to mobile phones in the 2010s. As software becomes more sophisticated and Moore’s Law fades from prominence, some may wonder how semiconductors can keep up in this increasingly mobile and digitized age.
With everything from people to cars, homes, cities, and industries becoming increasingly connected, the semiconductor industry is projected to be valued at $1.9 trillion by 2020. In the 1980s, semiconductors used 12 elements of the periodic table; that number rose to 16 in the 1990s and 58 in the 2000s. Drawing on more of the periodic table makes much more possible, such as indium phosphide optical interfaces and iridium substrates for diamond wafers.
Semiconductor companies are on the cusp of a fourth tectonic shift in computing, one that brings a set of unique challenges: low CPU utilization, where processing capabilities are scaling faster than memory can support; big data analytics, where massive and growing data sets are straining data center architectures; and DRAM scaling, where the cost per bit of DRAM no longer scales with process.
For the last 40 years, increasing transistor counts have enabled new functionality and better system performance, but the gains from monolithic scaling are now slowing. Early systems were bottlenecked by CPU performance, limiting what applications could do; over time, those bottlenecks have shifted from the processor to memory. This is compounded by the rising cost of System-on-Chip (SoC) devices, driven by the increasingly complex equipment and materials required to manufacture them.
The question, then, is: if monolithic scaling is no longer delivering performance and is becoming unaffordable, what can the semiconductor industry do? A passage from Gordon Moore’s 1965 paper, the origin of “Moore’s Law,” offers a clue: “it may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected.”
One approach to consider is system-in-package: separately packaged but interconnected devices, an arrangement known as “disaggregation,” can drive performance while controlling costs. Another key technology is high bandwidth memory, or HBM2 DRAM. It can deliver as much as 256 GB/s of bandwidth at 50% lower power, with a much smaller footprint than other memory formats such as GDDR5. With big data needing tiny devices, reducing the footprint of memory is important.
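The 256 GB/s figure follows from HBM2’s very wide interface. A minimal back-of-the-envelope sketch, assuming a 1024-bit interface running at 2 Gbit/s per pin (parameters from the JEDEC HBM2 specification, not stated in this article):

```python
# Back-of-the-envelope check of HBM2 peak bandwidth per stack.
# Assumptions: 1024-bit interface, 2 Gbit/s per pin (JEDEC HBM2 figures).
interface_width_bits = 1024     # data pins per HBM2 stack
data_rate_gbps = 2.0            # Gbit/s transferred per pin

# Total bits per second across the interface, converted to bytes.
bandwidth_gbs = interface_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s per stack")  # 256 GB/s per stack
```

By contrast, a narrower interface such as GDDR5’s 32 bits per device must run its pins far faster to approach similar bandwidth, which is part of why HBM2 achieves more bandwidth at lower power.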
The Bottom Line
With the end of Moore’s Law, memory bandwidth at a premium, and tasks becoming more complex, semiconductors are more relevant than ever.