Written by Steven Woo
Artificial Neural Networks
First proposed in 1943 by Warren McCulloch and Walter Pitts, an artificial neural network (ANN), or more commonly a neural network (NN), can perhaps best be defined as a computational model that attempts to closely emulate the network of neurons present in the human brain. The related field of neuromorphic computing aspires to replicate the results that biological brains achieve using analogous, albeit artificial, mechanisms.
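As a rough illustration of the underlying idea, the Python sketch below shows the kind of computation a single artificial neuron performs: a weighted sum of its inputs followed by a simple threshold, loosely in the spirit of the original McCulloch-Pitts model. The inputs, weights and bias are arbitrary values chosen only for the example.

```python
# Illustrative sketch of a single artificial neuron (not any specific library's API).
# It computes a weighted sum of its inputs and applies a threshold activation,
# loosely following the McCulloch-Pitts idea of a neuron that either "fires" or not.

def artificial_neuron(inputs, weights, bias, threshold=0.0):
    """Return 1 if the weighted sum of inputs (plus bias) exceeds the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > threshold else 0

# Example with arbitrary values: three inputs feeding one neuron.
print(artificial_neuron(inputs=[0.5, 0.3, 0.9], weights=[0.8, -0.2, 0.4], bias=0.1))
```

Real networks wire many such units into layers and learn the weights from data, but the per-neuron arithmetic remains essentially this simple multiply-accumulate pattern.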
Unlike traditional von Neumann systems, neural networks are not limited by conventional bottlenecks between program memory, data memory and the CPU. As Dan Kara, research director at ABI Research, notes, standard CPUs will continue to play an important role in data centers for the foreseeable future. However, they will be supported by an increasing number of specialized non-von-Neumann platforms, as well as graphics processing units (GPUs), custom application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs).
Machine Learning
Creation of the term ‘machine learning’ (ML) is widely credited to Arthur Samuel, an artificial intelligence (AI) pioneer. From 1949 through the late 1960s, Samuel focused on enabling computers to learn from experience. One of his most famous learning programs used Lee’s Guide to Checkers to adjust its moves and optimize gameplay, reflecting his view of machine learning as a form of AI that enables a system to learn from data rather than explicit programming. The field has expanded significantly in the 21st century and now spans multiple sub-categories, including supervised learning, unsupervised learning, reinforcement learning and deep learning.
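To illustrate the distinction between learning from data and explicit programming, the toy sketch below derives a decision threshold from a handful of labeled examples instead of hard-coding a rule. The data and the simple midpoint heuristic are invented for this illustration and are not Samuel’s actual method.

```python
# Toy illustration of supervised learning: instead of hard-coding a rule,
# the program derives a decision threshold from labeled examples.
# The data below is invented purely for this sketch.

labeled_scores = [(2.0, 0), (3.1, 0), (4.0, 0), (6.5, 1), (7.2, 1), (8.8, 1)]

# "Learn" a threshold as the midpoint between the largest negative example
# and the smallest positive example.
max_negative = max(score for score, label in labeled_scores if label == 0)
min_positive = min(score for score, label in labeled_scores if label == 1)
threshold = (max_negative + min_positive) / 2

def predict(score):
    """Classify a new score using the threshold learned from the data."""
    return 1 if score > threshold else 0

print(threshold)      # 5.25 for the example data above
print(predict(5.0))   # 0
print(predict(7.0))   # 1
```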
AI: 2019 & Beyond
Neural networks and machine learning are currently experiencing an exciting renaissance, with a resurgence of interest across the semiconductor industry, academia and the media. Driven by Moore’s Law, compute performance and transistor densities have improved by several orders of magnitude since the last wave of machine learning advances in the 1980s. This has enabled neural networks and machine learning to flourish across a wide range of industries.
Numerous chips and cores are now being designed and deployed to implement neural network training and inference. Analysts estimate there are at least 50 neural network processors and 10 neural network IP cores under development or in production by internet giants, traditional semiconductor manufacturers and numerous start-up companies. These specialized chips and cores are targeted at a diverse set of use cases such as data centers, mobile processors and edge processing systems.
NN, ML & Memory Requirements
Neural networks require ultra-high memory bandwidth and power efficiency for both inference and training. Moreover, large training data sets often demand substantial memory capacity, especially for data center applications. To fulfill these requirements, some companies are leveraging on-chip memory, while others are using HBM2 or GDDR6. Although on-chip memory provides the highest bandwidth and power efficiency, it sacrifices capacity to achieve them.
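To make these requirements more concrete, the back-of-the-envelope sketch below estimates the memory bandwidth needed when a model’s weights are streamed from memory for every inference. The model size, weight precision and target throughput are assumed values chosen purely for illustration.

```python
# Back-of-the-envelope estimate of memory bandwidth demand for inference.
# All numbers below are assumptions chosen for illustration only.

model_parameters = 500e6        # assumed 500 million parameters
bytes_per_parameter = 2         # assumed 16-bit (2-byte) weights
inferences_per_second = 1000    # assumed target throughput

model_size_bytes = model_parameters * bytes_per_parameter

# Worst case: every weight is re-read from memory for every inference
# (no on-chip reuse), which is what drives the need for very high bandwidth.
required_bandwidth = model_size_bytes * inferences_per_second

print(f"Model size:         {model_size_bytes / 1e9:.1f} GB")
print(f"Required bandwidth: {required_bandwidth / 1e9:.0f} GB/s")
```

Even under these modest assumptions the demand reaches hundreds of gigabytes per second, which is why the choice between on-chip memory, HBM2 and GDDR6 matters so much.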
HBM2 memory systems offer NN and ML engineers the highest bandwidth per device, along with excellent power efficiency for high-performance memory systems. The key challenges for HBM2 are the higher cost of the memory devices, the additional cost of the silicon interposer and the inherent difficulty of interposer-based system design.
Meanwhile, GDDR6 offers NN and ML engineers high bandwidth as well as high capacity. Like previous GDDR generations, GDDR6 DRAMs utilize single-ended signaling and die-down implementations to provide the cleanest signaling environment for the highest per-pin signaling rates. The primary advantages of GDDR6 are compatibility with existing package and board infrastructure, as well as the relatively lower cost of the DRAMs and packages. The critical challenges for GDDR6 memory systems are maintaining signal integrity and achieving reasonable power efficiency.
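As a rough comparison of the two approaches, the sketch below derives peak per-device bandwidth from interface width and per-pin data rate. The per-pin rates are representative figures for HBM2 and GDDR6 rather than the limits of any particular product.

```python
# Rough peak-bandwidth comparison based on interface width and per-pin data rate.
# The per-pin rates below are representative figures, not the limits of any
# specific product.

def peak_bandwidth_gb_per_sec(interface_width_bits, gbits_per_sec_per_pin):
    """Peak device bandwidth in GB/s = width (bits) * per-pin rate (Gb/s) / 8."""
    return interface_width_bits * gbits_per_sec_per_pin / 8

# HBM2: very wide (1024-bit) interface running at a relatively modest per-pin rate.
hbm2 = peak_bandwidth_gb_per_sec(1024, 2.0)    # ~256 GB/s per stack

# GDDR6: narrow (32-bit) interface running at a much higher per-pin rate.
gddr6 = peak_bandwidth_gb_per_sec(32, 16.0)    # ~64 GB/s per device

print(f"HBM2 stack:   ~{hbm2:.0f} GB/s")
print(f"GDDR6 device: ~{gddr6:.0f} GB/s")
```

The contrast reflects the trade-off described above: HBM2 earns its bandwidth through a very wide interface that requires an interposer, while GDDR6 pushes each pin much faster over conventional packages and boards, which is what makes signal integrity its central challenge.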
Conclusion
Driven by Moore’s Law, neural networks and machine learning are currently experiencing an exciting renaissance. Both require ultra-high bandwidth, large capacity and power efficiency for inference and training. To fulfill these requirements, some companies have chosen on-chip memory, while others are using HBM2 or GDDR6. Each approach offers its own advantages and disadvantages, as explored in detail above.