Found 220 Results

AI/ML ramps up despite training challenges

https://www.rambus.com/blogs/ai-ml-ramps-up-despite-training-challenges/

Training AI/ML algorithms – A new survey commissioned by Alegion reveals that 8 out of 10 companies have found training AI/ML algorithms more challenging than originally anticipated. Moreover, 78 percent of AI/ML projects have stalled at some stage before deployment, and 96 percent of enterprises have encountered data quality and labeling challenges – while only half […]

Achronix Chooses Rambus GDDR6 PHY IP for Next-Generation FPGA

https://www.rambus.com/achronix-chooses-rambus-gddr6-phy-ip-for-next-generation-fpga/

Delivering best-in-class solutions for artificial intelligence and hardware acceleration applications SUNNYVALE, Calif. – June 4, 2019 – Rambus Inc. (NASDAQ: RMBS) today announced that Achronix, a leader in FPGA-based hardware data acceleration devices and high-performance eFPGA IP, has selected the Rambus GDDR6 PHY for its next-generation Speedster7t FPGA family. Leveraging the top-end data rates delivered by the Rambus […]

Are ML Systems Robust Enough for the Here and Now?

https://www.rambus.com/blogs/are-ml-systems-robust-enough-for-the-here-and-now/

Researchers at MIT have created a way to determine the robustness levels of machine learning (ML) models for various tasks by detecting when those models make mistakes they’re not supposed to make. Rob Matheson of the MIT News Office writes about that development in ECN. A good part of Matheson’s piece […]

Most Complex Processor Chip for AI Acceleration

https://www.rambus.com/blogs/most-complex-processor-chip-for-ai-acceleration/

EE Times reports that Graphcore of Bristol, UK has brought to market a new type of processor for AI acceleration called the intelligence processing unit (IPU). According to CEO Nigel Toon, the IPU is the most complex processor chip that’s ever been built. It’s described as “just shy of 24 billion transistors on a […]

Rambus Reports First Quarter 2019 Financial Results

https://www.rambus.com/first-quarter-2019-financial-results/

First quarter revenue and billings in line with expectations; GAAP revenue of $48.4 million, with royalty revenue of $24.8 million and licensing billings of $75.4 million; $28.8 million in cash provided by operating activities; DDR4 server DIMM chipset revenue up nearly 40% year-over-year, fueled by continued market share growth; record revenue for IP Cores business; […]

Training Neural Networks

https://www.rambus.com/blogs/training-neural-networks/

by Steven Woo Neural networks (NNs) span a wide range of topologies and sizes. Some neural networks are relatively simple and have only two or three layers of neurons, while so-called deep neural networks may comprise 100+ layers of neurons. In addition, the layers can be extremely wide – with hundreds to thousands of neurons […]
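The contrast between shallow and deep, wide topologies described above can be made concrete with a quick parameter count. The layer widths below are illustrative assumptions, not figures from the article:

```python
# Sketch: parameter counts for fully connected networks of different
# topologies. Each layer contributes (inputs x outputs) weights + biases.

def dense_param_count(layer_widths):
    """Total weights and biases for a fully connected network,
    given the width (neuron count) of each layer in order."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(layer_widths, layer_widths[1:]))

# A simple 3-layer network (hypothetical sizes)
shallow = dense_param_count([784, 100, 10])
print(shallow)  # 79510 parameters

# A deep, wide network: 100 hidden layers of 1,000 neurons each
deep = dense_param_count([784] + [1000] * 100 + [10])
print(deep)     # roughly 100 million parameters
```

Even this rough sketch shows why deep, wide networks place far heavier demands on compute and memory than shallow ones.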

Rambus 112G SerDes PHY Article in eeweb.com/EE Times

https://www.rambus.com/blogs/rambus-112g-serdes-phy-article-in-eeweb-com-ee-times/

Ken Dyer, Director, Engineering Architecture, is the author of a 112G Long Reach (LR) SerDes PHY article on the eeweb.com/EE Times network. The 112G PHY is coming to market to meet growing demand for greater bandwidth in next-generation data centers as well as in advanced applications like artificial intelligence (AI) and machine learning, […]

From Multilayer Perceptrons to GANs

https://www.rambus.com/blogs/from-multilayer-perceptrons-to-gans/

Written by Steven Woo In our previous blog post, we discussed the history of neural networks (NNs) and machine learning (ML). We also took a closer look at some of the memory standards that are currently powering a diverse range of NNs and ML applications. In this blog post, we’ll explore some of the more […]

Reduced-Precision Computation for Neural Network Training

https://www.rambus.com/blogs/reduced-precision-computation-for-neural-network-training/

In our previous blog post, we discussed the process of training neural networks (NN) and briefly touched on NN training platforms and related memory bandwidth issues. As we noted, neural network training and inference performance are heavily contingent upon memory bandwidth. This is because the memory system is typically tasked with holding the neural network […]
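The bandwidth motivation for reduced precision can be sketched numerically. The parameter count below is an assumption for illustration, not a figure from the post:

```python
# Sketch: halving numeric precision halves the bytes the memory system
# must move when reading/writing a network's weights each training step.

BYTES_FP32 = 4   # IEEE 754 single precision
BYTES_FP16 = 2   # IEEE 754 half precision

num_params = 25_000_000  # hypothetical mid-sized network

fp32_mib = num_params * BYTES_FP32 / 2**20
fp16_mib = num_params * BYTES_FP16 / 2**20

print(f"fp32 weights: {fp32_mib:.1f} MiB")  # fp32 weights: 95.4 MiB
print(f"fp16 weights: {fp16_mib:.1f} MiB")  # fp16 weights: 47.7 MiB
```

For a memory-bandwidth-bound workload, that 2x reduction in data moved translates directly into higher effective training throughput.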

Understanding ML and ANN memory requirements

https://www.rambus.com/blogs/understanding-ml-and-ann-memory-requirements/

Written by Steven Woo Artificial Neural Networks First proposed in 1943 by Warren McCulloch and Walter Pitts, an artificial neural network (ANN), or more commonly a neural network (NN), can perhaps best be defined as a computational model that attempts to closely emulate the network of neurons present in the human brain. More specifically, neuromorphic […]
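The original McCulloch-Pitts model is simple enough to sketch in a few lines: a binary threshold unit that fires when the weighted sum of its inputs reaches a threshold. This is a minimal illustration of the concept, not code from the blog:

```python
# Sketch of a McCulloch-Pitts-style threshold neuron: the earliest
# computational model of a biological neuron.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs
    meets or exceeds the threshold; otherwise return 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With both weights 1 and threshold 2, the neuron computes logical AND:
print(mp_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mp_neuron([1, 0], [1, 1], threshold=2))  # 0
```

Modern neural networks replace the hard threshold with differentiable activation functions so the weights can be learned by gradient descent, but the weighted-sum-and-activate structure is unchanged.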
