Written by Paul Karazuba for Rambus Press
A team of North Carolina State University researchers recently published a paper that highlights the vulnerability of machine learning (ML) models to side-channel attacks. Specifically, the team used power-based side-channel attacks to extract the secret weights of a Binarized Neural Network (BNN) in a highly parallelized hardware implementation.
“Physical side-channel leaks in neural networks call for a new line of side-channel analysis research because it opens up a new avenue of designing countermeasures tailored for deep learning inference engines,” the researchers wrote. “On a SAKURA-X FPGA board, [our] experiments show that the first-order DPA attacks on [an] unprotected implementation can succeed with only 200 traces.”
According to Jeremy Hsu of IEEE Spectrum, machine learning algorithms that enable smart home devices and smart cars to automatically recognize various types of images or sounds, such as words or music, are among the artificial intelligence (AI) systems "most vulnerable" to such attacks.
“Such algorithms consist of neural networks designed to run on specialized computer chips embedded directly within smart devices, instead of inside a cloud computing server located in a data center miles away,” he explains. “This physical proximity enables such neural networks to quickly perform computations with minimal delay, but also makes it easy for hackers to reverse-engineer the chip’s inner workings using a method known as differential power analysis (DPA).”
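To make the DPA idea concrete, here is a minimal, hypothetical sketch (not taken from the paper or any Rambus tool): an attacker records many power traces, predicts a secret-dependent intermediate bit for each trace, partitions the traces by that prediction, and looks at the difference of the group means. The simulated traces, the leaking sample index, and the signal strength below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 power traces of 100 samples each (illustrative
# numbers). Sample 40 leaks a secret-dependent bit on top of measurement noise.
n_traces, n_samples, leak_idx = 200, 100, 40
secret_bits = rng.integers(0, 2, n_traces)           # attacker-predicted intermediate bit
traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
traces[:, leak_idx] += 1.0 * secret_bits             # data-dependent power draw

# First-order DPA: partition traces by the predicted bit and take the
# difference of the two group means at every sample point in time.
dom = traces[secret_bits == 1].mean(axis=0) - traces[secret_bits == 0].mean(axis=0)
peak = int(np.argmax(np.abs(dom)))
print(peak)  # the leaking sample stands out from the noise floor
```

When the bit prediction is based on a correct key guess, the difference-of-means trace spikes at the moment the device processes that bit; wrong guesses average out to noise, which is how DPA distinguishes key candidates.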
Indeed, edge devices storing AI/ML models and data can be physically disassembled. If unprotected, an attacker can run malicious firmware, intercept network traffic, and employ various side-channel techniques to extract secret keys and steal sensitive information. AI/ML inference models, along with their input data and results, are often quite lucrative and should be shielded from criminal elements intent on financial gain. Moreover, the integrity of AI/ML systems must be protected from tampering, both to prevent malicious attackers from building cloned or competitive devices and to keep them from altering training models, input data, and results.
To protect both silicon and data, edge devices running AI/ML algorithms should be built on top of a secure, tamper-proof foundation that ensures confidentiality, integrity, authentication, and availability (up-time). This can be achieved with a programmable security co-processor such as the Rambus CryptoManager Root of Trust (CMRT), which uses a combination of hardware and software countermeasures to thwart side-channel attacks. For AI/ML edge devices built without the Rambus CMRT, we recommend that system designers use a Test Vector Leakage Assessment (TVLA) platform like Rambus' DPA Workstation Analysis Platform (DPAWS) to detect side-channel leakage and implement appropriate countermeasures.
DPAWS includes all the hardware, software, and training material needed to evaluate and certify secure AI/ML edge devices. It offers a Windows-based, intuitive user interface that enables system designers to efficiently perform side-channel analysis on AI/ML edge devices to identify potential security flaws. Collected signals are examined using simple power and electromagnetic analysis (SPA/SEMA) or more powerful differential power and electromagnetic analysis (DPA/DEMA) to identify exposure of secret keys.
The recently released DPAWS 9.2 offers several notable new features, including sample leakage history plotting. This capability allows system designers to detect and better understand side-channel leakage. More specifically, tracking how leakage grows over time can help improve countermeasure design, uncover additional locations vulnerable to leakage, and provide a baseline for comparison against non-leaking locations.
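The TVLA methodology behind this kind of leakage assessment is commonly illustrated with a fixed-vs-random test: capture one set of traces with a fixed input and one with random inputs, then compute Welch's t-statistic at every sample, flagging points where |t| exceeds the conventional 4.5 threshold. The sketch below is a generic illustration of that statistic, with simulated traces and an assumed leaking sample; it is not DPAWS code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed-vs-random TVLA data: 500 traces per set, 120 samples
# each. Sample 25 carries a small data-dependent difference (an assumption
# made purely for illustration).
n, m = 500, 120
fixed_set = rng.normal(0.0, 1.0, (n, m))
random_set = rng.normal(0.0, 1.0, (n, m))
fixed_set[:, 25] += 0.6

def welch_t(a, b):
    """Welch's t-statistic per sample point for two trace sets."""
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

t = welch_t(fixed_set, random_set)
leaky = np.where(np.abs(t) > 4.5)[0]  # conventional TVLA pass/fail threshold
print(leaky)
```

Re-running this test as traces accumulate shows how the t-statistic at a suspect sample grows with trace count, which is the kind of leakage-over-time view that history plotting provides.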
DPAWS 9.2 also offers system designers access to bivariate leakage detection, a form of higher-order side-channel analysis to which more robust designs may need to be resistant. Essentially, bivariate analysis combines different points from within the analysis range to detect leakage. This methodology may reveal leaks in implementations that are univariate or first-order secure. Previously, separate tools were needed to combine pairs of data points within the analysis window or to perform bivariate analysis from the command line.
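A toy example helps show why combining points matters. In a masked implementation, a random mask splits the secret into shares, so no single sample depends on the secret; but a centered product of two samples can. The sketch below is a hypothetical illustration of that principle (the leakage model, noise levels, and mask encoding are all assumptions, not DPAWS internals).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical masked design: a fresh random mask r hides the secret bit b,
# so each sample alone is statistically independent of b.
n = 2000
b = rng.integers(0, 2, n)                      # secret-dependent bit
r = rng.choice([-1.0, 1.0], n)                 # fresh mask per trace
s_i = r + rng.normal(0, 0.3, n)                # leakage of the first share
s_j = r * (2 * b - 1) + rng.normal(0, 0.3, n)  # leakage of the second share

# First-order analysis fails: the mean of each sample is the same for
# b = 0 and b = 1. Bivariate analysis combines the two points with a
# centered product, then compares group means on the combined variable.
comb = (s_i - s_i.mean()) * (s_j - s_j.mean())
dom = comb[b == 1].mean() - comb[b == 0].mean()
print(dom)  # clearly nonzero: a second-order leak the univariate test misses
```

This is why designs that pass first-order testing may still need to be evaluated, and hardened, against higher-order attacks.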
In summary, AI/ML edge devices, like all physical electronic systems, routinely leak information about their internal computations via fluctuating power consumption and electromagnetic emissions. To protect both silicon and data, edge devices running AI/ML algorithms should be built on top of a secure, tamper-proof foundation. In addition, AI/ML edge devices should be carefully evaluated with a testing platform like Rambus DPAWS to validate the effectiveness of countermeasures in reducing sensitive side-channel leakage.