Rambus fellow Helena Handschuh recently participated in a Semiconductor Engineering industry panel discussion about securing data running on AI/ML silicon. As the panel participants note, AI systems are designed to process data at high speeds, not limit access. This is precisely why the semiconductor industry is struggling to more effectively secure AI/ML data and prevent it from being stolen or corrupted.
The big question in AI, ML and deep learning is how to secure that data and know that it can't be corrupted or stolen. How do you secure the models, and how do you keep the weights from being published? That's somebody's IP, and it's a valuable asset.
"[Also], how do you secure the model itself, and how do you evolve it in a secure way? And then, how do you protect the computation from an input? Once you train a device and the network is set up, the input comes in and the result is revealed. These are all the different points that your system has to take care of. It's a lot of work."
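One concrete way to know the weights "can't be corrupted" is to bind an integrity tag to the serialized model and verify it before every load. The sketch below is illustrative only, using Python's standard library; the key source and the weight blob are hypothetical stand-ins for whatever a real device actually provisions.

```python
# Minimal sketch: treating model weights as a protected asset by checking
# their integrity before loading. Key and file contents are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"device-provisioned-secret"  # assumed to live in secure storage


def tag_weights(weights_blob: bytes) -> bytes:
    """Compute an HMAC tag over the serialized weights at build time."""
    return hmac.new(SECRET_KEY, weights_blob, hashlib.sha256).digest()


def load_weights(weights_blob: bytes, expected_tag: bytes) -> bytes:
    """Refuse to use weights whose tag does not verify (tampered or corrupted)."""
    tag = hmac.new(SECRET_KEY, weights_blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected_tag):
        raise ValueError("model weights failed integrity check")
    return weights_blob


if __name__ == "__main__":
    blob = b"\x00\x01\x02 serialized weights ..."
    tag = tag_weights(blob)
    load_weights(blob, tag)                 # verifies cleanly
    try:
        load_weights(blob + b"\xff", tag)   # a single corrupted byte is caught
    except ValueError as err:
        print(err)
```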
Indeed, says Handschuh, a recent research paper shows that any system can be perturbed with only a fixed, or logarithmic, number of changes.
“There is no way you can stop somebody from corrupting things enough that it will show something completely different. We really have to think about that,” she cautions.
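The claim is easy to see in miniature. The sketch below is a toy illustration, not the construction from the paper she cites: bumping just two weights of a small linear model is enough to change its decision.

```python
# Illustrative sketch: changing only a couple of weights in a tiny linear
# classifier is enough to flip its output.
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" model: one linear layer, 10 inputs, 2 classes.
W = rng.normal(size=(2, 10))
x = rng.normal(size=10)


def predict(weights, inputs):
    return int(np.argmax(weights @ inputs))


original = predict(W, x)

# Perturb only the two weights tied to the most influential inputs,
# in favor of the losing class.
W_attacked = W.copy()
loser = 1 - original
idx = np.argsort(-np.abs(x))[:2]
W_attacked[loser, idx] += 5.0 * np.sign(x[idx])

print("before:", original, "after:", predict(W_attacked, x))  # prediction flips
```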
Commenting on how updates and evolving algorithms affect the power profile of AI chips, Handschuh emphasizes that system designers need to develop methods of securing silicon (preventing leakage) that aren't contingent on what the software does or doesn't do.
“The basic building blocks need to be written and implemented in such a way that they don’t leak by themselves [and become susceptible to side-channel attacks]. And then software that you download will hopefully run on a processor or a building block element that isn’t leaking anything,” she elaborates. “In that case, the software should be able to run securely and not give away too much information. But that’s the next step — securing processors such that it doesn’t matter what software you run.”
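At the software level, the same idea shows up in how primitive operations are written. Here is a minimal, generic illustration (not specific to any Rambus product): a comparison over secret data should not take a different amount of time depending on that data, or it leaks through a timing side channel.

```python
# Sketch of a "building block that doesn't leak by itself": comparing a
# secret value (e.g., a MAC tag) in constant time instead of with
# early-exit logic.
import hmac


def leaky_equal(secret: bytes, guess: bytes) -> bool:
    """Timing depends on how many leading bytes match -- a side channel."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False          # returns earlier for worse guesses
    return True


def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    """Runtime is independent of where the first mismatch occurs."""
    return hmac.compare_digest(secret, guess)


if __name__ == "__main__":
    tag = bytes(16)
    print(leaky_equal(tag, bytes(16)), constant_time_equal(tag, bytes(16)))
```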
As Handschuh emphasizes, adding security measures rarely occurs by chance.
“In some cases, [security] requires legislation or standardization, because there’s liability involved if things go wrong, so you have to start including a specific type of solution that will address a specific problem,” she states. “Liability [in the AI sector] is what’s going to drive it. Nobody will do it just because they are so paranoid that they think that it must be done. It will be somebody telling them, ‘If you don’t do it, here’s the risk for you and for your business.’”
At a certain point, says Handschuh, there will be a general realization that securing AI silicon requires a more proactive approach.
“Whether that’s PSA or Common Criteria or something else – you will have to prove you’ve done something specifically to check the [security] box,” she says.
In terms of legislation, Handschuh notes that Europe has traditionally focused on the privacy side of the cyber security equation, while the United States passed the Cybersecurity Information Sharing Act in 2015.
“There is [clearly] a push from the regulators to say, ‘We need to do something.’ They don’t ever tell you exactly what needs to be done, but there is some traction.”
Handschuh also points out that it is important to secure AI data at the point at which encryption and decryption occur.
“When the data is at its most vulnerable, it has to be protected within some secure boundary and within some trusted execution environment or security flavor. At that precise moment when it is encrypted or decrypted, it has to be in a place where nobody has access to it,” she states.
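To make the boundary idea concrete, here is a hedged sketch in Python using the third-party `cryptography` package. The TrustedBoundary class is only a stand-in for a real TEE or secure element, and the names are hypothetical; the point is that the key and the plaintext exist only inside one place, and callers only ever receive results.

```python
# Hedged sketch: keeping plaintext confined to one trusted place.
# Requires the third-party package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


class TrustedBoundary:
    """Only code inside this class ever sees the key or the plaintext."""

    def __init__(self) -> None:
        self._key = AESGCM.generate_key(bit_length=256)   # never leaves the boundary

    def seal(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(self._key).encrypt(nonce, plaintext, None)

    def process(self, sealed: bytes) -> int:
        """Decrypt, compute inside the boundary, and release only the result."""
        nonce, ciphertext = sealed[:12], sealed[12:]
        plaintext = AESGCM(self._key).decrypt(nonce, ciphertext, None)
        return len(plaintext)       # placeholder for the real computation


if __name__ == "__main__":
    tee = TrustedBoundary()
    blob = tee.seal(b"sensitive training data")
    print(tee.process(blob))        # callers get results, never the plaintext
```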
According to Handschuh, AI systems can be designed to allow various levels of access rights to data, while preventing users from directly accessing the data and exporting it.
“You can handle [data] like a chemist, where they never touch poisonous materials directly, but can move them around through manipulators. So, you can give certain user rights to do things with data and assets, but no one can directly touch them,” she concludes.
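As a rough sketch of that "manipulator" idea in code (all class and method names here are hypothetical), an asset store can hand out opaque handles and a small, fixed set of permitted operations while never returning the raw bytes:

```python
# Hedged sketch of the "chemist with manipulators" pattern: callers get an
# opaque handle and a fixed set of operations, never the asset itself.
import secrets


class AssetVault:
    def __init__(self) -> None:
        self._assets = {}                      # handle -> raw data, kept private

    def admit(self, data: bytes) -> str:
        """Store an asset and hand back an opaque handle."""
        handle = secrets.token_hex(8)
        self._assets[handle] = data
        return handle

    def length(self, handle: str) -> int:
        """An allowed manipulation: compute on the asset without exposing it."""
        return len(self._assets[handle])

    def combine(self, h1: str, h2: str) -> str:
        """Another allowed manipulation: derive a new asset, return a new handle."""
        return self.admit(self._assets[h1] + self._assets[h2])

    # Deliberately no export(handle) method: nothing hands the raw bytes back.


if __name__ == "__main__":
    vault = AssetVault()
    a = vault.admit(b"model weights")
    b = vault.admit(b" + calibration data")
    print(vault.length(vault.combine(a, b)))   # 32, computed without export
```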