In part one of this two-part blog series, Ben Levine, Senior Director of Security Product Marketing at Rambus, and Semiconductor Engineering editor-in-chief Ed Sperling, discuss the many security challenges associated with creating increasingly complex CPUs, SoCs and embedded systems. While there isn’t a single solution to the problem, says Levine, an embedded security processor – siloed from the general-purpose processor – can significantly bolster semiconductor security by providing a hardware-based root-of-trust.
In this blog post (part two), Levine and Sperling talk in detail about the importance of adequately securing various types of specialized silicon, such as AI (artificial intelligence) accelerators and edge devices.
“AI is absolutely an area where there’s a real need for security. There are different types of data that you’re dealing with at different parts of the process,” Levine explains. “[Essentially], AI can be divided into two phases. One is training – when you are building the model that you’re going to use to classify things. The other is inference, when you are classifying new data that is coming in.”
When training AI applications, says Levine, it is important to protect the integrity of the training data sets, as well as the trained model that will later be used for inference.
“You probably want to encrypt [the AI model] when it’s at rest and only decrypt it when you need to use it. When you are doing inference, you’re also working with data that may be sensitive,” he further elaborates. “So, for example, if it is a cloud AI that’s processing voice commands from a smart speaker, the data is [originating] from someone’s house. You need to protect the confidentiality of that data. You may [also] have many users that are being processed at the same time. [Nevertheless], they need to have their data kept separate, confidential [and secure].”
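The encrypt-at-rest, decrypt-only-for-use pattern Levine describes can be sketched in Python. This is an illustrative stand-in, not Rambus's implementation: the SHA-256 counter keystream below is a teaching toy, and a production design would use a vetted AEAD such as AES-GCM with keys held inside the hardware root-of-trust. The function names (`encrypt_at_rest`, `decrypt_for_use`) are hypothetical.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_at_rest(key: bytes, model_bytes: bytes) -> bytes:
    """Encrypt-then-MAC; the stored blob is nonce || ciphertext || tag."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in
               zip(model_bytes, _keystream(key, nonce, len(model_bytes))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt_for_use(key: bytes, blob: bytes) -> bytes:
    """Verify integrity first, then decrypt; reject a tampered model blob."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("model blob failed integrity check")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The key point is the lifecycle: the model only exists in plaintext in memory while inference is running, and a tampered blob is rejected before any decryption output is used.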
AI, says Levine, presents a definite security challenge for the semiconductor industry. As he points out, it is also important to understand that no single security solution is a panacea. Nevertheless, a hardware-based root-of-trust embedded in custom AI silicon can help protect valuable customer data from tampering and unauthorized access.
“As AI applications become more [mainstream], people are going to be more aware of the security implications of AI,” states Levine. “They will start to really care if they’re buying products that protect the security – not only of the company that’s producing the AI device – but also of their users. So, companies that are making secure AI chips will, I think, have a real advantage.”
With regard to the impact a dedicated security processor would have on power, performance and time to market, Levine notes that with an AI chip, the overhead for area and power is negligible.
“It’s tiny and even in smaller devices, a [properly implemented] root-of-trust is going to be much smaller than an application processor or another piece of complex IP,” he explains. “So, the cost-benefit ratio is actually pretty good. You’re getting a lot of security value from a relatively small piece of hardware and it’s important that it’s small. If you have a big complex root-of-trust, you’re building in – potentially – a lot of security vulnerabilities.”
Time to market, says Levine, is another important factor to consider when designing and implementing a security processor.
“Developing secure hardware is not as easy and straightforward as it might look when people first start thinking about it. You really need to know what you’re doing. [Essentially], starting a new secure core from scratch means learning from a lot of mistakes, and that takes a lot of time,” Levine explains. “Our recommendation is that you [obtain] a root-of-trust from someone who has a [positive] track record. [Buy] something that is available off-the-shelf, so you can just easily plug [it] into your design and take advantage of its security [capabilities]. That’s the best way to get to market with something that will really be secure.”
In addition, says Levine, a security processor should be “smart,” and not just programmed to passively encrypt and decrypt data. Rather, a security processor should have a certain degree of programmability and the capability to detect anomalous behavior within an SoC. Moreover, security hardware should be upgradable, programmable and adaptive, as threat models and cyber-attack vectors rarely remain static.
“A root-of-trust should be designed in such a way that it can be updated securely. Once you have that, a root-of-trust can also be used to securely update other software and firmware in the system,” he concludes.
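The secure-update flow Levine concludes with can be sketched as a verification step the root-of-trust performs before any new firmware is applied. This is a simplified illustration, not a real product's protocol: the HMAC here stands in for an asymmetric signature checked against a public key anchored in ROM, the manifest layout is invented for the example, and the monotonic version comparison sketches anti-rollback protection.

```python
import hashlib
import hmac

def verify_update(trusted_key: bytes, installed_version: int,
                  image: bytes, manifest: dict) -> bool:
    """Accept a firmware image only if it is newer, intact, and authentic.

    manifest is a hypothetical structure:
      {"version": int, "digest": hex string, "tag": bytes}
    """
    # Anti-rollback: never accept a version at or below the installed one.
    if manifest["version"] <= installed_version:
        raise ValueError("rollback rejected")
    # Integrity: the image must match the digest the signer committed to.
    if hashlib.sha256(image).hexdigest() != manifest["digest"]:
        raise ValueError("image digest mismatch")
    # Authenticity: check the tag over digest + version with the trusted key
    # (a real root-of-trust would verify an asymmetric signature here).
    msg = manifest["digest"].encode() + manifest["version"].to_bytes(4, "big")
    expected = hmac.new(trusted_key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(manifest["tag"], expected):
        raise ValueError("bad signature")
    return True
```

Signing both the digest and the version number is what lets the same mechanism block tampered images and downgrades to older, vulnerable firmware.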
Interested in learning more about securing complex CPUs, SoCs and embedded systems? You can read part one of this blog series here and download our Rambus CryptoManager Root of Trust white paper here.