Written by Scott Best for the Rambus Blog
In the first of this three-part blog series, we define anti-tamper technologies, describe the low-cost attacks that target security chips, and review some of the countermeasures that are effective against them.
It is important to understand that the term “anti-tamper” means many different things to many different people. In this series, we use the term to describe a set of countermeasures designed to thwart an adversary’s attempt to monitor and/or affect the correct operation of a security chip. Put simply, anti-tamper is what makes a security chip a security chip: a chip that runs cryptographic algorithms but lacks anti-tamper protection is not really a security chip.
It should also be noted that anti-tamper protections can be inherited from one part of the chip to another. This means certain countermeasures implemented at the chip level can also protect algorithms running in other parts of the chip. Anti-tamper protection can thus be algorithmic within a single circuit, or more system-wide, protecting the entire chip simultaneously.
Your adversary’s capabilities include not just their technical sophistication, but also how much money and time they have to break into your system. Some of the easier attacks can be executed by just about anyone, from high-school hackers to individuals testing the security of your system. More sophisticated adversaries can be found at universities and even at some national labs; these adversaries will probe your security with more advanced methods to extract secret information. The most challenging attacks originate from national labs and state-funded actors, who have access to the most expensive and sophisticated cryptographic analysis attacks, semi-invasive attacks, and fully invasive attacks to weaken the security of your chip.
It is important to emphasize that a security chip needs to take all countermeasures into account. It does not make sense to protect against only the most sophisticated attacks while leaving yourself exposed to the simplest ones.
In this first part, we take a closer look at low-cost attacks. Most adversaries use low-cost attacks to probe your security chip and gauge how resistant and well-designed it is. These include protocol and software attacks, brute-force glitch attacks, and simple environmental attacks.
Protocol and Software Attacks
Protocol and software attacks target a chip’s protocol and the software that operates within the chip. This category covers a wide spectrum of actions your adversary will attempt, but in general, they will try to use your chip in a way it was not intended. For example, if the adversary is going to attack the protocol, they might try to issue commands that are not supported by the normal protocol. They may record an actual, authentic transaction and try to replay it later to see if they can cause authentic behavior from inauthentic traffic. A man-in-the-middle attack, for example, is a way of breaking the security between two chips that think they are communicating in secret but are not. Attackers may also attempt to compromise the software environment; there are some very well-known attacks in this domain, such as buffer-overflow attacks and malicious software injection.
As such, a silicon designer must assume that an adversary is going to attack the protocol. You therefore need to define a small, tight set of valid commands; essentially, you want to make the protocol conceptually unfriendly for an adversary to work with. Another technique for mitigating these attacks – especially for a chip in a communications link – is mutual authentication, which ensures that both sides verify each other. As well, a random nonce used in the verification process is a good way to mitigate replay attacks.
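The nonce-based countermeasure can be sketched in a few lines. The following is an illustrative Python model, not any real chip's protocol: the key, command names, and HMAC-based response are all assumptions chosen to show why a recorded transaction cannot be replayed.

```python
import hmac, hashlib, secrets

# Hypothetical shared device key; a real chip would hold this in
# protected nonvolatile storage, never in host software.
DEVICE_KEY = secrets.token_bytes(32)

def make_challenge() -> bytes:
    """The verifier picks a fresh random nonce for every transaction."""
    return secrets.token_bytes(16)

def device_respond(key: bytes, nonce: bytes, command: bytes) -> bytes:
    """The device binds its response to this nonce and this command."""
    return hmac.new(key, nonce + command, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, command: bytes, response: bytes) -> bool:
    expected = hmac.new(key, nonce + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A recorded (nonce, response) pair is useless later: the verifier
# issues a different nonce, so the replayed response fails the check.
nonce1 = make_challenge()
resp1 = device_respond(DEVICE_KEY, nonce1, b"READ_STATUS")
assert verify(DEVICE_KEY, nonce1, b"READ_STATUS", resp1)

nonce2 = make_challenge()
assert not verify(DEVICE_KEY, nonce2, b"READ_STATUS", resp1)  # replay rejected
```

Because the response is computed over the fresh nonce, authentic traffic captured yesterday cannot produce authentic behavior today.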
In addition, all software that executes inside a secure chip must be treated as suspect, so an immutable hardware layer is the best design practice: it checks that even the highest-privileged software executing in the chip is executing correctly. Another good design technique is to require that all code running in the chip be cryptographically signed, with permissions assigned based on those signatures. This makes it almost impossible for an adversary to duplicate permissions and execute code at a privilege level they are not authorized to access.
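The key point is that the privilege level must be bound into the signature itself, so a valid low-privilege image cannot be re-labeled as high-privilege. Here is a minimal sketch of that binding; it uses an HMAC purely for illustration, whereas real chips use asymmetric signatures so the boot ROM needs only a public key. All names and values are assumptions.

```python
import hmac, hashlib, secrets

# Hypothetical signing key. In a MAC-based sketch the verifier holds the
# same key; a real design would store only a public verification key in
# immutable boot ROM.
SIGNING_KEY = secrets.token_bytes(32)

def sign_image(key: bytes, image: bytes, privilege: str) -> bytes:
    # The privilege level is mixed into the signed message, so the tag
    # authorizes this image at this privilege level and no other.
    return hmac.new(key, privilege.encode() + b"\x00" + image,
                    hashlib.sha256).digest()

def load_image(key: bytes, image: bytes, privilege: str, tag: bytes) -> bool:
    # The immutable hardware layer refuses to run anything whose tag
    # does not match the claimed privilege level.
    expected = sign_image(key, image, privilege)
    return hmac.compare_digest(expected, tag)

firmware = b"\x90\x90\x90\x90"                    # stand-in code image
tag = sign_image(SIGNING_KEY, firmware, "user")

assert load_image(SIGNING_KEY, firmware, "user", tag)
assert not load_image(SIGNING_KEY, firmware, "supervisor", tag)   # escalation rejected
assert not load_image(SIGNING_KEY, firmware + b"\xff", "user", tag)  # tampering rejected
```

Both a modified image and an attempted privilege escalation fail verification, because each changes the message that was signed.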
Brute Force Glitch Attacks
Glitch injection is a brute-force attack in which an adversary creates a significant amount of noise in the system or on your chip to cause the chip to behave in an unusual way. This can be done by simply shorting or zapping the chip’s power supply, often just by taking a paperclip and shorting some of the power supplies to ground. It is impossible to predict where any errors might appear in your chip when this is done. However, what your adversary is hoping is that these glitches or bit flips will appear within a lifecycle control circuit inside the chip.
Tarnovsky, Chris. (2010, July 28). Semiconductor Security Awareness Today & Yesterday at Blackhat 2010. Retrieved from https://www.youtube.com/watch?v=WXX00tRKOlw
A lifecycle control is how a security chip distinguishes between its insecure manufacturing state and its highly secure in-field state. When a chip boots, when it is first powered up, an adversary will attempt to glitch it, trying to confuse the chip into believing it is ‘waking up’ for the first time in an insecure manufacturing state. In this state, nonvolatile memory contents can be unloaded directly, and scan chains might be re-enabled. Put simply, a maliciously induced insecure manufacturing state makes the chip highly vulnerable to attackers.
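One common hardening idea is to avoid storing the lifecycle state as a single bit that a glitch could flip. The following Python model sketches that idea: each state is a sparse codeword with large Hamming distance from the others, and any non-matching word decodes to the most restrictive state. The codeword values and state names are illustrative, not from any real chip.

```python
# Encode lifecycle states as 32-bit codewords chosen to differ in many
# bit positions, so random glitch-induced bit flips are far more likely
# to produce an invalid word than a different valid state.
MANUFACTURING = 0x5A5A_A5A5   # illustrative values only
IN_FIELD      = 0xC33C_3CC3
TERMINATED    = 0x0FF0_F00F

def decode_lifecycle(word: int) -> str:
    # Exact-match decode; anything else is treated as tampering and
    # falls through to the most restrictive state, never the least.
    if word == MANUFACTURING:
        return "manufacturing"
    if word == IN_FIELD:
        return "in_field"
    return "terminated"   # fail secure

assert decode_lifecycle(IN_FIELD) == "in_field"
# A single flipped bit no longer matches any codeword, so the chip
# fails secure instead of reverting to the manufacturing state:
assert decode_lifecycle(IN_FIELD ^ 0x0000_0001) == "terminated"
```

The essential design choice is the default branch: a glitched lifecycle word must land the chip in its most secure state, never its least.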
The countermeasures for glitch injection are usually chip-level protections. This means the entire chip is protected against glitching, for example, with on-chip regulators that create internal-only voltages used to power the logic that implements lifecycle controls. Glitch attacks are also related to fault injection, which is a much more surgical attack; in contrast, glitch attacks target the entire chip at once and are considered very heavy-handed.
Simple Environmental Attacks
Every chip in a system is designed to operate within a range of voltages and temperatures. An adversary who takes control of a system can raise or lower the voltage, or raise or lower the temperature, forcing the chip to operate in an environment it was not designed for. This technique is similar to the glitch attack we described earlier: the intent of an environmental attack is to cause the chip to malfunction while booting, so the chip ‘wakes up’ in an insecure manufacturing state rather than a secured in-field state.
The countermeasures for these attacks are quite similar to those used against glitch injection. Such countermeasures are usually provided at the chip level by various sensors and alarms that monitor the external voltages applied to the chip, as well as its ambient operating temperature. Another countermeasure for environmental attacks can be found in ‘first to fail’ circuits, which are built with the smallest design margin on the chip. After a secure computation is complete, you can check the output of these ‘first to fail’ circuits to verify that they operated correctly; if they did, the secure circuits, which have a larger design margin, have also completed correctly.
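The ‘first to fail’ check can be sketched as a canary computation with a precomputed known-good answer that is verified before any secure result is released. This Python model is purely illustrative; the canary function, its constants, and the stand-in secure operation are all assumptions.

```python
# Sketch of the 'first to fail' idea: a canary computation with a known
# answer runs alongside the secure computation. Because the canary path
# has the tightest margin, an out-of-spec voltage or temperature should
# break it before it corrupts the real logic.
CANARY_INPUT = 0x1234
CANARY_EXPECTED = 0x1234 * 3 + 7   # precomputed known-good answer

def canary_compute(x: int) -> int:
    # Stand-in for the minimal-margin hardware path.
    return x * 3 + 7

def secure_operation(data: bytes) -> bytes:
    # Stand-in secure computation (a trivial XOR whitening here).
    result = bytes(b ^ 0x5A for b in data)
    # Release the result only if the canary still computes correctly;
    # otherwise assume the environment was pushed out of spec.
    if canary_compute(CANARY_INPUT) != CANARY_EXPECTED:
        raise RuntimeError("environmental tamper suspected; result discarded")
    return result

assert secure_operation(b"\x00\x5a") == b"\x5a\x00"
```

The design choice worth noting is that the canary is checked after the secure computation completes, so a transient environmental excursion during the computation window is caught before the output leaves the chip.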