Cryptography in the age of AI and quantum computing
This entry was posted on Thursday, April 13th, 2017.
Paul Kocher, a Rambus security technology advisor, recently sat down with Ed Sperling of Semiconductor Engineering to discuss a wide range of topics, including the evolving cryptographic landscape in the age of quantum computing and artificial intelligence (AI).
As Kocher emphasizes, cryptography is the one aspect of security that the industry still expects to function reliably.
“For the most part, it’s been able to deliver on that promise. People typically know quite a long time in advance if there are little cracks in the defenses of an algorithm,” he explained.
“Right now, one of the areas of research is building public key systems that are resistant to quantum computers, which are themselves a decade or more off in terms of actually being able to scale to the point they are a threat to our current cryptographic constructions.”
According to Kocher, the most widely used public key algorithms, including RSA, elliptic curve cryptography, and Diffie-Hellman, could all be broken if a quantum computer of sufficient capability and reliability came along.
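The reason these schemes fall is Shor's algorithm: a quantum computer can efficiently find the multiplicative order of a number modulo the RSA key, and knowing that order reveals the key's secret factors. As a rough illustration (not from the interview), the sketch below brute-forces the order classically, which is only feasible because the modulus is a toy value; the quantum speedup lies entirely in the order-finding step:

```python
from math import gcd

# Toy illustration of why quantum order-finding breaks RSA.
# Shor's algorithm factors a modulus N by finding the order r of a base a mod N.
# A quantum computer finds r efficiently; here we brute-force it, which is
# exponential in the bit length of N and only works for tiny moduli.

def multiplicative_order(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor_classical(n, a):
    """Recover a nontrivial factor of n from the order of a, as Shor's does."""
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky case: a already shares a factor with n
    r = multiplicative_order(a, n)
    if r % 2 != 0:
        return None               # need an even order; retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root of 1; retry
    return gcd(y - 1, n)

n = 15                            # toy "RSA modulus": 3 * 5
p = shor_factor_classical(n, 7)
print(p, n // p)                  # prints the two prime factors: 3 5
```

For a real 2048-bit modulus the order-finding loop would run for longer than the age of the universe, which is exactly the gap a sufficiently large quantum computer would close.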
“[However], it’s not an immediate threat, and if you look at a medical analogy, what’s causing problems today are implementation bugs. Those pose a dire and immediate threat to security,” he continued. “From a resource perspective, building resistance into products before you get the bugs out is really not a very high priority. [Again], from a research perspective, it’s a really neat set of new mathematical and engineering problems to come up with sufficient algorithms that are resistant to these hypothesized quantum computers. There are some pretty good proposals that are currently on the table that are being studied, and there will be [a] standardization process for those.”
In many ways, says Kocher, cryptography is comparable to bricks used to construct a building, although there’s obviously quite a bit more to architecture than the bricks themselves.
“You’re trying to figure out how you take algorithms and put them into protocols and solve a user’s actual security problem and how you implement those protocols in a way that’s correct,” he elaborated. “And then you have to put that correct implementation into a system in a way that the bugs and other parts of the system don’t compromise the security of the protocol. It’s an onion with many layers, and the crypto is often at the center of all of that. The algorithms themselves are in many cases a relatively trivial part of what you need to solve the ultimate business privacy problem that you are focused on.”
On the subject of AI and cybersecurity, Kocher expressed skepticism that AI would be (autonomously) tasked with recognizing complex software vulnerabilities in the near future.
“There’s an open question about whether AI can be taught to understand properties of software-hardware design and tell us useful things about them; for example, whether the design is one that might have certain categories of bugs in it. There’s an open question about how far AI can go there,” he explained. “The current AI applications tend to be ones where you’re optimizing some kind of a search space or you have a relatively straightforward set of problems with very large amounts of training data. Understanding complex logic doesn’t fit very well into that mold [and] there are clearly some advances in AI that are needed for that to happen.”
Kocher also touched on the concept of ensuring security in the context of AI self-learning.
“If you start doing things like intentionally producing things to trick it, the system can be susceptible. It will be a long time before AI can be useful in an environment, for example, where someone can manufacture an input file that must be correctly characterized or some kind of consequence occurs,” he concluded. “There will certainly be some problems there, although they’ll be small compared to a conventional computer security crisis that we’re struggling with around non-AI based systems, which will also affect AI-based systems. If you’re running on some cloud compute machine that’s compromised, it doesn’t matter whether your algorithm is AI-based or not. You’re still compromised.”
Interested in learning more? You can read the full text of “Security: Losses Outpace Gains” on Semiconductor Engineering here.