Researchers at MIT have created a way to determine the robustness of machine learning (ML) models for various tasks. They do so by detecting when those models make mistakes they're not supposed to make.
Rob Matheson of the MIT News Office writes about that development in ECN.
A good part of Matheson's piece gives the reader a basic tutorial on the lingo that AI and machine learning present to the average person. For example, he describes convolutional neural networks (CNNs) as "designed to process and classify images for computer vision and many other tasks."
But tiny modifications the human eye cannot see can cause a CNN to produce a completely different classification, according to Matheson. Such modified inputs are what researchers call "adversarial examples," and they are what the MIT researchers zero in on. By studying the effects of adversarial examples on neural networks, the researchers can determine how their models could be vulnerable to unexpected inputs in the real world.
Matheson explains that CNNs process images through many computational layers containing units called neurons. For CNNs that classify images, the final layer consists of one neuron for each category. The CNN classifies an image based on the neuron with the highest output value.
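That final step is just a comparison of neuron values. As a minimal sketch with made-up output numbers (not values from any real network), it might look like this:

```python
import numpy as np

# Hypothetical final-layer outputs for one image: one neuron value per category.
categories = ["cat", "dog"]
final_layer_outputs = np.array([2.7, 1.1])   # made-up numbers, not a real model

# The image is assigned to whichever category's neuron has the highest value.
print(categories[int(np.argmax(final_layer_outputs))])   # -> "cat"
```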
But he adds, "Consider a CNN designed to classify images into two categories: 'cat' or 'dog.' If it processes an image of a cat, the value for the 'cat' classification neuron should be higher. An adversarial example occurs when a tiny modification to that image causes the 'dog' classification neuron's value to be higher."
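To make the "tiny modification" concrete, here is a toy sketch rather than a real CNN: a hand-made two-class linear scorer over four "pixels," with invented numbers chosen so that a per-pixel nudge of just 0.01 flips the winning neuron from "cat" to "dog":

```python
import numpy as np

# Toy illustration, not a real CNN: a hand-made two-class linear scorer over
# four "pixels." The numbers are invented so the decision sits on a knife edge.
W = np.array([[1.0, 0.0, 1.0, 0.0],   # weights feeding the "cat" neuron
              [0.0, 1.0, 0.0, 1.0]])  # weights feeding the "dog" neuron
x = np.array([0.51, 0.50, 0.50, 0.49])
classes = ["cat", "dog"]

print("original image: ", classes[int(np.argmax(W @ x))])   # -> cat (1.01 vs 0.99)

# Nudge every pixel by just 0.01 in the direction that favors the "dog" neuron.
delta = 0.01 * np.sign(W[1] - W[0])
print("perturbed image:", classes[int(np.argmax(W @ (x + delta)))])  # -> dog
```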
The researchers’ technique checks all possible modifications to each pixel of the image. Basically, if the CNN assigns the correct classification (“cat”) to each modified image, no adversarial examples exist for that image.
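The article does not spell out how that exhaustive check is carried out on a full CNN, but the certification idea can be sketched on a toy linear scorer, where the worst case over all per-pixel modifications of size at most eps can be computed in closed form (all numbers below are made up):

```python
import numpy as np

# Toy certification sketch (made-up numbers): a hand-made two-class linear
# scorer standing in for a CNN. We ask: can ANY modification of each pixel by
# at most eps make the "dog" neuron beat the "cat" neuron?
W = np.array([[1.0, -0.5,  0.8, 0.2],   # weights feeding the "cat" neuron
              [0.3,  0.4, -0.6, 0.9]])  # weights feeding the "dog" neuron
x = np.array([0.9, 0.1, 0.8, 0.2])      # a 4-"pixel" image, classified "cat"
eps = 0.1                                # per-pixel modification budget

cat, dog = 0, 1
margin = (W[cat] - W[dog]) @ x           # cat score minus dog score: 1.52 here

# For a linear scorer, the most any bounded modification can shrink that margin
# is eps times the L1 norm of the weight difference (each pixel pushed the
# worst way), so the check over "all possible modifications" collapses to:
worst_case_margin = margin - eps * np.abs(W[cat] - W[dog]).sum()

if worst_case_margin > 0:
    print(f"verified: no adversarial example within eps = {eps}")
else:
    print("cannot verify: some modification might flip 'cat' to 'dog'")
```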
As part of this article, Matheson reports that behind the technique is a modified version of "mixed-integer programming," an optimization method in which some of the variables are restricted to be integers. Essentially, mixed-integer programming is used to find the maximum of some objective function, given certain constraints on the variables, and the researchers' modified version is designed to scale efficiently to evaluating the robustness of complex neural networks.
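For readers unfamiliar with the term, here is a generic, minimal example of a mixed-integer program, unrelated to the researchers' actual formulation: maximize an objective subject to linear constraints, with one variable forced to take integer values (using SciPy's milp solver):

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Maximize 3*x0 + 2*x1  subject to  2*x0 + x1 <= 10  and  x0 + 3*x1 <= 12,
# with x0 restricted to integer values and both variables non-negative.
# milp() minimizes, so the objective is negated to perform a maximization.
c = np.array([-3.0, -2.0])
constraints = LinearConstraint(np.array([[2.0, 1.0],
                                         [1.0, 3.0]]), ub=[10.0, 12.0])
integrality = np.array([1, 0])    # 1 = integer variable, 0 = continuous
bounds = Bounds(lb=[0.0, 0.0])

result = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print("optimum:", result.x, "objective value:", -result.fun)  # x0 = 4, x1 = 2 -> 16
```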
One MIT researcher is quoted in the article as saying, “Across different networks designed for different tasks, it’s important for CNNs to be robust against adversarial examples. The larger the fraction of test samples where we can prove that no adversarial example exists, the better the network should perform when exposed to perturbed inputs.”
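In other words, the reported metric is simply the share of test images for which the check succeeds. A trivial sketch, assuming the per-image verification results are collected as booleans:

```python
# Hypothetical verification outcomes for five test images (True = proven safe).
verified = [True, True, False, True, True]
certified_fraction = sum(verified) / len(verified)
print(f"{certified_fraction:.0%} of test samples have no adversarial example")  # 80%
```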