Computers, like people, understand what they see in the world based on what they've seen before.
And computer brains have become really, really good at identifying all kinds of things. Machines can recognize faces, read handwriting, interpret EKGs, even describe what's happening in a photograph. But that doesn't mean that computers see all those things the same way that people do.
This might sound like a throwaway distinction. If everybody—computers and humans alike—can see an image of a lion and call it a lion, what does it matter how that lion looks to the person or computer processing it? And it's true that ending up at the same place can be more useful than tracing how you got there. But to a hacker hoping to exploit an automated system, understanding an artificial brain's way of seeing could be a way in.
A team of computer scientists from the University of Wyoming and Cornell University recently figured out how to create a whole class of images that appear meaningful to computers but look like TV static or glitch art to the human eye. "It is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art [deep neural networks] believe to be recognizable objects," they wrote in a paper currently under peer review and posted to arXiv, where scientists share preprints while their papers are being reviewed.
And not only do computers recognize signals in the noise; they do so with a huge amount of confidence. So while you see images that look like this...
...a computer's brain, or deep neural network (DNN), says it is 99 percent sure that it sees in those same images a gorilla, and an arctic fox, and a bikini, and an eel, and a backpack, and so on.
"To some extent these are optical illusions for artificial intelligence," co-author Jeff Clune told me via gchat. "Just as optical illusions exploit the particular way humans see... these images reveal aspects of how the DNNs see that [make] them vulnerable to being fooled, too. But DNN optical illusions don't fool us because our vision system is different."
Clune and his team used an algorithm to generate random images that appeared unrecognizable to humans. At first, Clune explains, the computer might be unsure about what it was seeing: "It then says, 'That doesn't look like much of anything, but if you forced me to guess, the best I see there is a lion. But it only 1 percent looks like a lion.'"
From there, the researchers would continue to randomly tweak the image's pixels—which remained unidentifiable to humans—until the computer said it could identify, with almost complete certainty, the image as a familiar object. And though the image would still appear nonsensical to the human eye, it would represent, Clune says, the Platonic form of whatever the computer sees.

And this is a key point: it's not that the computer is identifying the image incorrectly, per se; it's that a computer sees and thinks about the identifying components of any given thing differently—and more granularly—than a human does. "One way to think about it is this," Clune told me. "These DNNs are fans of cubist art. They want to see an eye, a nose, and a mouth in the image to call it a face, but they don't particularly care where those things are. The mouth can be above the eyes and to the left of the nose."
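To make the idea concrete, here is a minimal sketch, in Python, of the kind of random hill-climbing loop Clune describes: start from static, keep any pixel tweak that makes the network more confident, and stop once the confidence is nearly certain. The `dnn_confidence` function is a hypothetical stand-in for a real trained network's scoring function, and the researchers' actual method (an evolutionary algorithm, per their paper) is more elaborate than this sketch.

```python
import numpy as np

# Hypothetical stand-in: returns the network's confidence (0.0 to 1.0)
# that `image` depicts `label`. A real experiment would query a trained DNN here.
def dnn_confidence(image, label):
    raise NotImplementedError("plug in a trained classifier")

def evolve_fooling_image(label, shape=(64, 64, 3), target=0.99, max_steps=100_000):
    """Random hill climbing: keep any pixel tweak that makes the
    network more confident it sees `label`, stop near certainty."""
    rng = np.random.default_rng()
    image = rng.random(shape)                 # noise the network barely recognizes
    best = dnn_confidence(image, label)       # e.g. "it only 1 percent looks like a lion"
    for _ in range(max_steps):
        candidate = image.copy()
        y, x = rng.integers(shape[0]), rng.integers(shape[1])
        candidate[y, x] = rng.random(shape[2])  # tweak one random pixel
        score = dnn_confidence(candidate, label)
        if score > best:                        # keep only changes that help
            image, best = candidate, score
        if best >= target:                      # "almost complete certainty"
            break
    return image, best
```

Because only the classifier's score guides the search, nothing in the loop requires the result to look meaningful to a person, which is why the final image can read as pure static to us while scoring 99 percent to the network.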