AI can now defend itself against malicious messages hidden in speech

Computer scientists have found ways to thwart attacks that trick AI systems into treating malicious audio as safe.

May 17, 2019

Computer scientists have devised a way of making computer speech recognition safer from malicious attacks — messages that sound benign to human ears but hide commands that can hijack a device, for example through the virtual personal assistants that are becoming widespread in homes or on mobile phones.

But deep-learning algorithms — artificial intelligence (AI) programs inspired by the brain that find patterns in data without needing explicit instruction — often work in mysterious ways. And this unpredictability opens them up to exploitation.

As a result, the patterns that AI uses to, say, recognize images might not be the ones humans use. Researchers have been able to alter images and other inputs so subtly that people see no difference, yet computers respond to them differently. In one example, scientists produced glasses that made facial-recognition software misidentify the wearer as actress Milla Jovovich. These inputs are called adversarial examples.
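The article does not describe how such inputs are constructed; a common textbook illustration, sketched below under that assumption (it is not necessarily the method the researchers studied), is a fast-gradient-sign-style perturbation of a toy logistic-regression classifier. Each input feature is nudged by a tiny amount in the direction that most increases the model's error, leaving the input almost unchanged while flipping the prediction.

```python
# Hedged sketch (an assumption, not taken from the article): a fast-gradient-
# sign-style adversarial perturbation against a toy logistic-regression model.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": fixed linear classifier, p(benign | x) = sigmoid(w.x + b).
dim = 100
w = rng.normal(size=dim)
b = 0.0

def prob_benign(x):
    """Model's probability that input x is benign."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model labels "benign" (probability above 0.5).
x_clean = 0.012 * np.sign(w)
print(f"clean input:       p(benign) = {prob_benign(x_clean):.2f}")

# FGSM-style step: move each feature by epsilon along the sign of the
# gradient of the loss on the true ("benign") label. For this linear model
# that gradient points along -w, so the step is -epsilon * sign(w).
epsilon = 0.02                              # tiny per-feature change
x_adv = x_clean - epsilon * np.sign(w)

print(f"adversarial input: p(benign) = {prob_benign(x_adv):.2f}")
print(f"largest per-feature change:  {np.max(np.abs(x_adv - x_clean)):.3f}")
```

No single feature changes by more than 0.02, yet the predicted label flips; audio attacks exploit the same idea with perturbations too quiet or too structured for listeners to notice.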

By: Matthew Hutson/Nature News
