New AI can stop rogue microphones from snooping on you

Have you ever noticed that web advertisements are strikingly similar to something you’ve just discussed with friends and family? Microphones are everywhere these days, from our phones, watches, and TVs to voice assistants, and they’re constantly listening. Computers continually run your voice through neural networks and AI to learn more about you. What if you wanted to prevent this from happening?

In the past, you might have cranked up the music or turned on the faucet in the bathroom, as shown in the popular TV series “The Americans.” But what if you don’t want to shout over the music? Researchers at Columbia Engineering have created a novel system that generates whisper-quiet sounds that can be played in any room, in any scenario, to prevent smart devices from spying on you. It is also simple to deploy on devices like laptops and smartphones, giving individuals control over their voice’s privacy.

“Making it all operate quickly enough was a crucial technical problem,” said Carl Vondrick, assistant professor of computer science. “Our algorithm is the fastest and most accurate on our testbed, preventing a rogue microphone from correctly hearing your speech 80% of the time. It works even if we have no information about the rogue microphone, such as its location or the software that runs on it. It essentially camouflages a person’s voice over the air, hiding it from these listening devices without interfering with other people’s conversations in the room.”

Staying one step ahead of the conversation

While compromising automated speech recognition systems has long been known to be theoretically possible in AI, doing so quickly enough for practical applications has remained a key hurdle. The issue is that a sound that disrupts someone’s speech now, at this precise moment, is not the same sound that disrupts that speech a second later. People’s voices change constantly as they pronounce different words and speak very quickly. These changes make it extremely hard for a computer to keep up with a person’s rapid speaking rate.

“Our algorithm keeps up by anticipating the characteristics of what a person will say next, giving it enough time to generate the appropriate whisper,” said Mia Chiquier, the study’s lead author and a Ph.D. student in Vondrick’s group. “Our method works for the majority of English-language vocabulary so far, and we aim to adapt the algorithm to more languages in the future, as well as make the whisper completely imperceptible.”

Launching ‘predictive attacks’

The researchers wanted an algorithm that could break neural networks in real time, generate sound in real time, and apply to the majority of a language’s vocabulary. While some previous work had met at least one of these three conditions, none had met all three. Chiquier’s new system employs “predictive attacks”: signals that disrupt any word that automated speech recognition models have been trained to transcribe. Furthermore, when the attack sounds are played over the air, they must be loud enough to reach any rogue “listening-in” microphones that may be present, and the attack sound must cover the same duration as the speech it is masking.

The researchers’ method achieves real-time performance by forecasting an attack on the future of the signal, or word, conditioned on two seconds of input speech. The attack is calibrated to a volume comparable to normal background noise, so people in a room can converse naturally without being monitored by an automated speech recognition system. The team showed that their method works in real-world rooms with natural ambient noise and complex scene geometries.
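The streaming idea described above can be sketched in a few lines. This is a minimal illustration, not the authors’ implementation: the real system uses a trained neural network to forecast the disruptive waveform, which is stubbed out here (`predict_perturbation` emits placeholder noise), and the sample rate, chunk size, and ambient level are assumed values. What the sketch does show is the loop structure that makes real-time operation possible: the perturbation for each chunk is computed only from the previous two seconds of audio, so it can be played with no look-ahead delay, and it is scaled to match a background-noise loudness.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed audio sample rate (Hz)
CONTEXT_SECONDS = 2.0  # the system conditions on ~2 s of past speech
CHUNK = 1_024          # samples emitted per loop iteration (assumed)

def predict_perturbation(context: np.ndarray, size: int) -> np.ndarray:
    """Placeholder for the trained predictive model.

    The real system forecasts a waveform designed to disrupt speech
    recognition on the *upcoming* speech; here we emit white noise
    just so the streaming loop is runnable.
    """
    rng = np.random.default_rng(len(context))
    return rng.standard_normal(size)

def scale_to_ambient(perturbation: np.ndarray, ambient_rms: float) -> np.ndarray:
    """Scale the attack so it is no louder than typical background noise."""
    rms = np.sqrt(np.mean(perturbation ** 2)) + 1e-12
    return perturbation * (ambient_rms / rms)

def camouflage_stream(mic_chunks, ambient_rms: float = 0.01):
    """Consume microphone chunks; yield one perturbation chunk per input.

    The perturbation for time t depends only on audio heard before t,
    which is what lets the attack be played back in real time.
    """
    context = np.zeros(int(SAMPLE_RATE * CONTEXT_SECONDS))
    for chunk in mic_chunks:
        attack = predict_perturbation(context, size=len(chunk))
        yield scale_to_ambient(attack, ambient_rms)
        # slide the 2-second context window forward
        context = np.concatenate([context, chunk])[-len(context):]
```

In a real deployment the generator would be driven by an audio I/O callback, with the yielded chunks routed to a loudspeaker.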

Artificial Intelligence that is ethical

“Ethical concerns about AI technology are an important problem for many of us in the research community, but they often seem to belong to a separate thought process. It’s as if we’re so thrilled that we finally built a self-driving car that we forgot to design a steering wheel and a brake,” said Jianbo Shi, a leading machine learning researcher and professor of computer and information science at the University of Pennsylvania. “From the very beginning of the research design process, we must ‘consciously’ consider the human and societal impact of the AI technology we produce. Mia Chiquier and Carl Vondrick’s research asks, ‘How can we use AI to protect ourselves against unintended uses of AI?’ Their work inspires many of us to think along the following line: ask not what ethical AI can do for us, but what we can do for ethical AI. If we believe in this direction, ethical AI research is just as fun and creative.”
