Manually typing a search into your phone? Using an app to buy pizza? That’s so old school! In today’s digitally connected world, the dream of the future has arrived: the ability to say what you want aloud and have your will done, as if by magic. It’s Jetsons-level convenience, and it saves you the frustration of tapping at a tiny screen all day.

And yet that vision of progress comes at a price: risks to your home’s security and privacy. Certain types of home automation, like a smart HVAC unit or thermostat, are relatively benign. Others, like voice assistants, the “helper” software inside devices like Amazon’s Echo and Google Home, can be manipulated and exploited toward darker ends. In fact, vulnerabilities in these and other IoT devices make them fairly easy for anyone with the know-how to hack. And since they sit in your home, exposed daily to extremely private details, they can deliver painfully sensitive information right into criminals’ laps, unless you take some precautions to protect yourself. Here are some of the ways hackers can exploit voice recognition software, and what you can do to protect yourself.


Voice Assistants Undone by Scrambled Speech Vulnerabilities

Not many of us would accuse our devices of being “too good” at recognizing our commands. But that’s precisely what a particularly chilling security study found, and the results could change how you think about voice recognition. Researchers at UC Berkeley and Georgetown University wanted to measure voice assistants’ capacity to recognize distorted speech, as compared with our own human abilities. To do so, they “hid” commands in obfuscated speech and measured how well machines and humans each understood them.

What they found opened up frightening potential for home device hacking. While humans tended to better understand common or highly conditioned phrases, like “call 911,” the voice assistant (in this case Google Assistant) was far more likely to recognize distorted commands than the human subjects were. Google Assistant responded correctly 95% of the time, whereas the Amazon Mechanical Turk workers, the human control group, managed to make out the garbled speech in only about 22% of cases. In essence, anyone within speaking range of your assistant could launch a “black box” attack, exploiting both our own inability to understand the distorted audio and our assistants’ willingness to comply.
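
To make the trick more concrete, here is a toy Python sketch of the kind of distortion involved. It is purely illustrative: the file name, parameters, and distortion recipe are assumptions on my part, and the researchers’ actual obfuscation was tuned far more carefully against the target recognizer.

    # Toy illustration of the obfuscation idea: degrade a recorded command
    # until humans struggle to parse it, while much of the spectral content
    # a recognizer keys on survives. File name and parameters are invented.
    import numpy as np
    from scipy.io import wavfile

    rate, voice = wavfile.read("ok_google_command.wav")  # hypothetical clip
    voice = voice.astype(np.float64)

    # Time-compress: keep every other sample, so playback at the original
    # rate sounds twice as fast and an octave higher.
    garbled = voice[::2]

    # Mask the phonetic cues humans rely on with broadband noise.
    noise = np.random.normal(0.0, 0.05 * np.abs(garbled).max(), garbled.shape)
    garbled = garbled + noise

    # An attacker would iterate on steps like these until the target model
    # still transcribes the command but human listeners no longer can.
    garbled = np.clip(garbled, -32768, 32767).astype(np.int16)
    wavfile.write("garbled_command.wav", rate, garbled)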

To thwart these kinds of hacks, voice recognition software will have to become even more discerning, capable of distinguishing not just commands but individual human cadences and speech patterns (or to lean on facial recognition, as Apple is rumored to be planning for its next devices). In some sense, the machine learning features in your phone and devices already do this to a limited degree. But until voice assistants become robust enough to head off black box attacks like these, we’re all living with a potentially exploitable vulnerability in our homes.


Privacy Concerns Fuel Debates Over Voice Recognition Devices

On the privacy side, almost everyone agrees that technology giants like Apple and Google are big enough to be potentially dangerous. Everyone from Steve Bannon to the New York Times has suggested regulating these companies, so the concern crosses ideological lines. One reason for that growing anxiety is these companies’ control over personal data.

Voice recognition devices like Google Home and Amazon Echo, which witness our most intimate moments, have virtually unfettered access to our personal lives. These devices are always on call, listening for the trigger that will send them into action. Once activated with a “wake word,” they begin recording in order to analyze your command and react with the correct response. The recorded voice logs are saved in your account history, which, as a privacy best practice, you should delete periodically.
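
For the technically curious, here is a rough Python sketch of that listen-and-record loop. Every name in it is invented, and real devices match the wake word acoustically on dedicated on-device hardware and stream raw audio rather than text, but the control flow is broadly similar, and it shows why snippets of your speech end up stored in your account history.

    # Simplified sketch of a smart speaker's "always listening" loop.
    # All names are hypothetical; this is a sketch of the control flow,
    # not any vendor's actual firmware.
    from collections import deque

    WAKE_WORD = "alexa"        # assumed wake word for this illustration
    PRE_ROLL_FRAMES = 8        # small rolling buffer kept before activation

    def upload_to_cloud(frames):
        # Stand-in for the network call; this is what lands in your voice log.
        print("uploaded:", " ".join(frames))

    def listen_forever(frame_source):
        """frame_source yields short audio frames (strings, in this sketch)."""
        pre_roll = deque(maxlen=PRE_ROLL_FRAMES)  # constantly overwritten
        recording, active = [], False
        for frame in frame_source:
            if not active:
                pre_roll.append(frame)            # idle: audio stays local
                if WAKE_WORD in frame.lower():
                    active = True
                    recording = list(pre_roll)    # pre-roll gets uploaded too
            else:
                recording.append(frame)
                if frame == "<silence>":          # crude end-of-speech check
                    upload_to_cloud(recording)
                    recording, active = [], False

Feeding it a toy stream like listen_forever(iter(["hum", "alexa play jazz", "<silence>"])) prints one uploaded log, which is exactly the kind of snippet that accumulates in your account history.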

However, while devices are supposed to record only the snippets needed for analysis, there’s no real way to guarantee the recordings won’t be put to devious ends. In the wrong hands, a hacker or a rogue employee could listen in on your conversations and harvest sensitive information to exploit. Meanwhile, third-party devices that you connect to your Echo or Google Home may create further vulnerabilities, exposing details of your life you’d rather keep private.

That may seem like a big “what if?”, but the dangers posed by the misuse of connected devices are quite real. Just last fall, in fact, cybercriminals used IoT malware (the Mirai botnet) to launch a large-scale denial-of-service attack that knocked out sites like Twitter, PayPal, Netflix, and Reddit. While that attack targeted Dyn, a major DNS provider behind those sites, rather than the sites themselves, it shows how powerful a coordinated IoT assault can be. Experts roundly agree that smart devices contain multiple exploitable vulnerabilities, making them as much a technological threat as a convenience.

Of course, you can’t exactly live like an ascetic, either. Even if you hold out on buying one for a while, there will likely come a time when voice recognition devices are accepted as a home staple. Your best bet, then, is to protect yourself now, using the following precautions:

  • If you have an Echo, log into your Amazon account regularly and delete the recorded command logs.

  • Use two-step verification to log into your Amazon and Google accounts. (The sketch after this list shows what these one-time codes are doing under the hood.)

  • Avoid third-party devices that ship with a default manufacturer’s password, or if you must purchase one, make absolutely sure to change that password before you begin using the device.

  • Mute your Amazon Echo or Google Home devices when you’re not using them, particularly if you’re discussing sensitive data, such as when you’re reading a credit card number over the phone or having a private conversation.

  • Keep sensitive applications—like your banking apps—separate from your Google and Amazon logins.
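
If you’re wondering what that two-step verification actually buys you, here is a minimal Python sketch using the open-source pyotp library. The secret below is generated on the spot for illustration; in practice, Amazon or Google creates it and hands it to your phone as a QR code during enrollment.

    # Minimal sketch of time-based one-time passwords (TOTP), the math
    # behind most two-step verification apps, using the pyotp library.
    import pyotp

    secret = pyotp.random_base32()   # shared once between server and phone
    totp = pyotp.TOTP(secret)

    code = totp.now()                # the 6-digit code your phone displays
    print("current code:", code)

    # The server runs the same computation, so a match proves you hold the
    # phone; a stolen password alone no longer unlocks the account.
    print("valid right now?", totp.verify(code))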

You may not prevent every kind of threat, but even these cursory safeguards will go a long way toward keeping your data, and your private life, between you and your intended audience.