Google apologized on Monday to users of the Google Assistant voice service for the confusing way the tech giant explained how voice data was being collected and used. It also made clear that users could “opt in” to have their voice recorded and analyzed to improve the service.
Google, which makes hardware under its Made by Google brand, and sells the Google Assistant internet-connected voice service to companies like Samsung and Lenovo, announced the news in a blog post by Nino Tasca, the company’s senior product manager.
Tasca explains the new “opt-in” step this way:
“Opting in to VAA helps the Assistant better recognize your voice over time, and also helps improve the Assistant for everyone by allowing us to use small samples of audio to understand more languages and accents,” Tasca writes.
How Google Got In This Mess
On July 10, the Belgium-based public broadcasting news service VRT-NWS broke the major story that “Google employees are systematically listening to audio files recorded by Google Home smart speakers and the Google Assistant smartphone app.”
The very next day, Google published this blog post explaining that the contractors were listening to improve its ability to understand “a wide variety of languages, accents and dialects.”
Google’s Assistant was not the first A.I. caught doing this, though. Amazon’s Alexa was found to be sending audio for human analysis, Bloomberg reported. Apple’s Siri has also inadvertently shared private information with contractors, The Guardian reported this summer.
In the post on Monday, Google did not offer much explanation of how this audio archiving and transcription process happened before, but it did provide new guidelines for users who wish to opt out now.
First and foremost, Google says that your audio data won’t be saved without explicit permission, which is given by toggling on or off the [Voice & Audio Activity] feature of the Assistant. Google says that opting in will help your Assistant learn your voice and perform better over time, as well as help it learn to recognize different languages and accents — but it will also save your audio files and send them to human employees to be potentially listened to and transcribed.
However, users will be able to go into these saved recordings and delete them at any time, and Google says that only about 0.2 percent of all user audio snippets are actually used.
Google also says that it will be adding a feature letting users control how sensitive the ‘Hey, Google’ activation of their device is. The company already throws out accidental activation audio clips — e.g., if your Assistant hears something that sounds like ‘Hey, Google’ and accidentally records ambient noise without you knowing.
Finally, the company says that it will be improving the security measures already in place for audio clips that do make it through to human transcribers. Prior security steps already included disconnecting the identity of users from their audio clips. Now, Google says it’ll be “adding greater security protections to this process, including an extra layer of privacy filters.” Exactly what those measures are, though, remains unclear.
All in all, not much has changed. Google still wants to use your audio and intends to do so as long as users know it’s happening. For users who previously opted in to VAA, it says that no new audio will be collected until they have reviewed and confirmed that setting. Even if you think you didn’t opt in, it might be worth double-checking — I was surprised to find that my toggle had indeed been switched ‘on.’
Whether this move by Google actually changes the minds of skeptical consumers probably won’t be known for some time. If moves toward transparency like this work, voice A.I. services may see even greater acceptance.