Hearing voices: Researchers show how Siri and Alexa could be manipulated

Now, two of those Berkeley students have published a paper showing that they can hide such commands in recordings of music or even human speech. With them, they can have digital assistants unlock the doors of smart homes, transfer money through banking apps and purchase items from online retailers, all without the user knowing what is happening. Virtual assistants such as Alexa, Google Assistant and Siri are not safe from commands that slip past, unheard by the human ear.
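
The technique behind those hidden commands is gradient-based: the attacker searches for a tiny change to the audio that flips the machine's transcription while staying too quiet for a listener to notice. The sketch below is only a rough illustration of that idea, not the researchers' code; the stand-in model (ToyASR), the sizes and the penalty weight are all assumptions made up for illustration.

    # Rough sketch of a "hidden command" audio attack: optimize a small
    # perturbation delta so a differentiable speech-to-text model decodes an
    # attacker-chosen label sequence, while a penalty keeps delta quiet.
    # The model here is a tiny hypothetical stand-in, NOT a real assistant.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    VOCAB = 28          # 26 letters + space + CTC "blank"
    SAMPLES = 16000     # one second of 16 kHz audio
    FRAMES = 100        # time steps emitted by the toy model

    class ToyASR(nn.Module):
        """Stand-in for a differentiable speech-to-text model."""
        def __init__(self):
            super().__init__()
            self.proj = nn.Linear(SAMPLES // FRAMES, VOCAB)

        def forward(self, wav):                       # wav: (batch, SAMPLES)
            frames = wav.view(wav.size(0), FRAMES, -1)
            return self.proj(frames).log_softmax(-1)  # (batch, FRAMES, VOCAB)

    model = ToyASR()
    original = torch.rand(1, SAMPLES) * 2 - 1         # the benign recording
    target = torch.randint(1, VOCAB, (1, 12))         # attacker's target phrase (label ids)

    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    for step in range(300):
        adv = (original + delta).clamp(-1.0, 1.0)     # perturbed audio, kept in valid range
        log_probs = model(adv).permute(1, 0, 2)       # CTCLoss expects (time, batch, vocab)
        loss = ctc(log_probs,
                   target,
                   torch.full((1,), FRAMES, dtype=torch.long),
                   torch.full((1,), target.size(1), dtype=torch.long))
        loss = loss + 10.0 * delta.abs().mean()       # penalty keeps the change quiet
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("perturbation max amplitude:", delta.abs().max().item())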

"My assumption is that the malicious people already employ people to do what I do", Carlini told the Times, with the paper adding that, "he was confident that in time he and his colleagues could mount successful adversarial attacks against any smart device system on the market".

A series of studies has shown that it's possible to give silent commands to voice assistants like Amazon Alexa and Google Assistant without their owners ever knowing. Can anything be done about it? Good question. None of the companies we've talked to have denied that attacks like these are possible, and none of them have offered up any specific solutions that would seem capable of stopping them from working. Speech-recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases.
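
To make that last point concrete, here is a minimal sketch of the sound-to-letter step, using made-up per-frame scores and a simple CTC-style greedy decoder; production systems are far more sophisticated, but the letters-into-words flow is the same.

    # Sketch of the "sound to letter" step: many recognizers emit a per-frame
    # probability over characters, then collapse repeats and drop the special
    # "blank" symbol to form words. The scores below are made up.
    import numpy as np

    ALPHABET = ["-"] + list("abcdefghijklmnopqrstuvwxyz ")  # "-" is the CTC blank

    def greedy_decode(frame_probs: np.ndarray) -> str:
        """frame_probs: (num_frames, len(ALPHABET)) matrix of per-frame scores."""
        best = frame_probs.argmax(axis=1)          # most likely symbol per audio frame
        chars = []
        prev = None
        for idx in best:
            if idx != prev and ALPHABET[idx] != "-":   # collapse repeats, drop blanks
                chars.append(ALPHABET[idx])
            prev = idx
        return "".join(chars)

    # Fake frame scores that spell "hi", a space, then "ok".
    rng = np.random.default_rng(0)
    frames = rng.random((8, len(ALPHABET))) * 0.1
    for t, ch in enumerate(["h", "h", "i", "-", " ", "o", "k", "-"]):
        frames[t, ALPHABET.index(ch)] = 1.0

    print(greedy_decode(frames))   # -> "hi ok"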

You should read the full Times article for a deeper dive into the world of AI, speech recognition and modern hacking techniques.

According to the coverage in the Times, both Amazon and Google have taken measures to protect their smart speakers and voice assistants against such manipulations.

The idea that voice assistants can be exploited or fooled is by no means new, and plenty of stories have surfaced describing potential and hypothetical exploits of typical at-home assistant devices.

Apple also said its HomePod smart speaker isn't able to perform certain commands, such as unlocking doors.

For now, there are no American rules against broadcasting subliminal messages or hidden commands to humans or machines.

Last year, researchers at Princeton University and China's Zhejiang University demonstrated that voice-recognition systems could be activated using frequencies inaudible to the human ear, and warned that more powerful versions of the attack were possible. That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away.
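
Those inaudible-command demonstrations reportedly work by amplitude-modulating a spoken command onto an ultrasonic carrier, which a microphone's own non-linearities then demodulate back into the audible band. The sketch below illustrates only the modulation step; the carrier frequency, sample rate and synthetic "command" are placeholder values, and the output will not trigger any real device.

    # Illustration of the signal-processing idea behind "inaudible" commands:
    # amplitude-modulate a baseband signal onto an ultrasonic carrier.
    import numpy as np
    from scipy.io import wavfile

    SAMPLE_RATE = 192_000          # high rate so a 25 kHz carrier is representable
    CARRIER_HZ = 25_000            # above the ~20 kHz limit of human hearing
    DURATION_S = 1.0

    t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

    # Stand-in for a recorded voice command: a low-frequency test tone.
    command = 0.5 * np.sin(2 * np.pi * 400 * t)

    # Classic AM: (1 + m * x(t)) * cos(2*pi*fc*t), with modulation depth m.
    modulated = (1.0 + 0.8 * command) * np.cos(2 * np.pi * CARRIER_HZ * t)
    modulated /= np.abs(modulated).max()           # normalize to [-1, 1]

    wavfile.write("ultrasonic_am.wav", SAMPLE_RATE, modulated.astype(np.float32))
    print("wrote ultrasonic_am.wav:", len(modulated), "samples at", SAMPLE_RATE, "Hz")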

"Companies have to ensure user-friendliness of their devices, because thats their major selling point", said Tavish Vaidya, a researcher at Georgetown. This isn't the first time we've seen smart devices go haywire due to ambient factors. "We want to demonstrate that it's possible", he said, "and then hope that other people will say, 'O.K. this is possible, now let's try and fix it'".
