Commercial drones could be turned into weapons using AI, report warns


The report was compiled by experts from some of the world's leading institutions and artificial intelligence research organisations, who say it is the first time the intersection of artificial intelligence and its potential misuse has been examined in such a way.

To meet these risks, the researchers urge scientists and engineers to consider how the misuse of their creations might be mitigated, and governments to put laws in place to protect the emerging technology from exploitation.

"Because cyber security today is largely labour-constrained, it is ripe with opportunities for automation using AI," the report notes.

According to The Daily Telegraph, cybercriminals could use the technology to scan a target's social media presence "before launching 'phishing' email attacks to steal personal data or access sensitive company information".

The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks.


The Malicious Use of Artificial Intelligence report looks at AI that is available now, or nearly so, rather than far-off-in-the-future technologies. Lifelike videos and speech impersonation could be used to target individuals, while drones could be launched to physically attack a person, the report says. "The challenge is daunting and the stakes are high," it adds.

Naming a few of the threats, Oxford research fellow Miles Brundage said: "AI will alter the landscape of risk for citizens, organizations and states - whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression - the full range of impacts on security is vast".

AI is a rapidly advancing field, and society currently seems unprepared to prevent such attacks, the report said.

Overall, the authors believe AI and cyber security will rapidly evolve in tandem in the coming years, and that a proactive effort is needed to stay ahead of motivated attackers.

A group of 26 experts including those from Oxford's Future of Humanity Institute, Cambridge's Centre For the Study of Existential Risk and OpenAI, the organisation backed by technology billionaire Elon Musk, said that malicious use of AI presented a "clear and present danger" to society that could emerge in the next decade.
