Researchers Create ‘Psychopath AI’ Using Violent Images Online

"It's a compelling idea", says the website, and shows that "an algorithm is only as good as the people, and indeed the data, that have taught it". Norman was trained to perform image captioning, however instead of learning on a standard image captioning dataset, Norman was trained on an unnamed subreddit that is dedicated to document and observe death. Because of technical and ethical concerns, the team at MIT used captions and not actual images of people dying. They're calling it the world's first psychopath AI. When asked about why create "psycho bot" when we have enough human serial killers and psychos, the researchers explained, "The data used to teach a machine-learning algorithm can considerably influence its behavior".

They then had Norman and a standard image-captioning AI respond to a series of Rorschach inkblots and compared the results.

Where the standard algorithms saw flowers and wedding cakes in the inkblots, Norman saw shooting fatalities and car deaths, CNN wrote. The Rorschach test has its doubters as to whether it is a valid measure of a person's psychological state, but Norman's responses don't need a test to be labelled creepy. As The Verge notes, Norman is only the extreme version of something that could have equally horrifying effects and is much easier to imagine happening: "What if you're not white and a piece of software predicts you'll commit a crime because of that?" The researchers describe Norman as a case study in how an AI can turn bad when its machine-learning algorithms are fed this kind of data.

For one image, a standard AI sees "a couple of people standing next to each other", while Norman sees "pregnant woman falls at construction story". Where the standard AI saw an opened umbrella, Norman interpreted the image as a man being shot in front of his screaming wife. Norman is not the first AI to be corrupted by its inputs: about two years ago, Twitter users taught Microsoft's "Tay" chatbot to be racist.

That chatbot, which was created to talk like a teenage girl, quickly turned into "an evil Hitler-loving" and "incestual sex-promoting" robot, prompting Microsoft to pull the plug on the project, says The Daily Telegraph.