The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI

AI could reboot industries and make the economy more productive; it’s already infusing many of the products we use daily. But a new report by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that the same technology creates new opportunities for criminals, political operatives, and oppressive governments—so much so that some AI research may need to be kept secret.

Included in the report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, are three dystopian vignettes involving artificial intelligence that could have come straight out of the Netflix science fiction show Black Mirror.

An administrator for a building’s robot security system spends some of her time on Facebook during the workday. There she sees an ad for a model train set and downloads a brochure for it. Unbeknownst to her, the brochure is infected with malware: scammers have used AI to infer, from details she has posted publicly, that she is a model train enthusiast, and have designed the brochure just for her. When she opens it, the malware lets hackers spy on her machine and steal her username and password for the building security system, allowing them to take control of it.

An Eastern European hacking group takes a machine-learning technique normally used for defending computer systems and adapts it to build a more tenacious and pernicious piece of malware. The program uses techniques similar to those found in the Go-playing AI AlphaGo to continually generate new exploits. Well-maintained computers remain immune, but older systems and smart devices are infected. Millions of people are forced to pay a 300-euro ransom (in Bitcoin, naturally) to regain access to their machines. To make matters worse, attempts to counteract the malware using another exploit end up “bricking” many of the smart systems they were supposed to save.

A cleaning robot infiltrates Germany’s ministry of finance by blending in with legitimate machines returning to the building after a shift outdoors. The following day, the robot performs routine cleaning tasks, identifies the finance minister using facial recognition, approaches her, and detonates a concealed bomb, killing her. Investigators trace the killer robot to an office supply store in Potsdam, where it was bought with cash, and the trail goes cold.

The study is less certain about how to counter such threats. It recommends more research and debate on the risks of AI and suggests that AI researchers need a strong code of ethics. But it also says they should explore ways of restricting potentially dangerous information, in the way that research into other “dual-use” technologies with weapons potential is sometimes controlled.

AI presents a particularly thorny problem because its techniques and tools are already widespread, easy to disseminate, and increasingly easy to use—unlike, say, fissile material or deadly pathogens, which are relatively hard to produce and therefore easy to control. Still, there are precedents for restricting this kind of knowledge. For example, after the US government’s abortive attempt to impose secrecy on cryptography research in the 1980s, many researchers adopted a voluntary system of submitting papers to the National Security Agency for vetting.

Jack Clark, director of policy at OpenAI and one of the report’s authors, acknowledges that adopting secrecy could be tricky. “There’s always an incredibly fine line to walk,” he says.

Some AI researchers would apparently welcome a more cautious approach. Thomas Dietterich, a professor at Oregon State University who has previously warned of AI’s criminal potential, notes that the report’s authors don’t include computer security experts or anyone from the likes of Google, Microsoft, and Apple. “The report seems to have been written by well-intentioned outsiders like me rather than people engaged in fighting cybercrime on a daily basis,” he says.
