
Two ways AI technology is like Nuclear technology

Last updated: 04-20-2021


We often hear that AI is the new electricity and data the new oil. The former at least is kind of true: AI is already pervasive in all of our digital artefacts and interactions. And to be fair, so is the latter: data is dangerous to store, prone to leaking, and might lead to regulatory capture and political instability.

But I want to focus on a different analogy today, because I'm tired of people thinking that the problems of AI regulation can be solved by "AI 4 Good" projects. Look, of course it's good to be good. And since AI is now a pervasive technology, of course you can use AI to help you be good. And of course we frequently use regulation to improve our societies. But...

Foreshadowing: let's pretend for this first point that electrical energy from nuclear power plants is entirely clean, with no bad outcomes, no nuclear waste, etc. (I won't make this assumption in the second point.) No matter how much clean, planet-saving electricity we could get from a world full of fully safe nuclear power plants, we would still need to regulate nuclear weapons and do everything we can to avoid nuclear war. Similarly, all the good things we can and will do using AI don't mean we don't have to worry about regulatory capture by technology companies, loss of privacy or freedom of thought, the elimination of the capacity to build viable political opposition parties, and so forth.

[Image: The USA helpfully training their troops by giving them direct experiences of atomic blasts.]

The truth, however, is that nuclear power plants are used to create nuclear weapons, pose enormous ecological risks – as we've seen at Chernobyl and Fukushima – and produce extremely dangerous waste. It's been suggested that if you take into account the costs of construction and decommissioning, no nuclear power plant has ever made money. Germany has been shutting down its nuclear power plants. I honestly don't know which is a bigger threat to our well-being – coal-powered electricity or nuclear electricity. I'm pretty sure wind and solar are better than either. If there's no way to sustainably generate electricity, ultimately we will have to stop using it – that's what "sustainably" means; we won't have a choice. In the near term we should work to reduce how much electricity we use, to reduce how much damage we do when we use it, and to counteract that damage.

Similarly, I don't know whether all the entertainment, education, security, and economic benefits of AI can outweigh the dangers of having the capacity to identify who is going to vote how, who is most likely to join a military or a militia, create an opposition party, or blow a whistle. I'm pretty sure, though, that these are problems we need to guard against anyway – a lot of regimes just throw all opposition (and often all academics) into jail, or kill them. They haven't historically needed AI for that.

Professor of Ethics and Technology at the Hertie School

So this is why I'm Professor of Ethics and Technology at the Hertie School now: to work on ensuring that the world we live in has the capacity to prevent bad applications of AI and to head off its unintentional bad outcomes, as well as those of other (particularly digital) technologies. It's great that people want to do good things with AI – in fact, I personally use AI to build models to try to head off bad outcomes. But limiting, mitigating, and hopefully eliminating bad outcomes absolutely must be an essential component of AI regulation and governance. No amount of other good projects can outweigh that.

