AI & Global Governance: No One Should Trust AI - United Nations University Centre for Policy Research

AI & Global Governance: No One Should Trust AI - United Nations University Centre for Policy Research

No one should trust Artificial Intelligence (AI).

Trust is a relationship between peers in which the trusting party, while not knowing for certain what the trusted party will do, believes any promises it makes. AI is a set of system development techniques that allow machines to compute actions or knowledge from a set of data. Only other software development techniques can be peers with AI, and since these do not “trust”, no one can actually trust AI.

More importantly, no human should need to trust an AI system, because it is both possible and desirable to engineer AI for accountability. We do not need to trust an AI system; we can know how likely it is to perform the task it has been assigned, and only that task. When a system using AI causes damage, we need to know that we can hold the human beings behind that system to account.

Many claim that such accountability is impossible with AI, because of its complexity, or the fact that it includes machine learning or has some sort of autonomy. However, we have been holding many human-run institutions such as banks and governments accountable for centuries. The humans in these institutions also learn and have autonomy, and the workings of their brains are far more inscrutable than any system deliberately built and maintained by humans. Nevertheless, we can determine credit and blame where necessary by maintaining accounts and logs of who does what and why.

The same is true of AI, and there are examples to illustrate this point. For every fatality caused by a “driverless” car, the world has known within a week what the car perceived, what it considered those sensor readings to mean, and why it acted in the way it did. The companies that built the car are held to account.

Unfortunately, not all AI is built to the standards of driverless cars. Because cars are especially destructive and dangerous, the automotive industry is well regulated. We are now coming to realize that software systems, with or without AI, should also be well regulated. We now know that extraordinary damage can be done when, for example, foreign governments interfere in close elections through social media. The same basic standards that govern other commerce and manufacturing sectors can be applied to regulating software systems: as with any other manufactured product, either the manufacturer or the owner/operator must be accountable for any damage the product causes. Otherwise, malicious actors will attempt to evade liability for the software systems they create by blaming the system itself, citing characteristics such as autonomy or consciousness.

Organizations that develop software need to be able to demonstrate due diligence in the creation of that software, including by using appropriate standards for logging: what code is written (as well as when, why, and by whom), which software and data libraries are used, and what hardware is used during system development. Organizations need to undertake and document appropriate testing before the software’s release, and to perform monitoring and maintenance while the code is in use. Like those in other sectors, they should be held liable unless they can prove such due diligence. These procedures lead to safer and more stable software systems, intelligent or not.
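
To make the logging point concrete, here is a minimal sketch, in Python, of the kind of provenance record such due-diligence logging might produce. The record_change() helper and its field names are illustrative assumptions for this sketch, not an established standard or anything prescribed in this article.

import json
import platform
from datetime import datetime, timezone

def record_change(author, reason, files, libraries):
    """Build one audit-log entry: who changed what, when, why, and with which dependencies."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the change was made
        "author": author,                  # who wrote the code
        "reason": reason,                  # why the change was made
        "files_changed": files,            # what code was written or modified
        "libraries": libraries,            # software and data libraries in use
        "hardware": platform.machine(),    # hardware used during development
        "platform": platform.platform(),
    }

if __name__ == "__main__":
    entry = record_change(
        author="a.developer@example.org",
        reason="Retrained the ranking model on a new data snapshot",
        files=["models/ranker.py"],
        libraries={"numpy": "1.26.4"},
    )
    # In practice an entry like this would be appended to a tamper-evident audit log.
    print(json.dumps(entry, indent=2))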

By and large, enforcement of such standards should be carried out by ordinary local prosecutors, and not only (though also) by specialist regulators at the local, national, and transnational levels. There is a real need for transnational coordination, because many of the corporations creating AI systems operate across national boundaries. The EU has provided a very strong precedent in this regard, particularly through models of how treaties concerning standards can be agreed between nations and then implemented by them. In the future, nations may increasingly negotiate such treaties with transnational corporations as well.

It is essential that AI systems are never presented as being responsible or as legal entities in and of themselves. Legal deterrence is based on human characteristics – the suffering we feel when we lose status, liberty, property, and so forth. Today, there is no way to build AI systems to be themselves persuaded by such deterrence, at least not in the way a human or even another social animal would be. AI legal persons would be the ultimate shell company. Across many jurisdictions, we have seen that the concept of legal personhood can be overextended, and it is likely that the corrupt will seek to extend it further into AI to avoid their obligations to society.

We are just becoming aware of the extent to which AI pervades our lives, at a time of vast inequality, when political polarization is dividing societies and making them more difficult to govern. We should focus on AI as a tool for knowledge and communication, not fixate on narcissistic pursuits. AI is not a thing to be trusted. It is a set of software development techniques by which we should be increasing the trustworthiness of our institutions and ourselves.

This article has been prepared by Dr Joanna Bryson as a contributor to AI & Global Governance. The opinions expressed in this article are those of the author and do not necessarily reflect those of the Centre for Policy Research, United Nations University, or its partners.
