The Data Daily

A six-month pause on new artificial intelligence won’t stop the race for dominance

Elon Musk and more than a thousand other tech experts want a pause in new AI development, but there’s no real incentive for a ceasefire, writes Chris Mallows.

Last week, as the race to dominate the artificial intelligence space continued in earnest, more than 1,000 tech experts, including Elon Musk, signed an open letter to the AI community pleading for a six-month pause in the development of powerful new systems.

The underlying concern raised by the signatories is that these powerful AI systems may, without proper planning and management, pose a profound risk to society and humanity. As development carries on apace, the letter argues that a pause is needed to enable AI labs and independent experts to “jointly develop and implement a set of shared safety protocols for advanced AI design and development”.

In parallel, the letter asks policymakers to accelerate the development of robust AI governance systems.

The letter has caused plenty of discussion within the AI community about whether attention should be focussed on the long-term risks it highlights or on more immediate harms such as information security and data privacy risks. Some commentators have even suggested the letter merely advances the AI hype cycle instead of addressing the actual risks.

I do not intend to debate the merits of pausing development here but would instead like to focus on the practicality of the demands.

The responsibility for enacting the pause is being laid at the feet of the AI labs themselves. This would require cross-industry consensus. In a still-nascent market where growing investment is targeted at those at the forefront of development, where is the incentive to lay down tools?

Companies that do follow the letter’s recommendation risk falling irreversibly behind as their competitors ignore the advice or merely pay lip service to it. It is worth noting that the signatories to the letter do not include anyone from OpenAI or the C-suite at Microsoft or Google DeepMind.

Any pause will achieve little without the backing of the major market players. In the exceedingly likely scenario that those players do not listen, the letter recommends governments should “step in and institute a moratorium” instead. The intention is that governments could then, within that time, legislate to provide a governance framework for AI with an emphasis on safety.

For that to work, we would need buy-in from governments across the world, an agreed common purpose, and trust. I am far from convinced of that possibility. The biggest GDP gains from AI-driven productivity improvements are expected to be seen in North America and China. Unless the US and China are in lockstep, this plan looks destined to fail.

We have seen countries, including China, regulate AI to an extent, and the EU has published proposed regulation in the form of the Artificial Intelligence Act, but the global approach is not aligned. It is difficult to see a near-term scenario in which it even comes close.

In contrast to the EU’s approach, in the same week as the Future of Life Institute’s letter was published, the UK government released its white paper for AI regulation. In a style that mirrors the government’s current thinking in other areas, the focus is on a relatively hands-off approach to regulation. Responsibility for AI governance will not be given to an overarching regulator; rather, existing bodies will be required to come up with their own approaches on a sector-specific basis within the confines of current law.

To expect one government to be able to deliver a suitable regulatory framework for AI governance within six months is fanciful. To expect it on a global scale is preposterous.