The strange new world of AI power, politics and the ‘pause’ | The AI Beat

But it seems that as AI leaves the research lab and launches, fully flowered, into the cultural zeitgeist, promising tantalizing opportunities while presenting real-world societal dangers, we are also entering a strange new world of AI power and politics. No longer are AI debates just about technology, or science, or even reality. They are also about opinions, fears, values, attitudes, beliefs, perspectives, resources, incentives and straight-up weirdness.

This is not inherently bad, but it does lead to the DALL-E-drawn elephant in the room: For months now, I’ve been trying to figure out how to cover the confusing, kinda creepy, semi-scary corners of AI development. These are focused on the hypothetical possibility of artificial general intelligence (AGI) destroying humanity, with threads of what have recently become known as the “TESCREAL” ideologies, including “effective altruism,” “longtermism” and “transhumanism,” woven in. You’ll find some science fiction sewn into this AI team sweater, with the words “AI safety” and “AI alignment” embroidered in red.

Each of these areas of the AI landscape has its own rabbit hole to go down. Some seem relatively level-headed, while others lead to articles about the paperclip maximizer problem; a posthuman future created by artificial superintelligence; and a San Francisco pop-up museum devoted to highlighting the AGI debate, complete with a sign saying “sorry for killing most of humanity.”

Much of my VentureBeat coverage focuses on the effects of AI on the enterprise. Frankly, you don’t see C-suite executives worrying about whether AI will extract their atoms to turn into paper clips — they’re wondering whether AI and machine learning can boost customer service, or make workers more productive.

The disconnect is that there are plenty of voices at top companies, from OpenAI and Anthropic to DeepMind and all around Silicon Valley, who have an agenda based at least partly on some of the TESCREAL issues and belief systems. That may not have mattered much 7, 10, or 15 years ago, when deep learning research was in its infancy, but it certainly garners lots of attention now. And it’s becoming more and more difficult to discern the agenda behind some of the biggest headline-grabbers.

That has led to suspicion and accusations: For example, last week a Los Angeles Times article highlighted the contradiction that Sam Altman, CEO of OpenAI, has declared himself “a little bit scared” of the very technology he is “currently helping to build and aiming to disseminate, for profit, as widely as possible.”

The article said: “Let’s consider the logic behind these statements for a second: Why would you, a CEO or executive at a high-profile technology company, repeatedly return to the public stage to proclaim how worried you are about the product you are building and selling? Answer: If apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy.”

Over the weekend, I posted a Twitter thread. I was at a loss, I wrote, as to how to address the issues lurking beneath the AI ‘pause’ letter; the information that led to Senator Murphy’s tweets; the polarizing debates about open versus closed-source AI; and Sam Altman’s biblical prophecy-style post about AGI. All of these discussions are being driven in part by people whose beliefs most of the public knows nothing about: neither that they hold those beliefs nor what they mean.

What should a humble reporter who is trying to be balanced and reasonably objective do? And what can everyone in the AI community — from research to industry to policy — do to get a grip on what’s going on?

Former White House policy advisor Suresh Venkatasubramanian replied that the problem is that “there’s a huge political agenda behind a lot of what masquerades as tech discussion.” And others agreed that the discourse around AI has moved from the realm of technology and science to politics — and power.

Technology has always been political, of course. But perhaps it does help to acknowledge that the current AI debates have soared into the stratosphere (or sunk into the muck, depending on your take) of political discourse.

There were other helpful recommendations for how we can all gain some perspective: Rich Harang, a principal security architect at Nvidia, tweeted that it’s important to talk to the people who actually build and deploy these LLMs. “Ask people going off the deep end about AI ‘x-risk’ what their practical experience is in doing applied work in the area,” he advised, adding that it’s important to “spend some time on real-world risks that exist right now that stem from ML R&D. There’s plenty, from security issues to environmental issues to labor exploitation.”

And B Cavello, director of emerging technologies at the Aspen Institute, pointed out that “predictions are often spaces of disagreement.” They added that they have been working on focusing less on the disagreement and more on where people are aligned — many of those who disagree about AGI, for example, do agree on the need for regulation and for AI developers to be more responsible.

I’m grateful to all who responded to my Twitter thread, both in the comments and in direct messages. Have a great week.
