The Data Daily

The Future Of Fake News And Our Mental Health

Breakthrough AI programs can now generate videos from text input. The U.S. suicide rate and the prevalence of anxiety disorders are at all-time highs. The White House has announced the “AI Bill of Rights.”

What’s the connection between these three news items?

They all hint at how we will live our lives in the near future: As illusionists, making up imaginary worlds, fearing fabricated threats, led by conjurers, tricksters, and demagogues. For some, this prediction is already a good approximation of their present reality.

Let’s start with “AI,” the most exciting, confusing, and menacing technology of our times. On September 29th, Meta unveiled Make-A-Video, an AI that generates five-second videos from text.

What amazingly rapid “progress” we have witnessed over the last few years! After all, DALL-E, the incredibly “creative” AI program that generates still images from text, was only introduced in January 2021. Like other “AI breakthroughs,” it generated a lot of talk, with one enthusiastic reporter noting its unusual capacity for "understanding how telephones and other objects change over time," and another seeing it as proof that “we’re in a golden age of progress in artificial intelligence.” The handful of protesting voices, like that of Gary Marcus pointing out “the giant gulf between drawing a picture and understanding the world,” were drowned out by the general adulation of the artistic machines and the widespread anxiety about our future creative masters.

But this was ages ago in AI years. Now we have the tools that will (eventually) allow all of us to just write a few words to create a realistic-looking video.

AWS's Bratin Saha recently observed that IT has progressed along the trajectory of Moore’s law, with compute capacity doubling every 18 months. In contrast, “the amount of compute that is being used for machine learning has been doubling every three-and-a-half months. So this has been going almost at 5X speed of what traditional IT was doing.” In four years, the number of parameters in the “language models” that are at the heart of the amazingly creative AI programs went from around 20 million to 175 billion.
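The compounding effect of those two doubling rates is easy to underestimate. A minimal sketch of the arithmetic, using the figures quoted above (doubling every 18 months for traditional IT, every 3.5 months for machine-learning compute) over a four-year span:

```python
# Compare compound growth under the two doubling rates cited above.
# Figures assumed from the quote: 18-month doubling (Moore's law)
# vs. 3.5-month doubling (ML compute), both over 4 years.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplication after `years`, doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

moore_growth = growth_factor(4, 18)   # roughly 6x
ml_growth = growth_factor(4, 3.5)     # roughly 13,000x

print(f"Moore's law over 4 years: {moore_growth:,.0f}x")
print(f"ML compute over 4 years:  {ml_growth:,.0f}x")
```

Note that the doubling *period* is only about five times shorter, but because the growth compounds, the four-year outcome differs by more than three orders of magnitude, which is consistent with the parameter counts cited (20 million to 175 billion is roughly a 9,000-fold increase).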

The result of this new “AI Law” (doubling every three-and-a-half months) is yet another set of AI players out-competing each other, this time for the text-to-video Oscar: Meta’s announcement of Make-A-Video was quickly followed by one from Phenaki and another from Google. And China was first with CogVideo, as it keeps out-competing the U.S. in the global AI race.

Given the recent experience with text-to-image AI, it’s safe to predict that we will see similar rapid “progress” with text-to-video AI, culminating in a public version available to all 5 billion people currently connected to the internet (last July, OpenAI made the text-to-image DALL-E available to the general public).

Which brings us to the state of mental health of the 300 million-plus internet users in the U.S., where internet penetration among 18-to-49-year-olds is around 99%. The U.S. suicide rate last year increased 4% over the 2020 rate, according to a recent preliminary report from the Centers for Disease Control and Prevention. Males aged 15 to 24 experienced the sharpest increase, at 8%. Across all age groups, the U.S. suicide rate rose 36% from 2000 to 2018. And a medical advisory group to the federal government has just recommended that all American adults aged 19 to 64 be screened for anxiety.

There are many unsubstantiated claims regarding the negative influence of social media (for example, that it can influence voter behavior), so one recent study tried to find a causal link by investigating the introduction of Facebook to U.S. colleges and the corresponding data on students’ mental health. It concluded that the “roll-out of Facebook at a college increased symptoms of poor mental health, especially depression… after the introduction of Facebook, students were more likely to report experiencing impairments to academic performance resulting from poor mental health.” The researchers also concluded that “the results are due to Facebook fostering unfavorable social comparisons.”

That Facebook creates “a spiral of envy” among many of its users was already reported a decade ago. Social media brings us closer to other people but also provides many new opportunities to watch, experience, and envy the success of people who are just like us, our equals. “The more a society is dedicated to the value of equality and the more choices it offers for individual self-determination, the higher its rates of functional mental illness,” says Liah Greenfeld.

With social media and now with AI programs, we have new tools for accentuating the negative (in addition to a few real benefits they may offer here and there). In the near future, people suffering from mental illness and the multitudes of anxious people in the U.S. and elsewhere will watch and experience fake reality delivered in the most impactful medium: the video clip.

What will happen to anxious people when malevolent individuals employ text-to-video AI to unleash fake but realistic-looking videos at scale (to use Silicon Valley’s favorite term)? What will happen when the most “trending” tweet or “viral” Facebook post shows a realistic-looking scene of Manhattan after a nuclear attack, with CNN anchors reporting the “news”?

Just like we regulate how people drive vehicles that have the potential to inflict harm, shouldn’t we regulate the use of potentially harmful AI programs?

It’s not that U.S. governments, federal and local, are uninterested in regulating AI. It’s just that they either issue specific fines for AI programs that fail a “bias audit” they never define; or, as the White House just did after extensive work and deliberations with its “AI Bill of Rights,” they prefer to talk (and talk) about a “framework” rather than propose actual legislation. After all, uttering meaningless words, or words that can mean anything because they are never defined, is what politicians consider acting on behalf of the people who elected them.

Is AI a “challenge to democracy” (in the words of the White House) and a civil rights issue? Or is the harm it may cause the result of the failure of the federal government to protect our data for the last half a century?

What we call today “AI” is the most recent stage of computer-based learning from data. “Mining” the data collected 24/7 by computers started in earnest in the 1970s and got a boost twenty years ago with the excitement over “big data.” That was when the U.S. government and its surveillance arm, the NSA, decided to go for “the whole haystack,” as a brave whistleblower revealed to us in 2013. A couple of years later I wrote: “U.S. corporations have been legally collecting and sharing our data long before Google appeared on the scene and our data has been a fountain of enthusiasm for government officials and business executives for a long time.”

The government that regulates, for better or worse, many aspects of our lives, has failed to protect our data and define it as our property. It then compounded that failure with massive, ineffective (and expensive) surveillance. “Big data” is now called “AI,” but our elected leaders’ inability or unwillingness to advance and implement data protection legislation and regulation has not changed. Instead, we get words, meaningless words.

Here's just one example, from Andy Baio on “AI Data Laundering,” of how the absence of serious data protection laws leads directly to potentially harmful AI programs:

“…in addition to a massive chunk of Shutterstock’s video collection [compiled by academic researchers], Meta is also using millions of YouTube videos collected by Microsoft [Research] to make its text-to-video AI…

It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits. In some cases, they’re directly funding that research…

I was happy to let people remix and reuse my photos for non-commercial use with attribution, but that’s not how they were used. Instead, academic researchers took the work of millions of people, stripped it of attribution against its license terms, and redistributed it to thousands of groups, including corporations, military agencies, and law enforcement.”

That incredible sociologist, Mark Zuckerberg, is now transforming Facebook, the leader of social media, the “platform” for increasing and reducing loneliness, for new social connections and new sources of envy, for encouraging both a sense of community and a sense that familiarity breeds contempt, into Meta, the future purveyor of the “metaverse”: a collection of imaginary worlds where we will forget our anxieties, hide from fabricated threats, and believe in the new religion of “AI.” Or is that already the present?
