
The Data Daily

Everyone Is Wrong About AI Writing Tools

I’m a writer who covers AI. As such, I have a unique dual perspective on the topic: my “I know AI” mind and my “I write” mind hold strongly conflicting opinions.

My AI mind constantly repeats, “Don’t worry, AI is dumb; it can’t reason or understand.” But then my writing mind says, “Hey, I’ve had trouble distinguishing GPT-3 from humans before.”

Of course, each of those identities cares about what matters to it. It’s at the intersection of their views that I realize both are partially right.

On the one hand, even the most sophisticated AI writing tools, impressive as they are, offer nothing more than the appearance of mastery: a mirage that feeds on our gullibility and on our limited access to the reality behind their perfected spell.

On the other hand, unluckily for us, appearances can be more than enough: As long as this feels like my writing and the illusion resists your scrutiny, you’ll be satisfied.

It doesn’t matter to you if I wrote this or if an AI did.

Or does it?

We humans write to communicate something, as professors Emily M. Bender and Alexander Koller eloquently argued in a 2020 paper on AI language models.

There’s always an intention—a purpose behind the words.

Yet, words alone can’t convey that intent. Whatever effect I want to cause in you, reader, remains hidden in my mind. As soon as these letters leave my fingertips to stay forever immutable on this page, they become an empty casing.

Unless another mind—your mind—comes across these symbols to give them a new meaning. Maybe similar to mine. Maybe not.

Your task, as a reader, is to reverse engineer the message I intended to imbue in these words, using the meaning you bestow on them combined with the linguistic system we share (English) and your world knowledge.

And here comes the key.

It’s at that very moment, when your meaning overrides mine as you reconstruct the original intent and communication happens, that it suddenly becomes critical that these words came from my thinking mind and not from an AI stochastic parrot.

Why? Because if it wasn't a human who wrote this, you'd be pursuing a pointless search for something that isn't there:

No meaning within. And no intent to be retrieved.

And yet, despite its undeniable mastery of the form of language, there are places AI will never reach, regardless of how good it gets (under current paradigms).

These are places where intent matters more than words.

If you know me, you’re not here just because you want to read something or get some undefined value. You’re here because you want to read what I have to write and extract the value that I can provide.

That I am the writer matters inherently to you because you want the means, through my words, to access the communicative intent I hid inside. You want to take a peek at my mind.

These words are nothing more than the means to that end.

Putting it plainly: If GPT-3 had written these words instead of me, it would kill the very purpose of writing them in the first place. Reading this wouldn’t give you the same value because the means (the words) wouldn’t take you to your end (retrieving my intent).

Let me use this quote to illustrate why (it’s from a teacher asking a student not to use GPT-3 to write essays):

The reader values being in a trustworthy relationship with the writer, not an apparently trustworthy one.

The relative weight readers give to intent vs words depends on many factors: Are they reading a book or an ad? Do they know the author or do they only care about what the content provides? Do they care about the argument and the thesis that’s being defended, or only about spending some time reading?

Reading can be passive consumption. In those cases, words hold the most value.

But when reading becomes a timeless active conversation with the writer, words matter little—it’s the underlying intention that is valuable.

And neither GPT-3 nor any other similarly built AI language model—however good at spitting tokens coherently—will ever be able to provide that.

You may argue that human intent can be effectively preserved with prompts.

In the end, the AI doesn’t come up with the words by itself; it’s a human who guides it through the possible completions with clever pushes in the form of natural-language inputs.

However, because of the arguments I laid out above, prompts can’t contain the intent of the person behind the AI. And, even if they did, no AI could capture the original purpose.

AI systems not only lack the ability to write intentionally. They also lack the ability to retrieve intent from human-written text. The reason? They can’t access meaning, which mediates the relationship between intent and words by grounding the latter in real-world entities.

The bottom line is that whenever an AI is present in the communication chain, there's a defective link.

As soon as a writer (prompter) accepts the AI's output as valid, they’re giving up their original intent—if there was any.

This is probably the most valuable insight in this article for writers: In using AI writing tools, you risk replacing your sensible, interesting, or useful communicative intent with whatever the AI decides to output.

As soon as you start saying, “that’s good enough,” your presence in the finished piece starts to shrink.

Now that I’ve made crystal clear why AI writing tools can’t ever replace the core values of human writing, let’s explore where this argument weakens.

For one, I see no reason to discard the use of these tools completely (I use Grammarly and sometimes accept its word suggestions).

What matters is choosing wisely when to let AI tools help and when to retain your invaluable essence. Virtue lies in moderation.

Now, let’s see what features make AI writing tools shine and which spaces will be most affected.

You saw GPT-3’s examples above (and probably many more across the internet).

People cherry-pick, true, but the fact that AI (at its best) can generate seemingly human-written pieces is astonishing enough.

You may think you’re good at spotting AI-written text, but that’s survivorship bias: you only spot the AI-written pieces that look AI-written. What about those you don’t spot? And what about the upcoming, superior language models?

Berkeley alumnus Liam Porr was one of the first to illustrate this possibility with a real-world example. He started a GPT-3-based blog on Substack and managed to get an article on productivity to the front page of Hacker News. A few commenters suspected, but most were oblivious.

The post’s first paragraph, remember, was raw GPT-3, and it read like any other productivity blog.

Character.ai is our illustrative example here. With a simple prompt, I can make an AI Socrates elaborate a rather convincing argument on virtually any topic, and even evoke, to some degree, Socrates’ unique tone and style.

Forget about coding. You only need to push GPT-3 lightly, with an English prompt, to make it go in the right direction. It’ll eventually diverge, but then you can just redirect it again.

Keep going until you've got an 800-word essay.
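To make that loop concrete, here’s a minimal Python sketch of the workflow, assuming the GPT-3-era OpenAI completions client (openai<1.0). The model name, opening sentence, and parameters are illustrative placeholders, not anything the tools above actually ship:

```python
# A sketch of the "prompt, read, redirect" loop described above.
# Assumes the GPT-3-era OpenAI Python client (openai<1.0); the model
# name, opening sentence, and parameters are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Your opening push: a sentence that points the model in the right direction.
essay = "An opening sentence that sets the direction.\n"

while len(essay.split()) < 800:  # keep going until ~800 words
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3 model of that era
        prompt=essay,              # feed everything written so far back in
        max_tokens=200,
        temperature=0.7,
    )
    continuation = response.choices[0].text
    # Human-in-the-loop step: read the continuation, trim or rewrite
    # whatever diverges, then let the model pick up from the edited text.
    essay += continuation

print(essay)
```

Note that everything human happens in the edit step; that is exactly where whatever intent survives gets injected back in (or doesn’t).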

Developers have strong incentives to make writing apps even easier to use. Many people know how to write but not how to code. Those are the main targets.

There are tons of AI writing apps (most built on top of GPT-3): Jasper, Copy, Copysmith, Otherside AI, Moonbeam, ParagraphAI… All are highly intuitive.

Every co-founder Nathan Baschez recently revealed Lex, the new kid on the block, which works like Google Docs with superpowers.

It has sparked a debate on Twitter about these tools (and, to give credit where it’s due, prompted me to write this article).

Jasper, Copy, Lex… all are pretty much the same thing (GPT-3 with clever prompt engineering on top), and it’s unlikely most will survive once the funding dries up.

But some will, and they’ll get even better once GPT-4 is out.

I co-wrote an article entitled “AI Has an Invisible Misinformation Problem” with GPT-3. It took me an hour and about 32 cents.

It gets costlier with scale, but it's incomparably cheaper than hiring a human writer to produce the same amount of content.
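For a back-of-the-envelope sense of how that scales, here’s a quick sketch. The $0.02-per-1,000-tokens figure is davinci’s list price from around that time, and the per-article token count is my own rough assumption:

```python
# Back-of-the-envelope cost of GPT-3-assisted drafting at scale.
# Assumes davinci-era list pricing of about $0.02 per 1,000 tokens;
# the per-article token count is a rough illustrative estimate.
PRICE_PER_1K_TOKENS = 0.02  # USD

def article_cost(tokens_used: int) -> float:
    """Cost in USD of the tokens consumed while drafting one article."""
    return tokens_used / 1000 * PRICE_PER_1K_TOKENS

# Drafting an 800-word article with several rounds of regeneration might
# consume ~16,000 tokens, which lands right at the 32 cents quoted above.
print(f"1 article:      ${article_cost(16_000):.2f}")          # $0.32
print(f"1,000 articles: ${article_cost(16_000) * 1000:,.2f}")  # $320.00
```

Even at a thousand articles, the bill stays far below what a single commissioned piece would cost.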

As a bonus: even though these systems are unreliable, lack long-term memory, can generate incoherent utterances, and make up information, you can always edit.

Some may find it annoying to edit a piece they didn’t write, but for some, it can be a source of ideas or a way to prevent writer’s block.

In the first half, I highlighted just how fundamentally flawed the comparison between humans and AI writing tools is.

In the second half, however, I noted how the strengths of language models make them suitable for many writing tasks. They can act as enhancers in some settings, instead of replacers.

This fine-grained analysis is key to understanding who is at risk and which tasks may end up being automated: for instance, tasks where scale matters more than style, and where the effect on the reader matters more than the intent of the writer.

It’s under those conditions that the risk is highest. If you’re reading Shakespeare, you care that it’s him and not an AI emulating his voice. If a marketing agency wants a clever ad for its next campaign, neither you nor they care.

As a newsletter writer, I’m likely toward the safer end of the spectrum. But copywriters, ad marketers, generic content creators, freelance writers, and even ghostwriters, among others, may face a harsher future.

Can readers obtain value from AI-written pieces? Yes.

Can AI writing tools impact the demand for human writers? Sadly, yes.

But can AI write how humans write, or why humans write? No. Not now, not ever.

Something to remember for both writers and readers.
