The Data Daily

Do machines have minds?

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.

What you’ll read a lot in the media is that advances in artificial intelligence have reached a point where it is becoming increasingly difficult to tell the difference between humans and machines.

To put it more precisely, the superficial similarities between human and artificial intelligence are making it difficult to see the underlying differences that persist. I've been thinking about this issue a lot recently as I've followed the controversy surrounding Google's LaMDA and AI sentience, the growing capacity of deep learning systems to outmatch humans in complicated games, and the use of generative models to create stunning artwork.

Obviously, current AI systems are nothing like the human mind. Despite their spectacular feats, their flaws become easy to spot when they’re pitted against the versatility and flexibility of human intelligence.

But neither can we dismiss them as being stupid. So how do we examine and define the growing “cognitive” (if that’s the proper term) abilities that we see in our machines today?

The Book of Minds, by science writer Philip Ball, helped me better adjust my perception of the fast-evolving world of artificial intelligence. And for me, the key was first to change my perception of "minds." The book, which covers all kinds of minds, from humans to animals to machines and extraterrestrials, gives you a framework for looking past your instinctive tendency to view things through the lens of your own mind and experience.

One of the recurring themes in Ball’s book is the fallacy of the anthropocentric view of minds, which is to evaluate other agents (living or not) in the image of our own minds. This false view can go in different directions. On one end of the spectrum, we dismiss all other organisms as stupid because they don’t have a mind like ours. On the other end, we anthropomorphize everything, from pets to cars to computers, interpreting their behavior in terms of human intelligence.

This habit has stripped us of the ability to see these other minds for what they really are. First, we must accept that there are different types of minds, and there is a “space of possible minds” with multiple dimensions. Second, we must move away from the “pre-Copernican” view of the mindspace, which puts humans at its center. We can, however, try to guess how close other minds are to ours based on their properties across the dimensions of the mindspace.

And finally, we must define what “mind” and “mindedness” are—which itself is a difficult task.

As Ball puts it, “‘mind’ is one of those concepts – like intelligence, thought, and life – that sounds technical (and thus definable) but is in fact colloquial and irreducibly fuzzy. Beyond our own mind (and what we infer thereby about those of our fellow humans), we can’t say for sure what mind should or should not mean.”

Nonetheless, Ball settles on the following definition, which is fashioned after a theory by philosopher Thomas Nagel:

For an entity to have a mind, there must be something it is like to be that entity.

Although this is an ambiguous definition, it can be helpful. “The only mind we know about is our own, and that has experience. We don’t know why it has experience, but only that it does. We don’t even know quite how to characterize experience, but only that we possess it. All we can do in trying to understand mind is to move cautiously outwards, to see what aspects of our experience we might feel able (taking great care) to generalize,” Ball writes.

We must also acknowledge that not everything has a mind; it takes a certain level of complexity for an organism to have something like one. For example, we can attribute a mind to a dog or a chimpanzee or (maybe) a fly. But does it make sense to consider a fungus to be minded? Ball stresses that minds are not all-or-nothing entities but matters of degree.

“I don’t think it’s helpful to ask if something has a mind or not, but rather, to ask what qualities of mind it has, and how much of them (if any at all),” Ball writes.

In The Book of Minds, Ball suggests different ways to map the mindspace, including along dimensions such as experience, agency, intelligence, consciousness, and intentionality. While delving into each of these concepts is beyond the scope of this article, what is clear is that mindedness spans several dimensions, and no single mind excels at every one of them.
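Purely as an illustration, here is a minimal Python sketch of that idea. The dimension names follow Ball's list, but the numeric scores (and the example minds) are invented here for the sake of the sketch, not taken from the book: each mind is a point in a multi-dimensional space, and "closeness to the human mind" is simply distance in that space.

```python
import math

# Dimensions of the mindspace, loosely following Ball's list.
DIMENSIONS = ["experience", "agency", "intelligence", "consciousness", "intentionality"]

# Invented scores in [0, 1], purely for illustration; the book assigns no numbers.
minds = {
    "human":    {"experience": 1.0, "agency": 0.9, "intelligence": 0.8, "consciousness": 1.0, "intentionality": 0.9},
    "dog":      {"experience": 0.7, "agency": 0.6, "intelligence": 0.3, "consciousness": 0.5, "intentionality": 0.5},
    "chess AI": {"experience": 0.0, "agency": 0.1, "intelligence": 0.6, "consciousness": 0.0, "intentionality": 0.1},
}

def distance(a: dict, b: dict) -> float:
    """Euclidean distance between two minds in the mindspace."""
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in DIMENSIONS))

# How far does each mind sit from our own?
for name, profile in minds.items():
    print(f"{name}: {distance(minds['human'], profile):.2f} from the human mind")
```

The numbers are beside the point; what matters is the geometry. A mind can sit high on one axis and at zero on another, which is exactly why a single yardstick such as "intelligence" says so little about mindedness.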

We must, however, be careful about what kinds of attributes we count toward mindedness, Ball warns, because "the well-motivated wish to escape from old prejudices about our own species constantly risks becoming an exercise in pulling other organisms up to our own level."

“It can often look like another ‘Mind Club’ affair: whom do we admit (on the grounds of having minds deemed to be a bit like ours), and whom do we turn away at the door?” Ball writes.

Evidently, some animals exceed us in specific skills, such as motor functions, auditory acuity, navigational prowess, short-term memory, and location memory. On the other hand (and we'll get to this in a bit), AI and computers can surpass us in tasks such as mathematical calculation and problems that require brute-force number crunching. But while these are criteria we often use to measure intelligence in humans, they tell us nothing about the minds of other beings.

“The question posed by [Thomas] Nagel still stands as a challenge: what do cognitive capacities of this kind imply for the subjective what-it-is-to-be-like of the animal mind?” Ball notes.

Ball notes that “much of the discussion about the perils of artificial intelligence has been distorted by our insistence on giving machines (in our imaginations) minds like ours.”

For example, reporters, analysts, and even scientists often speak about AI systems such as AlphaZero and GPT-3 as if they were speaking about humans. This is largely influenced by the "computational view" of the brain, in which the human brain is seen as a data-processing machine. Scientists have designed computers based on this notion, and in turn, advances in computation have reinforced the computational view of the mind.

One of the characteristics of the computational view is that it is agnostic about the physical substrate of the mind. As long as a thing can manifest the data-processing functions of a mind, it can be considered to have a mind of sorts.

Ball refutes this view of the mind. “My hunch is that no genuine mind will be like today’s computers, which sit there inactive until fed some numbers to crunch,” he writes. “It’s certainly notable that evolution has never produced a mind that works like such a computer. That’s not to say it necessarily cannot – evolution is full of commitment bias to the solutions it has already found, so we have little notion of the full gamut it can generate – but my guess is that such a thing would not prove very robust.”

Ball believes that embodiment plays an important role in shaping the mind, an opinion shared by many scientists. In every species, the mind, brain, and nervous system evolved together. Furthermore, in most organisms, the brain and nervous system grow along with the body, and the brain's learning modes and capacities shift as the body ages. Therefore, the mind cannot be considered a piece of software installed on the brain.

“The interaction [between the mind and brain] is much more fluid than that. You could say that the world the mind builds is predicated on the possibility of doing something to it (which includes doing something to or with one’s own body). Thus the kind of world that the mind builds depends on the kind of actions it can take,” Ball writes. “In other words, a central feature of the mind is that it is embodied. It constructs a world based on assumptions and deductions about what interventions the organism might be capable of.”

Interestingly, while Ball’s book is dedicated to giving readers a broader view of what different minds look like, he doesn’t think computers have minds.

“Let’s be clear: every robot and computer ever built (on this planet) is mindless. I feel slightly uneasy, in asserting this, that I might be merely repeating the bias against animal minds that I have so deplored,” he writes. “All the same I think the statement is true, or would at least be generally regarded as such by experts in the field.”

He goes on to say that current AI assistants don't feel bad when you mock them, and robots don't need moral rights. The main reason for dismissing machine minds, in Ball's opinion, is that the differences between the principles of logic-circuit design and speculative theories of consciousness are too vast.

“Computers today, for all their staggering capabilities, remain closer to complicated arrays of light switches than even to rudimentary insect brains,” Ball writes, a statement that many scientists today would no doubt endorse.

He does, however, state two caveats. First, there is no guarantee that something mindlike could not arise in a device built from something other than the wetware found in organic brains (e.g., electronic components). Second, we don't know for sure how today's AI works, a statement that is at least true for very large neural networks.

From this, he concludes that we should metaphorically regard current AI systems as "a collection of proto-minds," like the nervous systems of early Ediacaran organisms.

“We do not know if they are really the kind of stuff that can ever host genuine minds – that there can ever be something it is like to be a machine. We can and should, however, begin to think about today’s computers and AIs in cognitive terms: to treat them conceptually (if not morally) as if they are minds,” he writes.

If we go back to the "embodied mind" theory, then we must conclude that whatever mind today's AI has is very different from ours. Furthermore, we must accept that the direction in which AI, and machine learning in particular, is currently heading is not toward replicating the human mind. And this gap will not be bridged by scaling today's neural network architectures and throwing more data and compute power at them.

Ball quotes cognitive scientist Josh Tenenbaum and fellow AI researchers as saying, "We believe that future generations of neural networks will look very different from the current state-of-the-art. They may be endowed with intuitive physics, theory of mind, causal reasoning, and other [human-like] capacities."

We’re reaching a point where our AI systems are increasingly capable of handling complicated tasks. In some cases, AI systems are being given sensitive tasks, such as driving cars, making financial and hiring decisions, and diagnosing patients. Having the right perspective on the AI “mind” will help us better understand what kind of tasks we can trust them with and where they reach their limits.

“While we lack any grasp of the reasoning at work in AI, we’re more likely to indulge our habit of anthropomorphizing, of projecting human-like cognition into places it doesn’t exist,” Ball writes. “But it’s one thing to attribute mind to dumb matter. It’s another, perhaps more dangerous, thing to attribute the wrong kind of mind to systems that genuinely possess some sort of cognitive capacity. Our machines don’t think like us – so we’d better get to know them.”
