The Data Daily

The Tricky Business of Measuring Consciousness

The only thing you know for sure is that you are conscious. All else is inference, however reasonable. There is something in your head that generates experiences: the words you are reading on this page, the snore of a bulldog on a red carpet, the perfume of roses on a desk. Your experience of such a scene is exclusive to you, and your impressions are integrated into one unified field of perception. It is like something to be you reading, hearing a dog, smelling flowers.

But what is going on in the heads of other people, and do dogs or even computers have experiences too? Is it also like something to be them? If entities besides yourself are sentient, whence comes consciousness? Philosopher David Chalmers calls the question of how physical systems give rise to subjective experience the “hard problem” of consciousness. Many philosophers think the hard problem insoluble, because consciousness cannot be reduced to pulses in neurons the way bodily functions can be explained by gene expression. Though our own consciousness is the only thing we know directly, it remains the most mysterious thing in the world.

Understanding consciousness better would solve some urgent, practical problems. It would be useful, for instance, to know whether patients locked in by stroke are capable of thought. Similarly, one or two patients in a thousand later recall being in pain under general anesthesia, though they seemed to be asleep; could we reliably measure whether such people are conscious during surgery? Some of the heat of the abortion debate might dissipate if we knew when and to what degree fetuses are conscious. We are building artificial intelligences whose capabilities rival or exceed our own. Soon we will have to decide: Are our machines conscious, to even a small degree, and do they have rights that we are bound to respect? These are questions of more than academic philosophical interest.

What we want is a theory of consciousness that can measure sentience. Recently, Marcello Massimini and colleagues at the University of Milan devised a test that zaps the brains of patients with magnetic stimulation, captures the resulting brain activity with electroencephalography, and analyzes the recordings with a data-compression algorithm. In a groundbreaking study, 102 healthy subjects and 48 responsive but brain-injured patients were “zapped and zipped” while conscious and while unconscious, yielding a value called the “perturbational complexity index” (PCI). Remarkably, across all 150 subjects, whenever the PCI was above a certain threshold (0.31, as it happens), the person was conscious; whenever it was below, he or she was unconscious.
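To make the “zap and zip” idea concrete, here is a minimal, purely illustrative sketch in Python. It assumes a simulated channel-by-time response matrix, binarizes it, compresses it with the general-purpose zlib library, and normalizes by the entropy of the binary sequence. The published PCI pipeline additionally involves source modeling and statistical thresholding of the evoked response, none of which is reproduced here; the function name toy_pci and every numeric choice below are illustrative assumptions, not the clinical method.

```python
# Toy sketch of "zap and zip": NOT the clinical PCI pipeline, just the
# compression intuition. Assumption: a channel-by-time response matrix
# stands in for the TMS-evoked EEG recording.

import zlib
import numpy as np

def toy_pci(response: np.ndarray) -> float:
    """Compress a binarized channel-by-time response and return a
    normalized complexity score in roughly [0, 1]."""
    # Binarize each sample against the overall median activity.
    bits = (response > np.median(response)).astype(np.uint8).flatten()

    # Compressed length approximates the Lempel-Ziv complexity of the
    # spatiotemporal pattern; packbits stores 8 samples per byte.
    compressed_len = len(zlib.compress(np.packbits(bits).tobytes(), level=9))

    # Normalize by the entropy bound of the binary sequence, so that a
    # maximally unpredictable pattern scores near 1.
    p1 = bits.mean()
    if p1 in (0.0, 1.0):
        return 0.0  # constant response: no complexity at all
    entropy = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
    return compressed_len / (entropy * bits.size / 8)

rng = np.random.default_rng(0)
structured = np.tile(np.sin(np.linspace(0, 8, 300)), (64, 1))  # repetitive
noisy = rng.normal(size=(64, 300))                             # unstructured
print(f"structured: {toy_pci(structured):.2f}, noisy: {toy_pci(noisy):.2f}")
```

Note one caveat this toy makes visible: pure noise is incompressible and so scores high, which is precisely why the real method first isolates the statistically significant, deterministic part of the brain's response before compressing it.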

Massimini then tested his consciousness meter on patients who were either minimally conscious or unresponsive but wakeful. Here the results were more ambiguous. Almost all of the minimally conscious subjects were correctly classified as conscious. Of 43 unresponsive but wakeful patients, with whom communication was impossible, 34 fell below the threshold of consciousness, as expected. But nine people, terrifyingly, showed a complex pattern of brain activity above the threshold. They might have been experiencing the world yet been unable to tell anyone that they were still there, as if in a diving bell at the bottom of the sea.

Massimini's test is important because it provides the first substantial experimental support for integrated information theory (IIT), a theory of consciousness developed by neuroscientist and psychiatrist Giulio Tononi at the University of Wisconsin. In the 20 years since Tononi began working on IIT, the theory has prompted an enormous literature and generated passionate, often acrimonious debate. Christof Koch, chief scientist at the Allen Institute for Brain Science, says IIT is the “only really promising fundamental theory of consciousness.” But Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin, believes the theory is “demonstrably wrong, for reasons that go to its core.”

IIT doesn’t try to answer the hard problem. Instead, it does something more subtle: it posits that consciousness is a feature of the universe, like gravity, and then tries to solve the pretty hard problem of determining which systems are conscious, via a mathematical measure of consciousness represented by the Greek letter phi (Φ). Until Massimini’s test, which was developed in partnership with Tononi, there was little experimental evidence for IIT, because calculating the phi value of a human brain, with its tens of billions of neurons, was impractical. PCI is “a poor man’s phi,” according to Tononi. “The poor man’s version may be poor, but it works better than anything else. PCI works in dreaming and dreamless sleep. With general anesthesia, PCI is down, and with ketamine it’s up again. Now we can tell, just by looking at the value, whether someone is conscious or not. We can assess consciousness in nonresponsive patients.”

As an idea, IIT is audacious. It sets aside the meaning of information and instead quantifies how systems integrate it. The theory proposes five axioms, properties of experience itself, along with corresponding postulates that physical systems must satisfy to support sentience. Briefly, the more differentiated the information in a system and the more tightly those bits are fused into a whole, the higher the system’s information integration, and the greater its phi and hence its consciousness. Treating information integration as the key to consciousness makes intuitive sense. Remember a first kiss: the touch of her lips, the smell of her skin, the light in the room, the feel of your heart racing. You were supremely conscious in that moment, because your level of information integration was very high.
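To see what “integration” means computationally, here is a deliberately toy sketch in Python, assuming a drastic simplification of phi: score a joint distribution over binary units by the mutual information across its weakest bipartition, the cut that destroys the least information. The full IIT calculus is defined over cause-effect structures and is far more elaborate; the names toy_phi, cut_mi, and marginal below are illustrative inventions, not the theory's official algorithm.

```python
# Toy "integration" score: the information lost at the weakest cut.
# This is a simplification for intuition only, not IIT's actual phi.

from itertools import combinations, product
import math

def marginal(joint, keep):
    """Marginalize a joint distribution over binary units down to `keep`."""
    out = {}
    for state, p in joint.items():
        sub = tuple(state[i] for i in keep)
        out[sub] = out.get(sub, 0.0) + p
    return out

def cut_mi(joint, part_a, part_b):
    """Mutual information between the two parts of a bipartition."""
    pa, pb = marginal(joint, part_a), marginal(joint, part_b)
    mi = 0.0
    for state, p in joint.items():
        if p == 0.0:
            continue
        qa = pa[tuple(state[i] for i in part_a)]
        qb = pb[tuple(state[i] for i in part_b)]
        mi += p * math.log2(p / (qa * qb))
    return mi

def toy_phi(joint, n):
    """Minimum, over all bipartitions, of the information the cut destroys."""
    units = range(n)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part_a in combinations(units, k):
            part_b = tuple(i for i in units if i not in part_a)
            best = min(best, cut_mi(joint, part_a, part_b))
    return best

# Two perfectly correlated units: every cut loses information, so phi > 0.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent units: the cut between them loses nothing, so phi = 0.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}
print(toy_phi(integrated, 2))   # 1.0 bit
print(toy_phi(independent, 2))  # 0.0 bits
```

The minimum over cuts captures the intuition above: a system earns a high score only if no way of slicing it apart leaves its information intact.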

The great strength of IIT is that it is mostly consistent with common sense, in contrast to competing theories, which often propose deeply weird solutions (such as denying that we are conscious at all). IIT explains why an insult to the cerebellum, which helps coordinate movement, causes ataxia, slurred speech, or a stumbling walk but no diminishment of consciousness. That is because the cerebellum, unlike the neocortex, doesn’t integrate its internal states, even though it is home to 69 billion of the 86 billion nerve cells in the human brain. IIT tells us that human beings in deep sleep or under general anesthesia aren’t conscious, because information integration has broken down. And IIT is consistent with how life feels: consciousness is graded over a lifetime, blooming in adulthood and withering with age, drugs, or alcohol, whenever our capacity to integrate information falters.

But the theory has its surprises too. Because IIT proposes that consciousness is a fundamental property of the universe and that any system that integrates information is to some degree sentient, it follows that things we do not think of as conscious at all, such as a light-emitting diode or the clock in a computer, will possess nonzero phi values, akin to temperatures just barely above absolute zero. This seems wrong, but Tononi promises that an upcoming paper will show that purely feed-forward computers, even artificial intelligences that employ deep learning, would not be conscious. “The phi of a digital computer would be zero, even if it were talking like me,” Tononi says. To make a conscious AI, Christof Koch speculates, would require a different computer architecture, with feedback mechanisms that promote information integration, such as a neuromorphic computer. Other things that have zero phi, according to Tononi, include collectives of sentient individuals, such as corporations or the United States.

Critics of IIT raise a range of objections. “It’s promising, but Tononi doesn’t know if his axioms and postulates are complete,” according to David Chalmers. Others object to the theory’s creeping panpsychism, the ancient belief that everything material, however small, has some consciousness, including the universe itself, the anima mundi. Scott Aaronson complains, “Tononi and his followers identify consciousness with information integration, or what a mathematician would call ‘graph expansion.’ That doesn’t work, for the fundamental reason that you can have information integration without any hint of anything that anyone who wasn’t already sold on IIT would want to call intelligence, let alone consciousness.”

Giulio Tononi is undeterred. He believes that IIT’s evasion of the hard problem, by beginning with the brute fact of consciousness, is the only way to explain sentience. “Most things are not conscious,” he says. “Some things are trivially conscious. Animals are conscious, somewhat. But the things that are certainly conscious are ourselves: not our component parts, not our bodies or neurons, but us as systems.” What’s next for IIT, according to Koch, is more work like Massimini’s, with more kinds of humans in many different conditions, as well as animals and machines: “Experiment, experiment, experiment.”
