Accelerating Research on Consciousness

Back in December, psychologist and author Christian Jarrett got in touch to ask what I thought about the new project “Accelerating Research on Consciousness” organised by the Templeton World Charity Foundation. See this news story for more information about the project. Christian incorporated some of my comments into an article for BBC Focus magazine (which I recommend) but I thought I’d post my full reply here, in case anyone is interested. Here it is.

I have mixed feelings about the project. I’m delighted to see more funding for experimental work on consciousness. The data collected will undoubtedly be useful. I have worries, however. It looks like the project will focus on explaining consciousness in the phenomenal sense. That is, the organizers and participants conceive of conscious states as essentially subjective ones, involving awareness of phenomenal properties or qualia (the private mental ‘feel’ or ‘what-it’s-likeness’ of experience). If that’s right, then I am dubious of the chances of making decisive progress.

To begin with, it’s hard to see how one could explain phenomenal properties in terms of brain processes. The two things are just too different. (This is the so-called ‘hard problem’ of consciousness.) The most we can hope to do is to find correlations between brain processes and phenomenal properties. And even then there’s a methodological problem. For there can be no objective test for the presence of essentially subjective properties. The best we can do is to test for objective indications of their presence, such as the subject’s reports and reactions. And this means that tests of correlation hypotheses can never be decisive. Suppose theory A says that conscious state C occurs when brain region N1 is active, whereas theory B says that N1 isn’t sufficient on its own and that brain region N2 needs to be active as well. And suppose we run some experiments and find that participants report C when both N1 and N2 are active but not when only N1 is. Does this prove that theory A is wrong and theory B right? No. It might be that N1 is sufficient for C, but that N2 is needed to enable us to report it. The same problem will arise if we try to test for nonverbal indications of C. Again, how do we tell which brain states are necessary for the conscious state itself and which are necessary for producing the behavioural indications of it? Since there is no way of directly testing for subjective properties, we can never definitively rule out any theory.

In short, so long as we focus on phenomenal consciousness, we’re never going to have decisive tests of our theories. The moral I draw is that we shouldn’t focus on phenomenal consciousness. In fact, I believe that we do not have phenomenal consciousness; it’s a kind of introspective illusion, which reflects the limited access we have to our own mental processes. (I call this view ‘illusionism.’) The real task is to explain our intuitions about phenomenal consciousness — why we think we possess it.

As regards the theories currently being tested, I am very sceptical of Integrated Information Theory (IIT). It is intended as a theory of phenomenal consciousness, so the worries I’ve just mentioned apply, but even as a theory of that, IIT is implausible. All kinds of things can have a rich informational structure in the relevant sense, so the theory has the consequence that inanimate objects can be phenomenally conscious. Even a blank wall could be.

I am much more sympathetic to Global Workspace theory, though I think it should be construed as a theory of access consciousness — of the awareness of information in a functional sense — rather than phenomenal consciousness. Moreover, it needs to be supplemented with some account of why we think we have phenomenal consciousness.

As for what I’d like to see next: Unsurprisingly, I’d like to see the project test illusionist theories of consciousness, which focus on explaining our intuitions about phenomenal consciousness. These do not face the problems I’ve mentioned, and they offer a promising line of research. It’s early days yet, but such theories are being developed. A good example is the Attention Schema Theory proposed by the Princeton neuroscientist Michael Graziano and his colleagues.

The bottom line, then, is that the funding for experimental work is welcome and the data gathered will be useful, but the project is unlikely to settle anything until we have a better conception of exactly what it is we are trying to explain.

Bright Shiny Colours

What are colours? My view is that they are properties of surfaces in the world around us — albeit complex gerrymandered ones, which can be picked out only by reference to our reactions to them. Blue things are things that evoke a certain distinctive cluster of reactive dispositions in us. Note that I do not say that they are ones that produce blue sensations in us. I don’t think that experiencing blue involves entertaining a mental version of blueness — a blue quale or phenomenal property.

Where then is the quality of blueness? It’s not out there in the world. Out there, there’s just a surface with a microstructure that reflects certain wavelengths of electromagnetic radiation. And I’ve denied that there is any blue quality in our minds. So where is the blueness of the blue?

My answer is that it is not really anywhere. It’s a property that our minds misrepresent external objects as having. However, it’s a property that corresponds to, and carries information about, something real and important — namely, the affordances of the objects in question. That needs a lot of unpacking and qualification, but the general idea is this. We are tuned up, by biological evolution, cultural evolution, and personal experience, to track worldly properties that it’s useful for us to notice. Such properties afford us opportunities for action in various ways; they have specific affordances. An object’s affordances are reflected in the suite of reactive dispositions its perception triggers in us — the suite of beliefs, expectations, associations, emotions, priming effects, and so on.

Now my suggestion is that the human brain monitors its own reactive dispositions and generates schematic representations of them, which are linked to its representations of the objects that triggered them. The upshot of this is that we experience the world as being metaphorically coloured by our reactions to it. We experience objects as having a distinctive but ineffable significance for us, which is a marker of their affordances. This is what we call their quality or feel. The blueness of blue is a distorted representation of the affordances it presents, represented as a property of the object itself.

That’s still very schematic, but a little example may help. Consider shiny, metallic colours, such as silver and gold. These seem to have a distinctive feel to them, and as a child I was very puzzled as to where they fitted into the visible spectrum. But, of course, they are not really different colours. Shiny things are just regularly coloured things whose brightness (and colour if they are very shiny) varies markedly with viewing angle. What gives them their distinctive ‘feel’ is precisely the affordances they present. We expect them to change in a distinctive way as we move in relation to them. The ‘feel’ of metallic colour just is the expectation of this effect.
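For readers who would like the optics made concrete, here is a minimal sketch, borrowing the standard Phong-style reflection model from computer graphics. It is only an illustration of the physical point, not part of the argument, and the parameter values are arbitrary: a matte surface scatters light fairly evenly, so its brightness barely changes as the viewer moves, whereas a shiny surface concentrates light around the mirror direction, so its brightness falls off sharply with viewing angle.

```python
import math

def phong_brightness(view_angle_deg, k_diffuse, k_specular, shininess=20):
    """Toy Phong-style brightness for a surface lit along its normal,
    seen from a viewer view_angle_deg away from the mirror direction."""
    # Diffuse term: independent of where the viewer stands.
    diffuse = k_diffuse * 1.0  # light is along the normal, so N.L = 1
    # Specular term: falls off sharply as the viewer moves away from the
    # mirror direction; this is what gives shiny surfaces their angle-dependence.
    specular = k_specular * max(0.0, math.cos(math.radians(view_angle_deg))) ** shininess
    return diffuse + specular

for angle in (0, 10, 30, 60):
    matte = phong_brightness(angle, k_diffuse=0.8, k_specular=0.1)
    metallic = phong_brightness(angle, k_diffuse=0.1, k_specular=0.9)
    print(f"{angle:>2} deg  matte={matte:.2f}  metallic={metallic:.2f}")
```

With these (arbitrary) settings, the metallic surface drops from a brightness of 1.0 head-on to about 0.1 at 60 degrees off the mirror direction, while the matte surface hardly changes. It is this steep angle-dependence, I suggest, that the ‘metallic feel’ anticipates.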

A postscript: Another illustration of this is afforded by Gregory Thielker’s paintings of scenes through rain-spattered glass. In me, these create a powerful response (‘feel’, if you like). Doubtless, this is in part because they evoke memories of glum hours spent in traffic during rainy commutes. But I think it also reflects the way they trigger strong expectations that the scene will morph and distort in a distinctive way as the water drips or I move my head.

Illusion or identity?

Illusionists believe that consciousness involves no properties that are not detectable and fully describable by third-person science. Any other properties we think are involved are illusory. Suppose that’s right. Still, why should it follow that phenomenal properties are illusory? Why not say that they are properties that are detectable and fully describable by third-person science? It’s true (the objection continues) that we think of phenomenal properties as ones that present a problem for science — that pose a hard problem — but it doesn’t follow that they really do present one. Maybe we are just wrong about them.

Suppose that phenomenal concepts do in fact track completely unmysterious brain properties, which for some reason we mistakenly think of as nonphysical. There are many candidate explanations of why we might do this. If that’s the case (and illusionists don’t deny the possibility), then wouldn’t it be better to say that phenomenal properties are real but different from what we thought?

Here’s my answer. Maybe we could say that. It’s a revise-or-eliminate situation, and there is no simple procedure for determining the best way to go. But here are some reasons for rejecting the revisionary route.

First, it would invite confusion. The concept of the phenomenal carries a lot of connotations that physicalists must reject — assumptions about the reliability of introspection, intuitions about well-known thought experiments, associations with dualist notions such as sense data, and so on. Using a term with all this theoretical baggage is not the most perspicuous way of presenting a physicalist theory of consciousness.

Second, it would be misleading. The notion of phenomenal consciousness has become bound up with that of the hard problem — a problem that is supposed to be both substantive (there’s a real thing that needs explaining) and qualitatively different from ‘easy’ problems that can be solved by cognitive science. To offer a theory of phenomenal consciousness is to suggest that one has solved this hard problem, and physicalists shouldn’t do that. For physicalists, there is no hard problem, only the problem of explaining why there seems to be one.

Third, it would be tedious. In theoretical work, we’d have to laboriously disinfect phenomenal concepts before use, explicitly disavowing all their theoretical accretions.

Fourth, it would be pointless. After disinfection, we’d be left with nothing more than a bare demonstrative or quotational device, equivalent to ‘whatever this is’, applied introspectively. It’s not clear that this would pick out something determinate or theoretically interesting. We’d be gesturing at the whole complex perceptual-cum-reactive state triggered by the current stimulus, and without further specification it’s doubtful that the gesture would pick out a clear target for scientific investigation. (By contrast, gesturing at the supposed qualitative aspect of the state would narrow down the target, but only to something that physicalists must say is illusory.)

Fifth, it’s restricting. Physicalists need phenomenal concepts in their old theoretically laden senses in order to describe how people mistakenly think of consciousness (‘It seems that experiences have a phenomenal aspect as well as a functional one’). Compare the term ‘witch’. If we revise it to mean female naturopath, then it becomes harder to express what mediaeval people thought. After all, they were right to think that there were witches in that sense. Of course, this is only a linguistic problem and it could be solved by paraphrase, but it’s a consideration.

In the end, the concept of the phenomenal is too compromised to be useful to science. As Daniel Dennett says in his Consciousness Explained, let’s cut the tangled kite string and start over. Phenomenal properties are illusory.

Beetles and consciousness

‘Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle.’ — Ludwig Wittgenstein, Philosophical Investigations, Sec. 293.

Behaviourism: People say there’s a beetle inside the box.

Cosmopsychism: We are all inside the beetle in the box.

Eliminativism: There’s no beetle in the box.

Epiphenomenalism: There’s a beetle in the box but it’s harmless.

Emergentism: If the box is fancy enough, a beetle will appear inside it.

First-order representationalism: The beetle in the box is transparent.

Higher-order thought theory: When you think about the box, a beetle appears inside it.

Higher-order perception theory: When you look at the box, a beetle appears inside it.

Illusionism: It only seems as if there’s a beetle in the box.

Interactionism: There’s a beetle in the box and it bites.

Materialism: There’s a beetle in my brain.

Mysterianism: I’ll never understand how the beetle got into the box.

New physics: The beetle got into the box through microtubules.

Nonreductive materialism: There’s a beetle in my brain but it’s hiding.

Panpsychism: Electrons have tiny boxes with tiny beetles in them.

Property dualism: The beetle in the box is made out of ectoplasm.

Quietism: Beetle? Box?

Self-representationalism: When you put the box in front of a mirror, a beetle appears inside it.

Imagine

With apologies to John Lennon

Imagine there’re no qualia
It’s easy if you try
No feel or what-it’s-likeness
Just plain old cog sci

Imagine all the zombies
Being just like us

Imagine there’re no inverts
It isn’t hard to do
Nothing for Mary to learn
And no hard problem, too

Imagine all the people
Being illusionist

You may say I’m a quiner
But there’s nothing wrong with that
I hope someday you’ll join us
And learn what it’s like to be a bat.

Something that it is like to be

‘[F]undamentally an organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism.’

So writes Thomas Nagel in his famous 1974 paper ‘What is it like to be a bat?’ (p.436). It is a compelling thought and one that seems self-evidently true. (I remember coming up with the same idea when I was a teenager and thinking it a great insight, and I’m sure many others have had the same experience.) But is Nagel’s claim correct or might it be seductively wrong? It depends, I think, on how we interpret it. Here are two things it might mean.

The first is that having conscious experiences involves having a certain kind of introspective self-awareness — an awareness of one’s own mental responses to the world. For you to be conscious is for you to know what you are like as you respond to the world, as well as what the world is like as it affects you. I’ll call this introspective subjectivity. Introspective subjectivity, we might say, stresses the ‘like’ part of ‘what it is like to be’.

The second thing Nagel might mean is that to have conscious experiences is to have a form of immediate inner awareness that exists simply in virtue of being the thing one is and that is not dependent upon introspective mechanisms. I’ll call this intrinsic subjectivity. We might say that this stresses the ‘be’ part of ‘what it is like to be’.

These two kinds of subjectivity are very different. Introspective subjectivity is not essentially private. With the right apparatus, another person might monitor the same internal states my introspective mechanisms do and so share my introspective awareness. But intrinsic subjectivity cannot be shared. The only way someone could share what it’s like to be me in the intrinsic sense is to be me, or perhaps a duplicate of me.

Now there doesn’t seem to be any special difficulty in understanding how a creature could possess introspective subjectivity. It would just need to have suitable introspective mechanisms targeting its own internal responses and hooked up in the right way to the rest of its cognitive system. But intrinsic subjectivity looks like a complete mystery. How does this inner awareness arise? What exactly is the subject of it? Which things have it? What does it do? How can we even investigate it? These look like, well, hard problems.

In the literature on consciousness these two kinds of subjectivity aren’t always clearly distinguished. Some theories — most obviously panpsychist ones — are plainly theories of intrinsic subjectivity. If there is something it is like to be a rock, it’s not because the rock is capable of introspection. But other theories look like theories of introspective subjectivity. Higher-order representational theories, for example, attempt to explain consciousness in terms of the internal monitoring of experience. Yet these theories are often discussed as if they were alternative accounts of the same thing.

Nagel’s claim has become the standard starting point for theories of consciousness, but it doesn’t identify a unique explanandum and it has sent researchers off down very different paths. In my view, it is immensely plausible to think that conscious experience involves introspective subjectivity, and developing theories of introspective subjectivity should be a major research programme for cognitive science. But the pursuit of theories of intrinsic subjectivity is, I fear, misguided and futile.

Is the Hard Problem an illusion?

Note: This is a revised version of a letter I sent to The Guardian, responding to a letter by Philip Goff, which itself commented on an article on consciousness by Oliver Burkeman. The letter was deemed too long for publication in the paper, so I am posting it here instead. It is written for a general audience.

As a member of the Daniel Dennett camp on the Greenland consciousness cruise referred to in Oliver Burkeman’s article, I should like to respond to Philip Goff’s letter of 28 January 2015. Goff advocates a radical solution to the Hard Problem of explaining how consciousness fits into the natural world. Consciousness, he argues, is not a physical process, but an intrinsic feature of all physical reality. Consciousness is not fundamentally material; rather, matter is fundamentally conscious. A consequence of this view is that everything is conscious to some degree: trees, stones, atoms, quarks — all have a little bit of consciousness. This panpsychist position offers a neat solution to the problem, and Goff argues for it with intelligence and elegance, but I find it hard to take it seriously.

I do agree with Goff on one important point: Consciousness, as we ordinarily conceive of it, cannot be explained by the physical sciences. The Hard Problem, as posed by David Chalmers, can’t be solved by cognitive science. Goff draws the moral that consciousness is not physical in the ordinary sense. I draw the moral that we are conceiving of consciousness wrongly. We are mistaken about what consciousness is.

Our conception of consciousness is derived from introspection — from mentally ‘looking inwards’ at our experiences. When we do this, our experiences seem to have a private ‘phenomenal quality’ to them (think of the sensation of seeing a vibrant green leaf, or smelling coffee grounds, or running one’s fingers over a silk scarf). These phenomenal qualities (or ‘qualia’) seem almost magical and utterly different from the mundane physical properties of our brains.

But maybe that’s an illusion. Maybe when we introspect, what we are aware of are certain patterns of brain activity that seem magical and nonphysical but aren’t really. Moreover, as another cruise participant, Nicholas Humphrey, argued, maybe these brain processes were shaped by evolution precisely to seem magical to introspection. In his 2011 book Soul Dust Humphrey argued that evolution adapted pre-existing neural systems to create an inner ‘magic show’ which carries immense adaptive benefits — enriching our lives and our experience of the world, enhancing our sense of self, and deepening our engagement with each other. In short, maybe evolution has hardwired us to think that we have a magical inner life, and the problem of consciousness is a benign trick that nature has played on us.

Most people, I find, think this suggestion is just as crazy as panpsychism. If there’s one thing we are absolutely certain of (the argument goes) it’s our experience. We may doubt that there is a green patch in front of us, but we can’t doubt that we are having an experience with a green phenomenal quality. This takes us back to the origins of the Hard Problem in Descartes’ sceptical thought experiment mentioned in Oliver Burkeman’s article. There’s something right about this. If we suspect that our senses are misleading us about the external world, then we retreat to more cautious and secure claims about how things seem to us. But (I would argue) such claims should not be construed as infallible reports of the nature of our experiences. Being cautious about the external world doesn’t make us infallible about the interior one. We may be sure that we’re introspecting something, but can we rule out the possibility that we’re mistaken about its nature, just as we may be about the nature of external things? After all, to the spectator a good illusion of something is indistinguishable from the thing itself.

Of course, it’s not so simple to solve the problem of consciousness. For one thing, we need to explain what it means to say that experiences seem to have phenomenal qualities. (It better not mean that they generate further experiences which really do have phenomenal qualities. Otherwise we’d merely have moved the Hard Problem back a step.) But thinking of consciousness as involving an illusion changes the questions we have to answer, and does so, I believe, in a productive way.

On the cruise I proposed the name ‘illusionism’ for the sort of position I have been describing, and the term ‘the Illusion Problem’ for the problem of explaining how the consciousness illusion is created. (I wasn’t claiming to have originated the position or the problem; Daniel Dennett has advocated illusionism for decades, and Nicholas Humphrey has done pioneering work on the Illusion Problem.) For me, the attraction of illusionism is that it allows us to give full weight to the intuitions that motivate views like Goff’s — consciousness really does seem weird — without requiring us to endorse a weird metaphysics. Maybe it’s time to stop banging our heads against an illusory Hard Problem and start trying to solve the hard-ish but solvable Illusion Problem?


The talks from the consciousness cruise, including Jesse Prinz’s introduction to my paper on illusionism, my reply, and the following discussion, were videoed by the Moscow Center for Consciousness Studies and can be viewed on the centre’s YouTube channel. Here is the full playlist.

Red pill or blue? Qualia or qualia representations?

Assume for the sake of argument that (1) qualia are real and nonphysical, (2) the physical world is closed under causation (and there’s no overdetermination), and (3) apart from qualia, the mind is physical.

Now, you have experiences with qualia. But this isn’t all. You are also aware of having qualia. You can attend to them, think about them, recall them, and respond to them. And since (given our assumptions) the qualia themselves don’t have any causal effects on you, this suggests that you have representations of your qualia. You represent your experiences as having qualia, and these representations do the causal work. Your awareness of your qualia and your responses to them are mediated by qualia representations. (I assume these are fine-grained analogue representations of some kind. You can detect and respond to changes in your qualia that you can’t conceptualize.) The representations aren’t actually caused by the qualia, of course, any more than the other effects are. They are physical states of your brain and are caused by prior brain events, but things are somehow set up so that they track your qualia perfectly.

Now consider your zombie twin – an exact duplicate of you minus the qualia. Since it is a physical copy of you, this creature will have the same qualia representations you do, and these representations will have the same effects on it as yours do on you. Call this creature a Q-zombie.

Next consider another type of zombie. This one is a physical and phenomenal duplicate of you, except for the qualia representations. It has the same qualia you do, but no representations of them; the brain circuits involved have been fried. Call this creature an R-zombie (for Representational zombie).

The Q-zombie will take itself to have qualia just like yours, and it will display the same qualia-related sensitivities, thoughts, and responses you do. It will have the same reactions to pain and pleasure, the same sensitivity to colours, sounds, and smells, and the same beliefs about the character of its conscious experiences, even though it has no qualia at all.

The R-zombie, on the other hand, will behave – well, like a zombie. The absence of qualia representations will have drastic consequences for its mental life and behaviour. It will not attend to its qualia, or think about them, or respond to them. It will exhibit various ‘blindsighted’ behaviours, reacting unconsciously to external stimuli, but it will show no sign of having conscious experiences and no awareness of pain, pleasure, colour, smell, or any other phenomenal property — even though it does in fact have exactly the same qualia you do.

Now here’s the punchline. Something really unpleasant is going to happen to you – something that will cause a lot of pain (and pain representations). There is an anaesthetic on offer, however. In fact, there are two drugs available: blue pills and red pills. A blue pill will turn you temporarily into a Q-zombie and a red pill will turn you temporarily into an R-zombie.

Which pill would you take? And would you have any trouble deciding?

Image credit: Red Pill or Blue Pill? by Tomaž Štolfa