Something that it is like to be

‘[F]undamentally an organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism.’

So writes Thomas Nagel in his famous 1974 paper ‘What is it like to be a bat?’ (p.436). It is a compelling thought and one that seems self-evidently true. (I remember coming up with the same idea when I was a teenager and thinking it a great insight, and I’m sure many others have had the same experience.) But is Nagel’s claim correct or might it be seductively wrong? It depends, I think, on how we interpret it. Here are two things it might mean.

The first is that having conscious experiences involves having a certain kind of introspective self-awareness — an awareness of one’s own mental responses to the world. For you to be conscious is for you to know what you are like as you respond to the world, as well as what the world is like as it affects you. I’ll call this introspective subjectivity. Introspective subjectivity, we might say, stresses the ‘like’ part of ‘what it is like to be’.

The second thing Nagel might mean is that to have conscious experiences is to have a form of immediate inner awareness that exists simply in virtue of being the thing one is and that is not dependent upon introspective mechanisms. I’ll call this intrinsic subjectivity. We might say that this stresses the ‘be’ part of ‘what it is like to be’.

These two kinds of subjectivity are very different. Introspective subjectivity is not essentially private. With the right apparatus, another person might monitor the same internal states my introspective mechanisms do and so share my introspective awareness. But intrinsic subjectivity cannot be shared. The only way someone could share what it’s like to be me in the intrinsic sense is to be me, or perhaps a duplicate of me.

Now there doesn’t seem to be any special difficulty in understanding how a creature could possess introspective subjectivity. It would just need to have suitable introspective mechanisms that target its own internal responses and are hooked up in the right way to the rest of its cognitive system. But intrinsic subjectivity looks like a complete mystery. How does this inner awareness arise? What exactly is the subject of it? Which things have it? What does it do? How can we even investigate it? These look like, well, hard problems.

In the literature on consciousness these two kinds of subjectivity aren’t always clearly distinguished. Some theories — most obviously panpsychist ones — are plainly theories of intrinsic subjectivity. If there is something it is like to be a rock, it’s not because the rock is capable of introspection. But other theories look like theories of introspective subjectivity. Higher-order representational theories, for example, attempt to explain consciousness in terms of the internal monitoring of experience. Yet these theories are often discussed as if they were alternative accounts of the same thing.

Nagel’s claim has become the standard starting point for theories of consciousness, but it doesn’t identify a unique explanandum and it has sent researchers off down very different paths. In my view, it is immensely plausible to think that conscious experience involves introspective subjectivity, and developing theories of introspective subjectivity should be a major research programme for cognitive science. But the pursuit of theories of intrinsic subjectivity is, I fear, misguided and futile.


Is the hard problem an illusion?

[Image: An oddly shaped iceberg]

Note: This is a revised version of a letter I sent to The Guardian, responding to a letter by Philip Goff, which itself commented on an article on consciousness by Oliver Burkeman. The letter was deemed too long for publication in the paper, so I am posting it here instead. It is written for a general audience.

As a member of the Daniel Dennett camp on the Greenland consciousness cruise referred to in Oliver Burkeman’s article, I should like to respond to Philip Goff’s letter of 28 January 2015. Goff advocates a radical solution to the Hard Problem of explaining how consciousness fits into the natural world. Consciousness, he argues, is not a physical process, but an intrinsic feature of all physical reality. Consciousness is not fundamentally material; rather, matter is fundamentally conscious. A consequence of this view is that everything is conscious to some degree: trees, stones, atoms, quarks — all have a little bit of consciousness. This panpsychist position offers a neat solution to the problem, and Goff argues for it with intelligence and elegance, but I find it hard to take seriously.

I do agree with Goff on one important point: Consciousness, as we ordinarily conceive of it, cannot be explained by the physical sciences. The Hard Problem, as posed by David Chalmers, can’t be solved by cognitive science. Goff draws the moral that consciousness is not physical in the ordinary sense. I draw the moral that we are conceiving of consciousness wrongly. We are mistaken about what consciousness is.

Our conception of consciousness is derived from introspection — from mentally ‘looking inwards’ at our experiences. When we do this, our experiences seem to have a private ‘phenomenal quality’ to them (think of the sensation of seeing a vibrant green leaf, or smelling coffee grounds, or running one’s fingers over a silk scarf). These phenomenal qualities (or ‘qualia’) seem almost magical and utterly different from the mundane physical properties of our brains.

But maybe that’s an illusion. Maybe when we introspect, what we are aware of are certain patterns of brain activity that seem magical and nonphysical but aren’t really. Moreover, as another cruise participant, Nicholas Humphrey, argued, maybe these brain processes were shaped by evolution precisely to seem magical to introspection. In his 2011 book Soul Dust, Humphrey argued that evolution adapted pre-existing neural systems to create an inner ‘magic show’ which carries immense adaptive benefits — enriching our lives and our experience of the world, enhancing our sense of self, and deepening our engagement with each other. In short, maybe evolution has hardwired us to think that we have a magical inner life, and the problem of consciousness is a benign trick that nature has played on us.

Most people, I find, think this suggestion is just as crazy as panpsychism. If there’s one thing we are absolutely certain of (the argument goes) it’s our experience. We may doubt that there is a green patch in front of us, but we can’t doubt that we are having an experience with a green phenomenal quality. This takes us back to the origins of the Hard Problem in Descartes’ sceptical thought experiment mentioned in Oliver Burkeman’s article. There’s something right about this. If we suspect that our senses are misleading us about the external world, then we retreat to more cautious and secure claims about how things seem to us. But (I would argue) such claims should not be construed as infallible reports of the nature of our experiences. Being cautious about the external world doesn’t make us infallible about the interior one. We may be sure that we’re introspecting something, but can we rule out the possibility that we’re mistaken about its nature, just as we may be about the nature of external things? After all, to the spectator a good illusion of something is indistinguishable from the thing itself.

Of course, it’s not so simple to solve the problem of consciousness. For one thing, we need to explain what it means to say that experiences seem to have phenomenal qualities. (It better not mean that they generate further experiences which really do have phenomenal qualities. Otherwise we’d merely have moved the Hard Problem back a step.) But thinking of consciousness as involving an illusion changes the questions we have to answer, and does so, I believe, in a productive way.

On the cruise I proposed the name ‘illusionism’ for the sort of position I have been describing, and the term ‘the Illusion Problem’ for the problem of explaining how the consciousness illusion is created. (I wasn’t claiming to have originated the position or the problem; Daniel Dennett has advocated illusionism for decades, and Nicholas Humphrey has done pioneering work on the Illusion Problem.) For me, the attraction of illusionism is that it allows us to give full weight to the intuitions that motivate views like Goff’s — consciousness really does seem weird — without requiring us to endorse a weird metaphysics. Maybe it’s time to stop banging our heads against an illusory Hard Problem and start trying to solve the hard-ish but solvable Illusion Problem?


The talks from the consciousness cruise, including Jesse Prinz’s introduction to my paper on illusionism, my reply, and the following discussion, were videoed by the Moscow Center for Consciousness Studies and can be viewed on the centre’s YouTube channel. Here is the full playlist.

Red pill or blue? Qualia or qualia representations?

Assume for the sake of argument that (1) qualia are real and nonphysical, (2) the physical world is closed under causation (and there’s no overdetermination), and (3) apart from qualia, the mind is physical.

Now, you have experiences with qualia. But this isn’t all. You are also aware of having qualia. You can attend to them, think about them, recall them, and respond to them. And since (given our assumptions) the qualia themselves don’t have any causal effects on you, this suggests that you have representations of your qualia. You represent your experiences as having qualia, and these representations do the causal work. Your awareness of your qualia and your responses to them are mediated by qualia representations. (I assume these are fine-grained analogue representations of some kind. You can detect and respond to changes in your qualia that you can’t conceptualize.) The representations aren’t actually caused by the qualia, of course, any more than the other effects are. They are physical states of your brain and are caused by prior brain events, but things are somehow set up so that they track your qualia perfectly.

Now consider your zombie twin – an exact duplicate of you minus the qualia. Since it is a physical copy of you, this creature will have the same qualia representations you do, and these representations will have the same effects on it as yours do on you. Call this creature a Q-zombie.

Next consider another type of zombie. This one is a physical and phenomenal duplicate of you, except for the qualia representations. It has the same qualia you do, but no representations of them; the brain circuits involved have been fried. Call this creature an R-zombie (for Representational zombie).

The Q-zombie will take itself to have qualia just like yours, and it will display the same qualia-related sensitivities, thoughts, and responses you do. It will have the same reactions to pain and pleasure, the same sensitivity to colours, sounds, and smells, and the same beliefs about the character of its conscious experiences, even though it has no qualia at all.

The R-zombie, on the other hand, will behave – well, like a zombie. The absence of qualia representations will have drastic consequences for its mental life and behaviour. It will not attend to its qualia, or think about them, or respond to them. It will exhibit various ‘blindsighted’ behaviours, reacting unconsciously to external stimuli, but it will show no sign of having conscious experiences and no awareness of pain, pleasure, colour, smell, or any other phenomenal property — even though it does in fact have exactly the same qualia you do.

Now here’s the punchline. Something really unpleasant is going to happen to you – something that will cause a lot of pain (and pain representations). There is an anaesthetic on offer, however. In fact, there are two drugs available: blue pills and red pills. A blue pill will turn you temporarily into a Q-zombie and a red pill will turn you temporarily into an R-zombie.

Which pill would you take? And would you have any trouble deciding?

[Image: Hand holding red and blue pills]

Image credit: Red Pill or Blue Pill? by Tomaž Štolfa