Indirect realism and idealism are not popular with contemporary academic philosophers (in the 2022 PhilPapers survey, only 6.6% endorsed idealism and only 5.0% sense data theory), and to many — including me — they are profoundly unattractive.
Yet they retain some popularity among a wider public. When discussing philosophy of mind on social media, I frequently find people asserting as an unquestionable truth that the only things we are directly aware of are our own experiences and that all our beliefs about the external world are inferences from beliefs about our experiences. (My response is usually to ask (a) Who or what does ‘we’ refer to in this context? and (b) How do we gain direct access to our experiences? The only answers that work in the context are ‘a soul’ and ‘magic’, or some more nuanced variants of them.)
Why is this? I don’t know (the philosophical arguments are certainly not compelling), and I’d be interested to know what others think. But I suspect that one factor may be the way neuroscientists write about consciousness. They tend to stress that the brain has no direct access to the world it represents. Here’s an example from David Eagleman:
Here’s the key: the brain has no access to the world outside. Sealed within the dark, silent chamber of your skull, your brain has never directly experienced the external world, and it never will … Everything you experience — every sight, sound, smell — rather than being a direct experience, is an electrochemical rendition in a dark theater.
David Eagleman, The Brain: The Story of You (Pantheon, 2017), p. 41
Now, I’m not suggesting that Eagleman or others who write like this endorse indirect realism. They are, I take it, making two perfectly good points: first, which aspects of the world we experience and how we experience them are determined by processes in our brains, which (let us suppose) create internal models of the environment, and, second, the brain has to construct its models from completely uninterpreted data — essentially, spiking patterns in neurons.
But such talk can easily be misinterpreted. A reader might reason as follows: ‘Scientists tell me that my brain doesn’t have access to the world outside. But I have access to something — this rich buzzing, booming world of sensory qualities. So this something must be an internal world, the model constructed by my brain. That’s what I’m directly aware of, and all my beliefs about the outer world are inferences from beliefs about this inner world.’
This is, of course, fallacious. My brain may not have access to the world beyond the skull, but it doesn’t follow that I, the whole organism, do not. The brain’s job (or one of its jobs) is to put the organism to which it belongs into a relation of tight sensitivity to the world around it — the relation we call ‘awareness’ or ‘experience’ — and the models the brain constructs are part of the subpersonal machinery that creates this personal relation.
Of course, our personal awareness of the world is not immediate and perfect; far from it. It is dependent on the hugely complex processes neuroscientists describe, and it is simplified, distorted, and caricatured in ways that reflect our needs as evolved biological organisms. But it is an awareness of the world, not of some inner simulacrum of it.
Indeed, if we were personally aware of an internal model of the world, then, from an explanatory point of view, we’d be back to square one. Neuroscientists would point out that the brain has no direct access to the models created in other brain regions and that it must construct our awareness of them from uninterpreted neural signals — making models of models. Drawing the same fallacious inference, indirect realism would become doubly indirect, then triply, and so on.