Indirect realism and idealism are not popular with contemporary academic philosophers (in the 2022 PhilPapers survey, only 6.6% endorsed idealism and only 5.0% sense data theory), and to many — including me — they are profoundly unattractive.
Yet they retain some popularity among a wider public. When discussing philosophy of mind on social media, I frequently find people asserting as an unquestionable truth that the only things we are directly aware of are our own experiences and that all our beliefs about the external world are inferences from beliefs about our experiences. (My response is usually to ask (a) Who or what does ‘we’ refer to in this context? and (b) How do we gain direct access to our experiences? The only answers that work in the context are ‘a soul’ and ‘magic’, or some more nuanced variants of them.)
Why is this? I don’t know (the philosophical arguments are certainly not compelling), and I’d be interested to know what others think. But I suspect that one factor may be the way neuroscientists write about consciousness. They tend to stress that the brain has no direct access to the world it represents. Here’s an example from David Eagleman:
Here’s the key: the brain has no access to the world outside. Sealed within the dark, silent chamber of your skull, your brain has never directly experienced the external world, and it never will … Everything you experience — every sight, sound, smell — rather than being a direct experience, is an electrochemical rendition in a dark theater.
– David Eagleman, The Brain: Story of You (Pantheon, 2017), p. 41
Now, I’m not suggesting that Eagleman or others who write like this endorse indirect realism. They are, I take it, making two perfectly good points: first, which aspects of the world we experience and how we experience them are determined by processes in our brains, which (let us suppose) create internal models of the environment, and, second, the brain has to construct its models from completely uninterpreted data — essentially, spiking patterns in neurons.
But such talk can easily be misinterpreted. A reader might reason as follows: ‘Scientists tell me that my brain doesn’t have access to the world outside. But I have access to something — this rich buzzing, booming world of sensory qualities. So this something must be an internal world, the model constructed by my brain. That’s what I’m directly aware of, and all my beliefs about the outer world are inferences from beliefs about this inner world.’
This is, of course, fallacious. My brain may not have access to the world beyond the skull, but it doesn’t follow that I, the whole organism, do not. The brain’s job (or one of its jobs) is to put the organism to which it belongs into a relation of tight sensitivity to the world around it — the relation we call ‘awareness’ or ‘experience’ — and the models the brain constructs are part of the subpersonal machinery that creates this personal relation.
Of course, our personal awareness of the world is not immediate and perfect; far from it. It is dependent on the hugely complex processes neuroscientists describe, and it is simplified, distorted, and caricatured in ways that reflect our needs as evolved biological organisms. But it is an awareness of the world, not of some inner simulacrum of it.
Indeed, if we were personally aware of an internal model of the world, then, from an explanatory point of view, we’d be back to square one. Neuroscientists would point out that the brain has no direct access to the models created in other brain regions and that it must construct our awareness of them from uninterpreted neural signals — making models of models. Drawing the same fallacious inference, indirect realism would become doubly indirect, then triply, and so on.
I think an issue here is making sure you are being as vacuous as you intend. As far as I can tell, on this construal of how perception works, indirect perception is logically impossible: it is eliminated by a semantic redefinition, ‘verbal legislation’ as a friend of mine puts it. Basically you are saying that what some define as indirect perceptions are better called direct perceptions, not because of some different proposed mechanism relating them to the verbal reports of perceivers, but because of the overall linguistic aptness of the description. If that’s what you mean, fine, but it makes it a bit tricky to say what substantive points you disagree about with sense data theorists (and possibly idealists).
To try, perhaps vainly, to gesture at what I mean: suppose an astronaut on some alien world had to spend her entire time safely inside a space probe, contenting herself with interpreting instrument readings and then using the probe’s controls to bring about actions by the probe. If I understand your preferred description of such a situation, the astronaut does not directly perceive the world outside her probe, but the space probe system as a whole does indeed perceive the world around it. Fair enough.
To take the example further, what if the probe requires three operators: one, say, interpreting the electromagnetic sensor readings, another interpreting the tactile and acoustic sensors, and a third responsible for actuating the probe’s external responses (moving robot arms, etc.)? The trio work by having conversations about their interpretations of the sensor data and then reaching a consensus that tells the actuator what commands to send. Each operator thus has a model of the information they have access to; they use limited-bandwidth verbal communication to construct a consensus, and this consensus guides the probe. The probe has a set of perceptions and intentions formed out of that conversation plus the other relevant features of the probe’s operation, construction, etc.; this is its perception of the world, constructed from the models of the operators. Fine. The thing is that in terms of individual facts (given actions the probe takes, given reports the probe issues, etc.) no sense data theorist is going to disagree. It’s only about how we sum up those things: does the probe have an aggregate perception of the world, attitudes, etc., whether as a separate thing or as a useful fiction, or not?
The aliens of this world could have a substantive disagreement about the probe: some researchers could contend that the probe is controlled by a central robot brain with no distinct nodes of cognition, while others have hit on the more accurate description that there are three distinct organic nodes of cognition plus a complicated interface between those nodes and the probe’s sensors and its facilities for doing things (arms, propellers, and such). However, the alien scholars might also be stuck on a debate that sounds purely semantic to me: whether the organic nodes in control of the spacecraft are best referred to as sub-personal, or whether each should be described as a distinct person, where this designation changes none of the functional properties of how the probe is understood or predicted to behave.
I realize, at least dimly, that sense data theorists like Ayer can make constructions like the Argument from Illusion which appear substantial, but I think that argument is just a bad argument. It may be more obviously bad if we adopt the semantics you suggest, but I think it’s bad regardless. The Argument from Illusion would seem to imply, in my space probe case, that either the readings of the instruments are non-physical (they are clearly physical) or the models the pilots construct are (let us stipulate that the pilots construct actual physical models in their discussion).
One reason people might sign on to indirect realism is that they take experience to be directly present – subjectively real – and the world it models as thus only indirectly present and real by virtue of being “merely” the representational content of the model. But they wrongly suppose they are in an awareness relation to the model when, as you rightly say, they are in an awareness relation to the world. There’s no awareness relation to the model since that’s what experience is: as conscious subjects we consist of it, and being in an awareness relation to experience would only create a regress of representation, as you point out. That said, the model is itself a sort of simulacrum, as becomes evident when we have dreams, in particular lucid dreams, in which perfectly vivid, coherent experiences don’t line up with the world at all, as they do most of the time when awake. This is what Thomas Metzinger is getting at in talking of consciousness as an ego tunnel and Anil Seth as a controlled hallucination.