A postscript to yesterday’s post on animal sentience. Some readers took me to be proposing that we drop the concept of sentience and stop asking which animals are sentient and which aren’t. Since it’s generally agreed that sentient creatures have ethical claims on us that non-sentient ones don’t, such a policy might have worrying ethical implications.
That wasn’t my intention. I no more want to eliminate the notion of sentience than you, in my imagined conference scenario, would want to eliminate the notion of life. You would want the conference participants to revise their conception of life — to start thinking of it as a cluster of biological processes rather than as a hidden essence that is only contingently connected to those processes. Substituting ‘psychological’ for ‘biological’, that’s what I want to do with consciousness.
Revising our conceptions of life and sentience in these ways would not prevent us from continuing to ask about the distribution of those properties in the natural world, and it would, in fact, make the task much more tractable. Nor would it prevent us from continuing to regard life and sentience as ethically significant. (Indeed, the revised conceptions would provide a much better foundation for ethical concern than the old ones, which treated those features as mysterious essences that might have no causal role in the physical world.)
This isn’t to say that the revisions would have no consequences. For one thing, they would change the way we frame questions about the distribution of life and sentience. Instead of asking ‘Is this creature alive/conscious?’, we would ask ‘Which aspects of the cluster of biological/psychological functions constitutive of life/consciousness does this creature possess, and to what degree?’.
Focusing on sentience, we would cease to think of consciousness as a binary feature and cease to ask whether or not a creature possesses it tout court. Instead, we would think of sentience as a multi-dimensional space of possibilities, whose axes correspond to different psychological sensitivities and abilities, and ask whereabouts in this space a creature is located. In short, we would replace a neat but intractable metaphysical question with a messy but tractable empirical one.
We would also change how we approach the ethical issues. If sentience were binary, then our task would be to divide animals into the sentient sheep, who have an ethical claim on us, and the non-sentient goats, who don't. But if it's a multi-dimensionally graded feature, then we would need to adopt a much more nuanced approach. We would need to determine where each creature was located in sentience space and ask what kind of ethical claims creatures in that region have on us, given their characteristic sensitivities and abilities. Instead of asking, 'Should we care about this creature?' we would ask, 'How should we care about this creature?'
I think that would be progress.