Hardism does not have to follow Chalmers all the way to his strong conclusion that consciousness must be introduced as a new fundamental essence; there are potential points of divergence along the way and weak hardist positions that still keep much of the sense of mystery.
For instance, hardists can profess to be agnostic about the possibility of zombies, but insist that they are conceivable.
If zombies are actually impossible (as I will argue) but can nonetheless be imagined without immediately confronting a contradiction, this is a disconcerting situation. The exercise of thinking about them would have to involve misimagining the scope of what is possible, which raises a new challenge. Chalmers’ opponents would need to explain where the misimagining has taken place. How could zombies be impossible? They’re just a manifestation of our own physics, unadorned with mentality, with atoms following legitimate physical laws, and nothing else. If they’re impossible, why does it seem so easy to imagine them?
The resolution of this lesser challenge need not be all that exciting. It could be argued that all thought experiments in relation to consciousness are a waste of time. We are almost certainly not in a position to have reliable intuitions about matters that we openly regard as confusing and mysterious. When we imagine zombies, we might merely be ignoring high-level properties while telling ourselves that they are absent. There is a very fine line between imagining something as truly absent, while accurately seeing all the consequences of that hypothetical absence, and just failing to imagine it.
We might be imagining pseudo-zombies and convincing ourselves that we are imagining zombies, or flitting between conflated versions of “experience” that we have not managed to disambiguate.
Our ability to imagine zombies might merely reflect our temporary ignorance about the nature of consciousness. Indeed, some neuroscientists have suggested that the Hard Problem is no more than an inappropriate obsession with a temporary shortfall in the reach of science, and it will dissolve as we make progress. This would be the most anticlimactic outcome.
But if I thought this was the situation, I would find consciousness much less fascinating, and I would not have written this book.
The Hard Problem is interesting, I propose, because it gets things half right – because its flaws are subtle and require careful analysis. Hardism is based on some legitimate conceptual challenges. Zombies are almost plausible.
Many hardist philosophers, like Chalmers, anticipate that zombies will remain conceivable in the face of all future progress in neuroscience. This is ultimately no more than an appeal to intuition, but even I find them conceivable, at least in part, despite having confidence that they are ultimately impossible. Chalmers and I might not be typical of the general population, but we are not alone, and many philosophers share similar intuitions. For reasons to be considered in the chapters ahead, I don’t think this situation reflects temporary ignorance, and I think that zombies will remain superficially plausible to many people no matter where neuroscience takes us.
If so, the ongoing conceivability of zombies requires a special sort of explanation.
Some philosophers, such as Colin McGinn (1989), have suggested that the ongoing mystery might simply reflect our cognitive limitations. Our brain’s structure leaves us unable to understand the mind-brain relation in the same way that chimps can’t understand algebra or quantum physics. We won’t have to extend reality to accommodate consciousness, in this view, but we won’t ever understand it, either.
I have some sympathy for the idea of impenetrable cognitive barriers, and the chapters ahead will argue that relevant cognitive barriers do indeed compromise some naïve attempts to understand consciousness, but I think McGinn’s position is ultimately too pessimistic, and I think it concedes too much ground to the hardist framing. McGinn assumes we are asking sensible questions and failing for inexplicable reasons; in contrast, I think hardists are asking ill-posed questions, and they are failing for mundane, explicable reasons. Chalmers’ framing, I propose, is self-defeating. We don’t need to stay mystified. We can adopt a different approach, explain the cognitive issues that frustrated our initial attempts, and achieve satisfactory understanding. Unlike McGinn, I think we can already propose explanations as to why consciousness resists a head-on analysis. We can understand why zombies seem superficially plausible and also understand why they are actually impossible.
Importantly, then, not all versions of the Hard Problem are the same.
If you like feeling puzzled, you can dial the mystery up by supposing that zombies must be truly possible, which leads to the strong hardist conclusion that consciousness is non-physical.
Or, if you prefer, you can dial the mystery down a little by proposing that zombies are merely conceivable, which renders our understanding of reality incomplete. Somehow, we are imagining an impossibility and not seeing the contradiction; we are stuck with consciousness as an inexplicable aspect of reality. This weaker position will often focus on our explanatory shortfalls, rather than making the assumption that reality itself needs to be extended.
These high-mystery and low-mystery versions of hardism are, respectively, what I will call the strong and weak forms. Alternatively, these are the ontological and epistemological versions of the Hard Problem, where “ontology” is the study of reality and its ingredients, and “epistemology” is the study of what we can know and understand. Chalmers and his supporters tend to roll these two versions together. They blame reality itself for our sense of bafflement – but they nonetheless appeal to conceptual exercises (like imagining zombies) to make their case. Just because we cannot explain something in physical terms (an epistemological failure) or we can imagine some scenario (reflecting our epistemic grasp) does not mean reality contains non-physical elements (an ontological conclusion).
This slide from epistemology to ontology constitutes one of the major errors of Chalmers’ framing, and it will be a key theme of this book.
If zombies are impossible but we are perpetually unable to say why, our ability to imagine them would put consciousness outside the explanatory reach of the physical sciences, potentially justifying weak hardism, but it would not put consciousness outside the physical domain. Our explanatory reach need not coincide with reality itself, a subtle but important distinction.
In general, Chalmers and other hardists consider this a distinction that we can ignore. Weak epistemological hardism and strong ontological hardism are rolled into one.
If consciousness persistently resists explanation, they say, we will still need to treat it as a fundamental essence even if, for reasons we cannot understand, it is actually entailed by the physical structure of our world. The physical sciences describe forces and movements. How could any ordinary physical process, they ask, end up intrinsically outside the explanatory reach of the physical sciences? How could some pattern in the way matter moves or the way neurons fire be fundamentally resistant to analysis? In theory, we should be able to list all the causes and consequences of any set of neural events, and understand them. If analysis cannot reduce consciousness to physical components, Chalmers has argued, consciousness might as well be considered a fundamental essence, so we should proceed as though this were the case.
I think this logic is wrong, and I will argue so in the chapters ahead. The leap from finding explanatory shortfalls to proposing whole new ontological ingredients is entirely unjustified. Treating those shortfalls as though they “might as well” involve new domains or ingredients is not much better. I propose that a purely physical world can indeed have aspects that we cannot explain in the manner we might have initially wanted, but this does not constitute a major problem for science because we can understand where the explanation is going wrong and why it has failed. The outline of such a reconciliation is already available. Indeed, I propose that understanding our explanatory blind spots will be a large part of resolving the mystery of consciousness.
Weak hardism differs from McGinn’s position only in emphasis: under weak hardism, consciousness still gets credit for being mysterious. We have not failed to understand something because of our inadequacies, like chimps failing at algebra; our brains have done something so amazing that it has left science behind. Weak hardists will usually insist that these puzzles would remain if we had perfect knowledge of the physical facts and perfect cognition. It is in the nature of consciousness that it defies explanation, and it is the entire domain of functional, causal relations that fails to capture consciousness, not our specific grasp of the issues. How anyone could know what a perfect cognitive being would know without being perfect themselves is, of course, unexplained.
Not all rejections of the Hard Problem are the same, either.
If you like, you can dial the mystery all the way down. You can insist that zombies are impossible and not even conceivable, casting the Hard Problem as completely misguided. You can even protest that there is no fundamental explanatory shortfall. You might insist that, presented with a sufficiently detailed physical description of the brain, neuroscientists of the future could derive all the experiential properties of interest. The intrinsic nature of redness, for instance, could be extracted from a bunch of neural firing patterns and understood in the usual way by someone who was colour-blind from birth – if only we had the right theory of colour vision… or the perfect cognition imagined by hardists.
Some of these optimists could be considered latent hardists: they express confidence in conventional science (and might even scoff at the supposed intractability of the Hard Problem) while nonetheless thinking about consciousness in such a way that they will never be satisfied with any actual scientific account of consciousness. Many of them apply the logic behind Chalmers’ Razor, rejecting any account that fails to capture the subjective feel of experience. Some of them envisage a version of “experience” that is not substantially different to Chalmers’ mysterious functionless extra, but they nonetheless trust that it emerges from physical neurons as some bizarre side effect that will seem less strange when we finally achieve understanding. The critical breakthrough is always envisaged to lie around the corner in some dimly seen future, where the paradoxes of hardism are imagined as being resolved, but these latent hardists maintain this hope without actually surrendering any of the problematic intuitions that make the Hard Problem unsolvable.
I think this optimism is misplaced, and it falsely assumes that our explanatory reach faces no important barriers. (It slides from ontological beliefs to epistemological assumptions for reasons that are just as flawed as a slide in the other direction.) Why should our explanatory reach precisely mirror reality? For reasons to be discussed in the chapters ahead, I don’t think it will ever be possible to read a black-and-white neural circuit diagram and, from that description alone, end up knowing what red looks like. But I also think that this limitation is understandable, once we consider the cognitive steps being attempted. Explanatory shortfalls do not indicate that we need to posit non-physical extras to account for the shortfall. Similarly, our rejection of such extras does not require commitment to a perfect explanatory reach, either.
One compromise approach, adopted in this book, is to recognise a limited level of mystery, tweaking the mystery level to accommodate our epistemic frustrations, but stopping just short of McGinn’s pessimism or the excited bafflement of weak hardism. In this book, I will happily concede that consciousness is indeed somewhat strange; it requires a special explanatory approach with a modified set of expectations. We will not understand everything in the manner we might have wanted, and some of us will be left with some troubling intuitions, but we will understand things well enough and we will see where our expectations were misguided.
This compromise still leaves us with the lesser puzzle of accounting for the original sense of mystery. Why do zombies seem plausible? Why does consciousness seem to resist the reach of science? Why can’t we derive redness from neural circuit diagrams? Why does it sometimes seem that we face a Hard Problem?
This cluster of lesser puzzles is sometimes known as the Meta-Problem of Consciousness (another term coined by Chalmers). For reasons that should be clear by the end of this introduction, the linguistic content of the Meta-Problem is identical to that of the Hard Problem – but the Meta-Problem is one of Chalmers’ Easy Problems. In fact, these two Problems stand in the same relation already considered: the Hard Problem is about experience as target; the Meta-Problem is about experience as source.
A full resolution of the Meta-Problem is beyond the scope of this introduction, but addressing the cognitive sources of hardism is a major goal of this book. To anticipate, I have found that the central philosophical challenges surrounding consciousness primarily relate to representational issues. The brain evolved to represent the world, and as it got more complicated it had to represent many of its own activities. Evolution did not consult the laws of physics in constructing the internal user interface used by cognition, so while the substrate of cognition is constrained by physics, the represented entities within cognition are physics-indifferent. When scientists started studying the brain from within the system they were studying, they encountered anomalies and representational tangles.
I propose that the widespread sense of deep bafflement about consciousness arises because our proximity to our subject matter – and our epistemic capture within it – tempts us to ask inappropriate questions that bridge awkwardly from the world represented within our brains to the representing substrate. The subject matter and the organ of understanding are the same. This situation creates cognitive traps related to the nature of representation – traps that, I believe, many philosophers have fallen into, including Chalmers with his Hard Problem.
The range of possible opinions, then, is wide. The Hard Problem can be interpreted as an ontological mystery, as an impenetrable epistemic conundrum, as a cognitive puzzle we can resolve by rejecting its framing (my own view), or as a completely misguided beat-up based on temporary ignorance. Each of these positions can be mapped to a verdict on zombies, which can be: truly possible; conceivable for reasons that must remain baffling; conceivable for flawed reasons that can be exposed (my own view); or inconceivable.
If you want the world to be mysterious – if you like the prospect of science being humbled when it tries to break into the inner sanctum of consciousness – you should be hoping that zombies are logically possible, because that would imply we all have some non-mechanical spark inside us, a small private miracle. A ghost in the machine.
Conversely, if you want the world to make sense, then hypothetical zombies should make you uneasy. If so, you might be receptive to suggestions that this famous thought experiment is built on a confused notion of consciousness, which is what I will argue.
But what we want plays no role here. Where does logic itself take us? Is your zombie twin a genuine possibility? Or is it ultimately no more than a philosophical conundrum that, at best, casts some light on our misunderstanding of consciousness?
The standard philosophical term for what the zombie is missing is “phenomenal consciousness” (Block, 1995), which has largely come to replace Chalmers’ term, “experience” (Chalmers, 1995); some people refer to this same broad concept as there being “something it is like” to be a given creature (Nagel, 1974).
All of these terms are intended to isolate the philosophically puzzling part of consciousness, distinguishing it from what the zombie still has: the overt physiological form of consciousness that manifests as behavioural reactivity in the awake state (which Block referred to as “access consciousness”). There has been much talk of the dual nature of consciousness, and it is commonly assumed that we can use Block’s terms to label each side of the duality. The zombie has access consciousness, but it lacks phenomenal consciousness.
Unfortunately, the situation is much more complicated than that. As already noted, our concept of phenomenal consciousness is necessarily a hybrid; it is applied to the two entities that, for now, I am calling source and target. Making matters even more confusing, I propose that there is yet another ambiguity to address. The source entity is itself subject to a conceptual duality – indeed, it must be, even under a scenario in which zombies are possible, if it is to cause us to propose two entities. Even if it exists, remember, the target entity can’t inspire belief in itself, because non-physical experience, conceptualised as a human-zombie difference, has no effects on the physical brain. The target, accepted at face value, can’t inspire belief in anything. That means all the troubling intuitions need to be traced to the source, including the conviction that reality has two domains. Even if experience exists as a fundamental essence in a nonphysical domain, the source-target conceptual duality must arise from the physical domain.
But that means, when we try to discuss phenomenal consciousness in relation to zombies, there is constant interplay and confusion between at least three concepts: two concepts arising from the source, and then, if we accept the proposed zombie scenario, we need another concept for the disputed non-physical target. And we still have “access consciousness” in our lexicon. We face multiple levels of nested duality, all in relation to consciousness.
No wonder we are confused.
In this book, I will be arguing that the source of our concept of phenomenal consciousness is a neural model. As with any model, the representational substrate is conceptually distinct from the content, and I will argue that all of the duality that puzzles us and gets talked about within consciousness debates can be traced to the conceptual contrast between substrate and content. But, if I am right, that means we will need to expand our terminology to cover three aspects of phenomenal consciousness.
Even when we have carefully specified that we want to talk about the source of our concept of “phenomenal consciousness”, rather than the non-physical target essence imagined by Chalmers, there are at least two approaches we can take to that source. Sometimes, we might be talking about the physical substrate of the neural model; at other times we might be talking about its represented content – about consciousness as it seems to us. Statements that are true under one approach might not be true under the other, which is the case for any representational system. (For instance, the ink splodges on the pages of a novel are alphabetical characters, but we interpret combinations of them as story characters; we would be confused if we debated whether those ink splodges were sad or evil or asked how the ink gave rise to the story characters.)
If we wanted to disambiguate these notions, we could add cumbersome subscripts, talking of:
phenomenal consciousness (target);
phenomenal consciousness (source-substrate); and
phenomenal consciousness (source-content).
But this level of precision is not necessary at this stage, and such a terminology would be so ugly that readers with any love of language might simply decide that consciousness is not interesting enough to justify the assault on the eyes.
For now, I will use the term “source” in a broad, inclusive sense to mean the cognitive entity responsible for providing us with the key concept of phenomenal consciousness, putting aside the issue of how we might modify our terminology to distinguish between substrate and content. Later, I will introduce more appropriate terms.
But note that, when I propose that source and target are often confused, most of that confusion necessarily happens at the level of content. We are not really at risk of confusing a concept of physical neurons with a concept of some ghostly essence missing in your twin; obviously, your twin is not missing the neurons. If the source of our consciousness concept is a model, though, we are at risk of confusing Chalmers’ ghostly target entity with the implied content of the neural model that inspired us. The target entity is just the implicit content of the model, imagined as real, and conceptually modified to be regarded as a separate entity. Later, I will be arguing that this is where most of the puzzlement about consciousness arises – from misunderstanding this representational arrangement.
In particular, the primary confusion comes from mistaking the implicit content of a neural model of consciousness for some hypothetical entity that would vindicate the model, under the faulty assumption that the brain would only model consciousness if it were real.
Consciousness, I propose, is virtual. A representation without a vindicating target.
Few readers, I suspect, are likely to be sympathetic to this view, and I am approaching the Denial that Strawson called the silliest claim ever made, so I obviously have substantial work ahead if I am to make my position plausible.
This book, as I’ve said, is about everything we imagine your zombie twin to be missing. That means it is about the proposed target of the Hard Problem, which Chalmers called “experience”. And that target suffers from all the ambiguities already discussed.
Chalmers’ framing strikes an intuitive chord in many people, and they will insist that the puzzle before us is clear – we merely want to know how matter could be conscious, how the meat of the brain could experience mental properties.
I will be arguing that pinning down the target of our inquiries is not as simple as it might at first seem. The framing of the Hard Problem makes several self-defeating assumptions along the way, it is built around a vague, undefined target, and the result is a conceptual mess. I don’t believe that zombies are ultimately coherent, and, once we try to analyse the issues, it is not even clear what the Hard Problem is about. What is “experience” or “phenomenal consciousness” supposed to be? Is it the thing that causes our expressed puzzlement, or is it some entity that exists, mirage-like, on the outskirts of the causal story, unable to account for our puzzlement?
“Qualia”, a related term I will discuss below, is no better. Qualia are supposed to be the raw flavours of experience – colours, tastes, emotions – with a special emphasis on the aspects that resist explanation. But the concept is inevitably tangled: fans of qualia generally do not acknowledge the physical reasons that qualia might defy the sort of analysis demanded by hardists, and qualia, as often conceptualised, float outside the causal story, incapable of causing puzzlement in the physical domain.
It will take most of this book to explore all the problems with Chalmers’ framing and to dissect the associated notions of “experience”, “phenomenal consciousness”, and “qualia”, but I will mention the biggest issue up front.
It’s there in the title of this book: The Zombie’s Delusion.
I propose that, in imagining your twin and accepting the framing of the Hard Problem, we’ve already gone astray… And we know this, because our twins have gone astray, representing themselves as housing some special experiential extra that, by definition, they lack. They are our cognitive isomorphs, so in positing their existence, we share their errors in reasoning.
If we look carefully, we can even see their primary mistake. There is a contradiction in the notion of zombies, after all, hiding in plain sight: zombies claim to possess phenomenal consciousness, but their reasons for making this claim are necessarily independent of phenomenal consciousness. From clues that don’t actually require the presence of any special extra, notional zombies are nonetheless inferring the existence of the extra. Human hardists do the same, overlooking the fact that the posited extra cannot possibly be the source of their puzzlement because it cannot generate any detectable clues.
We have to partake in the zombie’s delusion to conceive of them as possible.
If phenomenal consciousness (or experience) is what zombies lack, it cannot possibly make us puzzled or give rise to any Hard Problem that we can know about, because it cannot change the behaviour of a single neuron in our heads. Phenomenal consciousness, conceived in this way, is causally inert. It cannot give us any notion of what we are supposed to be puzzled about, because it cannot provide us with any conceptual content at all.
A causally inert extra cannot inspire the Hard Problem, which therefore pivots away from what actually puzzles us towards some hypothetical entity that is fundamentally incapable of puzzling us even if it exists. The Hard Problem is motivated by one thing, but it sets off in pursuit of something else. It is therefore fundamentally ill-posed, and – in its original, strong form – it fails to pick out a well-defined, stable target.
When we think of our zombie twins, we are necessarily using the physical machinery of our brains to think about a difference that is, by definition, non-physical and lacking all physical effects, and hence that difference should be unknowable to the machinery of the brain.
So what are our physical brains doing when they point their cognition at “phenomenal consciousness” or “experience” and imagine it as missing in our twins? What is really causing our puzzlement? And what are our twins doing when they point their matching cognition at something that, on their world, is non-existent? Why would zombie brains evolve in such a way as to misrepresent their reality so severely? Why would human brains evolve such that they happen to know about – even obsess about – an entity that cannot be detected by the physical circuitry of the brain? If “experience” lived up to its billing, we should not know anything about it at all.
From a subjective perspective, identifying the target of our puzzlement might seem to be straightforward: there’s nothing experiential going on inside your twin’s brain, whereas your mind is a centre of rich experience. Deep blue, middle C, pain, and so on. The Hard Problem is about that massive, obvious difference. What is it and where does it come from? Most of us seem to have a clear concept of what needs explaining, of what it is like to be human, and Chalmers’ defenders can insist that he is merely asking how that entity or that set of properties relates to physical reality.
In order to conceive of zombies, the subjective aspects of experience have to go missing while leaving all the physical counterparts of subjectivity behind, and that soon leads to paradox. According to the rules of the zombie thought experiment, when we switch to an objective perspective, everything must remain in order, which means that most of subjectivity is still there in the objective account, including most (if not all) of its puzzling aspects.
The laws of physics on your twin’s world are causally contained and identical to our own, so we mustn’t find any clue that anything is missing. As your perfect physical duplicate, it has your physique, your freckles, your hair, your physiology, your mannerisms. There can be no telltale blankness in the zombie’s gaze, no dullness in its speech, no flaw in its behavioural façade. There can’t be a single neuron that fires in your head while its counterpart stays dormant in your twin, and vice versa; there can’t be a single atom out of place.
(If chance plays a role in the resolution of quantum events on this human world, the same indeterminacy applies on your twin’s zombie world, but – for the sake of the argument – we must assume that cosmic coincidence always leads to the same physical outcomes.)
The precise objective mirroring between you and your twin means that it would be impossible to tell the two of you apart by any physical test, including any number of prolonged sessions with intimate confidants or skilled interrogators, or any high-tech scan of its neural activity.
The problem for defenders of zombies is that this similarity is not merely skin-deep, and it is not merely behavioural, in terms of overt bodily behaviour. Your twin’s internal cerebral processes are also functionally identical to yours. On the surface, that means you and your twin are in behavioural synch; the same motor neurons are activated, and the same muscle fibres contract with the same force. As a result, your twin mirrors your behaviour precisely, driven solely by empty mechanisms.
But among those behaviours is language. And among the unseen causes of our language output is, well, everything we can ever have good reason to talk about. (There are subtleties here; we can talk about the largest star in the universe even if it is too far away to have influenced any species on Earth. I will address those wrinkles later, but the essential point remains valid.)
Under matching physical conditions, your twin will always say exactly what you would say. The muscles controlling its mouth, throat and vocal cords (or its typing fingers) respond to the physical activity of motor neurons that receive the same inputs as the corresponding neurons in your own motor cortex. That activity can in turn be traced to neurons in language-output areas of its brain, which again provide a perfect match to yours, and so on, through the entire causal history of everything you both say on every topic, and that includes everything you could say, at any moment, if you decided to express your private thoughts, or we did a clever scan and decoded the linguistic content of your physical brain.
All of which is very strange and ultimately paradoxical, because the zombie will probably say that it is conscious in the special way that humans are conscious, and so it will argue, quite erroneously, that it can’t possibly be a zombie. It will only deny that it has the special human extra if it denies the possibility of zombies, which is paradoxical in its own right – a zombie is denying that zombies could exist – though in that case the fault would lie within the person conceiving of the zombie. (If I entertain the idea that my twin is possible, but I imagine that my twin says that zombies are impossible, then the basic rules of the thought experiment have already been broken, because my twin and I are not in synch.)
The precise behavioural match between humans and zombies extends to all human literature, including every reference, academic or literary, to every known aspect of human consciousness. That means, to comply with the thought experiment, we must rediscover the Hard Problem on the zombie’s world and in the zombie’s brain – where it is linguistically identical to the human version of the Hard Problem, with all the same terminology, the same arguments, and the same thought experiments. Indeed, the entire canon of literature on the Hard Problem is still there on the zombie’s world – albeit stripped of its purported target, and therefore exposed as ill-motivated. Unfortunately for fans of zombies, the reasons that zombies report a Hard Problem must be spurious. Ironically, weaker forms of puzzlement could be entirely rational on the zombie world; this criticism only applies to versions of the Hard Problem that pursue a non-functional extra, but zombie worlds necessarily entail the stronger forms of hardism. When fans of zombies discuss their special extra, their notional zombie twins reveal themselves to be impossible, incoherent, or irrelevant.
Or, as my title suggests, delusional.
Even Chalmers notes that this represents a problem for his framing.
Now my zombie twin is only a logical possibility, not an empirical one, and we should not get too worried about odd things that happen in logically possible worlds. Still, there is room to be perturbed by what is going on. After all, any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.
To strengthen the sense of paradox, note that my zombie twin is himself engaging in reasoning just like this. He has been known to lament the fate of his zombie twin, who spends all his time worrying about consciousness despite the fact that he has none. He worries about what that must say about the explanatory irrelevance of consciousness in his own universe. Still, he remains utterly confident that consciousness exists and cannot be reductively explained. But all this, for him, is a monumental delusion. There is no consciousness in his universe—in his world, the eliminativists have been right all along. Despite the fact that his cognitive mechanisms function in the same way as mine, his judgments about consciousness are quite deluded.
(Chalmers, The Conscious Mind, 1996, emphasis added.)
I will address the details of these two paragraphs later – they contain some subtle rhetorical tricks and faulty assumptions. But it is enough, for now, to note that Chalmers’ framing requires that his own claims about consciousness are not motivated by his “experience”, and he even concedes this.
Earlier, I posed a simple question. Feel free to update your answer, knowing your twin will do the same.
How do you know you are not the zombie in this scenario?
You are, presumably, still confident that you are the human. So where does that confidence come from? How do you know that this is not actually a purely physical world? How do you rule out the possibility that this world is what Chalmers would call a zombie world? Perhaps what we are calling a human world with special extras is just a flight of fancy dreamed up by purely physical beings who lack the posited extra. Why isn’t there a hypothetical world in which your human twin is blessed with some special extra, called “experience”, while you lack it, having only the physical analogue of that special extra?
Perhaps your answer is: It’s just obvious.
Perhaps it is: I know I am not a zombie because I have a distinct, clear understanding of the human-zombie difference, and I can feel myself experiencing that difference right now. I know I’m not dead inside.
Perhaps your answer is: I can imagine redness, and the subjective redness I am bringing to mind right now has a quality that could never be derived from an objective description of my brain and its neurons.
Perhaps you quoted Descartes: I think, therefore I am.
Or perhaps your answer is: I can’t be a zombie because I don’t believe zombies are possible; no human brain can have a valid reason for thinking it houses a special non-physical extra.
Perhaps, if you are epistemologically cautious, you would insist that the logical possibility of a special extra cannot be entirely excluded, but even if such an extra existed it would have no effect on your cognition, so your zombie status is technically uncertain, even to you. In that case, though, you might add that there are no good grounds to propose such an extra, so you are probably a human without such an extra. Zombies are probably a silly flight of fancy, and the extra is so inconsequential that it doesn’t really matter.
Right now, like you, your twin is reading a book about consciousness. This book. This paragraph. Perhaps, in the absence of consciousness, we might want to call it “pseudo-reading”, but there is no cognitive difference between what your zombie twin is doing and what you’re doing, so “reading” is a reasonable term. If it helps, when discussing the cognitive activities of your twin, imagine that the prefix “pseudo” has been prepended to every mentalistic verb.
As the zombie reads about “phenomenal consciousness”, every sentence is parsed by language centres that are every bit as sophisticated as your own, and checked against a world model that is identical to yours. Quite possibly, it will do the zombie equivalent of subvocalising as it reads, activating a virtual audio stream and offering agreement or counter-arguments in step with you as you both parse the same sentences. If we were to interrupt your twin and ask it about what it is like to be a conscious human and how that situation differs from the blank interior of a hypothetical zombie, it would be able to discuss the contrast with all the eloquence and apparent insight that you could muster. We know it would not say: “I have no idea what you’re talking about.” It would not ask: “What is this strange consciousness thing that humans are supposed to have?” Its expressed opinions on the nature of consciousness would be identical to yours, complete with apparently sincere declarations of a rich inner life.
Philosophical zombies are problematic because, whenever you think about consciousness as non-physical, the brain in your notional zombie twin is doing the same thing as your brain, and it is pseudo-thinking about non-physical consciousness for purely physical reasons. It is reading along, and a few paragraphs ago it provided your answer as to why it cannot possibly be the zombie in this scenario.
But it is wrong, and whatever reason it gave must be unreliable.
Which makes you wrong, too. You can’t be employing superior logic. It’s the same logic. You can’t be drawing on special evidence. Your brains have the same inputs and perform all the same computations.
This is not a shallow linguistic mirroring, such as we might find in an AI trained to feign consciousness. The mirroring extends to the entire causal network of every word you produce and to the logical sequencing of all of your thoughts. Your twin says the same thing, on identical evidence, applying the same processes of inference and deduction. It will argue for – or against – the special nature of human consciousness and the possibility of zombies with the very same words, and for the very same reasons, as you.
Just as no physical test can detect the difference between you and your zombie twin, no neural circuit in your identical brains can detect a difference either. You have no neurons that differentially react to the presence of the special human extra, and it has no neurons that reach for evidence of consciousness and flounder because they fail to find it. Its neurons find exactly what your neurons find, nothing more, nothing less.
Here, then, is further evidence that the standard conception of phenomenal consciousness must involve a hybrid. This book, recall, is about what we imagine to be absent in the zombie. But the zombie, reading along, will not find the book to be about nothing. If it expresses interest in the Hard Problem and it claims to possess the special target of that problem, “phenomenal consciousness”, its version of the Hard Problem is necessarily misguided. But it is motivated by the very same set of influences that led Chalmers to describe the Hard Problem.
That means there is something – perhaps not consciousness as you know it, and certainly not consciousness as Chalmers wants us to imagine it, but some lesser thing – that the zombie references within its brain when it reads sentences about the human-zombie difference.
When your twin (unconsciously) complies with the instruction to conceive of some lesser being, a “philosophical zombie” that is dead inside, the words cannot be met with confusion – not even with a blank, zombified analogue of confusion. Something gets represented in the zombie’s neural circuitry: a model is formed of the mysterious entity that your twin represents its twin as lacking, while its twin does the same. Your twin represents this as a difference between “humans” and “zombies”, identifying with the first group, but from our perspective we would consider these two groups to be “zombies” and “meta-zombies”, which are equally unconscious, with no difference between them at all. That means, if we take the possibility of zombies seriously, no physical difference is required for hardist cognition to infer an experiential difference.
So what is that dead analogue of your own special consciousness, and why might it come to be erroneously represented as non-physical and non-functional? What does your physical brain actually engage with when it thinks about phenomenal consciousness, when it says to itself: “My zombie twin doesn’t have this”?
To be continued…
Are meta-zombies possible?
https://open.substack.com/pub/zinbiel/p/are-meta-zombies-possible