Welcome to The Zombie’s Delusion, a new blog about the nature of consciousness.
Why zombies? What delusion?
And why does the world need another blog about consciousness?
Most readers will have guessed that my blog title does not refer to the creatures from B-grade horror-movies, but to philosophical zombies.
For those new to the idea, zombies are hypothetical beings that are physically and functionally identical to humans; they are physiologically alive, but they are dead inside in the sense that they lack subjective consciousness. They consist of anatomy and physiology and nothing else. That physiology produces convincing human behaviour, because every neural pathway in a zombie’s brain is connected just as it would be in a human, but zombies are missing the internal, subjective aspects that we know from our own situation. They don’t experience redness or pain; they lack any inner spark. There’s no one inside, looking out. It is not like anything to be a zombie.
If you turned into your zombie twin overnight, your conscious life would be over — you would be dead in all the ways that really matter — but no one would notice.
(On a side note, zombies obviously don’t look anything like the AI-generated pictures that accompany this blog; they look like normal human beings in every way. Hopefully, you’ll forgive a bit of poetic licence. But this is an early warning sign that something is off-kilter with our concepts. I will be arguing that there could never be any justifiable reason for drawing zombies in any way that distinguishes them from humans, not even if we were trying to draw their interiors. But that argument lies ahead.)
Philosophical zombies are implicitly at the centre of David Chalmers’ Hard Problem of Consciousness, an expression he introduced in 1995, as follows.
“The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information‑processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.”
Chalmers’ landmark paper avoided the Z-word, but what Chalmers has in mind when he talks about experience is intimately connected to the idea that we could have ended up as neurons firing silently in the dark, lacking the special something that it-is-like-something-to-be. (There are other potential conceptions of experience, but this is the one most closely allied to the Hard Problem.)
The Hard Problem essentially says: We could, conceivably, have been zombies, so why aren’t we?
If zombies are logically possible, argues Chalmers, then consciousness cannot be the inevitable consequence of putting neurons together the right way. Within the zombie, all the neurons have been put together the right way by definition, and that still hasn’t been enough to create subjective consciousness. Any functional explanation of brain processes that we might propose in the hope of accounting for consciousness would apply equally well to our zombie twins, because they are our perfect functional isomorphs inside and out. In their case, the described mechanisms will exist without consciousness, so our theory will necessarily miss its mark. As long as we stay with functional accounts of the brain, we won’t have explained the special spark of experience (which is being envisaged by Chalmers as the elusive human-zombie difference). We’ll only have explained the objective behavioural analogues of consciousness, not the subjective aspects.
To get subjectivity into our theory, Chalmers believes, something more than standard neurobiology will need to be involved. Accommodating consciousness within science will be deeply problematic, because science cannot provide any objective clues as to what that extra could be, and we could never detect its presence.
If zombies are possible, this line of thinking constitutes a devastating blow to the standard scientific view that consciousness is ultimately a property of the physical brain — a view we can call physicalism, for now, though we will need to be more precise later.
But are zombies possible?
My short answer is no.
My long answer will be in the posts ahead, but I can point out some preliminary issues up front. I think that Chalmers’ line of thinking is paradoxical, and it reduces experience to an epiphenomenon. If experience really had no causal effects on our cognition, we would have no reason to be puzzled about it. We would not know about it. We could not even discuss it, except accidentally, by discussing something else — something that our language centres could actually engage with.
We pick out experience during a cognitive act of introspection, which zombies can also perform, and trying to separate their cognitive target from ours eventually leads to an incoherent concept, as I will argue in the posts ahead.
There are much better ways of conceptualising “experience” than assuming it can be successfully captured in the act of imagining a human-zombie difference. A minimum requirement for a coherent view of these issues is that we are puzzled by something capable of causing the puzzlement. (Is that too much to ask?)
If I’m right, and zombies are impossible, the Hard Problem has its own problem. It is targeting “experience” conceptualised as a human-zombie difference, but if there is no such potential difference (because zombies can’t exist), the Hard Problem is not targeting anything at all. Science can say: Come back when you have a coherent target for us to explain.
If so — and the full supporting argument obviously lies ahead — the Hard Problem needs to be replaced by three different questions:
Why are zombies impossible?
Why do they seem possible?
What would be a better way of thinking about experience?
This blog will primarily be about my answers to those three questions, but I won’t be addressing them directly in this post. I will merely sketch an outline of the territory ahead.
Zombies might sound silly (and I will be arguing that they constitute a flawed concept, so they are indeed silly in a deep sense), but the idea of zombies is not frivolous. Let’s overlook the allusion to B-grade monsters. Zombies merely provide a vivid, playful name for a serious philosophical issue that strikes at the heart of neuroscience, and all the same issues can be expressed without the Z-word.
The mere possibility of zombies touches on what makes us human and on the nature of reality itself.
And they have never been more relevant.
In 2025, there is already a debate about whether contemporary AIs are conscious. Although I think that debate is premature, because current AI architectures don’t include anything that remotely resembles human consciousness, the distinction between unconscious automata and sentient beings is blurring. It is getting harder to distinguish the behavioural products of genuine consciousness from the output of clever role-playing machines.
Just this week, a new AI was announced (“inait”) that is explicitly modelled on human neural architecture.
I predict that, by the end of this century, humans will live alongside conscious AIs.
Perhaps you think this is impossible. Perhaps you believe an AI will inevitably fail to produce real consciousness, because an AI will always consist of insentient algorithms and will lack some elusive special spark. In other words, you might think that digital zombies are possible.
If so, we are back to the Hard Problem in a new guise.
Imagine a perfect digital replica of you, accurate to the finest details, with each one of your 86 billion neurons modelled with enough finesse that this silicon replica chooses all the same motor neurons as you and therefore matches your behaviour perfectly. Would the replica be conscious? If so, how could it possibly be conscious? Why isn’t it a zombie?
If you believe such an entity cannot be conscious, why not? After all, it will say that it is conscious — because you do, and it says whatever you say. And it will argue for its consciousness with all the arguments that you might employ.
Perhaps you believe consciousness is an intrinsically biological phenomenon, a stance sometimes known as biological naturalism (and sometimes as bio-chauvinism). Perhaps there is some biological process that we will understand in the future, after further breakthroughs in neuroscience, a process that produces consciousness as a biological by-product or some non-algorithmic trick. But what could biological neurons have that gives them the capacity for subjective experience? Why couldn’t digital circuits achieve the same subjective result?
If the biological extra you have in mind creates consciousness without influencing behaviour, why would it be there in a biological organism? From an evolutionary perspective, it should be completely invisible to the processes of Darwinian selection. There could be biological entities that didn’t bother doing the special trick, and evolution would not notice the difference.
Conversely, if your imagined biological extra does influence behaviour, then that should be reflected in your digital model, at the behavioural level, and we can start again. We can update the model until it matches your behaviour and internal functional dynamics, and then re-ask all the same questions about whatever it is that hasn’t been captured in your model. How has it survived the ruthless evolutionary filter?
Intuitions on these issues are widely divergent. In many discussions, it is assumed that all we can do is fall back on those intuitions and agree to disagree if our intuitions don’t align. It is often accepted that there can be no firm grounds for declaring who is right, in these discussions, because we are talking about something that escapes the reach of objective analysis.
I think we can do better.
What we can conjure up during an imaginative exercise might guide our intuitions, and your imagination can do whatever it likes… But I think we still need to consider what we can logically conclude about the possibility of zombies.
What happens to the coherence of our world view if we tentatively adjudicate one way or the other?
Suppose that zombies are possible. How could we ever know that we’re not zombies? (And do we even know that we’re not zombies?) What reliable mental processes could we possibly call upon to support the conclusion that we have the thing that we imagine zombies as lacking? For that matter, what is the nature of the entity that we imagine zombies as lacking? Can we say anything sensible about it at all?
If our cognition and a zombie’s cognition are functionally isomorphic, both of us reaching our conclusions for the same underlying functional reasons, does Chalmers’ notion of non-functional “experience” even make sense? What could put content into our idea of a human-zombie difference, if the entity we’re thinking about, “experience”, is supposed to be outside the causal nexus of cognition?
Experience, conceptualised as something lacking in zombies, should be unable to contribute content to our concepts, so any informational content within our concept of experience must have come from something else — something incompatible with the notion of a non-functional entity.
That “something else” cannot be “experience” as envisaged by Chalmers, because a functionless extra cannot modify the content of any of our concepts. If they existed, zombies themselves would have that something else, because they have exactly the same informational content in their representation of experience. Their causal network is in sync with ours; they say all the same things for the same underlying reasons. By definition, the human-zombie difference doesn’t cause anything to happen within our cognition, so it can’t cause our puzzlement.
Why not reserve the word “experience” for that “something else”, then, given that it is the real source of our opinions, and the true cause of our puzzlement? More importantly, what is that other thing, if it is not Chalmers’ “experience”? Isn’t that a much more pressing question than Chalmers’ Hard Problem? Why waste time getting puzzled about something that, by definition, can’t be the true cause of our puzzlement? Or is this other thing what Chalmers originally wanted to explain? Has the Hard Problem simply missed its intended target?
These considerations suggest that zombies might entail hidden contradictions, that there might be different notions of experience in conflict with each other, and, worse, that the Hard Problem is not even targeting the true source of our puzzlement.
But if zombies are not possible, why do they seem plausible to so many people?
And what can this blog add to the debate?
In real life, I am a clinical neurologist. My profession has obviously had a major impact on the development of my philosophical beliefs, but my interest in this puzzle predates my time in medicine, and even predates the publication of Chalmers’ landmark paper, the one in which he described the Hard Problem.
I was introduced to the mystery of consciousness when, in the summer of early 1984, I picked up a friend’s copy of “The Mind’s I”, the excellent anthology of stories and essays curated by Daniel Dennett and Douglas Hofstadter. Within its pages, I read about Searle’s Chinese Room for the first time, along with Turing’s Imitation Game, brains in vats, Nagel’s bats, Schrödinger’s Cat (and other classic thought experiments that broke up the Dr Seussian theme).
The story that struck the deepest chord was “The Story of a Brain”, by Arnold Zuboff, which begins as follows:
Once upon a time, a kind young man who had enjoyed many friends and great wealth learned that a horrible rot was taking over all of his body but his nervous system. He loved life; he loved having experiences. Therefore he was intensely interested when scientist friends of amazing abilities proposed the following:
“We shall take your brain from your poor rotting body and keep it healthy in a special nutrient bath…”
Zuboff’s story was essentially zombie-adjacent. The fictional scientists tried to reproduce their friend’s experience by stimulating the laboratory neurons to fire in predetermined patterns, but the author cunningly makes this sound fundamentally impossible and ultimately pointless.
By various twists and turns, the project goes awry, until single neurons on shelves are being stimulated in increasingly arbitrary patterns, and even a determined physicalist would struggle to believe that the kind young man of the opening paragraph was still there in the lab, experiencing anything.
How could any individual neuron know what the other neurons were up to? Why would any neuron care if it fired a day early, or a day late? Why would the overall experiential process change if one neuron was substituted for another? How could the young man tell?
Zuboff was actually demonstrating the Hard Problem, though it wasn’t called that yet. I recognised a puzzle that was worthy of further attention, and I was hooked.
Most of the essays in “The Mind’s I” remain relevant and readable 40 years later, and I still recommend it, though the science of consciousness has come a long way since then, and there are several new books dealing with the issues. (I’ll list some at the end of my next post.)
For me, “The Mind’s I” piqued an interest that eventually steered me towards the study of clinical neurology.
Along the way, I also spent some time in neuroscience laboratories. I have spent hundreds of hours staring at living brain slices, patching on to individual neurons, playing with their voltage, and watching them fire. It is indeed difficult to think of consciousness as emerging from any collection of such neurons.
Over the years, I have not stopped thinking about the philosophical challenges of consciousness and the central puzzle laid down by Chalmers, and others before him:
Why aren’t we just neurons firing in the dark?
At the most superficial level, this question obviously can’t be answered with the tools of neurology or neuroscience, because zombies share our neurological structure — that’s Chalmers’ point, after all.
Some readers, I imagine, would love to hear that I have collected a folder of neurological cases that defy scientific explanation, thereby proving that the human brain houses a miracle. They would like me to report that I am humbled by our vast ignorance and ready to declare consciousness deeply baffling, incapable in principle of ever being understood. I’ve assessed patients who have survived cardiac death; perhaps I’ll be able to report that they glimpsed a hidden reality beyond this mortal domain?
Well, I have no such cases to report. Over many years of daily exposure to injured or diseased brains, I have seen no reason to doubt that the mind is a high-level aspect of physical neural processes that are entirely compatible with the rest of science.
Physicalism is the default hypothesis among neuroscientists and neurologists, so an anti-physicalist might point out the potential for confirmation bias. Perhaps I am caught within a bubble of such bias, surrounded by clues I repeatedly miss, psychologically unable to see the problems with physicalism, protected by a similar blindness in my colleagues.
Such a situation cannot be totally excluded, but I think it unlikely. My physicalist perspective on consciousness has not just survived daily exposure to all of the strange and wonderful (and tragic) cases that clinical neurology has put before me; the physicalist hypothesis has been an indispensable tool. I literally could not do my job without trusting the tight relationship that is observed between brain and mind. Several times in the course of taking a patient’s history, I must consider the causes of the experiences they report, and the hypothesis that the causes are physical seems as secure to me as the principles of heavier-than-air flight.
That is, I consider physicalism to be about as speculative as the accepted principles of aerodynamics, which I trust with my life whenever I step on a plane.
That does not mean I consider the philosophical issues to be trivial. And of course — before someone says it for me — it is generally not appropriate to treat correlation as proof of causation. But the mind-brain relationship is not merely correlative in a statistical sense; it is much tighter, more detailed and more extensive than many outsiders realise. As far as anyone knows, the so-called correlation between brain and mind is 100%, because they are different aspects of the same thing.
Unfortunately, not even that counts for much, in the face of Chalmers’ challenge, because all the correlations I observe in patients are ultimately obtained in the third-person. I would draw the same conclusions in a zombie world, if there were such a thing.
Some readers with an anti-physicalist bent would probably be content for me to concede I have not found the answer to the Hard Problem in any of the MRI scans I’ve studied over the years, just as I could not see it when studying single neurons in the lab.
To them, I can happily concede that the answer to Chalmers’ puzzle isn’t to be found in brain scans, or anywhere within conventional neuroscience… Because it’s not that sort of puzzle, and mechanistic solutions have been ruled out by Chalmers’ framing. In fact, despite my ongoing daily dealings with the hardware of cognition, the aspects of consciousness that interest me the most are still to be found in the same armchair musings that fascinate philosophers.
For instance, science has difficulty accounting for elemental experiences, like seeing red, feeling pain or smelling cinnamon — these are the raw flavours of experience usually known as qualia. I accept that we won’t be able to derive qualia from circuit diagrams, so if we think that such a derivation is appropriate, and if we believe that it is the only avenue to understanding, then neuroscience will indeed remain stumped. Accordingly, no amount of neuroscience training or clinical neurology will be much help. I also accept that no functional theory of consciousness is likely to capture, in a truly satisfying way, the ineffable sense I get when I consider that I am a conscious being, conscious of being conscious.
Indeed, all the elements I find on private introspection seem to be in stark contrast to what my daily clinical practice tells me about the contents of our skulls. From a scientific perspective, I know that we have a lump of meat in there, and nothing else, and yet my primary sense of mystery does not relate to the meat but to the ethereal, magical mind we seem to feel from the inside.
The contrast between brain and mind can seem perplexing, and it might seem self-evident that the Hard Problem is no more than an honest acceptance of that contrast. Chalmers’ landmark paper was titled, “Facing Up to the Problem of Consciousness.” It is easy to think that, as soon as we face up to the contrast, we have a problem. And we do, but that’s not the Hard Problem as posed by Chalmers. Future posts will explain why in more detail, but recognition of the contrast is not enough to create Chalmers’ framing. We also have to believe that zombies are possible.
Before there was the Hard Problem, the philosopher Colin McGinn suggested that the “hard nut” of the mystery of consciousness was this one: How can technicolour phenomenology arise from soggy grey matter?
Is McGinn’s Hard Nut the same as Chalmers’ Hard Problem?
It comes close, but I would say “no”, because McGinn’s version does not allude to zombies, and McGinn, a physicalist, did not think zombies were possible. McGinn’s phrasing merely points out the puzzling contrast that we all recognise. His solution to the puzzle was one for which I have sympathy: he proposed that we face cognitive closure on this particular puzzle, much as quantum physics is just too hard for chimpanzees to understand. We lack the cognitive capacity to see how technicolour phenomenology arises from soggy grey matter, but that does not mean reality itself can keep these apart. As far as we know, the phenomenology is inevitably there whenever the conditions are right; we just can’t see why, because we are beings with cognitive limitations.
This approach is sometimes called mysterianism — and I suppose I could be characterised as a meta-mysterian.
McGinn’s distinction between what we can understand and what is happening in the reality we are trying to understand is an important one, and I will make similar observations in the posts ahead. I don’t agree with McGinn that we lack the cognitive capacity to understand these issues, but I do agree that our own cognition can get in the way, if we let it.
Clinical neurology has not provided (and cannot provide) any direct answer to Chalmers’ version of the puzzle of consciousness, but I believe it has nonetheless provided a perspective that has helped clear the fog. It can cast light on more neutral versions of the puzzle that merely start with a recognition of the mind-brain contrast, and then look for sources of that contrast. That is, I think we can resolve the Hard Problem, not by accepting its terms and finding a miracle, but by realising that it is conceptually flawed, and then studying those flaws in light of our own cognitive structure.
One of the biggest flaws hinges on the assumption that zombies are logically possible.
It doesn’t really matter whether we can imagine zombies. I can certainly imagine them, and I will assume that you can, too (perhaps with some misgivings). Our ability to engage in that cognitive exercise without an immediate sense of contradiction doesn’t mean that they are actually logically possible.
Like many before me, I’ve come to the conclusion that appealing to our intuitions is a very poor way of trying to settle the areas of contention. On the question of zombies, and all the other difficult issues in this space, we need to consider whether logic itself can lead us to any stable, coherent view of reality.
I think it can, and that stable conception excludes the possibility of zombies, forcing us to consider other conceptions of “experience”.
Conversely, including the potential for zombies in our view of reality leads to a conceptual mess — and, unlike McGinn, I think we can say why.
When I mention to friends that I am interested in the philosophy of consciousness, they often assume that I professionally outrank the philosophers with whom I disagree: I know about the brain; philosophers merely guess from atop their ivory towers. Philosophers are likely to take the opposite view: they ponder subtleties that will be missed by the foot-soldiers struggling in the consciousness trenches.
The truth is that philosophers and neuroscientists each need the perspective provided by the other. Better science leads to better philosophical questions, and better philosophy informs science.
The value of clinical neurology in addressing these issues does not lie in any real prospect that a neuroscientific approach to consciousness will ever answer Chalmers’ Problem directly; it lies in the way that it opens up an important meta-perspective.
In my study of neurology, I am often humbled by the complexity of the brain, and I know I won’t live to see all of its secrets revealed, but I am aware of its limits, too.
And those limits turn out to be a rather important piece of the puzzle.
Like any clinician who deals with consciousness, I’ve looked behind the curtain; I’ve seen cognition break and mend and break again. When the machinery of the mind is exposed in this way, certain approaches to the mystery of consciousness become less appealing, and many popular arguments in the field become unconvincing.
To anyone familiar with the brain’s wiring, for instance, it is apparent on functional grounds alone that scientists won’t be able to derive colour as we know it from circuit diagrams: their neuroanatomy prevents this type of explanatory journey. That means that the so-called “explanatory gap” for qualia cannot be taken as evidence of a fundamental metaphysical mystery.

The brain seems causally complete, after decades of close study by neuroscience laboratories around the world; that means anti-physicalist views are running out of space to hide a non-physical consciousness. It is clear that the brain is not structured as a device that detects consciousness like a radio, so conceptions of consciousness that appeal to an interaction between the physical brain and a non-physical field of mentality are in direct conflict with a wealth of neuroanatomical evidence.

Consciousness might seem like an ethereal cloud of awareness from within, but many of my stroke patients demonstrate, in the most tragic way, that awareness itself has a left and right half, split on anatomical grounds. The ethereal fog has a blood supply; it can be cut with a surgeon’s knife.
Most of all, through the accidents of neurology, human cognition has been exposed as modular. That modular architecture has deep implications for the philosophy of consciousness, because all philosophy is conducted in the brain, using the very organ we are trying to understand. Most philosophical questions in this field can be recast as attempted cognitive journeys through a structured space of cognitive possibilities.
There is no idealised logical space in which we can consider the issues; we can only think about our brains with our brains.
The physicist Emerson M. Pugh said, “If the brain were so simple we could understand it, we would be so simple we couldn’t.”
I think this alludes to an important part of the difficulty, but like McGinn’s suggestion that the puzzle is beyond us, Pugh’s diagnosis is unduly pessimistic… Because we can understand ourselves, provided that we do not have unrealistic expectations of what the explanation will feel like, and provided we allow for our own cognitive modularity.
These considerations lead to a part of the puzzle that has not been given adequate attention:
What goes on in the brains of philosophers when they contemplate these matters?
This is one way of approaching what Chalmers has called the Meta-Problem of Consciousness, which is the challenge of explaining why a physical brain might come to be puzzled by consciousness in the particular way that has been captured by the Hard Problem. Why would a brain complain about something that purportedly had no effects on its cognition?
“At least if we accept that all human behaviour can be explained in physical and functional terms, then we should accept that problem reports [that is, reports of a Hard Problem] can be explained in physical and functional terms. For example, they might be explained in terms of neural or computational mechanisms that generate the reports. […]
We can reasonably hope that a solution to the meta‑problem will shed significant light on the hard problem. A particularly strong line holds that a solution to the meta-problem will solve or dissolve the hard problem. A weaker line holds that it will not remove the hard problem, but it will constrain the form of a solution.” (Chalmers, 2018.)
I am one of those who hold the “strong line”: resolution of the Meta-Problem will completely dissolve the Hard Problem, by showing it to be an ill-posed problem. As long as we think of consciousness in terms of a human-zombie difference, the Meta-Problem is necessarily the study of a faulty concept.
The Meta-Problem, unlike its more famous sibling, is well-posed, and solvable.
Whereas the functional structure of the brain cannot possibly account for a human-zombie difference, it can account for belief in that difference (to the extent that beliefs can be expressed in physical behaviour, such as writing about consciousness). That puts belief in a human-zombie difference in a precarious, paradoxical place, threatening the entire coherence of Chalmers’ framing.
When we understand why a physical brain might come to claim it houses a non-physical entity, we will also understand that this claim rests on faulty assumptions, and this blog will be an exploration of those faulty assumptions.
The Hard Problem and the Meta-Problem have almost identical linguistic content, but they stand in a representational relationship to each other such that they have very different conceptual content.
Take the entire canon of literature on the Hard Problem, put it in quotation marks, and ask:
Why do humans say this: “…”?
The Hard Problem is still there inside the Meta-Problem, so it has not gone away, but the Meta-Problem provides some clarifying distance from the puzzlement. We can talk about the brain’s representation of paradoxical concepts, instead of committing to those concepts.
Even when we have addressed these issues, zombies will probably still seem conceivable, but — in my experience — considering the puzzles of consciousness from a meta-cognitive perspective clears up most of the main points of confusion. In my own case, vague intuitive awe has been replaced with respect for what is, at its centre, an incredibly complex set of subtle conceptual traps that human brains — including mine — find difficult to think about. It is easy to make a misstep, to muddle representational levels, and to draw faulty conclusions.
Fortunately, the puzzles of consciousness remain interesting after stepping out of Chalmers’ framing. Because I can still imagine zombies, I can still make myself feel puzzled in a Chalmers-like way whenever I want; it is just that I no longer feel confused about consciousness. In many ways, I actually find the puzzle more intriguing than I did all those years ago, and more rewarding to think about, in much the same way that evolutionary theory is far more fascinating and multilayered than the Book of Genesis.
Chalmers’ landmark paper describing the Hard Problem was published in 1995 — it turns 30 this year.
That means I’ve been contemplating Chalmers’ Hard Problem for about a decade longer than his famous expression has been around. The puzzle has acquired a new name, but it is the same one that captured my attention all those years ago.
In blog posts to come, I will have quite a bit to say about what Chalmers’ phrase implies, and how the Hard Problem fits into the broader study of consciousness. To anticipate, most of what I will be saying will be quite critical of Chalmers’ framing. I think the Hard Problem is built on a number of conceptual errors, and I will explain why. Along the way, we will need to consider all the other classical philosophical battlegrounds in this field: Mary the Colour Scientist, Searle’s Chinese Room, Leibniz’s Mill, and so on.
The conceptual framing around the Hard Problem is what I will call hardism.
In opposition to all the classic thought experiments that try to inflate the mystery of consciousness, I will be presenting the case for demystificationism: the project of placing consciousness in the natural world without resorting to strange, science-busting miracles. Where this project butts up against the Hard Problem, it can be considered under the label anti-hardism.
Ultimately, I will be advancing an alternative view that is best considered a form of virtualism, which is partially related to illusionism — a controversial philosophy that is often unfairly summarised as the claim that consciousness is an illusion.
I do not think that consciousness is best characterised as an illusion, but I agree with the illusionists that there is often an illusion involved when we think about consciousness, particularly if we think about it from a hardist perspective. Relentless exposure to the cracks in the facade of consciousness has contributed to my conviction that there is no fundamental mystery involved, just the illusion of one. The brain is a machine that often thinks it contains a ghost, and that ghost is illusory. But the source of that thought is also a good candidate for being called “consciousness”, so I won’t be arguing that consciousness does not exist.
I will be arguing that we should reconceptualise it, instead — which is what the illusionists were trying to say all along.
It is common to subdivide opinions on consciousness based on what people think about the fundamental elements of reality. Is everything physical? Is reality ultimately mental? Are there two domains in reality and, if so, what is their causal relation? The answers to these sorts of questions usually produce groups such as physicalists, idealists, dualists, panpsychists, functionalists, computationalists, illusionists, eliminativists, and so on.
I think a more useful division of opinions is one based on attitudes to the Hard Problem. Does the Hard Problem identify a genuine mystery, or not? Around this divide there are many different positions, but we can identify two broadly opposed camps: those who accept the legitimacy of the Hard Problem and those who do not.
These camps should be recognisable to most seasoned debaters in this field; they were covered in an excellent 2023 article by Rafael Harth over at LessWrong.
An excerpt from Harth’s article:
Camp #1 [anti-hardism] tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. In other words, once we've explained the full causal chain that ends with people uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.
Conversely, Camp #2 [hardism] is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Therefore, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.
Note that many of the dichotomies mentioned in Harth’s post (and this one) can be viewed in representational terms. The Meta-Problem stands in a representational relation to the Hard Problem, because it is about the Hard Problem — and about belief in the Hard Problem.
In trying to sort through these issues, I have found it useful to take a Two-Brain Approach to consciousness, distinguishing the brain under study (the generic Subject’s Brain) from the brain attempting the explanation (the Scientist’s Brain).
The Scientist’s Brain is thinking about the Subject’s Brain, and both of them are thinking about the world. I will have more to say about this approach later.
The two-camp conception of the debate reflects this same representational layering.
Camp #1 (anti-hardism) looks in the Scientist’s Brain to explain the Meta-Problem in terms of the mechanisms of scientific puzzlement; the Hard Problem is embedded, but not taken at face value.
Camp #2 (hardism) primarily adopts the perspective of the Subject’s Brain; it accepts the Hard Problem at face value and attempts to understand the issues from within the representational system that introspects and finds consciousness.
Anti-hardism looks to our meta-cognition to explain our representational foibles and our limitations, accepting that science is only a representation of the reality it seeks to describe, aware that scientists are flawed navigators of that representation; hardism accepts things as they seem, challenges science to keep up, and posits mysterious extras when it falls short.
As Harth notes, it is often difficult to see and to communicate across this major conceptual divide. Mutual incomprehension is common; debates go off the rails quickly, with each side accusing the other of not seeing the real issues. The posts ahead will be more appealing to people whose natural bent is to reject hardism, but I hope there will be material of interest for hardists, too.
At any rate, it is important for both camps to at least try to understand the other, because the Meta-Problem remains an issue in both camps. It does not disappear for those whose primary interest is the Hard Problem taken at face value, because the Hard Problem also features as a complex behavioural act that must be accounted for in functional terms. Indeed, for those who think of experience as a non-functional entity, the Meta-Problem must be solved to account for everything we say about experience — given that experience itself is posited to be causally cut off from our language output.
We will not have imagined zombies in enough detail to assess their coherence until we can account for what they say, so solving the Meta-Problem is also a necessary precursor to holding a robust opinion on zombies.
Despite my anti-hardist stance, much of what I will say will affirm the importance of the mystery as a puzzle worthy of attention, and in some ways I stand between camps, with sympathy for both perspectives. Chalmers and I are essentially interested in the same issue, and we started our inquiries from a similar point of puzzlement.
How can we reconcile the mind we find on introspection with the physical brain described by science?
Like Chalmers, I think it is useful to consider this question with zombies in mind — we can gain insights by considering the zombie framing. Unlike Chalmers, I think logic alone rules out zombies, which means that the Hard Problem is ultimately misguided. From our shared starting point, we have gone in different directions, and some readers who have followed Chalmers to a place of mystification might not have seen the fork in the road that they passed along the way.
It is my hope that this blog will draw attention to that path not taken.
So why is this blog called The Zombie’s Delusion?
Some readers might fear that I am about to declare, improbably, that we are all zombies… And they would not be completely wrong, because I do think we lack consciousness of a certain type. But my blog title actually refers to a little-discussed issue hiding within the whole notion of zombies.
Why do zombies talk about the Hard Problem?
It is indeed my claim that you have to be delusional to believe in zombies, but if that were my main point, this blog would be called The Zombie Delusion. That apostrophe-s is there in the title because zombies themselves have to be delusional to believe in zombies. They can have no valid grounds for thinking they house a special experiential extra, so they must be making errors on their way to this conclusion.
But they are our cognitive isomorphs. Whatever errors they make, we make, too.
That means we can understand our own faulty thinking by considering theirs — without the distraction of a contentious non-functional extra.
In his book, The Conscious Mind, Chalmers essentially concedes this, writing:
Now my zombie twin is only a logical possibility, not an empirical one, and we should not get too worried about odd things that happen in logically possible worlds. Still, there is room to be perturbed by what is going on. After all, any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.
To strengthen the sense of paradox, note that my zombie twin is himself engaging in reasoning just like this. He has been known to lament the fate of his zombie twin, who spends all his time worrying about consciousness despite the fact that he has none. He worries about what that must say about the explanatory irrelevance of consciousness in his own universe. Still, he remains utterly confident that consciousness exists and cannot be reductively explained. But all this, for him, is a monumental delusion. There is no consciousness in his universe—in his world, the eliminativists have been right all along. Despite the fact that his cognitive mechanisms function in the same way as mine, his judgments about consciousness are quite deluded.
(The Conscious Mind, Chalmers, 1996, emphasis added.)
There is much to criticise in this pair of paragraphs, and I will return to this excerpt in later posts, but at least Chalmers has acknowledged a central flaw in his framing. His promotion of the Meta-Problem can be viewed in a similar light: he is acknowledging that every word of the Hard Problem can be traced to functional causes.
If we can explain why zombies are wrong about their own consciousness, we will have solved the Meta-Problem, so the Zombie’s Delusion is essentially another name for the Meta-Problem of Consciousness.
Chalmers’ preference for the less pejorative name is understandable, but it also hides the severity of the issue: I will be arguing that the Meta-Problem is necessarily a problem about the conceptual errors of hardists.
Within every hardist zombie, there lurks a delusion that is isomorphic to the reasoning of their human counterpart.
Dissecting that delusion is a necessary first step in understanding consciousness.
Post Script.
The concepts discussed here are difficult to think about and even harder to explain, so I welcome feedback and constructive discussion.
My next blog post will have a much simpler aim: steel-manning the possibility of zombies by trying my best to imagine them. That should be easy. All we have to do is imagine a bunch of physical neurons firing in the dark, right?
Minus any special extras.
What could possibly go wrong?
Comments.
Let’s strap in then and see if you can do it. I’m skeptical you will be able to, but I’ll read
Wow, fantastic post! This isn't the one I read first but obviously in retrospect it should have been, it really explains where you're coming from. Fascinating to hear from a clinical neurologist who is also so well versed in philosophy. I found this blog from reddit (I'm u/lordnorthiii).
I too originally got interested in philosophy of mind from reading "The Mind's I", probably around 1998 or so? The reason I originally picked it up started with Martin Gardner, who wrote the column "Mathematical Games" for Scientific American. I learned that after Gardner retired, Douglas Hofstadter took over the column, renaming it "Metamagical Themas" (an anagram of Gardner's title). Reading these columns, I decided Hofstadter was instantly my favorite author, and I ended up reading "The Mind's I" because of that (and of course Godel Escher Bach).
Interestingly, Chalmers' PhD advisor was none other than Hofstadter. I believe Chalmers and Hofstadter are (or at least were back then) functionalists "at heart" in some sense, both strongly believing that AI can be conscious just as humans can. However, at some point I think Chalmers realized there was a very real disconnect between functionalism or other physicalist accounts and what he experienced every day of his life. I give him credit for going "against the grain" and positioning himself as a non-physicalist at a time when I think most people thought it was a matter of time before neuroscience fully explained the mind.
You're obviously on the other side of the coin from Chalmers, but you not only understand the non-physicalist side but also have genuine sympathy for it, something some anti-hardists lack. I hope you keep up the blog.