I was off-grid recently, with time on my hands, and I tried to write something that could serve as the prologue or introduction to a book of the same name as this blog.
It was difficult to know what to include, and what to leave out, and soon the piece blew out to 22k words. It’s obviously not fit to serve as a prologue, then. Nonetheless, it offers a sketch of my overall position, and I’d be interested to see which parts hit their mark.
I’ll break it into four instalments of about 5–6k words each.
Some material will overlap previous posts.
Introduction
In a hypothetical world with the same physics as ours, but no subjective consciousness, your physical duplicate is reading this book. Its brain is parsing these sentences along with you, but – for reasons that must remain obscure – it’s experientially dead.
Your twin is what’s known as a philosophical zombie.
In the same way that it is like something to be a human but it’s not like anything to be a lump of meat, your zombie twin has the blank experiential insides of a corpse. It’s missing the mysterious spark that adds the subjective flavours. The feelings. The pains. The awareness. The colourful mental interior. The felt sense of a self.
Its atoms and ions follow the dictates of physics, going left, right or wherever as the forces of attraction and repulsion demand. Indeed, its nervous system operates according to standard neuroscientific principles, and its eighty-six billion neurons behave exactly the same as yours. It is physiologically alive and clinically awake. It reacts to the environment. It shares your intelligence and your cognitive skills.
It will even claim to be conscious.
But – we must imagine – this claim is delusional. No one is inside, watching its cognitive activity unfold. All of its neural patterns are meaningless, or might as well be, and its speech is a brute expulsion of sounds that the zombie itself does not experience.
Some philosophers would argue that, in not experiencing its own language output or in not feeling the cognitive precursors to its speech, the zombie fails to understand what it says, or to mean anything by its words. Some would even say – have said – that this lack of meaning saves your twin from the charge of delusion. Even so, we can examine what it says, because it speaks our language; we can assign meanings to its words based on their causal relations to various entities (some real, some fictional), and come to understand the source of its false claims.
Because some of its claims are about our special consciousness, that analysis should be informative, and it’s a major goal of this book.
For many people interested in the nature of consciousness, the mere conceptual possibility of zombies seems to highlight a profound mystery. What accounts for what you have and your twin lacks? Your twin doesn’t have to exist to raise this issue; its existence just has to be an outcome that could have happened within the bounds of logic. How and why did the laws of nature on this world diverge from those on the zombie’s world? Science as we currently understand it seems condemned to describe human brains as chemical machines, runs the argument, but we know from direct introspection that we possess something more, leaving us with the mystery of accounting for the extra. Why didn’t we all turn out to be zombies?
This is sometimes known as the Hard Problem of Consciousness, a term attributed to the philosopher David Chalmers (1995), though the problem has been around for much longer, and it is descended from what was previously called the mind-body problem, which was the general challenge of reconciling the mental domain with physical reality.
[Quote]
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information‑processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.
It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information‑processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all?
(Facing Up to the Problem of Consciousness, Chalmers, 1995. Emphasis added.)
As shown here, the Hard Problem can be expressed without mentioning the Z-word, but Chalmers often appeals to zombies in laying out the mystery, and I think zombies offer a good starting point. They illustrate Chalmers’ concerns and my objections, in equal measure. If the allusion to B-grade movie zombies is distracting, think of zombies as brute chemical machines that emerge from an evolutionary process on a world that has exactly the same objective characteristics as ours, including the same physics, but no subjective properties.
It is not possible to overstate the influence that this formulation has had on discussions of consciousness. The Hard Problem, in various guises, has become the central issue in the philosophical study of consciousness, and any neuroscientist interested in the phenomenon is obliged to consider the issue that Chalmers has highlighted.
Chalmers’ contribution was cleaving the original mind-body problem into two: on one side, he placed several Easy Problems about cognition, which he felt were potentially amenable to functional explanations; on the other, he claimed that there was a Hard Problem about a much more resistant experiential core.
[Quote]
Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained. (Chalmers, 1995).
Although this cleavage into easy and hard problems has been welcomed by many people, and Chalmers is usually credited with focusing our attention on the resistant core of the mind-body problem, there is a sense in which he has made things worse, legitimising a framing based on vague, paradoxical intuitions.
In putting the Hard Problem beyond “the performance of all relevant functions”, Chalmers is flirting with the possibility of zombies (and elsewhere he makes this explicit), but there is enough ambiguity here that he might be interpreted as making a lesser claim: that the functions of his “experience” are not yet obvious. Which they’re not, so he has an unassailable position to which he can retreat. And he’s only asking questions, at this stage, so the Hard Problem seems to come from an easily defended position of innocent curiosity. Its assumptions are difficult to see, the more tenuous extensions of the framing can be denied when challenged, and the mystery of consciousness ends up slippery and vague. The target of the Hard Problem is never precisely defined; there is no clear fact that must be explained, so no rebuttal can get an unambiguous purchase on it.
Nonetheless, I will be arguing that the Hard Problem, in its usual formulation, is ill-posed. It comes from a place of confusion, leading us into a conceptual dead-end from which it is difficult to escape, but we can expose its errors.
This book is about the mystery that bothers Chalmers, his Hard Problem. That means it is about “experience”, a term that is used frequently in the paragraphs quoted above, but has never been adequately defined, by Chalmers or anyone else. There are reasons that this difficult aspect of consciousness defies definition, and there are steps we can take to add precision, but the vagueness is intrinsically tied up with the intractability of the mystery – we can’t define something that has no functional effects. Worse, we can’t propose any definition of Chalmers’ “experience” that our zombie twins will not propose at the same time, for the same reasons, while missing the intended target.
Fans of Chalmers’ framing will usually object that it is very clear what we are targeting. For instance, we can approach the idea of “experience” as what we imagine your zombie twin to be missing… And for most people that seems to capture a clean idea.
Unfortunately, this approach is disastrously ambiguous. It is probably not obvious yet, but the expression, “what we imagine your zombie twin to be missing”, can be read in two different ways; those words permit a conceptual slide between two incompatible entities that both have a claim to satisfying the same descriptor… And we can’t think of zombies without partaking in that slide. Which means that the Hard Problem, the zombie thought experiment, most of the consciousness literature and this book are all about two things that are easily confused with each other.
A major part of the perplexity that Chalmers laments arises directly from that conflation.
We can expose some of the central confusion by asking you and your twin to answer a simple question:
How do you know you are not the zombie in this scenario?
Think about the question for a moment and actually try to compose an answering sentence without committing yourself or your zombie twin to a cognitive error. You can write your answer down, or say it out loud, or – an inferior option – you can sound out the words within the privacy of your own head, knowing that your chosen sentence could have been manifested in the external world, so it must have a footing in physical reality. But actually compose an answer. The point is that you should explicitly commit your brain to one conclusion or another.
When you’re ready, read on.
Your twin’s answer will probably be wrong: it will claim to be the human, to possess the special extra differentiating it from zombies, but it can have no good basis for this claim because it lacks the extra by definition.
This should be somewhat disconcerting, undermining the entire scenario and raising the spectre of paradox.
(My zombie twin won’t be wrong, at least not in quite the same way, because it will not claim to possess such an extra; instead, it will argue that such an extra is incoherent. Perhaps, if you agree with me, your twin will also avoid simple falsehood… But that leaves our zombie twins denying the possibility of zombies, exposing a different face of the same basic issue. The scenario is still incoherent.)
Unless you are overly fearful of contradiction, though, we can put aside concerns about paradox and continue with the exercise. It should be possible for you to imagine your zombie twin (perhaps with some misgivings) and to consider what it is supposed to be missing, and then to consider why we (or other people more prone to fantastical musings) might believe that the entity underlying this imaginative effort is non-physical and functionally inert.
Do you see the ambiguity yet? Perhaps not.
Notionally, there are two entities that can be matched with the descriptor, “what we imagine your zombie twin to be missing”.
One interpretation applies to the reality in which you perform the imaginative act – to this reality, the only one we know. In that context, the expression refers to something you pick out with your cognition and then imagine as being absent in a hypothetical being. It doesn’t matter whether the hypothetical beings are possible or not; experience is what you picked out in yourself, here in this world.
The other interpretation applies to the reality you are being invited to imagine – not to the zombie world viewed in isolation, but to the meta-reality in which human worlds contain a special extra and zombie worlds do not. Fans of Chalmers’ framing will invite you to look for a contradiction in the zombie world, as if it were the only world being imagined in this exercise, but this is sneaking in the assumption that there are just two alternatives: the zombie world, lacking consciousness, and our own world, which is then cast as a world in which consciousness is a non-physical extra. But we don’t know this to be an accurate conception of our world. In imagining a zombie world, we have also imagined a whole new human world, and populated it with a conception of consciousness that is not necessarily compatible with our actual world, or with what we picked out on our own world at the start of the exercise.
The local entity (the one you find on introspection within this reality) and the special non-physical extra (on the human world of the imagined reality) are not necessarily the same thing, and I will be arguing that they cannot possibly be the same thing.
Unfortunately, these two entities are usually not adequately demarcated when laying out the zombie thought experiment or the Hard Problem, and for many readers it might not yet be clear that we really need to draw this distinction.
But we do, so let me refer to them as the source and the target of Chalmers’ concept of experience, and compare them.
Firstly, there is the functional, cognitive source of the central concept of experience – the thing or cognitive feature or set of processes that led both you and your twin to claim possession of the special extra. The source is what both of you appealed to within your cognitive structure when you both defended your human status, providing your identical answers. We’re invited by Chalmers (and fans of his framing) to think of this source as not having any of the features of interest – the inner awareness, deep blue, and so on are all missing – but we are not obliged to accept this verdict, or we can insist on more nuanced views that question whether any aspect of consciousness is deep blue in the way being imagined. Even under Chalmers’ framing, the source provides the cognitive origin of every word (and image) that we use to describe or reference the features of interest, and it also provides all of the reasons we might think about those features.
Secondly, there is what your twin lacks, if we accept the scenario as described. This is the hypothetical target of the central concept of Chalmers’ “experience”, taken at face value, on the assumption that zombies are logically possible. The target is the special human extra, some mysterious, non-physical version of consciousness that could go missing with no behavioural consequences, if only the laws of nature were slightly different. Within the proposed scenario, you and other humans actually have this special extra, but your twin only claims to have it. When you both explain why you’re sure you’re not the zombie, the presence or absence of this special extra makes no difference to the generation of your answer. It cannot account for the zombie’s belief, or yours. We are invited to imagine that within this entity are all the features of interest, all the subjective flavours and awareness – but, as a central part of the zombie thought experiment, we must accept that if these features did go missing, we would still continue to describe them identically. Were the mysterious features to disappear, we would have no way of detecting their disappearance. Experientially, everything would stop, but cognitively, we would not even notice when the lights went out.
If we pressed a button that suppressed experience for ten seconds, turning us into zombies, taking away the awareness, and the capacity to experience deep blue and middle C, and so on, we could not tell whether the button worked. We would have suppressed the target of these concepts, but not touched their source.
Whether we can really separate the causal source of our words on this topic from what we want those words to target is somewhat controversial – but only among philosophers who have explored the nooks and crannies of the ongoing debate about consciousness. Among most lay people, and even among many philosophers and scientists, the distinction I am trying to draw is usually ignored.
When we imagine zombies, there are many reasons for considering source and target to be separate. The source is shared with your twin; the target differentiates you from your twin. The source is physical (or it is a cognitive feature entailed by physical processes in the brain); the target is non-physical (and not even entailed by physical reality as some abstract logical extension). The source entity is involved in the causal network of physical reality, but the target is not.
The source entity is what you grasp within your cognition when you refer to your own subjective consciousness. Conversely, a defining characteristic of the intended conceptual target of the zombie scenario is that what goes missing must have no effects on your physical brain – after all, you and your twin are in perfect behavioural and cognitive synch.
Now, zombies might not be logically possible, because – as I will argue – consciousness is likely to be a physical process in the brain (or some unavoidable cognitive interpretation of such a process), such that it automatically gets duplicated in the proposed scenario. It would be contradictory to suppose that consciousness could be both duplicated and absent.
There’s also that hint of paradox we chose to overlook earlier.
If the target is non-existent or incoherent, that would mean that the biggest difference between source and target is that the source exists and is worthy of explanation, but the target is just a bad idea.
If I’m right, the target concept in the zombie thought experiment will turn out to be something incoherent, even silly, and I will need to debunk the idea of a mysterious extra in humans.
But that debunking would still leave the source entity. There’s something in human minds, something in the vicinity of the standard concept of consciousness that takes part in the causal network of cognition, and is therefore physical (perhaps with caveats), but is imagined by some philosophers as non-physical and non-functional and therefore as something that could be distilled off, in principle, to leave the bare behavioural machinery of the physical world.
This book is about that entity, too.
The source entity is the true reason for our puzzlement, but it doesn’t get nearly as much attention as it deserves. It has a number of puzzling properties and it even has some claim to being non-physical in interesting ways… If the target entity does not exist, as I will argue, we must look for all the features of interest in the source entity – we must look to the source to account for our sense of awareness and our appreciation of the quality of deep blue, and so on. This is clearly challenging. Explanations don’t suddenly become easy just because we reject the target entity. But it’s a necessary first step. It is almost impossible to discuss the true nature of the source of our puzzlement while the debate is captured by the special human extra, consciousness imagined as an experiential add-on to the physical brain.
In other contexts (when we are not talking about entities as nebulous as consciousness), it is usually taken as obvious that the source and target of a concept are not necessarily the same entity.
Suppose that a herd of wild horses ran down the main street of a village at midnight and this event was reported by unreliable witnesses as a stampede of unicorns. The next morning, the villagers’ excited talk is notionally about the unicorns, but its motivation can be traced to the horses. In this situation, the identification of “what the villagers are talking about” is ambiguous. Are they talking about the real horses, or the fictional unicorns? Or both at once?
Note that, even in this clear-cut case, there are at least two ways of describing the status of the unicorns. We could say that the unicorns didn’t exist (or, equivalently, that they were fictional or illusory). Or we could say, somewhat clumsily and less accurately, that they did exist – but they were actually horses.
In the case of humans and zombies, I propose, there is the potential for similar ambiguity. The expression “what we imagine your zombie twin to be missing” can be interpreted in two ways – the topic could be the source or target of our imaginative effort. And, again, we have (at least) two ways of describing the status of the target concept, if we believe, like me, that the source and target are not aligned. We can say that the target of Chalmers’ “experience” concept doesn’t exist… But the source of that concept does. Or we can say that the target exists… But it lacks the features required for the scenario to play out, and it is actually just the source, misimagined and misdescribed. If we are not careful, we might end up denying the existence of the target for perfectly good reasons, but face an accusation that we are foolishly denying the existence of the source.
It would be nice to think that we could talk about consciousness sensibly without getting stuck in the details like this, but the history of consciousness debates has convinced me that we can’t – and that glossing over these relationships is a chief source of confusion.
Indeed, this sort of confusion is rife in published discussions of consciousness, and it is on display here, in this famous quote from the philosopher, Galen Strawson:
[Quote]
“What is the silliest claim ever made? The competition is fierce, but I think the answer is easy. Some people have denied the existence of consciousness: conscious experience, the subjective character of experience, the “what‑it‑is‑like” of experience. Next to this denial — I’ll call it “the Denial” — every known religious belief is only a little less sensible than the belief that grass is green.” (Strawson, 2018).
Strawson is denouncing a school of thought known as illusionism; this is the controversial view that consciousness as it is often conceptualised does not exist. I am not quite an illusionist when it comes to consciousness, but the position I will be defending comes close. Indeed, I’d be happy to endorse the Denial, provided we are talking about the target concept of a human-zombie difference. The targeted entity doesn’t exist on our world; it is not even coherent. I depart from the illusionists in that I think that belief in this target entity comes from bad philosophy, not from an illusion.
But none of that means I deny the source entity, the “experience” driving all of this puzzlement, though its relationship to reality is not straightforward, either.
Unless you’ve spent quite some time discussing zombies and thinking about the issues with particular care, the key ambiguity is probably still not obvious, or the issues don’t divide for you along the lines I’ve suggested. You might believe that the source and target concepts of the special human ingredient are the same. Or you might want to insist that the shared source of the matching claims you and your twin produce is not the interesting part of consciousness; that common causal source might account for the matching motor output in each of you, but it is not what you find when you look inside. You introspect, you find subjective consciousness with all of its rich experiential flavours, and that’s what you imagine missing in the zombie. The zombie lacks whatever you find on introspection, so what I am calling source and target are actually the same concept.
If that’s where you are now, then I hope to open your eyes to a different way of viewing the issues, or at least convince you that we need more precise terms to discuss the issues sensibly, especially if we want to argue – as I will – that the target of this thought experiment is incoherent.
Ultimately, we will need separate names for these two entities, because the source entity directly involved in the cognitive act of introspecting and finding consciousness is usually interpreted as though it had all the properties attributed to the mysterious extra, and our standard concept of consciousness ends up as a messy hybrid of both. The source entity, which is confusing enough in its own right, gets enveloped by a confusing halo of conceptual embellishments that make it even more mysterious, until eventually, in the brains of some philosophers, it inspires belief in the target entity and all the concepts merge.
And that target, if we take it seriously, constitutes a threat to science.
As Chalmers notes, if your twin were a genuine possibility – if a zombie stood beside you, physically identical to you but unconscious – it would make no sense to look for a physical explanation of what distinguishes you from your twin; the difference between the two of you would have to reside in non-physical elements of reality. After all, any attempt to account for consciousness in physical terms could not possibly explain a difference between you and your twin, because all the physical details are the same. Any explanation of consciousness based on your neurobiology or your computational structure would apply equally well to your twin, leaving the difference unaccounted for.
If we find the notion of zombies too fanciful, we can reach a similar conclusion merely by starting with the standard scientific view of physical reality, which tends to be zombified by default. We only need to consider a bland conception of physical reality as mindless atoms banging around in the void, and then imagine that neurons and brains offer more of the same – just more atoms, with more complicated arrangements. If we compare that superficially plausible situation with our own consciousness, known to us from our privileged subjective perspective, science seems to have left out the experiential component of reality and to have no sane way of incorporating it. The Hard Problem, then, is the challenge of finding a place for consciousness within a world of matter and physical forces.
But here, if we are not careful, our concept of a zombie can either fall apart completely or gain a false plausibility because of the source-target conflation re-appearing in a different guise. If there are two conceptions of experience, the original source entity and its embellished re-imagining as a non-physical extra, then there are also two potential conceptions of zombies, which we must carefully keep apart.
Should zombies be defined in terms of the target concept, as beings that lack some mysterious ghostly essence that floats outside the physical domain? Or should zombies be defined as lacking subjective consciousness, whatever that might turn out to be? In the second case, zombies would be defined relative to the true, unconfirmed source of our concept of subjective consciousness – as what we actually find on introspection.
If source and target are the same thing, then this distinction doesn’t matter. But if they are divergent, it matters a lot. The logical possibility of the first sort of zombies would pose no threat to conventional science at all, whereas the second sort would require us to extend our concept of reality.
This might seem back-to-front, but that’s because we are imagining a scenario in which the relevant entity is missing.
If consciousness is essentially physical, as I will argue (with caveats), or it is a high-level feature of some physical systems (as most scientists believe), then the special ghostly essence envisaged in the traditional zombie scenario does not exist anywhere, not even on our own world. That would mean physical duplicates could easily lack the silly, mysterious extra; humans would lack it, too. If we have defined zombies simply as beings that lack the mysterious extra, this would make zombies possible after all – but only by virtue of an imprecise definition. But then the extra would no longer be incoherent…
We can chase this line of reasoning in a circle, if we want to, but we should stop when we get to beings that do not really lack anything relative to humans. These pseudo-zombies would not be dead inside in any interesting way; they would just be humans by another name. In particular, they wouldn’t lack anything important relative to humans, because they wouldn’t lack anything with the critical properties of: 1) being something humans actually possess; 2) being what humans call “subjective consciousness”; and 3) being something that matters in some way.
If we reduce the human-zombie difference to zero, then of course zombies can exist, but they are no longer worth debating. To threaten conventional scientific views, a zombie must be dead inside in a way that is not true of humans.
In what follows, when I talk about zombies, I am explicitly not talking about pseudo-zombies that are known to lack an inconsequential fiction. When Chalmers argues that the mere possibility of zombies would falsify physical conceptions of consciousness, he is also not intending to talk about mere pseudo-zombies… But all the concepts in this domain flow into each other, and it can be difficult to notice when we have crossed an important conceptual watershed.
When we merely consider atoms in the void, these distinctions are not specified, and we can easily draw unreliable conclusions. The Hard Problem, I propose, relies on conflations of this nature. Zombies are rendered more plausible because pseudo-zombies are entirely plausible. A scientific characterisation of our world as lacking a mysterious extra is plausible, but conceding this seems like opening the door to let the genuine zombies in.
If genuine zombies could exist, even as a mere logical possibility never instantiated in any actual universe, that would render subjective consciousness in this universe irretrievably mysterious – because we could have been zombies, but we’re not, and whatever saves us from being zombies will be undetectable to any objective methods. Neuroscience would be condemned to groping around the edges of the puzzle of consciousness, perpetually baffled by an entity that can’t be detected by any instrument or accommodated within any theory based on physical reality. Consciousness would have to be treated as a fundamental ingredient of the universe, not explicable in terms of any other elements, and not explicable as an evolutionary adaptation. Indeed, in various forms, this is what some philosophers believe.
Chalmers has a point, of sorts. If zombies were truly possible – or if a zombified view of physical reality were legitimate – this would put subjective consciousness outside the physical domain. When we imagine zombies, we are imagining consciousness as non-physical, and if we find them plausible, we are opening up the mystery of why we aren’t zombies.
In what follows, I will call this attitude “hardism”, which can be taken as a broad term for the entire set of disparate philosophical positions sympathetic to the legitimacy of the Hard Problem.
We can reach a similar conundrum by considering a functional duplicate of a human brain, rendered in computer circuitry, with every neuron faithfully modelled to have the same input-output characteristics as its counterpart in your own brain. In theory, we could hook up that circuitry to a robot that behaved just like you, and we could imagine that it was dead inside, telling ourselves that it was a digital zombie merely mimicking consciousness. And then we could insist that there was a Hard Problem accounting for the experiential difference between you and your robotic copy.
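To make the idea of neuron-level functional modelling a little more concrete, here is a minimal sketch using the leaky integrate-and-fire neuron, a standard textbook idealisation. This is purely illustrative: the parameter values are conventional defaults, not measurements from any real cell, and real neurons are vastly richer.

```python
# Illustrative sketch: a leaky integrate-and-fire neuron, the simplest
# standard idealisation of a neuron's input-output behaviour.
# Parameter values are conventional textbook defaults, not measurements.

def lif_step(v, input_current, dt=1.0, tau=20.0,
             v_rest=-65.0, v_threshold=-50.0, v_reset=-65.0):
    """Advance the membrane potential v by one time step.

    Returns the new potential and whether the neuron fired a spike.
    """
    # Potential decays toward rest, driven up by the input current.
    v = v + (-(v - v_rest) + input_current) * (dt / tau)
    if v >= v_threshold:
        return v_reset, True   # spike: reset and report a firing event
    return v, False

# Drive the model with a constant current and count the resulting spikes.
v, spikes = -65.0, 0
for _ in range(1000):
    v, spiked = lif_step(v, input_current=20.0)
    spikes += spiked
```

A functional duplicate, in Chalmers’ sense, is just this idea taken to its limit: swap every neuron for a model whose input-output mapping matches it, and the behaviour of the whole brain follows, whatever we decide about its inner life.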
For many people, digital zombies are much more plausible than the classical sort. We know that a zombified account of any computer program is possible because a computer literally follows a set of discrete instructions, line by line. In principle, nothing (apart from time constraints) could stop us from following the robot’s entire story in low-level terms, and many people think that such a low-level engagement with the story reveals its essential nature. (This attitude is captured in an infamous thought experiment known as Searle’s Chinese Room, which will need to be rebutted later.)
If functional copies in silico were destined to be unconscious, this would pin conscious experience to some obscure biological essence that we could not hope to explain in functional terms. In some ways, this version of the puzzle is even more challenging, because if digital zombies were possible in our universe, we could no longer appeal to deep, universe-level mysteries to account for consciousness; we would be restricted to the difference between carbon-based meat and silicon-based circuits, and we could not appeal to any functional effects of the posited biological extra. Accounting for consciousness in evolutionary terms would be impossible, because an essence stripped of all functional consequences would be invisible to natural selection.
*
We can achieve a similar result simply by adopting a skeptical stance in the face of any new theory of consciousness that appeals to functional processes in the brain. We don’t need to commit to a position that all functional theories will necessarily fail; we can hold out hope for some future theory of consciousness based on conventional physical reality, while rejecting every actual theory that comes along. Whatever a neuroscientist might say in defence of their favourite physical or functional theory of consciousness, we can ask: But why is that neurobiological process conscious? Why should atoms, moving this way rather than that, or neurons firing in some particular configuration, give rise to subjective experience? We don’t have to mention zombies by name, or even agree that they are possible, but our skepticism would be based on the idea that we can imagine the candidate functional processes without the experiential extra.
Chalmers takes this attitude in his landmark Hard-Problem paper, as follows:
[Quote]
It is common to see a paper on consciousness begin with an invocation of the mystery of consciousness, noting the strange intangibility and ineffability of subjectivity, and worrying that so far we have no theory of the phenomenon. Here, the topic is clearly the hard problem—the problem of experience. In the second half of the paper, the tone becomes more optimistic, and the author’s own theory of consciousness is outlined. Upon examination, this theory turns out to be a theory of one of the more straightforward phenomena—of reportability, of introspective access, or whatever. At the close, the author declares that consciousness has turned out to be tractable after all, but the reader is left feeling like the victim of a bait-and-switch. The hard problem remains untouched. (Chalmers, 1995)
The feeling that some intangible essence will inevitably be untouched by any functional approach is at the core of hardist concerns. If we are sympathetic to those concerns, we can use them as a filter to separate promising theories from the doomed ones; we can insist that we will not accept any theory of consciousness if it can be recounted in functional zombified terms. We can introspect, and then demand that the theory accounts for… this, the inner feeling we are turning our attention upon as we imagine it missing in our notional twins.
Chalmers’ Hard-Problem paper takes a similar approach to individual theories of consciousness, such as Bernard Baars’ Global Workspace Theory.
[Quote]
According to this theory, the contents of consciousness are contained in a global workspace, a central processor used to mediate communication between a host of specialized nonconscious processors. When these specialized processors need to broadcast information to the rest of the system, they do so by sending this information to the workspace, which acts as a kind of communal blackboard for the rest of the system, accessible to all the other processors.
[…] The best the theory can do is to say that the information is experienced because it is globally accessible. But now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.
We can take a similar attitude to the individual flavours of experience (often known as “qualia”). To be taken seriously, theories of colour vision must allow us to derive the subjective flavour of redness; theories of pain must make us empathise with the system described, convincing us that some functional arrangement of matter really feels pain, and so on.
I will call this filter Chalmers’ Razor, which can be summarised as the idea that we can reject any theory of consciousness if it can be rendered in functional terms while we imagine the subjective perspective to be absent.
The Razor can be employed explicitly, with full acknowledgement of its devastating, extensive reach. Chalmers uses it this way – albeit without employing the phrase, “Chalmers’ Razor”, which is my own. He argues, in the same paper, that all functional theories must fail, and he takes this as evidence that we must introduce consciousness as an explanatory primitive, a base fundamental essence like mass, charge or spacetime. The Razor can also be employed accidentally, sometimes by neuroscientists themselves, eroding confidence in candidate theories that would otherwise seem promising.
There is nothing wrong with noting that something about these theories feels inadequate. Chalmers is only reporting – presumably accurately – that he feels underwhelmed, so what is my complaint here?
The problem is that Chalmers’ Razor slashes indiscriminately. Chalmers is complaining about the omission of properties that he has conceptualised as non-functional and that he accesses only from within a private subjective perspective. A description of that perspective, even an accurate one, will not feel the same as a live instantiation of that perspective. Textbook neuroscience is always implicitly zombified, or can be treated as such if we are so inclined – because of course it has a functional emphasis. At best it can only be about a subjective perspective described from a distance, without recreating the feel of that perspective.
If we are not careful, we might condemn ourselves to being permanently dissatisfied with all potential theories of consciousness, even correct ones. We can always ask a bridging question, but our ability to ask it is so unconstrained that it is ultimately unhelpful. Hardists don’t have functional criteria for evaluating theories of consciousness – they openly insist that consciousness is not functional. Instead, they approach the puzzle of consciousness with inflated expectations of what understanding should feel like; inevitably, they come away with a feeling of disappointment. They then take that dissatisfaction as evidence of a mystery so deep that our model of reality will need to be extended.
Chalmers is right to point out that no current theory of consciousness can meet his demands, but are the demands reasonable? What could possibly count as a viable theory, with this filter in place?
Suppose that consciousness is actually a physical process, and a completed theory of mind were placed on your desk tomorrow, correct in every detail (delivered by a time-travelling neuroscientist from the future, say, or coalescing from thin air by a miracle and endorsed by God in a foreword overflowing with praise). If the theory rests on physical processes and it does not merely presuppose the existence of mentality as fundamental, it will still be amenable to the Chalmers-inspired charge of leaving out experience. Skeptics will be able to say it describes some lesser functional process related to cognition, one of the “straightforward phenomena”, because the underlying physical processes could be followed at a mechanistic level while ignoring the special first-person perspective of the experiencer.
Almost any physical process will be susceptible to this charge. In other words, if Chalmers’ Razor is accepted as legitimate, it can be applied as a universal filter for any functional approach to consciousness.
For instance, can you imagine that the brain is ultimately performing neural computation and nothing else, simply processing sensory inputs to produce intermediate computational states that eventually – by some method, any method – produce behavioural outputs? If you can (and this means no more than accepting the standard neuroscientific view of the brain while ignoring subjective perspectives) then it seems that mere neural computation cannot possibly explain consciousness. The input-output relations that constitute such computation can, at least in principle, be understood at the mechanistic, local level, neuron-by-neuron. Following the causal story of the brain from an objective perspective (and assuming that random events turn out the same way each time), we will always reach the same functional result, producing the same outputs from the given inputs, without needing to consider any higher levels, including the alleged subjective consciousness that is supposed to be present.
For any functional theory of what the brain does, in physical terms, we can imagine that the corresponding feels are missing. Hardists can insist that we might merely be describing your twin, rather than you. There is no physical or functional difference between you and your twin, so however much we try to refine our theory, it is impossible to uniquely describe you while excluding your twin, in any terms that science can accept – unless our conception of reality admits consciousness as a functionless extra.
Note that this conclusion does not depend on how we bundle up the computation into functional modules, what fancy names we use to describe those modules, what mathematical principles we apply to the relations between the cognitive subsystems, or even what the neural activity is supposed to represent. All of this conceptual edifice remains an optional aid to understanding, and all of the behavioural outcomes rest on the fundamental base level, which is simply brute computation performed by chemical machines – machines we can consider as empty mechanisms, if we want to.
If we don’t need subjective consciousness for some candidate theory to work, and the behavioural result is the same in the zombified account, then it can’t be a theory of subjective consciousness.
Or so says the intuition that is captured in Chalmers’ Hard Problem.
This is Chalmers’ point, of course. For him, the resolution is simple: consciousness is non-functional. But the extensive reach of his Razor should make us question whether it might incorporate some faulty assumptions. When we apply the Razor to some theory, how do we know we’re not just ignoring consciousness in the physical story, or getting distracted by the shift in perspective that necessarily comes when we’re describing someone else’s consciousness and comparing it to our own? Don’t all descriptions leave out what they merely describe?
I think the central hardist intuition is unreliable, the logic of the Razor is faulty, and hardism – while understandable – is deeply misguided. Part of the work ahead is to explain why the hardist intuition cannot be trusted. That explanation is not easy to articulate, because hardism itself is based on the idea that the explanatory target is ineffable and essentially indefinable – a necessary consequence of embracing the fantastical notion that consciousness can constitute a difference between functionally identical systems, as exemplified in the zombie thought experiment.
To be continued…