The Ghost in the Machine
The physicalist philosopher Gilbert Ryle coined the expression “the ghost in the machine” to characterise a problematic view of consciousness that cast it as something mysteriously separate from the mechanisms of the brain. Later this phrase became the title of a book by Koestler, and then the name of an album by The Police (my first exposure to it).
Ryle, writing in 1949, considered this dualist view to constitute a category error, and I agree — the mind is likely to be something the brain does or represents, not a separate entity that needs to find its own place in ontology. For now, I will merely borrow his metaphor, which has new significance as we enter the AI era.
It seems particularly relevant to the theme of this series: pseudo-epiphenomenality.
That’s a mouthful, so let’s back up.
Epiphenomenalism is the idea that consciousness has no causal effects in the physical world.
As an attitude to consciousness, epiphenomenalism has a long history (some philosophers have found traces of epiphenomenalist ideas in ancient Greek philosophy), but it was only named as such in the late 1800s:
Epiphenomenalism is the view that mental events are caused by physical events in the brain, but have no effects upon any physical events. Behavior is caused by muscles that contract upon receiving neural impulses, and neural impulses are generated by input from other neurons or from sense organs. On the epiphenomenalist view, mental events play no causal role in this process. Huxley (1874), who held the view, compared mental events to a steam whistle that contributes nothing to the work of a locomotive. James (1879), who rejected the view, characterized epiphenomenalists’ mental events as not affecting the brain activity that produces them any more than a shadow reacts upon the steps of the traveller whom it accompanies. (Stanford Encyclopedia of Philosophy, emphasis added.)
The term is also sometimes used with weaker meanings, which will be considered in the next post, but I will always use it in this strong sense: epiphenomenalism implies that consciousness is something separate from the physical actions of the brain, not logically entailed by those actions and exerting no important causal effects on the physical brain. (If consciousness exerts redundant effects that make no difference, but it is still considered as being outside the physical processes of the brain, then this will also count as epiphenomenalism in the discussion that follows).
In this strong sense, there is no meaningful difference between epiphenomenalism and what we could call “zombie-possibilism”, the belief that philosophical zombies are possible. When I use the term “epiphenomenalism”, it is a synonym for the belief that zombies are possible.
To be frank, I think epiphenomenalism is an absurd philosophical position, but not because I think there is anything foolish in the concept of immaterial essences per se.
After all, I am writing this post on a device that can receive whole movies as invisible fluctuations in an electromagnetic field, which would have seemed like pure magic just a few generations ago. Also, quantum physics has done away with the idea of a strictly mechanical universe. I am told that ghost electrons can pass through slits and interfere with the probability distribution of where real electrons end up, and that, in such cases, the distinction between the real electrons and the ghost electrons is fundamentally unclear not just to experimenters but to the universe, so ghostliness is a part of reality.
Ryle’s metaphor invokes the idea of an ineffectual ghost haunting a machine, which sounds kinda silly — and possibly he meant to dismiss the idea of non-physical consciousness as what we would now call “woo” — but the metaphor need not be pejorative. Paradigmatic machines are drab and industrial, whereas ghostly entities could include all sorts of wonders that defy the mundane reaches of mechanical explanation.
Furthermore, from the inside, I relate much better to ghosts than to machines; I feel like an immaterial spirit inhabiting my body. If there were good reasons to believe in a non-physical mind, I would actually welcome the prospect of being more than a chemical machine.
By some accounts, of course, ghosts are not entirely epiphenomenal; supposedly they haunt houses and rattle the windows at night. Never mind. Huxley’s steam whistles and James’s shadows are not entirely epiphenomenal, either; they are part of the known physical world, and have effects. All metaphors for epiphenomenal entities necessarily miss their mark, because we analogise to things we know about, and we gain that knowledge through causal processes, where it is reliable, or through acts of invention, where it is not. Whistles are heard. Shadows are seen. Ghosts… Well, you can pick between the window-rattling kind and the fictional kind. Neither is an example of a real epiphenomenal entity.
The part of Ryle’s metaphor that I think is useful is the contrast between: 1) immaterial ghosts that are free to do their own inscrutable thing, indifferent to the constraints of physics and too insubstantial to pull levers; and 2) machines that are tangible and generally predictable. Machines follow rules. Everything they do, they do for a reason. If there were epiphenomenal entities involved in consciousness, it would be appropriate to think of them as ghostly. And the notion of ghosts is clearly inspired by folk conceptions of consciousness. The body dies, and the immaterial essence breaks free. When they are metaphorically applied to consciousness, ghosts are returning to their conceptual roots.
Ryle’s metaphor is still apt in 2025 because many people, inspired by dualist intuitions, insist that something tells them they house some special experiential extra that is more than the workings of their brain. They might not admit to being epiphenomenalists, but they often accept that the special extra plays no direct role in behaviour.
What is that something?
By which I mean, not “What is the special extra?”, but: What is the something that tells some people about the special extra?
Somewhat surprisingly, there is no widely accepted term for this central player in the consciousness debate, the “something” in the cognitive machinery that inspires the puzzlement about a ghostly extra, as distinct from the extra itself. Much has been written about the ghostly target of dualist intuitions, but relatively little about the mechanistic source of those intuitions. David Papineau (2002) provides an important exception, with his discussion of conceptual dualism. In his defence of illusionism, Keith Frankish has talked about “quasi-phenomenal properties”; he is referring to a set of functional, cognitive mechanisms that include those contributing to epiphenomenal belief, but his term could be taken to mean the virtual properties being represented by the brain (“inner redness”, “the ghost of consciousness”), rather than the actual properties of the brain that lead to its being misunderstood from within.
David Chalmers, in his 1996 book, touched on the idea of non-phenomenal causes of qualia-related speech with his discussion of “phenomenal judgments”, but he also tried to distance his position from epiphenomenalism, and he took such a deflationary approach to phenomenal judgments that their central role as the sole source of epiphenomenalist belief was obscured. (Since then, he has rekindled attention to this important issue in the form of The Meta-Problem of Consciousness, discussed below.)
Hang on, say fans of the ghost. We have plenty of names for what puzzles us: “phenomenal consciousness”, “qualia”, “experience”, and “what-it’s-likeness”.
Unfortunately, all of these terms conflate the source of puzzlement with the proposed non-functional target.
This conflation seriously undermines the coherence of the debate, because the cause and the posited target of dualist beliefs are not necessarily the same thing, and for many popular conceptions of the special extra, including all epiphenomenal notions of phenomenal consciousness and qualia, they cannot possibly be the same thing.
Qualia, for instance, are the features in our heads that set us off on failed explanatory journeys conducted within the physical world, leading to physical evidence in the form of typed words, but they are also the entities that some people imagine as floating outside the physical story and incapable of causing any physical events. They are supposedly what Mary the Colour Scientist discovers on her release, so they must have some way of letting her physical brain react to their existence, but they are also outside the physical domain, with no known way of influencing a single neuron. They must always be at least two incompatible things, and we need distinct names for those things.
As far as anyone can tell, the machinery of the brain runs according to its own physical principles, and, at the relevant scales, those physical principles seem to constitute a trustworthy characterisation of physical events. So why would the physical brain report the presence of a ghostlike consciousness? If we put aside the idea of interactionist dualism (on the basis that it is without supporting evidence and it would require a breach in the known laws of physics), no ghostly essence can modify the mechanisms of the brain, so any brain proposing a special non-mechanical extra must be doing so for reasons that are independent of the posited extra.
This is the Paradox of Epiphenomenalism: all possible arguments for epiphenomenalism must have their true source in non-epiphenomenal entities, so the proponents of these arguments would hold the same views even if they were wrong; the arguments are independent of the truth of their claims.
And that’s where pseudo-epiphenomenality comes in.
Using Ryle’s metaphor, let’s suppose that consciousness is a ghostly essence, that the brain is a machine, and that these are distinct entities rather than different views of the same thing. If the ghost is epiphenomenal and does not interfere with the machine’s mechanisms, then, even if it exists, we cannot appeal to the ghost to account for the machine’s expressed belief in the ghost.
The ghost of consciousness is being proposed for machine reasons.
That’s a serious problem for fans of the ghost, but it’s good news if we want to understand the generation of belief in the ghost.
The Hard Problem and the Meta-Problem
In modern discussions, the dualist intuitions that Ryle was targeting are more often expressed as the Hard Problem of Consciousness.
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. […] It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all?
[…] the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained.
(Chalmers, 1995, emphasis added.)
Chalmers differentiates this problem from the Easy Problems of Consciousness, which refer to the functionally tractable aspects of cognition. For more details, see The Hard Problem, Part 1.
If the Hard Problem is primarily concerned with the ghostly side of Ryle’s metaphor, then The Meta-Problem of Consciousness is about the machine side. It is the challenge of accounting for why some people report that there is a Hard Problem. Why would a physical brain operating under the laws of physics report that its own workings are mysterious in a way that challenges physicalism?
Problem reports [verbal reports of a Hard Problem] are a fact of human behaviour. Because of this, the meta‑problem of explaining them is strictly speaking one of the easy problems of consciousness. At least if we accept that all human behaviour can be explained in physical and functional terms, then we should accept that problem reports can be explained in physical and functional terms. For example, they might be explained in terms of neural or computational mechanisms that generate the reports. (Chalmers, 2018)
As Chalmers notes here, the widespread discussion of the Hard Problem is itself a feature of the physical world, and so the existence and content of that discussion has antecedent physical causes, which should be amenable to scientific explanation, at least in principle. (If the Hard Problem and the Meta-Problem are completely unfamiliar to you, start here.)
Previously, I have defined “hardists” as those who accept the framing of the Hard Problem as outlined by Chalmers, but hardism is a philosophical position much older than Chalmers. The Hard Problem had not been named as such when Ryle spoke of ghosts and machines, but he clearly had hardism in his sights: hardists consider consciousness to be mysteriously incompatible with any functionally explicable biological process, which leaves consciousness detached from the machinery of biology.
Hardists usually offer two main reasons for thinking that consciousness is special: some mental properties seem irreducible (resistant to reductive explanation) and some properties — often the same ones — seem epiphenomenal (floating outside the causal processes of the brain).
These two issues are not always clearly disentangled in hardist discussions, but they can and should be addressed separately, because they have different origins and different theoretical implications.
The classic thought experiment pointing to irreducibility is Frank Jackson’s Knowledge Argument involving Mary the Colour Scientist, while the classic thought experiment suggesting epiphenomenality is the Zombie Argument, popularised by Chalmers. Hardist literature appeals to both of these thought experiments (and to a cluster of similar arguments that mostly reflect the same two concerns).
This series will be about the second of these issues, epiphenomenality, but its focus will be firmly on the machine, not the ghost. I won’t be considering ghostly essences themselves, except indirectly, when those essences are targeted by the concepts I want to analyse. In particular, I want to explore the brain’s generation of the claim that there is some mental essence or set of properties distinct from its own mechanisms and unable to influence those mechanisms.
My interest is therefore in what could be called “meta-epiphenomenalism”: How does belief in epiphenomenalism arise within the machinery of the brain?
To keep the issues separate, I will refer to the property of actually being epiphenomenal as “epiphenomenality”, and I will use the term “pseudo-epiphenomenality” to refer to any set of properties or inferences that help convince a cognitive machine that it has access to some internal features with the property of epiphenomenality.
As a matter of straightforward logic, epiphenomenality can never contribute to expressed belief in epiphenomenalism, so pseudo-epiphenomenality is not a side-show; it is the sole basis of epiphenomenalism.
Pseudo-epiphenomenality is what makes Chalmers’ Zombie Twin write that it possesses a special extra not found in its zombie twin. Somewhat more controversially, it is also what makes the bare physical brain of Human Chalmers express the same belief, and hence it is a key driver of hardism.
Understanding pseudo-epiphenomenality ends up being one of the so-called Easy Problems of consciousness, but it’s not one of the headline Easy items, and in fact it is a key part of resolving the Hard Problem.
There are plenty of aspects of consciousness that are conceded by all to be functional, but when we hand those reducible aspects off to science and concentrate on the difficult, mysterious end of the philosophical puzzle of consciousness, the two main residual issues are the ones that concern hardists: irreducibility and epiphenomenality. The Hard Problem is therefore two problems at once, the Hard Problem of Irreducibility and the Hard Problem of Epiphenomenality; these become four problems when we consider the associated Meta-Problems of Pseudo-Irreducibility and Pseudo-Epiphenomenality.
My concern in this series of posts will therefore be the bottom-right quadrant of the 2 × 2 schema shown below: the spurious reasons physical brains have for sometimes self-diagnosing epiphenomenality.
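(In words: the left column of the schema is the ghost and the right column is the machine; the top row is irreducibility and the bottom row is epiphenomenality. The four quadrants are therefore the Hard Problem of Irreducibility, the Meta-Problem of Pseudo-Irreducibility, the Hard Problem of Epiphenomenality and the Meta-Problem of Pseudo-Epiphenomenality.)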
Importantly, not all brains do this. Mine doesn’t, for instance, though I sometimes feel the tug of hardist intuitions, including the suspicion that my brain would get along fine without me. Those intuitions cannot have come from an epiphenomenal source. No one can sensibly claim to have seen or heard an epiphenomenal entity, and even if such an entity were hallucinated or strongly intuited, we can safely conclude that no epiphenomenal entity contributed to this process. That means the study of pseudo-epiphenomenality is not the study of inevitable brain processes, such as those responsible for inescapable illusions; it is ultimately the study of avoidable philosophical mistakes. I believe that, if academic philosophy had followed a different path, the conceptual missteps contributing to the fallacies of pseudo-epiphenomenality might even have been largely avoided — though I suspect that the intuitive traps involved would have caught some people regardless, and the problems of irreducibility cannot be dismissed nearly so confidently.
In the schema illustrated above, it should be noted that there is a representational relationship between the left and right halves: hardism is about the ghostly properties in the left half, but the right half in turn is about the origins of hardism — or at least the behavioural expression of hardism, such as hardist literature. Every ghostly item within hardism has a corresponding entry in the account of the machine; the machine represents the ghost.
That representational relationship does not require the vindicating presence of the ghost, and so nothing on the right half can be taken as reliable evidence of a ghostly extra.
As a physicalist, I don’t believe in Ryle’s ghost or in Chalmers’ human-zombie difference; I think this is a case of representation without vindication. For me, then, the ghostly half of this 2 × 2 schema is about spurious properties, and the machine half is about faulty or unjustifiable conclusions reached by the machinery of cognition in relation to those ghostly properties. But even hardists should be interested in the mistakes made in the right half, because they constitute the true reasons for their claims about the left half. A satisfactory physicalist solution to the overall puzzle of consciousness needs to account for the apparent irreducibility and apparent epiphenomenality of mental phenomena within a biological organ that does not really have either of those properties. A satisfactory hardist solution has the additional task of explaining how the machine, in positing an epiphenomenal ghost, managed to be right about something it cannot detect. The challenge of explaining this coincidence — the surprising correctness emerging from mechanical processes indifferent to ghosts — is what I call the Meta-Meta-Problem of Consciousness. How do the Hard Problem and the Meta-Problem, respectively a problem about a ghost and a problem about a machine, end up linguistically synchronised, with both of them remaining well-posed, despite being about entirely different ontologies, one of which is straightforwardly mechanistic while the other is supposedly irreducible and epiphenomenal?
Note that this third problem in the family is only a problem for hardists.
The tight relationship between the Hard Problem and the Meta-Problem undermines Chalmers’ proposed division between the Easy and Hard Problems, because understanding the origin of the Hard Problem must be part of the Easy Problems of Consciousness, according to Chalmers’ own definitions — and this is something he now concedes. But that means every sentence written about non-functional experience was itself written for reasons that are amenable to functional analysis.
The Hard Problem, once it is embedded within quotation marks, turns out to be an (upper-case) “Easy” Problem.
(Indeed, the Hard Problem is only truly Hard if it is imagined as being well-posed, based on the actual existence of Ryle’s ghost or Chalmers’ non-functional experience and actual vindication of all the machine’s errors — but the Hard Problem has necessarily been posed as the result of the same functional processes that it laments as inadequate to account for its content, so it is arguably incoherent even before we get into the details.)
As regular readers will know, I am generally critical of the Hard Problem, but I think the Meta-Problem is an important and worthwhile puzzle. At least the Meta-Problem is a functional problem, and so it is amenable to functional analysis.
Anything we express puzzlement about is part of the causal network in which the puzzlement is expressed. For anything we can complain about, we can, in principle, explain where the complaint comes from. Raw feels might seem to offer no explanatory purchase, but any raw feels that truly had no functional effects would have to be feels that have never inspired a single word about qualia or phenomenal consciousness or any other anti-physicalist intuition; they would have to be raw feels that we can’t even know about — or feels that we know about without being able to talk about that knowledge, where “talking” includes anything we can subvocalise within the privacy of our own heads and any knowledge we pick out with ostensive pronouns. (“Redness is like that”.) Any knowledge of epiphenomenal feels would have to be causally quarantined from our language centres (and from anything that might be displayable on a likoscope). It would never get discussed by anyone except by happy coincidence, when they discussed something else, from the machine side of this schema, using overloaded terms that also managed to refer to the epiphenomenal feels on the ghostly side.
Until we have understood that something else, it is premature to get too worked up about the posited happy coincidence between the epiphenomenal nature of the posited ghost and pseudo-epiphenomenal aspects of the machine.
The Meta-Problem is often treated by hardists as a peripheral issue (Chalmers is a notable exception), but the Hard Problem and the Meta-Problem are largely isomorphic. They can both be split into two closely related halves, irreducibility and epiphenomenality, and they both have to account for the same corpus of literature with all the same arguments and thought experiments.
But they come with radically different theoretical commitments.
The Hard Problem, in its original Chalmersian form, is the challenge of accounting for experience on the assumption that irreducibility and epiphenomenality are both genuine aspects of some mysterious entity, Chalmers’ “materials for a bridge”.
The Meta-Problem, by contrast, is the challenge of accounting for why a physical brain might infer these properties, or at least report them — but without the assumptions that the properties are genuine or that they even contribute to hardist belief.
Without the problematic commitments, pseudo-irreducibility and pseudo-epiphenomenality are much easier to explain than their ghostly counterparts — we only need to understand the brain’s self-diagnosis of irreducible and epiphenomenal entities, without resolving the paradox of such properties genuinely arising in a purely physical universe. For instance, there could be cognitive reasons that reductive explanations fail, and there must be cognitive, non-epiphenomenal causes of belief in epiphenomenalism.
That doesn’t make the Meta-Problem lower-case easy. The Hard Problem suffers from severe under-specification, and it is about an undefined entity beset with serious conflations; much of this vagueness carries over to the Meta-Problem, which necessarily inherits all the same definitional confusion. Indeed, in terms of what we can express with language, the Meta-Problem covers exactly the same conceptual ground as the Hard Problem, syllable for syllable, comma for comma (albeit from a redeeming distance of one representational remove), and so it can’t be resolved without untangling what the Hard Problem is supposed to be about. “Experience”, supposedly. Whatever that is.
But at least the Meta-Problem is a functional problem. Where the Hard Problem is about an inscrutable ghost, the Meta-Problem is about a machine. Clean up the vagueness, and the Meta-Problem should be tractable.
Unfortunately, addressing the vagueness is far from simple. (For my own ongoing attempt, see A Guide for Translators.)
For a start, “experience” is not clearly defined in Chalmers’ landmark paper, Facing Up to the Problem of Consciousness (1995). Nor is it clearly defined in his follow-up book, The Conscious Mind (1996). To be fair, though, we sort of know what he means. (Or most of us do; some people see nothing mysterious in mentality.) Chalmers usually means qualia, the ineffable nature of deep blue, the experience of hearing middle C, the intrinsic nature of pain, and so on — but sometimes he means consciousness itself, the fact that it seems like anything at all to be a conscious being.
More often, he equivocates on this important distinction, combining qualia and their container under the common label of “experience” and addressing the conceptual intersection of both: consciously appreciated perceptual content.
As noted in other posts, Chalmers also equivocates on whether “experience” is intended to refer to the source (Σ) of his puzzlement or to his imagined non-functional target (Δ); for him it is all the same, and conflating these two concepts is a necessary part of imagining zombies. Ned Block, using the term “phenomenal consciousness”, runs into all the same ambiguities, and manages to introduce a few other conflations as well, as discussed elsewhere (see A Guide for Translators).
Adding to the confusion, Chalmers, Block, and other fans of the Hard Problem usually seem undecided as to whether the Hard Problem is supposed to be about the irreducibility or epiphenomenality of mental states (or both). These are two quite different ways in which experience might be “beyond the performance of functions”, but hardists typically combine them; they vacillate between confidence that reductive explanation leaves something out and reluctance to accept the epiphenomenalist implications of a complete functional story that has left out the feels.
Often this leads to closet epiphenomenalism.
Closet Epiphenomenalism
In online discussions and in the official academic literature, many hardists seem hesitant to accept the suggestion that they might be epiphenomenalists.
I think many of them are engaging in double-think. They want experience to have effects in order to avoid the Paradox of Epiphenomenalism, but they’re also content to let the physical story account for behaviour. They can watch themselves type the words, “I have qualia”, which puts the source of their beliefs in the physical world, but they also want to insist that the physical causal source of those typed words suffers from the same lack of qualia that Mary encountered in her textbooks. That leaves the non-functional feels with nothing to do and no causal role in the generation of hardist claims… But push hardists on the issue of epiphenomenalism, and they will insist that, yes, qualia obviously make themselves known and play some causal role. After all, they can type, “I have qualia”.
Qualia slip in and out of the causal story depending on which part of the puzzle is being considered. They’re causal, but they’re not in the substrate that accounts for behaviour. They’re effectively epiphenomenal, but they can’t be acknowledged as such.
There is no single coherent position here, just a dance in the fog.
For instance, despite championing zombies, Chalmers has written:
I do not describe my view as epiphenomenalism. The question of the causal relevance of experience remains open, and a more detailed theory of both causation and of experience will be required before the issue can be settled. But the view implies at least a weak form of epiphenomenalism, and it may end up leading to a stronger sort. (Chalmers, The Conscious Mind, 1996.)
Despite this aversion to open embrace of epiphenomenalism, many of the key players in this field have drawn a direct line from irreducibility to epiphenomenality, and sometimes this has been made explicit.
The standard hardist complaint that mental properties cannot be found in the physico-functional story (an irreducibility issue) is not often accompanied by a matching complaint that physical events have a causal gap and fail to account for behaviour; instead, it is usually emphasised that the physical substrate only accounts for behaviour, leaving out the qualitative feels. But of course, the hardist insists, the feels do something; they make themselves known by some irreducible, mysterious process.
The very fact that this is an inscrutable Hard Problem lets this contradiction persist in the framing. What would be intolerable in a machine that is supposed to make sense is forgivable in an irreducible ghost.
Thomas Nagel made oblique references to epiphenomenal ideas in his landmark Bat paper, initially as a casual aside — he “doubts” that the flavours of consciousness have any behavioural implications.
But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. (Nagel, 1974, What Is It Like to Be a Bat?)
The general thrust of the same paper is that Nagel frames objective biology as making adequate sense when considered in isolation, but he sees it as radically incomplete in its failure to account for the qualitative feel of subjective perspectives. This wouldn’t amount to epiphenomenalism if Nagel were merely making the claim that reality is difficult to study, but such a claim would also have made the paper relatively unexciting. There is no real mystery as to why human scientists cannot emulate the perceptual states of bats, and the failures of the scientists who study bats only become interesting if we imagine that some property has not merely evaded their conceptual grasp but also stands outside the objective reality that they seem to have grasped quite well in every other respect — including all the causal relations. Nagel seems to want the left-out what-it’s-likeness of bats to have a legitimate existence over and above the physical facts included in the objective story, and for that to work, the Explanatory Gap must reflect more than our ignorance in relation to how some physical events within one cognitive system appear to some other cognitive systems. And yet the physical events in Nagel’s framing seem to be causally complete, leaving the elusive what-it’s-likeness with nothing to do in the objective world. If this is not epiphenomenalism, then it comes so close that Nagel has more work to do in explaining how he avoids epiphenomenalism.
Frank Jackson, in turn, had Mary following the perception of blueness right through the nervous system all the way to the expulsion of the words “The sky is blue”. In Jackson’s framing, then, the physical story was causally complete, but it left out subjective colour. In case the epiphenomenalist implications were not already clear, Jackson initially published his story of Mary under the title “Epiphenomenal Qualia” (1982).
John Searle, with his Chinese Room Argument, also flirts with epiphenomenalism, in his characterisation of meaning as something distinct from the causal, computational aspects of cognition. But that’s a post for another day.
David Chalmers’ brand of hardism leans heavily on the irreducibility of qualia and the logical possibility of zombies, which he combines in his conception of experience. His landmark 1995 paper appealed to the Explanatory Gap when it posited “materials for a bridge” standing outside the functional story. Those materials for a bridge, in turn, are the same ones he imagines missing in a zombie. Chalmers’ paper also explicitly suggested that his psychophysical bridging laws are unlikely to alter physical events, because physics appears causally closed.
His later book-length defence of hardism (Chalmers, 1996) noted that his notional zombie twin talks of conscious experience with exactly the same words, and for exactly the same reasons, as Chalmers. The parallels with epiphenomenalism should be clear: whatever causes the difference in feel between human and zombie, it makes no difference to behaviour, not even to language acts that are ostensibly about consciousness.
Ned Block, as discussed in a previous pair of posts (starting here), was ambivalent about whether phenomenal consciousness was epiphenomenal, but he also spoke of phenomenal consciousness as what goes missing in zombies, hence casting it as potentially epiphenomenal.
Epiphenomenality and irreducibility are therefore clearly linked in the minds of many hardists, even when the epiphenomenalism is merely implicit.
Importantly, these two central properties of “experience” have an asymmetric relationship: epiphenomenality entails irreducibility, but irreducibility does not necessarily entail epiphenomenality. Physicalists could concede some form of irreducibility without conceding the plausibility of epiphenomenalism.
This reflects the logic that, if only we could prove that qualia were present in a physical duplicate of a human brain, by deriving them from neural circuits, zombies would be inconceivable. The converse is not true. Even if it were known that qualia were non-derivable, for instance, that would still not prove zombies to be possible. Reductive analysis is an attempted epistemic journey conducted within the space of cognitive possibilities. There are many potential reasons for irreducibility that do not rest on qualia and consciousness being truly outside the causal story. The required cognitive path might be ill-considered, for instance, or simply unavailable for contingent neuroanatomical reasons, like my inability to picture hypercubes.
The slide from irreducibility concerns to epiphenomenalism is rarely made explicit by those who are puzzled by both, and the ghostly nature of the explanatory target seems to blur the important boundaries, but the distinction is vital.
An important aspect of the consciousness debate, not usually acknowledged, is that there is a weak version of the Hard Problem that merely asks why some mental properties are (or seem) irreducible, and a much stronger form that also asks why some mental properties are (or seem) epiphenomenal. Online, most hardists I have engaged with talk of a single Hard Problem, not bothering to distinguish between these versions, and Chalmers’ original framing simply posited that experience was, at the same time, irreducible and capable of going entirely missing from the physical account.
This is unfortunate, because the plausibility of each version of the Hard Problem is different, and the weak version provides a point of retreat (a classic motte and bailey arrangement). The weak version is misguided, but it is only a couple of steps away from legitimate concerns. The strong version is indefensible. And yet a hardist might think that they have an unbreachable stronghold in that they can always challenge a physicalist to derive qualia from neurons firing in the dark, or find pain in the movement of atoms.
The recursive nature of the puzzle reinforces this tactic. Provide an account of why Jacksonian derivation fails, and qualia will be missing from that account, as well.
The weak, gap-focussed version of the Hard Problem has an epistemic basis; the strong, zombie-focussed version is a clear case of philosophical overreach, and it has ontological pretensions. Irreducibility refers to the resistance of mental properties to reductive explanation, so it reflects the failure to find expected properties within the physico-functional account; it is a failure encountered by human scientists during an attempted act of explanation; that failure could have a host of causes within the machine, including cognitive limitations, inappropriate expectations and poorly conceived questions. Epiphenomenality refers to the presumed lack of causal effects of some mental properties, not to our lack of cognitive access to those effects, and so it requires the additional conviction that the properties are not just lacking in the physico-functional account, but also lacking within the reality described by that account.
(Whereas irreducibility is essentially epistemic, but can be extrapolated to ontology, epiphenomenality is usually conceptualised in essentially ontological terms, by default. That is, epiphenomenalism is a belief about actual causation, not about the epistemic visibility of causation. As far as I know, there is no accepted term for the epistemic analogue of epiphenomenality, for not seeing causes that are actually present, and that’s why I am proposing one: pseudo-epiphenomenality.)
In practice, the line between epistemological and ontological issues is hopelessly blurred, at least within hardist literature. Irreducibility naturally has an epistemic focus — but hardists add ontological assumptions. Epiphenomenality naturally comes with ontological commitments — but hardists are necessarily motivated by epistemic concerns, and they generally cite epistemic evidence for their beliefs: the key hardist arguments are always about what people can know, conceive or imagine about phenomenal consciousness.
To a hardist, perhaps, this might seem like splitting hairs. Machine-like mechanisms should not have any features that are irreducible or epiphenomenal, and so these are both minor variations of the same puzzle. Machines can usually be understood by breaking them down into their component parts, so they should not be irreducible — but Mary. No important part of a machine is without causal effects, so physicalist science has no way to accommodate epiphenomenality — but zombies.
In the posts ahead, I will be arguing that the significance of both of these ghostly characteristics has been overblown, and that the thought experiments offered in support of them are ill-conceived, but I am much more sympathetic to irreducibility concerns, which are at least based on a grain of truth.
Most of my scorn will be reserved for epiphenomenalism — including the closeted kind.
That means I will be arguing for two different conclusions:
Brains probably do exhibit a limited form of apparent irreducibility, in that they have irreducible elements if we approach the puzzles of perception with the wrong questions in mind.
Brains can’t possibly provide us with any valid reasons for inferring epiphenomenality, and, even if brains had epiphenomenal elements, we would not know or care about those elements.
In short: weak hardism is mistaken; strong hardism is ridiculous.
Failures of reductive explanation necessarily arise from properties that can be identified by the machine, and hence they relate to targets of cognitive ostension; the machine must be differentially sensitive to the presence or absence of such properties to know that they are there and to realise that explanations have fallen short. True, there are nuances to be sorted out in terms of the ontological nature of the detected properties. Are they real, illusory, virtual? Are we pointing to the substrate or to what it represents? But, in broad terms, the property that inspires claims of irreducibility can be both the source and the target of the claim.
With epiphenomenality, though, the source and target of puzzlement necessarily part ways.
Epiphenomenal entities never puzzled anybody.
Once we carve off concerns about irreducibility, I propose that the Hard Problem is left with no real substance.
The major problem for strong hardists — those who believe in the possibility of zombies and therefore support epiphenomenalism — is that there can never be a defensible argument in favour of epiphenomenal elements of consciousness. Why would a machine ever concern itself with ghostly properties lying outside its own causal network? If the machine has detected a ghostly property and become puzzled by it, that property cannot possibly be epiphenomenal. If the machine has not detected such a property, then why are we talking about it?
Strong hardism incorporates zombie possibilism, and zombies expose the Paradox of Epiphenomenalism in a particularly stark form. Why would a physical brain in a human consider itself to possess something missing in its zombie twin? A zombie hardist would necessarily claim to possess the same human extra, for the same reasons, and be wrong — this is the Zombie’s Delusion for which this blog is named.
The inability of a non-causal extra to get itself talked about through any defensible chain of reasoning constitutes an obvious major problem for hardism, and, to his credit, it is a problem that Chalmers concedes. (Within a long chapter of his book, The Conscious Mind, he explores it under the label of “The Paradox of Phenomenal Judgment”, without really finding a resolution.) But, even knowing about this issue, it is customary for fans of the Hard Problem to insist that, somehow, human brains correctly identify and report the presence of some special extra, despite that extra not contributing to those reports, and despite the reports being word-for-word identical on Zombie Worlds. The feeling that consciousness is outside the causal loop is so strong that hardists are often prepared to put aside the Paradox of Epiphenomenalism even after it has been pointed out.
What could account for these strong convictions?
Why do physical brains, operating under normal causal processes within the physical domain, sometimes self-diagnose a non-physical character that is imagined to lie outside the very causal processes that lead to that diagnosis?
To resolve this issue, it doesn’t matter whether any epiphenomenal ghost or Chalmersian experience actually exists within a human brain, or elsewhere in reality, because the presence or absence of such an entity could make no possible difference to what we believe and report.
That means the challenges for this part of the puzzle of consciousness are much the same for hardists and anti-hardists. Everyone needs to account for why some brains claim to house non-functional epiphenomenal extras, and the truth of those claims does not play any role.
This has only been a brief survey of the conceptual territory, but it leaves everyone with a reason to understand the source of epiphenomenalist intuitions:
Strong hardists need to account for the words coming out of their own mouths without appealing to epiphenomenal extras. Equivalently, strong hardists must account for zombie claims of consciousness and qualia; without an explanation of why the physical brain self-diagnoses epiphenomenal consciousness, a hardist cannot claim to have examined a zombie in enough detail to declare them logically possible.
Weak hardists need to show that the stronger claims of their more committed hardist colleagues are misguided. They need to find a way of drawing a line between mysterious irreducibility and self-refuting epiphenomenality, and one way to do this is to show that all the usual reasons for believing in epiphenomenal experience are derived from fallacious lines of reasoning.
Anti-hardists need to account for the convictions of hardists, and show that their claims come from explicable functional sources, not from some magical intuition about a non-physical, non-functional domain.
Sources of Pseudo-Epiphenomenality
In the next two posts, I will lay out a more extended discussion of pseudo-epiphenomenality, but I will conclude this post with a list of some of the main factors. They can be considered in three groups: factors pertaining to the scientist’s cognitive grasp of the issues, factors at work in the subject, and features of the virtual, represented ontology in which phenomenal properties are placed when the brain builds its model of the world.
This is not an exhaustive list of the sources of pseudo-epiphenomenality, and each of these factors really requires a post of its own, but it is a start. (Feel free to suggest additional factors in the comments.)
Scientist Factors
Pseudo-Irreducibility and the Gap. The Explanatory Gap seems to put phenomenal properties outside the functional story, which appears to be causally complete, encouraging a conceptual drift to epiphenomenalism.
Redundancy of Explanatory Levels. The availability of multiple potential levels of explanation leaves high-level properties in a position of explanatory redundancy; they add no additional causation to a story told at lower explanatory levels.
Representational Dichotomy. Causation happens at the level of substrate, but phenomenal properties are considered within a representational context. Whether or not the representational conventions of a system are respected or even available from outside that system makes no difference to what that system does. What we can ignore seems to have no effects. (We can ignore what is displayed in the second room in the likoscopic lab.)
Virtualism. The virtual nature of many mental properties means that the targeted properties do not exist in reality in a recognisable form; their merely virtual existence prevents them from having effects. (The contents of the third room in the likoscopic lab make no difference to the first two rooms.)
The Phenomenological Fallacy. Hardism inappropriately assigns properties to brain-states that do not actually have those properties. There is no point in hunting for the blueness of a “blue experience” if no brain-state is blue, and the spuriously mislocated blueness will have no effects.
Subject Factors
Unfinished Science. The actual functional role of some brain properties is still unclear to cognitive neuroscientists. When we don’t understand the functional distinction between unconscious processing and conscious processing, we will find it easy to imagine that the poorly understood difference has no behavioural effects. Functional theories of consciousness tend to be abstract: it is easy to dismiss the value of a global workspace or attention schema, particularly when the empirical evidence of how the brain processes conscious thoughts differently from unconscious representations is scanty or contradictory. (By itself, this can’t imply epiphenomenalism, because we can say when we have been conscious of something, but as already noted we can say we have qualia, too; when the other factors are in play, ignorance of the function being ignored in low-level accounts contributes to the overall plausibility of epiphenomenalist intuitions.)
Time Lag. Some empirical evidence (such as Libet’s experiments) can be interpreted as showing that consciousness of behavioural decisions occurs after the fact, when the decisions are already inevitable.
Represented Factors
Represented Ethereality. Many mental properties are represented as being non-physical or as having no major physical effects. Our attentional grasp, for instance, is represented as passing through brick walls or delicate panes of glass without breaking them. The representation of the mind as lacking physicality involves omissions, where the brain’s model didn’t need to respect physical constraints, as well as constructive acts of commission, where the model adds explicit notions of non-physicality. From an evolutionary perspective, the representation of the mind as non-physical partly reflects the lack of any need for material metaphors in the brain’s representation of its own interiority (an omission). From within the learned component of the cognitive system’s model, the cultural environment and a lifetime of cognitive habit can exaggerate this non-physicality and add explicit notions of non-physicality (an act of creative commission).
Calibration of Embellishments. Our virtual models of reality are continually calibrated against the real world, leaving no room for major causal divergence between our private virtual ontology and actual ontology (at least at the everyday scale of objects and our interactions with them.) Perceptual processes can add embellishments with marked subjective effects, such as the visual pop-out of red objects in a visual scene or an attentional spotlight that can roam around the world in three dimensions, passing through physical objects, but those embellishments are constrained by calibration with other sensory modalities and with reality. Our mapping of photon wavelengths to a 3D colour space only affects the surface appearance of objects, not changing the way they are represented as behaving. Any perceptual embellishments added in generation of the model have to be extremely subtle in their departure from the actual causal story playing out in the external world; they can’t usefully be represented as having causal effects outside our own subjective perspective. For instance, if red objects were systematically perceived as substantially heavier or lighter than their real weight, this tendency would have adverse evolutionary consequences.
The first item on the list — ontological extrapolation from the Explanatory Gap — is worthy of special comment because it carries more weight than all the others, and it is probably what lets all the other factors persist largely unrecognised within the hardist framing. This first item also relates to a major theme of this post: the need to distinguish the Meta-Problem of Pseudo-Epiphenomenality from the Meta-Problem of Pseudo-Irreducibility, despite their close conceptual relationship.
A complete exploration of irreducibility issues is well beyond the scope of this post, but the most common way to slide into epiphenomenalism is to note that we can’t find phenomenal properties in their expected form when we study their proposed neural substrates, and then to conclude that they belong in some other domain. Having been divorced from the domain where all the causation is occurring, they cannot be assigned a causal role. At this point, the main conceptual choices are reconsidering the nature of the apparent irreducibility (the anti-hardist approach), or pressing on towards epiphenomenalism with its paradoxes (the strong hardist approach), or deferring a decision on the basis that this is a baffling mystery (the weak hardist approach).
In some ways, the drift from irreducibility concerns to epiphenomenalism is understandable, even though it leads to paradox. If we study the physical neurons of the brain with the mindset Chalmers has promoted, armed with no more than neuroscience in its current state as of 2025, we don’t encounter a compelling reason to believe that the neurons necessarily house a conscious being who is aware of a mind and all its rich experiential flavours. We don’t find the quality of deep blue and pain and the ineffable subjective presence of awareness (at least not in a convincing guise). We seem to get a bland functional story, instead, and that functional story seems to account for all the objective features of reality, but none of the subjective flavours. There is no particular reason, apart from faith in physicalism, to believe that this gap will be closed with advances in neuroscience.
On realising this, it therefore seems conceivable to many people that, had the universe been put together with a reduced set of natural laws — the objective laws just considered and found lacking — the bland story might have been the only story, playing out all of its physical effects in the experiential dark, zombified, with none of the subjective richness we obviously enjoy.
Provided it is not examined carefully, this linking logic between irreducibility and epiphenomenality seems straightforward. Mental properties like blueness and pain and subjective awareness don’t seem to be derivable from physical processes, so they don’t seem to be an intrinsic part of the physical account. But physics seems causally closed — this is not universally accepted, of course, but the failure of reductive explanation is not accompanied by matching causal gaps in neuroscience. It therefore seems as though physical processes could carry on without the special flavours of mentality.
The error in this line of thinking is that it fails to distinguish between explanatory shortfalls in the physical account and gaps in physical ontology itself. The faulty underlying assumptions are that the physical account should provide full cognitive access to all aspects of reality, and that distinct cognitive grasps require distinct entities being grasped. The resolution being rejected is that the missing properties are merely representations within a story that is entirely contained within the physical processes that seemed inadequate when grasped from an unengaged cognitive perspective. We access our own cognition from within a representational system, so this assumption of transparency ultimately means that hardists expect theories to furnish, in their own brains, examples of the perceptual states that have merely been described in a generic brain. This faulty line of thinking is often accompanied by the phenomenological fallacy that ascribes represented properties to the representational medium; the missing blueness is imagined to be a property of the mental state itself, and science can make no sense of this inappropriate framing.
Once the slide from irreducibility to epiphenomenality has been made, all the other items on the list reinforce the notion that the missing properties sit in another domain, exerting no causal effects of their own, because all the causation is exerted by the physical substrate.
If we dig down into the logic linking irreducibility and epiphenomenality, though, it wouldn’t make sense to cast experience as epiphenomenal even if irreducibility itself were an inexplicable mystery. Irreducibility is an epistemic issue; it could have a cognitive basis, and so it can’t provide evidence for an ontological extra. Furthermore, an epiphenomenal entity necessarily fails to account for any irreducibility that we can know about. Why would we set out to derive some property that had never let itself be known? How would we know that the physical story had left something out unless our physical brains had access to the entity that had been left out?
Here a strong hardist will typically say that our claims of special knowledge come from something else, some flavourless cognitive quirk that makes our mouths produce sounds that merely happen to describe the epiphenomenal domain.
Apart from requiring faith in an extraordinary coincidence, this still leaves hardists needing to understand the actual source of their own words, so accounting for the machine’s claims of a ghost remains a problem for everyone.
To be continued…