In this post, I consider a serious ambiguity in the way that the term “phenomenal consciousness” is used in discussions of the Hard Problem.
Does the term relate purely to what zombies lack, which would leave it inapplicable to humans in a physicalist universe? Or is it applicable to humans regardless of our ontological beliefs, with people on either side of the hardist/anti-hardist divide merely disagreeing about whether it comes from the functions of the brain or from some other aspect of reality?
Two physicalist philosophers walk into a bar.
We can call them Austin and Delilah.
They are all set to discuss the philosophical implications of the latest advances in AI and robotics.
Let’s stipulate, for the sake of discussion, that both of them are functionalists with sympathy for representationalist views of consciousness. They think that the human brain builds models of external reality, but it also models some things purely for cognitive utility. The brain houses cartoonish simplifications of its own interior that allow it to successfully navigate its own complexity. Those private models, they both believe, do not pay much heed to physical constraints; evolution never selected for metaphysical accuracy in the models, only for utility. Human minds can push their focus of attention through physical objects, or into the past. They can hold clear concepts of things that the physical sciences cannot find, like colour qualia. A lot of people have become excited by these oddities, and the way the brain’s modelled interiors do not match anything described by a naïve account of physical reality, but our two physicalists think that’s all hogwash.
They both think that a functionally identical robot and human would be in identical experiential situations (apart from trivialities related to their different bodies, such as the human feeling a heartbeat and the robot noting a low battery). The human and robot would both model the same interiors, with the same anti-material metaphors being applied to create a similar internal user interface that disregarded the rules of physics. Human puzzlement about that interior and robot puzzlement would be similar, coming from a cognitively analogous source and being about the same thing.
Let’s also suppose they have a mutual hardist friend who is a fan of the Zombie Argument (where a “hardist” can be broadly defined as someone who accepts the legitimacy of the Hard Problem, as discussed in a previous post).
Their friend — let’s call him Harry the Hardist — believes that humans enjoy an experiential extra that could be hypothetically absent in notional zombies on other worlds and literally absent in robots on this one.
Harry has always been a strong critic of the potential for conscious machines. He reasons that the special human extra responsible for experience will inevitably be absent in any algorithmic machine because, after all, we could sit inside the machine and watch its workings, and we would find no need to invoke any experiential extras in accounting for what it does. Any algorithmic machine would necessarily be a digital zombie. We could even perform all of its calculations ourselves, writing the results on mountains of cardboard, and we would feel nothing of its faux inner life. We would remain unimpressed as we personally calculated the motor outputs that erroneously announced the robot’s possession of phenomenal consciousness.
Let’s use the mathematical symbol for difference, Δ (“delta”), to indicate the experiential human-zombie difference that Harry believes in. (This is what I have previously called “phenomenal spice”, but none of the characters in this tale are familiar with that term.)
Harry thinks that humanity faces a Hard Problem, which can be summarised as the challenge of explaining how physical processes in human brains give rise to Δ.
Harry’s view is something like:
H = R + Δ
or… H = Z + Δ
where H = human consciousness,
R = robot cognition, Z = zombie cognition, and
Δ = experiential extras.
Over the years, Austin and Delilah have each argued separately with Harry, and occasionally they have joined forces arguing against him in the literature, but they have not compared notes on his latest paper, which denounces all suggestions that robots have the potential to be conscious, ethically important beings.
They plan to remedy that situation tonight.
Let’s add a final stipulation that the two physicalists have almost identical opinions about the way the universe produces consciousness (or, in their terms, the way consciousness is represented within physical brains), and they fundamentally agree about all the various ways in which Harry is confused.
They are both very dismissive of Δ and the hardist search for Δ.
They’d bet their last dollar that Δ does not exist, but they happily concede they cannot prove it; they insist instead that they could have no evidence of it, nor any reliable concept of it even if it did exist, so they don’t care. They think Δ is an incoherent non‑entity, making no difference to anything, borrowing what meaning it seems to have from other sources — and those sources are shared by human and robot in the suggested scenario (and hence they are equally present in H and R). They think that, in terms of ontology, there cannot be anything important in reality that maps directly to Δ, even if Harry is partially correct and Δ hovers over them all like some sort of ineffectual ghost.
Humans can only have evidence of ideas about Δ, they are both prone to saying, and they have similar explanations for where those ideas have come from. For instance, they both agree that Harry’s concept of Δ partially arises from a misunderstanding of the difficulty in deriving qualia — it is the Explanatory Gap writ large onto ontology (as discussed in previous posts).
The idea of Δ is also a result of Harry’s brain taking its own representations too literally, incautiously extrapolating from its own private models out into the world. Harry is adding an implicit sensory step, thinking his brain is detecting something like Δ, rather than modelling something like Δ for its own internal reasons.
They both think that the apparent epiphenomenality of Δ largely reflects an over-reaction to an unremarkable explanatory redundancy across different levels of engagement with cognition. A computer running a dragon-fighting adventure can be perfectly well explained as a complex mechanism without once mentioning the dragon, they might argue, but that does not make the dragon epiphenomenal. The dragon, to the extent that it exists at all, is there in the mechanisms, and its actions and existence are implicitly covered in the mechanistic explanation of the computer even if that explanation focuses on a different level. Ignoring the dragon in the computer is analogous to ignoring the brain’s models in a zombie: it’s possible, but proves nothing.
Austin likes to ask his students how the apparent epiphenomenality of the represented dragon in a computer game differs from the apparent epiphenomenality of the hardist extra, Δ.
When Delilah lectures, she makes a very similar point in relation to the white queen in a virtual game of chess, contrasting the pseudo-epiphenomenality of merely being a representation with the double whammy of being represented as epiphenomenal.
Austin and Delilah have also accused Harry and his fellow hardists of several other mistakes that needn’t concern us just now.
Like Harry, the two physicalists believe that, faced with a human being and a perfect robotic functional mimic of that human, some philosophers might form the idea of an important human-robot difference, Δ. Unlike him, they think that any conceptual difference in the mind of the philosopher looking on would merely be a marker of an inconsistent, bio-chauvinist perspective adopted by that philosopher. They both think that Δ only exists as an idea, and, worse, it is an idea about a fiction that, even if it were rendered in reality according to Harry’s very vague specifications, could make no possible difference to the robot or to anyone else. The robot (if it were a hardist) might insist that it possesses Δ, but it would have no good reason to care whether Δ was actually there or not. The addition of Δ to the robot would not make the robot more aware or give it a richer inner life; subtraction of Δ from the human would not make the human less aware or sap any of the richness from the human’s interiority, which is only represented anyway.
Furthermore, they don’t think Harry’s belief in Δ can possibly have come from Δ, because Δ can’t modify any of Harry’s cognitive activities.
This conclusion stems directly from the definition of Δ, which requires that H and R are perfectly matched at the cognitive, functional level; Δ only refers to some non-matching element, so it can’t have any functional effects.
It does not matter, for now, whether you agree with the two physicalists, or with Harry, or take some third position. What matters is that you understand what it is that all three believe.
In the face of this comprehensive agreement, one might think the two physicalist philosophers would have nothing much to argue about when it comes to phenomenal consciousness. But it just so happens that, based on their slightly different reading history earlier that day, they enter the pub primed to use the term “phenomenal consciousness” completely differently.
Delilah uses the term “phenomenal consciousness” (or P-consciousness) to mean Δ.
She thinks of it as the spurious human-robot difference that Harry believes in. She is confident that this is the true meaning of “phenomenal consciousness” because the last thing she read before heading to the pub was Ned Block’s landmark paper, “On a confusion about a function of consciousness” (Block, 1995) — the very paper that put the phrase “phenomenal consciousness” into the philosophical lexicon.
Knowing her friend to be a stickler for definitions, she took a screenshot of a couple of relevant paragraphs, including this part where Block explicitly declares that P‑consciousness has no cognitive properties, writing:
I take P‑conscious properties to be distinct from any cognitive, intentional, or functional property. (Block, 1995)
The same paper also introduced the expression “access consciousness” (or “A-consciousness”). Like Block, she takes this to be the functional form of consciousness that the robot and human would both possess, and she takes “phenomenal consciousness” to be the mysterious extra thing that a functional mimic would be purported to lack, for those who believe in such things.
“As an example of A‑consciousness without P‑consciousness, imagine a full‑fledged phenomenal zombie, say, a robot computationally identical to a person, but one whose silicon brain does not support P‑consciousness. I think such cases are conceptually possible, but this is very controversial, and I am trying to avoid controversy.” (Block, 1995)
Unlike Block, she thinks the identification of P-consciousness with a human-zombie difference makes the concept all but useless. She thinks A-consciousness (or a modern update of Block’s version of A-consciousness) is already enough by itself to match human consciousness, without needing any special extras. She doesn’t see much point in proposing zombies, but she appreciates the way Block clearly commits to phenomenal consciousness being an epiphenomenal extra.
It makes it that much easier to dismiss the whole idea.
Delilah would describe her views as something like this: H = R + Δ, where Δ = 0.
By contrast, Austin uses the same term, “phenomenal consciousness”, to mean the entity that a human and its robotic copy would both call their own internal “what-it’s-likeness”. Phenomenal consciousness is the way things seem, along with whatever is responsible for that seeming.
Austin’s rationale for this approach is simple: he looks inside and finds phenomenal consciousness, and that’s what he wants to understand. Phenomenal consciousness is the rightful target of his puzzlement. No one can trace their puzzlement back to an epiphenomenal entity; that is logically impossible. But they are puzzled, and they have chosen the phrase “phenomenal consciousness” to refer to the source of their puzzlement, and Austin is happy to go along with that linguistic choice.
Let’s call Austin’s version of “phenomenal consciousness” Σ (“sigma”); it is the sum of all the elements that combine to make up the rich inner life he enjoys as a conscious being, including the functional elements that physicalists believe in as well as any controversial epiphenomenal elements that hardists want to throw in. He strongly suspects that Σ is also ontologically identical to what the philosopher Ned Block called “access consciousness”, a name that can be applied unproblematically to objective, functional conceptions of consciousness. If Δ turns out to be non-existent, this will indeed be the case: A- and P-consciousness will be the same thing, viewed from different perspectives. A robot having A-consciousness would automatically get P-consciousness.
Symbolically, P = A + Δ.
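To see how cleanly the two usages come apart, consider a toy sketch (my illustration only; the set-based encoding of consciousness is an assumption made purely for demonstration, not anything Austin, Delilah, or Block proposes):

```python
# A toy model of the purely verbal dispute. Everything here is an
# illustrative assumption; consciousness is not literally a set of labels.

R = {"functional cognition"}   # robot cognition: the full functional story
DELTA = set()                  # Δ: the hardist extra; physicalists say it is null
H = R | DELTA                  # human consciousness: H = R + Δ, with Δ empty
SIGMA = H                      # Σ: everything introspection finds, per Austin

# Same ontology, but the phrase "phenomenal consciousness" is bound
# to a different referent by each philosopher.
definitions = {
    "Delilah": DELTA,  # the non-functional extra that zombies would lack
    "Austin": SIGMA,   # whatever the internal act of ostension targets
}

robot = R  # a perfect functional mimic has exactly the functional story

for name, referent in definitions.items():
    answer = bool(referent) and referent <= robot
    print(f"{name}: does the robot have phenomenal consciousness? {answer}")

# Delilah: False, because Δ is empty, so nothing answers to the term at all
# Austin:  True, because with Δ null, Σ collapses to R, which the robot has
```

Identical ontology in, opposite verdicts out; the only free variable is which referent the phrase has been bound to.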
Austin has learned from bitter experience that Delilah is a stickler for citing source material, and she could well arrive with a definition of her own, complete with the evidence to back it up. Luckily, the last thing he did before heading out to meet Delilah was to ask his AI to find supporting evidence for his definition, and it ended up taking a screenshot of these passages from Block’s landmark paper, “On a confusion about a function of consciousness” (Block, 1995).
“Those who are uncomfortable about P‑consciousness should pay close attention to A‑consciousness, because it is a good candidate for a reductionist identification with P‑consciousness.”
“…perhaps P‑consciousness and A‑consciousness amount to much the same thing empirically even though they differ conceptually… Perhaps the two are so intertwined that there is no empirical sense to the idea of one without the other.”
(Block, 1995)
Austin especially agrees with the last comment, where Block concedes that phenomenal consciousness might turn out to be the same thing as A-consciousness, approached from a different conceptual angle. That identification would require, of course, that phenomenal consciousness not be epiphenomenal: if A- and P-consciousness are the same thing, then phenomenal consciousness must have the causal effects on cognition that access consciousness so clearly has.
That this possibility is held out in Block’s paper proves, Austin concludes, that Block’s “phenomenal consciousness”, on Block’s own account, is nothing like Harry’s mysterious non-functional extra.
If A- and P-consciousness turn out to be the same thing, then phenomenal consciousness will eventually find its place in the world described by objective neuroscience, and everyone can forget about this epiphenomenal nonsense.
Harking back to Harry’s example, Austin would describe his views as something like this: Σ = H = R + Δ, where Δ = 0.
He would insist, though, that much more nuance is required than this simple ontological summary. He sometimes likes to say that he is a realist about consciousness, but he is prepared to concede that the relationship between Σ and physical reality is somewhat indirect. He likes to point out that he engages with Σ via his concept of it, from within a representational system. Accordingly, the full scope of his concept of Σ allows for representational twists; the term “phenomenal consciousness” refers to whatever is at the end of an innocent act of internal introspection, even if the final connection to reality includes a representational substitution step that will make the target of ostension look entirely unfamiliar when viewed from outside that representational system.
Neurons and fuzzy clouds of shiny consciousness can’t be expected to look the same, after all, any more than optic-nerve firing looks like the light it represents.
There would be no reason to expect that introspection would reveal the true metaphysical nature of consciousness, and every reason for evolution to have hard-coded metaphors that were cognitively useful rather than ontologically accurate.
We can even see some of the relevant substitution steps happening in real-time, when structures like the retina convert actual light into neural activity; this translation and others like it obviously need to be reversed when tracing the brain’s models back out to ontology. Views of consciousness on different sides of those substitution steps will therefore seem very different conceptually, but need not differ ontologically.
Austin would argue that one of Harry’s many mistakes is thinking that reality literally provides the fuzzy internal cloud of phenomenality, the conceptual ancestor of Δ, that was already being represented by functional cognition.
And so our two philosopher friends enter the pub and order their drinks, armed with quite different definitions of phenomenal consciousness — sourced from different parts of the very paper that introduced the term.
Fortunately, because of this dramatic terminological difference, the two philosophers can argue about phenomenal consciousness all night, which is very much their preferred way to spend a Friday evening after a dreary week of correcting “student” essays composed by GPT5, filtered through TurnItIn, and often riddled with definitional vagueness.
As they take their seats at a table by the wall, they consider the question: Does the robot of Harry’s favourite thought experiment have phenomenal consciousness?
Delilah says no. The robot lacks the special spark (Δ) that Harry ascribes to humans.
Austin is indignant. So now you believe in digital zombies?
No, she says. The suggestion that robots are missing some silly spark does not amount to belief in the possibility of zombies, digital or otherwise. Sure, the robot of their discussion lacks “phenomenal consciousness”, but so does everyone else. Delilah insists that phenomenal consciousness (Δ) could at most be epiphenomenal, making no difference to anything, and so there is no valid reason to believe in it.
Phenomenal consciousness, she says, is no more than a hardist fancy.
At this point, she could pull out her screen-captured quotation, but she wants to see if Austin will dig himself into a deeper hole.
Austin counters that the robot’s consciousness, being identical to human consciousness in all the ways that matter, must be granted full status as phenomenal consciousness (Σ). He argues that phenomenal consciousness (Σ) plays an important role in the robot’s cognition, that it is something the robot can ostend to from within a complex cognitive system that takes its own representations seriously, and so the target of ostension has as much validity as the human version. If its ontological nature needs some revision on the way out of the representational system, such that the attachment to reality is via computer circuits rather than through a fuzzy cloud of ectoplasm, that is a twist worthy of discussion and clarification, but the ectoplasm-substrate mismatch applies equally to human and robot, which is why they share intuitions and both claim to possess consciousness.
It is ridiculous to propose that phenomenal consciousness (Σ) could be epiphenomenal.
At this point, he could pull out his screen-captured quotation, but he wants to see if Delilah will dig herself into a deeper hole.
Undeterred, Delilah defends her dismissal of phenomenal consciousness (Δ) by arguing that it could not make any possible difference to the robot’s functional story, which was already fully accounted for by functional mechanisms.
Austin accuses her of sounding just like Harry. He doubles down by noting that the robot’s own statements about its phenomenal states (Σ) — statements like, “I am having a greenness experience right now” — directly refute an eliminativist or epiphenomenalist view, because the expressed statements of belief in the physical world have to be causally downstream from the phenomenal states (Σ) to which they refer.
No, says Delilah. Such statements are causally downstream from the relevant cognitive properties, but not downstream from phenomenal consciousness (Δ), which is a fiction imagined by hardists to be a non-cognitive experiential extra: the thing zombies lack, the aspect of colour not found in Mary’s textbook, the embodiment of explanatory frustration that hardists project out onto the world when they encounter an Explanatory Gap.
As the night proceeds, the two philosophers might take different stances on broader questions such as whether phenomenal consciousness (Δ or Σ?) was visible to natural selection, whether it served any cognitive purpose (Δ or Σ?), and whether it was better for science to try to explain away phenomenal consciousness (Δ or Σ?) as a non‑entity (an eliminativist or illusionist position), or to explain its functional properties (a demystificationist position).
We can only guess when they might reach for their phones, pull out their screen-captured quotations, and realise they were having a purely semantic dispute.
That is, two physicalist philosophers with the same actual underlying beliefs would appear to hold diametrically opposed beliefs, merely because the terminology provided was so vague and contradictory.
For two philosophers with genuinely opposing beliefs, mutual incomprehension within this linguistic environment would be all but assured.
Illusionists are chronically misunderstood, and some of the blame can be attributed to this same semantic confusion.
For instance, Strawson has rejected illusionism in very strong terms, as follows.
What is the silliest claim ever made? The competition is fierce, but I think the answer is easy. Some people have denied the existence of consciousness: conscious experience, the subjective character of experience, the “what-it-is-like” of experience. Next to this denial—I’ll call it “the Denial”—every known religious belief is only a little less sensible than the belief that grass is green. (Strawson, 2018).
What does Strawson think he is talking about (Δ or Σ?) when he objects to illusionism? And which version of phenomenal consciousness (Δ or Σ?) is being rejected by the illusionists? Without clearer use of language, we can have no idea.
In a previous post, I suggested that hardists necessarily have a hybrid concept of phenomenal consciousness (or “experience”). They treat phenomenal consciousness as though it stands outside cognition, but they always rely on actual cognitive functions to flesh out the concept with content. Remove the illegitimate content, and nothing will be left. I suggested that they were conflating ostensional consciousness (the cognitively targeted entity found on introspection) and phenomenal spice (the epiphenomenal entity that notionally separates us from zombies).
In later posts, I will argue that both components of this hybrid play a key role in the Zombie Argument.
Zombies make no sense unless phenomenal consciousness is operationally defined as epiphenomenal. But we conceive of zombies as being fundamentally different to ourselves, so the concept of phenomenal consciousness must have enough genuine content (and enough functional properties) for us to know when we are including consciousness in the conceivability exercise and when we are not. We are invited to imagine what it is like being a zombie (picturing phenomenal darkness and silence) and to contrast that with what it is like to be human (picturing an inner light, lots of sensations, and a private voiceover). This is a marked functional difference, within our own cognition, such that we would immediately notice if, as observers, we switched from being inside the zombie’s mind to inside a human mind.
The content contributing to the imagined human-zombie difference must have been built up from a non‑epiphenomenal concept of consciousness, or from some hybrid concept that combines features of the model with the mere idea of an epiphenomenal experiential extra. The hybrid concept gets its content from the marked functional difference just considered, but it is inappropriately imagined as functionless — a contradiction that is easily resolved when we adopt a two-brain approach to the puzzle.
This is an important point, so let’s consider it from another angle.
The Zombie Argument relies on the twin notions that 1) not even Chalmers would notice if his phenomenality were suddenly switched off (making him a zombie) and then back on (making him human again), because there would be zero cognitive difference; and 2) it is nonetheless appropriate for us to hold two highly contrasting concepts of what it would be like for him in zombie and human mode, such that switching from one concept to the other constitutes a major functional difference to us as we imagine looking on. Our concept of phenomenal consciousness, when it is engaged, has marked functional effects within our imaginative faculties, but we imagine it as having no functional effects or content in a different hypothetical brain that has the same cognitive structure as our own.
Paradoxically, phenomenal spice is conceptualised as having more effects in our brain when we are contemplating another brain losing or gaining spice, than in the brain imagined as losing or gaining the spice. Austin would say we were ostending or failing to ostend to Σ, while merely telling ourselves that we were turning Δ on and off.
Of course, this is a controversial area, and it won’t be resolved in a few paragraphs, but the notion that a hybrid concept is necessarily involved in our concepts of zombies is not new; it can even be found in Chalmers’ own treatment of these issues.
Austin and Delilah are not guilty of this conflation conceptually, but they are linguistic victims of the conflation, and different versions of their faux disagreement can be seen throughout typical discussions of the Hard Problem, even among physicalists.
In the next post, I will attempt to reduce the risk of miscommunication by briefly laying out a map of some conceptual territory in the region of phenomenal consciousness, naming some important sub-components of the messy hybrid concept that is phenomenal consciousness.
(Block’s original paper uses the term “phenomenal consciousness” in at least four different ways, and variations in meaning have been proliferating ever since, so it will take multiple posts to explore all of the semantic forms of “phenomenal consciousness”.)
For now, I will leave you with a question.
If we use symbols instead of ambiguous terms, we could roughly sketch the relevant conceptual territory in a way that Harry, Austin and Delilah would all recognise, despite their different understandings of the ontology:
Σ = ρ + Δ.
(Sigma = rho + delta).
Ostensional consciousness in toto (Σ) =
functional stuff we share with notional zombies (ρ) +
stuff that differentiates us from zombies (Δ).
Austin and Delilah don’t believe in zombies, but the equation still works for them because the difference term (Δ) is null, and the functional sharing between human and zombie is complete by definition (functionally, Σ = ρ).
Harry does believe in zombies, and when he reads the same equation, he puts all the flavour into the difference term (Δ). For Harry, introspectively identified phenomenal consciousness (Σ) is ontologically equivalent to the mysterious non-physical extra (Δ), but he believes that claims about consciousness come from the functional architecture of the brain (ρ).
He is not disturbed by the coincidence that the functionally-generated claims align so well with the epiphenomenal extra — but perhaps he should be.
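Set out in a single notation, the three readings look like this (the tabulation is my own gloss, not wording any of the three characters would sign off on):

```latex
\Sigma = \rho + \Delta,
\qquad
\begin{cases}
\text{Harry:}   & \Delta \neq 0; \quad \Sigma \text{ is ontologically } \Delta; \quad \text{claims about it come from } \rho \\
\text{Austin:}  & \Delta = 0; \quad \Sigma = \rho; \quad \text{``phenomenal consciousness'' names } \Sigma \\
\text{Delilah:} & \Delta = 0; \quad \Sigma = \rho; \quad \text{``phenomenal consciousness'' names } \Delta
\end{cases}
```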
So here is my question:
Which one of these is phenomenal consciousness for you?
Up next: The Spice-Meal Conflation