In the previous post, I told a story of two physicalists, Austin and Delilah, with identical ontological views. Despite comprehensive agreement about the nature of consciousness, they seemed to be in major conceptual conflict, merely because of semantic confusion. They had different ideas about the default mapping of the philosophical jargon term, “phenomenal consciousness”, to their shared view of reality. If we want to understand their ontological view, which is uncannily similar to my own — or even if we just want to defend hardism in a manner that is cognisant of the alternatives — we will need new, more specific terms.
In this post, I will zoom in on the definitional conflation that confused our two physicalists, showing how it has very different significance for hardists and anti-hardists.
This discussion will only make sense if you start with the previous one.
The conclusion I’m heading to in this short series, “A Guide for Translators”, is that the term “phenomenal consciousness” is broken beyond repair, and we should abandon it.
Before we do, though, I propose that we try to understand where things went wrong.
Two hardist philosophers walk into a yoga retreat.
As they reach an open-air platform by the ocean, they start talking about the ineffable blueness of blue.
Harry says to Sally: I think phenomenal consciousness is that extra bit of reality that is purely subjective, underivable from the perspective of the objective sciences, and completely outside the causal nexus of physical reality.
At that moment, just as he proposes this science-busting hypothesis, he summons a representation of phenomenal blueness, conceptualised as an extra flavour (Δ) that is connected to our world by mysterious psychophysical bridging principles, and his mind fills with an experience of beautiful ocean blueness.
Sally says: Well, I like to think of it as what we find on introspection; it’s just the most natural, obvious thing we find when we look inside.
At that moment, she thinks of phenomenal blueness as something that she knows intimately through introspection (Σ), and her mind fills with an experience of beautiful ocean blueness.
Right, says Harry, beaming. Exactly!
Sally smiles as she grabs her yoga mat and tosses it on the wooden deck. Glad we’ve settled that, then!
And, despite their fundamentally incompatible definitions of phenomenal consciousness, one busting science and one leaving it intact, one defined as outside the reach of cognition and one defined within cognition, no arguments were had that night.
The End.
There is ongoing controversy about whether the functional processes of the brain are responsible for consciousness and, if they are, whether the brain has to do something very special to give rise to the subjective, experiential nature of consciousness — or whether it can get by doing ordinary things within the scope of conventional neuroscience.
The need for a special extra (Δ) has been disputed in relation to several classic thought experiments: philosophical zombies, Mary locked in her room, improbable AIs made of cardboard, and so on.
Whenever these traditional arguments are trotted out, mutual misunderstanding between hardists and anti-hardists is almost inevitable. The two camps don’t just disagree on the truth value of a well-defined premise; they completely fail to connect.
Some of the confusion can be traced to important conceptual issues, but some of it can be attributed to vague terminology.
These two sources of confusion cannot be cleanly separated, but they are not the same.
In the previous post, Austin and Delilah enjoyed full conceptual agreement but argued all night because they were working under different definitions: Austin mapped “phenomenal consciousness” to ostensional consciousness (Σ), and Delilah mapped it to spice (Δ). In the uneventful tale relayed above, Sally and Harry have a corresponding mismatch in their terminological mapping, but they nonetheless find themselves in full conceptual agreement. For the two hardists, there is no important difference between ostensional consciousness and phenomenal spice, or at least none worthy of a name.
I propose that the lack of any clash between Harry and Sally is expected: people can’t name distinctions they can’t see. But I also think that hardists need to try a lot harder to be precise, because some of the difficulty of the Hard Problem arises from the conceptual conflation that Harry and Sally are both ignoring, the one between ostensional consciousness (Σ) and spice (Δ).
In turn, that means anti-hardists need to do better at explaining what it is that Harry and Sally are not seeing.
Language is important, and the current vocabulary is woefully inadequate.
Definitions reflect the conceptual landscape but also modify it; words can either blur or reveal important semantic watersheds. As things stand, it is possible to have disagreements about phenomenal consciousness that are entirely semantic, and those disagreements can be mistaken for deep conceptual disputes. But it is also possible for a single overused term to spuriously link ideas that should have been kept apart.
The faux argument of the last post and the faux agreement of this one both provide clues to the same underlying conceptual issue. Before choosing sides on the ontological status of spice, we need to see why Austin and Delilah need two, three, or possibly more terms to discuss phenomenal consciousness, whereas Harry and Sally think they can get by with just one.
Both major professional groups interested in consciousness, scientists and philosophers, tend to stay within their own domains, which is perfectly understandable, but this separation is not always conducive to seeing the big picture.
All philosophers, whether they like it or not, are considering phenomenal consciousness from within a physical brain with objective limitations; the scope of what they can think is subject to relevant cognitive constraints that affect their philosophical conclusions, and scientists could have told them that some of their arguments don’t make much sense in the setting of those limitations. And all scientists, whether they like it or not, are working from within a subjective perspective that they cannot escape, making philosophical assumptions in everything they do, and those assumptions underpin every word they write. A lot of those words were coined from a hardist perspective and embody hardist confusion. A dangerous circularity is involved: the cognitive structure of the brain influences the language and concepts that underpin the scientific study of the cognitive structure of the brain.
In philosophical quarters, the cognitive modularity of the brain revealed by cognitive neuroscience is often ignored completely, or it is treated as a source of illuminating anecdotes; this often means that the myth of ideal a priori reflection is applied inappropriately to messy conceptual issues that are directly linked to the physical machinery in which the key questions are raised. Philosophers of a certain bent treat themselves as “ideal rational conceivers” mainlining truth, even when this view is demonstrably false, and even though some of their own favourite thought experiments are directly based on failed cognitive journeys.
In the halls of science, the underlying philosophical issues are often considered briefly (with varying degrees of respect), but the philosophical controversies are then put aside as unsuitable for empirical study. Unfortunately, this often means that the scientists have accepted an implicit philosophical framework unwittingly, without applying the scepticism that the same scientists would apply to a scientific dispute.
Some research papers on consciousness wholeheartedly adopt the terminology provided by philosophers, effectively outsourcing the conceptual framing of their own field. I think this is a serious mistake, given the unresolved ambiguities I will be exploring in this series.
Other research papers include a paragraph or two attempting to distance themselves from the terminological confusion, and often that distancing is presented as a simple methodological necessity.
For instance, Stanislas Dehaene, the eminent cognitive neuroscientist, writes in a 2013 paper:
Some philosophers consider one last aspect of consciousness as worthy of a separate term: phenomenal awareness (Block, 1995; Chalmers, 1996). This term is used to refer to the subjective feel of conscious experience (also called qualia) – “what it is like” to experience, for instance, a gorgeous sunset or a terrible toothache. Introspectively, there is no doubt that these mental states are real and must be explained. However I share with the philosopher Dan Dennett the view that, as a philosophical concept, phenomenal awareness remains too fuzzily defined to be experimentally useful (Dennett, 2001).
Dehaene uses the term “phenomenal awareness” here, but he is talking about the cluster of synonyms that surround “phenomenal consciousness”. In appealing to experimental utility, I suspect he is merely being diplomatic, not wanting to get embroiled in controversy. If pushed, he might support the much stronger stance advocated here: the fuzziness of these central hardist concepts is not just a barrier to experimental utility, but also to coherent thought.
Other researchers will talk about the “neural correlates of consciousness” (NCC) in an attempt to escape the philosophical controversies, but even that phrase involves a philosophical surrender; it elevates the idea that two fundamentally different things need to be approached through their correlation.
Suppose that we want to cut through all the controversy and establish someone’s position in this debate.
In order to ground the discussion, we ask them a couple of basic questions.
Do you believe in phenomenal consciousness? And does it arise from the physical brain?
Importantly, we will usually ask them about “phenomenal consciousness”, rather than about “consciousness”, because there is a behavioural sense of consciousness that is intimately and obviously tied to the physical brain and about which there is no real controversy. In clinical settings, for instance, the level of consciousness is usually quantified between 3 and 15, inclusive, on simple behavioural grounds, using the Glasgow Coma Scale. Points are scored for reaching towards the site of a painful stimulus, for following verbal commands, for speaking coherently, and so on. If the score is 5/15, then it doesn’t matter whether the patient is a human or a zombie; they need to go to the Intensive Care Unit (assuming that their survival is a desired and achievable outcome).
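As an aside, the arithmetic of that quantification is simple enough to sketch in a few lines; the three sub-score ranges below are those of the standard scale, but the code and the example patient are purely illustrative.

```python
# Illustrative sketch: the Glasgow Coma Scale sums three behavioural sub-scores
# (eye opening 1-4, verbal response 1-5, motor response 1-6), giving a total
# between 3 (no response at all) and 15 (fully responsive).
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("sub-score out of range")
    return eye + verbal + motor

# A hypothetical patient who opens their eyes only to pain (2), makes
# incomprehensible sounds (2), and shows no motor response (1) scores 5/15.
# The judgement is entirely behavioural; nothing in it depends on whether the
# patient is a human or a zombie.
assert glasgow_coma_scale(eye=2, verbal=2, motor=1) == 5
```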
From the perspective of cognitive neuroscience, consciousness can be treated as a complex function of the brain that is less shackled to objective behaviour than the clinicians’ Glasgow Coma Scale, but still not as abstract as the hardists’ elusive target. And again there is an obvious functional meaning for this sort of consciousness, even when we consider it from a subject-centric perspective. For instance, we can think of subjective consciousness as a set of cognitive mechanisms used for maintaining and directing attention within a model of the world; that model can include the cognitive system itself, allowing introspection. That introspection is presented to cognition as taking place in a semantically pre-digested virtual world, rather than in the convoluted soggy grey matter perched on the edge of a physical organ.
(If we follow this functional story through to all of its myriad implications, I think we will often end up with the physical brain getting confused about why it seems to have a non-physical interiority. Understanding the Hard Problem, as a cognitive artefact, is well within the scope of objective science.)
But there is also — arguably — an elusive leftover question as to why that introspective process feels like anything, and this subjective aspect of the puzzle is what we want to pin down by adding the “phenomenal” descriptor.
So, we ask:
Do you believe in phenomenal consciousness? And does it arise from the physical brain?
Unfortunately, with the available language, the only rational answer to these and similar questions is:
That depends on your definition.
All simple answers to questions about “phenomenal consciousness” are wrong if they skip the definitional stage. Physicalist answers are wrong. Idealist answers are wrong. Hardist answers are wrong.
This is not an expression of explanatory pessimism. There is plenty of scope for complex, nuanced answers to questions about phenomenal consciousness, and a lot of sensible things can be said even in the general region of some imagined function-feel divide. But there is no point whatsoever in discussing whether anyone believes in “phenomenal consciousness” (or “qualia”, or “experience”) until we have established what the terms presuppose — and everyone uses the terms differently.
To make any headway in resolving the controversies, or even to know where each of us stands, we will need terms that are much more specific. And, most of all, we need to know whether we want the term “phenomenal consciousness” to include the hardists’ controversial extra, or to consist of that extra in isolation.
In their own quite different ways, Chalmers and Block both complained, in 1995, that cognitive scientists were guilty of a serious conflation. Scientists were claiming to address the really interesting consciousness we find on introspection, but they were delivering bland functional theories instead.
Chalmers divided the problems of consciousness into Hard and Easy piles, and he suggested that many theorists were committing a bait-and-switch. They start off promising theories of what we find subjectively (Σ), but they end up offering theories of straightforward objective phenomena (ρ).
The ambiguity of the term “consciousness” is often exploited by both philosophers and scientists writing on the subject. It is common to see a paper on consciousness begin with an invocation of the mystery of consciousness, noting the strange intangibility and ineffability of subjectivity, and worrying that so far we have no theory of the phenomenon. Here, the topic is clearly the hard problem—the problem of experience. In the second half of the paper, the tone becomes more optimistic, and the author’s own theory of consciousness is outlined. Upon examination, this theory turns out to be a theory of one of the more straightforward phenomena—of reportability, of introspective access, or whatever. At the close, the author declares that consciousness has turned out to be tractable after all, but the reader is left feeling like the victim of a bait-and-switch. The hard problem remains untouched.
(Chalmers, 1995. Facing Up to the Problem of Consciousness)
Block took a similar approach. His landmark Confusion paper was named after what he saw as definitional ambiguity across the subjective-objective divide. (The abstract is included in the previous post, and the full paper can be found online.)
From my perspective, Block’s paper is notable for its many conflations and inconsistencies, but he argued that he was offering the terms “phenomenal consciousness” and “access consciousness” to prevent a serious conflation. Other authors were focussing on functional matters (ρ), not noticing that they had left out the properties found on ostension (Σ); the difference (Δ) needed more attention.
Potentially, there is merit in noticing this issue and in trying to name the conceptual elements that seem to go missing from functional accounts. As long as the function-feel divide seems conceptually attractive, we will need different names to discuss the target of ostension and the underlying functional mechanisms. We will need a name for the conceptual gap between them, and for the hardists’ proposed gap-filler, even if the conceptual gap actually has no ontological footprint of its own.
Both Chalmers and Block have a legitimate point in noting that something seems to go missing when we switch from subjective views to objective descriptions. As long as we consider these two philosophers as describing our concepts, with no ontological commitments, I think they are outlining an important issue.
But the naming of the relevant concepts needs to be done with considerable care, and the necessary care has not been applied.
Both of these philosophers, using different terminology, carved the original target of consciousness into a functional component amenable to reductive explanation (an Easy/access component) and a feel component that was resistant (a Hard/phenomenal component) — and then they both (to different extents) dismissed the relevance of the functional component to the core mystery of phenomenality.
I will be arguing that the real reason the functional components seemed disappointing to these philosophers is that those components were merely being described, but the descriptions were being compared to the real-life in situ examples of the relevant circuitry, and therefore seemed inadequate. A description of a circuit representing blueness can’t represent blueness like a real circuit can represent blueness. The inadequacy of the described functionality was mistaken for an upper limit on what neural circuits can achieve in situ, within cognition, when their representational commitments are accepted.
Instead of considering this explanation, both philosophers assumed (with varying consistency) that the missing element had to be non-functional.
Block wrote: “I take P‑conscious properties to be distinct from any cognitive, intentional, or functional property”. He invoked the idea of zombies.
Chalmers wrote that the explanatory target was “beyond the performance of functions” and argued that zombies were logically possible. The core reasoning behind the Hard Problem is that whatever we could describe in zombies — which is any conceivable functional aspect of the brain — will inevitably leave out the part of the problem he is most interested in.
Both philosophers interpreted a conceptual gap in ontological terms, inventing spice to fill it.
With varying degrees of consistency and explicitness, they then equated their original ostensional target with a non-functional feel, relegating the functional mechanisms of consciousness to some bland, lesser status. Having named some component beyond cognition, they proceeded to use terms like “experience” and “phenomenal consciousness” without paying any further heed to whether they were referring to the cognitive elements that had originally inspired them (Σ) or to their invented, paradoxical gap-filler (Δ).
The resulting confusion between Δ and Σ is what I will call the spice-meal conflation, via an analogy with curry.
Analogies, of course, prove nothing, but sometimes they can help us see concepts in a new light, so in this section I will briefly show how a similar conflation might operate in less confusing domains.
First up: my suggested term for Δ, “phenomenal spice”, was not chosen just because it is imagined as adding flavour to cognition, but also because the English word, “curry”, suffers from a very similar ambiguity.
In English, the word “curry” is sometimes used to refer to various powdered spices (CΔ) that can be added to meat or vegetables during cooking to give them flavour, and sometimes to the meals (CΣ) that have been so flavoured (and sometimes to the specific spice obtained from a curry plant). It would be odd, but not illogical, to accuse someone of putting too much curry in the curry, because curry-the-spice is not the same thing as curry-the-meal.
These two main uses stand in a simple relationship to each other:
CΣ = bland stuff + CΔ.
If we are going camping and someone “forgets to pack the curry”, the difference in these two usages could be quite important. Under one meaning (CΔ), we are destined to have a bland meal; under another (CΣ), we will go hungry.
In the case of phenomenal consciousness, we have a very similar definitional ambiguity:
PΣ = bland stuff + PΔ.
This ambiguity immediately sets up a risk of semantic debates that go nowhere.
Is “phenomenal consciousness” the epiphenomenal spice (Δ) that hardists believe must be added to functional models of consciousness to save them from bland zombiehood? Or is it the phenomenally enriched, flavoured version of human consciousness (Σ)? To refer back to the camping analogy, philosophers need to know if illusionists are metaphorically leaving the whole meal at home, or merely claiming that unseasoned chicken is already tasty enough without the hardists’ spicy additives.
Similar conflations can be found in other contexts.
If a drug trial shows a good outcome (improvement or survival) in 47% of the placebo group and 49% of the active group, then the attributable, placebo-subtracted treatment effect (TΔ) is estimated at 2% (plus or minus uncertainty). The active group had a total good-outcome rate (TΣ) of 49%, but this total includes any good outcomes that would have been expected under placebo conditions.
TΣ = placebo outcome + TΔ.
A drug company advertising a treatment effect of 49% would be badly misrepresenting the trial, relying on people misinterpreting this total, placebo-inclusive value as the difference between groups. In most jurisdictions, this misrepresentation would be illegal.
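To make the arithmetic explicit, here is a minimal sketch, using nothing beyond the hypothetical trial numbers above:

```python
# Toy numbers from the hypothetical trial above.
placebo_rate = 0.47   # good outcomes expected without the drug
active_rate = 0.49    # total good-outcome rate in the active group: T-sigma

treatment_effect = active_rate - placebo_rate  # the attributable effect: T-delta
print(f"T-sigma = {active_rate:.0%}, T-delta = {treatment_effect:.0%}")
# -> T-sigma = 49%, T-delta = 2%. Quoting the 49% figure as "the treatment
#    effect" trades on exactly this sigma/delta ambiguity.
```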
Suppose we suspect that Clark Kent, the bumbling newspaper reporter, shares an identity with a certain caped superhero. Does the name “Superman” refer to a persona that Clark puts on, complete with cape, so that the term merely amounts to the difference (SΔ) between the reporter and the superhero? Or does “Superman” refer to the superhero himself, a hybrid entity (SΣ) that includes the combination of Clark plus the caped persona?
SΣ = Clark Kent + SΔ.
If we take the former, difference view, “Superman” (SΔ) refers to a bit of fancy dress with questionable utility. In the latter, inclusive view, “Superman” (SΣ) refers to the superhero; his two names (“Clark Kent” and “Superman”) merely reflect two different views of the same individual, distinguished by an inconsequential costume.
In one definitional scheme, “Superman” picks out just a cape and a name‑change, and is therefore an entity with no significant powers; the cape is close to being epiphenomenal. In the other definitional scheme, the one in line with more typical usage, “Superman” picks out an entity with superpowers.
(Compare this with Eric Schwitzgebel’s recent discussion of whether a wizard needs to have magic powers to be considered a wizard. Like me, Schwitzgebel was looking for an analogy that could be applied to phenomenal consciousness; he was focussing on the issue of how much we can lose from our initial naïve conception and still be considered to be talking about the original entity. We could define the terms such that a wizard, WΣ, consisted of a bearded guy, Wρ, plus wizarding powers, WΔ, leading to a familiar relationship between three terms: WΣ = Wρ + WΔ. Schwitzgebel considers the definitional ambiguity over whether the magic component is intrinsic to the definition of a wizard, but I think that issue is eclipsed by whether phenomenal consciousness should be analogised to WΣ, Wρ or WΔ. Note that, in this case, in contrast to the Superman case — and very much in contrast to an anti-hardist view of the consciousness case — the real power resides in the difference term.)
Suppose that we drop in on a 19th century vitalist debate, listening to a dispute about whether the physical processes of biology were sufficient to account for the mysterious life-death difference. Vitalists argued that an extra was needed.
LΣ = biology + LΔ.
Was élan vital the imagined difference (LΔ)? Or was it the enhanced, animated version (LΣ) of the biological processes that resulted when the extra spark was added?
In this case, the language was clear, and both sides knew what the terms implied: even vitalists used “life” for the inclusive conception (LΣ) and reserved “élan vital” for the gap-filler (LΔ). As biological explanations improved, élan vital was squeezed out, and we were left with life, which turned out not to need ontologically special ingredients after all (LΔ=0).
As far as I know, there was no outraged vitalist suggesting that the anti-vitalists were making the silliest claim ever made — that they were disputing the existence of life and declaring everyone dead.
To borrow terminology from the software industry for our final example, hardists sometimes treat phenomenal consciousness as though it were a patch, or an incremental version upgrade (VΔ) that needs to be applied to access consciousness, and sometimes as a new, self‑contained, replacement version (VΣ) of consciousness.
VΣ = old version + VΔ.
In the case of consciousness, though, hardists can only ever ostend to an installed version of phenomenal consciousness, so the upgrade/replacement ambiguity is baked in; we have no convincing examples of VΣ without VΔ (Block, of course, would disagree). Indeed, because we do all our thinking within the upgraded version, no one can even form a concept of what the phenomenal patch would be like, by itself, or how it would get its content, or be noticed, or be any use at all — because in this case the patch is specified as not changing anything detectable when it is notionally installed.
As far as we can tell, functional mimics without the patch would form the analogous concepts, for the same reasons — and even hardists agree that those mimics would say all the same things. (They would even have their own versions of Σ, ρ and Δ, inviting an infinite regress that I will need to confront in a later post.)
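The awkwardness of that stipulation can be made concrete with a toy sketch; the “phenomenal patch” below is hypothetical and, per the hardist specification, changes nothing detectable:

```python
# Illustrative sketch: a "phenomenal patch" (the V-delta term) that, by
# stipulation, has no functional effects on the system it upgrades.
def apply_phenomenal_patch(old_version: dict) -> dict:
    # Because the patch is defined as changing nothing detectable, the
    # "upgraded" version (V-sigma) is functionally identical to the old one.
    return dict(old_version)

functional_mimic = {"reports_blueness": True, "discusses_hard_problem": True}
upgraded = apply_phenomenal_patch(functional_mimic)

# No test run from inside (or outside) the system can reveal whether the
# patch was ever installed.
assert upgraded == functional_mimic
```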
In nearly all of these examples, common sense and context usually tell us which meaning we are employing, and no one is very confused. Unfortunately, when it comes to phenomenal consciousness, we cannot appeal to common sense or context, and everyone is confused, because the definitional ambiguity is sustained by an underlying conceptual conflation, the one between the functional antecedents of phenomenality and our concept of phenomenal spice.
If we are to understand that conceptual conflation, we will need terms capable of naming the components that have been inappropriately blended.
Σ = ρ+Δ
The consciousness debate is a big, complex, sprawling mess, but some of the important conceptual elements can be expressed with the following simplistic summary, which attempts to capture three key perspectives on consciousness: Σ = ρ+Δ.
This equation can be taken to mean: ostensional consciousness is a combination of the brain’s cognitive functions plus some disputed non-functional extra, spice.
Importantly, this formulation is not being offered as a resolution of the mystery; it merely provides signposts to three important areas of the debate. It is hopelessly simplistic — for a start, it completely glosses over the important distinction between consciousness-the-container and qualia. Also, it ultimately describes a relation between concepts, not actual ontological elements that have been characterised with any precision; we will need to drill down to get to reality, and we will all disagree on how that drilling should be attempted. All three terms have serious problems with built-in vagueness and the last, Δ, has only marginal claims on coherence.
But it is still better than the term “phenomenal consciousness”.
The first symbol, Σ (sigma), can be taken to mean consciousness picked out on introspection, including everything that contributes ontologically and conceptually to how our minds seem (and with varying, unspecified commitments to the mind’s representational assumptions).
I will call this element “ostensional consciousness”; it is roughly equivalent to what most people call “phenomenal consciousness” before they add any explicit theoretical commitments as to what it is or where it comes from.
Provided we keep our theorising to a minimum, Σ would be a very good candidate for being what Block wanted to define when he coined the term “phenomenal consciousness.”
But the best one can do for P‑consciousness is in some respects worse than for many other things because really all one can do is point to the phenomenon
Block, 1995
Of the three terms, Σ is the one that is closest to the definitional approach recommended by the philosopher Eric Schwitzgebel. He proposes that we should begin with a theoretically innocent ostensive definition of phenomenal consciousness (where “ostensive”, like “ostensional”, refers to pointing). He suggests that we should compile a list of obvious examples of phenomenal consciousness and point at their commonality. After compiling such a list (and also holding it up against a list of negative examples such as “the release of growth hormones in your brain”), he writes:
Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack. I do think that there is one very obvious feature that ties together sensory experiences, imagery experiences, emotional experiences, dream experiences, and conscious thoughts and desires. They’re all conscious experiences. None of the other stuff is experienced (lipid absorption, the tactile smoothness of your desk, etc.). I hope it feels to you like I have belabored an obvious point. Indeed, my argumentative strategy relies upon this obviousness.
Unfortunately, I believe the obviousness is a mirage.
If this essay were longer, I would flesh out the Σ term of my suggested formulation in the manner proposed by Schwitzgebel, producing a list of classic examples of experience: “the pain of having a tooth drilled”, “the subjective nature of blueness”, “the sense of being aware when introspecting”, and so on. Instead, I will assume the reader can generate such a list, and notionally point at it, and that they will do their best to avoid adding in any explicit theoretical commitments.
The problem is that everyone inevitably adds implicit theoretical commitments to ostensional consciousness, and in most cases “the most folk psychologically obvious thing or feature” will be misleading. For instance, we just saw Sally effectively assume that some part of Σ can turn blue on demand — or, at least, that was the visual her brain provided when she was trying to find an innocent example of phenomenality. Was she imagining what her mind is like when she summoned phenomenal blueness? Or was she looking in and accurately seeing what her mind is like? Is any part of her mind really blue? Is this innocence or rank naivety? If we subtracted the blueness from Sally’s concept of Σ, on the grounds that it is deeply implausible that any brain-state is actually blue, Sally would almost certainly object; she might not even recognise the blue-free version as what she was ostending to; we would be removing Schwitzgebel’s most obvious thing.
Is the theory-free, “innocent” version of her ostensional target the one that leaves the blueness in? Or is it the one that takes the blueness out?
Either answer is inevitably theory-laden.
To me, if Sally is introspecting and finding blueness inside herself, the blueness of the ostensional target is nonsense; to Chalmers, Harry and Sally, it is the whole point.
That is, I agree with Pete Mandik, who recently wrote in a Substack comment:
How things seem to the subject depends on what theories the subject believes and is thus no less a theoretical construct than any other putative definition of “consciousness”.
I share Mandik’s conviction that any search for an entirely innocent definition is something of a lost cause, because we don’t just have to worry about the explicit theoretical commitments to consciousness that we’ve picked up in books or philosophical discussions; we also have to factor in the much more powerful implicit commitments we all owe to the cognitive framing of our ostensive target. Some of these commitments were hardwired by embryology. Some of them were probably learned before we were even capable of anything resembling conscious thought, starting in utero, when we first moved a limb and received sensory feedback. Constant calibration against the world has taught us, at the deepest subconscious levels of our cognitive machinery, that our neural activity involves causal loops extending out into a world. Our own lives, and the lives of our forebears, have encoded within us the belief that neural activity must be treated as a surrogate for the world, so how can introspection be innocent of assumptions?
I strongly believe that some of the entities that we are representationally committed to in our day-to-day lives are no more than fictions that arise from our complete disregard for important representational distinctions — like the difference between properties of represented content and the properties of the representational medium. This is the disregard that seems to put blueness in our head, waiting for people like Sally to mistakenly ostend to it.
But perhaps you disagree? Perhaps your ostension comes with different theoretical commitments.
For all these reasons, and more, the notion of ostensional consciousness can’t possibly be innocent, but, all these caveats aside, the term is intended to be as broad and as ontologically agnostic as possible. When we think of “phenomenal consciousness”, and we point our cognition at an example (or, more likely, we create an example within cognition), Σ covers the entire ontological footprint of that ostensional act and whatever it targets, to the extent that our introspective pointing successfully targets anything at all, in whatever ontological domains are relevant, and with complete flexibility about which representational level is the true target.
Ostensional consciousness is an ontological blank cheque, waiting to be filled in, and the filling-in process will be intensely theory-laden, if we include implicit theories.
Unfortunately, I can’t just call this ostensive target “phenomenal consciousness”, and be done with the definition, because the same term can be applied, in different circumstances, to the other two terms in my suggested equation.
The second symbol, ρ (rho), represents the functional and material basis of the introspective act that leads us to ostensional consciousness. It’s the bit that physicalists think is responsible for ostensional consciousness — and so, for physicalists, it has the primary claim on being what phenomenal consciousness really is, when we strip away the confusion that the agnosticism of “ostensional consciousness” allowed into our concept of Σ.
Personally, I call this functional basis of ostension the ostensum (“the thing being pointed at”), but I won’t be giving that term a work-out in this post; the focus will be on the other two terms. The ostensum is the thing we point at, but without the ontological laxity of Σ. By definition, an ontological filter is applied that only lets through physical, functional processes.
Unfortunately, ρ still suffers from massive conceptual ambiguity, because we only know ρ through our representation of it, and that representation happens within ρ, and representations of things are not the things themselves. Although ρ is, by definition, intended to have an ontological footprint entirely consistent with physicalism, it contains multiple levels of representation, and it can represent things that are not compatible with physicalism. Our representation of a neural substrate is not a neural substrate; our neural representation of blue is not itself blue; our concept of a neural substrate representing blue does not even seem blue. And why should it?
The ambiguity of ρ is recursive. Somewhere within the complex functionality covered by ρ, we will need to uncover represented versions of the three elements described in this post: Σ, ρ and Δ. And there, within represented ρ, will be all three elements represented yet again. And so on.
Potentially, we are looking down into a bottomless pit of recursion, and eventually that recursion will lead to trouble.
But that’s a topic for a later post.
The third symbol, Δ (delta), stands for the set of experiential extras that (allegedly) go missing in a purely physical world, the ones that seem not to be covered by the physicalist account. It is elusive and ineffable and it is entirely without functional effects — again, this is by definition.
Unfortunately, Δ is often called “phenomenal consciousness”, too, sometimes by those who believe in it (like Harry), and sometimes by those who don’t (like Delilah).
To avoid conflation with ostensional consciousness (Σ) and the ostensum (ρ), I will call this entity “phenomenal spice”.
Phenomenal spice can’t impact upon our cognition, so why do we bother discussing it?
There are several reasons we might discuss something incapable of motivating that discussion, and I will explore some of those reasons in the posts ahead, but one reason is predominant: analysis of the functional aspects of the brain, ρ, seems to come up short. At least, functional analysis comes up short for those who expect such an analysis to lead them all the way to their initial targeted concept of ostensional consciousness, complete with its qualia. We start with blueness in our ostended concept, like Sally, and our neural circuits aren’t blue, so it is tempting to adopt Harry’s strategy, inventing something special to house the missing blueness. Or pain. Or awareness. Or whatever. We have an apparent Explanatory Gap, and so spice, Δ, is envisaged by the hardists as the gap-filler implied by the Gap.
This approach is understandable, but it immediately sets us up for a logical contradiction.
From the hardist perspective, it seems as though phenomenal spice is what remains after functional accounts have run their course and failed, so spice can’t have any functional effects, so it must be epiphenomenal. But if spice is epiphenomenal, we can’t have any reason to be puzzled by it; it can’t cause a Gap. So now we have a conceptual add-on that requires a whole new ontological domain — but it is one that we can’t rationally talk about, or find on introspection. We still have a Gap, and we still need to know why it is there, and we are no closer to understanding why we face a Gap. It is not just that we have reached the end of the reductive journey and need to posit a fundamental entity; we are positing an entity that cannot influence the reductive journey in any way.
A deficiency in physicalism of the sort proposed by hardists cannot cause any Gap we know about. To many of us, that makes spice an entirely unattractive theoretical construct.
Debates about consciousness can be roughly split into two camps based on people’s varying attitudes to the chain of reasoning outlined above, though the division is not usually described this way.
These two camps should be recognisable to most seasoned debaters in this field, as covered in an excellent article by Rafael Harth over at LessWrong, in 2023. (If you’ve not yet read the LessWrong article, I suggest you give it a look.)
I tend to call these two camps hardists and anti-hardists, but we could also call them spice-believers and spice-deniers.
The spice-believers have deep respect for the Gap as a sign that something is missing in physicalism, and — if necessary — they are content to live with a paradoxical epiphenomenal entity at the heart of their framing.
The spice-deniers have deep mistrust of spice because it is an ad hoc conceptual add-on that has no causal role to play and can’t even create the puzzlement that led to it being proposed, and — if necessary — they are prepared to accept that properties they seem to find on introspection will eventually go missing in their framing. This is not a big cost to pay for logical consistency. Science will have to refrain from the creative act that put blueness in our heads for us to ostend to it.
The two camps have radically different opinions about how the three conceptual elements, Σ, ρ and Δ, relate to the underlying ontology. Both sides will tend to see the other side as making a mistake in the weighting being applied to intuitions that are in fundamental opposition. Both sides will also see the other side as entertaining a serious conflation within the concept of phenomenal consciousness, but they will disagree on which two elements are being conflated.
Suppose we consider the notion of phenomenality in the broadest sense, taking it to refer to the elusive set of properties we’re all trying to pin down somewhere in this three-element scheme.
Hardists are generally those who think that the brain’s functionality (ρ) does not contribute anything of interest to the hunt for phenomenality, so it is okay to use the same term, “phenomenal consciousness”, across both sides of my suggested ontological summary. What we find on introspection are phenomenal properties, reductive concepts of circuits don’t have those properties, so we need a gap-filler.
After all, the hardist can argue, if we mentally compiled a list of exemplars as recommended by Schwitzgebel, pointing at some commonality within the list as instructed, we wouldn’t end up pointing at physical neurons. We’d be pointing at the blueness, not at neurons firing in the V4 colour cortex. We’d be pointing at pain, not at the cerebral circuitry responsible for that pain. We’d be pointing at our conscious selves pointing at our conscious selves, in a dizzying hall of mirrors, not at the neural substrate of some merely functional attention schema turned upon itself.
We won’t see physical neurons anywhere on our list of classic innocent examples, says the hardist, so that’s not what this debate is about. We want to know what adds the phenomenality to ρ to give us the flavoured target of our ostension: the blueness, the pain, the subjective awareness. If Σ = ρ+Δ, and the functional ρ term contributes nothing flavourful, then Σ has to be equivalent to the non-functional component: Σ = Δ, and we can ignore ρ.
Conventional physicalists, or anti-hardists, take a very different view. Spice (Δ) has been conceptualised by the hardists as non-functional; that means it contributes nothing to our concepts, and it cannot play any role in the introspective act that leads to our supposedly innocent identification of the explanatory target. Phenomenal spice is not only incompatible with science as we know it, we would have no reason to care about it even if it did exist. We can ignore it, and turn instead to the core scientific problem of phenomenality without the philosophical distractions. The properties causing our puzzlement must lie within cognition to cause that puzzlement: Σ = ρ, and we can ignore Δ.
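The divergence can be summarised with a toy mapping, using nothing beyond the three labels defined in this post:

```python
# Toy summary: how each camp resolves the same phrase, given the three labels
# defined above (sigma: ostensional consciousness, rho: the functional
# ostensum, delta: phenomenal spice).
hardist = {
    "phenomenal consciousness": "delta",  # sigma collapses to delta
    "dismissed as irrelevant": "rho",
}
anti_hardist = {
    "phenomenal consciousness": "rho",    # sigma collapses to rho
    "dismissed as irrelevant": "delta",
}

# Both camps began with the same ostensional target, sigma, but the shared
# term now points at different referents on each side of the debate.
assert hardist["phenomenal consciousness"] != anti_hardist["phenomenal consciousness"]
```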
Both camps want to explain Σ, and both might agree to call that initial target “phenomenal consciousness”, but they have already gone separate ways, reducing a key concept in their opponents’ framing to null, and equating two things that the opponent wants to keep distinct.
Almost everyone agrees that there is a fundamental conceptual impasse between these two camps, which sets us up for a confusing debate. But the situation is even worse than that. The conceptual disagreement has contaminated the available language so badly that there are no neutral terms available to discuss the conceptual divergence.
The hardists see physicalist explorations of ρ as irrelevant by definition, because the whole point of a term like “phenomenal consciousness” is to get to the puzzling non-functional part — the part imagined as absent in a zombie, the bit Mary couldn’t derive, the part that a mere functional mimic made of cardboard would obviously lack, and so on. From a hardist perspective, the anti-hardists must be obstinately failing to notice that none of the relevant properties are there in the circuitry. Where’s the pain? Where’s the blueness? How can physicalists fail to see that these elements are entirely missing? How can they be so silly as to conflate Σ with ρ?
Conversely, the anti-hardists see Δ as irrelevant by definition. If the hardists hadn’t gone ahead and cleaved off all functionality from the explanatory target, then something vaguely like phenomenal spice could have served as a conceptual placeholder for an entity like “non-derivable phenomenality” or “what Mary couldn’t derive”. But it’s too late for that, because phenomenal spice has already been conceptualised as something zombies lack; it is already something “beyond the performance of function”. That means the hardists’ conceptual placeholder can never hold anything coherent. From an anti-hardist perspective, the hardists must be obstinately ignoring the fact that their posited target cannot possibly be the source of their puzzlement. How can they conflate Σ, a cognitive entity that clearly causes puzzlement, with Δ, which has already been defined out of contention?
If one side is equating Σ to ρ, and dismissing Δ as null, while the other side is equating Σ to Δ, and dismissing ρ as bland and irrelevant, then they are not actually talking about the same thing even if they started with the same innocent list of Schwitzgebelian examples.
And here I would say to both camps, we need all three terms to continue this debate against a sane definitional background.
Postscript
Once your brain has been sensitised to the spice-meal (Δ/Σ) conflation, I suspect you will see it frequently, but I know that many of you remain totally unconvinced.
The next post will consider some specific examples of this conflation in action.