Degrees of Difficulty
From the very first mention of the Hard Problem of Consciousness in Chalmers’ landmark paper of 1995, discussion about the nature of the presumed target in the Subject has been tangled up with how we feel about the explanatory process in the Scientist. Even though they are often interpreted as picking out different aspects of reality, the Easy and Hard labels were explicitly named after our ability to grapple with the issues.
As discussed briefly in previous posts, and as will be explored further in posts ahead, many of the explanatory difficulties that provide fuel for hardist intuitions can be related to cognitive barriers in the brains of those attempting the explanation.
I think the most important of these is the barrier between describing a neural circuit and pointing at an example of such a circuit in situ, within our own cognition. In the former case, the brain engages with merely represented circuitry; in the latter, our cognition engages with actual neurons, and hence with the represented content of the relevant brain-state.
Obviously, hardists do not agree that this mundane meta-explanation accounts for their frustration, and so much more argument will be needed to make this explanation plausible in the posts ahead. But there must be at least some aspects of the puzzle of consciousness that are simply not suitable for analysis within an organ that has limited access to its own mechanisms, so hardists and anti-hardists should be able to find some common ground, identifying aspects of the puzzle that are Hard for reasons that are functional, and perhaps disagreeing on other aspects. Both camps agree that Mary has physical reasons for acquiring her new knowledge on her release, for instance, even if her acquisition provides additional insights. Both camps agree that even Zombie Mary faces a Gap, of sorts. Both camps agree that Zombie Chalmers is an eloquent defender of hardism, and must adopt its/his positions for purely physical reasons.
If we are to escape unnecessary mystification, we will need to identify the precise point at which hardism introduces elements that go beyond these agreed cognitive barriers and begins to defy any potential incorporation within a coherent science of consciousness — or, more significantly, defies any potential incorporation within the physical world. These are not the same thing, because theories entertained within human cognition can have internal Explanatory Gaps without being matched by a schism in the reality they seek to describe. (I face such a gap in my cognitive relation to the geometry of hypercubes, for instance — which I will consider in a later post.)
Chalmers’ identification of a difficult area of inquiry starts off reasonably enough. His initial paper drew a sharp (and potentially useful) distinction between the overtly functional aspects of cognition (the so‑called “Easy Problems”) and what he saw as the philosophical mystery at the centre of consciousness (the “Hard Problem”).
Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well‑suited for this sort of explanation, and so are well‑suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained. (Chalmers, 1995).
The key move, here, is splitting the original Mind-Body Problem into two piles: carving off the cognitive functions of the brain and putting them on an Easy pile of puzzles, while reserving some nebulous entity, “experience”, as the sole target of the Hard Problem. As noted in the previous post, experience is an undefined entity, and although we usually get to the concept initially via an act of introspective pointing, it takes on different theoretical assumptions in different people. “Experience”, here, can be taken to refer to qualia and to consciousness itself, and it broadly corresponds with the term “phenomenal consciousness”, which is used in a similar way.
Because it is not identified functionally, but is instead picked out by ostension, experience remains undefined. This is not necessarily a disaster, but it leads us into dangerous territory. In the hardist framework, experience ends up as a hybrid that gets content from cognitive sources but also identifies itself as standing outside cognition; this seems paradoxical, as discussed in the last post, though the combination has many eloquent defenders.
Referring to the hybrid concept is messy, because there are things that need to be said separately about each component. In the posts to come, I will use the term “ostensional consciousness” to refer to the subjective elements that we find on introspection, and “phenomenal spice” to refer to the entity that zombies lack. Both of these are called “experience”, even though they have very different causal relationships with the physical world, and I think hardists routinely conflate them by using the hybrid concept without acknowledging its dual nature.
If zombies are impossible, as I will be arguing, humans don’t have phenomenal spice, but they still have ostensional consciousness. That still leaves room for substantial debate about the nature of ostensional consciousness, but such a debate is difficult while the possibility of spice is still considered plausible (and sensible discussion often proves impossible if the hybrid is used unwittingly).
The process of identifying experience as something that lacks functionality begins with the act of ostension that launches our curiosity (because we don’t ostend to the functions of our brain states but to what is being represented). That lack of obvious functionality takes on a more explicit form with Chalmers’ initial split of the puzzle into Easy and Hard Problems.
This division could be a reasonable approach if the aspects of cognition being labelled as “Easy” were merely those with overt functionality, and the Hard Problem was intended to mark out a region of inquiry where progress was more difficult, because the functions were less obvious. Wherever there is a well‑defined functional question, the path ahead will obviously be easier for science to see: aspects of mentality without any apparent structure or function will be at the harder end of the spectrum.
There is nothing ill‑posed in this observation — there is nothing controversial about it at all. It is essentially a truism.
But this is where hardism starts, not where it ends up.
Early in his paper, Chalmers provides a list of explanatory targets that constitute the Easy Problems:
● the ability to discriminate, categorize, and react to environmental stimuli;
● the integration of information by a cognitive system;
● the reportability of mental states;
● the ability of a system to access its own internal states;
● the focus of attention;
● the deliberate control of behavior;
● the difference between wakefulness and sleep.
These targets could all be considered legitimate aspects of “subjective consciousness”, conceived in theory-neutral terms, but they lack the special status that Chalmers wants to isolate with his Easy-Hard divide.
This division is not completely clean, even at this early stage. Some of these topics are potentially about acts of introspection, including the very cognitive acts that identify subjective consciousness. Some of them are likely to provide key components in a final theory of consciousness. I agree with the cognitive neuroscientist Michael Graziano that consciousness itself is likely to be an attention schema, so a full exploration of “the focus of attention”, fifth on Chalmers’ list, might ultimately provide a theory of subjective consciousness. These theories are nonetheless excluded from Chalmers’ concept of “experience” because they are not directly targeted by introspection. From this external perspective, they are not the same as ostensional consciousness.
Qualia, if they can be found anywhere in this list, are buried under objective terminology and linked to behaviourally testable skills: “the ability to discriminate, categorize and react to environmental stimuli.” (Arguably, this trivialises the list, casting functional approaches to consciousness as a straw-man with nothing to say about qualia, but that is a minor issue along a path that raises more serious concerns.)
All of these aspects of consciousness involve considerable challenges, says Chalmers, but we have reason to expect that they will be amenable to reductive explanation, which is the standard process of accounting for some entity by considering its component parts and their functional interactions. For each of these Easy items, we can identify the function, show that the brain’s mechanisms achieve that function, and cross it off the list.
But that still leaves the Hard Problem of experience, which (Chalmers insists) is not susceptible to reductive explanation because it is not about the performance of any function. Or, at least, it is not about the performance of any overt function.
There is already a major ambiguity, in this early step of Chalmers’ argument, over whether we are talking about headline functionality suitable for a bullet list or talking about all functions — including those that are too complex for the mind of a scientist to contemplate with a gratifying sense of illumination, as well as those that are subjectively invisible to the cognitive system under consideration.
It is relatively easy and uncontroversial to argue that experience won’t be explained by ticking off a headline item and that hidden functionality will be more challenging to address, but it is not so plausible to suggest that experience has no functional effects at all — which is what would be necessary for zombies to be logically possible.
Ultimately, Chalmers wants to ask “Why should physical processing give rise to a rich inner life at all?” This question introduces a type of Hardness that is completely different to the sort we encounter when we first move away from headline items.
The claim that consciousness is not on the bullet list is a long way from the (potentially self-defeating) claim that there could be an entity, “experience”, that does not modify the behaviour of a single neuron in our brain, but about which we spend time puzzling. In part, Chalmers’ argument gains rhetorical advantage by sliding from one of these claims to the other, and this post will consider the full length of this wild slide, from bullet list to zombies.
There is indeed a spectrum of difficulty extending from the easiest problems of cognition to the most philosophically challenging, and it can be useful to identify major landmarks along this spectrum, the most important of which is the distinction between objective and subjective aspects of mentality. This approach leads to an Easy pile and a Hard pile, just as Chalmers suggests — but Chalmers’ presentation of the issues skips over unacknowledged conflations about what it means for a problem to be difficult within each of these piles, as well as between them, and that’s where things start to go wrong.
This post will ultimately be about three different types of Hardness that have been caught up in Chalmers’ presentation of the puzzle of consciousness. We could call these obscurity, irreducibility, and epiphenomenality. They each offer different challenges to physicalist neuroscience, but only the third of them marks a clear departure from physicalism, because, unlike the other two sources of Hardness, epiphenomenal extras cannot be attributed to difficulties within the Scientist’s Brain. Epiphenomenal entities can live alongside the laws of physics, but they cannot be accommodated within those laws.
Some folk who are otherwise sympathetic to hardism take a slightly different turn along the failed explanatory journey, suggesting that consciousness causes neural activity to depart from the laws of physics as we currently understand them. Most scientists dismiss this possibility because any mind-brain interaction would be likely to violate conservation of energy and momentum, and the physics necessary to explain neural behaviour is relatively well understood. I think the suggestion is poorly motivated, and will discuss why in later posts. In general, this philosophical position does not lead directly to the possibility of zombies, so the current series of posts will not directly address these interactionist views.
The Hard Problem, as it was originally formulated, did not allow for this form of hidden interaction between mind and brain, and the Hard Problem is my main target.
The issues are complex, but we can consider the full range of difficulty in three sections that span from headline Easy items to zombie-inspired impossibilities.
The Hardness within the Hard Problem ultimately involves:
Variation in difficulty within the Easy pile (related to whether the relevant functions are simple and explicit or complex and obscure)
A perspectival shift between the piles (related to whether we take a meta-perspective of the frustrated scientist, or attempt the same cognitive journey ourselves)
A fatal conflation within the Hard pile (related to the difference between the non-derivability of ostensional consciousness and the posited extra, phenomenal spice)
Variation in Difficulty within the Easy Pile
Not all “Easy” items are of the same calibre. Cognitive functions that are readily summarised in bite‑sized cognitive chunks suitable for a bullet list are not the only sort of functions. For complex cognitive systems, these headline items probably constitute a very minor part of overall functionality. As neuroscience matures, I think it will be relatively rare for complete understanding to be achieved by having a nameable function in mind, and then seeing how that function is implemented. We will discover functions we had no name for, and we will uncover functional processes we struggle to grasp, even from an objective perspective.
For instance, programmers of modern AIs can have complete confidence that the algorithms being computed are entirely functional in nature, but the programmers generally do not have easy familiarity with the program’s full abilities and all of its implicit functional effects. Most of the cleverness is not in the initial algorithms, but in the billions of parameters that have been acquired through training. The functional effects of any single parameter are usually obscure, and the myriad interactions between parameters are almost impossible for the programmers to follow. The programmers have to treat the vast inscrutable matrices producing the AI’s outputs as something of a black box, functionally explicable in principle but not explicable in practice.
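The opacity of individual parameters can be made concrete with a toy sketch. (This is purely illustrative: the miniature network, its weights, and the perturbation below are invented for this example, not drawn from any real system.)

```python
import random

random.seed(0)  # fixed seed so the toy "trained" weights are reproducible

# Toy 3-input, 4-hidden-unit, 1-output network; the random weights stand in
# for parameters acquired through training.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    # ReLU hidden layer followed by a linear readout
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

x = [0.5, -0.2, 0.9]
baseline = forward(x)

# Nudge a single parameter: predicting even the direction of the change in
# the output, without rerunning the computation, is already non-trivial here,
# and real models have billions of such interacting parameters.
W1[2][1] += 0.1
delta = forward(x) - baseline
```

Even at this scale, the “function” of `W1[2][1]` is only visible through the whole computation; scaled up by nine orders of magnitude, treating the matrices as a black box becomes unavoidable.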
Between these overt and implicit conceptions of “functional”, there is a vast and important difference. The idea that experience is not on a list of headline items does not provide any support for the more contentious idea that it has no functional basis, but Chalmers’ paper begins with a direct appeal to this uncontroversial part of the difficulty spectrum.
If the argument stopped there, we could all nod and say that, yes, consciousness is tricky.
But if Chalmers were merely saying that we face a Hard Problem because many aspects of consciousness cannot be transparently associated with an entry on his list of headline items, then this would be a claim that had no major significance for our conception of reality.
Confusingly, at various points in his paper, Chalmers explicitly includes these milder forms of difficulty in the scope of the Hard Problem by conceding that experience might eventually turn out to have unexpected functional effects after all. If so, the Hardness will turn out to be no more than a couple of conceptual moves away from simple explanations of functions.
“This is not to say that experience has no function. Perhaps it will turn out to play an important cognitive role. But for any role it might play, there will be more to the explanation of experience than a simple explanation of the function. Perhaps it will even turn out that in the course of explaining a function, we will be led to the key insight that allows an explanation of experience.” [Emphasis added.]
Comments like this can make it difficult to get a fix on what Chalmers really believes, but this brief concession that experience might play a hidden functional role is not consistent with the overall hardist position, and it is in conflict with most of what Chalmers has written since, including his frequent discussion of zombies; it is even at odds with the general thrust of the same paper.
By holding out the possibility that hardism is merely waiting on scientists to identify some reasonable functional explanation of experience, the hardist framing can defend itself against the charge of being empty or ill‑posed. If scientists could only demonstrate the function of experience, Chalmers is suggesting, everything would be resolved; if a solution is possible in principle, then the Hard Problem must be asking a legitimate question, and the fault for not answering it lies with scientists. But this is something of a rhetorical trick, because accounting for experience in functional terms is a challenge he actually believes scientists cannot meet; in the meantime, the non‑derivability of qualia justifies ongoing pessimism, as he is keen to point out elsewhere.
Chalmers’ offer to acknowledge a successful functional theory if and when it comes along is an empty one; his own framing prevents it. The Gap prevents it. But the hopelessness of the explanatory task is hidden by the fact that there is, indeed, a vast amount of functionality in the brain that is yet to be described, and room for a range of difficulty even within the functional domain, and hence within the Easy pile.
This first ambiguity is important to recognise because the plausibility of the deep, unbridgeable Easy‑Hard divide has been boosted by an uncontroversial truism. Of course there is a spectrum of difficulty, even among items for which there must be an in‑principle functional explanation. And of course we will understand things better as we make progress in neuroscience.
Even here, though, within the Easy pile, we can expect explanatory frustration. The processing power of deliberate, expressible cognition has been estimated at ten bits per second, compared with cerebral processing in the computational background, which amounts to about a trillion bits per second.
That means we will have vast areas of pure functionality that are not easily approachable within our cognitive capacity, aspects that we may never really understand in a satisfying way, even within a purely objective approach.
When Chalmers writes, “To explain a cognitive function, we need only specify a mechanism that can perform the function,” or “there will be more to the explanation of experience than a simple explanation of the function”, he is making the uncontroversial claim that the functions of experience currently seem obscure. This is so obvious that it adds intuitive force to his overall argument that consciousness is tricky, but it does not add any support to his much stronger overall claim.
Ultimately, this range of difficulty within purely functional approaches to consciousness presents an important practical limit to the study of consciousness, but, in the context of critiquing hardism, it is merely a rhetorical issue. It reflects the way hardism has been presented, but it is not a core feature of hardism. In the broader discussion, it soon becomes clear that Chalmers intends to include non‑headline functionality on the Easy pile. For instance, under a hardist view, the impossible‑to‑follow workings of a contemporary AI are still Easy in flavour. In fact, if we accept the hardist framing, then anything we can explain in any remotely satisfactory way by appealing to underlying algorithms ends up on the Easy pile, and what we are left with on the Hard pile are items that fundamentally resist explanation.
And here I agree with Chalmers that there is something fundamentally difficult to explain in the manner we might have wanted. The Hard Problem of Consciousness encounters issues that have no true analogy in any other domain (with the possible exception of self-referential logic systems, or buggy programming code that has dangerous access to different levels of abstraction). Classic analogies such as the Previously Hard Problem of Life don’t really cut it.
A Perspectival Shift Between the Easy and Hard Piles
The real difficulty with understanding consciousness relates to its subjective aspect.
One of the challenges in combating hardism is that the notion of an Easy‑Hard divide incorporates an important and worthwhile observation about the effects of subjectivity on explicability. I think the Hard Problem correctly identifies a distinction between difficult functional questions, which can nonetheless be answered within an objective perspective (and hence qualify as “Easy”), and fundamentally more baffling questions that involve a subjective aspect (and hence qualify as “Hard”).
A core tenet of Chalmers’ framing is that, for everything on his Easy list and (note the slide) for any potential functional explanation of consciousness that might be offered in future, no matter how complex, we will always be able to ask a further unanswered question.
These further questions are always related to properties we identified on introspection; they are always motivated by ostensional consciousness, and always involve ostension in their posing.
“What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?” (Chalmers, 1995).
Chalmers writes here that there “may” be a further unanswered question, but this display of caution can either be viewed as a rhetorical nicety, used to prepare the reader for stronger versions of his thesis, or as yet another tactical escape route leading back to much weaker versions.
If the Hard Problem is expressed cautiously enough, its supporters can always retreat to a defensible position that merely states consciousness is puzzling and its functions are not currently apparent; the full version of Chalmers’ Hard Problem can borrow plausibility from milder forms.
Why do some questions end up on the Easy pile while others end up on the Hard pile? Chalmers’ answer is that some questions go “beyond the performance of function”, but I propose that in many instances (if not all), there is a much less exciting explanation.
I will be arguing that the Hard questions are those that involve an act of internal pointing, and hence rely on an example, so the primary Easy-Hard divide is an overreaction to the fact that we do our cognition in an untidy space that houses theories and examples in a confusing jumble.
Some questions are posed within the brain of a Scientist in objective, functional terms, and those are Easy. Others are posed by holding up actual examples of what needs to be explained, with the Scientist momentarily playing as Subject, or swapping roles mid-question. Those questions will remain Hard if the example is considered a key element of the question, the cognitive route from theory to example is not available, and we insist on getting to the sought-after property in our own heads. This will often be the case, so many self-referential, example-containing Hard questions will be difficult to address in scientific terms, for reasons that we can identify — reasons that are ultimately functional in nature, when we consider the brain of the Scientist and we factor in the level of access of that brain to its own mechanisms.
Accounting for this difficulty will therefore constitute an Easy Problem (albeit a very difficult Easy Problem) whenever we take the stance of Meta-Scientists — but that perspective won’t stop the problems remaining fundamentally Hard in the brain of the Scientist.
Every Hard question, then, will create a dual entry into this classification system.
These questions are, therefore, simultaneously Easy and Hard (and one possible response to hardism is just to say that everything can be considered Easy once the representational relationships between concepts are adequately factored in).
A question like, “How do I derive blueness from circuit diagrams?” is Hard for the Scientist but explaining the barrier to derivation is Easy for the Meta-Scientist. To return to a familiar example, we can explain Mary’s failures in functional terms, and even pre-release Mary could explain her own failures, but she still can’t rewire her visual system from text alone. The meta-explanation leaves the original Gap in operation, and we can add as many meta-layers as we like, and there will still be a cognitive route from theory to example that seems desirable but cannot be achieved.
Zooming out to consider the big picture, all the concerns articulated in Chalmers’ book, The Conscious Mind, can be explained in functional terms as very complex motor acts of authorship, and thus they constitute one giant Easy problem, even though the book is about the Hard Problem. There is no line of text in that book, or anywhere else in hardist literature, that does not have an entirely functional explanation, even when it is purportedly about the lack of any functional explanation. Every line of the hardist canon requires an entry on the Easy pile, when we consider why it has been said, and on the Hard pile, when we engage with it as readers sympathetic to its content.
Questions change from Easy to Hard and back depending on which representational perspective we adopt, and whether we, as Meta-Scientists or Meta-Philosophers, are ourselves engaging in the problematic practice of embedding examples in our questions. If we read along with Chalmers’ discussion and ostend to blueness, we nod our heads, sharing in his mystification. If we merely watch this ostensive act as observers, I propose that we would see a complex cognitive process confusing itself by trying to connect theories to examples. (And, as a virtualist, I would say we are making it hard for ourselves by trusting the veracity of all of our representations.)
Zombies, of course, would have the same vistas of complexity within their brains, too, and a similar division between Easy and Hard, complete with an Explanatory Gap. All of their questions have functional explanations from a meta-perspective, casting those questions as Easy, even though their analogues in our own world are Hard. Explaining the Gap is Easy; crossing it is Hard. The same rule applies in the Zombie World, though we might be tempted to call their difficulties Pseudo-Hard. The arguments remain word-for-word identical.
This dual nature of every Hard Question will require further posts to explore, but I will put it on hold, for now, and return to it later, because hardism does not just consist of dwelling on the non-derivability of qualia. It also involves the following spurious logic in the Scientist’s Brain:
If my functional analysis of the relevant cognitive substrate does not lead me to the property I want explained, that property must be functionless.
That spurious step in logic creates the third ambiguity on the slide to zombies.
Ambiguity within the Hard Pile
Let me briefly recap. So far, I have considered the range of difficulty within the Easy pile of questions, and I have alluded to the fact that every Hard question requires a dual entry on both the Easy and Hard piles, such that the Easy-Hard divide is potentially no more than a matter of perspective. The implications of that dual entry will be explored in the posts ahead.
But there is much more critical ambiguity within Chalmers’ set-up of the Hard Problem.
There is a spectrum of Hardness within the Hard pile. The central bafflement about experience involves a mix of legitimate concerns about barriers to derivation and ill-posed musings about zombies.
Superficially, it might seem that Chalmers has already identified what it takes for some aspect of mentality to end up on the Hard pile: it just needs a subjective aspect.
As soon as subjective qualities are introduced, the Gap comes into play, taking us beyond some simple spectrum of difficulty among functional processes. We have an Easy-Hard divide, with functional analysis failing on the Hard side. At that point, we might be tempted to think of the subjective aspect as epiphenomenal; from the objective perspective, the subjective elements do not play any discernible causal role in the functional story, and the objective story is causally complete. Ergo, the subjective aspects must be epiphenomenal.
For instance, when we say that contemporary AI programmers are forced to treat the matrices of trained variables as a black box, we can nonetheless have confidence that the black box is algorithmic at its core. Copy the parameters into new hardware, run exactly the same algorithm (with the same seed in your random number generator), and you will get the same answer. As long as we are asking questions about the behaviour of the black box, we are still dealing with objective aspects of reality, and there is no real doubt that everything we observe has a functional explanation in principle, even if the programmers have to resort to hand‑waving simplifications. But subjectivity introduces a fundamentally new type of difficulty: if we built a working algorithmic model of colour perception and instantiated it in a machine, there would be a notional Hard component based on our inability to determine whether that machine experienced colour and, if so, what it was like for the machine. These are subjective questions, and they can’t be satisfactorily answered from an objective perspective — at least not from within the hardist framing, which has a specific understanding of what counts as a satisfactory explanation. The computer’s output will continue regardless of whether the computer feels anything, so it might seem that we are asking about a property that is essentially epiphenomenal.
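The determinism claim above can be demonstrated in a few lines. (A minimal sketch: the seeded sampling step below is a stand-in for any stochastic computation, not a model of any particular AI system.)

```python
import random

def run(seed):
    # A stand-in for "copy the parameters and rerun the algorithm":
    # the same seed regenerates exactly the same parameters, so the
    # whole computation replays bit-for-bit.
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(1000)]
    return sum(w * w for w in weights)

assert run(42) == run(42)  # identical seed, identical answer
assert run(42) != run(43)  # a different seed diverges
```

Whatever inner life we might wonder about, nothing in the replay depends on it: the objective story is closed under the algorithm and its seed.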
This flow of conclusions has acquired an easy familiarity, so we might not notice its assumptions, or we might overlook the paradox that, in our own case, we can talk about the supposedly epiphenomenal extras, so they must be there in the causal story after all, unless we appeal to an extraordinary coincidence that words chosen for one reason end up serving for a different reason.
The earlier conflations we met along the slide to epiphenomenalism contribute to the final acceptance of this functionless extra. Some of our explanatory frustration comes down to complexity, and hinges on issues within the Easy pile. That makes it impossible to deny that mentality is hard to explain. Chalmers rightly points out that there is another layer of difficulty introduced by subjectivity. Complex cognitive systems come with their own privileged access to elusive subjective properties, which can’t be reached by other cognitive systems approaching them from an objective perspective.
That leads to the psychological phenomenon, in Scientists, of pseudo-irreducibility. There is nothing ill-posed in asking for an explanation of the appearance of irreducibility. Pseudo-irreducibility leads to pseudo-epiphenomenality — we can easily ignore what we can’t derive. Other cognitive factors join in, such as the redundancy across different levels of explanation — high-level features are easy to ignore if the low-level story is causally complete. There is nothing ill-posed in asking why people think of spice. Scientists owe hardists an explanation of all the cognitive factors underlying their intuitions.
But hardists take these features and posit a complete lack of functionality in the Subject, as well. They invent phenomenal spice.
Early in his landmark paper, Chalmers suggests that he is open to experience playing a functional role, but the Gap makes it unlikely that he would accept any theory placed before him. No card-carrying hardist truly believes that any account of some previously hidden functional role for colour qualia will rescue Mary from her frustration; similar concerns apply to the even more nebulous notion of consciousness itself.
When Chalmers alludes to zombies, though, in follow-up publications (or discusses equivalent ideas that omit the Z-word), the offer to accept a functional theory is negated even more profoundly. Even if we account for the Gap and explain why the theory is disappointing within the Scientist’s Brain, that won’t be enough.
If we think zombies are possible, any announcement of a previously hidden role for consciousness will immediately succumb to a strong version of the Easy-Hard divide that is based on the implicit possibility of zombies. Not only does the proffered theory fail to lead us to the property of interest in our own heads, it is plausible that a physical instance of the substrate being described does not lead to any experience in the head of our functional duplicate, either. (This is like the difference between a theory of gravity not sucking things in versus a body of mass not sucking things in. Or the difference between a theory of the retina and actual retinal activity.)
Suppose a scientist takes up the original offer to find a previously hidden functional role. It seems inevitable that, faced with a detailed description of the posited role, it would still be possible for a hardist to insist that a merely physical brain might instantiate the same mechanisms but be dead inside; it would still be possible to ask why the previously hidden role is accompanied by experience, and the newly described role would merely be added to the list of Easy Problems.
Indeed, this is the very essence of the Zombie thought experiment, and why it is so devastating.
We can see Chalmers take this approach in his 1995 paper, which dismisses whole categories of potential research, as well as some individual candidate explanations of consciousness, such as Bernard Baars’s Global Workspace Theory.
According to this theory, the contents of consciousness are contained in a global workspace, a central processor used to mediate communication between a host of specialized nonconscious processors. When these specialized processors need to broadcast information to the rest of the system, they do so by sending this information to the workspace, which acts as a kind of communal blackboard for the rest of the system, accessible to all the other processors.
[…] The best the theory can do is to say that the information is experienced because it is globally accessible. But now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered.
There is nothing wrong with noting that something about these theories feels inadequate. Chalmers is only reporting — presumably accurately — that he feels underwhelmed by Baars’s theory, so what is my complaint, here?
My objection to the hardist framing is not that hardists report a sense of disappointment or frustration with current theories of consciousness, nor that they anticipate similar frustration in response to all future functional theories. That frustration exists in the brains of those dissatisfied with the theory. It is understandable, even expected.
My objection is that hardism approaches the puzzle of consciousness with inflated expectations of what understanding should feel like; hardism comes away with a feeling of disappointment, and then it takes that dissatisfaction as evidence of a mystery so deep that our model of reality will need to be extended, and we will need epiphenomenal entities to bridge the holes in the explanation.
It will always be the case, for any functional explanation, that it could have applied to a zombie, so a further question will always be conceptually plausible, just as Chalmers says.
“But now the question arises in a different form: why should [X] give rise to conscious experience?”
I agree that we can always ask a bridging question of this nature — we can insert any theory in the spot marked by X. But our ability to ask this question is so unconstrained it is ultimately unhelpful.
Acknowledging a Gap is not the same as assuming that something must exist for the Gap to be about — we do not need to assume that there is some thing that fills out the difference between theory and example. (Other than the example itself. The difference between a theory of gravity and an example is the presence of a massive body; the difference between a theory of the retina and an example is the retina. The difference between a perfect description of a cat, and a cat is… the cat.)
Hardness related to a gap in explanation is very different to Hardness related to a functionless entity.
It is not often acknowledged that the hardist framing incorporates two main types of Hardness, and hence two types of experience. These two types are what I called ostensional consciousness and phenomenal spice in the previous post, and it is worth reviewing how I intend to use the terms.
Ostensional consciousness is simply subjective consciousness, as picked out by introspection, with minimal theoretical commitments (although it does come with cognitive commitments that we cannot escape, and this leads to its own serious conceptual problems, an issue to be explored in later posts).
Phenomenal spice is what zombies lack. It raises the spectre of the Zombie’s Delusion. To debunk the Hard Problem is to debunk the legitimacy of phenomenal spice.
Ostensional consciousness is the initial conceptual source of our puzzlement in this field, and it has some claim on being a legitimate target. It’s what I would call “subjective consciousness” or “phenomenal consciousness” if these terms were not already entangled with the idea of phenomenal spice. Ostensional consciousness is what we pick out on introspection, but without tagging it as epiphenomenal; accordingly, it does not run directly into paradox. Ostensional consciousness is puzzling, though, because we can’t produce a satisfying reductive account of it. We can’t start with circuit diagrams, do some clever analysis, and, from that process alone, end up with the property we want to ostend to. In many cases, we can’t find it directly in the physical sciences at all, and many serious people doubt that it exists, though some of them are really saying that spice does not exist.
Ostensional consciousness might even be fundamentally different to how it seems, which is what I will be arguing. (This is just another way of saying it is not as it seems, but some ways of expressing this idea cause more outrage than others.)
Ostensional consciousness is therefore a little bit mysterious — which is good, because if we can’t capture a sense of mystery in our target, we are probably missing it entirely. Ostensional consciousness potentially satisfies what Eric Schwitzgebel calls the wonderfulness condition.
One necessary condition of being substantively interesting in the relevant sense is that phenomenal consciousness should retain at least a superficial air of mystery and epistemic difficulty, rather than collapsing immediately into something as straightforwardly deflationary as dispositions to verbal report, or functional ‘access consciousness’ in Block’s (1995/2007) sense, or an ‘easy problem’ in Chalmers’ (1995) sense. If the reduction of phenomenal consciousness to something physical or functional or ‘easy’ is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition.
[...]
The wonderfulness condition involves a mild epistemic commitment in the neighbourhood of non‑physicality or non‑reducibility. The wonderfulness condition commits to its being not straightforwardly obvious as a matter of definition what the relationship is between phenomenal consciousness and cognitive functional or physical processes.
(Schwitzgebel, from Illusionism, 2017, edited by Keith Frankish).
The Hard Problem intimately incorporates both of these elements. The Chalmersian concept of “experience” is primarily directed at phenomenal spice, but it gets all of its conceptual content (including its wonderfulness) from ostensional consciousness, and so it is necessarily a hybrid. The Hard Problem starts with curiosity about ostensional consciousness, but the hardist version of this concept adds the idea that it might go missing in zombies — that it might be spice.
The Hard Problem slides between these two concepts of experience until it latches onto the impossibility of the physical sciences ever accounting for what zombies lack. And there it stops, unable to appeal to any functional explanation, and obliged to reject any possible scientific account of consciousness.
A Four-Pile Approach
Ultimately, we will need four piles of problems to map out the different levels of difficulty we encounter when trying to explain consciousness. After the first pile, each new pile will add a new type of difficulty.
We need an Easy pile, as before, to cover well-formed objective questions, but it can be split into headline functionality and obscure black‑box functionality. Call these the Headline pile and the Obscure pile, both of them formerly known as Easy.
The Hard pile, which deals with issues of subjectivity, can be split into a Gap‑affected pile and a Zombie‑difference pile.
We can recall these four piles with the acronym HOGZ: Headline, Obscure, Gap-affected, and Zombie-Difference.
On the first of the Easy piles (the Headline pile), we can place overtly functional questions like: How does the visual system detect a vertical edge? How does the retina process colour? How does the brain form the sentence, ‘The sky is blue’? How does the brain move its focus of attention around?
On the second Easy pile (the Obscure pile), we put more difficult questions where we haven’t yet managed to come up with clear functional questions, but we can envisage doing so. What does empathy involve? What is the neuro‑cognitive basis of enjoying music? To what extent does the semantic embedding of colour concepts distort the three-dimensional colour space? What aspects of cognition involve top-down control? Why do primate brains express belief in a Hard Problem?
At this point, as Chalmers correctly noted, we will have leftover questions that would previously have been thrown onto a common Hard pile because they involve an element of subjectivity. Many of them will have tell-tale ostensive pronouns or they will involve questions across disanalogous cognitive systems. How can I derive this property from circuit diagrams? How do neurons give rise to this sense of being in here? What does redness look like? [Asked by a colour-blind scientist.] What is it like to be a bat? [Asked by a human].
These are the traditional Hard questions, because they require the asker to adopt a specific cognitive state, holding up an example of what needs to be explained, and they belong on the pile of experiential matters that is supposedly targeted by the Hard Problem.
But this time we will notionally subdivide the questions with the help of an omniscient demon.
If the barrier to derivation occurs because of cognitive factors in our own head, as Scientists, we will place our questions on the Gap‑affected pile, or G‑pile (along with a similar entry on the O-pile, repeating the same question within quotation marks and attributing it to a Scientist.) If the barrier to derivation indicates that we are considering a functionless aspect of mentality in the Subject, possessed by humans but missing in hypothetical zombies, then we will place it on the Zombie‑difference pile, or Z‑pile.
The G-Z ambiguity within the original Hard pile is more difficult to recognise than the H-O ambiguity within the original Easy pile, but it relates to what is assumed to be the underlying reason that some aspect of mentality qualifies as Hard, once we have truly followed all functional explanations to their bitter end and are left with a set of unanswered questions.
Because the Hard pile has notionally been split in two in this four‑pile version of the puzzle, there are now two potential versions of the Hard Problem, corresponding to two reasons we might face a perpetual further unanswered question.
We might be asking why some subjective mental properties are on the Gap‑affected pile, which could be thought of as a weak G‑version of the Hard Problem focusing on the Scientist. Or we might be asking why some properties are on the Zombie‑difference pile, a much stronger Z‑version focusing on the Subject.
In other words, we might be considering the Hard Problem with Jackson’s Mary in mind (minus his contentious conclusion that qualia are non‑physical), or we might be considering it with Chalmers’ zombies in mind (which necessarily includes the assumption that qualia are non‑physical, because this assumption is built into the definition of a zombie).
Equivalently, we might be asking about the explanatory opacity and underlying nature of ostensional consciousness, or about phenomenal spice, with its presumption that zombies are possible.
For some readers, especially those already drawn to anti-hardism, the distinction between these two dimensions of Hardness will be obvious, and a key part of why hardism never ensnared them. For others it will be obscure or implausible, so it is worth back-tracking to consider where our concepts come from.
When the idea of “experience” is introduced by hardist authors, we are usually invited to think about, and hence point to, the subjective aspects of our own perceptual states: what it is like to perceive deep blue, middle C, and so on.
Intuition strongly suggests that the subjective properties of these states are non‑derivable in the sense highlighted by Jackson. That means it is plausible to imagine these properties stuck on a stubborn Hard pile of mental properties that are not explicable in functional terms at all, not even “in principle”. (In reality, we are forced to use a biological organ to do our thinking, so what we can explain “in principle” cannot necessarily be separated from what we can explain “in practice”, but that’s a problem for later consideration.)
Because these subjective properties cannot be derived from a full functional account of the brain, they are easily imagined as being outside the causal network of the brain, which might seem to lead us directly to the notion of zombies.
Translating Chalmers’ arguments into the terms I am using, his landmark paper moves across the four piles, H, O, G, Z. He starts with the suggestion that the functions of ostensional consciousness are unclear and not to be found on the Headline pile. All it would take for this to be true would be the presence of some items on the Obscure pile, and in this spirit, he suggests that experience might turn out to have functionality after all, when scientists finish sifting through the Obscure pile.
He then moves on to the observation that many familiar aspects of ostensional consciousness appear to be non-derivable, suggesting we will be stuck with a Gap-affected pile.
At that point, he cautiously proposes that an exhaustive account of the functions of the brain “may” leave a further unanswered question. We will have a residual question if the Gap left the obvious one unanswered.
All of this prepares the reader for the plausibility of the Zombie-difference pile, which is where physicalism can no longer keep up.
In additional arguments, appearing later in the same paper and expanded at length in his follow‑up book, The Conscious Mind (1996), the caution is put aside and Chalmers insists that there inevitably will be such a further question, which essentially comes down to this one:
Why do we experience these phenomenal properties that, hypothetically, could have been entirely absent while the very same functional processes were taking place in a purely physical world?
Or, equivalently:
How does a subjective, first-person reality arise, filled with phenomenal properties that, hypothetically, could have been entirely absent while the very same functional processes were taking place in a purely objective reality?
Or, more succinctly:
Why aren’t we zombies?
No matter how much progress we make in understanding what the brain does, runs the central hardist argument, there will always be a leftover component related to how it feels.
But here we must take note of the ambiguous nature of what it takes to be Hard.
A mental property like subjective blueness might end up classified as Hard merely because it cannot be derived from a black and white textbook. Or it might qualify as Hard because it doesn’t play any role in physical reality at all, a much more significant situation.
It matters, then, whether we conceptualise “experience” in terms of the Gap, or in terms of what a zombie lacks. These are not the same, even if we usually lump them together under a common label. One version of experience frustrates physicalism, the other breaks it.
A zombie is not a zombie by virtue of having puzzling qualia that defy derivation when scientists try to study its brain; the zombie lacks qualia completely, even from its own perspective, because it is imagined as not having any perspective worthy of being considered valid.
In laying out the Hard Problem, Chalmers is not just noting that scientists reach a dead‑end with their explanations; he wants to argue that this dead‑end informs them about the nature of reality, and that reality contains important non-functional elements that exert no causal influence on the physical world.
Importantly, these two types of Hardness stand in a representational relationship with each other (though that is not their only difference). One side of this ambiguity merely involves the explanatory deficiency of the physical sciences (an academic field that seeks to describe reality), whereas the other pertains to whether physical reality itself is deficient.
Philosophers would express this idea by saying that the Gap is epistemological, whereas what separates us from zombies is ontological. I will consider these terms in the next post, but epistemology is the consideration of what we can know, and ontology is the consideration of the nature and ingredients of reality. Many Gap-affected questions are difficult because they embed a piece of actual ontology within the explanatory mission, attaching it with an ostensive pronoun, creating recursive difficulties that persist even as the asker tries to take a meta-perspective. These questions effectively ask a theory of a brain-state to reproduce the brain-state, though they express themselves in terms of their represented content, so this tangle is often difficult to appreciate.
This will be a recurring pattern in all of the discussions ahead: the most serious conflations within hardism are those that occur across representational boundaries (which often involves an extrapolation from epistemology to ontology). For now it is enough to take note of a clear instance of the pattern, which we will meet at every step of the discussion. Theories are confused with examples, models are confused with contents, and problems in the brains of scientists are confused with mysteries in the brains they are studying.
An important point to note about this four-pile approach to the puzzle of consciousness is that there can never be any defensible reason for putting an item on the Z-pile. If your zombie twin is about to place some question on the Z-pile, you can redirect it by trying to convince it that it has no qualia, but it will reject your redirection for all the same reasons you might reject anti-hardist arguments. You will be arguing at a disadvantage, because its reasons for placing any item on the Z-pile are ultimately identical to yours, which is why this blog has its name.
To some readers, it might seem counterintuitive to talk of experience as a hybrid. Subjectively, it is quite difficult to identify these sub‑components within any particular quale, such as blueness, because we are trying to dissect a concept that presents itself as homogeneous. And, as soon as we conceptualise a quale as non‑functional, we are essentially committing to that homogeneity; our target concept is about something that has no internal structure, so it is difficult to see that the concept itself is a hybrid of different entities entailing different theoretical commitments.
For instance, we probably imagine the exact same blue property when we think of what we find on introspection, as when we imagine Mary stepping outside her lab to experience blueness for the first time, or when we contemplate the blueness missing in a zombie. The accompanying imagery for these concepts always reverts to the same mysterious fog of phenomenal blueness, so it is easy to miss the hybrid nature of the target concept.
In part, these hybrid elements can be overlooked because they are not core features of blueness; they are conceptual add‑ons to the much simpler concept of colour we acquired as children, serving as mental footnotes that attempt to link the elemental properties back into an adult world view.
A hardist might even insist that the discussion is really about the target of these flawed concepts: the blueness, the felt quality of internal awareness, and so on. In that case, the hardist can argue, the theoretical footnotes don’t matter because it is the blueness itself that sits outside the physically explicable causal account. The hardist can simply refuse to consider the conceptual basis of that blueness, insisting that it is the targeted blueness that is mysterious, not our subsequent cognitive processing of that blueness (or our cognitive processing of some functional analogue of blueness).
Here we can ask a follow-up question.
Should we consider “the target” of hardist puzzlement to be the concept we are trying to explain, such as our concept of blueness created during a moment of introspection, or the entity targeted by that concept, such as subjective blueness itself?
The hardist might answer, with some indignation, that of course it is the subjective blueness that is the target. The cognitive machinery is not the problem; it is not being targeted at all. After all, the same machinery exists in zombies; it is described in Mary’s textbook; it is part of the Easy Problems.
Again, we get a glimpse of the core paradox lurking within hardism. The entire story of how a concept of blueness arises in our brain and causes puzzlement is dismissed as Easy. Because the cognitive process is functional, there is no conceivable way that the epiphenomenal blueness quale itself played any role in that process — but the puzzlement is nonetheless proffered as evidence for the epiphenomenal blueness quale.
Evidence of one thing is taken as support for something else.
So, is the Hard Problem about our introspectively acquired concept of ostensional consciousness (the thing driving our puzzlement), or phenomenal spice (which never disturbs our thoughts)?
How could we possibly tease these apart?
It’s not as though we can ever think of pure blueness without engaging our innate concept of it; we can’t ever escape from our own thinking. But if we are ultimately trying to explain a concept, such as our concept of blueness, we are back at the failure of reductive explanation, and the failure could reside entirely within the explainer.
Indeed, whether that failure is backed up by a non-functional extra can make no difference to the intractability of our frustration. At best, epiphenomenal extras could add a frustration-quale and awareness of our frustration, without actually impacting on anything we say or why we say it.
Whatever gives rise to a much-discussed barrier to reductive explanation must be playing some role in our cognition because that frustration is within our cognition. An epiphenomenal entity such as spice can play no such role. The entity that frustrates us and the epiphenomenal entity we have been led to propose because of that frustration cannot be the same entity, and yet the Hard Problem targets both under a common label.
To a hardist who believes that zombies are possible, the charge that their concept of experience is a hybrid might be met with a shrug. So what?
The concepts of “irreducibility” and “epiphenomenality” are easy enough to distinguish in theory — in the sense that people interested in consciousness know what the two words mean — but hardists usually assume that experience embodies both of these properties, so they can be conveniently discussed together. A common hardist belief seems to be that the non‑derivability of qualia and the possibility of zombies both pose an equally serious threat to physicalism, and both stem from a common source, so they merely provide slightly different ways of seeing the difficult core of the mystery. Indeed, in online debates around these issues, discussions of the Hard Problem often slide from one of these conceptions of experience to the other, without either side noting when the target of the discussion has changed. In some cases, defenders of the Hard Problem are really only defending the weak G-version, but they are content to draw sweeping conclusions as though they accepted the Z-version.
In many ways, this is an understandable conflation: the quality of deep blue can’t be derived from a textbook description of colour perception; physical reality could be bland like the textbook description, missing the blueness; that means blueness can’t be playing any functional role or having any physical effects, or the book description would have had a functional hole in it, which it didn’t.
From the hardist perspective, it’s all the same problem.
Except that it’s not the same problem at all. Throwing all of these issues together is to drift between the Hard Problem of Consciousness and the Meta-Problem of Consciousness.
Which means this is another instance of a representational tangle.
To be continued…
Please let me know of any typos. I have been on-call, and I am sleep-deprived.