Required Reading
If you are a regular, please skip this section, and pick up the post below the blue square.
This post is a continuation of Part 1. If you haven’t read it, please start there, instead:
If the term “hardist” means nothing to you, I also suggest you read this post at LessWrong. Hardists are from Camp #2; they accept the original framing of the Hard Problem with its implicit reliance on zombies; non-hardists are from Camp #1, and they don’t think there are any special non-functional extras involved in consciousness.
If you are not familiar with the symbols, Δ and ρ, I strongly suggest you start with this earlier post.
Recap: The First-Generation Likoscope
In Phase One of the previous post of this series, we put your (impossible) zombie twin in a neuroscience lab, and we asked it to imagine a red triangle.
By declaring your twin to be a philosophical zombie, we banished phenomenal spice (Δ) from all consideration, but we kept the purely functional components (ρ) of brain activity. To the extent that logic permits, we took away anything that resembles an inner life, including all inner colours and experiential flavours, and anything else that exclusively exists from a first-person perspective.
If you think that leaves us with a normal human — that we didn’t really remove anything, and so we haven’t managed to create a zombie that is convincingly “dead inside” — that’s fine; I agree. But the conceptual rules of the last post still apply. We’ll do our best to respect the idea of a zombie. We’ll only grant the existence of functional, pseudo-interiority when and where logic absolutely demands it.
Somewhere in that combination of what we removed (Δ) and what we left behind (ρ) is whatever gives you your sense of having a colourful interior. As a non-hardist, I think that all the phenomenality you seem to ostend to when you imagine a red triangle is still there in the functional story (to the extent that it is there at all). I don’t think phenomenal spice exists; I don’t think the idea is even coherent; the idea that our heads house literal shapes or colours is beyond silly.
Moreover, I think it is inescapable that even the conceptual origins of “non-functional” spice must be buried in the functional story, as revealed by our ability and desire to talk about it — because, in humans and zombies alike, spice has no behavioural effects; it can’t underwrite abilities or desires; it can’t account for the phenomenon of hardism.
I concede only two things to the hardists: 1) some of the properties represented in the brain, like subjective colours, seem to lack any direct counterpart in physical reality; and 2) representations of those same properties cannot be derived by reading about the representational substrate and applying clever analytical techniques within cognition. Neither of these conceded difficulties is a significant issue if the only evidence of these properties arises within the cognitive system that represents them. Before we can get that argument underway, though, we need names for the different pieces of the puzzle, and the hardists have thrown everything into the same cauldron.
Nonetheless, in the previous post of this series, I at least tried to accept the idea that we had somehow delegitimised the zombie’s own cognitive perspective of its interior, reducing that interiority to bare functionality, considered in the blandest possible terms. I even took the extra precaution of banning mentalistic talk and all notions of intensionality — though the rules of zombie thought experiments meant that we were obliged to keep the functional analogues of those things. And, as alluded to in that previous post, something very like intensionality creeps back in anyway, no matter how hard we try to ban it.
A zombie’s neural activity does not need to be blessed with any special non-functional aboutness factor in order for intensionality-based language to be relevant. If we notionally remove the special human extras, but keep the functional aspects of cognition, we have kept the true causal sources of the publicly shared vocabulary that we all use for talking about intensionality. Even if we change the words, adding asterisks and “pseudos” or rephrasing entirely, the zombie’s neural activities can be conveniently treated as though they were about things.
Neural models, even in a zombie, can be interpreted as having content.
In Phase Two, to make this point more vivid, we built a machine that performs a likoscopic examination of the zombie’s brain. In crude terms, it tells us what it is “like” for the zombie to imagine a visual scene, using a purely functional sense of likeness. The zombie was asked to think of a red triangle, and a matching triangle appeared on our likoscopic screen.
The likoscope might seem far-fetched, but mind-reading of this nature has always been broadly possible in principle, and recently it has become possible in practice, at least for images as simple as a triangle.
In this post and the next, we’ll start to look at some of the philosophical fine print that comes with such technology, and also consider why your zombie twin might profess to have spice, despite lacking it.
This is important, because its reasons for saying things are the same as yours.
If you are a hardist, your notional zombie twin is busy making a fool of you.
Phase Three
For demonstration purposes, we’ll split the likoscopic lab into three rooms.
In the first room, we sit your zombie twin in a chair, and we attach a brain scanner. We don’t ask it to *imagine anything.
In the second room, we set up our likoscope; it takes brain data from your twin and renders it on the screen as a likograph. We’ve streamlined the process, and we can obtain an optimal match using simulated versions of the zombie brain. If everything is working properly, the screen shows the represented contents of the zombie’s idle brain activity while it is *waiting for the experiment to start. We see scenes from earlier in the day, some idle *daydreams; we see *flashbacks to a movie the zombie *watched the night before, domestic details, and so on, perhaps a few things we’d prefer not to have seen, all jumbled in a kaleidoscope of undirected *thought.
In the third room, we place a blue square, roughly painted on wood, and set in a frame. We mount it on the wall, and we ask the zombie to *think about it.
A blue square immediately appears on the likograph.
If we were allowed to use the language of intensionality, we would say that the zombie’s pseudo-thoughts were about the blue square in the third room. If you, a human, were in the first room, that’s exactly what we would say. But you’re not, so we can’t.
Functionally, the third room doesn’t add anything to this set-up. There are no causal links between that room and the other two rooms. It would be a different matter if the zombie were looking directly at the hanging wooden square, creating the opportunity to calibrate its models with reality, but in this set-up the third room might as well be a bunker: it is closed and windowless. The walls are so thick we could set off a grenade in there and the likoscopic screen in the second room would not even wobble. The closest thing to a causal link is that we told the zombie there was a blue square in the third room, but we might not have followed through, perhaps because we were in a rush, or we were out of blue paint. We could have lied. The likoscope would be completely unaffected.
We don’t need blueness in the third room to turn on blue pixels on the likoscope.
So why build the third room, if it can play no relevant role in the experiment?
As you might have guessed, that third room is there solely to prove a philosophical point, not to advance science, because I want to consider what it means to represent something.
Scientifically, the brain is often considered to be an organ that has the primary role of modelling an individual’s situation and interactions with the surrounding world, in order to optimise behaviour. According to this view, the brain doesn’t house the properties we find on introspection; it merely represents them.
When this representational relationship is forgotten, it creates an unnecessary sense of mystery. For instance, the philosopher McGinn famously asked a question that was the forerunner of the Hard Problem: “How can technicolour phenomenology arise from soggy grey matter?”
The puzzle doesn’t sound half as mysterious when it is expressed in representational terms: “How does soggy grey matter represent technicolour phenomenology?”
That sounds more like an engineering question than an unsolvable enigma.
Obviously, there is widespread resistance to the demystifying potential of representational approaches to consciousness. Some of the resistance is based on the difficulties brains face when attempting Jacksonian derivation. Some of the resistance is based on the impression that our neural representations of things are more impressive to us, subjectively, than might have been reasonably expected under physicalism. Some of the resistance is based on the idea that zombies could represent things, too, and still be zombies.
This is the primary hardist complaint: when we start with the physical brain, attempt Jacksonian derivation, and then fail, we end up with something resembling a zombie.
As a non-hardist, I think we can concede all this, except that I would insist on an important point of pedantry: when we start with a description of a human brain, we end up with a description of a zombie. (When we start with an actual human brain, as far as we know, we end up with a human. The critical difference between description and actuality is a surprisingly neglected aspect of this puzzle.) At any rate, philosophical zombies are a natural conceptual match for the objective, scientific view of the human brain. The zombie’s interior is dead and bland in the sense that all the colours and other flavours in its head are only represented, not actual — but the same is true of our own interiors.
If a zombie is defined as what results when we consider the brain’s internal mental properties to be merely represented, then we are all zombies.
But that’s not how zombies are defined; they are defined as lacking the extra phenomenal richness that humans possess… even though that extra richness must itself be represented and functional, because we can talk about it (and so can zombies), which makes it difficult to point to any legitimate extra.
A hardist zombie turns out to be a being just like us, with exactly the same internal representations, but (insists the hardist) where our representations of our own interiors are self-justifying, and legitimate, theirs mysteriously miss the mark, remaining unvindicated, delusory.
Representational issues, then, are at the very core of where hardists and non-hardists part ways.
Hence the third room.
Most philosophical discussions about representation in cognitive systems are concerned with the sort of intensional relationship that would exist between a human in the first room and the blue square in the third room. If we asked a human subject to think about the blue square in the third room, that would set up an aboutness relationship between the first and third rooms.
In some extreme cases, representationalist views of cognition have implied that the perceptual content in a human mind derives solely from this sort of aboutness relationship — though, in such discussions, the representational link is usually created through direct sensation, not through untested promises that some thing is in some distant room.
These naïve views of representationalism can sound plausible. Brains represent things in the world; science tells us that most of the properties we seem to find within the brain are not actually there inside the skull but outside, in the world; it must be the relationship between brain models and their targets that determines what it seems like inside a brain. In short, neurons can’t be blue or square or painful, so those properties must belong in the world, and they are merely misattributed to the brain.
There is some initial appeal in this logic; there is nothing blue or square inside the skull when a human brain thinks about a blue square, so any real blueness or squareness must lie elsewhere.
Several philosophers have devised clever thought experiments to argue that this can’t be the case. For instance, one scenario suggested by Ned Block involves the application of a colour transformation to the entire planet. Corresponding counter-transforming goggles are worn by an unsuspecting participant of the thought experiment. The goggle-wearer looks at the colour of the sea. It is actually yellow, but the observer is tricked by the goggles into thinking the sea is blue. (And here we might ask a misguided question: What colour are the observer’s thoughts of the sea, blue or yellow?)
In another scenario, an entire human springs into being by an unlikely coagulation of atoms in a swamp. Immediately after coming into being, swamp-man might think about the blueness of the sea, not having had any causal interaction with the sea — in which case, where has the blue content of their thought come from? This unlikely event could have happened in a world with no seas, or yellow seas, or no blueness. So, what colour are the swamp-man’s thoughts of the sea?
David Papineau raises another concern, pointing out that there is often a significant time delay between an event targeted by a perception and the occurrence of the perception.
Some perceived facts ceased to exist long before they were perceived. In 1604 Kepler’s Supernova was visible to the naked eye in daytime for over three weeks. However, this massive explosion had in fact occurred at least 13,000 years earlier. This seems inconsistent with the idea that sensory consciousness of a supernova is constituted by a perceptual relation to the explosion itself. How can my sensory consciousness, which is here and now, be constituted by my bearing some relation to a long past event?
Papineau, David. The Metaphysics of Sensory Experience, 2021
Even when a brain is working well, interactions between neural models and their targeted contents can be fickle, or indirect, or delayed. If we add hallucinations and illusions, or the everyday faculties of imagination, the relationship begins to break down completely.
The conclusion usually reached after considering these issues is that the concept of representation is just not up to the task of putting blueness and other qualitative properties in the head. The idea of world-directed representationalism as an account of the brain’s interiority is then put aside as an idea that can’t stand scrutiny, leading to a general distrust of representationalist approaches to consciousness.
In our current experimental set-up, we can make the same basic point by noting that, if we tell your zombie twin to *think about the blue square in the third room, we don’t have to follow through and actually put a blue square in the third room. It makes no difference whatsoever. We could save time and effort by leaving the room empty, or just keeping whatever was there from earlier iterations of the experiment. There could be a red triangle hanging on the wall, and it would still make no difference to what showed up on the likoscope, as long as the zombie was led to “*believe” (in a purely functional sense) that we had hung a blue square.
It obviously matters what the zombie “thinks about”, in one sense of aboutness, because we get different results when we ask it to *think about a red triangle and a blue square, but that sense of aboutness is not usefully captured in terms of an intensional arrow successfully linking the zombie’s brain in the first room to a target in the third room. The much discussed philosophical notion of intensionality, captured here as a third-room-aboutness relationship, turns out to be largely irrelevant to all matters likoscopic.
If that’s the sort of intensionality the zombie lacks, then it is a lack of no particular import. Of much greater relevance are the functional relationships in the first two rooms.
To find a form of representationalism that matters, we will need to distinguish between the implied content of neural models (shown on the likoscope in the second room), and the vindicating content of those models (displayed without effect in the third room).
That is, we can consider representational views of cognition in a second-room, brain-focussed sense (B-representation), or in a third-room, world-focussed sense (W-representation).
Philosophers have successfully argued that W-representationalism is dead.
To which I say: Long live B-representationalism!
On this occasion, let’s stipulate that we did actually hang a blue square in the third room, because we wanted to introduce some new vocabulary.
There are three notional square-related entities in this set-up, one for each room (plus another one I will largely ignore). In what follows, I’ll use Latin to pin these three concepts in place, distinguishing these three specific jargon meanings from everyday notions of squares, and thoughts, and thoughts of squares in thoughts, which all tend to merge together.
Importantly, all three of these terms have functional interpretations, so it doesn’t matter that our test subject is a zombie. In fact, its zombiehood lets us cast aside some distractions.
In the first room, the neural model in the zombie’s head can be known as the quadratum substratum, or “substrate square”, because it solely consists of the neural substrate for the representation of a square. It is, of course, not actually square. If we were allowed to use the language of intensionality, we would say the q. substratum is about a square. Given that such language is banned, we must instead say that it is a functional model of a square.
In the second room, the likoscopic square on the screen can be thought of as a depiction of the implicit, modelled square in the first room. We can call this the quadratum ostensum, or the “ostended square” (or the “pointed-at square”). Note that this term, the q. ostensum, does not refer to the literal square on the likograph, but to the zombie’s neural model interpreted in terms of its content.
At a first approximation, the q. ostensum is the virtual conceptual entity that the zombie ostends to when it *tries to *think about a blue square, but this summary is rather too quick. The term “virtual” should not be taken to mean that the q. ostensum is not real — indeed, we will need to do much more work here to specify the ways in which it is real and the ways in which it is not. A key aspect of the q. ostensum is that it is by definition just the q. substratum approached from a more sympathetic viewpoint, not a whole new ontological entity — and so it is not really blue or square.
Importantly, when we talk about the q. ostensum, the square being “ostended” to is not the one the zombie was asked to think about. The zombie might *believe that it is *pointing to a representation of what is in the third room, but we know better. If the zombie *assumes, wrongly, that the wooden square is a shade of blue close to turquoise, for instance, and it *points to a model of that turquoise square within its pseudo-mind, the likograph in the second room will provide a better colour match to its model than to the notional target in the third room.
In the third room, there is the blue square that the zombie was asked to *think about, the one that would be linked to the zombie by the traditional aboutness relationship if intensionality had not been banished. This square is outside the zombie’s head, and it provides no important functional insights into the zombie, so we will call it the quadratum externum, or “external square”.
In purely functional terms, we can say that the quadratum externum is the entity that potentially vindicates the content revealed by the likoscope. In cases of veridical modelling, it matches the quadratum ostensum quite closely. The q. externum can also exist as pure potentiality, in which case we will describe it as fictional. It is whatever would have to exist in the third room to make the zombie’s model a reliable model of a real thing, and whether it does exist turns out to be largely irrelevant to what things are like in the zombie’s brain, except when there is a direct sensory link from externum to substratum.
We don’t need true intensionality for this notion of vindication to be in operation. If you were offered a million dollars whenever the shape displayed in the third room matched the likoscopic screen in the second room, it would be very clear to you when the zombie’s model was vindicated, banned intensionality be damned.
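The purely functional character of this vindication check can be caricatured in a few lines of code. This is a minimal toy sketch, and every name in it (`Shape`, `decode_likograph`, `vindicated`, and the encodings) is my own hypothetical stand-in, not a claim about how a real likoscope would work. The point it illustrates is structural: the likograph is computed from the brain data alone, and the third room’s contents only ever enter a separate, optional comparison.

```python
# Toy model of the three quadrata. All names and encodings are
# hypothetical stand-ins for the thought experiment's machinery.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Shape:
    form: str  # e.g. "square", "triangle"
    hue: str   # e.g. "blue", "red"

def decode_likograph(substratum: dict) -> Shape:
    """Stand-in for the likoscope: translate the neural model
    (q. substratum) into its implied content (q. ostensum).
    Note that the third room appears nowhere in this function."""
    return Shape(substratum["modelled_form"], substratum["modelled_hue"])

def vindicated(ostensum: Shape, externum: Optional[Shape]) -> bool:
    """Purely functional 'vindication': does the third room happen to
    match the screen? The likograph itself is unaffected by the answer."""
    return externum is not None and ostensum == externum

# The zombie *believes* we hung a blue square; its neural model says so.
substratum = {"modelled_form": "square", "modelled_hue": "blue"}
ostensum = decode_likograph(substratum)

# What we actually hung (or didn't) changes vindication, not the screen.
print(vindicated(ostensum, Shape("square", "blue")))   # True: veridical
print(vindicated(ostensum, Shape("triangle", "red")))  # False: we lied
print(vindicated(ostensum, None))                      # False: empty room
```

The million-dollar bet is just `vindicated` with a payout attached: a comparison anyone can run from the outside, with no appeal to intensionality.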
There is a fourth square involved in this set-up, but it is not one that I will talk about except to dismiss it: the literal blue square on the likoscopic screen. This square doesn’t need a fancy Latin name, because it is no more than a helpful visualisation of the quadratum ostensum. We can turn the screen off, and the ostensum is still there, functionally implicit within the zombie’s neural model.
Of the three named square-like entities in this set-up, the ontology of two of them is reasonably straightforward.
The q. substratum is a bunch of neurons doing things in the dark and, because our subject is a zombie, we can all agree that its ontological story ends there, with neurons in the dark and no blue square.
The q. externum is just a wooden square, painted blue. Some readers might object that blueness itself is a controversial property, but here the blueness can be taken to mean that the painted surface objectively reflects photons within a restricted wavelength band. We will need to return to the controversies of subjective colour later.
The quadratum ostensum is the trickiest of the three. What status does it have? Is it real or not?
By definitional decree, I propose that this conceptual entity is real, and that it exists even when the likoscope is turned off. The q. ostensum would retain its essential nature even if the likoscope had not been invented — but in that case the q. ostensum would only be judged blue and square by a single observer (a judgement we might reject in this case because it comes from a zombie).
Note that, because the q. ostensum seems to have properties distinct from the q. substratum, we are at serious risk of getting ontologically confused. We seem to need two ontological entities to justify the two radically different views of the zombie’s model, but we only have one.
Indeed, we are within a hair’s breadth of committing what has been called the phenomenological fallacy. This is a term coined by Place (1956), quoted here as it appears in Thomas Metzinger (2003):
This logical mistake, which I shall refer to as the ‘phenomenological fallacy,’ is the mistake of supposing that when the subject describes his experience, when he describes how things look, sound, smell, taste, or feel to him, he is describing the literal properties of objects and events on a peculiar sort of internal cinema or television screen, usually referred to in the modern psychological literature as the ‘phenomenal field.’ (Metzinger, 2003.)
The q. ostensum, be assured, stops just short of this fallacy — you should trim back its ontological aspirations to fit physical reality. Call it the q. substratum viewed through the lens of a cognitive illusion, if you like.
Or viewed through a likoscope.
At this point, there are at least two different linguistic policies we could take about the reality of the q. ostensum, and we need to make sure we are not accidentally extending or curtailing our ontology or committing Place’s fallacy by confusing our linguistic choices with ontological judgements. We could dismiss the blueness and squareness of the q. ostensum as pure fiction and conclude that the q. ostensum itself is pure fiction. Or (my preferred option) we could merely say that the q. ostensum has been portrayed in a misleading fashion; it’s not really blue, and it’s not really square; it’s real, but it’s actually a conglomeration of neurons with complex functionality that includes representations of blueness and squareness.
Either of these policies would be defensible; what is important is that the term is used consistently. My intent is for the q. ostensum to capture something real; it is the real neural model approached from a representationally sympathetic perspective, whereas the q. substratum is the same model approached as a network of uninterpreted, untranslated neurons.
Conceptually, the q. ostensum is not at all like the q. substratum, but, by definitional decree, they ultimately provide different perspectives on the same ontological entity. When a zombie pseudo-imagines a square, there is a real, objective, functional sense in which that neural activity constitutes a model of a square. The q. ostensum gives us permission to think about that model in square-focussed terms. (Because, let’s be honest, what other choice do we have? How many of you were looking up neuroanatomical texts to know what to picture?)
The likoscope merely offers a physical demonstration of this pre-existing functional situation.
This is acceptable, as long as we make this linguistic decision knowingly, keeping in mind that the shape and hue of the q. ostensum are mere potentialities.
To think about any of this without making an ontological error is challenging.
On one hand, we are justified in remaining somewhat skeptical about the ontological status of the q. ostensum. We can only think of it in terms of blueness and squareness, but we don’t believe it is actually square or blue. After an objectively defined translation process, the implicit squareness of the q. ostensum would be real enough to display on a likoscopic screen — but only because the translation process creates the literal square. (The blueness will require a more complex ontological account, the subject of a future post.) When we turn on the likoscope, we are not accurately displaying something that had existed in the displayed form before we turned the device on; the model existed, sure, but the device can only display its implied shape and hue, not what the q. ostensum is literally like. That translation process should not be considered as proof that there was some mental square in the head all along, something four-sided and blue. To interpret the likoscope in that way would be to commit the phenomenological fallacy that we are trying so hard to avoid.
On the other hand, if the zombie looked through a window between the first and second rooms to see the likoscopic screen, it would probably say that, yes, that’s what its imagined square was “like”. Exactly like that! Great job! It might even add that its imagined square had been just like that before we turned on the screen.
What the neural model is “like” in the translated, interpreted sense is not dependent on there actually being any physical screen to show the likeness, so the neural model is “like” its implicit content even when we refrain from a literal display of that content.
The likoscope is therefore displaying something real; it is just not displaying that real thing accurately. What it is showing is what that real thing is “like” from a particular cognitive perspective. The machine is engaging in ontological promotion, elevating implicit content into reality, but the q. ostensum can nonetheless be considered as the genuine entity that was capable of such device-assisted promotion. Compared to the q. substratum, it adds an indexical element that effectively means “as interpreted by the surrounding cognitive system”.
This is, admittedly, a hazardous conception of “like”, because it jumps representational levels — but it is a form of likeness that is highly relevant to the puzzles of consciousness, and this meaning of “like” is widely in use. In fact, I propose that this is the hazardous sense of “like” embedded within the popular term “what-it’s-likeness”, which is often used as a synonym for phenomenal consciousness, and all the synonyms in the cluster inherit the same conceptual hazard.
In a strict sense, from a third-person perspective, there is no real justification for judging the zombie’s q. substratum to be “like” the q. ostensum as it is depicted on the screen. One is a complex collection of neurons undergoing chemical fluxes inside a dark skull; the other seems to be a bright blue square. They could not be more different.
So why would the zombie say that one was like the other?
It is common to show a complete lack of interest in the inner workings of zombies. It’s all just a void in there. But we should be interested in why your zombie twin says that a mere neural model is “like” the screen square, because its cognitive processes are isomorphic to your own.
In this scenario, because we are dealing with a zombie, there should be no ontological mystery involved. The linguistic issues are delicately balanced, but we fully understand the source of the “likeness” between the first and second rooms. The similarity is only there because complex circuitry has forced a match between two conceptual views that are actually very different.
If the zombie said that the screen square and its neural model were “like” each other, it would necessarily be using the term “like” in a very loose fashion that glossed over the need for a complex translation step from model to actual. If it *appealed to a literal resemblance in terms of shape and hue, it would be *forgetting that the blueness was added to the scene artificially, via activation of real screen pixels and the emission of real photons of the requisite wavelength. It would be *ignoring the fact that the emitted photons from the likograph were striking its retinas, activating the circuits responsible for colour processing, edge detection and so on, and eventually leading to a functional perception of the screen square. Only then would the comparison take place: in the dark, in a neural domain where the “hue” and “shape” of the “square” were only present in a faux, representational sense.
The *ease with which a match is *declared, and the simplicity of the humble square on the likograph, gloss over massive hidden complexity.
The outbound part of this complex comparative loop depends on advanced neural decoding techniques, the ones we patented in our invention of the likoscope. The inbound part ends with conversion of the screen square back into something functionally like the original neural model, using complex perceptual circuits that evolved over millions of years.
The resulting match would inevitably be discussed in terms of shape and hue, but it would really consist of neural comparisons. The model causally downstream from the screen, when the zombie turns its eyes to the likograph, is like the model causally upstream from the screen, in a purely neural sense of likeness. There is no actual match in shape and hue; there is a functional match between two neural models, one created through pseudo-imaginative faculties, the other through direct vision.
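The comparative loop can also be sketched in code. This is a deliberately crude caricature, with arbitrary hypothetical encodings standing in for neural codes: pseudo-imagination produces one neural code, the likoscope’s outbound decoding renders it as pixels, and inbound perception converts those pixels back into a second neural code. The declared “likeness” is the purely neural comparison at the end, with no blueness or squareness anywhere in the loop.

```python
# Toy caricature of the comparative loop. The encodings are arbitrary
# stand-ins; only the shape of the loop matters.

def imagine(concept: str) -> tuple:
    """Pseudo-imaginative faculties: produce a neural code (q. substratum)
    for a concept. The encoding here is a made-up placeholder."""
    return tuple(ord(c) % 7 for c in concept)

def decode_to_screen(model: tuple) -> str:
    """Outbound leg: the likoscope translates the neural code into
    'pixels' (here, an invertible toy rendering)."""
    return ",".join(str(x) for x in model)

def perceive(pixels: str) -> tuple:
    """Inbound leg: photons from the screen drive the viewer's visual
    circuits, ending in another neural code."""
    return tuple(int(x) for x in pixels.split(","))

upstream = imagine("blue square")                   # model behind the screen
downstream = perceive(decode_to_screen(upstream))   # model in front of it

# The declared "likeness" is this purely neural comparison in the dark:
print(downstream == upstream)
```

Nowhere in the loop does an actual shape or hue get compared with another actual shape or hue; the match is between two codes, one arriving via pseudo-imagination and one via vision.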
If the zombie of this experiment is your twin, and we run a parallel experiment with you in a second, identical likoscope lab, you will both be in behavioural and likoscopic synch. That means, if the zombie reports a likeness between what it imagined and what the likoscope shows (and why wouldn’t it?), you would also use the term “like” in this same loose fashion.
So why would you introspect, consider the q. substratum, and then turn to see the screen depiction of the q. ostensum, and say that one was “like” the other? Would you feel obliged to add some caveats? Would you pedantically insist that, no, the blue square on the screen was not like your imagined square, because your imagined square was actually a neural model? (If so, is that really how you talk about your own mental contents? Or is it an artificial stance you feel obliged to adopt because you are in the midst of a philosophical debate?)
“The screen is just like my imagined square,” says the zombie, despite having no coloured interior.
“The screen is just like my imagined square,” you say, imagining yourself to be reliably reporting how things seem to you.
But what do you mean? And how could you get to be right where your zombie’s words are, if not wrong, at least isomorphic with a false statement?
You aren’t directly responsible for what your zombie says, but, if we apply a strict accounting of the functional causes of your words, your reasons for saying anything are exactly the same as the zombie’s. (You are your zombie’s keeper.) If it says something we regard as indefensible and silly, that implies what you say is also logically indefensible. If the reasons for its utterances are unreliable, then your reasons are unreliable, too (even if we choose to consider it as an automaton that is incapable of truth or falsehood).
If you are a hardist, you might protest that you use the term “like” with more justification than your zombie twin because you really had some non-physical phenomenal blueness in your mind that has been functionally mimicked by the likoscopic screen. In that case, you would actually be declaring the match because of a colourless neural comparison taking place in the dark, just as in the zombie’s case, but you would be imagining that you were also privy to some deeper, more meaningful match in a mental space that let you compare real colours and real shapes (or their phenomenal equivalents), because you possessed some ill-defined mental instantiation of the blue square.
You might think that “real subjective blueness” is somehow like the screen blueness even without the complex translation in each direction.
To be frank, I think this idea is indefensible. We already know why the likoscopic screen shows pictures that are “like” your mind; we designed the likoscope that way on purely functional grounds. Furthermore, I propose that this complex, translated sense of likeness is the only one in operation even when a human sits in the likoscope lab. It is certainly the only sense of likeness that can motivate your speech or your conclusions, because non-functional extras can play no role. You can’t be privy to a comparison happening in a non-functional domain if that domain has been defined as having no causal effects on what you know and conclude.
If you are a non-hardist, like me, you might also use the term "like" and think of the match in terms of shape and hue — but with full acknowledgement of all the translation steps involved in the operation of the likoscope. In that case, this loose usage of language is entirely defensible; you are merely extending the linguistic scope of "like", and you are doing so knowingly. Things in our mind are "like" things in the world (once we apply the appropriate sensory processing on the way in and the appropriate world-directed inferences on the way out), and that's a very good arrangement. Our ancestors would have died long ago if they perceived neural models of sabre-tooth tigers as neural models. Luckily, they saw them as "like" tigers and ran away.
So, is the q. ostensum real or not?
There is no definitive answer, because the word "real" was not designed for these sorts of nuances. Any adjudication we make will establish a linguistic policy with regard to usage of the word "real", but it won't change the true nature of the q. ostensum, which we should understand already, without the adjudication. I am declaring the q. ostensum to be real, but only as a complex functional potentiality in a physical neural model; it is only "like" the screen square in an extended sense of "like". We could have reserved a similar term for the fiction of a blue square projected onto the phenomenal field, and then we would be talking about a non-existent entity, but we would still need a name for the most natural cause of utterances like, "The base of the square is horizontal." Non-existent blue squares can't cause statements of that nature.
One way of thinking about this situation is that the likoscope applies the functional inverse of an implicit sensory step that we routinely apply to the contents of brains whenever we think of them as containing minds. The language of introspection mimics the language of vision; the “spect” is there in the name, same as in “spectacles”. We imagine looking inside our minds and seeing what’s there. We also imagine that we point at things in there, and that we know what they’re like.
The likoscope merely operationalises this misleading metaphor, physically achieving the translation that we tend to infer anyway. It examines the contents of the zombie's spice-free pseudo-mind with the vision metaphor explicitly in operation, turning visual models into literal images, depicting the contents after applying a translational process that is the functional inverse of visual perception. The likoscope physically puts in a decoding step from a neural model in the dark to a visual equivalent.
We end up with a literal image that we can see on the screen, instead of a metaphorical pseudo-image that we imagine seeing. A misleading cognitive habit has been mimicked with technology.
Once the implicit square is rendered explicit on the screen, it can be subjected to a real sensory step. We can see it. The likoscope therefore reproduces, in reverse, the sensory step that is often implicitly added in these discussions, but rarely explicitly acknowledged.
Phase Four
This might seem like a lot of effort to expend on pinning down the status of an imaginary square in the brain of an impossible being.
Why bother?
The point, of course, is that the terminological distinction just considered for a blue square can be applied to all mental content. I propose that we can talk of the substratum, the ostensum and the externum without regard for whether it is a square or a tiger or a human mind that is being represented. The ostensum, in this collective sense, lines up with the closest functional equivalent to the hardist concept of "phenomenal consciousness"; it is the spice-free version, the one that does not break science. Where the Hard Problem is ill-posed because it chases after phenomenal consciousness conceptualised as what goes missing in a zombie, the challenge of explaining the spice-free ostensum casts off the ill-posed elements of Chalmers' framing and keeps the genuine puzzle of consciousness.
The great achievement of the likoscopic lab is finding and displaying the hidden ostensum within a purely physical brain, but it is important to note that this technical challenge is the inverse of the usual epistemic situation. Outside a neuroscience lab, we don’t see neurons and struggle to see contents; we only ever see our neural models via their ostensum, and we have no access to the substratum. Delicate nuanced debates about whether the ostensum is ontologically legitimate get pushed aside by the necessities of living within our own cognition. It will do us no good if we always try to talk about neural models as physical conglomerations of neurons. Imagine replacing every conversational reference to an idea with its untranslated substratum equivalent; we would quickly go insane.
Because all our cognition takes place within a representational system, there is permanent, irreconcilable tension between the most natural view we take of our neural models (a second-room ostensum perspective) and their ontological underpinnings (the substratum in the first room).
Having separate names for these different conceptual views will be particularly important when we turn to the vexed question of whether phenomenal consciousness is real, and what counts as a veridical view of our own interiority.
Consider two different scenarios in our three-room laboratory.
In the first scenario, a zombie is asked to think about a wooden blue square hanging in the third room, and we stay true to our word; the square is there. In the second scenario, we give the zombie the same briefing, but we leave the third room empty.
In both scenarios, the likoscope shows the same blue square.
We could say that the zombie’s neural model has the same implicit content in both scenarios (a second-room matter), but, in the second scenario, that content is not vindicated (a third room matter). Similarly, if we were asked questions about the content of the zombie’s neural model, such as “Is the square real?”, we would need to know which square we were talking about. Suppose someone dared to suggest that the square was illusory.
We would need to distinguish claims about the virtual nature of the q. ostensum (a second-room matter that does not differ across scenarios and about which philosophers can split hairs forever) from claims about whether the content of the zombie's model is vindicated (a third-room matter that differs markedly across scenarios, and could be settled by a quick peek).
In this pair of scenarios, with a wooden square as the notional target of the exercise, the best way of talking about the q. ostensum is already a subtle matter that language is poorly equipped to handle, and we might have some misgivings no matter how we adjudicate on the ontology of the virtual blue square. I would prefer to say that the q. ostensum is real, while conceding that its blueness and squareness have no direct ontological instantiation. The q. ostensum only gets to be real via a representational twist, by treating it as an interpreted view of the q. substratum.
By contrast, the presence or absence of the q. externum is entirely mundane. It’s made of wood. It’s there or it’s not. If it’s there, it is undeniably real, but it plays no role in the zombie’s pseudo-cognition anyway.
Hopefully, this pair of scenarios does not raise too much controversy, at least where the subject in the first room is a notional zombie and the target in the third room is a wooden shape. Nonetheless, it is important to draw the distinction between implied and vindicating content for mundane contents, like wooden squares, before we move on to more confusing targets.
When illusionists argue with their critics, is this a second-room or third-room dispute? Or a bit of both, thrown into a messy hybrid debate with poorly defined terms?
Suppose that, instead of hanging a wooden square in the third room, we tell the zombie that the third room contains a human, sitting on a chair, imagining a blue square.
We ask the zombie to think about that imagined square.
What will the likoscope show?
The subtle second-room issues surrounding the ontological status of the q. ostensum are still in operation, but now they affect what is supposed to be in the third room, and the uncertain ontological status of the square enters the model, to be modelled in turn and subjected to a new round of ambiguity. If there was already ontological uncertainty about the status of a wooden square imagined by a zombie, with a tendency for everyone to employ a loose conception of likeness, the situation is now dramatically more confusing.
What sort of entity will end up being modelled by the zombie?
If the zombie is a hardist, they will tend to *accept the implicit content of neural models as being real in some way that goes beyond the bland neural activity of the q. substratum. They will *believe that the human in the third room has access to a mental construct that is, in some way, genuinely blue. They will *think that there is a non-physical blue square in the third room, complete with a blueness quale that would be missing if the third room contained a zombie.
If the zombie is a non-hardist, they might *imagine something visually very similar, by envisaging the likoscopic potentiality that accompanies the human’s thoughts. But they would add several caveats.
Just as we had a quadratum substratum, a quadratum ostensum and a quadratum externum for the wooden square, we would now have a similar three-fold terminological split for the zombie’s modelling of the blueness quale: a quale substratum, a quale ostensum and a quale externum.
But it's even worse than that. The substratum in the third room has its own physical representation in the first room, its own likoscopic display, its own criteria for vindication in the third room. The confusing implicit ostensum in the third room has its own physical representation in the first room, its own likoscopic depiction, and so on. The externum in the third room, the actual human-imagined square, has only a marginal claim on existence, but is again subject to the same three-fold potentiality.
In this situation, if we are trying to work out what will appear on the likoscopic screen, it does not really matter whether there is really such a thing as a blueness quale, just as it did not matter whether there was really a wooden square.
What can we say about the blueness quale? It's made of spice, supposedly — something lacking in a zombie — but no one is sure what that is supposed to mean. You might believe it's there in the third room with the human, or it's not, depending on your personal metaphysics. But whether it is there or not plays no role in the zombie's cognition anyway. It's an irrelevant third-room matter. The third room is causally divorced from the experiment, and qualia are outside the nexus of physical causation anyway; they are imagined as being permanently side-lined in an ineffectual non-physical domain, providing vindication of hardist intuitions without ever influencing those intuitions.
If we are trying to predict what shows up on the likoscope in this scenario, we can put aside the vexed question of what a quale for blueness is supposed to be and whether it actually exists. That turns out to be irrelevant. All that matters is what the zombie pseudo-believes about the ontological status of minds and qualia, and what it *believes cannot have been influenced by qualia according to standard conceptions of zombies and qualia. Whether the human in the third room actually has qualia or not doesn’t matter. Given that the likoscope operates on functional principles, it doesn’t even matter whether it is you or your zombie twin in the first room.
Even if the blueness quale does not exist, and even if our test subject lacks qualia by virtue of being a zombie, the likoscope will be very likely to show a floating ethereal blue square.
If you and your zombie twin are hardists, the floating blue square will be accompanied by a notional footnote to the effect that the floating blue square is real in some way, and therefore a problem for physicalism. If you and your twin are non-hardists, the notional footnote will have a list of caveats of the sort discussed in this post: you are employing a loose conception of likeness; you are engaging in a sympathetic recreation of the imaginative act taking place in the third room, not attempting an accurate ontological survey; and so on.
Suppose we ask the zombie to imagine itself in the third room, thereby maximising the confusing recursion. Or suppose we just ask it to introspect and think about what it is like for itself to imagine a blue square, letting the first room double up as the target. We will get the same result as when there was supposed to be a human imaginative act as the target, because the zombie already pseudo-believes itself to be human. The zombie will not look inside itself and find a void; it will find a model that it considers a partial match to real blue squares and an even better match to ethereal blue squares of floating mentality.
The upshot of all this is that, when the zombie *considers what it is like to be itself when thinking about a blue square, the likoscope will confirm that it performs an imaginative act that is entirely insensitive to the actual ontological status of the quale for blueness. If it is a hardist, it is likely to take that imaginative act as proof that it really has detected the quale for blueness. It’s there, and it’s blue, and the likoscope even confirms it. The same representational slippage that loosened its usage of “like” will also loosen its standard of proof.
If we were allowed to use mentalistic language, we would say that the zombie knows itself to be human, and it has introspective proof of its humanity.
In place of such language, we can only point to the likoscope.
In the previous post in this series, I suggested that an imaginary shape in a zombie’s brain, when imagined by that zombie to be what goes missing in zombies, has at least three claims on immateriality or non-physicality.
Those three claims can be reconsidered with imaginary blue squares, putting our new terminology to work.
1. The Q. Substratum Exhibits the Non-physicality of Mere Patternhood.
When a neural model of a square comes into being, nothing new is created, and no mass is gained; the model is a pattern, not a thing; it is a rearrangement of neural activity, not a physical addition. (Life is somewhat similar, rearranging matter rather than being its own separate thing.)
2. The Q. Ostensum Exhibits the Non-physicality of Representation.
The square that the model pseudo-represents has no actual existence within the brain; it is only a representation (or a pseudo-representation, or a functional feature that exists as a likoscopic potentiality). This would be the case even if the zombie’s *thought was inspired by an actual physical square, or had some claims on being about a real square, because removing the vindicating target makes no difference. (Books do something similar.)
3. The Q. Externum Exhibits Represented Non-physicality.
If the square in the third room is imagined as being made of phenomenal spice, rather than, say, wood, then it is also non-physical within the context of the zombie's ontological model of reality. The blue square is portrayed as being constituted by an immaterial essence, spice (or it is portrayed as being a virtual, not-quite-there potentiality just a little more real than the q. ostensum). The square and its blueness therefore have a represented non-physicality. Not only is the square not backed up by a literal blue square; it is represented as not being backable by anything in physical ontology at all. (Books about ghosts do something similar.)
To this list of non-physical features, we will eventually need to add a cognitive account of the square's pseudo-epiphenomenality and pseudo-irreducibility, which will strengthen the square's apparent claim on non-physicality. These issues will be discussed in future posts, but each of the three types of non-physicality already considered comes with its own associated form of conceptual pseudo-epiphenomenalism. Pseudo-irreducibility largely reflects the difficulty of Jacksonian derivation — which essentially consists of trying to form a likoscopic matching loop with a broken screen.
Finally, if we are hardists, we will need to posit a non-physical or non-functional domain to house the imaginary blue square (or at least the quale for blueness), extending the idea of non-physicality in some additional way that is not covered by these other features. We will need to believe that, when a human is asked to imagine the inner blueness that a zombie lacks, the resulting quale externum is real and non-physical; the square and its blueness have a genuine non-physical existence that goes beyond merely represented non-physicality. We will need to think that the quale for blueness has genuine irreducibility and epiphenomenality, rather than cognitive features that merely mimic these things.
If we are non-hardists, by contrast, we will draw a line at this point, and declare this last posited form of non-physicality an unwarranted extrapolation from the forms of non-physicality already listed. We will argue that we already have enough to account for everything the physical brain claims, and for everything that shows up on the likoscope; we even have several legitimate reasons to condone talk about the non-physicality of thoughts. The not-quite-there status of the ostensum, its stark contrast with the substratum, and the epistemic barriers to Jacksonian derivation are enough to account for our anti-physicalist intuitions. We don't need to extend the list.
When a zombie or a human pictures the quale for blueness as non-physical in this final sense, what would count as vindicating content and how would we ever know it existed? Adding an ontologically special non-physical extra to the third room, when it cannot play any part in the cognitive process anyway, adds no explanatory utility — and the conceptual addition necessarily comes from an overreaction to the other forms of non-physicality already considered, none of which are truly at odds with physicalism.
Why necessarily? Because the posited extra, spice, cannot motivate any concept of the posited extra. It has no effects on cognition.
Spice and the quale externum are closely related; for now they can be viewed as the same basic conceptual entity. Later, I will argue that the central hardist notion of "experience" or "phenomenal consciousness", conceptualised as spice, comes with some specific theoretical commitments, such as a belief in the logical possibility of zombies, that are not there in a naïve acceptance of a vindicating entity that matches the ostensum. (When a child thinks that the sea is blue, they are effectively indulging a folk-psychological belief in the quale externum for blueness; the blueness is out there, in the sea. When a hardist philosopher thinks a similar thought, it is theory-laden and comes with an implicit acceptance of zombies and irreducibility and a challenge to physicalism.) Nonetheless, spice is primarily a conceptual descendant of the quale externum, which is itself a conceptual descendant of the quale ostensum.
Spice and the externum are both built on the false assumption that we need a vindicating target for our conceptions of our own interiors; the untranslated neural substrate of our thoughts is clearly inadequate to fill that role, so there has to be something real that is like our representations in the same promoted manner as shown on the likoscope.
Unfortunately for the hardist, the presence or absence of the vindicating target makes no difference to anything, and the consideration of the likoscopic screen suggests that an implicit target is already ample to explain what things are “like”.
If God forgot to create qualia-spice in whatever passes as the third room of our universe, we would literally not notice, according to the hardists themselves.
As noted in previous posts, many discussions about phenomenal consciousness get mired in confusion about whether they are talking about the phenomena found on introspection (Σ) or the posited non-functional extra missing in zombies. This post gives us new terminology to make a similar point, but we are ultimately exploring the complexities of the functional story. In one previous post, it was suggested that, within the functional story of the brain (ρ), we would need to find a representation of spice (Δ) — there would have to be an aspect of the functional story that gets projected into a non-physical domain and inspires the hardist notion of spice.
We have not yet told a full conceptual origin story for spice, but we can at least see the first hints of how a purely physical brain might come to “believe” in non-physical qualia.
Hopefully, it should also be clear by now how pointless it is to discuss whether “phenomenal consciousness” is real, unless we know which notional room we are talking about, how we have defined phenomenal consciousness, and how broadly we are interpreting what is necessary for something to be considered real.
Is phenomenal consciousness real?
The question is hopelessly vague (and we have not yet considered the difference between qualia and consciousness-the-container) but, when it comes to qualia, I broadly propose a different answer for each room in our current set-up.
For any elemental mental properties that we might be tempted to call “qualia”, I suggest we should be realists about the qualia substrata, virtualists about the qualia ostensa, and eliminativists about the qualia externa.
To be continued…
PostScript
If you feel like commenting, potential discussion points include 1) the difference between likoscopic derivation of qualia and Jacksonian derivation of qualia, and 2) the different forms of pseudo-epiphenomenalism.
You've now in a couple of places made one of your main arguments: if we think that experience includes some special stuff (spice), we do so for entirely functional reasons. That is because the functioning of neurons fully explains verbal reports, written descriptions, and even private thoughts about spice. We can therefore see that the source of spice isn't spice itself, but something about the wiring in our brains. This is illustrated by the idea of a zombie coming up with the same verbal reports, written descriptions, etc, even though spice cannot be the source of these descriptions for a zombie by definition of spice. Spice is thus an illusion, although illusion is perhaps not the best term to throw around in consciousness discussions. We still need to decide where this idea of spice comes from, and you've started to introduce the terminology to get there (Q. Ostensum being important I assume). But even before we see where spice comes from, we see spice is not real. This is a very strong argument.
Let me try to respond as a dualist. However keep in mind I'm not a 100% dualist, and I don't necessarily fully believe in this argument I'm about to give.
I'm a math guy, so here is a math analogy. Euclid defined lines as "breadthless length" and a point as "that which has no part", but these statements don't actually mean anything. In more modern treatments, we just leave these terms as undefined. A zombie can run through a geometry proof that if a triangle has two equal angles, they also have two equal sides. But all of that will be meaningless although correct symbol manipulation. Only when a human runs through the same proof do we get a proof that relates to real things that that human has experienced, because a human knows what lines and points actually are.
Similarly, if a zombie gives an eloquent argument about the magical nature of his own conscious experience, the argument isn't wrong per se but is simply meaningless because it's relying on undefined terms. When a human gives the same argument, it is correct because those same undefined terms take on meaning, not because the human is able to give a definition, but the human simply knows what they are through direct experience.
So you could say the source of spice for a zombie is an unfounded assumption, and for whatever reason it's a founded assumption on the part of the human.
I'm sorry but before I read it properly, what's a Hardist again? Can you put it in a sentence or two? My memory is shit and I have ADHD and I'm a slow reader but this seems really cool