Phase One
Suppose we asked your notional zombie twin to imagine a red triangle.
And, just to be clear, we are talking about philosophical zombies, creatures that are physical duplicates of their human counterparts, but lacking any qualitative interiors. If they were truly possible, then physicalism would be in trouble, for reasons discussed elsewhere.
Philosophical zombies can’t exist, you say? In that case, start by assuming that we are talking about a bare physical duplicate of you, with no special extras. But I want you to lean into every hardist instinct you can summon for the duration of this post.
(In the language of the last post, this is not a Scenario 3 situation, where you get to keep your definition of consciousness while you envisage a Zombie World; this is a Scenario 1 situation: you must already have a definition that makes zombies possible. You must believe in phenomenal spice, or pretend to. If that term means nothing, start here. If you don’t know what a hardist is, it’s someone who believes in the framing of the Hard Problem of Consciousness.)
If you are totally new to the idea of philosophical zombies, I suggest you start here, instead.
Just in case you need a reminder, this is what you must believe: humans have a special phenomenal cloud that lives inside their brain or floats in a domain linked to the physical brain; it adds awareness; it houses colours; it provides meaning to words; it gives humans a moral worth that would be lacking in a merely chemical machine. It feels like something to be inside a brain if the phenomenal cloud is there; it ceases to feel like anything if the cloud floats away or stops being generated by the special quantum processes in our microtubules, or the diffuse panpsychic hum in every atom. Or whatever. It doesn’t really matter how you imagine spice comes to exist in humans because your twin doesn’t have any spice. It’s dead inside.
Whenever the zombie pseudo-introspects and pseudo-finds something in its pseudo-mind, and it talks about spice-related ideas like qualia, we will remind ourselves that it is not really ostending to anything, which would imply that it has things in its head to point at. It is merely forming neural models — temporary coalitions of neurons that are not even being observed. All mental talk in relation to your zombie twin is in error, when we produce such talk. When the zombie uses mental terms, it is not even an error. Those are not even “terms”. A muscle twitch can’t be true or false. A seizure can’t be true or false. A flurry of zombie neural activity occurs in a semantic void.
What we gain, in this hypothetical situation, is the confidence that everything the zombie says and everything it seems to believe must have a functional explanation. When we ask it to imagine a red triangle, those words are just sounds. Its brain houses no magic; there is no phenomenal redness floating in its mind, and there is no Cartesian screen to display triangles.
Having cleaved off most of the philosophically challenging aspects of mentality, we can think about the zombie’s model of the red triangle without having to worry about a Hard Problem. We can completely dismiss the idea of any mysterious processes adding elusive phenomenal richness to the zombie’s pseudo-imaginative act, and instead consider some basic features of representation in a domain that seems ontologically simple.
Your twin’s brain is exactly what it looks like: a grey lump of flesh arranged such that it achieves sophisticated computation, without giving rise to any special extras, feeling nothing, and viewed by us with as little empathy as a rational person can sustain.
Suppose that we run a neuroscience laboratory. We take the zombie’s grey lump of brain-flesh (still housed in a body to keep it alive — we’re not total ghouls) and we ask it to imagine a red triangle.
It can’t do that, of course, because it is a full-blown hardist zombie. But, if the zombie is cooperative (in a strictly behavioural sense), the instruction to imagine a red triangle will trigger a set of neural events that will lead to the creation of an internal neural model of a red triangle, and that triangle is what I want to talk about in this post and the next.
The triangle I’ve just said we’re not allowed to talk about in mentalistic terms.
In particular, I want to know if it is like anything.
If you’re diligently trying to comply with the rules of this thought experiment, the answer to whether the triangle is like anything should be a simple “no”. Your zombie twin lacks spice, by definition. That means it has no colours in its head, and there is no mental triangle in there. Its neural model is not like a triangle. It’s nothing but neurons and other supporting cells, doing their physiological thing.
But perhaps I need to be more careful?
Perhaps I’m the one breaking the rules, here, already straying from a hardist conception of a zombie, simply by suggesting that any of its neural activities constitute a model.
A model might be seen as having unwanted mentalistic implications. We need to ditch those.
To make sure we are truly imagining your twin as a hardist would, I’d like to indulge a common hardist conceit: let’s stipulate that your twin completely lacks intentionality — the capacity to hold cognitive states that represent things. None of its neural activity should be thought of as having any meaning, none of it should be conceptualised in mental terms, and none of it is about anything.
Now, intentionality is a jargon term that is defined differently by various philosophers, and people can spend entire careers exploring its many puzzles. Personally, I am a little skeptical about some of the uses of this concept, particularly in relation to zombies, but I want to put those objections to one side. We can consider intentionality in the broadest possible terms. Anything at all that can be scooped up by the term “intentionality” can be imagined as missing in your twin, except where the merely physical processes of the zombie’s brain create features that are, functionally at least, a little bit like an ersatz form of pseudo-intentionality, a lame pseudo-aboutness that goes entirely unobserved in the zombie’s head.
The Stanford Encyclopedia of Philosophy has an entry on intentionality that begins like this:
In philosophy, intentionality is the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs. To say of an individual’s mental states that they have intentionality is to say that they are mental representations or that they have contents. Furthermore, to the extent that a speaker utters words from some natural language or draws pictures or symbols from a formal language for the purpose of conveying to others the contents of her mental states, these artifacts used by a speaker too have contents or intentionality. ‘Intentionality’ is a philosopher’s word: ever since the idea, if not the word itself, was introduced into philosophy by Franz Brentano in the last quarter of the nineteenth century, it has been used to refer to the puzzles of representation, all of which lie at the interface between the philosophy of mind and the philosophy of language. A picture of a dog, a proper name (e.g., ‘Fido’), the common noun ‘dog’ or the concept expressed by the word can mean, represent, or stand for, one or several hairy barking creatures. (https://plato.stanford.edu/entries/intentionality/ )
Your zombie twin, let us suppose, has none of that.
If we ask your zombie twin to fetch a beer from the fridge and it comes back with a beer, that’s a fortunate fact about the chemical machinery inside its head, making it a useful beer-fetcher, but it’s just a machine, and we must assume it has never entertained a thought about beer. En route to the fridge, it didn’t hold a state in its head that was about a beer and it didn’t think about you as it pseudo-chose your favourite brand; it didn’t think about itself or its own taste preferences as it reached to the back of the fridge to get a beer can of the sort it usually drinks, often expressing pseudo-disgust for your brand between sips. All that we can say is that some causal effects are complicated, and the sounds you made when you said “Is it beer o’clock yet?” brought a beer to your hand via the meaningless intermediary of some complex but contentless neural firings in its head.
Similarly, you might be tempted to think that the neural model it creates in response to the sounds “try-ang-gul” is about a triangle, but we must assume that this is not the case. Aboutness is banned. Write it on your hand if you think you might forget.
Of course, it’s not at all clear to me that we can just reach in and remove intentionality like this, but for this post I am going to assume that we can. You have an iron-clad guarantee from me, the scenario designer, that the zombie has nothing resembling thoughts, and we’re both going to resist any suggestion that we should think of its neural activity as being like your own thoughts in any way — apart from the fact that its head houses a purely physical duplication of the causal structure of your cognition.
We’ll have to put aside the fact that, if we were arguing with the zombie about intentionality itself, or about the ontological status of phenomenal redness, or zombies, its lack of intentionality would make no detectable difference to the conversation. If we argued with it about its lack of consciousness and meaning, and it argued back, its half of the conversation would not be about those things; the aboutness would stop and start with each change of speaker. We say meaningful things; it just responds with babble. If we considered its words as meaningful, and responded as though it had a point to make, we would be indulging in projection, assigning meaning that was not actually there. If it seemed to make errors or assumptions, those would not really be errors or assumptions until we added an unjustified interpretation, creating meaning on our side of the conversation and then projecting those meanings onto it, as we got inspired by meaning-like patterns within the dross of its empty sounds.
It’s true that, in purely practical terms, we might benefit from imagining it as having intensionality, even though it doesn’t. When it speaks, it would not be practical to hold its output in auditory memory as meaningless babble, and then think about what the words would mean; we’ll probably just listen to its speech and prepare our counterarguments.
It’s also true that, if we were trying to make any predictions about what the zombie would do in anything but simple situations, we would still need to employ a complex theory of mind (TOM) to have any chance of predictive success — but, in this case, we must insist on calling it a theory of pseudo-mind (TOP). A TOP is functionally like a TOM, but we add an asterisk next to every mentalistic term to indicate its pseudo-status.
Think of a red triangle, we say, and the zombie’s brain creates a *model. (Note the asterisk!)
By creation, of course, I don’t mean that any new things would come to exist, only that some existing things will be rearranged. Neurons that were inactive will become active, the strength of some synapses will be chemically altered, and so on.
The zombie’s *model of the triangle is therefore immaterial in a mundane but important sense: the *model consists purely of the way stuff is arranged, not the stuff itself. Arguably, the *model could be considered massless in the same way that a checkmate is massless or that life is massless.
Nonetheless, despite being physically no more than an arrangement, the zombie’s *model of a triangle would be a real feature of the physical world, and it would be a genuine cause of physical behaviour.
Tell us about the triangle, we say. If we have beards, we will stroke them at this point, or polish our glasses.
The zombie would consult the *model to answer questions about the *imagined triangle’s texture and orientation and its precise hue. It’s red, it’s an equilateral triangle, it’s pointing upwards, and so on.
Now, it has become very clear to me that many philosophers think it is reasonable or even mandatory at this point to show a complete lack of interest in what’s going on inside the zombie’s brain. The zombie is not really *answering *questions. It does stuff in there, and the result is behaviour, but we’re obliged to assume that none of it means anything and we can only treat its brain like featureless mush. To imagine the zombie’s internals as having any semantic significance or cognitive complexity is not sporting. We shouldn’t even plot out its *cognitive steps with asterisks.
That doesn’t show proper commitment to the idea of a zombie.
This idea can be expressed in several different ways, but it mostly comes down to respecting the voidiness of the zombie’s void. All zombie voids are necessarily alike, the hardist can insist. A zombie pseudo-imagining a red triangle is no different to a zombie pseudo-composing a symphony in its head or pseudo-plotting its next move in a corporate takeover or pseudo-wondering where to hide a body. Phenomenally, all zombie non-minds are voids, and hence identical, in the same way that a room not containing a pink elephant is identical to a physical duplicate of that room not containing a stack of gold bullion.
It might be useful to call upon our theory of pseudo-mind, but, for this thought experiment, we need to leave our TOP on the shelf; we can’t just add asterisks and carry on, or we will slip into mentalistic talk. Those are the rules of the thought experiment, and the hardists get to call the shots because it’s a zombie. The zombie’s neural patterns are just chemical fluxes, devoid of meaning, so we must diligently suppress any interest in what passes for thought (or *thought or pseudo-thought) inside the zombie’s head.
To which I say: Bad luck; I am interested. I’ll just promise not to use mentalistic language.
Or I’ll try.
A zombie can still be understood as a complex entity with behaviour that has causes. Irrespective of the zombie’s underlying ontology, I have at least as much reason to wonder what is happening inside a zombie’s brain as I have to wonder what a chess computer is likely to do if I advance my rook, or what I need to say to GPT-4 to prevent it from answering with hallucinations. As a neurologist, I can deduce all sorts of things about the zombie’s physical brain, and the blood supply of its brain, by considering the zombie’s answers to my questions as answers. It would be foolish to pretend that the most complicated structure in the known universe can be treated without using some high-level concepts at our end. Those high-level concepts will mimic the banned mental talk, but that doesn’t mean they should be ignored; we will just need a bit of zombie-compatible reconceptualisation.
We will need to frame the zombie’s *models as functional arrangements while carefully dodging implications of intentionality, but we can still consider the complexity of its *models — which do indeed differ in meaningful ways when it switches from *imagining pink elephants to gold bullion. A zombie *imagining a red triangle and a zombie *plotting to stab you are in functionally different states, and the differences could be important, even without the aboutness.
We are walking a fine line, though, if we consider the zombie’s brain with any respect for its complexity.
At this point, an anti-hardist who was not fully on board with the idea of a zombie might try to sneak in some de facto mental talk, arguing that a TOP is not really different to a TOM, apart from some asterisks that make no important difference to anything. They might even argue that the intentionality we’ve removed is the spicy (Δ) version of intentionality; we still have the functional (ρ) equivalent — which is, after all, the only possible source of our theory of Δ-intentionality in the first place, and the very thing that inspired the “Intentionality” entry in the Stanford Encyclopedia of Philosophy (SEP).
We still have all of that on the Zombie World.
Arguably, the anti-hardist might add, the functional (ρ) version of intentionality is what the SEP entry is about, even on this world. Because how could it get to be about spice (Δ), when spice did not lead to its creation or modify any of the author’s word choices?
An anti-hardist could point out that whatever our zombie twin says mirrors what we say, and vice versa, and all the neural steps involved are identical, so every single reportable aspect of our own complex epistemic situation in relation to phenomenal consciousness and imaginary red triangles and intentionality and every other issue in this field is closely mirrored in the zombie, down to the last sodium ion in the last action potential, and down to the last comma in the Stanford Encyclopedia.
We don’t need to invoke the special hardist form of intentionality in the zombie, the anti-hardist might argue, but we do need to come up with names for the inter-relationships between its neural states.
I obviously have some sympathy for the anti-hardist, here. But we must resolutely resist this insidious slide towards mental talk.
We must insist that the zombie’s pseudo-cognition is irrelevant to our own cognition, providing no insights at all despite being causally and linguistically isomorphic. We are only allowed to account for what the zombie says in purely functional terms, without employing any concepts of intentionality. We can metaphorically burn the zombie version of the intentionality entry in the Stanford Encyclopedia of Philosophy, and ban talk even of the functional (ρ) equivalent of intentionality.
A stubborn anti-hardist might push back here, accusing us of drawing a distinction without meaning. Whether we want to assign a machine intentionality or not, all machines engaged in pseudo-cognitive activities are usefully approached from a perspective that keeps track of representational relationships, even if those machines are known to be too simple to have anything resembling consciousness. Designers of chess computers and AIs use representational language all the time, and those machines are not even conscious. Why do we have to throw away useful conceptual tools that are still applicable to unconscious machines in a spice-free setting? Not only that, the chemical machine of the zombie’s brain has evolved to represent the world; it is maintained at a high metabolic cost just to achieve that function. Think of childbirth, and the difficulty of getting that representational organ out of the maternal pelvis. To ignore the brain’s costly internals just because a hardist insists that we are not allowed to be interested — because the zombie “lacks intentionality” — would be silly. Even if we started from scratch, with intentionality banned, and tried to explain the behaviour of the zombie according to the mechanical actions of its 86 billion neurons, we would end up creating a science of complex functionality that would look exactly like it involves a form of pseudo-intentionality. We would restore the Stanford Encyclopedia from the ashes, on purely functional grounds.
And suppose, says the anti-hardist, we are scheduled to debate David Chalmers (the human version) before a live audience of mixed zombies and humans, with a million dollars up for grabs, but we only have a copy of the books and papers written by his zombie twin, not those produced by Human Chalmers. Alas, we have the meaningless versions. Would we drive across town to borrow a friend’s human version of the books, which are identical, but now mean something? Would we be equally happy to prepare for the debate by reading the zombie’s books upside down as right way up? Doesn’t our preference for correctly oriented text betray a concealed belief that the text actually means something, or can be treated as meaning something? For that matter, why do we write at all if we don’t trust printed words to capture meaning? How can the words be trusted to convey meaning if the neural activity producing those words is to be thought of as a glorified muscle twitch?
Perhaps the anti-hardist has a point, here — or perhaps several points — but I must insist that, for this experiment, pseudo-intentionality is banned, too. Evolution pseudo-chose big brains for survival reasons, not to pseudo-create meanings. None of the zombie’s neural activity should be considered as meaning anything. Sure, someone could give it a copy of their unfinished novel and get useful feedback about the plot, the characters' motivations, the complex subtext, and so on. If the zombie has the right training, it could even get a job tutoring or lecturing on the philosophy of meaning, and no one could detect that it was a zombie in the way it used intentionality-related pseudo-concepts, but that’s just because this lump of flesh produces motor output that happens to mimic meaning so well that it can be used as a surrogate meaner.
The zombie’s dead inside, and it never means anything.
If we find its outputs useful, that’s a stroke of luck we shouldn’t question too deeply.
But wait, the anti-hardist says. In a previous post on this very Substack, phenomenal consciousness was defined like this:
I take P‑conscious properties to be distinct from any cognitive, intentional, or functional property. (Block, 1995)
If P-consciousness has no intentional properties to begin with, how did removing it kill intentionality? Shouldn’t intentionality remain behind when we remove the spice, which does not contain the intentionality?
To which I will calmly say: That was a different post. This is enhanced spice, which houses intensionality, and, when we remove it, we remove meaning, too. Those are the rules.
Given this maximally skeptical perspective, what’s the best way to think about the zombie’s neural responses to auditory exchanges that, when we produce them, sound to us like talk about a red triangle?
Phase Two
We have the zombie in our lab, transplanted from its world to ours. For whatever reason, it remains a zombie on our world; the defect in its psychophysical synching remains in effect. Perhaps the natural laws that create consciousness within us are blocked by a magic shield, functioning like a Faraday cage, keeping out the special extras, blocking meaning. No one understands these things. Imagine whatever you like.
We ask the zombie to *imagine a red triangle, as before, but this time we place a computer screen in front of it. On that screen, we display random shapes, asking it how much they are “like” what it just imagined.
We could ask it to indicate a percentage similarity score by twisting a dial from 0% to 100%.
The zombie’s *model of the red triangle is really just a bunch of neurons firing in the dark interior of its skull. But that *model will objectively have a very specific higher-order physical property. It will physically cause the zombie to score shapes appearing on the computer screen in a particular way: red triangles emitting the appropriate wavelength will consistently score higher on the likeness scale than grey triangles or red circles.
When a red triangle of a certain hue and texture appears on the screen, the score might even reach 100%.
Yes, the zombie might *say, that’s exactly what my imagined triangle was like.
Note that, for any given screen resolution, the set of screen images capable of achieving a score of 100% is a functionally defined set within the potential space of all possible screen configurations, and in principle that set is entirely derivable from the zombie’s physical structure. It is directly entailed by physical reality; it constitutes a natural way of thinking about the zombie’s *interior, and, by almost every metric apart from ontological accuracy, it provides a much more useful match for the zombie’s *interior than would any depiction of neurons firing in the dark.
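(For readers who like to see things spelled out in code, here is a toy sketch of that functionally defined set, offered purely as an illustration. Everything in it is hypothetical: `likeness_score` stands in for whatever physical chain of events maps a displayed image to the zombie’s dial reading, and it is assumed to be fixed entirely by the zombie’s physical brain state. The point is only that the 100% set is a filter over possible screen images, with no mental talk required.)

```python
from typing import Callable, Iterable, List

Image = bytes  # a stand-in for one possible screen configuration

def full_match_set(
    candidate_images: Iterable[Image],
    likeness_score: Callable[[Image], float],  # hypothetical: image -> dial reading (0-100)
) -> List[Image]:
    """Return every candidate image the zombie would score at 100% on the dial.

    likeness_score is a placeholder for the physical process that takes a
    displayed image to a dial reading; it is fixed by the zombie's brain state,
    so the returned set is entailed by physical facts alone.
    """
    return [img for img in candidate_images if likeness_score(img) >= 100.0]
```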
How we determine the winning set of screen images is not the current issue; all that matters for now is that the set is definable in principle. Still, we can sketch out a plausible backstory.
We could gradually refine our estimate over serial iterations, like the photofit process used to reconstruct a likeness of a criminal’s face from a witness’s memory.
More likely, armed with a more advanced theory of zombie pseudo-cognition or an AI trained on neural decoding, we could generate high-scoring images in a single round by making clever inferences about what the zombie’s pseudo-representational systems are doing and how they functionally relate to the range of potential visual inputs. Moderately successful visual mind-reading of this nature has already been achieved in humans, using functional magnetic resonance imaging scans with very poor resolution compared to the neurons engaged in the representational task, and further improvements can be expected in future.
We could make some semi-informed guesses about the triangle, show our chosen images to a digital simulation of the zombie brain, and then ask that simulated brain which guesses were successful. We could do this many times over, in parallel, keeping the best matches. With the massive parallelism that might be made available by a quantum computer, we might be able to define the entire set of images scoring 100%, or at least home in on a single strong match before showing it to the physical zombie.
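(Again, a toy sketch rather than a claim about how real neural decoding works: the photofit-style refinement just described is an ordinary iterative search. All the names below, `random_image`, `mutate` and `simulated_score`, are placeholders for machinery we are assuming we have, including the digital simulation of the zombie’s brain that supplies the scores.)

```python
import random
from typing import Callable

Image = bytes  # a stand-in for a screen configuration

def photofit_search(
    random_image: Callable[[], Image],          # hypothetical: generate a candidate image
    mutate: Callable[[Image], Image],           # hypothetical: perturb a candidate slightly
    simulated_score: Callable[[Image], float],  # hypothetical: simulated-brain dial reading (0-100)
    population: int = 64,
    generations: int = 200,
) -> Image:
    """Refine candidate images until one functionally matches the zombie's *model.

    Generate candidates, ask the simulated zombie brain to score them, keep the
    best, perturb them, and repeat. Nothing here requires the *model to be about
    anything; the scores are just physical outputs of a physical system.
    """
    candidates = [random_image() for _ in range(population)]
    for _ in range(generations):
        candidates.sort(key=simulated_score, reverse=True)
        best = candidates[: population // 4]                # keep the top quarter
        candidates = best + [
            mutate(random.choice(best)) for _ in range(population - len(best))
        ]
        if simulated_score(candidates[0]) >= 100.0:         # an exact functional match
            break
    return max(candidates, key=simulated_score)
```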
When the process is complete, we will end up with a literal red triangle on the screen that is functionally related to the *imaginary triangle in the zombie’s non-existent *mind.
We will have proceeded on functional grounds, looking for a specific higher order property of the neural *model, so we won’t have committed the fallacy of thinking that the zombie’s *mind is a Cartesian screen. Obviously we will have come close to committing this fallacy, but I propose we would be in the same position as a mathematician who knowingly took the square root of a negative number, where a child in primary school would be marked as incorrect for attempting the same thing. We would know what we were doing, and so we could add suitable disclaimers to the displayed image: “Caution, ontologically promoted reconstruction; this image has been drawn from a decoding of neural activity, assuming a subjective perspective that doesn’t actually pertain, and treating a model as being replaceable with its implied content.”
In this set-up, we know full well that the zombie’s internal neural *model is not literally like the triangle on the screen, but there is another sense of “like” in which the imagined triangle is exactly like the triangle on the screen — and that is the sense being scored. Indeed the zombie *says so, and that correspondence is all that we are depicting on the screen. We are depicting the internal neural *model, but we have re-rendered it in a form that, when it is shown to the zombie’s visual system, produces a match. We have merely translated the neural *model into a pre-eyeball image such that, when it is *seen by an eyeball and subsequently encoded in neural terms within the zombie’s brain by objectively describable processes (very similar to the ones we had to reverse engineer to produce the image), the *seen triangle on the screen matches the already encoded *imaginary triangle, according to functional matching criteria in the zombie’s visual *cognition.
We will have derived the shape and colour of an *imaginary *image, based on objective, real-world features in the zombie brain, so every step of the process is defensible. We haven’t broken the rules of the thought experiment, because none of it means anything. It is as though we took a muscle twitch and graphed it without passing any semantic judgment — but with a few more steps.
We will need this idea later, so let’s give this technology a name: we can examine the zombie with a likoscope to produce a likograph.
Suppose we have automated this process, and we can peek into the zombie’s neural activity as though it had a mind — though, we must recall, it has no such thing.
We turn the screen off, knowing that we could turn it back on at any stage to see what the zombie’s *models were *like.
If we ask the zombie about its *imaginary triangle, the zombie will produce vocalizations that sound like language, and it will say that the triangle is red, that it has a certain shape and orientation, and so on. We might be particularly tempted to think of those words as meaningful — because, if we add the assumption that they are meaningful, we gain predictive powers for what the likoscopic screen would show.
But those are just sounds and the screen is off.
Perhaps a hardist might accuse us of cheating, here, of adding meaning with the likoscope? But how is that possible? If the zombie with its highly evolved, sophisticated brain has no meaning, how could our lab equipment create meaning out of thin air? The likograph is just wires and pixels, just as the zombie’s neural activity is just neurons and action potentials.
We can nonetheless think about the *model via the mental shortcut of its (still meaningless) likoscopic equivalent. We can think about the faux, potential triangle, which is a functional extension of the zombie that is not intrinsically meaningful but is meaningful to us.
Because it is *imaginary, and the screen is off so it is merely potential, the faux triangle in this tale can be considered non-physical.
That means, within the bare physical brain of a zombie, we already have two types of immateriality in play.
Firstly, the *model of the triangle is non-physical in a somewhat strained sense: it consists of a rearrangement of things that were already there. It is non-physical in the sense of not adding new mass, even though all of the neurons involved in the *model are physically present. It is a pattern, not a thing.
Secondly, the *modelled triangle is non-physical because it is non-existent. Not only is a representational view of the *model banned; there is nothing it could be a representation of, at least while we have the likoscopic screen turned off. The triangle is virtual.
If we try just a little harder, we can add a third type of immateriality.
Suppose we had asked your twin to *imagine a wooden red triangle. We would then have a triangle that was immaterial in the two ways mentioned, but wooden in a different sense. It’s not really wooden, of course, but if we turned on the screen of our likoscope, we would see the wood grain on the likograph.
But what if we asked your zombie twin to *imagine an “imaginary” triangle, one made of a floating cloud of phenomenality? Or we asked it to *imagine the mental triangle that comes into being when a human imagines a triangle? We could ask it to *imagine the mental triangle that is inevitably missing in a zombie. A triangle infused with irreducible red spice.
The imaginary wood has now been replaced by an entirely functional but still imaginary version of pseudo-spice.
Now we have three forms of immateriality, all arising in a scenario involving a purely physical zombie. That’s okay, because the triangle, in not existing, is clearly not like anything. There is still no intentionality. These are all just ways of talking about neurons firing in the dark. We still haven’t broken physics.
When we turn the screen back on, though, there is an ethereal triangle, with a three-way claim on immateriality, as red as fresh blood, shot through with a backlit glow and gently shimmering, as though it were floating in a haze of banned mentality.
To be continued…
Please hit the ❤️ “Like” button below if you enjoyed this post. I appreciate every click.
If you’re another Substack writer who works on the brain or consciousness, please consider adding the Zombie’s Delusion to your recommendations, so the conversation can expand to include others working on this puzzle.
Intriguing, fascinating. You’ve given me the second quotable quote of the day (the first one was from a poster on another of your articles).
"the voidiness of the zombie’s void." 😊
Now I know this is all very serious stuff and I'm not supposed to laugh, but I did actually laugh out loud at that one.
" Anything at all that can be scooped up by the term “intensionality” can be imagined as missing in your twin except where the merely physical processes of the zombie’s brain create features that are, functionally at least, a little bit like an ersatz form of pseudo-intentionality, a lame pseudo-aboutness that goes entirely unobserved in the zombie’s head."
And yet, there are people out there with their AI robotic companions who imagine that the piece of AI in their robot is 'intentional' and has 'human emotions'.
This is serious stuff for a Monday morning - I think I need another tea before I venture into ethereal imagined imaginary triangles.
Thank you for this extraordinary thought experiment. (Even if I'm taking a while to follow the arguments 😳😂)
Congratulations! You are producing thought-provoking ideas at a rate far greater than I can keep up with, let alone respond to in any fully thought-out way - so what follows must be considered as half-baked.
I am wondering if it is something of a straw man to specify that zombies completely lack intentionality: what they definitionally lack is conscious experience (to put it as it is in the SEP's Zombies entry). At least at first sight, these seem to be separable issues; for example, I have never had the conscious experience of perfect pitch, yet I can have thoughts, such as "my uncle has perfect pitch", in which the term seems to have an intension.
To be clear, I am not attempting to defend the zombie intuition, I am merely trying to follow your instruction to think like a (die-)hardist. It is a challenging task to accept, for the sake of argument, so many ideas which seem unjustifiable.