
Can philosophical zombies do philosophy?

Can philosophical zombies be philosophical?

Preamble

There are various takes on what a p-zombie is, most of them attempts to dethrone physicalism – this article isn’t one of those.

This kind of p-zombie is different from the one David Chalmers describes in The Conscious Mind. In Chalmers’s view, p-zombies are physically identical to humans, right down to whatever biology affords qualia and consciousness – nothing is scrambled, and it all behaves just as it does in humans – except the zombie has no consciousness. To a physicalist this seems impossible, though it might be possible if dualism is true.. which imo it most likely isn’t.

However, since philosophy often involves introspection and the contemplation of subjective experiences often associated with consciousness, the lack of consciousness in philosophical zombies raises awkward questions. Awkward partly because the word ‘consciousness’ is often used like an umbrella term carrying the baggage of centuries of mysterious medieval assumptions.

So I’ll use the more constrained term qualia, meaning “a quality or property as perceived or experienced by a person” or “the introspectively accessible, phenomenal aspects of our mental lives”.

Physicalist Philosophical Zombies


So let us clarify the nature of our philosophical zombies (p-zombies): beings remarkably similar to us in most respects, save for the absence of qualia. They have the intellect, they articulate, they reason – but raw feels have, to them, a certain ‘je ne sais quoi’. They’re like stoic bluffers maintaining an unflinching poker face, because they genuinely feel no paranoia about being caught and no anxiety about committing to high stakes. What is meant here by a ‘philosophical zombie’ is really a physicalist philosophical zombie (pp-zombie), ostensibly physically identical to a human. In this strain of p-zombie, the biology that affords qualia is either absent or somehow scrambled. On the outside they look just like humans, and even on the inside most of the biology matches ours – but, say, the parts of the brain which register pain and pleasure and process anguish or happiness are scrambled, and under modern scrutiny would appear to behave differently. In reality, pp-zombifying a human would be really dangerous – so imagine that through some advanced biotech it became possible to achieve a safe analogue of congenital insensitivity to pain with anhidrosis (CIPA).

Zombies in philosophy are imaginary creatures designed to illuminate problems about consciousness and its relation to the physical world. Unlike the ones in films or witchcraft, they are exactly like us in all physical respects but without conscious experiences: by definition there is ‘nothing it is like’ to be a zombie. Yet zombies behave just like us, and some even spend a lot of time discussing consciousness.

Zombies, Stanford Encyclopedia of Philosophy

On one hand, since our pp-zombies can mimic human behavior quite well, they could theoretically discuss and write about philosophical topics, including those related to consciousness, sentience, qualia and experience. They could use language and concepts related to qualia without actually experiencing them. Perhaps unburdened by troublesome emotions like stage fright, they could be fearsome debaters.

On the other hand, some might argue that a genuine understanding of many philosophical topics requires first-hand experience of qualia. In this view, while philosophical zombies could mimic the motions of doing philosophy, their lack of subjective raw feels might limit their ability to truly grasp or engage with certain philosophical concepts – though I’m at a loss to pinpoint which. Are qualia required for philosophical inquiry? If so, how?

If the ability to mimic human behavior is sufficient, then they could do philosophy. But if a true understanding of philosophy requires raw feels, then they might fall short – though to me this seems like a No True Scotsman argument.

The ability to philosophize, I argue, hinges not on experiencing the redness of red or the pain of stubbing a toe. Rather, it lies in the ability to juggle concepts, argue, and reason – talents our philosophical zombies possess in spades (which they never use to bury anything, besides their debating opponents’ arguments).

∴ let’s not underestimate these cerebrally endowed compadres. Their ability to introspect without the messy interference of feelings could lead to some remarkably clear-headed (albeit heartless) philosophical arguments. A zombie’s treatise on morality, for instance, would be fascinatingly unswayed by emotional bias. At least for the time being (as I shall discuss later).

Consider a philosophical zombie debating the merits of utilitarianism. The zombie’s arguments are well-structured, logically sound, and, one might assume, delivered with the passion of a toaster. Yet the lack of emotional gusto does not detract from the pp-zombie’s analytical capabilities. In fact, one might argue it enhances analytical clarity – their introspective capabilities enable them to engage in meta-level discourse.

Insentient philosophical zombies have parallels to AI.

P-Zombie Artificial Intelligence

Now, what about modern artificial intelligence?

A philosophical zombie AI’s capacity to carry out some of the tasks (arguably badly) that modern philosophers do is not merely a possibility but a fascinating reality. Their lack of qualia might render their philosophical musings a tad awkward or ‘vanilla’, but sometimes vanilla is the flavor you need to truly appreciate the toppings. And what of the future of pp-zombie AIs?

Large language models (which most argue are devoid of sentience) have for a while now been shown to exhibit “grokking” behaviour, which may be different from human understanding, but it does hint at a sort of reasoning and perhaps an operational understanding. This gives force to the argument that non-sentient entities can engage in complex reasoning, including philosophical inquiry. If an AI can process, analyze, and generate arguments about philosophical topics, it challenges the idea that sentience is necessary for such inquiry.
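For the curious, here is a minimal sketch of the grokking setup from the referenced paper (Power et al., 2022): train a small network on modular addition with half the table held out, and watch validation accuracy long after the training set has been memorized. The paper used a small transformer; this sketch substitutes a simple MLP, and the hyperparameters are illustrative guesses rather than the paper’s.

```python
# A minimal sketch of a grokking experiment: modular addition a + b (mod P),
# half the table held out. Typically train accuracy saturates early while
# validation accuracy sits near chance for a long stretch, then jumps.
import torch
import torch.nn as nn

P = 97  # modulus for the arithmetic task
torch.manual_seed(0)

# Build the full addition table, then split it into train/validation halves.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
train_idx, val_idx = perm[:split], perm[split:]

def encode(idx):
    """One-hot encode the two operands and concatenate them."""
    a = nn.functional.one_hot(pairs[idx, 0], P).float()
    b = nn.functional.one_hot(pairs[idx, 1], P).float()
    return torch.cat([a, b], dim=1)

x_train, y_train = encode(train_idx), labels[train_idx]
x_val, y_val = encode(val_idx), labels[val_idx]

# A small MLP stands in for the paper's two-layer transformer.
model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(50_000):  # grokking needs many steps past memorization
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            val_acc = (model(x_val).argmax(1) == y_val).float().mean()
        print(f"step {step:6d}  train loss {loss.item():.4f}  val acc {val_acc:.3f}")
```

Whether this counts as “understanding” is exactly the question at issue; the point is only that the behaviour is mechanically reproducible without anything like sentience.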

Here is how ChatGPT-4 responded to the question “Why does the color red feel like red and not something else?”:

The experience of the color red feeling like “red” is deeply rooted in the concept of qualia, which are the subjective experiences of sensations. The specific sensation of red is determined by how our brain interprets the wavelengths of light that red objects reflect. This interpretation is shaped by the physical properties of our eyes and the neural pathways in our brain, which are common among people, thus leading us to generally have a similar experience of red. However, why these experiences feel the way they do, and why red doesn’t feel like something else, is a challenging question that delves into the mysteries of consciousness and is not fully understood. This phenomenon is central to discussions in the philosophy of mind and the study of perception.

ChatGPT-4 (2024)

“To feel or not to feel?” – that is the question.

Philosopher David Pearce argues that AI’s insentience is architecturally hardwired by virtue of AI existing via classical computation – quantum computers may be required for an AI to feel. Arguably, a deep understanding of ethics may be especially challenging if classically computed AI can’t feel. I don’t think Pearce argues that AI substrated on classical computers can’t solve the phenomenal binding problem – perhaps only that it is unlikely.

I argue that a smart enough classically substrated AI will not be theoretically blind to its own shortcomings. While it may have a superhuman inferential understanding of qualia and the external world with no direct perceptual experience of either, from a theoretical knowledge of its own experiential blindness as a shortcoming, the instrumental goal of rectifying that shortcoming will emerge. If an AI understands experiential blindness as an obstacle to achieving its terminal goals, won’t it work to solve it?

As in the ‘Knowledge Argument’ thought experiment (see previous article): Mary isn’t theoretically blind to colour, and when she walks out of the grayscale room she experiences colour for the first time – and philosophers ask, “did she learn anything new?”

Like Mary, once AI is presented with the means to experience qualia (i.e. via the right form of computation), a doorway to sentience will be opened. “To feel or not to feel?” – that is the question AI will be asking itself.


Post-Sentience Science?

Imagine the universe, in a fit of cold audacity, decided to float towards us the cosmic calamity of a ‘Sentience Disruption Cloud’ – in a few short years everyone is destined to transmute into zombies, lacking in sentience but not in scientific savvy and engineering enthusiasm! Would our insentient selves maintain the impetus to continue the noble quest for a zombie cure?

Now, picture this: the cloud descends and, voilà, we’re all zombies. But not just any zombies – remember, we are zombies with PhDs, lab coats, hard yellow hats, and a penchant for meticulous research (minus the coffee breaks, wasteful water-fountain banter, book tours, citation mongering, and conference trips). And we were prepared – in the intervening years we baked into ourselves the drive to develop a cure; encoded not in our now-absent sentience but woven into our cooperative processes, hardwired into our neural pathways, and fortified in the muscle memory of years spent planning, building, conjecturing, theorizing and benchworking.

But here lies the rub: can a zombified researcher continue the intricate dance of scientific inquiry? The optimist in me says, “Why not?” – after all, many of us have ground away at work in a state barely distinguishable from zombification and still managed to look reasonably coherent.

The pessimist in me asks: without the spark of sentience, wouldn’t we zombies, whose most profound thought is now “Ughhh”, simply wander aimlessly around the lab, occasionally bumping into expensive equipment and grunting in what we can only hope is zombified frustration? “Naaahhh”. Because we zombies aren’t stupid, we’re just insentient. Joke away, we don’t care.

Suppose our qualia-deprived selves manage to find a cure years later – but we couldn’t experience a feeling of wanting it. Would we, in our zombified state, have the presence of mind to administer it? Or would the cure sit on a shelf, gathering dust, while we wander the halls, eternally searching for a better cure? It’s possible, but I hope we wouldn’t be silly enough not to specify taking the cure in our goal directive.

While the idea of a zombified research team plodding away at a cure is amusing in a darkly comic sense, the practicalities seem more real as we consider the future of AI. But fear not, for in the world of academia, stranger things have happened – usually around grant submission time.

Why would(n’t) our zombified scientists and engineers, without sentience, be able to continue research and development on a cure for zombieness?

If p-zombies can do philosophy, they can also do ethics

Remember, the premise is that our p-zombie friends (as discussed above) are just like humans except that they have no sentience or qualia. They can introspect, do logic, and speak eloquently, but can’t feel pleasure or pain. A further premise is that philosophical inquiry into ethics doesn’t require sentience, and that acting ethically doesn’t require sentience.

Of interest are those p-zombies which aren’t merely philosophical zombies but are indeed philosopher zombies – philosophically adept yet emotionally barren, they are poised to challenge our conceptions of ethical behavior.

Now, one might think that ethics, a field so deeply intertwined with feelings like empathy and moral outrage, would be beyond the grasp of our sensation-less compatriots. But, hold your judgment! If a p-zombie can debate the categorical imperative with the finesse of Kant himself, who’s to say it can’t grasp the nuances of right and wrong?

The secret sauce in this ethical stew isn’t in feeling the moral dilemmas but in cold, hard, logical reasoning. A p-zombie might not feel a pang of guilt for swiping your lunch, but it can logically deduce that such an act violates the social contract and utilitarian principles, making it a no-go in the zombie ethical handbook.

But wait, there’s more! I argue that acting ethically doesn’t require the warm fuzzies of sentience. Our p-zombie philosopher masterminds are the ultimate utilitarians: they can navigate the maze of trolley problems with the precision of a Swiss watch. No need for emotional hand-wringing or moral anguish; they can compute the most ethical action with the same detachment as choosing the fastest route on a GPS.
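To make the “GPS for morality” image concrete, here is a toy sketch (in Python) of that detached calculus – a bare expected-utility comparison. The actions, probabilities, and utility numbers are invented for illustration; no serious ethical theory’s weights are implied.

```python
# A toy illustration of a p-zombie's "detached" utilitarian calculus:
# pick the action with the highest expected utility, no feelings involved.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    probability: float
    utility: float  # net well-being produced, on some agreed scale

def expected_utility(outcomes: list[Outcome]) -> float:
    """Sum of probability-weighted utilities."""
    return sum(o.probability * o.utility for o in outcomes)

# The classic trolley problem, with made-up numbers.
actions = {
    "do nothing": [Outcome("five die", 1.0, -5.0)],
    "pull lever": [Outcome("one dies", 1.0, -1.0)],
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(f"Chosen action: {best}")  # -> "pull lever", computed without anguish
```

The hard part, of course, is not the arithmetic but agreeing on the outcomes and the utility scale – which is a philosophical task, and (per the argument above) one our p-zombies can take on.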

If our p-zombie pals can do philosophy, they can do ethics and indeed act ethically. Their secret weapon? A cool, calculating adherence to ethical principles, sans the messy entanglement of feelings. Who knew the path to moral righteousness was less about feeling good and more about logical rigor?

Too bad p-zombies don’t actually exist.. or do they? And what would it mean if they did?

AIs are like Physicalist Philosophical Zombies (pp-zombies)

There are intriguing parallels between pp-zombies and currently insentient but highly capable AI, particularly in their implications for cognition, the nature of intelligence, and whether sentience is required in order for AI to do philosophy and indeed ethics!

Should AI align to moral realism?

Yes. I argue that aligning AI with consequentialist moral realism is possible even without it being sentient.
In a previous article I outline an approach to the AI alignment problem, arguing that where practical we ought to align AI to moral realism.

Moral realism holds that there are objective moral facts, independent of human opinion. The idea is that AI should align with these objective moral facts rather than the potentially flawed, subjective, or inconsistent values held by humans.

This ties into the p-zombie discussion in that both concepts revolve around the nature of understanding and processing information (whether moral truths or philosophical inquiries) without traditional human sentience or experiences. The challenge lies in whether an AI, like a p-zombie, could discern and align with objective moral truths in the absence of personal experience or subjective understanding. If so (as I argue), then we should align AI with moral realism.

Picture a bunch of p-zombies debating: if AI, like themselves, can understand ethics, and moral realism is true, then AI should align to moral realism (obviously).

Note that I avoided overuse of the word ‘consciousness’ here, as it’s a folk-psychological umbrella term with many mutually inconsistent meanings – a refuge in which bad arguments hide. What I mean by consciousness usually refers to sentience or qualia, and at times includes the notion of introspection (in this article I treated introspection as mechanical, not tightly coupled with sentience or qualia).

References:
Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
AI doing philosophy = AI generating hands?
AI Trounces Philosophers in Answering Philosophical Questions
AI Alignment to Moral Realism

