
The Knowledge Argument Applied to Ethics

A group of AI enthusiasts has been discussing Engineering Machine Consciousness in Melbourne for over a decade. In a recent interview with Jamais Cascio on Engineering Happy People & Global Catastrophic Risks, we discussed the benefits of amplifying empathy without the nasty side effects (possibly through cultural progress or technological intervention – a form of moral enhancement). I have been thinking further about how an agent might think and act differently if it had no ‘raw feels’ – no self-intimating conscious experience.

I posted to the Hedonistic Imperative Facebook group:

Are the limitations of empathy in humans distracting us from the in-principle benefits of empathy?
The side effects of empathy in humans include increased distrust of the outgroup, and limits on the number of people we can feel strong empathy for – though in principle the experience of understanding another person’s condition from their perspective seems quite useful, at least while we are still motivated by our experience.
But what of the future? Are our posthuman descendants likely to be motivated by their ‘experiences of’ as well as their ‘knowledge about’ in making choices regarding others and about the trajectories of civilizational progress?

I wonder whether all ‘experiences of’ can be understood in terms of ‘knowledge about’ – can the whole of ethics be explained without being experienced, as knowledge about without any experience of? This reminds me of the Mary’s Room/Knowledge Argument* thought experiment. I leaned towards the position that Mary, with a fully working knowledge of the visual system and the relevant neuroscience, wouldn’t ‘learn’ anything new when walking out of the grey-scale room and into the colourful world outside.
Imagine an adaptation of the Mary’s Room thought experiment – for the time being let’s call it Autistic Savant Angela’s Condition – along the following lines:

class 1

Angela is a brilliant ethicist and neuroscientist (an expert in bioethics, neuroethics, etc.) who (for whatever reason) is an autistic savant with congenital insensitivity to pain and pleasure – she cannot feel pain or suffering at all, nor experience what it is like to be someone else who does, and she has no intuition of ethics. Throughout her whole life she has been forced to investigate the field of ethics and the concepts of pleasure, bliss, pain and suffering through theory alone. She has a complete mechanical understanding of empathy and of the brain states of subjects participating in various trolley thought experiments and hundreds of permutations of the Milgram experiments, and she is an expert in the philosophies of ethics from Aristotle to Hume to Sidgwick. Suddenly there is a medical breakthrough in gene therapy that would give her normal human capacity to feel, without impairing her cognitive ability at all. If Angela were to undergo this gene therapy, would she learn anything more about ethics?

class 2

Same as above except Angela has no concept of other agents.

class 3

Same as class 2, except Angela is a superintelligent AI, and instead of gene therapy the AI receives a software/hardware upgrade that gives it access to the ‘fire in the equations’ – to experience. Would the AI learn anything more about ethics? Would it act in a more ethical way? Would it produce more ethical outcomes?


Implications

Should an effective altruist support a completely dispassionate approach to cause prioritization?

If we were to build an ethical superintelligence – would having access to visceral experiences (i.e. pain/pleasure) change its ethical outcomes?
If a superintelligence were to perform Coherent Extrapolated Volition or Coherent Aggregated Volition, would the kind of future it produced differ if it could experience? Would the likelihoods of various ethical outcomes change?

Is experience required to fully understand ethics? Is experience required to effectively implement ethics?
[Image: Robo Brain]

footnotes

The Knowledge Argument Thought Experiment

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? – Frank Jackson, ‘Epiphenomenal Qualia’ (1982)
