Posts

Wireheading with David Pearce

Is the Hedonistic Imperative equivalent to wireheading?
People are often concerned about the future being a cyberpunk dystopia where people are hardwired into their pleasure centers – smacked-out, lotus-eating milksops devoid of meaningful existence. Does David Pearce’s Hedonistic Imperative entail a future where we are all in thrall to permanent experiential orgasms – intravenously hotwired into our pleasure centers via some kind of soma-like drug turning us into blissful idiots?

Adam Ford: I think some people often conflate or distill the Hedonistic Imperative to mean ‘wireheading’ – what do you think?

David Pearce: Yes, I mean, clearly if one does argue that we’re going to phase out the biology of suffering and live out lives of perpetual bliss then it’s very natural to assimilate this to something like ‘wireheading’ – but for all sorts of reasons I don’t think wireheading (i.e. intracranial self-stimulation of the reward centers and its pharmacological equivalents) is a plausible scenario for our future. Not least, there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
I think a much more credible scenario is the idea that we’re going to re-calibrate the hedonic treadmill and allow ourselves and our future children to enjoy lives based on gradients of intelligent bliss. And one of the advantages of re-calibration rather than straightforward hedonic maximization is that by urging recalibration one isn’t telling people they ought to give up their existing preferences or values: if your hedonic set-point (i.e. your average state of wellbeing) is much higher than it is now, your quality of life will really be much higher – but it doesn’t involve any sacrifice of the values you hold most dear.
As a rather simplistic way of putting it – clearly where one lies on the hedonic axis will impose serious cognitive biases (someone who is, let’s say, depressive or prone to low mood will have a very different set of biases from someone who is naturally cheerful). But nonetheless, so long as we aim for a motivational architecture of gradients of bliss, it doesn’t entail giving up anything you want to hold onto. I think that’s really important, because a lot of people will be worried that if, yes, we do enter into some kind of secular paradise, it will involve giving up their normal relationships, their ordinary values and what they hold most dear. Re-calibration does not entail this (unlike wireheading).

Adam Ford: That’s interesting – people think that, you know, as soon as you turn on the Hedonistic Imperative you are destined for a very narrow set of values – that could be just one peak experience being replayed over and over again – in some narrow local maximum.

David Pearce: Yes – I suppose one thinks of (kind of) crazed wirehead rats – in fairness, if one does imagine orgasmic bliss most people don’t complain that their orgasms are too long (and I’m not convinced that there is something desperately wrong with orgasmic bliss that lasts weeks, months, years or even centuries) but one needs to examine the wider sociological picture – and ask ‘is it really sustainable for us to become blissed out as distinct from blissful’.

Adam Ford: Right – and by blissed out you mean something like the lotus eaters encountered by Odysseus?

David Pearce: Yes, I mean clearly it is one version of paradise and bliss – call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. It seems that, crudely speaking, motivation (which is mediated by the mesolimbic dopamine system) and raw bliss (which is associated with mu-opioid activation of our twin hedonic hotspots) are orthogonal axes. Now they’re very closely interrelated (thanks to natural selection) – but in principle we can amplify one or damp down the other. Empirically, at any rate, it seems to be the case today that the happiest people are also the most motivated – they have the greatest desires – I mean, this runs counter to the old Buddhist notion that desire is suffering – but if you actually look at people who are depressive or chronically depressed, quite frequently they have an absence of desire or motivation. But the point is we should be free to choose – yes, it is potentially hugely liberating, this control over our reward architecture, our pleasure circuitry, that biotechnology offers – but let’s get things right. We don’t want to mess things up and produce the equivalent of large numbers of people on heroin – and this is why I so strenuously urge the case for re-calibration – in the long run genetically, in the short run by various non-recreational drugs.

Clearly it is one version of paradise and bliss – they call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. – David Pearce

Adam Ford: Ok… People may be worried that re-calibrating someone is akin to disrupting the continuum of self (or this enduring metaphysical ego) – so that the person at the other end wouldn’t really be a continuation of the person at the beginning. What do you think? How would you respond to that sort of criticism?

David Pearce: It depends how strict one’s conception of personal identity is. Now, would you be worried to learn tomorrow that you had won the national lottery (for example)? It would transform your lifestyle, your circle of friends – would this trigger the anxiety that the person who was living the existence of a multi-millionaire wasn’t really you? Well, perhaps you should be worried about this – but on the whole most people would be relatively relaxed at the prospect. I would see this more as akin to a small child growing up – yes, in one sense, as one becomes a mature adult one has killed the toddler, or lost the essence of what it was to be a toddler – but only in a very benign sense. And by aiming for re-calibration and hedonic enrichment rather than maximization, there is much less of a risk of losing anything that you think is really valuable or important.

Adam Ford: Okay – well that’s interesting – we’ll talk about value. In order not to lose forms of value – even if you don’t use those values much – you might have some values that you leave up in the attic to gather dust, like toys that you don’t play with anymore but might want to pick up once in a thousand years or so. How do you then preserve complexity of value while also achieving high hedonic states – do you think they can go hand in hand? Or do you think preserving complexity of value reduces the likelihood that you will be able to achieve optimal hedonic states?

David Pearce: As an empirical matter – and I stress empirical here – it seems to be the case that the happiest people are responsive to the broadest possible range of rewarding stimuli – it tends to be depressives who get stuck in a rut. So, other things being equal, by re-calibrating ourselves – becoming happy and then superhappy – we can potentially, at any rate, enrich the complexity of our lives with a range of rewarding stimuli – it makes getting stuck in a rut less likely, both for the individual and for civilization as a whole.
I think one of the reasons we are afraid of some kind of loss of complexity is that the idea of heaven – including the traditional Christian heaven – can sound a bit monotonous, and for happy people at least, one of the experiences they find most unpleasant is boredom. But essentially it should be a matter of choice – yes, someone who is very happy to, let’s say, listen to a piece of music or contemplate art should be free to do so, and not forced into leading a very complex or complicated life – but equally, folk who want to do a diverse range of things – well, that’s feasible too.

For all sorts of reasons I don’t think wireheading… is a plausible scenario for our future. Not least there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children. – David Pearce

– The video/audio interview continues past 10:00.

The Knowledge Argument Applied to Ethics

A group of interested AI enthusiasts have been discussing Engineering Machine Consciousness in Melbourne for over a decade. In a recent interview with Jamais Cascio on Engineering Happy People & Global Catastrophic Risks, we discussed the benefits of amplifying empathy without the nasty side effects (possibly through cultural progress or technological intervention – a form of moral enhancement). I have been thinking further about how an agent might think and act differently if it had no ‘raw feels’ – no self-intimating conscious experience.

I posted to the Hedonistic Imperative Facebook group:

Are the limitations of empathy in humans distracting us from the in-principle benefits of empathy?
The side effects of empathy in humans include increased distrust of the outgroup – and limits on the number of people we humans can feel strong empathy for – though in principle the experience of understanding another person’s condition from their perspective seems quite useful – at least while we are still motivated by our experience.
But what of the future? Are our posthuman descendants likely to be motivated by their ‘experiences of’ as well as their ‘knowledge about’ in making choices regarding others and about the trajectories of civilizational progress?

I wonder whether all ‘experience of’ can be understood in terms of ‘knowledge about’ – can the whole of ethics be explained, without being experienced, through ‘knowledge about’ alone, without any ‘experience of’? This reminds me of the Mary’s Room/Knowledge Argument* thought experiment. I leaned towards the position that Mary, with a fully working knowledge of the visual system and relevant neuroscience, wouldn’t ‘learn’ anything new when walking out of the grey-scale room and into the colourful world outside.
Imagine an adaptation of the Mary’s Room thought experiment – for the time being let’s call it Autistic Savant Angela’s Condition – in that:

class 1

Angela is a brilliant ethicist and neuroscientist (an expert in bioethics, neuroethics etc), who (for whatever reason) is an autistic savant with congenital insensitivity to pain and pleasure – she can’t feel pain or suffering at all, or experience what it is like to be someone else who does experience pain or suffering – she has no intuition of ethics. Throughout her whole life she has been forced to investigate the field of ethics and the concepts of pleasure, bliss, pain and suffering through theory alone. She has a complete mechanical understanding of empathy and of the brain states of subjects participating in various trolley thought experiments and hundreds of permutations of Milgram experiments, and is an expert in philosophies of ethics from Aristotle to Hume to Sidgwick etc. Suddenly there is a medical breakthrough in gene therapy that would guarantee the normal human capacity to feel without impairing cognitive ability at all. If Angela were to undergo this gene therapy, would she learn anything more about ethics?

class 2

Same as above except Angela has no concept of other agents.

class 3

Same as class 2 except Angela is a superintelligent AI, and instead of gene therapy, the AI receives a software/hardware upgrade that allows it access to the ‘fire in the equations’, to experience. Would the AI learn anything more about ethics? Would it act in a more ethical way? Would it produce more ethical outcomes?

 

Implications

Should an effective altruist support a completely dispassionate approach to cause prioritization?

If we were to build an ethical superintelligence – would having access to visceral experiences (i.e. pain/pleasure) change its ethical outcomes?
If a superintelligence were to perform Coherent Extrapolated Volition, or Coherent Aggregated Volition, would the kind of future which it produced differ if it could experience? Would likelihoods of various ethical outcomes change?

Is experience required to fully understand ethics? Is experience required to effectively implement ethics?

 

Footnotes

The Knowledge Argument Thought Experiment

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?

The Life You Can Save – Interview with Peter Singer by Adam Ford

Transcript of interview with Peter Singer on ‘The Life You Can Save‘:

I’ve been writing and thinking about the question of global poverty and what affluent people ought to do about it for more than forty years now. The first article I published on that, ‘Famine, Affluence and Morality’, came out in 1972.
But I’d never written a book on that topic until a few years ago when I was encouraged by greater interest in the question of global poverty and greater interest in the question of the obligations of the affluent, and some very positive responses I had to an article that appeared in the New York Times Sunday Magazine. So it was then that I decided that it was time for a book on this topic – I wrote “The Life You Can Save” which came out in 2009 – essentially to encourage people to think that – if you really want to think of yourself as ethical – you want to say “I’m an ethical person, I’m living an ethical life” – it’s not enough to say “I don’t kill.. I don’t steal.. I don’t cheat.. I don’t assault people..” and so on and all those sorts of ‘thou shalt not’ kind of commands – but if you are an affluent person, or even just a middle class person in an affluent country (that’s undoubtedly affluent by global standards) then you’re not really living an ethical life unless you’re prepared to share some of the good fortune that you have with people who are less fortunate – and you know, just the fact that you are able to live in an affluent country means that you are fortunate.
One of the points that I try to emphasize is that the difference in the amount of wealth that people who are middle class or above in affluent societies have, as compared to people in extreme poverty, is so great that things that are really pretty frivolous in our lives (that we spend money on and that are not significant) could make a difference in the lives of other people. So, we spend money on all kinds of things, from going to a cafe and having coffee or buying a bottle of water when we could drink water out of a tap, to going on vacations, making sure we have the latest iPhones – a whole lot of different things that really are not at all life changing to us. But the amount that we spend on those things could be life changing to people in extreme poverty who are not able to afford minimal health care for themselves or their family, who don’t get enough to eat, who perhaps are not able to send their children to school because they need them to work in the fields – a whole lot of different things that we can help people with.
I think we ought to be helping – I think to do nothing is wrong. Now, you can then debate “well, how much ought we to be doing?” which is something I do discuss in the book – but I think that it’s clear that we ought to be setting aside some of what we have to donate to organizations that are effective and that have been proven to be effective in helping the global poor.
So that’s really what the book is about – and perhaps the other thing I ought to say is that many people feel that somehow world poverty is like a bottomless pit that we are pouring money into – but I think that’s a mistake – I think that it’s clear that we are making a difference – that we’ve dramatically reduced the number of people who die from preventable, poverty-related illnesses. Even just since my book has been published (in the last 5 years, for instance) the number of children who die each year (children under 5) has dropped from nearly 10 million a year to 6.6 million a year – so, you know, that’s real progress in just 5 years – and we can make progress with poverty if we are careful about the way in which we do it and fund organizations that are open to evidence about what works and what doesn’t.

Many people feel that somehow world poverty is like a bottomless pit that we are pouring money down into – but I think that’s a mistake – I think that it’s clear that we are making a difference.

Peter Singer – The Life You Can Save – Video interview by Adam Ford


The Life You Can Save

The Life You Can Save: Acting Now to End World Poverty is a 2009 book by Australian philosopher Peter Singer. The author argues that citizens of affluent nations are behaving immorally if they do not act to end the poverty they know to exist in developing nations.

The book is focused on giving to charity, and discusses philosophical considerations, describes practical and psychological obstacles to giving, and lists available resources for prospective donors (e.g. charity evaluators). Singer concludes the book by proposing a minimum ethical standard of giving.

Christian Barry and Gerhard Overland (both from the Centre for Applied Philosophy and Public Ethics) described the widespread acceptance of the notion that “the lives of all people everywhere are of equal fundamental worth when viewed impartially”. In their review of the book in the Journal of Bioethical Inquiry, they then wonder why “the affluent do so little, and demand so little of their governments, while remaining confident that they are morally decent people who generally fulfil their duties to others?” The reviewers agree with Singer, and say they see a conflict between the behaviours of the affluent and the claims of the affluent to being morally decent people. The reviewers also discuss other practical ways to fight poverty.

Support The Life You Can Save: Historically, every dollar donated to The Life You Can Save typically moves an additional three to four dollars to our effective charities. Our recommended charities provide services and support to men, women, and children in needy communities across the globe. Your support for our work means that you'll get the most out of your charitable giving!

Can I Really Save Lives?  The good news, in fact the great news, is that you can! While there are endless problems in the world that you as an individual cannot solve, you can actually save lives and reduce unnecessary suffering and premature death. Should you do it? Watch this video and decide for yourself. The information on our website can help you give most effectively to become a life saver.

Also see the Life You Can Save Infograph Video

Buy The Life You Can Save on Amazon

The Life You Can Save Facebook Page

Philosophy & Effective Altruism – Peter Singer, David Pearce, Justin Oakley, Hilary Greaves

Panelists ([from left to right] Hilary Greaves, Peter Singer, Justin Oakley & David Pearce) discuss what they believe are important philosophical aspects of the Effective Altruism movement – from practical philosophy we can use today to possible endpoints implied by various frameworks in applied ethics. The panelists navigate through a wide range of fascinating and important topics that aspiring effective altruists and anyone who is philosophically inclined will find both useful and enjoyable.

Panel moderated by Kerry Vaughan.

Panel Transcript

(in progress)

0:35 Question “What are the hot topics in philosophy that might change what effective altruists might focus on?”
Hilary Greaves – So, my answer to that – the one I’m most directly familiar with is the one I already mentioned in my talk earlier. I think that population ethics can make a massive difference to a significant proportion of the things we should worry about as EAs. In particular, the thing that gives rise to this is the situation where – at the moment we have lots of moral philosophers who really like their ivory-tower abstract theorising – those people have done a lot of discussing the abstract question of ‘ok, what is my theory of population ethics’ – then at the real-world extreme we have lots of people engaging directly with real-world issues, thinking, ok, how should we do our cost-benefit analysis for, for example, family planning. We have a big gap in the middle – we don’t really have a well-developed community of people who are both in touch with the background moral philosophy and who are interested in applying it to the real world. So because there is that gap I think there’s a lot of low-hanging fruit at the moment for people who have a background in moral philosophy and who are also plugged into the EA community to build these bridges from theory to practice and see what it all means for the real world.

01:56 Peter Singer – I actually agree with that – that population ethics is an important area – and another place that connects to what Hilary was talking about earlier is the existential risk questions. Because we need to think about – suppose that the world were destroyed – is what’s so bad about that the fact that 7.5 billion people have lost their lives, or is it the loss of the untold billions of possible future lives that Nick Bostrom estimates (10^56 or something, I don’t know – some vastly unimaginable number) that could have been good, and that would be lost? So that seems to me to be a real issue. If you want something that’s a little more nitty-gritty, towards what we are talking about today – another issue is – how do we get a grip on the nature and extent of animal suffering? (something that we will be talking a bit about in a moment) It’s really just hard to say – David just talked about factory farming and the vast amount of billions of animals suffering in factory farms – and I totally agree that this is a top priority issue – but in terms of assessing priorities, how do we compare the suffering of a chicken in a factory farm to, let’s say, a mother who has to watch her child dying of malaria? Is there some way we can get a better handle on that?

03:23 Justin Oakley – For me, I think one of the key issues in ethics that bears on Effective Altruism at the moment is what’s known as the ‘situationist critique of virtue ethics’ – trying to understand not only how having a better character helps people to act well but also what environment they are in. Subtle environmental influences might either support or subvert a person acting well – in particular having the virtue, perhaps, of liberality – so there is lots of interesting work being done on that – some people think that debate is beginning to die down – but it seems to be just starting up again with a couple of new books that are coming out looking at a new twist on that. So for me, I’m keen to do that – in my own work at Monash I teach a lot of health professionals, so I’m keen to look at what environmental influences there are on doctors that impede them from having a therapeutic relationship with patients – not only thinking about how to help them be more virtuous, which is not the only thing I aim to do with the doctors that I teach, but I hope to have that influence to some extent.


The Utilitarianism at the End of the Universe – Panelists Hilary Greaves, Peter Singer, Justin Oakley & David Pearce laugh about the possible ‘end games’ of classical utilitarianism.

04:25 David Pearce – Yes, well I’m perhaps slightly out of touch now with analytic philosophy – but one issue that I haven’t really seen tackled by analytic philosophy is this disguised implication of classical utilitarianism about what we ought to be doing, which is essentially optimising matter and energy throughout the world – and perhaps the accessible universe – for maximum bliss. A questioner earlier was asking ‘Well, as a negative utilitarian, do you accept this apparently counter-intuitive consequence that one ought to wipe out the world to prevent the suffering of one person?’ But if one is a classical utilitarian then it seems to be a disguised consequence that it’s not good enough to aim merely for a civilization in which all sentient beings could flourish and enjoy gradients of intelligent bliss – instead one must go on remorselessly until matter and energy are nothing but pure orgasmic bliss.
05:35 Peter Singer – I find ‘remorselessly’ an unusual term to describe it
[laughter…] 05:40 David Pearce – Well, I think this is actually rather an appealing idea to me, but I know not everyone shares this intuition
[laughter…] 05:50 Question “So Peter, I’d be interested to know if you have thoughts on whether you think that’s an implication of classical utilitarianism” – Peter Singer – Do I accept that implication? Well – David and I talked about this a little bit earlier over lunch – I sort of, I guess, maybe I accept it, but I have difficulty in grasping what it is to talk about converting matter and energy into bliss – unless we assume that there are going to be conscious minds that are going to be experiencing this bliss. And of course David would then very strongly agree that conscious minds not only have to experience bliss but also not experience any suffering, certainly, and presumably minimize anything that they experience other than bliss (because that’s not converting matter and energy into bliss) – so if what I’m being asked to imagine is a universe with a vast number of conscious minds that are experiencing bliss – yeah, maybe I do accept that implication.

06:43 Question “So this is a question mostly for Justin – effective altruists often talk about doing the ‘most good’ – should EAs be committed to doing ‘enough good’ instead of the ‘most good’?”

Justin Oakley – Yeah, that’s a good question to ask – one of the things I didn’t emphasize in my talk on virtue ethics is that, standardly, virtue ethics holds that we should strive to be an excellent human being, which can fall a little way short of producing the maximum good. So if you produce an excellent level of liberality, or perhaps of good or benefit to someone else, then that’s enough for virtue. I guess in some of the examples I was giving in my talk you might choose a partner where – although you’re not the ultimate power couple (you are the sub-optimal power couple) – you are nonetheless attracted to that other person very strongly. From the perspective of effective altruism it might sound like you are doing the wrong thing – but intuitively it doesn’t seem to be wrong. That’s one example.

07:51 Hilary Greaves – Sure, I mean – I can say a bit about what a similar issue looks like from a more consequentialist perspective – when people think of consequentialism they sometimes assume that consequentialists think that there is a moral imperative to maximize: you absolutely have to do the most good and anything less than that is wrong. But it’s worth emphasising that not all consequentialists think that at all – not all consequentialists think that it’s even helpful to buy into this language of right and wrong. So you don’t have to be a virtue ethicist to feel somewhat alienated from a moral imperative to do the absolute most good – you could just think something like the following: never mind right and wrong, never mind what I should do vs what I’m not allowed to do. I might just want to make the world better – I might just think I could order all the things I could possibly do in terms of better or worse. And then, you know, if I give away 1% of my income, that’s better than giving away nothing – if I give 5% that’s better than giving away 1% – if I give away 50% that’s better than anything less – but I don’t have to impose some sharp cutoff and say that I’m doing something morally wrong if I give less than that. I think if we think in this way then we tend to alienate both ourselves and other people less – there’s something very alienating about holding up a very high standard and saying that anybody, including ourselves, who falls short of this very high standard is doing something wrong with a capital ‘R’.


09:14 Peter Singer – So, in a way I agree – you are talking about a spectrum view (where we have a spectrum from black to white – or maybe we don’t want to use those terms for it) from one end to the other, and you’re somewhere on the spectrum and you try and work your way further up the spectrum perhaps – and I’m reasonably comfortable with that. Another way of looking at it (and this goes back to something that Sidgwick also said) is that we ought to be clearer about distinguishing when we regard the act as the right act or the wrong act and when we regard it as appropriate to praise or blame people for doing it. And these are really separate things – especially if you are a consequentialist – because praising or blaming someone is an act, and you ought to only do that if it will have good consequences. So suppose that we think that somebody in particular personal circumstances ought to be giving 50% of his earnings away – but he is only giving 10% – but he is living in a society like ours in which, by giving 10%, he is giving more than 99.99% of people. Well, to blame him, saying ‘oh well, you’re only giving 10% – you should be giving more’, looks like it’s going to be very counterproductive – you really want to praise him in front of other people so that more people will give 10%. So I think if we understand it that way, that’s another way of looking at it – I’m not sure if it’s necessarily better than the spectrum view that you [Hilary] were suggesting – but it is another way of, if you like, softening this black and white morality idea that it is either right or wrong.

10:54 Question “A question for Hilary – You mention that one might find the ‘uncertainty’ that your talk generates kind of paralyzing – but you mention that wasn’t your conclusion – can you expand on why this (paralysis) is not your conclusion?”
Hilary Greaves

 

  • Transcribed by Adam Ford

Biographies

Hilary Greaves is an Associate Professor in Philosophy at Somerville College in the University of Oxford. Her current research focusses on various issues in ethics. Hilary’s interests include: foundational issues in consequentialism (‘global’ and ‘two-level’ forms of consequentialism), the debate between consequentialists and contractualists, aggregation (utilitarianism, prioritarianism and egalitarianism), moral psychology and selective debunking arguments, population ethics, the interface between ethics and economics, the analogies between ethics and epistemology, and formal epistemology. Hilary currently (2014-17) directs the project Population Ethics: Theory and Practice, based at the Future of Humanity Institute, and funded by The Leverhulme Trust.

Peter Singer is an Australian moral philosopher. He is currently the Ira W. DeCamp Professor of Bioethics at Princeton University, and a Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne. He specializes in applied ethics and approaches ethical issues from a secular, utilitarian perspective. He is known in particular for his book, Animal Liberation (1975), a canonical text in animal rights/liberation theory. For most of his career, he supported preference utilitarianism, but in his later years became a classical or hedonistic utilitarian, when co-authoring The Point of View of the Universe with Katarzyna de Lazari-Radek.

Justin Oakley is an Associate Professor at Monash University – the School of Philosophical, Historical & International Studies, and Centre for Human Bioethics. Justin has been part of the revival of the ethical doctrine known as virtue ethics, an Aristotelian doctrine which has received renewed interest in the past few decades. Oakley is particularly well known for his work on professional ethics and also the so-called ‘problem’ of friendship. The problem of friendship looks at how a strict application of impartialist ethical doctrines, such as utilitarianism and Kantianism, conflicts with our notions of friendship or ‘true friendship’.

David Pearce is a British philosopher who promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.

Be Greedy For The Most Good You Can Do – Kerry Vaughan – EA Global Melbourne 2015

Filmed at EA Global Melbourne 2015. Slides of the talk are here.
Kerry Vaughan discusses:
What is effective altruism? What is its history? What isn’t EA? And how to succeed at being an effective altruist.
Approaches to doing good include:
– Being Skeptical – using the case study of PlayPumps in Africa – hoping to utilize the renewable energy of children playing – on the surface it looked like a good idea, but unfortunately it didn’t work – so be skeptical
– Changing your Mind – you can score social points in the EA movement by changing your mind – so yay! Moving beyond entrenched beliefs to better ways of thinking leads to better decision making – do change your mind, and update your beliefs when there is evidence to support doing so
– Do it! – when you find out better approaches to being altruistic, actually follow up and do it – without getting too involved in theorizing about whether you have a moral obligation to solve the problem, just go solve it
– 3 strands to the history of EA – Peter Singer’s work, Holden Karnofsky and Elie Hassenfeld at GiveWell, and the rationalist movement (inc. CFAR)
Kerry then discusses the growth of the EA movement.
Approaches to EA based on evidence (empiricism) and also on strong philosophical arguments (especially in the absence of evidence – for instance with existential risks or far-future scenarios)
How to succeed at EA Global: get help, and make radical life change.


Many thanks for watching!
Support SciFuture via Patreon
Please Subscribe to the SciFuture Channel
Science, Technology & the Future website

Nietzsche, the Overhuman, and Transhumanism – Stefan Lorenz Sorgner

Did Nietzsche have something like Transhumanism in mind when he wrote about the Übermensch?


Abstract

Bostrom rejects Nietzsche as an ancestor of the transhumanist movement, as he claims that there were merely some “surface-level similarities with the Nietzschean vision” (Bostrom 2005a, 4). In contrast to Bostrom, I think that significant similarities between the posthuman and the overhuman can be found on a fundamental level. In addition, it seems to me that Nietzsche explained the relevance of the overhuman by referring to a dimension which seems to be lacking in transhumanism. In order to explain my position, I will progress as follows. First, I will compare the concept of the posthuman to that of Nietzsche’s overhuman, focusing more on their similarities than their differences. Second, I will contextualise the overhuman in Nietzsche’s general vision, so that I can point out which dimension seems to me to be lacking in transhumanist thought.

Introduction

When I first became familiar with the transhumanist movement, I immediately thought that there were many fundamental similarities between transhumanism and Nietzsche’s philosophy, especially concerning the concept of the posthuman and that of Nietzsche’s overhuman. This is what I wish to show in this article. I am employing the term “overhuman” instead of “overman,” because in German the term Übermensch can apply to both sexes, which the notion overhuman can, but overman cannot. I discovered, however, that Bostrom, a leading transhumanist, rejects Nietzsche as an ancestor of the transhumanist movement, as he claims that there are merely some “surface-level similarities with the Nietzschean vision” (Bostrom 2005a, 4).

In contrast to Bostrom, I think that significant similarities between the posthuman and the overhuman can be found on a fundamental level. Habermas agrees with me in that respect, as he has already referred to the similarities in these two ways of thinking. However, he seems to regard both of them as absurd. At least, he refers to transhumanists as a bunch of mad intellectuals who luckily have not managed to establish support for their elitist views from a bigger group of supporters (Habermas 2001, 43).1

In addition, it seems to me that Nietzsche explained the relevance of the overhuman by referring to a dimension which seems to be lacking in transhumanism. In order to explain my position, I will progress as follows. First, I will compare the concept of the posthuman to that of Nietzsche’s overhuman, focusing more on their similarities than on their differences. Second, I will contextualise the overhuman in Nietzsche’s general vision, so that I can point out which dimension seems to me to be lacking in transhumanist thought.
Nietzsche, the Overhuman, and Transhumanism – Journal of Evolution and Technology

Bio: Dr. Stefan Lorenz Sorgner is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET) and teaches philosophy at the University of Erfurt. He studied philosophy at King’s College/University of London (BA), the University of Durham (MA by thesis; examiners: David E. Cooper, Durham; David Owen, Southampton), the University of Giessen and the University of Jena (Dr. phil.; examiners: Wolfgang Welsch, Jena; Gianni Vattimo, Turin). In recent years, he has taught at the Universities of Jena (Germany), Erfurt (Germany), Klagenfurt (Austria) and Erlangen-Nürnberg (Germany). His main fields of research are Nietzsche, the philosophy of music, bioethics and meta-, post- and transhumanism.

 


Also see David Pearce’s critique on whether Nietzsche was a transhumanist.
Various articles on transhumanism and Nietzsche at IEET.

Science, Technology & the Future

Meta: Overman / Übermensch, Will to Power & Transhumanism, The Last Man

Was Friedrich Nietzsche a Transhumanist? A critique by David Pearce

Bioconservatives often quote a line from Nietzsche: “That which does not crush me makes me stronger.” But alas pain often does crush people: physically, emotionally, morally. Chronic, uncontrolled pain tends to make the victim tired, depressed and weaker. True, some people are relatively resistant to physical distress. For example, high testosterone function may make someone “tougher”, more “manly”, more resilient, and more able to deal with physically painful stimuli. But such strength doesn’t necessarily make the subject more empathetic or a better person. Indeed, if I may quote W. Somerset Maugham, “It is not true that suffering ennobles the character; happiness does that sometimes, but suffering, for the most part, makes men petty and vindictive.”

To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities – I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not – that one endures. – Friedrich Nietzsche, The Will to Power, p. 481
You want, if possible – and there is no more insane “if possible” – to abolish suffering. And we? It really seems that we would rather have it higher and worse than ever. Well-being as you understand it – that is no goal, that seems to us an end, a state that soon makes man ridiculous and contemptible – that makes his destruction desirable. The discipline of suffering, of great suffering – do you not know that only this discipline has created all enhancements of man so far? – Friedrich Nietzsche, Beyond Good and Evil, p. 225
I do not point to the evil and pain of existence with the finger of reproach, but rather entertain the hope that life may one day become more evil and more full of suffering than it has ever been. – Friedrich Nietzsche (1844-1900)

Of course, suffering doesn’t always enfeeble and embitter. By analogy, someone who is emotionally depressed may feel that despair is the only appropriate response to the horrors of the world. But the solution to the horrors of the world is not for us all to become depressed. Rather it’s to tackle the biology of depression. Likewise, the solution to the horrors of physical pain is not to flagellate ourselves in sympathy with the afflicted. Instead it’s to tackle the biological roots of suffering.

See also the article at IEET

io9 article on eliminating suffering

Subscribe to this Channel

Automating Science: Panel – Stephen Ames, John Wilkins, Greg Restall, Kevin Korb

A discussion among philosophers, mathematicians and AI experts on whether science can be automated, what it means to automate science, and the implications of automating science – including discussion on the technological singularity.

– implementing science in a computer – Bayesian methods – the most promising normative standard for doing inductive inference
– vehicle: causal Bayesian networks – probability distributions over random variables showing causal relationships
– probabilifying relationships – tests whose evidence can raise the probability of a hypothesis (see the sketch below)
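As a concrete illustration of the ‘probabilifying’ idea in the notes above, here is a minimal, self-contained sketch (my own illustration, not code from the panel) of a tiny causal Bayesian network H -> T1, T2, queried by enumeration. All probabilities are made-up assumptions; the point is just that conditioning on positive evidence raises the probability of the hypothesis.

```python
from itertools import product

# Toy causal Bayesian network: hypothesis H causally influences two tests T1, T2.
# All numbers are illustrative assumptions.

P_H = 0.1                              # prior P(H = true)
P_T_GIVEN_H = {True: 0.8, False: 0.2}  # P(T_i = positive | H)

def joint(h, t1, t2):
    """Joint probability of one full assignment, factored along the network."""
    p = P_H if h else 1 - P_H
    for t in (t1, t2):
        p *= P_T_GIVEN_H[h] if t else 1 - P_T_GIVEN_H[h]
    return p

def prob_h_given(evidence):
    """P(H = true | evidence), by summing the joint over consistent assignments."""
    num = den = 0.0
    for h, t1, t2 in product([True, False], repeat=3):
        world = {"T1": t1, "T2": t2}
        if any(world[var] != val for var, val in evidence.items()):
            continue
        p = joint(h, t1, t2)
        den += p
        if h:
            num += p
    return num / den

print(prob_h_given({}))                        # prior:              0.10
print(prob_h_given({"T1": True}))              # one positive test:  ~0.31
print(prob_h_given({"T1": True, "T2": True}))  # two positive tests: 0.64
```

Each additional piece of positive evidence pushes the posterior further above the prior, which is the sense in which evidence ‘probabilifies’ a causal hypothesis.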

05:23 Does Bayesianism misrepresent the majority of what people do in science?

07:05 How to automate the generation of new hypotheses?
– Is there a clean dividing line between discovery and justification? (Popper’s view on the difference between the context of discovery and the context of justification.) Sure, we can discuss the difference between the concepts – but what is the difference in implementation?

08:42 Automation of Science from beginning to end: concept formation, discovery of hypotheses, developing experiments, testing hypotheses, making inferences … hypothesis testing has been done – though concept formation is an interestingly difficult problem

09:38 – Does everyone on the panel agree that automation of science is possible? Stephen Ames: not yet, but the goal is imminent; until it’s done it’s an open question – Kevin/John: logically possible, the question is will we do it – Greg Restall: Don’t know, can there be one formal system that can generate anything classed as science? A degree of open-endedness may be required, the system will need to represent itself etc (Gödel != mysticism, automation != representing something in a formal deductive theory)

13:04 There is a Gödel theorem that applies to a formal representation for automating science – that means that the formal representation can’t do everything – therefore what’s the scope of a formal system that can automate science? What will the formal representation and automated science implementation look like?

14:20 Going beyond formal representations to automate science (John Searle objects to AI on the basis of formal representations not being universal problem solvers)

15:45 Abductive inference (inference to the best explanation) – & Popper’s pessimism about a logic of discovery has no foundation – where does it come from? Calling it logic (if logic means deduction) is misleading perhaps – abduction is not deductive, but it can be formalised.

17:10 Some classification systems fall out of neural networks or clustering programs – Google’s concept of a cat is not deductive (AFAIK)

19:29 Map & territory – Turing Test – ‘if you can’t tell the difference between the model and the real system – then in practice there is no difference’ – the behavioural test is probably a pretty good one for intelligence

22:03 Discussion on IBM Watson on Jeopardy – a lot of natural language processing but not natural language generation

24:09 Bayesianism – in mathematics and in humans reasoning probabilistically – it introduced the concept of not seeing everything in black and white. People often get statistical problems wrong when they are asked to answer intuitively. Is the technology likely to have a broad impact?

26:26 Human thinking, subjective statistical reasoning – and the mismatch between the public communicative act often sounding like Boolean logic – a mismatch between our internal representation and the tools we have for externally representing likelihoods
29:08 Low-hanging fruit in human communication of probabilistic reasoning – Bayesian nets and argument maps (Bayesian nets give strengths between premises and conclusions)

29:41 Human inquiry, wondering and asking questions – how do we automate asking questions (as distinct from making statements)? Scientific abduction is connected to asking questions – there is no reason why asking questions can’t be automated – there are contrastive explanations and conceptual space theory where you can characterise a question – causal explanation using causal Bayesian networks (and when proposing an explanation it must be supported by some explanatory context)

32:29 Automating Philosophy – if you can automate science you can automate philosophy –

34:02 Stanford Computational Metaphysics project (colleagues of Greg Restall) – formalization of representations of relationships between concepts – going back to Leibniz – complex notions can be boiled down to simpler primitive notions, and by grinding out these primitive notions computationally they are making genuine discoveries
Weak reading: can some philosophy be automated? – yes
Strong reading: can all of philosophy be automated? – there seem to be some things that count as philosophy that don’t look like they will be automated in the next 10 years

35:41 If what we’re interested in is representing and automating the production of reasoning formally (not only evaluating it), then as long as the domain is such that we are making claims and we are interested in the inferential connections between the claims, a lot of the properties of reasoning are subject-matter agnostic.

36:46 (Rohan McLeod) Regarding creationism, is it better to think of it as a poor hypothesis or as non-science? – not an exclusive disjunction; it can start as a poor hypothesis and later become not-science or science – it depends on the stage at the time – science rules things out of contention – and at some point creationism had not been ruled out

38:16 (Rohan McLeod) Is economics a science or does it have the potential to be (or is it intrinsically not possible for it to be a science) and why?
Are there value judgements in science? And if there are, how do you falsify a hypothesis that conveys a value judgement? Physicists make value judgements on hypotheses – “h1 is good, h2 is bad” – economics may have reducible normative components but physics doesn’t (electrons aren’t the kinds of things that economies are) – Michael ??? paper on value judgements – “there is no such thing as a factual judgement that does not involve value” – while there are normative components to economics, it is studied from at least one remove – the problem is economists try to make normative judgements like “a good economy/market/corporation will do X”

42:22 Problems with economics – it’s incredibly complex and hard to model, and without a model there exists a vacuum that gets filled with ideology – (are ideologies normative?)

42:56 One of the problems with economics is it gets treated like a natural system (in physics or chemistry) which hides all the values which are getting smuggled in – commitments and values which are operative and contribute to the configuration of the system – a contention is whether economics should be a science (Kevin: Yes, Stephen: No) – perhaps economics could be called a nascent science (in the process of being born)

44:28 (James Fodor) Well known scientists have thought that their theories were implicit in nature before they found them – what’s the role of intuition in automating science & philosophy? – need intuitions to drive things forward – intuition in the abduction area – to drive inspiration for generating hypotheses – though a lot of what gets called intuition is really the unconscious processing of a trained mind (an experienced driver doesn’t have to process how to drive a car) – Louis Pasteur’s prepared mind – trained prior probabilities

46:55 The Singularity – disagreement? John Wilkins suspects it’s not physically possible – Where does Moore’s Law (or its equivalents in other hardware paradigms) peter out? The software problem could be solved near or far. Kevin agrees with I.J. Good – recursively improving abilities without (obvious) end (within thermodynamic limits). Kevin Korb explains the intelligence explosion.

50:31 Stephen Ames discusses his view of the singularity – but disagrees with uploading on the grounds of needing to commit to philosophical naturalism

51:52 Greg Restall mistrusts IT corporations to get uploading right – Kevin expresses concerns about using Star Trek transporters – the lack of physical continuity. Greg discusses theories of intelligence – planes fly as do birds, but planes are not birds – they are different

54:07 John Wilkins – way too much emphasis is put on propositional knowledge and communication in describing intelligence – each human has roughly the same amount of processing power – too much rests on academic pretense and conceit.

54:57 The Harvard Rule – under conditions of consistent lighting, feeding etc, the organism will do as it damn well pleases. But biology will defeat simple models. Also Hull’s rule – no matter what the law in biology is, there is an exception (including Hull’s law) – so simulated biology may be difficult. We won’t simulate an entire organism – we can’t simulate a cell. Kevin objects

58:30 Greg R. says simulations and models do give us useful information – even if we isolate certain properties in simulation that are not isolated in the real world – John Wilkins suggests that there will be a point where it works until it doesn’t

1:00:08 One of the biggest differences between humans and mice is 40 million years of evolution in both directions – the problem in evolutionary biology is your inductive projectability – we’ve observed it in these cases, therefore we expect it in this one – it fades out relatively rapidly in direct disproportion to the degree of relatedness

1:01:35 Colin Kline – PSYCHE – and other AI programs making discoveries – David Chalmers has proposed the Hard Problem of Consciousness – p-zombies – but we are all p-zombies, so we will develop systems that are conscious because there is no such thing as consciousness. Kevin is with Dennett – information-processing function is what consciousness supervenes upon
Greg – concept formation in systems like PSYCHE – but this milestone might be very early in the development of what we think of as agency – if the machine is worried about being turned off or complains about getting bored, then we are onto something

Bayeswatch – The Pitfalls of Bayesian Reasoning – Chris Guest

Bayesian inference is a useful tool in solving challenging problems in many fields of uncertainty. However, inferential arguments presented with a Bayesian formalism should be subject to the same critical scrutiny that we give to informal arguments. After an introduction to Bayes’ theorem, some examples of its misuse in history and theology will be discussed.
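As a concrete starting point, here is a minimal worked example of Bayes’ theorem (my own sketch, not material from the talk). The numbers are illustrative assumptions; the loop hints at the kind of pitfall the talk addresses – a formally valid update whose conclusion is driven almost entirely by a contestable prior.

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
# Illustrative numbers only; the same evidence with different priors
# yields very different posteriors.

def bayes(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H) and P(E | not-H)."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# Same evidence (P(E|H) = 0.9, P(E|not-H) = 0.1), two very different priors:
for prior in (0.5, 0.001):
    print(f"prior = {prior:<5}  posterior = {bayes(prior, 0.9, 0.1):.3f}")
# prior = 0.5    posterior = 0.900
# prior = 0.001  posterior = 0.009
```

The formalism is only as good as the prior and likelihoods fed into it, which is why Bayesian arguments deserve the same critical scrutiny as informal ones.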

Chris is a software developer with an academic background in Philosophy, Mathematics and Machine Learning. He is also President of the Australian Skeptics Victorian Branch. Chris is interested in applying critical reasoning to boundary problems in skepticism and is involved in consumer complaints and skeptical advocacy.

 

This talk was held at the Philosophy of Science Conference in Melbourne, 2014.

Video can be found here.

The Revolutions of Scientific Structure – Colin Hales

“The Revolutions of Scientific Structure” reveals an empirically measured discovery, by science, about the natural world that is the human scientist. The book’s analysis places science at the cusp of a major developmental transformation caused by science targeting the impossible: the science of consciousness, which was started in the late 1980s by a science practice that cannot, in principle, ever succeed. This impossible science must fail, not because it is malformed, but because it cannot deliver to engineers what is needed to build artificial consciousness.

The book formally reveals how fully expressed scientific behaviour actually has two faces, like the Roman god Janus. Currently we only use one face, the ‘Appearance-Aspect’ and it is measured and properly documented by the book for the first time. Where some scientists accidentally use the other, the two faces are shown to be confused as one. There are actually two fundamental kinds of ‘laws of nature’ that jointly account for the one underlying natural world. The recognition and addition of the second kind, the ‘Structure-Aspect’, is the book’s proposed transformation of science.

The upgraded framework is called ‘Dual Aspect Science’ and is posited as the adult form of science that had to wait for computers before it could emerge, a fully formed butterfly, from its millennial larval form that is single (appearance)-aspect science. Only ‘Structure-Aspect’ computation can scientifically reveal the principles underlying the nature of consciousness — in the form of the consciousness that is/underlies scientific observation. While this outcome ultimately affects all scientists, initially only neuroscience and physics have, together, the responsibility for the empirical work needed for the introduction of Dual-Aspect science. This is not philosophy. This is empirical science.

More information on this title can be found at: http://www.worldscientific.com/worldscibooks/10.1142/9211#t=aboutBook .

Document of presentation available here: