What is the relationship between anti-aging and the reduction of suffering? What are some common objections to the ideas of solving aging? How does Anti-Aging stack up against other cause areas (like climate change, or curing specific diseases)? How can we better convince people of the virtues of undoing the diseases of old age?
Keith Comito, interviewed by Adam Ford at the Undoing Aging 2019 conference in Berlin, discusses why solving the diseases of old age is a powerful cause. (Note: the video of this interview will be available soon.) He is a computer programmer and mathematician whose work brings together a variety of disciplines to provoke thought and promote social change. He has created video games, bioinformatics programs, musical applications, and biotechnology projects featured in Forbes and NPR.
In addition to developing high-profile mobile applications such as HBO Now and MLB AtBat, he explores the intersection of technology and biology at the Brooklyn community lab Genspace, where he helped to create games which allow players to direct the motion of microscopic organisms.
Seeing age-related disease as one of the most profound problems facing humanity, he now works to accelerate and democratize longevity research efforts through initiatives such as Lifespan.io.
He earned a B.S. in Mathematics, a B.S. in Computer Science, and an M.S. in Applied Mathematics at Hofstra University, where his work included analysis of the LMNA protein.
Aging is a technical problem with a technical solution – finding the solution requires clear thinking and focused effort. Once solving aging becomes demonstrably feasible, it is likely that attitudes will shift regarding its desirability. There is huge potential, for individuals and for society, in reducing suffering through the use of rejuvenation therapy to achieve new heights of physical well-being. I also discuss the looming economic implications of large percentages of illness among aging populations – and put forward that focusing on solving the fundamental problems of aging will reduce the incidence of debilitating diseases of aging – which will in turn reduce the economic burden of illness. This mini-documentary discusses the implications of actually solving aging, as well as some misconceptions about aging.
The above video is the latest version with a few updates & kinks ironed out.
‘The End of Aging’ was Adam Ford’s submission for the Longevity Film Competition – all the contestants did a great job. Big thanks to the organisers of the competition; it inspires people to produce videos that help spread awareness and understanding of the importance of ending aging.
It’s important to see that health in old age is desirable at population levels. Rejuvenation medicine – repairing the body’s ability to cope with stressors (or the practical reversal of the aging process) – will end up being cheaper than traditional medicine based on the general indefinite postponement of ill-health, especially in the long run as rejuvenation therapy becomes more efficient.
According to the World Health Organisation:
- Between 2015 and 2050, the proportion of the world’s population over 60 years will nearly double from 12% to 22%.
- By 2020, the number of people aged 60 years and older will outnumber children younger than 5 years.
- In 2050, 80% of older people will be living in low- and middle-income countries.
- The pace of population ageing is much faster than in the past.
- All countries face major challenges to ensure that their health and social systems are ready to make the most of this demographic shift.
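As a back-of-the-envelope illustration of those WHO figures, the sketch below estimates what the 12% → 22% shift means in absolute numbers. Note the total world-population figures used here are illustrative assumptions (roughly in line with UN medium-variant projections), not part of the WHO statement:

```python
# Rough illustration of the WHO projection that the share of people
# over 60 will grow from 12% (2015) to 22% (2050).
# Total-population figures below are illustrative assumptions, not WHO data.

world_pop_2015 = 7.3e9   # assumed world population in 2015
world_pop_2050 = 9.7e9   # assumed world population in 2050

over60_2015 = 0.12 * world_pop_2015  # WHO: ~12% over 60 in 2015
over60_2050 = 0.22 * world_pop_2050  # WHO: ~22% over 60 by 2050

growth_factor = over60_2050 / over60_2015
print(f"Over-60 population: {over60_2015 / 1e9:.2f}B -> {over60_2050 / 1e9:.2f}B")
print(f"Growth factor: {growth_factor:.1f}x")
```

Under these assumptions the absolute number of people over 60 grows by well over a factor of two, since the total population is also growing while the over-60 share nearly doubles.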
Happy Longevity Day 2018! 😀
* The Longevity Film Competition is an initiative by the Healthy Life Extension Society, the SENS Research Foundation, and the International Longevity Alliance. The promoters of the competition invited filmmakers everywhere to produce short films advocating for healthy life extension, with a focus on dispelling four common misconceptions and concerns around the concept of life extension: the false dichotomy between aging and age-related diseases, the Tithonus error, the appeal to nature fallacy, and the fear of inequality of access to rejuvenation biotechnologies.
Michio Kaku on Nanotechnology – Michio is the author of many best sellers, recently the Future of the Mind!
The Holy Grail of Nanotechnology
Merging with machines is on the horizon and Nanotechnology will be key to achieving this. The ‘Holy Grail of Nanotechnology’ is the replicator: A microscopic robot that rearranges molecules into desired structures. At the moment, molecular assemblers exist in nature in us, as cells and ribosomes.
Sticky Fingers problem
How might nanorobots/replicators look and behave?
Because of the ‘Sticky Fingers/Fat Fingers problem’, in the short term we won’t have nanobots with agile clippers or blowtorches (like what we might see in a sci-fi movie).
The 4th Wave of High Technology
Humanity has seen an acceleration in history of technological progress from the steam engine and industrial revolution to the electrical age, the space program and high technology – what is the 4th wave that will dominate the rest of the 21st century?
Nanotechnology (molecular physics), Biotechnology, and Artificial Intelligence (reducing the circuitry of the brain down to neurons) – “these three molecular technologies will propel us into the future”!
Michio Kaku – Bio
Michio Kaku (born January 24, 1947) is an American theoretical physicist, the Henry Semat Professor of Theoretical Physics at the City College of New York, a futurist, and a communicator and popularizer of science. He has written several books about physics and related topics, has made frequent appearances on radio, television, and film, and writes extensive online blogs and articles. He has written three New York Times Best Sellers: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014).
Kaku is the author of various popular science books:
– Beyond Einstein: The Cosmic Quest for the Theory of the Universe (with Jennifer Thompson) (1987)
– Hyperspace: A Scientific Odyssey through Parallel Universes, Time Warps, and the Tenth Dimension (1994)
– Visions: How Science Will Revolutionize the 21st Century (1998)
– Einstein’s Cosmos: How Albert Einstein’s Vision Transformed Our Understanding of Space and Time (2004)
– Parallel Worlds: A Journey through Creation, Higher Dimensions, and the Future of the Cosmos (2004)
– Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel (2008)
– Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)
– The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (2014)
The topic of Moral Enhancement is controversial (and often misrepresented); it is considered by many to be repugnant. Provocative questions arise, like: “Whose morals?”, “Who are the ones to be morally enhanced?”, “Will it be compulsory?”, “Won’t taking a morality pill decrease the value of the intended morality if it skips the difficult process we normally go through to become better people?”, “Shouldn’t people be concerned that the use of enhancements which alter character traits might undermine the consumer’s authenticity?”
“…technology that could be potentially very dangerous. So we might need to update our moral systems – and that is the interesting question of moral enhancement: can we make ourselves more fit for the current world?” – Anders Sandberg, ‘Are we morally equipped for the future?’
Let’s build some context. For millennia humans have lived in complex social structures constraining and encouraging certain types of behaviour. More recently for similar reasons people go through years of education at the end of which (for the most part) are more able to morally function in the modern world – though this world is very different from that of our ancestors, and when considering the possibilities for vastly radical change at breakneck speed in the future, it’s hard to know how humans will keep up both intellectually and ethically. This is important to consider as the degree to which we shape the future for the good depends both on how well and how ethically we solve the problems needed to achieve change that on balance (all things equal) benefits humanity (and arguably all morally relevant life-forms).
Can we engineer ourselves to be more ethically fit for the future?
Peter Singer discussed how our circles of care and compassion have expanded over the years – through reason we have been able to expand our natural propensity to act morally and the circumstances in which we act morally.
We may need to expand our circle of ethical consideration to include artificial life – considering certain types of software as moral patients.
So, if we think we could use a boost in our propensity for ethical progress, how do we actually achieve ideal Moral Enhancement?
That’s a big topic (see a list of papers on the subject of ME here) – the answers may depend on what our goals and preferences are. One idea (among many others) is to regulate the level of Oxytocin (the ‘cuddle hormone’) – though this may come with the drawback of increasing distrust of the out-group.
Since morality depends on us being able to make accurate predictions and solve complex ethical problems, ‘Intelligence Enhancement‘ could be an effective aspect of moral enhancement.
How we decide whether to use Moral Enhancement Therapy will be interesting – it may be needed to help solve global coordination problems; to increase the likelihood that we will, as a civilization, cooperate and cope with many known and as yet to be realised complex ethical quandaries as we move through times of unprecedented social and technological change.
This interview is part of a larger series that was completed in Oxford, UK late 2012.
So humans have a kind of built-in capacity for learning moral systems from their parents and other people – we’re not born with any particular moral [code], but with the ability to learn one, just like we can learn languages. The problem is, of course, this built-in facility might have worked quite well back in the Stone Age when we were evolving in small tribal communities – but it doesn’t work that well when surrounded by a high-tech civilization, millions of other people, and technology that could be potentially very dangerous. So we might need to update our moral systems. And that is the interesting question of moral enhancement:
- can we make ourselves more fit for the current world?
- And what kind of fitness should we be talking about?
For example, we might want to improve on altruism – that we should be kind to strangers. But in a big society, in a big town, of course there are going to be some strangers that you shouldn’t trust. So it’s not just blind trust you want to enhance – you actually want to enhance the ability to make careful judgements; to figure out what’s going to happen and whom you can trust. So maybe you want to enhance some other aspect – maybe the care – the circle of care – is what you want to expand.
Peter Singer pointed out that our circles of care and compassion have been slowly expanding from our own tribe and our own gender, to other genders, to other peoples, and eventually maybe to other species. But this is still biologically based – a lot of it is going on here in the brain and might be modified. Maybe we should artificially extend these circles of care to make sure that we actually do care about those entities we ought to be caring about. This might be a problem, of course, because some of these agents might be extremely different from what we’re used to.
For example, machine intelligence might produce machines or software that are ‘moral patients’ – we actually ought to be caring about the suffering of software. That might be very tricky, because our pattern receptors up in the brain are not very tuned for that – we tend to think that if it’s got a face and it speaks, then it’s human and then we can care about it. But who thinks about Google? Maybe we could get superintelligences that we actually ought to care a lot about, but we can’t recognize them at all because they’re so utterly different from ourselves.
So there are some easy ways of modifying how we think and react – for example by taking a drug. The hormone oxytocin is sometimes called ‘the cuddle hormone’ – it’s released when breastfeeding and when having bodily contact with your loved one, and it generally seems to make us more altruistic, more willing to trust strangers. You can kind of sniff it and run an economic game and you can immediately see a change in response. It might also make you a bit more ego-centric. It does enlarge feelings of comfort and family friendliness – except that it’s only within what you consider to be your family. So we might want to tweak that.
Similarly, we might think about adding links to our brains that allow us to think in better ways. After all, morality is dependent on us being able to predict what’s going to happen when we do something. So various forms of intelligence enhancement might be very useful also for becoming more moral. Our ability to control our reactions – to allow our higher-order values to control our lower-order values – is also important; that might actually require us to literally rewire, or have biochips that help us do it.
But most important is that we get the information we need to retrain the subtle networks in the brain in order to think better. And that’s going to require something akin to therapy – it might not necessarily be about lying on a sofa and telling your psychologist about your mother. It might very well be a bit of training, a bit of cognitive enhancement, maybe a bit of brain scanning – to figure out what actually ails you. It’s probably going to look very, very different from anything Freud or anybody else envisioned for the future.
But I think in the future we’re actually going to try to modify ourselves so we’re going to be extra certain, maybe even extra moral, so we can function in a complex big world.
Neuroenhancement of Love and Marriage: The Chemicals Between Us
Anders contributed to the paper ‘Neuroenhancement of Love and Marriage: The Chemicals Between Us‘. This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment – so-called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.
Human Engineering and Climate Change
Anders also contributed to the paper “Human Engineering and Climate Change” which argues that cognitive, moral and biological enhancement could increase human ecological sustainability.
Many thanks for watching!
Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create
– Science, Technology & the Future: http://scifuture.org
Dr Randal Koene covers the motivation for human technological augmentation and reasons to go beyond biological life extension.
“Competition is an inescapable occurrence in the animate and even in the inanimate universe. To give our minds the flexibility to transfer and to operate in different substrates bestows upon our species the most important competitive advantage.” I am a neuroscientist and neuroengineer who is currently the Science Director at Foundation 2045, and the Lead Scientist at Kernel, and I head the organization carboncopies.org, which is the outreach and roadmapping organization for the development of substrate-independent minds (SIM). I also previously participated in the ambitious and fascinating efforts of the nanotechnology startup Halcyon Molecular in Silicon Valley.
Points discussed in the talk:
1. Biological Life-Extension is Not Enough Randal A. Koene Carboncopies.org
3. No one wants to live longer just to live longer. Motivation informs Method.
4. Having an Objective, a Goal, requires that you have some notion of success.
5. Creating (intelligent) machines that have the capabilities we do not — is not as good as being able to experience them ourselves… Imagine… creating/playing music. Imagine… being the kayak. Imagine… perceiving the background radiation of the universe.
6. Is being out of the loop really your goal?
7. Near-term goals: Extended lives without expanded minds are in conflict with creative development.
9. Gene survival is extremely dependent on an environment — it is unlikely to survive many changes. Worse… gene replication does not sustain that which we care most about!
10. Is CTGGAGTAC better than GTTGACTGAC? We are vessels for that game — but for the last 10,000 years something has been happening!
11. Certain future experiences are desirable, others are not — these are your perspectives, the memes you champion… Death keeps stealing our champions, our experts.
12. Too early to do uploading? – No! The big perspective is relevant now. We don’t like myopic thinking in our politicians; let’s not be myopic about world issues ourselves.
14. Life-extension in biology may increase the fragility of our species & civilization… More people? – Resources. Less births? – Fewer novel perspectives. Expansion? – Environmental limitation.
15. Biological life-extension within the same evolutionary niche = further specialization to the same performance; “over-training” in conflict with generalization.
16. Aubrey de Grey: Ultimately, desires “uploading”
18. Significant biological life-extension is incredibly difficult and beset by threats. Reality vs. popular perception.
19. Life-extension and Substrate-Independence are two different objectives
20. Developing out of a “catchment area” (S. Gildert) may demand iterations of exploration — and exploration involves risk. Hard-wired delusions and drives. What would an AGI do? Which types of AGI would exist in the long run?
21. “Uploading” is just one step of many — but a necessary step — for a truly advanced species
22. Thank You
There is a short promo-interview for the Singularity Summit AU 2012 conference that Adam Ford did with Dr. Koene, though unfortunately the connection was a bit unreliable, which is noticeable in the video:
Most of those videos are available through the SciFuture YouTube channel: http://www.youtube.com/user/TheRationalFuture
The Quantified Self movement is usefully encouraging loads of people to record metrics about their personal health – the resulting big data will be useful to mine in doing research on how food and lifestyle impact health.
There are also some ethical issues around privacy if personal metrics are used inappropriately.
Here is an interview with Anders Sandberg on QS:
My sociology of knowledge students read Yuval Harari’s bestselling first book, Sapiens, to think about the right frame of reference for understanding the overall trajectory of the human condition. Homo Deus follows the example of Sapiens, using contemporary events to launch into what nowadays is called ‘big history’ but has been also called ‘deep history’ and ‘long history’. Whatever you call it, the orientation sees the human condition as subject to multiple overlapping rhythms of change which generate the sorts of ‘events’ that are the stuff of history lessons. But Harari’s history is nothing like the version you half remember from school.
In school historical events were explained in terms more or less recognizable to the agents involved. In contrast, Harari reaches for accounts that scientifically update the idea of ‘perennial philosophy’. Aldous Huxley popularized this phrase in his quest to seek common patterns of thought in the great world religions which could be leveraged as a global ethic in the aftermath of the Second World War. Harari similarly leverages bits of genetics, ecology, neuroscience and cognitive science to advance a broadly evolutionary narrative. But unlike Darwin’s version, Harari’s points towards the incipient apotheosis of our species; hence, the book’s title.
This invariably means that events are treated as symptoms, if not omens, of the shape of things to come. Harari’s central thesis is that whereas in the past we cowered in the face of impersonal natural forces beyond our control, nowadays our biggest enemy is the one that faces us in the mirror, which may or may not be within our control. Thus, the sort of deity into which we are evolving is one whose superhuman powers may well result in self-destruction. Harari’s attitude towards this prospect is one of slightly awestruck bemusement.
Here Harari equivocates where his predecessors dared to distinguish. Writing with the bracing clarity afforded by the Existentialist horizons of the Cold War, cybernetics founder Norbert Wiener declared that humanity’s survival depends on knowing whether what we don’t know is actually trying to hurt us. If so, then any apparent advance in knowledge will always be illusory. As for Harari, he does not seem to see humanity in some never-ending diabolical chess match against an implacable foe, as in The Seventh Seal. Instead he takes refuge in the so-called law of unintended consequences. So while the shape of our ignorance does indeed shift as our knowledge advances, it does so in ways that keep Harari at a comfortable distance from passing judgement on our long term prognosis.
This semi-detachment makes Homo Deus a suave but perhaps not deep read of the human condition. Consider his choice of religious precedents to illustrate that we may be approaching divinity, a thesis with which I am broadly sympathetic. Instead of the Abrahamic God, Harari tends towards the ancient Greek and Hindu deities, who enjoy both superhuman powers and all too human foibles. The implication is that to enhance the one is by no means to diminish the other. If anything, it may simply make the overall result worse than had both our intellects and our passions been weaker. Such an observation, a familiar pretext for comedy, wears well with those who are inclined to read a book like this only once.
One figure who is conspicuous by his absence from Harari’s theology is Faust, the legendary rogue Christian scholar who epitomized the version of Homo Deus at play a hundred years ago in Oswald Spengler’s The Decline of the West. What distinguishes Faustian failings from those of the Greek and Hindu deities is that Faust’s result from his being neither as clever nor as loving as he thought. The theology at work is transcendental, perhaps even Platonic.
In such a world, Harari’s ironic thesis that future humans might possess virtually perfect intellects yet also retain quite undisciplined appetites is a non-starter. If anything, Faust’s undisciplined appetites point to a fundamental intellectual deficiency that prevents him from exercising a ‘rational will’, which is the mark of a truly supreme being. Faust’s sense of his own superiority simply leads him down a path of ever more frustrated and destructive desire. Only the one true God can put him out of his misery in the end.
In contrast, if there is ‘one true God’ in Harari’s theology, it goes by the name of ‘Efficiency’ and its religion is called ‘Dataism’. Efficiency is familiar as the dimension along which technological progress is made. It amounts to discovering how to do more with less. To recall Marshall McLuhan, the ‘less’ is the ‘medium’ and the ‘more’ is the ‘message’. However, the metaphysics of efficiency matters. Are we talking about spending less money, less time and/or less energy?
It is telling that the sort of efficiency which most animates Harari’s account is the conversion of brain power to computer power. To be sure, computers can outperform humans on an increasing range of specialised tasks. Moreover, computers are getting better at integrating the operations of other technologies, each of which also typically replaces one or more human functions. The result is the so-called Internet of Things. But does this mean that the brain is on the verge of becoming redundant?
Those who say yes, most notably the ‘Singularitarians’ whose spiritual home is Silicon Valley, want to translate the brain’s software into a silicon base that will enable it to survive and expand indefinitely in a cosmic Internet of Things. Let’s suppose that such a translation becomes feasible. The energy requirements of such scaled up silicon platforms might still be prohibitive. For all its liabilities and mysteries, the brain remains the most energy efficient medium for encoding and executing intelligence. Indeed, forward facing ecologists might consider investing in a high-tech agronomy dedicated to cultivating neurons to function as organic computers – ‘Stem Cell 2.0’, if you will.
However, Harari does not see this possible future because he remains captive to Silicon Valley’s version of determinism, which prescribes a migration from carbon to silicon for anything worth preserving indefinitely. It is against this backdrop that he flirts with the idea that a computer-based ‘superintelligence’ might eventually find humans surplus to requirements in a rationally organized world. Like other Singularitarians, Harari approaches the matter in the style of a 1950s B-movie fan who sees the normative universe divided between ‘us’ (the humans) and ‘them’ (the non-humans).
The bravest face to put on this intuition is that computers will transition to superintelligence so soon – ‘exponentially’ as the faithful say — that ‘us vs. them’ becomes an operative organizing principle. More likely and messier for Harari is that this process will be dragged out. And during that time Homo sapiens will divide between those who identify with their emerging machine overlords, who are entitled to human-like rights, and those who cling to the new acceptable face of racism, a ‘carbonist’ ideology which would privilege organic life above any silicon-based translations or hybridizations. Maybe Harari will live long enough to write a sequel to Homo Deus to explain how this battle might pan out.
NOTE ON PUBLICATION: Homo Deus is published in September 2016 by Harvill Secker, an imprint of Penguin Random House. Fuller would like to thank The Literary Review for originally commissioning this review. It will appear in a subsequent edition of the magazine and is published here with permission.
Video Interview with Steve Fuller covering the Homo Deus book
Steve Fuller discusses the new book Homo Deus: how it relates to the general transhumanist philosophy and movement, factors around the success of these ideas going mainstream, Yuval Noah Harari’s writing style, why there has been a bias within academia (esp. sociology) to steer away from ideas which are less well established in history (and this is important because successfully navigating the future will require a lot of new ideas), and existential risk; and we contrast a posthuman future with a future dominated by an AI superintelligence.
Yuval Harari’s books
– ‘Homo Deus: A Brief History of Tomorrow’: https://www.amazon.com/Homo-Deus-Brief-History-Tomorrow-ebook/dp/B019CGXTP0/
– ‘Sapiens: A Brief History of Humankind’: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095/
Discussion on the Coursera course ‘A Brief History of Humankind’ (which I took a few years ago): https://www.coursetalk.com/providers/coursera/courses/a-brief-history-of-humankind
David Pearce reflects on the motivation for human enhancement to phase out our violent nature. Do we want to perpetuate the states of experience which are beholden to our violent default biological imperatives, or re-engineer ourselves?
Now, although ordinary everyday life for many of us in the world no longer involves the kind of endemic violence that was once the case (goodness knows how many deaths one will witness on screen in the course of a lifetime), one still enjoys violence and quite frequently watches men being very nasty towards each other – competing against each other.
Do we want to perpetuate these states of mind indefinitely? Or do we want to re-engineer ourselves? – David Pearce
Is the Hedonistic Imperative equivalent to wire-heading?
People are often concerned about the future being a cyberpunk dystopia where people are hard-wired into pleasure centers – smacked out like lotus-eating milksops devoid of meaningful existence. Does David Pearce’s Hedonistic Imperative entail a future where we are all in thrall to permanent experiential orgasms – intravenously hotwired into our pleasure centers via some kind of soma-like drug turning us into blissful idiots?
Adam Ford: I think some people often conflate or distill the Hedonistic Imperative to mean ‘wireheading’ – what do you (think)?
David Pearce: Yes, I mean, clearly if one does argue that we’re going to phase out the biology of suffering and live out lives of perpetual bliss, then it’s very natural to assimilate this to something like ‘wireheading’ – but for all sorts of reasons I don’t think wireheading (i.e. intracranial self-stimulation of the reward centers, and its pharmacological equivalents) is a plausible scenario for our future. Not least, there will presumably always be selection pressure against wireheading – wireheads do not want to have baby wireheads and raise wirehead children.
I think a much more credible scenario is the idea that we’re going to re-calibrate the hedonic treadmill and allow ourselves and our future children to enjoy lives based on gradients of intelligent bliss. And one of the advantages of re-calibration rather than straightforward hedonic maximization is that by urging re-calibration one isn’t telling people they ought to give up their existing preferences or values: if your hedonic set-point (i.e. your average state of wellbeing) is much higher than it is now, your quality of life will really be much higher – but it doesn’t involve any sacrifice of the values you hold most dear.
To put it rather simplistically: clearly, where one lies on the hedonic axis will impose serious cognitive biases – someone who is, let’s say, depressive or prone to low mood will have a very different set of biases from someone who is naturally cheerful. But nonetheless, so long as we aim for a motivational architecture of gradients of bliss, it doesn’t entail giving up anything you want to hold onto. I think that’s really important, because a lot of people will be worried that if, yes, we do enter into some kind of secular paradise, it will involve giving up their normal relationships, their ordinary values and what they hold most dear. Re-calibration does not entail this (unlike wireheading).
Adam Ford: That’s interesting – people think that as soon as you turn on the Hedonistic Imperative you are destined for a very narrow set of values – that could be just one peak experience being replayed over and over again, in some narrow local maximum.
David Pearce: Yes – I suppose one thinks of (kind of) crazed wirehead rats. In fairness, if one does imagine orgasmic bliss, most people don’t complain that their orgasms are too long (and I’m not convinced that there is something desperately wrong with orgasmic bliss that lasts weeks, months, years or even centuries) – but one needs to examine the wider sociological picture, and ask ‘is it really sustainable for us to become blissed out as distinct from blissful?’
Adam Ford: Right – and by blissed out you mean something like the lotus-eaters in the Odyssey?
David Pearce: Yes – clearly that is one version of paradise and bliss – call it meditative tranquility (not doing anything) – but there are other versions of bliss in which one is hyper-motivated. It seems that, crudely speaking, motivation (which is mediated by the mesolimbic dopamine system) and raw bliss (which is associated with mu-opioid activation of our twin hedonic hotspots) are orthogonal axes. Now, they're very closely interrelated (thanks to natural selection) – but in principle we can amplify one and damp down the other. Empirically, at any rate, it seems to be the case today that the happiest people are also the most motivated – they have the greatest desires. This runs counter to the old Buddhist notion that desire is suffering – but if you actually look at people who are chronically depressed, quite frequently they have an absence of desire or motivation. The point is that we should be free to choose. Yes, this control over our reward architecture, our pleasure circuitry, that biotechnology offers is potentially hugely liberating – but let's get things right. We don't want to mess things up and produce the equivalent of large numbers of people on heroin – and this is why I so strenuously urge the case for re-calibration: in the long run genetically, in the short run by various non-recreational drugs.
Adam Ford: Okay… People may be worried that re-calibrating someone is akin to disrupting the continuum of self (or this enduring metaphysical ego) – so the person at the other end wouldn't really be a continuation of the person at the beginning. What do you think? How would you respond to that sort of criticism?
David Pearce: It depends on how strict one's conception of personal identity is. Would you be worried to learn tomorrow that you had won the national lottery, for example? It would transform your lifestyle, your circle of friends – would this trigger the anxiety that the person living the existence of a multi-millionaire wasn't really you? Well, perhaps you should be worried about this – but on the whole most people would be relatively relaxed at the prospect. I would see this more as akin to a small child growing up – yes, in one sense, as one becomes a mature adult one has killed the toddler, or lost the essence of what it was to be a toddler – but only in a very benign sense. And by aiming for re-calibration and hedonic enrichment rather than maximization, there is much less of a risk of losing anything that you think is really valuable or important.
Adam Ford: Okay – well that's interesting – let's talk about value. In order not to lose forms of value – even if you don't exercise them much – you might have some values that you leave up in the attic to gather dust, like toys that you don't play with anymore but might want to pick up once in a thousand years or so. How do you then preserve complexity of value while also achieving high hedonic states – do you think they can go hand in hand? Or do you think preserving complexity of value reduces the likelihood that you will be able to achieve optimal hedonic states?
David Pearce: As an empirical matter – and I stress empirical here – it seems to be the case that the happiest people are responsive to the broadest possible range of rewarding stimuli; it tends to be depressives who get stuck in a rut. So, other things being equal, by re-calibrating ourselves – becoming happy and then super-happy – we can potentially enrich the complexity of our lives with a range of rewarding stimuli, which makes getting stuck in a rut less likely, both for the individual and for civilization as a whole.
I think one of the reasons we are afraid of some kind of loss of complexity is that the idea of heaven – including the traditional Christian heaven – can sound a bit monotonous, and for happy people at least, one of the experiences they find most unpleasant is boredom. But essentially it should be a matter of choice: someone who is very happy to, let's say, listen to a piece of music or contemplate art should be free to do so, and not forced into leading a very complex or complicated life – but equally, folk who want to do a diverse range of things – well, that's feasible too.
– video/audio interview continues on past 10:00