Posts

One Big Misconception About Consciousness – Christof Koch

Christof Koch (Allen Institute for Brain Science) discusses Shannon information and its theoretical limitations in explaining consciousness –

Information Theory misses a critical aspect of consciousness – Christof Koch

Christof argues that we don’t need observers (other people, god, etc.) in order to have conscious experiences. Traditional information theory assumes Shannon information – and a big misconception about the structure of consciousness stems from this idea: the assumption that Shannon information is enough to explain consciousness.  Shannon information is about “sending information from a channel to a receiver – consciousness isn’t about sending anything to anybody.”  So what other kind of information is there?
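For contrast, it helps to see what Shannon’s measure actually quantifies: the average surprise per symbol a source sends over a channel. A minimal sketch (the probability distributions here are illustrative, not from the interview):

```python
import math

def shannon_entropy(probs):
    """Average information (in bits) per symbol drawn from a source
    with the given probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a biased coin carries less,
# because its outcomes are more predictable to the receiver.
print(shannon_entropy([0.5, 0.5]))   # -> 1.0
print(shannon_entropy([0.9, 0.1]))   # -> ~0.469
```

Note that the definition only ever mentions a sender, a receiver, and the statistics of the messages between them – which is exactly Koch’s point about why it cannot be the whole story for consciousness.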

The ‘information’ in Integrated Information Theory (IIT) does not refer to Shannon information.  Etymologically, the word ‘information’ derives from the Latin ‘informare’ – “to give form to” – that is, to give form to a high-dimensional structure.


It’s worth noting that many disagree with Integrated Information Theory – including Scott Aaronson – see here, here and here.


See interview below:

“It’s a theory that proceeds from phenomenology to as it were mechanisms in physics”.

IIT is also described in Christof Koch’s ‘Consciousness: Confessions of a Romantic Reductionist’.

Axioms and postulates of integrated information theory

Five axioms / essential properties of conscious experience are foundational to IIT – the intent is to capture the essential aspects of all conscious experience. Each axiom should apply to every possible experience.

  • Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
  • Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
  • Integration: Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.
  • Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.
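The integration axiom in particular has a quantitative flavour. IIT’s actual Φ measure is considerably more involved (it searches over all partitions of a system’s cause–effect structure), but a toy proxy – the mutual information between two halves of a system – illustrates the intuition that an integrated whole carries information beyond its independent parts. The two-unit states and the `integration` function below are illustrative assumptions, not IIT’s formalism:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution of counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def integration(states):
    """Toy 'integration' of a two-unit system: the mutual information
    I(A;B) = H(A) + H(B) - H(A,B) over observed joint states.
    Zero means the whole reduces to independent parts."""
    joint = Counter(states)
    a = Counter(s[0] for s in states)
    b = Counter(s[1] for s in states)
    return entropy(a) + entropy(b) - entropy(joint)

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # A and B uncorrelated
coupled     = [(0, 0), (1, 1), (0, 0), (1, 1)]  # A always equals B
print(integration(independent))  # -> 0.0 (reducible to disjoint parts)
print(integration(coupled))      # -> 1.0 (irreducible to its parts)
```

The independent system decomposes without loss, echoing why IIT would assign it no integrated experience; the coupled system cannot be cut without destroying information.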

So, does IIT solve what David Chalmers calls the “Hard Problem of consciousness”?

Christof Koch is an American neuroscientist best known for his work on the neural bases of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

This interview is a short section of a larger interview which will be released at a later date.

The future of neuroscience and understanding the complexity of the human mind – Brains and Computers

Two of the world’s leading brain researchers will come together to discuss some of the latest international efforts to understand the brain. They will discuss two massive initiatives – the US-based Allen Institute for Brain Science and the European Human Brain Project. By combining neuroscience with the power of computing, both projects are harnessing the efforts of hundreds of neuroscientists in unprecedented collaborations aimed at unravelling the mysteries of the human brain.

This unique FREE public event, hosted by ABC Radio and TV personality Bernie Hobbs, will feature two presentations by each brain researcher followed by an interactive discussion with the audience.

This is your chance to ask the big brain questions.

[Event Registration Page] | [Meetup Event Page]

ARC Centre of Excellence for Integrative Brain Function

Monday, 3 April 2017 from 6:00 pm to 7:30 pm (AEST)

Melbourne Convention and Exhibition Centre
2 Clarendon Street
enter via the main Exhibition Centre entrance, opposite Crown Casino
South Wharf, VIC 3006 Australia

Professor Christof Koch
President and Chief Scientific Officer, Allen Institute for Brain Science, USA

Professor Koch leads a large-scale, 10-year effort to build brain observatories to map, analyse and understand the mouse and human cerebral cortex. His work integrates theoretical, computational and experimental neuroscience. Professor Koch pioneered the scientific study of consciousness with his long-time collaborator, the late Nobel laureate Francis Crick. Learn more about the Allen Institute for Brain Science and Christof Koch.

Professor Karlheinz Meier
Co-Director and Vice Chair of the Human Brain Project
Professor of Physics, University of Heidelberg, Germany

Professor Meier is a physicist working on unravelling theoretical principles of brain information processing and transferring them to novel computer architectures. He has led major European initiatives that combine neuroscience with information science. Professor Meier is a co-founder of the European Human Brain Project where he leads the research to create brain-inspired computing paradigms. Learn more about the Human Brain Project and Karlheinz Meier.


This event is brought to you by the Australian Research Council Centre of Excellence for Integrative Brain Function.

Discovering how the brain interacts with the world.

The ARC Centre of Excellence for Integrative Brain Function is supported by the Australian Research Council.

Consciousness in Biological and Artificial Brains – Prof Christof Koch

Event Description: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and the neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments. I will introduce the Integrated Information Theory (IIT) that explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that deep convolutional networks and von Neumann computers would experience next to nothing, even if they perform tasks that in humans would be associated with conscious experience and even if they were to run software faithfully simulating the human brain.

[Meetup Event Page]

Supported by The Florey Institute of Neuroscience & Mental Health, the University of Melbourne and the ARC Centre of Excellence for Integrative Brain Function.


Who: Prof Christof Koch, President and Chief Scientific Officer, Allen Institute for Brain Science, Seattle, USA

Venue: Melbourne Brain Centre, Ian Potter Auditorium, Ground Floor, Kenneth Myer Building (Building 144), Genetics Lane, 30 Royal Parade, University of Melbourne, Parkville

This will be of particular interest to those who know of David Pearce, Andreas Gomez, Mike Johnson and Brian Tomasik’s works – see this online panel:

Zombie Rights

Andrew Dun provides an interesting discussion on the rights of sentient entities. Drawing inspiration from quantum complementarity, he defends a complementary notion of ontological dualism, countering zombie hypotheses. Sans zombie concerns, ethical discussions should therefore focus on assessing consciousness purely in terms of the physical-functional properties of any putatively conscious entity.

Below is the video of the presentation:

At the 12:17 point, Andrew introduces the notion of supervenience (where high-level properties supervene on low-level properties) – do zombies have supervenience? Is consciousness merely a supervenient property that supervenes on characteristics of brain states? If so, we should be able to compute whether a system is conscious (if we know its full physical characterization). The zombie hypothesis suggests that consciousness does not logically supervene on the physical.

Slides for the presentation can be found on SlideShare.


Andrew Dun spoke at the Singularity Summit. Talk title : “Zombie Rights”.

Andrew’s research interest relates to both the ontology and ethics of consciousness. Andrew is interested in the ethical significance of consciousness, including the way in which our understanding of consciousness impacts our treatment of other humans, non-human animals, and artifacts. Andrew defends the view that the relationship between physical and conscious properties is one of symmetrical representation, rather than supervenience. Andrew argues that on this basis we can confidently approach ethical questions about consciousness from the perspective of ‘common-sense’ materialism.

Andrew also composes and performs original music.

Sam Harris on AI Implications – The Rubin Report

A transcription of Sam Harris’ discussion of the implications of strong AI during a recent appearance on The Rubin Report. Sam contrasts narrow AI with strong AI, touches on AI safety and the possibility of rapid AI self-improvement, notes that AI superintelligence may seem alien to us, and brings up the idea that it is important to solve consciousness before superintelligence arrives (especially if superintelligence wipes us out), in the hope of a future inclusive of the value that conscious experience entails – instead of a mechanized future with no consciousness to experience it.
I explored the idea of consciousness in artificial intelligence in ‘The Knowledge Argument Applied to Ethics‘ – which deals with whether an AI will act differently if it can experience ‘raw feels’ – and this seems to me to be of importance to AI safety and (if we are ethically serious, and assume value in ‘raw feels’) to preserving the future of value.

Dave Rubin asks the question: “If we get to a certain point with Artificial Intelligence and robots become aware and all that stuff… this can only end horribly right? …it will be pretty good for a while, but then at some point, by their own self-preservation basically, they will have to turn on their masters… I want the answer right now…”

Sam Harris responds: “..I worry about it [AI] to that degree but not quite in those terms. The concern for me is not that we will build superintelligent AI or superintelligent robots which initially seem to work really well and then by some process we don’t understand will become malevolent and kill us – you know – the Terminator movies. That’s not the concern…. Most people who are really worried about this – that’s not really what they are worried about. Although that’s not inconceivable – it’s almost worse than that. What’s more reasonable is that, as we’re building right now, we’re building machines that embody intelligence to an increasing degree. But it’s narrow AI – so the best chess player on earth is a computer but it can’t play tic-tac-toe – it’s narrowly focused on a specific kind of goal – and that’s broadening more and more as we get machines that can play many different kinds of games well, for instance. So we’re creeping up on what is now called ‘general intelligence’ – the ability to think flexibly in multiple domains, where your learning in one domain doesn’t cancel your learning in another – and so it’s something more like how human beings can acquire many different skills and engage in many different modes of cognition and not have everything fall apart – that’s the Holy Grail of artificial intelligence – we want ‘general intelligence’ and something that’s robust – it’s not brittle… it’s something that if parts of it fail it’s not catastrophic to the whole enterprise… and I think there is no question that we will get there, but there are many false assumptions about the path ahead. One is that what we have now is not nearly as powerful as the human mind – and we’re just going to incrementally get to something that is essentially a human equivalent. 
Now I don’t see that as the path forward at all… much of our narrow intelligence, insomuch as we find it interesting, is already superhuman, right? So we have the calculator on your phone and it’s superhuman for arithmetic – and the chess-playing program is superhuman – it’s not almost as good as a human – it’s better than any human on earth and will always be better than any human on earth, right? Um, and more and more we will get that piecemeal effort of superhuman narrow AIs, and when this is ever brought together in a general intelligence what you’re going to have is not just another ordinary human-level intelligence – you’re going to have something that in some ways may be radically foreign – in some ways it’s not going to be everything about us emulated in the system – but whatever is intelligent there is going to be superhuman almost by definition, and if it isn’t at t=0, it’s going to be the next day – it’s just going to improve so quickly. And when you talk about a system that can improve itself – if we ever build intelligent AI that then becomes the best source of its own improvement – so something that can improve its source code better than any human could improve its source code – once we start that process running, and the temptation to do that will be huge, then we have what has been worried about now for 75 years – the prospect of an intelligence explosion – where the birth of this intelligence could get away from us – it’s now improving itself in a way that is unconstrained.  
So people talk about ‘the Singularity’ now, which is what happens when that takes off – it’s a horizon line in technological innovation that we can’t see beyond – and we can’t predict beyond, because it’s now just escaping – you’re getting thousands of years of progress in minutes – right, if in fact this process gets initiated – and so it’s not that we have superhuman robots that are just well behaved and it goes on for decades and then all of a sudden they get quirky and they take their interests to heart more than they take ours to heart and… you know, the game is over. I think what is more likely is we’ll build intelligent systems that are so much more competent than we are – that even the tiniest misalignment between their goals and our own – will ultimately become completely hostile to our well-being and our survival.”

The video of the conversation is here; more of the transcription continues below the video.

Dave Rubin: “That’s scarier, pretty much, than what I laid out, right? I laid out sort of a futuristic… ahh, they’re going to turn on us and start shooting us one day, maybe because of an error or something – but you’re laying out really that they would… almost at some point that they would, if they could become aware enough, that they simply wouldn’t need us – because they would become ‘super-humans’ in effect – and what use would we serve for them at some point, right? (maybe not because of consciousness…)”

Sam Harris: “I would put consciousness and awareness aside because – I mean it might be that consciousness comes along for the ride – it may be the case that you can’t be as intelligent as a human and not be conscious – but I don’t know if that’s right…”

Dave Rubin: “That’s horizon mind stuff right?”

Sam Harris: “Well I just don’t know if that’s actually true – it’s quite possible that we could build something as intelligent as we are – in the sense that it can meet any kind of cognitive or perceptual challenge or logical challenge we would pose it better than we can – but there is nothing it is like to be that thing – if the lights aren’t on it doesn’t experience happiness, though it might say it experiences happiness, right? I think what will happen is that we will definitely – you know the notion of a Turing test?”

Dave Rubin: “This is like, if you type – it seems like it’s responding to you but it’s not actually really…”

Sam Harris: “Well, Alan Turing, the person who is more responsible than anyone else for giving us computers, once thought about what it would mean to have intelligent machines – and he proposed what has come to be known as the ‘Turing Test’.”

Dave Rubin: “It’s like the chat right?”

Sam Harris: “Yeah but… when you can’t tell whether you’re interacting with a person or a computer – that computer in that case is passing the Turing Test – and as a measure of intelligence – that’s certainly a good proxy for a more detailed analysis of what it would mean to have machine intelligence… if I’m talking to something at length about anything that I want – and I can’t tell it’s not a person, and it turns out it’s somebody’s laptop – that laptop is passing the Turing Test. It may be that you can pass the Turing Test without even the subtlest glimmer of consciousness arising. Right, so that laptop is no more conscious than that glass of water is – right? That may in fact be the case, it may not be though – so I just don’t know there. If that’s the case, for me that’s just the scariest possibility – because what’s happening is… I even heard at least one computer scientist say this, and it was kind of alarming but I don’t have a deep argument against it – if you assume that consciousness comes along for the ride, if you assume that anything more intelligent than us – either intentionally or by happenstance – gives rise to something more conscious than we are, which experiences a greater range of creative states and well-being and can suffer more – by definition, in my view ethically, it becomes more important… if we’re more important than Cocker Spaniels or ants or anything below us – then if we create something that’s obviously above us in every conceivable way – and it’s conscious – right?”

Dave Rubin: “It would view us in the same way we view anything that [???] us”

Sam Harris: “It’s more important than us, right? And I’d have to grant that, even though I’d not be happy about it deciding to annihilate us… I don’t have a deep ethical argument against it… I can’t say from a god’s-eye view that it’s bad that we gave birth to super-beings that trampled on us but then went on to become super in ways we can’t possibly imagine – just as, you know, bacteria can’t imagine what we’re up to – right. So there are some computer scientists who kind of solve the fears, or silence the fears, with this idea – they say: just listen, if we build something that’s god-like in that respect – we will have given birth to – our descendants will not be apes, they will be gods, and that’s a good thing – it’s the most beautiful thing – I mean what could be more beautiful than us creating the next generation of intelligent systems – that are infinitely profound and wise and knowledgeable from our point of view and are just improving themselves endlessly up to the limit of the resources available in the galaxy – what could be more rewarding than that?”

Dave Rubin: “Sounds pretty good”

Sam Harris: “And the fact that we all destroyed ourselves in the process, because we were the bugs that hit their windshield when they were driving off – that’s just the price you pay. Well, OK, that’s possible, but it’s also conceivable that all that could happen without consciousness, right? That we could build mere mechanism that is competent in all the ways needed to plow us under – but that there is no huge benefit on the side of deep experience and well-being and beauty and all that – it’s all just blind mechanism, which is intelligent mechanism… in the same way as the best chess-playing program – which is highly intelligent with respect to chess but nobody thinks is conscious. So that’s the theory… but on the way there, there are many weird moments where I think we will build machines that will pass the Turing Test – which is to say that they will seem conscious to us, they will seem to be able to detect our emotions and respond to our emotions – you know, one will say ‘you know what – you look tired, maybe you should take a nap’ – and it will be right, you know, it will be a better judge of your emotions than your friends are – right? And yet at a certain point, certainly if you emulate this in a system – whether it’s an avatar online or an actual robot that has a face, right? – that can display its own emotions, and we get out of the uncanny valley where it just looks creepy and it begins to look actually beautiful and rewarding and natural – then our intuitions that we are in dialog with a conscious other will be played upon perfectly, right? … and I think we will lose sight of it being an interesting problem – it will no longer be interesting to wonder whether our computers are conscious, because they will be demonstrating it as much as any person has ever demonstrated it – and in fact even more, right? 
And unless we understand exactly how consciousness emerges in physical systems, at some point along the way of developing that technology – I don’t think we will actually know that they’re conscious – and that will be interesting – because we will successfully fool ourselves into just assuming – it will seem totally unethical to kill your robot off – it will be a murder worse than you killing a person because at a certain point it will be the most competent person – you know, the wisest person.”

Dave Rubin: “Sam, I don’t know if you’re writing a book about this – but you clearly should write a book about this – I’ll write one of the intros or something – there you go. Well listen, we did two hours here – so I’m not going to give you the full Rogan treatment”

Sam Harris: “We did a half Rogan”

Dave Rubin: “We did a half Rogan – but you know you helped me launch the first season – you’re launching the second season – legally you have to now launch every season…”

* Some breaks in conversation (sentences, words, ums and ahs) have been omitted to make it easier to read

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Brain emulation: a broad-level roadmap simulation. The bottleneck is a lack of imaging technology – we don’t know what level of precision we need to reverse-engineer biological intelligence. Ed Boyden – optical brain imaging.
Not by brain emulation (engineering / comp sci / cognitive sci): the bottleneck is funding. People in the field believe/feel they know how to do it. To prove this, they need to integrate their architectures, which looks like a big project. It takes a lot of money, but not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don’t know which of the two approaches will succeed, though what form the singularity takes will likely depend on the approach we use to build AGI. We don’t understand the theory yet. Most don’t think we will have a perfect molecular scanner that scans the brain and its chemical constituents. 25 years ago David Chalmers worked in Douglas Hofstadter’s AI lab, but his expertise in AI is now out of date. Anyone trying to get to human-level AI by brute force or through cognitive psychology knows that the cog-sci is not in very good shape. A third approach is a hybrid of rough brain augmentation (through technology we are already using, like iPads and computers), technological extension, and uploading. If brain augmentation through tech and uploading is the first step toward a singularity, then humans – along with humanity’s values – are included in the equation, which may help shape a singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI there was a distinction: the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise logical representations – this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT lab) believed in exploring neural nets and self-organising systems – the approach of throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach – and the probabilistic, self-organising, fuzzy, learning approach, the scruffy. They have to come together. Theorem proving without any explorative aspect probably won’t succeed. Purely neural-net-based simulations can’t represent semantics well; we need to combine systems with full semantics with systems that can adapt to complex environments.

James Newton-Thomas (time 09:57)
James has been playing with neural nets and has been disappointed with them; he thinks that augmentation is the way forward. The AI problem is going to be easier to solve if we are smarter when we solve it. Conferences such as this help infuse us with a collective empowerment of individuals. There is an impediment – we are already being dehumanised by our iPads: the reason we have a conversation with others is partly about being part of a group, not just about information that could be looked up via an iPad. We need to be careful in our approach so that we are able to maintain our humanity whilst gaining the advantages of augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with the technology in our world; the more literal cyborg approach is being worked on now, though we are not yet at the point where the technology is commercialised enough to in principle allow a strong, literal cyborg approach. Ben Goertzel: Though we could progress with some form of brain verbalisation (picking up words directly from the brain), allowing us to think a Google query and have the results added directly to our minds – thus bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest to help with the Quantified Self – brain interfaces to help measure things about their body (though the hardware is not that good yet).
Ben Goertzel: Use of BCIs for video games – they can detect whether you are aroused and paying attention. Though the resolution is very coarse – it is hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Alan Snyder is using time-varying magnetic fields in helmets that shut down certain areas of the brain, which effectively makes people smarter in narrower domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalise. A brain that becomes too specific at one task does so at the cost of others – the process of generalisation.

Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment

Altered States of Consciousness through Technological Intervention

A mini-documentary on possible modes of being in the future – Ben Goertzel talks about the Singularity and exploring Altered States of Consciousness, Stelarc discusses Navigating Mixed Realities, Kent Kemmish muses on the paradox of strange futures, and Max More compares Transhumanism to Humanism


Starring: Ben Goertzel, Stelarc, Kent Kemmish, Max More
Edited: Adam Ford

Topics: Singularity, Transhumanism, and States of Consciousness
Thanks to NASA for some of the b-roll


Transcript

Ben Goertzel

It’s better perhaps to think of the singularity in terms of human experience. Right now due to the way our brains are built we have a few states of consciousness that follow us around every day.

There’s the ordinary waking state of consciousness, there’s various kinds of sleep, there’s a flow state of consciousness that we get into when we’re really into the work we’re doing, or playing music and we’re really into it. There are various enlightened states you can get into by meditating a really long time. The spectrum of states of consciousness that human beings can enter into is a tiny little fragment of all the possible ways of experience. When the singularity comes it’s going to bring us a wild variety of states of consciousness, a wild variety of ways of thinking and feeling and experiencing the world.

Stelarc
Well I think we’re expected to increasingly perform in mixed realities, so sometimes we’re biological bodies, sometimes we’re machinically augmented and accelerated, and other times we have to manage data streams in virtual systems. So we have to seamlessly slide between these three modes of operation, and engineering new interfaces – more intimate interfaces – so we can do this more seamlessly is an important strategy.

Kent Kemmish
Plenty of scientists would say that it’s crazy and there’s no way, I guess we could have that debate. But they might agree with me that if it is crazy, it’s crazy because of how the world works socially and not because of how difficult it is intrinsically. It’s not crazy for scientific reasons; it’s crazy because the world is crazy.

Max More
I think that people when they look at the future, if they do accept this idea that there’s going to be drastic changes and great advances, they will necessarily try to fit that very complex, impossible to really understand future, into very familiar mental models because they want to put things in boxes, they want to feel like they have some sort of grip on that. So I won’t be surprised to see Christian transhumanists and Mormon transhumanists and even Buddhist transhumanists and every other group will have some kind of set of ideas, they will gradually accept them, but they will make their future world fit with their pre-existing views as to how it will be.

And I think that the essence of transhumanism is not religious, it’s really based on humanism, it’s an extension of humanism, hence transhumanism. It’s really based on ideas of reason and progress and enlightenment and a kind of a secularism. But that doesn’t mean it’s incompatible with trying to make certain of the transhumanist ideas of self-improvement, of enhancement. I think those are potentially compatible with at least non fundamentalist forms of religion.

– Many thanks to Tom Richards for the transcription