Posts

Anders Sandberg - The Technological Singularity

Anders gives a short tutorial on the Singularity – clearing up confusion and highlighting important aspects of the Technological Singularity and related ideas, such as accelerating change, horizons of predictability, self-improving artificial intelligence, and the intelligence explosion.

Tutorial Video:

Points covered in the tutorial:

  • The Mathematical Singularity
  • The Technological Singularity: A Horizon of Predictability
  • Confusion Around the Technological Singularity
  • Drivers of Accelerated Growth
  • Technology Feedback Loops
  • A History of Coordination
  • Technological Inflection Points
  • The Difficulty of Seeing What Happens After an Inflection Point
  • The Intelligence Explosion
  • An Optimisation Power Applied to Itself
  • Group Minds
  • The HIVE Singularity: A Networked Global Mind
  • The Biointelligence Explosion
  • Humans Are Difficult to Optimise

An Overview of Models of the Technological Singularity

See Anders’ paper ‘An overview of models of technological singularity’.
This paper reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models. Such models are useful for examining the dynamics of the world-system and possible types of future crisis points where fundamental transitions are likely to occur. Current models suggest that, generically, even small increasing returns tend to produce radical growth. If mental capital becomes copyable (as would be the case for AI or brain emulation), extremely rapid growth would also become likely.
http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf
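
As a rough illustration of the abstract’s claim that even small increasing returns produce radical growth (my own sketch, not an equation taken from the paper): suppose technological capability \(x\) feeds back on its own growth with slightly increasing returns,

\[ \frac{dx}{dt} = a\,x^{1+\epsilon}, \qquad a, \epsilon > 0. \]

Separating variables gives

\[ x(t) = x_0 \left(1 - \frac{t}{T}\right)^{-1/\epsilon}, \qquad T = \frac{1}{a\,\epsilon\,x_0^{\epsilon}}, \]

so \(x\) diverges at the finite time \(T\), whereas \(\epsilon = 0\) yields only ordinary exponential growth. Even a tiny superlinearity in the feedback changes the qualitative picture from exponential growth to a finite-time singularity (which in practice would be cut off by resource limits, as in models G and H below).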

[The] Technological singularity is of increasing interest among futurists both as a predicted possibility in the midterm future and as subject for methodological debate. The concept is used in a variety of contexts, and has acquired an unfortunately large number of meanings. Some versions stress the role of artificial intelligence, others refer to more general technological change. These multiple meanings can overlap, and many writers use combinations of meanings: even Vernor Vinge’s seminal essay that coined the term uses several meanings. Some of these meanings may imply each other but often there is a conflation of different elements that likely (but not necessarily) occur in parallel. This causes confusion and misunderstanding to the extent that some critics argue that the term should be avoided altogether. At the very least the term ‘singularity’ has led to many unfortunate assumptions that technological singularity involves some form of mathematical singularity and can hence be ignored as unphysical.

– Anders Sandberg

A list of models described in the paper:

A. Accelerating change

Exponential or superexponential technological growth (with linked economic growth and social change). (Ray Kurzweil (Kur05), John Smart (Smang))

B. Self improving technology

Better technology allows faster development of new and better technology. (Flake (Fla06))

C. Intelligence explosion

Smarter systems can improve themselves, producing even more intelligence in a strong feedback loop. (I.J. Good (Goo65), Eliezer Yudkowsky)

D. Emergence of superintelligence

(Singularity Institute)

E. Prediction horizon

Rapid change or the emergence of superhuman intelligence makes the future impossible to predict from our current limited knowledge and experience. (Vinge (Vin93))

F. Phase transition

The singularity represents a shift to new forms of organisation. This could be a fundamental difference in kind, such as humanity being succeeded by posthuman or artificial intelligences, a punctuated equilibrium transition, or the emergence of a new meta-system level. (Teilhard de Chardin, Valentin Turchin (Tur77), Heylighen (Hey07))

G. Complexity disaster

Increasing complexity and interconnectedness causes increasing payoffs, but increases instability. Eventually this produces a crisis, beyond which point the dynamics must be different. (Sornette (JS01), West (BLH+07))

H. Inflexion point

Large-scale growth of technology or economy follows a logistic growth curve. The singularity represents the inflexion point where change shifts from acceleration to deceleration. (Extropian FAQ, T. Modis (Mod02))

I. Infinite progress

The rate of progress in some domain goes to infinity in finite time. (Few, if any, hold this to be plausible.)
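
To make the contrast between some of these models concrete, the following minimal simulation (my own illustration with assumed parameters, not code from the paper) integrates three growth laws: exponential (model A), increasing returns with finite-time blow-up (model I), and logistic growth with an inflexion point (model H):

```python
def simulate(f, x0=1.0, dt=1e-3, t_max=10.0, cap=1e9):
    """Euler-integrate dx/dt = f(x); stop at t_max or once x exceeds cap."""
    x, t = x0, 0.0
    while t < t_max and x < cap:
        x += f(x) * dt
        t += dt
    return t, x

a, eps, K = 1.0, 0.5, 100.0
t_exp, x_exp = simulate(lambda x: a * x)               # model A: exponential growth
t_hyp, x_hyp = simulate(lambda x: a * x ** (1 + eps))  # model I: finite-time blow-up
t_log, x_log = simulate(lambda x: a * x * (1 - x / K)) # model H: logistic, inflexion at x = K/2

print(f"exponential: x = {x_exp:.3g} at t = {t_exp:.2f}")
print(f"increasing returns: hit cap at t = {t_hyp:.2f} (finite-time singularity)")
print(f"logistic: levelled off near K = {K:.0f} at t = {t_log:.2f}")
```

With these parameters the analytic blow-up time for the increasing-returns law is T = 1/(a·ε·x0^ε) = 2, and the simulation hits the cap at about that time, while the exponential curve merely keeps compounding and the logistic curve levels off after its inflexion point.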


Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the YouTube channel:
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates

Science, Technology & the Future: http://scifuture.org

Sam Harris on AI Implications - The Rubin Report

A transcription of Sam Harris’ discussion of the implications of Strong AI during a recent appearance on the Rubin Report. Sam contrasts narrow AI with strong AI, discusses AI safety and the possibility of rapid AI self-improvement, notes that AI superintelligence may seem alien to us, and argues that it is important to solve consciousness before superintelligence arrives (especially if superintelligence wipes us out), in the hope of a future that includes the value that conscious experience entails – instead of a mechanized future with no consciousness to experience it.
I explored the idea of consciousness in artificial intelligence in ‘The Knowledge Argument Applied to Ethics‘ – which deals with whether an AI will act differently if it can experience ‘raw feels’ – and this seems to me to be important to AI safety and (if we are ethically serious, and assume value in ‘raw feels’) to preserving a future of value.

Dave Rubin asks the question: “If we get to a certain point with Artificial Intelligence and robots become aware and all that stuff… this can only end horribly right? …it will be pretty good for a while, but then at some point, by their own self-preservation basically, they will have to turn on their masters… I want the answer right now…”

Sam Harris responds: “…I worry about it [AI] to that degree but not quite in those terms. The concern for me is not that we will build superintelligent AI or superintelligent robots which initially seem to work really well and then by some process we don’t understand will become malevolent and kill us – you know – the Terminator movies. That’s not the concern… Most people who are really worried about this – that’s not really what they are worried about. Although that’s not inconceivable – it’s almost worse than that. What’s more reasonable is that we will… as we’re building right now… we’re building machines that embody intelligence to an increasing degree… but it’s narrow AI… so the best chess player on earth is a computer but it can’t play tic-tac-toe – it’s narrowly focused on a specific kind of goal – and that’s broadening more and more as we get machines that can play many different kinds of games well, for instance. So we’re creeping up on what is now called ‘general intelligence’ – the ability to think flexibly in multiple domains – where your learning in one domain doesn’t cancel your learning in another – and so it’s something more like how human beings can acquire many different skills and engage in many different modes of cognition and not have everything fall apart – that’s the Holy Grail of artificial intelligence – we want ‘general intelligence’ and something that’s robust – it’s not brittle… it’s something that if parts of it fail it’s not catastrophic to the whole enterprise… and I think there is no question that we will get there, but there are many false assumptions about the path ahead. One is that what we have now is not nearly as powerful as the human mind – and we’re just going to incrementally get to something that is essentially a human equivalent. Now I don’t see that as the path forward at all… all of our narrow intelligence… much of our narrow intelligence, insomuch as we find it interesting, is already superhuman, right – so, like, the calculator on your phone is superhuman for arithmetic – and the chess playing program is superhuman – it’s not almost as good as a human – it’s better than any human on earth and will always be better than any human on earth right? Um, and more and more we will get that piecemeal effort of superhuman narrow AIs, and when this is ever brought together in a general intelligence what you’re going to have is not just another ordinary human-level intelligence – you’re going to have something that in some ways may be radically foreign – in some ways it’s not going to be everything about us emulated in the system – but whatever is intelligent there is going to be superhuman almost by definition – and if it isn’t at t=0, it’s going to be the next day – it’s just going to improve so quickly. And when you talk about a system that can improve itself – if we ever build intelligent AI that then becomes the best source of its own improvement – so something that can improve its source code better than any human could improve its source code – once we start that process running, and the temptation to do that will be huge, then we have – what has been worried about now for 75 years – the prospect of an intelligence explosion – where the birth of this intelligence could get away from us – it’s now improving itself in a way that is unconstrained.
So people talk about ‘the Singularity’ now, which is what happens when that takes off – it’s a horizon line in technological innovation that we can’t see beyond – and we can’t predict beyond, because it’s now just escaping – you’re getting thousands of years of progress in minutes – right, if in fact this process gets initiated – and so it’s not that we have superhuman robots that are just well behaved and it goes on for decades and then all of a sudden they get quirky and they take their interests to heart more than they take ours to heart and… you know, the game is over. I think what is more likely is we’ll build intelligent systems that are so much more competent than we are – that even the tiniest misalignment between their goals and our own – will ultimately become completely hostile to our well-being and our survival.”

The video of the conversation is here; more of the transcription follows below the video.

Dave Rubin: “That’s scarier, pretty much, than what I laid out right? I laid out sort of a futuristic .. ahh there going to turn on us and start shooting us one day maybe because of an error or something – but you’re laying out really that they would… almost at some point that they would, if they could become aware enough, that they simply wouldn’t need us – because they would become ‘super-humans’ in effect – and what use would we serve for them at some point right? (maybe not because of consciousness…)”

Sam Harris: “I would put consciousness and awareness aside because – I mean it might be that consciousness comes along for the ride – it may be the case that you can’t be as intelligent as a human and not be conscious – but I don’t know if that’s right…”

Dave Rubin: “That’s horizon mind stuff right?”

Sam Harris: “Well I just don’t know if that’s actually true – it’s quite possible that we could build something as intelligent as we are – in the sense that it can meet any kind of cognitive or perceptual challenge or logical challenge we would pose it better than we can – but there is nothing that it is like to be that thing – if the lights aren’t on it doesn’t experience happiness, though it might say it experiences happiness right? I think what will happen is that we will definitely – you know the notion of a Turing test?”

Dave Rubin: “This is like, if you type – it seems like it’s responding to you but it’s not actually really…”

Sam Harris: “Well, Alan Turing, the person who is more responsible than anyone else for giving us computers, once thought about what it would mean to have intelligent machines – and he proposed what has come to be known as the ‘Turing Test’.”

Dave Rubin: “It’s like the chat right?”

Sam Harris: “Yeah but… when you can’t tell whether you’re interacting with a person or a computer – that computer in that case is passing the Turing Test – and as a measure of intelligence – that’s certainly a good proxy for a more detailed analysis of what it would mean to have machine intelligence… if I’m talking to something at length about anything that I want – and I can’t tell it’s not a person, and it turns out it’s somebody’s laptop – that laptop is passing the Turing Test. It may be that you can pass the Turing Test without even the subtlest glimmer of consciousness arising. Right, so that laptop is no more conscious than that glass of water is – right? That may in fact be the case, it may not be though – so I just don’t know there. If that’s the case, for me that’s just the scariest possibility – because what’s happening is… I even heard this from at least one computer scientist, and it was kind of alarming, but I don’t have a deep argument against it – if you assume that consciousness comes along for the ride, if you assume that anything more intelligent than us – arising either intentionally or by happenstance – is more conscious than we are, experiences a greater range of creative states, in well-being, and can suffer more – by definition, in my view ethically, it becomes more important… if we’re more important than Cocker Spaniels or ants or anything below us – then if we create something that’s obviously above us in every conceivable way – and it’s conscious – right?”

Dave Rubin: “It would view us in the same way we view anything that [???] us”

Sam Harris: “It’s more important than us right? And I’d have to grant that, even though I’d not be happy about it deciding to annihilate us… I don’t have a deep ethical argument against why… I can’t say from a god’s-eye view that it’s bad that we gave birth to super-beings that then trampled on us – but then went on to become super in ways we can’t possibly imagine – just as, you know, bacteria can’t imagine what we’re up to – right. So there are some computer scientists who kind of solve the fears, or silence the fears, with this idea – they say: just listen, if we build something that’s god-like in that respect – we will have given birth to – our descendants will not be apes, they will be gods, and that’s a good thing – it’s the most beautiful thing – I mean what could be more beautiful than us creating the next generation of intelligent systems – that are infinitely profound and wise and knowledgeable from our point of view and are just improving themselves endlessly up to the limit of the resources available in the galaxy – what could be more rewarding than that?”

Dave Rubin: “Sounds pretty good”

Sam Harris: “And the fact that we all destroyed ourselves in the process because we were the bugs that hit their windshield when they were driving off – that’s just the price you pay. Well, OK, that’s possible, but it’s also conceivable that all that could happen without consciousness right? That we could build mere mechanism that is competent in all the ways so as to plow us under – but that there is no huge benefit on the side of deep experience and well-being and beauty and all that – it’s all just blind mechanism, which is intelligent mechanism… in the same way as the best chess playing program – which is highly intelligent with respect to chess but nobody thinks is conscious. So that’s the theory… but on the way there – there are many weird moments where I think we will build machines that will pass the Turing Test – which is to say that they will seem conscious to us, they will seem to be able to detect our emotions and respond to our emotions, you know it will say ‘you know what – you look tired, maybe you should take a nap’ – and it will be right, you know, it will be a better judge of your emotions than your friends are – right? And yet at a certain point, certainly if you emulate this in a system, whether it’s an avatar online or an actual robot that has a face right? That can display its own emotions, and we get out of the uncanny valley where it just looks creepy and it begins to look actually beautiful and rewarding and natural – then our intuitions that we are in dialog with a conscious other will be played upon perfectly right? … and I think we will lose sight of it being an interesting problem – it will no longer be interesting to wonder whether our computers are conscious because they will be demonstrating it as much as any person has ever demonstrated it – and in fact even more right? And unless we understand exactly how consciousness emerges in physical systems, at some point along the way of developing that technology – I don’t think we will actually know that they’re conscious – and that will be interesting – because we will successfully fool ourselves into just assuming – it will seem totally unethical to kill your robot off – it will be a murder worse than killing a person, because at a certain point it will be the most competent person – you know, the wisest person.”

Dave Rubin: “Sam, I don’t know if you’re writing a book about this – but you clearly should write a book about this – I’ll write one of the intros or something – there you go. Well listen, we did two hours here – so I’m not going to give you the full Rogan treatment”

Sam Harris: “We did a half Rogan”

Dave Rubin: “We did a half Rogan – but you know you helped me launch the first season – you’re launching the second season – legally you have to now launch every season…”

* Some breaks in conversation (sentences, words, ums and ahs) have been omitted to make it easier to read

Peter Singer & David Pearce on Utilitarianism, Bliss & Suffering

Moral philosophers Peter Singer & David Pearce discuss some of the long term issues with various forms of utilitarianism, the future of predation and utilitronium shockwaves.

Topics Covered

Peter Singer

– Long-term impacts of various forms of utilitarianism
– Consciousness
– Artificial Intelligence
– Reducing suffering in the long run and in the short term
– Practical ethics
– Pre-implantation genetic screening to reduce disease and low mood
– Lives today are worth the same as lives in the future – though uncertainty must be brought to bear in deciding how one weighs up the importance of life
– The Hedonistic Imperative and how people react to it
– Correlation of high hedonic set points with productivity
– Existential risks and global catastrophic risks
– Closing factory farms

David Pearce

– Veganism and reducetarianism
– Red meat vs white meat – many more chickens than cattle are killed per ton of meat
– Valence research
– Should we eliminate suffering? And should we eliminate emotions of happiness?
– How can we answer the question of how far suffering is present in different life forms (like insects)?

Talk of moral progress can make one sound naive. But even the darkest cynic should salute the extraordinary work of Peter Singer to promote the interests of all sentient beings. – David Pearce

Many thanks for watching!
– Support me via Patreon: https://www.patreon.com/scifuture
– Please Subscribe to this Channel: http://youtube.com/subscription_cente…
– Science, Technology & the Future website: http://scifuture.org

Is there a Meaningful Future for Non-Optimal Moral Agents?

In an interview last year, I discussed the Hedonistic Imperative & Superintelligence with John Danaher. A concern he has with HI is that it denies or de-emphasises some kind of moral agency. In moral theory there is a distinction between moral agents (responsible actors able to make moral decisions, influence the direction of moral progress, shape their futures, and owe duties to others) and moral patients, who may be deemed to have limited or no grounds for moral agency/autonomy/responsibility – they are simply recipients of moral benefits. In contrast to humans, animals could be classified as moral patients (see the Stanford writing on Grounds for Moral Status).

As time goes on, the notions of strong artificial intelligence leading to Superintelligence (which may herald something like an Intelligence Explosion) and ideas like the Hedonistic Imperative become less like sensational sci-fi concepts and more like visions of realizable eventualities. Thinking about moral endpoints gives me a paradoxical feeling of triumph and disempowerment.

John’s concern is that ensuring the well-being of humans (conscious entities) is consistent with denying their moral agency – minimizing their capacity to act. The danger is that the outcome of HI or an Intelligence Explosion may be sentient life made very happy forever, but unable to make choices – a future entirely based on bliss, ignoring other aspects of what makes for a valuable or worthwhile existence.

So even if we have a future where a) we are made very happy and b) we are subject to a wide variety of novelty (which I argue for in Novelty Utilitarianism), without some kind of self-determination we may not be able to enjoy part of what arguably makes for a worthwhile existence.

If the argument for moral agency is completely toppled by the argument against free will then I can see why there would be no reason for it – and that bliss/novelty may be enough – though I personally haven’t been convinced that this is the case.

Also, the idea that moral agency and novelty should be ranked as auxiliary aspects to the main imperative of reducing suffering/increasing bliss seems problematic – I get the sense that they (agency/novelty) could easily be swapped out for most non-optimal moral agents in the quest for less suffering and more bliss.
The idea that, upon evaluating grounds for moral status, our ethical/moral quotient may not match or even come close to the potential ethical force of a superintelligence is also troubling. If we are serious about the best ethical outcomes, when the time comes, should we be committed to resigning all moral agency to agents that are more adept at producing peak moral outcomes?
Is it really possible for non-optimal agents to have a meaningful moral input in a universe where they’ve been completely outperformed by moral machines? Is a life of novelty & bliss the most optimal outcome we can hope for?

There probably should be some more discussion of the trade-offs between moral agency, peak experience and novelty.

Discussion in this video here starts at 24:02

Below is the whole interview with John Danaher:

The long-term future of AI (and what we can do about it) : Daniel Dewey at TEDxVienna

This has been one of my favourite simple talks on AI Impacts – simple, clear and straight to the point. Recommended as an introduction to the ideas referred to in the title.

I couldn’t find the audio of this talk at TED – it has been added to archive.org:


Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.

http://www.tedxvienna.at/


Brian Greene on Artificial Intelligence, the Importance of Fundamental Physics, Alien Life, and the Possible Future of Our Civilization

March 14th was Albert Einstein’s birthday, and also Pi Day, so it was a fitting day to be interviewing well-known theoretical physicist and string theorist Brian Greene – the author of a number of books including The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, and The Hidden Reality!
Many thanks to Suzi and Desh at Think Inc. for helping organize this interview & for bringing Brian Greene to Australia for a number of shows (March 16 in Perth, March 18 in Sydney and March 19 in Melbourne) – check out www.thinkinc.org.au for more info!

Audio recording of the interview:

About the Interview with Brian Greene

Brian Greene discusses the implications of Artificial Intelligence and the news of DeepMind’s AI (AlphaGo) beating the world champion in the board game Go. He then discusses string theory, the territory of opinion on grand unifying theories of physics, the importance of supporting fundamental science, the possibility of alien life, the possible future of our space-faring civilization and, of course, gravitational waves!

In answer to the question on the importance of supporting fundamental research in science, Brian Greene said:

I tell them to wake up! Wake up and recognize that fundamental science has radically changed the way they live their lives today. If any of these individuals have a cell phone, or a personal computer, or perhaps they themselves or loved ones have been saved by an MRI machine… I mean, any of these devices rely on integrated circuits, which themselves rely on quantum physics – so if those folks who were in charge in the 1920s had said, ‘hey you guys working on quantum physics, that doesn’t seem to be relevant to anything in the world around us, so we’re going to cut your funding’ – well, those people would have short-circuited one of the greatest revolutions that our species has gone through – the information age, the technological age – so the bottom line is we need to support fundamental research because we know historically that when you gain a deep understanding of how things work – we can often leverage that to then manipulate the world around us in spectacular ways! And that needs to be where our fundamental focus remains – in science!


Brian Randolph Greene is an American theoretical physicist and string theorist. He has been a professor at Columbia University since 1996 and chairman of the World Science Festival since co-founding it in 2008. Greene has worked on mirror symmetry, relating two different Calabi–Yau manifolds (concretely, relating the conifold to one of its orbifolds). He also described the flop transition, a mild form of topology change, showing that topology in string theory can change at the conifold point.

Greene has become known to a wider audience through his books for the general public, The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, The Hidden Reality, and related PBS television specials. He also appeared on The Big Bang Theory episode “The Herb Garden Germination”, as well as the films Frequency and The Last Mimzy. He is currently a member of the Board of Sponsors of the Bulletin of the Atomic Scientists.


Many thanks for listening!
Support me via Patreon
Please Subscribe to the YouTube Channel
Science, Technology & the Future on the web


Juergen Schmidhuber on DeepMind, AlphaGo & Progress in AI

When I asked AI researcher Juergen Schmidhuber for his thoughts on progress at DeepMind and on the AlphaGo vs Lee Sedol Go tournament, he provided some initial comments. I will be updating this post with the further interview.

Juergen Schmidhuber: First of all, I am happy about DeepMind’s success, also because the company is heavily influenced by my former students: 2 of DeepMind’s first 4 members and their first PhDs in AI came from my lab – one of them a co-founder, one of them the first employee. (Other ex-PhD students of mine joined DeepMind later, including a co-author of our first paper on Atari-Go in 2010.)

Go is a board game where the Markov assumption holds: in principle, the current input (the board state) conveys all the information needed to determine an optimal next move (no need to consider the history of previous states). That is, the game can be tackled by traditional reinforcement learning (RL), a bit like 2 decades ago, when Tesauro used RL to learn from scratch a backgammon player on the level of the human world champion (1994). Today, however, we are greatly profiting from the fact that computers are at least 10,000 times faster per dollar.
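
To make the Markov point concrete, here is a minimal tabular Q-learning sketch (a generic illustration against a hypothetical `env` interface with `reset`, `step` and `actions` methods – not DeepMind’s or Tesauro’s actual code). Because the state is assumed to be Markov, the value update consumes only the current transition; no history of earlier states is stored:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: each update uses only (state, action, reward, next_state)."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection based on the current state alone
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = max(env.actions(state), key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Bellman backup: the Markov assumption makes (state, next_state) sufficient
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions(next_state))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

In a partially observable game like football (mentioned below), this update would no longer be sound, because the current observation would not summarise everything relevant about the past.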

In the last few years, automatic Go players have greatly improved. To learn a good Go player, DeepMind’s system combines several traditional methods such as supervised learning (from human experts) and RL based on Monte Carlo Tree Search. It will be very interesting to see the system play against the best human Go player Lee Sedol in the near future.
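
For reference, a bare-bones Monte Carlo Tree Search with UCT selection and random rollouts might look like the sketch below (a generic, simplified single-perspective illustration against a hypothetical `game` interface – AlphaGo additionally replaces the random rollouts and move priors with learned policy and value networks):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}   # move -> Node
        self.visits, self.value = 0, 0.0

def uct_select(node, c=1.4):
    """Pick the child maximising mean value plus an exploration bonus (UCT)."""
    return max(node.children.values(),
               key=lambda ch: ch.value / (ch.visits + 1e-9)
                              + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts(root_state, game, iterations=10000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through existing children via UCT
        while node.children and not game.is_terminal(node.state):
            node = uct_select(node)
        # 2. Expansion: add one child for a randomly chosen legal move
        if not game.is_terminal(node.state):
            move = random.choice(game.legal_moves(node.state))
            node = node.children.setdefault(move, Node(game.apply(node.state, move), node))
        # 3. Simulation: play random moves to the end of the game
        state = node.state
        while not game.is_terminal(state):
            state = game.apply(state, random.choice(game.legal_moves(state)))
        reward = game.result(state)  # simplified: scored from a single fixed perspective
        # 4. Backpropagation: update statistics along the path to the root
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited move at the root
    return max(root.children, key=lambda m: root.children[m].visits)
```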

Unfortunately, however, the Markov condition does not hold in realistic real world scenarios. That’s why games such as football are much harder for machines than Go, and why Artificial General Intelligence (AGI) for RL robots living in partially observable environments will need more sophisticated learning algorithms, e.g., RL for recurrent neural networks.

For a comprehensive history of deep RL, see Section 6 of my survey with 888 references:
http://people.idsia.ch/~juergen/deep-learning-overview.html

Also worth seeing: Juergen’s AMA here.

Juergen Schmidhuber’s website.

Future Day in Melbourne – 1st of March 2016

Future Day in Melbourne – Future Day is a way of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

WHAT: Fun! Also.. Clear thinking about the future – 3 speakers/demonstrators
WHEN: Tues March 1st (2016) at 6.00pm for a 6.30pm start
WHERE: Bull & Bear Tavern – 347 Flinders Ln Melbourne

Speakers

  • Craig Pearce: Past, Present and Future Considerations for Computer Security and Safety – Also displaying a number of awesome retro computing specimens for you to drool at (and not on).
  • Adam Karlovsky: Transgenics – Gene Inserts Unlocking the Power of Pharmacology – Potentials increase by orders of magnitude when combining gene inserts with drugs.
  • Brendan Hill: Progress in Artificial Intelligence – Will AlphaGo Become the New Go Grandmaster this March? Discussion on AlphaGo – will it beat Lee Sedol later in March? (bets involved)


Holidays provide a fantastic way of channeling peoples’ attention and energy.

Most of our holidays are focused on past events or individuals, or on the rhythms of nature. History and nature are wonderful and should be honored — but the amazing future we are building together should be honored as well.

Future Day Links: Facebook | Twitter | Website | Google+ Community | Videos

Subscribe to the Science, Technology & the Future YouTube Channel


Science, Technology & the Future

Michio Kaku – The Future of the Mind – Intelligence Enhancement & the Singularity

Scifuture interview with popular scientist Michio Kaku on the Scientific Quest to Understand, Enhance & Empower the Mind!

The audio of this interview is found here.

Dr. Michio Kaku advocates thinking about some of the radical Transhumanist ideas we all know and love – here he speaks on the frontiers of Neuroscience, Intelligence Enhancement, the Singularity, and his new book ‘The Future of the Mind’!

String theory stems from Albert Einstein’s legacy; it combines the theory of general relativity and quantum mechanics by assuming a multiverse of universes. String field theory then uses the mathematics of fields to put it all into perspective. Dr Kaku’s goal is to unite the four fundamental forces of nature into one ‘unified field theory’, a theory that seeks to summarise all fundamental laws of the universe in one simple equation.

Note Scifuture did another interview with Michio Kaku – the article can be found here, audio can be found here, and the video can be found here.


‘The Future of the Mind’ – Book on Amazon.

Many thanks to Think Inc. who brought Dr Kaku to Australia!

Subscribe to the Science, Technology & the Future YouTube Channel


Science, Technology & the Future

Jamais Cascio – The Future and You! Security, Privacy, AI, Geoengineering

Jamais Cascio discusses the Participatory Panopticon, Privacy & Secrecy, the ramifications of Disconnecting from the Chorus, what it means to be a Futurist, the Arc of Human Evolution, Artificial Intelligence, the Need for Meaning, Building Agents to Listen to Us, WorldChanging.com / OpenTheFuture.com, Geoengineering and the Viridian Green movement.

We pollute our data-streams to retain what control we have over our identifying information. The motivation behind social networks is not to keep your information private.

The interview was conducted at the Humanity+ conference in San Francisco in late 2012.
Jamais Cascio is a San Francisco Bay Area-based writer and ethical futurist specializing in design strategies and possible outcomes for future scenarios.
Cascio received his undergraduate degree from UC Santa Cruz and later attended UC Berkeley. In the 1990s, Cascio worked for the futurist and scenario planning firm Global Business Network. In 2007 he was a lead author on the Metaverse Roadmap Overview.

Worldchanging

From 2003 to 2006 Cascio helped in the formation of Worldchanging. His activities covered topics ranging from energy and climate change to global development, open source, and bio- and nanotechnologies.
On November 29, 2010, Worldchanging announced that due to fundraising difficulties it would shut down. It has since merged with Architecture for Humanity, though detailed plans for the site’s future have not been released.

Open the Future

In early 2006, Cascio established Open The Future as his online home, a title based on his WorldChanging essay, The Open Future.

Selected by Foreign Policy magazine as one of the Top 100 Global Thinkers of 2009, Cascio writes about the intersection of emerging technologies, environmental dilemmas, and cultural transformation, specializing in the design and creation of plausible scenarios of the future. His work focuses on the importance of long-term, systemic thinking, emphasizing the power of openness, transparency and flexibility as catalysts for building a more resilient society.

Cascio’s work appears in publications as diverse as Metropolis, the Atlantic Monthly, The Wall Street Journal, and Foreign Policy. He has been featured in multiple documentaries discussing social and environmental futures, including National Geographic Television’s SIX DEGREES, its 2008 program on the effects of global warming, the 2010 Canadian Broadcasting Company feature, SURVIVING THE FUTURE, and the 2013 independent film FIXED: THE SCIENCE/FICTION OF HUMAN AUGMENTATION. He has also been featured in several science-oriented television documentary series.

Cascio currently serves as Director of Impacts Analysis for the Center for Responsible Nanotechnology. He is a Senior Fellow at the Institute for Ethics and Emerging Technologies. Cascio was a speaker on the “On The Edge of Independent User-Creation In Gamespace” panel at the 2007 SXSW Interactive Festival. He is a Research Fellow at the Institute for the Future, where, together with Jane McGonigal, in 2008 he helped create and administer the large-scale collaborative multiplayer game Superstruct as a strategy to engage many other hopeful thinkers in the pursuit of possible strategies and positive outcomes for a proposed future scenario occurring in 2019.
In 2006, Cascio presented a TED Talk at the TED conference “The Future We Will Create,” in Monterey, California. In the presentation he outlined possible available solutions for the emerging world climate and energy crisis.