Brian Greene on Artificial Intelligence, the Importance of Fundamental Physics, Alien Life, and the Possible Future of Our Civilization

March 14th was Albert Einstein’s birthday, and also Pi Day, so it was a fitting day to be interviewing the well-known theoretical physicist and string theorist Brian Greene – the author of a number of books including The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, and The Hidden Reality!
Many thanks to Suzi and Desh at THINKINC for helping organize this interview and for bringing Brian Greene to Australia for a number of shows (March 16 in Perth, March 18 in Sydney and March 19 in Melbourne) – check out THINKINC for more info!

Audio recording of the interview:

About the Interview with Brian Greene

Brian Greene discusses the implications of Artificial Intelligence and the news of DeepMind’s AI (AlphaGo) beating the world champion in the board game Go.  He then discusses string theory, the territory of opinion on grand unifying theories of physics, the importance of supporting fundamental science, the possibility of alien life, the possible future of our space-faring civilization and, of course, gravitational waves!

In answer to the question on the importance of supporting fundamental research in science, Brian Greene said:

I tell them to wake up! Wake up and recognize that fundamental science has radically changed the way they live their lives today. If any of these individuals have a cell phone, or a personal computer, or perhaps they themselves or loved ones have been saved by an MRI machine… I mean, any of these devices rely on integrated circuits, which themselves rely on quantum physics – so IF those folks who were in charge in the 1920s had said, ‘hey, you guys working on quantum physics, that doesn’t seem to be relevant to anything in the world around us, so we’re going to cut your funding’ – well, those people would have short-circuited one of the greatest revolutions that our species has gone through – the information age, the technological age – so the bottom line is we need to support fundamental research because we know historically that when you gain a deep understanding of how things work, we can often leverage that to then manipulate the world around us in spectacular ways! And that needs to be where our fundamental focus remains – in science!


Brian Randolph Greene is an American theoretical physicist and string theorist. He has been a professor at Columbia University since 1996 and chairman of the World Science Festival since co-founding it in 2008. Greene has worked on mirror symmetry, relating two different Calabi–Yau manifolds (concretely, relating the conifold to one of its orbifolds). He also described the flop transition, a mild form of topology change, showing that topology in string theory can change at the conifold point.

Greene has become known to a wider audience through his books for the general public, The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, The Hidden Reality, and related PBS television specials. He also appeared on The Big Bang Theory episode “The Herb Garden Germination“, as well as the films Frequency and The Last Mimzy. He is currently a member of the Board of Sponsors of the Bulletin of the Atomic Scientists.


Many thanks for listening!
Support me via Patreon
Please Subscribe to the YouTube Channel
Science, Technology & the Future on the web


Juergen Schmidhuber on DeepMind, AlphaGo & Progress in AI

I asked AI researcher Juergen Schmidhuber for his thoughts on progress at DeepMind and on the AlphaGo vs Lee Sedol Go tournament – he provided some initial comments below. I will be updating this post with the full interview.

Juergen Schmidhuber: First of all, I am happy about DeepMind’s success, also because the company is heavily influenced by my former students: 2 of DeepMind’s first 4 members and their first PhDs in AI came from my lab, one of them a co-founder, one of them the first employee. (Other ex-PhD students of mine joined DeepMind later, including a co-author of our first paper on Atari-Go in 2010.)

Go is a board game where the Markov assumption holds: in principle, the current input (the board state) conveys all the information needed to determine an optimal next move (no need to consider the history of previous states). That is, the game can be tackled by traditional reinforcement learning (RL), a bit like 2 decades ago, when Tesauro used RL to learn from scratch a backgammon player on the level of the human world champion (1994). Today, however, we are greatly profiting from the fact that computers are at least 10,000 times faster per dollar.
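The Markov property Schmidhuber describes can be illustrated with a minimal tabular Q-learning sketch (a hypothetical toy example, not AlphaGo’s algorithm): each update uses only the current state, action, reward and next state – never the history of earlier states.

```python
import random

# Toy illustration of the Markov property in RL (hypothetical example,
# not AlphaGo's method): tabular Q-learning on a 1-D walk where the
# agent must reach position 4. The update uses only the CURRENT state --
# no history of earlier states is needed.
STATES, ACTIONS = range(5), (-1, +1)   # positions 0..4; move left/right
GOAL = 4
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)   # reward only at the goal

random.seed(0)
for _ in range(500):                          # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Markov update: depends on (s, a, r, s2) only, never on history
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy prefers moving right (+1) from every state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES if s != GOAL}
print(policy)
```

In Go the “state” is the board position, far too large for a table, which is why AlphaGo approximates values with deep networks – but the Markov structure of the update is the same.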

In the last few years, automatic Go players have greatly improved. To learn a good Go player, DeepMind’s system combines several traditional methods such as supervised learning (from human experts) and RL based on Monte Carlo Tree Search. It will be very interesting to see the system play against the best human Go player Lee Sedol in the near future.

Unfortunately, however, the Markov condition does not hold in realistic real world scenarios. That’s why games such as football are much harder for machines than Go, and why Artificial General Intelligence (AGI) for RL robots living in partially observable environments will need more sophisticated learning algorithms, e.g., RL for recurrent neural networks.
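Schmidhuber’s point about partial observability can be made concrete with a tiny sketch (a hypothetical toy task, not from the interview): when the decision-time observation is ambiguous, a reactive policy that obeys the Markov assumption can only guess, while an agent carrying internal memory – as a recurrent neural network does – solves the task perfectly.

```python
import random

# Toy illustration (hypothetical example) of why the Markov assumption
# fails under partial observability: a cue ('L' or 'R') is shown only at
# the start of a trial, the observation at the decision step is always
# the same blank symbol '#', and the rewarded action is the remembered cue.
random.seed(1)

def run(policy_has_memory, trials=1000):
    correct = 0
    for _ in range(trials):
        cue = random.choice("LR")      # visible only at step 0
        memory = cue                   # a recurrent agent stores it
        obs = "#"                      # decision-time input: ambiguous
        if policy_has_memory:
            action = memory            # acts on internal hidden state
        else:
            # A memoryless (reactive) policy sees only obs, which carries
            # no information about the cue, so it can only guess.
            action = random.choice("LR")
        correct += (action == cue)
    return correct / trials

print(run(True))    # the agent with memory is always correct
print(run(False))   # the memoryless agent stays near chance level
```

Replacing `memory = cue` with a learned recurrent state is what RL with recurrent networks aims at; the toy above only shows why some form of memory is necessary.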

For a comprehensive history of deep RL, see Section 6 of my survey with 888 references:

Also worth seeing: Juergen’s AMA here.

Juergen Schmidhuber’s website.

Future Day in Melbourne – 1st of March 2016

Future Day in Melbourne – Future Day is a way of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

WHAT: Fun! Also.. Clear thinking about the future – 3 speakers/demonstrators
WHEN: Tues March 1st (2016) at 6.00pm for a 6.30pm start
WHERE: Bull & Bear Tavern – 347 Flinders Ln Melbourne


  • Craig Pearce: Past, Present and Future Considerations for Computer Security and Safety – Also displaying a number of awesome retro computing specimens for you to drool at (and not on).
  • Adam Karlovsky: Transgenics – Gene Inserts Unlocking the Power of Pharmacology – Potentials increase by orders of magnitude when combining gene inserts with drugs.
  • Brendan Hill: Progress in Artificial Intelligence – Will AlphaGo Become the New Go Grandmaster this March? Discussion on AlphaGo – will it beat Lee Sedol later in March? (bets involved)


Holidays provide a fantastic way of channeling people’s attention and energy.

Most of our holidays are focused on past events or individuals, or on the rhythms of nature. History and nature are wonderful and should be honored — but the amazing future we are building together should be honored as well.

Future Day Links: Facebook | Twitter | Website | Google+ Community | Videos

Subscribe to the Science, Technology & the Future YouTube Channel


Science, Technology & the Future

Michio Kaku – The Future of the Mind – Intelligence Enhancement & the Singularity

Scifuture interview with popular scientist Michio Kaku on the Scientific Quest to Understand, Enhance & Empower the Mind!

The audio of this interview is found here.

Dr. Michio Kaku advocates thinking about some of the radical Transhumanist ideas we all know and love – here he speaks on the frontiers of Neuroscience, Intelligence Enhancement, the Singularity, and his new book ‘The Future of the Mind’!

String theory stems from Albert Einstein’s legacy; it seeks to combine the theory of general relativity with quantum mechanics, and it allows for a multiverse of universes. String field theory then uses the mathematics of fields to put it all into perspective. Dr Kaku’s goal is to unite the four fundamental forces of nature into one ‘unified field theory’, a theory that seeks to summarise all fundamental laws of the universe in one simple equation.

Note Scifuture did another interview with Michio Kaku – the article can be found here, audio can be found here, and the video can be found here.


‘The Future of the Mind’ – Book on Amazon.

Many thanks to Think Inc. who brought Dr Kaku to Australia!

Subscribe to the Science, Technology & the Future YouTube Channel


Science, Technology & the Future

Jamais Cascio – The Future and You! Security, Privacy, AI, Geoengineering

Jamais Cascio discusses the Participatory Panopticon, Privacy & Secrecy, the ramifications of Disconnecting from the Chorus, what it means to be a Futurist, the Arc of Human Evolution, Artificial Intelligence, the Need for Meaning, Building Agents to Listen to Us, Geoengineering and the Viridian Green movement.

We pollute our data-streams to retain what little control we have over our identifying information. The motivation behind social networks is not to keep your information private.

The interview was conducted at the Humanity+ conference in San Francisco in late 2012.
Jamais Cascio is a San Francisco Bay Area-based writer and ethical futurist specializing in design strategies and possible outcomes for future scenarios.
Cascio received his undergraduate degree from UC Santa Cruz and later attended UC Berkeley. In the 1990s, he worked for the futurist and scenario-planning firm Global Business Network. In 2007 he was a lead author on the Metaverse Roadmap Overview.


From 2003 to 2006 Cascio helped in the formation of Worldchanging. His writing there covered topics ranging from energy and climate change to global development, open source, and bio- and nanotechnologies.
On November 29, 2010, Worldchanging announced that due to fundraising difficulties it would shut down. It has since merged with Architecture for Humanity, though detailed plans for the site’s future have not been released.

Open the Future

In early 2006, Cascio established Open The Future as his online home, a title based on his WorldChanging essay, The Open Future.

Selected by Foreign Policy magazine as one of the Top 100 Global Thinkers of 2009, Cascio writes about the intersection of emerging technologies, environmental dilemmas, and cultural transformation, specializing in the design and creation of plausible scenarios of the future. His work focuses on the importance of long-term, systemic thinking, emphasizing the power of openness, transparency and flexibility as catalysts for building a more resilient society.

Cascio’s work appears in publications as diverse as Metropolis, the Atlantic Monthly, The Wall Street Journal, and Foreign Policy. He has been featured in multiple documentaries discussing social and environmental futures, including National Geographic Television’s SIX DEGREES, its 2008 program on the effects of global warming, the 2010 Canadian Broadcasting Corporation feature SURVIVING THE FUTURE, and the 2013 independent film FIXED: THE SCIENCE/FICTION OF HUMAN AUGMENTATION. He has also been featured in several science-oriented television documentary series.

Cascio currently serves as Director of Impacts Analysis for the Center for Responsible Nanotechnology. He is a Senior Fellow at the Institute for Ethics and Emerging Technologies. Cascio was a speaker on the “On The Edge of Independent User-Creation In Gamespace” panel at the 2007 SXSW Interactive Festival. He is a Research Fellow at the Institute for the Future, where in 2008, together with Jane McGonigal, he helped create and administer the large-scale collaborative multiplayer game Superstruct, an advanced strategy for engaging many other hopeful thinkers in the pursuit of possible strategies and positive outcomes for a proposed future scenario set in 2019.
In 2006, Cascio presented a talk at the TED conference “The Future We Will Create” in Monterey, California, outlining possible solutions for the emerging climate and energy crises.

The Singularity & Prediction – Can there be an Intelligence Explosion? – Interview with Marcus Hutter

Can there be an Intelligence Explosion?  Can Intelligence Explode?
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. What could it mean for intelligence to explode?
We need to provide a more careful treatment of what intelligence actually is, separate a speed explosion from an intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.
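A back-of-the-envelope argument (a standard sketch, not a result quoted from the paper) shows why separating speed from intelligence matters: suppose each machine generation designs its successor a factor $k > 1$ faster than it was itself designed, with the first design taking time $T$. The total time for infinitely many generations is then finite:

```latex
\sum_{n=0}^{\infty} \frac{T}{k^{n}} \;=\; T \cdot \frac{1}{1 - 1/k} \;=\; \frac{T\,k}{k-1} \;<\; \infty \qquad (k > 1)
```

So a pure speed explosion compresses unbounded progress into bounded time even if no generation is qualitatively “smarter” than the last – which is why the question of whether intelligence itself, not just speed, can explode needs separate treatment.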

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. – Irving John Good, ‘Speculations Concerning the First Ultraintelligent Machine’ (1965)

Paper: M. Hutter, ‘Can Intelligence Explode?’, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.

See also:

High Impact Technologies with Andrew Barron

Well that’s an open-ended question: What technologies will have high impact in the future?
I think what we are seeing at the moment – and we are seeing it quite rapidly – is the fusion of the biological sciences and the information sciences, period – so it goes beyond AI.
We’re seeing a capacity to manipulate and rewrite genomes – again, we actually need AI involvement to do that properly – but we really are truly seeing a fusion of the biological and information sciences which is opening up absolutely transformative technologies – that we probably can’t quite properly predict or name currently – but I imagine that the future will see explosive growth in this area and the emergence of disciplines that we can’t even imagine currently.

New Capacities to Manipulate Genomes
Yep.. exactly… exactly – so new capacities to manipulate genomes – and equally we are getting much smarter about the risks of that and much more careful with that – but we are realizing also that the genome itself is highly self-organized and massively data-heavy – and yet this fusion of biology and information sciences is liberating entirely new disciplines – and it’s happening at such a pace.  I mean, just in terms of my life as a scientist – we’ve gone from – when I was at school we were told that the human genome was impossible – informatically impossible – there would never be enough computing power in the world to sequence the human genome.  Now we’re sequencing genomes for just over $1000 very very quickly and easily – our challenge now is to be intelligent in what to do with that data.

Significant Strides in Technology
What’s interesting about the iPhone is not the technology itself – it’s the way that it has changed human behavior.  So we’ve suddenly adapted very rapidly to a carryable device that enables us to have immediate communication / immediate access to databases and reference libraries – and the capacity to store endless amounts of images if we choose to do so – and we’ve adapted to that seamlessly – to the point where people feel lost and panic if their phone is broken or taken away from them.  That’s the more interesting impact of the iPhone – and I think what that says is that we’re going to see humans adapt very quickly and easily to other forms of wearable or insertable technologies – I think we’ve shown by the iPhone example that we have a capacity to embrace that kind of change – if it offers convenience and ease and improves our connectivity and quality of life.
In terms of technology, though, I think that the biggest strides will come from research that fuses biology and technology – I think that we are on the cusp of that – the more interesting technological changes will come not through simple technology but through an understanding of how our brains work – by understanding the human brain.  If we can actually crack that and then interface it with technology, that will yield completely transformative technological solutions.
So in the far future could we see humans being a mixture of technological and organic solutions – would we basically see a re-imagining of humanity in a far future?  Again, I see no reason why not, in a far future.


Andrew Barron is an Associate Professor in the Department of Biological Sciences at Macquarie University. With his team at Macquarie they are exploring the neurobiology of major behavioural systems such as memory, goal-directed behaviour and stress from a comparative and evolutionary perspective. In 2015 Andrew was awarded an ARC Future Fellowship to develop a computational model of the honey bee brain.

Andrew’s PhD (Department of Zoology, University of Cambridge 1999) considered the possibility of the retention of memory through metamorphosis in Drosophila. Prior to his move to Macquarie in 2007 Andrew had the opportunity to work with and be mentored by Prof. Ben Oldroyd (University of Sydney), Prof. Gene Robinson (University of Illinois), Prof. Mandayam Srinivasan and Prof. Ryszard Maleszka (Australian National University).


Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Should science and society welcome ‘the singularity’ – the idea of the hypothetical moment in time when artificial intelligence surpasses human intelligence?
The discussion has been growing over decades, institutes dedicated to solving AI friendliness have popped up, and more recently the ideas have found popular advocates. Super-intelligent machines could certainly help solve classes of problems that humans struggle with – but, if not designed well, they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face risks involving biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that a) we should be exactly sure what we want before we let the AI genie out of the bottle, and b) it’s a technological problem in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity. We shouldn’t just bumble into it unprepared. All technologies have been double-edged swords – in the past we learned from mistakes (e.g. with out-of-control fires) but with AI we may only get one chance.

Harry Shum says we should be focussing on what we believe we can develop with AI in the next few decades. We find it difficult to talk about AGI. Most of the social fears are around killer robots.

Maggie Boden

Maggie Boden relays an audience question: how will AI cope with our lack of well-developed ethical and moral norms?

Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t get human values right enough, one may end up cooking its owner’s cat – this could irreparably tarnish the AI and home-robot industries.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals – so we can’t ignore the question of what goals to give the system if we are building highly intelligent AI. We need to make AI systems understand what humans really want, not what they say they want.

Harry Shum says that the important ethical questions for AI systems concern data and user privacy.

Panelists: Harry Shum (Microsoft Research EVP of Tech), Max Tegmark (Cosmologist, MIT) Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).

This debate is from the 2015 edition of the meeting, held in Gothenburg, Sweden on 9 Dec.

AGI Progress & Impediments – Progress in Artificial Intelligence Panel

Panelists: Ben Goertzel, David Chalmers, Steve Omohundro, James Newton-Thomas – held at the Singularity Summit Australia in 2011

Panelists discuss approaches to AGI, progress and impediments now and in the future.
Ben Goertzel:
Brain emulation: broad-level roadmap simulation. Bottleneck: lack of imaging technology – we don’t know what level of precision we need to reverse engineer biological intelligence. Ed Boyden – optical brain imaging.
Not via brain emulation (engineering / computer science / cognitive science): the bottleneck is funding. People in the field believe they know how to do it. To prove this, they need to integrate their architectures, which looks like a big project – it takes a lot of money, but not as much as something like Microsoft Word.

David Chalmers (time 03:42):
We don’t know which of the two approaches will get there first, though what form the singularity takes will likely depend on the approach we use to build AGI. We don’t understand the theory yet. Most don’t think we will have a perfect molecular scanner that scans the brain and its chemical constituents. 25 years ago David Chalmers worked in Douglas Hofstadter’s AI lab, but his expertise in AI is now out of date. As for getting to human-level AI by brute force or through cognitive psychology, he knows that cognitive science is not in very good shape. A third approach is a hybrid of rough brain augmentation (through technology we are already using, like iPads and computers), technological extension and uploading. If brain augmentation through technology and uploading is the first step towards a Singularity, then humans are included in the equation along with humanity’s values, which may help shape a Singularity with those values.

Steve Omohundro (time 08:08):
Early in the history of AI there was a distinction between the Neats and the Scruffies. John McCarthy (Stanford AI Lab) believed in mathematically precise logical representations – this shaped a lot of what Steve thought about how programming should be done. Marvin Minsky (MIT AI Lab) believed in exploring neural nets and self-organising systems – the approach of throwing things together to see how they self-organise into intelligence. Both approaches are needed: the logical, mathematically precise, neat approach – and the probabilistic, self-organising, fuzzy, learning approach, the scruffy. They have to come together. Theorem proving without any explorative aspect probably won’t succeed. Purely neural-net-based simulations can’t represent semantics well; we need to combine systems with full semantics and systems with the ability to adapt to complex environments.

James Newton-Thomas (time 09:57)
James has been playing with neural nets and has been disappointed with them; he thinks that augmentation is the way forward. The AI problem is going to be easier to solve if we are smarter when we solve it. Conferences such as this help infuse us with a collective empowerment of the individuals. There is an impediment – we are already being dehumanised by our iPads: the reason we have a conversation with others is a fact about our being part of a group, not just about the information that could be looked up via an iPad. We need to be careful in our approach so that we are able to maintain our humanity whilst gaining the advantages of augmentation.

General Discussion (time 12:05):
David Chalmers: We are already becoming cyborgs in a sense by interacting with tech in our world; the more literal cyborg approach is being worked on now, though we are not yet at the point where the technology is commercialized enough to allow, in principle, a strong literal cyborg approach. Ben Goertzel: We could progress with some form of brain vocalization (picking up words directly from the brain), allowing us to think a Google query and have the results directly added to our minds – thus bypassing our low-bandwidth communication and getting at the information directly in our heads. To do all this …
Steve Omohundro: EEG is gaining a lot of interest to help with the Quantified Self – brain interfaces to help measure things about their body (though the hardware is not that good yet).
Ben Goertzel: Use of BCIs for video games – they can detect whether you are aroused and paying attention, though the resolution is very coarse – it is hard to get fine-grained brain-state information through the skull. Cranial jacks will get more information. Legal systems are an impediment.
James NT: Alan Snyder uses time-varying magnetic fields in helmets to shut down certain areas of the brain, which effectively makes people smarter in narrow domains of skill. This can provide an idiot-savant ability at the cost of the ability to generalize: a brain that becomes too specific at one task does so at the cost of others – the process of generalization.

Ben Goertzel, David Chalmers, Steve Omohundro – A Thought Experiment

Metamorphogenesis – How a Planet can produce Minds, Mathematics and Music – Aaron Sloman

The universe is made up of matter, energy and information, interacting with each other and producing new kinds of matter, energy, information and interaction.
How? How did all this come out of a cloud of dust?
In order to find explanations we first need much better descriptions of what needs to be explained.

By Aaron Sloman
Abstract – and more info – Held at Winter Intelligence Oxford – Organized by the Future of Humanity Institute

Aaron Sloman

This is a multi-disciplinary project attempting to describe and explain the variety of biological information-processing mechanisms involved in the production of new biological information-processing mechanisms, on many time scales, between the earliest days of the planet with no life, only physical and chemical structures, including volcanic eruptions, asteroid impacts, solar and stellar radiation, and many other physical/chemical processes (or perhaps starting even earlier, when there was only a dust cloud in this part of the solar system?).

Evolution can be thought of as a (blind) Theorem Prover (or theorem discoverer).
– Proving (discovering) theorems about what is possible (possible types of information, possible types of information-processing, possible uses of information-processing)
– Proving (discovering) many theorems in parallel (including especially theorems about new types of information and new useful types of information-processing)
– Sharing partial results among proofs of different things (Very different biological phenomena may share origins, mechanisms, information, …)
– Combining separately derived old theorems in constructions of new proofs (One way of thinking about symbiogenesis.)
– Delegating some theorem-discovery to neonates and toddlers (epigenesis/ontogenesis). (Including individuals too under-developed to know what they are discovering.)
– Delegating some theorem-discovery to social/cultural developments. (Including memes and other discoveries shared unwittingly within and between communities.)
– Using older products to speed up discovery of new ones (Using old and new kinds of architectures, sensori-motor morphologies, types of information, types of processing mechanism, types of control & decision making, types of testing.)

The “proofs” of discovered possibilities are implicit in evolutionary and/or developmental trajectories.

They demonstrate the possibility of:
– development of new forms of development,
– evolution of new types of evolution,
– learning new ways to learn,
– evolution of new types of learning (including mathematical learning: working things out without requiring empirical evidence),
– evolution of new forms of development of new forms of learning (why can’t a toddler learn quantum mechanics?),
– new forms of learning supporting new forms of evolution,
– new forms of development supporting new forms of evolution (e.g. postponing sexual maturity until mate-selection, mating and nurturing can be influenced by much learning),
…and ways in which social/cultural evolution adds to the mix.

These processes produce new forms of representation, new ontologies and information contents, new information-processing mechanisms, new sensory-motor morphologies, new forms of control, new forms of social interaction, new forms of creativity, … and more. Some may even accelerate evolution.

A draft growing list of transitions in types of biological information-processing.

An attempt to identify a major type of mathematical reasoning with precursors in perception and reasoning about affordances, not yet replicated in AI systems.

Even in microbes I suspect there’s much still to be learnt about the varying challenges and opportunities faced by microbes at various stages in their evolution, including new challenges produced by environmental changes and new opportunities (e.g. for control) produced by previous evolved features and competences — and the mechanisms that evolved in response to those challenges and opportunities.

Example: which organisms were first able to learn about an enduring spatial configuration of resources, obstacles and dangers, only a tiny fragment of which can be sensed at any one time?
What changes occurred to meet that need?

Use of “external memories” (e.g. stigmergy)
Use of “internal memories” (various kinds of “cognitive maps”)

More examples to be collected here.