Vernor Vinge on the Turing Test, Artificial Intelligence

Preface

On the coat-tails of the blockbuster film “The Imitation Game” I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to the idea that the Turing Test may someday show that machines are (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author well known for his Hugo Award-winning novels and novellas* and for his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

 

Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen. What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.

 

AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_ in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.

 

AF: Is the human brain essentially a computer?

VV: Probably yes, but if not the lack can very likely be made up for with machine improvements that we humans can devise.

 

AF: Even AI critics John Searle and Hubert Dreyfus (author of “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)

 

AF: Do you think Alan Turing’s reasons for believing in the potential of AI differ from your own and those of other modern-day theorists? If so, in what ways?

VV: My guess is there is not much difference.

 

AF: Have Alan Turing and his work influenced your writing? If they have, how so?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science fiction I was reading! Of course, those folks were often influenced by what was going on in the science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.

 

AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era”?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.

 

AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).

 

AF: Your first novella Bookworm, Run! was themed around brute-forcing simpler-than-human intelligence up to super-intelligence (in it a chimpanzee’s intelligence is amplified). You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute-forcing simple cognitive models? If so, do you think Super-Intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity in the super-intelligence (though some find that a very scary possibility in itself).

 

The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like bridge between reductionism and the inner feelings most people have about their own self-awareness. Bravo Dr. Turing!

 

AF: Is a text conversation ever a valid test for intelligence? Is black-box testing enough for a valid test of intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? – see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test was very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.

 

AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind):
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
And he concludes that humans are not reducible to Turing machines. Do you agree with Roger’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.
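For reference, here is the skeleton of the disputed argument – a standard reconstruction of the Lucas–Penrose argument (a paraphrase, not wording from the interview), together with the usual objection that Vinge’s doubt echoes:

```latex
% Lucas-Penrose argument, schematically (a reconstruction):
% 1. For any consistent formal system F containing arithmetic (equivalently,
%    the program of a theorem-proving Turing machine), Goedel gives a
%    sentence G_F such that F never proves G_F, yet G_F is true:
F \ \text{consistent} \;\Longrightarrow\; \exists\, G_F :\; F \nvdash G_F
\ \text{and}\ \mathbb{N} \models G_F.
% 2. Premise: a human mathematician can "see" that G_F is true.
% 3. Conclusion: the mathematician is not equivalent to F -- for any F.
% Standard objection: step 2 silently assumes we can know that F is
% consistent -- something Goedel's second theorem denies F about itself,
% and something unverified for an object as opaque as a human mind.
```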

 

AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂

 

AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time, seems to have been accomplished.

 

AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group mind projects.

 

AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.

 

AF: The Turing Test seems like a competitive sport, though some interpretations of the Turing Test set conditions which seem quite low. The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.

 

AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will call for various tests, but they may look more like classical benchmark tests.

 

Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.

 

AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.

 

AF: If you had a TARDIS and could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI? What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.

 

AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.

 

Implications

AF: What opportunities could we miss if we are not well prepared (this includes opportunities for risk mitigation)?

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human-equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognize the issues, they can form a bridge across to the more powerful beings to come.

 

AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react and accommodate to.  To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.

 

AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence – paths, dangers, strategies’?

VV: Yes. I think it’s an excellent discussion especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

Notes:
* Hugo Award-winning (and nominated) novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), The Cookie Monster (2004), and The Peace War (1984).

Also see video interview with Vernor Vinge on the Technological Singularity.

Why did Sam Altman join OpenAI as CEO?

Sam Altman leaves role as president at YCombinator and joins OpenAI as CEO – why?

Elon Musk created OpenAI to ensure that artificial intelligence, especially powerful artificial general intelligence (AGI), is “developed in a way that is safe and is beneficial to humanity” – it’s an interesting bet, because AGI doesn’t exist yet, and the tech industry’s forecasts about when AGI will be realised span a wide spectrum, from relatively soon to perhaps never.

We are trying to build safe artificial general intelligence. So it is my belief that in the next few decades, someone; some group of humans, will build a software system that is smarter and more capable than humans in every way. And so it will very quickly go from being a little bit more capable than humans, to something that is like a million, or a billion times more capable than humans… So we’re trying to figure out how to do that technically, make it safe and equitable, share the benefits of it – the decision making of it – over the world…Sam Altman

Sam and others believe that developing AGI is a large project which won’t be cheap – it could require upwards of billions of dollars “in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers”. OpenAI was once a non-profit org, but recently it restructured as a for-profit with caveats. Sam tells investors that the specifics of how return on investment will work in the short term aren’t clear, though ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’

So, first create AGI and then use it to make money… But how much money?

Capped profit at 100x investment – then excess profit goes to the rest of the world. 100x is quite a high bar, no? The thought is that AGI could be so powerful it could…

“maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”
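As a worked example of how such a cap operates (hypothetical numbers, my illustration rather than OpenAI’s published terms):

```latex
\text{invest } \$10\text{M} \;\Rightarrow\; \text{returns capped at } 100\times = \$1\text{B};
\qquad \text{profit beyond } \$1\text{B} \;\rightarrow\; \text{everyone else.}
```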

If we take the high standards of the Future of Humanity Institute* for due diligence in pursuing safe AI – are these standards being met at OpenAI? While Sam seems to have some sympathy for the arguments for these standards, he seems to believe it’s more important to focus on the societal consequences of superintelligent AI. Perhaps convincing key players of this in the short term will help incubate an environment where it’s easier to pursue strict safety standards for AGI development.

I really do believe that the work we are doing at OpenAI will not only far eclipse the work I did at YC, but any of the work anyone in the tech industry does…Sam Altman

See this video (from the approx. 25:30 mark onwards)

 

* See Nick Bostrom’s book ‘Superintelligence’

Reason – Philosophy Of Anti Aging: Ethics, Research & Advocacy

Reason was interviewed at the Undoing Aging conference in Berlin 2019 by Adam Ford – focusing on philosophy of anti-aging, ethics, research & advocacy. Here is the audio!

Topics include philosophical reasons to support anti-aging, high-impact research (senolytics etc), convincing existence proofs that further research is worth doing, how AI can help and how human research (bench-work) isn’t being replaced by AI at the moment or in the foreseeable future, suffering mitigation and cause prioritization in Effective Altruism – how the EA movement sees anti-aging and why it should advocate for it – population effects (financial & public health) of an aging population, and the ethics of solving aging as a problem… and more.

Reason is the founder and primary blogger at FightAging.org
 

Jerry Shay – The Telomere Theory of Ageing – Interview At Undoing Ageing, Berlin, 2019

“When telomeres get really short, that could lead to a DNA damage signal and cause cells to undergo a phenomenon called ‘replicative senescence’… where cells can secrete things that are not necessarily very good for you..”

Why is it that immune cells don’t work as well in older age?

Listen to the interview here

Jerry and his team compared a homogeneous group of centenarians in northern Italy to 80-year-olds and 30-year-olds – and tested their immune cells (T-cells) for function (through RNA sequencing). What was observed was that all the young people clustered apart from most of the old people, but the centenarians didn’t cluster in any one spot. It was found that the centenarians who clustered alongside the younger cohorts had better telomere length.

Out of 7 billion people on earth, there are only about half a million centenarians – most of them are frail – though the ones with longer telomeres and more robust T-cell physiology seem to be quite different to the frail centenarians. What usually happens is that when telomeres wear down, the DNA in the cell gets damaged, triggering a DNA damage response. From this, Jerry and his team made a jump in logic – maybe there are genes that are repressed when the telomeres are long, and activated when the telomeres shorten – circumventing the need for a DNA damage response. What is interesting is that they found genes really close to the telomeres (cytokines – inflammatory response genes such as TNF-alpha, interleukin-1, etc.) that are being activated in humans – through a process called ‘Telomere Looping’. As we grow and develop our telomeres get longer, and at a certain length they start silencing certain inflammation genes; then as we age some of these genes get activated – this is sometimes referred to as the ‘Telomere Clock’. Centenarians who are healthy maintain longer telomeres and don’t have these inflammation genes activated.

 

During early fetal development (12-18 weeks) telomerase gets silenced – it’s always been thought that this was to stop early onset of cancer – but Dr Shay asked, ‘why is it that all newborns have about the same length of telomeres?’ – and it’s not just in humans, it’s in other animals like whales, elephants, and many large long-lived mammals – and it doesn’t occur in smaller mammals like mice, rats or rabbits. The concept is that when the telomere is long enough, it loops over and silences its own gene, which stays silent until we are older (and in need of it again to help prevent cancer).

This Telomere Looping probably evolved as part of antagonistic pleiotropy – where things that provide protection or advantage early in life may have unpredicted negative consequences later in life. This is what telomerase is for – we as humans need it in very early development, as do large long-lived mammals, along with a mechanism to shut it off – then at a later age it can be activated again to fight against cancer.

 

There is a fair amount of evidence for accumulated damage as hallmarks for ageing – can we take a damage repair approach to rejuvenation medicine?

Telomere spectrum disorders, or telomeropathies, are human diseases of telomere dysfunction – diseases like idiopathic pulmonary fibrosis in adults and dyskeratosis congenita in young children, who are born with reduced amounts of telomeres and telomerase – they get age-related diseases very early in life. Can they be treated? Perhaps through gene therapy, or by transiently elongating their telomeres. But can this be applied to the general population too? People don’t lose their telomeres at the same rate – we know it’s possible for people to keep their telomeres long for 100 years or more – it’s just not yet known how. It could be luck; more likely it has a lot to do with genetics.

 

Ageing is complex – no one theory is going to explain everything about ageing – the telomere hypothesis of ageing perhaps accounts for about 5% or 10% of ageing on average – though understanding it well enough might give people an extra 10% of healthy life. Eventually it will be all about personalised medicine – with genotyping we will be able to say you have about a 50% chance of bone marrow failure when you’re 80 years old – and if so you may be a candidate for bone marrow rejuvenation.

What is possible in the next 10 years?

 

Inflammation is highly central to causing age-related disease. Chronic inflammation can lead to a whole spectrum of diseases. We already have drugs for subtle low-grade inflammation – TNF blockers (like Humira and Enbrel) which subtly reduce inflammation – and people can go into remission from many diseases after taking them.

There are about 40 million people on Metformin in the USA – it may help reduce the consequences of ageing, and this and other drugs like it are safe drugs. If we can find further safe drugs to reduce inflammation etc., this could go a long way – Aspirin perhaps (it’s complicated) – and it doesn’t take much to get a big bang out of a little intervention. The key to all this is safety – we don’t want to do any harm – metformin and Aspirin have been proven to be safe over time – now we need to learn how to repurpose them to specifically address the ageing problem.

 

Historically we have more or less ignored the fundamental problem of ageing and targeted specific diseases – but by the time you are diagnosed, it’s difficult to treat the disease – by the time you have been diagnosed with cancer, it’s likely so far advanced that it’s difficult to stop the eventual outcomes.   The concept of intervening in the ticking clock of ageing is becoming more popular now. If we can intervene early in the process we may be able to mitigate downstream diseases.

Jerry has been working on what they call a ‘Telomerase Mediated Inhibitor’ (see more about telomerase mediation here) – “it shows amazing efficacy in reducing tumor burden and improving immune cell function at the same time – it gets rid of the bad immune cells in the micro-environment, and guess what? The tumors disappear – so I think there’s ways to take advantage of the new knowledge of ageing research and apply it to diseases – but I think it’s going to be a while before we think about prevention.”

Unfortunately in the USA, and really globally, “people want to have their lifestyles the way they want them, and when something goes wrong, they want the doctor to come and give them a pill to fix the problem instead of taking personal responsibility and saying that what we should be doing is preventing it in the first place.” We all know that prevention is important, though most don’t want to practise prevention over the long haul.

 

The goal of all this is not necessarily to live longer, but to live healthier – we now know that the costs associated with intervening in the pathologies associated with ageing are enormous. Someone said that 25% of Medicare costs in the USA go to treating people who are on dialysis – that’s huge. If we could compress the number of years of end-of-life morbidities into a smaller window, it would pay for itself over and over again. So the goal is to increase healthspan and reduce the long period of chronic diseases associated with ageing. We don’t want this to be limited to a select subgroup who have access to future regenerative medicine – there are many people in the world without resources or access at this time – we hope that will change.

Jerry’s goal is to take some of the discovered bio-markers of both healthy and less healthy older people – and test them out on larger population numbers – though it’s very difficult to get the funding one needs to conduct large population studies.

Keith Comito on Undoing Ageing

How can solving aging reduce suffering? What are some common objections to the ideas of solving aging? How does Anti-Aging stack up against other cause areas (like climate change, or curing specific diseases)? How can we better convince people of the virtues of undoing the diseases of old age?

Keith Comito, interviewed by Adam Ford at the Undoing Aging 2019 conference in Berlin, discusses why solving the diseases of old age is a powerful cause. Note: the video of this interview will be available soon.

Keith is a computer programmer and mathematician whose work brings together a variety of disciplines to provoke thought and promote social change. He has created video games, bioinformatics programs, musical applications, and biotechnology projects featured in Forbes and NPR.

In addition to developing high-profile mobile applications such as HBO Now and MLB AtBat, he explores the intersection of technology and biology at the Brooklyn community lab Genspace, where he helped to create games which allow players to direct the motion of microscopic organisms.

Seeing age-related disease as one of the most profound problems facing humanity, he now works to accelerate and democratize longevity research efforts through initiatives such as Lifespan.io.

He earned a B.S. in Mathematics, a B.S. in Computer Science, and an M.S. in Applied Mathematics at Hofstra University, where his work included analysis of the LMNA protein.

Future Day Melbourne 2019

Future Day is nigh – sporting a spectacular line of speakers!

Agenda

5.30 – Doors open – meet and greet other attendees
5.45 – Introduction
6.00 – Drew Berry – “The molecular machines that create your flesh and blood” [abstract]
6.45 – Brock Bastian – “Happiness, culture, mental illness, and the future self” [abstract]
7.30 – Lynette Plenderleith – “The future of biodiversity starts now” [abstract]
8.15 – Panel: Drew Berry, Brock Bastian, Lynette Plenderleith
Join the Meetup

Future Day is on the 21st of March – sporting a spectacular line-up of speakers ranging from Futurology, Philosophy, Biomedical Animation & Psychology!

Venue: KPMG Melbourne – 727 Collins St [map link] – Collins Square – Level 36 Room 2

Limited seating to about 40, though if there is overflow, there will be standing room.

PLEASE have a snack/drink before you come. Apparently we can’t supply food/drink at KPMG, so eat something beforehand – or work up an appetite…
Afterwards we will sojourn at a local pub for some grub and ale.

I’m looking forward to seeing people I have met before, and some new faces as well.

Drew Berry – Biomedical Animator @ The Walter and Eliza Hall Institute of Medical Research
Brock Bastian – Melbourne School of Psychological Sciences, University of Melbourne

Check out the Future Day Facebook Group, and the Twitter account!

Abstracts

The molecular machines that create your flesh and blood

By Drew Berry – Abstract: A profound technological revolution is underway in biomedical science, accelerating the development of new therapies and treatments for the diseases that afflict us and also transforming how we perceive ourselves and the nature of our living bodies. Coupled to the accelerating pace of scientific discovery is an ever-expanding need to explain to the public, and develop appreciation of, our new biomedical capabilities – to prepare the public for the tsunami of new knowledge and medicines that will impact patients, our families and our community.
Drew Berry will present the latest visualisation experiments in creating cinematic movies and real-time interactive 3D molecular worlds that reveal the current state of the art in scientific discovery, focusing on the molecular engines that convert the food you eat into the chemical energy that powers your cells and tissues. Leveraging the incredible power of game GPU technology, vast molecular landscapes can be generated for 3D 360-degree cinema for museum and science centre dome theatres, interactive exploration in VR, and Augmented Reality education via student mobile phones.

 

Happiness, culture, mental illness, and the future self

By Brock Bastian – Abstract: What is the future of human happiness and wellbeing? We are currently treating mental illness at the level of individuals, yet rates of mental illness are not going down, and in some cases continue to rise. I will present research indicating that we need to start tackling this problem at the level of culture. The cultural value placed on particular emotional states may play a role in how people respond to their own emotional worlds. Furthermore, I will present evidence that basic cultural differences in how we explain events, predict the future and understand ourselves may also impact the effectiveness of our capacity to deal with emotional events. This suggests that we need to begin to take culture seriously in how we treat mental illness. It also provides some important insights into what kinds of thinking styles we might seek to promote, and how we might seek to understand and shape our future selves. This also has implications for how we might find happiness in a world increasingly characterized by residential mobility, weak ties, and digital rather than face-to-face interaction.

 

The future of biodiversity starts now

By Lynette Plenderleith – Abstract: Biodiversity is vital to our food security, our industries, our health and our progress. Yet never before has the future of biodiversity been so under threat as we modify more land, burn more fossil fuels and transport exotic organisms around the planet. But in the face of catastrophic biodiversity collapse, scientists, community groups and not-for-profits are working to discover new ways to conserve biodiversity, for us and the rest of life on our planet. From techniques as simple as preserving habitat to complex scientific techniques like de-extinction, Lynette will discuss our options for the future to protect biodiversity, how the future of biodiversity could look and why we should start employing conservation techniques now. Our future relies on the conservation of  biodiversity and its future rests in our hands. We have the technology to protect it.

 

Biographies

Dr Drew Berry

Dr Drew Berry is a biomedical animator who creates beautiful, accurate visualisations of the dramatic cellular and molecular action that is going on inside our bodies. He began his career as a cell biologist and is fluent navigating technical reports, research data and models from scientific journals. As an artist, he works as a translator, transforming abstract and complicated scientific concepts into vivid and meaningful visual journeys. Since 1995 he has been a biomedical animator at the Walter and Eliza Hall Institute of Medical Research, Australia. His animations have exhibited at venues such as the Guggenheim Museum, MoMA, the Royal Institute of Great Britain and the University of Geneva. In 2010, he received a MacArthur Fellowship “Genius Grant”.

Recognition and awards

• Doctorate of Technology (hc), Linköping University Sweden, 2016
• MacArthur Fellowship, USA 2010
• New York Times “If there is a Steven Spielberg of molecular animation, it is probably Drew Berry” 2010
• The New Yorker “[Drew Berry’s] animations are astonishingly beautiful” 2008
• American Scientist “The admirers of Drew Berry, at the Walter and Eliza Hall Institute in Australia, talk about him the way Cellini talked about Michelangelo” 2009
• Nature Niche Prize, UK 2008
• Emmy “DNA” Windfall Films, UK 2005
• BAFTA “DNA Interactive” RGB Co, UK 2004

Animation http://www.wehi.tv
TED http://www.ted.com/talks/drew_berry_animations_of_unseeable_biology
Architectural projection https://www.youtube.com/watch?v=m9AA5x-qhm8
Björk video https://www.youtube.com/watch?v=Wa1A0pPc-ik
Wikipedia https://en.wikipedia.org/wiki/Drew_Berry

Assoc Prof Brock Bastian

Brock Bastian is a social psychologist whose research focuses on pain, happiness, and morality.

In his search for a new perspective on what makes for the good life, Brock Bastian has studied why promoting happiness may have paradoxical effects; why we need negative and painful experiences in life to build meaning, purpose, resilience, and ultimately greater fulfilment in life; and why behavioural ethics is necessary for understanding how we reason about personal and social issues and resolve conflicts of interest. His first book, The Other Side of Happiness, was published in January 2018.

 

The Other Side of Happiness: Embracing a More Fearless Approach to Living

Our addiction to positivity and the pursuit of pleasure is actually making us miserable. Brock Bastian shows that, without some pain, we have no real way to achieve and appreciate the kind of happiness that is true and transcendent.

Read more about The Other Side of Happiness

Dr. Lynette Plenderleith

Dr. Lynette Plenderleith is a wildlife biologist by training and is now a media science specialist, working mostly in television, with credits including children’s show WAC! World Animal Championships and Gardening Australia. Lynette is Chair and Founder of Frogs Victoria, President of the Victorian branch of Australian Science Communicators and occasional performer of live science-comedy. Lynette has a Ph.D. from Monash University, where she studied the ecology of native Australian frogs, a Master’s degree in the spatial ecology of salamanders from Towson University in the US and a BSc in Natural Sciences from Lancaster University in her homeland – the UK.
Twitter: @lynplen
Website: lynplen.com

 

 

The Future is not a product

It’s more exciting than gadgets with shiny screens and blinking lights.

Future Day is a way of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

How should Future Day be celebrated? That is for us to decide as the future unfolds!

  • Future Day could be adopted as an official holiday by countries around the world.
  • Children can do Future Day projects at school, exploring their ideas and passions about creating a better future.
  • Future Day costume parties — why not? It makes at least as much sense as dressing up to celebrate Halloween!
  • Businesses giving employees a day off from routine concerns, to think creatively about future projects
  • Special Future Day issues in newspapers, magazines and blogs
  • Use your imagination — that’s what the future is all about!

The Future & You

It’s time to create the future together!

Our aspirations are all too often sidetracked in this age of distraction. Lurking behind every unfolding minute is a random tangent with no real benefit for our future selves – so let’s ritualize our commitment to the future by celebrating it! Future Day is here to fill our attention economies with useful ways to solve the problems of arriving at desirable futures, & avoid being distracted by the usual gauntlet of noise we run every other day. Our future is very important – accelerating scientific & technological progress will change the world even more than it already has. While other days of celebration focus on the past – let’s face the future – an editable history of a time to come – a future that is glorious for everyone.

Videos from Previous Future Day Events / Interviews

The Ghost in the Quantum Turing Machine – Scott Aaronson

Interview on whether machines can be conscious with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out the interview segment “The Winding Road to Quantum Supremacy” with Scott Aaronson – covering progress in quantum computation, whether there are things that quantum computers could do that classical computers can’t, etc.

Transcript

Adam Ford: In ‘Could a Quantum Computer have Subjective Experience?‘ you speculate that the process has to fully participate in the arrow of time to be conscious, and this points to decoherence. If pressed, how might you try to formalize this?

Scott Aaronson: So yeah, I did write this kind of crazy essay five or six years ago that was called “The Ghost in the Quantum Turing Machine“, where I tried to explore a position that seemed to me to be mysteriously under-explored in all of the debates about ‘could a machine be conscious?’. We want to be thoroughgoing materialists, right? There’s no magical ghost that defies the laws of physics; brains are physical systems that obey the laws of physics just like any others.
But there is at least one very interesting difference between a brain and any digital computer that’s ever been built – and that is that the state of a brain is not obviously copyable; that is, not obviously knowable to an outside person well enough to predict what a person will do in the future, without having to scan the person’s brain so invasively that you would kill them. And so there is a sort of privacy, or opacity if you like, to a brain that there is not to a piece of code running on a digital computer.
And so there are all sorts of classic philosophical conundrums that play on that difference. For example, suppose that a human-level AI does eventually become possible and we have simulated people who are running inside of our computers – well, if I were to murder such a person in the sense of deleting their file, is that okay as long as I keep the backup somewhere? As long as I can just restore them from backup? Or what if I’m running two exact copies of the program on two computers next to each other – is that instantiating two consciousnesses? Or is it really just one consciousness, because there’s nothing to distinguish the one from the other?
So could I blackmail an AI into doing what I wanted by saying: even if I don’t have access to you as an AI, since I have your code, if you don’t give me a million dollars I’m going to create a million copies of your code and torture them? And – if you think about it – you are almost certain to be one of those copies, because there’s far more of them than there are of you, and they’re all identical!
So yeah, there’s all these puzzles that philosophers have wondered about for generations: the nature of identity – how does identity persist across time? can it be duplicated across space? – and somehow, in a world with copyable AIs, they would all become much more real!
And so one point of view that you could take is: well, what if I can predict exactly what someone is going to do? And I don’t mean just saying, as a philosophical matter, that I could predict your actions if I were a Laplace demon and knew the complete state of the universe – because I don’t, in fact, know the complete state of the universe – but imagine that I could do that as an actual practical matter: I could build an actual machine that would perfectly predict, down to the last detail, everything you would do before you had done it.
Okay, well then in what sense do I still have to respect your personhood? I mean, I could just say I have unmasked you as a machine; my simulation has every bit as much right to personhood as you do at this point – or maybe they’re just two different instantiations of the same thing.
So another possibility, you could say, is that maybe what we like to think of as consciousness only resides in those physical systems that, for whatever reason, are uncopyable – that if you tried to make a perfect copy then you would ultimately run into what we call the no-cloning theorem in quantum mechanics, which says that you cannot copy the exact physical state of an unknown system, for quantum mechanical reasons. And so this would suggest a view where personal identity is very much bound up with the flow of time; with things that happen that are evanescent; that can never happen again exactly the same way, because the world will never reach exactly the same configuration.
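As an aside, the no-cloning theorem Scott refers to has a two-line textbook proof from linearity – a sketch for reference (standard material, not from the interview):

```latex
% Suppose a single unitary U cloned arbitrary unknown states:
%   U(|psi>|0>) = |psi>|psi>   and   U(|phi>|0>) = |phi>|phi>.
% Unitaries preserve inner products, so
\langle\psi|\phi\rangle
  \;=\; \big(\langle\psi|\otimes\langle 0|\big)\,U^{\dagger}U\,\big(|\phi\rangle\otimes|0\rangle\big)
  \;=\; \langle\psi|\phi\rangle^{2},
% which forces the overlap to be 0 or 1: only identical or orthogonal
% states can be copied, never an arbitrary unknown state.
```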
A related puzzle: what if I took your consciousness, or took an AI, and ran it on a reversible computer? Now, some people believe that any appropriate simulation brings about consciousness – which is a position that you can take. But now what if I ran the simulation backwards – as I can always do on a reversible computer? What if I ran the simulation, I computed it and then I uncomputed it? Have I caused nothing to have happened? Or did I cause one forward consciousness, and then one backward consciousness – whatever that means? Did it have a different character from the forward consciousness?
But we know a whole class of phenomena that in practice can only ever happen in one direction in time – and these are thermodynamic phenomena; these are phenomena that create waste heat, create entropy, that take these small microscopic unknowable degrees of freedom and amplify them to macroscopic scale. And in principle those macroscopic records could become microscopic again. Like if I make a measurement of a quantum state – at least according to, let’s say, many-worlds quantum mechanics – in principle that measurement could always be undone. And yet in practice we never see those things happen, for basically the same reasons why we never see an egg spontaneously unscramble itself, or why we never see a shattered glass leap up to the table and reassemble itself; namely, these would represent vastly improbable decreases of entropy. And so the speculation was that maybe this sort of irreversibility, this increase of entropy that we see in all ordinary physical processes, and in particular in our own brains – maybe that’s important to consciousness.
Right, or to what we like to think of as free will – I mean, we certainly don’t have an example to say that it isn’t – but the truth of the matter is I don’t know. I set out all the thoughts that I had about it in this essay five years ago, and then having written it I decided that I’d had enough of metaphysics – it made my head hurt too much – and I was going to go back to the better-defined questions in math and science.

Adam Ford: In ‘Is Information Physical?’ you note that if a system crosses a Schwarzschild bound it collapses into a black hole – do you think this could be used to put an upper bound on the amount of consciousness in any given physical system?

Scott Aaronson: Well, I can decompose your question a little bit. What quantum gravity considerations let you do, it is believed today, is put a universal bound on how much computation can be going on in a physical system of a given size, and also on how many bits can be stored there. And the bounds are precise enough that I can just tell you what they are. It appears that a physical system that’s, let’s say, surrounded by a sphere of a given surface area can store at most about 10 to the 69 bits – or rather, 10 to the 69 qubits – per square meter of surface area of the enclosing boundary. And it has a similar limit on how many computational steps it can do over its whole history.
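For reference, the figure quoted here matches the holographic entropy bound; a quick back-of-the-envelope check (my arithmetic from standard constants, not part of the interview):

```latex
% Holographic bound: ~1 nat of entropy per 4 Planck areas of boundary.
\frac{S_{\max}}{A} \;=\; \frac{1}{4\,\ell_P^{2}\,\ln 2}
  \;\approx\; \frac{1}{4\,(1.616\times10^{-35}\,\mathrm{m})^{2}\,(0.693)}
  \;\approx\; 1.4\times10^{69}\ \text{bits per square meter},
```

in agreement with the “10 to the 69” figure above.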
So now I think your question kind of reduces to the question: can we upper-bound how much consciousness there is in a physical system – whatever that means – in terms of how much computation is going on in it, or in terms of how many bits are there? And that’s a little hard for me to think about, because I don’t know what we mean by amount of consciousness. Like, am I ten times more conscious than a frog? Am I a hundred times more conscious? I don’t know – some of the time I feel less conscious than a frog.
But I am sympathetic to the idea that there is some minimum of computational interestingness in any system that we would like to talk about as being conscious. So there is this ancient speculation of panpsychism, which would say that every electron, every atom is conscious – and to me that’s fine – you can speculate that if you want. We know nothing to rule it out; there are no physical laws attached to consciousness that would tell us that it’s impossible. The question is just: what does it buy you to suppose that? What does it explain? And in the case of the electron I’m not sure that it explains anything!
Now you could ask: does it even explain anything to suppose that we’re conscious? Maybe not – at least not for anyone beyond ourselves. There’s this ancient conundrum that we each know that we’re conscious, presumably, by our own subjective experience, and as far as we know everyone else might be an automaton – which, if you really think about it consistently, could lead you to become a solipsist. Alan Turing, in his famous 1950 paper that proposed the Turing Test, had this wonderful remark about it – which was something like: ‘A’ is liable to think that ‘A’ thinks while ‘B’ does not, while ‘B’ is liable to think ‘B’ thinks but ‘A’ does not. But in practice it is customary to adopt the polite convention that everyone thinks. It was a very British way of putting it. We adopt the polite convention that solipsism is false; that people – or any entities, let’s say – that can exhibit complex, goal-directed intelligent behaviors like ours are probably conscious like we are. And that’s a criterion that would apply to other people; it would not apply to electrons (I don’t think); and it’s plausible that there is some bare minimum of computation in any entity to which that criterion would apply.

Adam Ford: Sabine Hossenfelder had a scathing review of panpsychism recently – did you read that?

Scott Aaronson: If it was very recent then I probably didn’t read it – I did read an excerpt where she was saying that panpsychism is experimentally ruled out. If she was saying that, I don’t agree – I don’t even see how you would experimentally rule out such a thing. I mean, you’re free to postulate as much consciousness as you want on the head of a pin – I would just say, well, if it doesn’t have an empirical consequence; if it’s not affecting the world; if it’s not affecting the behavior of that head of a pin in a way that you can detect – then Occam’s razor just itches to slice it out from our description of the world. That’s the way that I would put it personally.
I put a detailed critique of integrated information theory (IIT), which is Giulio Tononi’s proposed theory of consciousness, on my blog, and my critique was basically this: Tononi comes up with a specific numerical measure that he calls ‘Phi’, and he claims that a system should be regarded as conscious if and only if the Phi is large. Now, the actual definition of Phi has changed over time – it’s changed from one paper to another, and it’s not always clear how to apply it, and there are many technical objections that could be raised against this criterion. But what I respect about IIT is that at least it sticks its neck out. It proposes a very clear criterion – much clearer than competing accounts do – to tell you which physical systems you should regard as conscious and which not.
Now, the danger of sticking your neck out is that it can get cut off – and indeed I think that IIT is not only falsifiable but falsified. Because as soon as this criterion is written down – this was the point I was making – it is easy to construct physical systems that have enormous values of Phi, much, much larger than a human has, that I don’t think anyone would really want to regard as intelligent, let alone conscious, or even very interesting.
My examples show that basically Phi is large if and only if your system has a lot of interconnection – if it’s very hard to decompose into two components that interact with each other only weakly – so that you have a high degree of information integration. And the point of my counterexamples was to say: well, this cannot possibly be the sole relevant criterion, because a standard error-correcting code, as is used for example on every compact disc, also has an enormous amount of information integration – but should we therefore say that every error-correcting code that gets implemented in some piece of electronics is conscious? And even more than that, a giant grid of logic gates just sitting there doing nothing would have a very large value of Phi – and we can multiply examples like that.
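To make the flavor of these counterexamples concrete, here is a toy sketch (mine, not Aaronson’s actual construction, and a crude cross-partition connectivity proxy rather than Tononi’s real Phi): in a densely wired parity-style network, even the best bipartition cuts many dependencies, so any integration-style measure stays high even though the system does nothing interesting.

```python
import itertools
import numpy as np

# Toy proxy only: real Phi is far more involved. This just shows why
# densely wired linear networks (parity checks in an error-correcting
# code, a big grid of XOR gates) resist decomposition into two weakly
# coupled parts, which is the property Phi rewards.
rng = np.random.default_rng(0)
n = 12
# dep[i, j] = 1 means element i's next state XORs in element j's state.
dep = rng.integers(0, 2, size=(n, n))

def cross_links(part_a):
    """Count dependencies crossing a bipartition (part_a vs the rest)."""
    part_b = [j for j in range(n) if j not in part_a]
    a, b = np.array(part_a), np.array(part_b)
    return dep[np.ix_(a, b)].sum() + dep[np.ix_(b, a)].sum()

# Analogue of the "minimum information partition": even the best split
# leaves many cross-partition dependencies, so integration stays high.
best = min(cross_links(list(c))
           for c in itertools.combinations(range(n), n // 2))
print("cross links under the best bipartition:", best)
```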
And so Tononi then posted a big response to my critique, and his response was basically: well, you’re just relying on intuition; you’re just saying these systems are not conscious because my intuition says that they aren’t – but that’s parochial; why should you expect a theory of consciousness to accord with your intuition? And he then just went ahead and said yes, the error-correcting code is conscious; yes, the giant grid of XOR gates is conscious – and if they have a thousand times larger value of Phi than a brain, then they are a thousand times more conscious than a human is. So the way I described it was: he didn’t just bite the bullet, he devoured a bullet sandwich with mustard. Which was not what I was expecting. But the claim that I’m saying ‘any scientific theory has to accord with intuition’ – I think that is completely mistaken; I think that’s really a mischaracterization of what I think.
I mean, I’ll be the very first to tell you that science has overturned common-sense intuition over and over and over. For example, temperature feels like an intrinsic quality of a material; it doesn’t feel like it has anything to do with motion, with atoms jiggling around at a certain speed – but we now know that it does. But when scientists first arrived at that modern conception of temperature in the eighteen hundreds, what was essential was that the new criterion at least agreed with the old criterion that fire is hotter than ice – so at least in the cases where we knew what we meant by hot or cold, the new definition agreed with the old definition. And then the new definition went further, to tell us many counterintuitive things that we didn’t know before – but at least it reproduced the way in which we were using words previously.
Even when Copernicus and Galileo discovered that the earth is orbiting the Sun, the new theory was able to account for our observation that we were not flying off the earth – it said that’s exactly what you would expect to have happened, because of these new principles of inertia and so on.
But if a theory of consciousness says that this giant blank wall or this grid is highly conscious just sitting there doing nothing – whereas even a simulated person, or an AI that passes the Turing Test, would not be conscious if it’s organized in such a way that it happens to have a low value of Phi – I say: okay, the burden is on you to prove to me that this Phi notion you have defined has anything whatsoever to do with what I was calling consciousness. You haven’t even shown me any cases where they agree with each other, from which I should extrapolate to the hard cases – the ones where I lack an intuition, like: at what point is an embryo conscious? Or when is an AI conscious? The theory seems to have gotten wrong the only things that it could possibly have gotten right, and so at that point I think there is nothing to compel a skeptic to say that this particular quantity Phi has anything to do with consciousness.

The Winding Road to Quantum Supremacy – Scott Aaronson

Interview on quantum computation with Scott Aaronson, theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott blogged about this and other segments of our interview – his blog is very popular and has way more comments than this one does – check it out.

Check out the interview segment “The Ghost in the Quantum Turing Machine” – covering whether a machine can be conscious, whether information is physical, and integrated information theory.

Transcript

Scott Aaronson: Okay so – Hi, I’m Scott Aaronson. I’m a computer science professor at the University of Texas at Austin, and my main interest is the capabilities and limits of quantum computers, and more broadly what computer science and physics have to tell each other. And I got interested in it, I guess, because it was hard not to be – because as a teenager it just seemed clear to me that the universe is a giant video game that just obeys certain rules, and so if I really wanted to understand the universe, maybe I could ignore the details of physics and just think about computation.
But then with the birth of quantum computing and the dramatic discoveries in the mid-1990s (like Shor’s algorithm for factoring huge numbers) it became clear that physics actually changes the basic rules of computation – so that was something that I felt I had to understand. And 20 years later we’re still trying to understand it, and we may also be able to build some devices that can outperform classical computers – namely quantum computers – and use them to do some interesting things.
But to me that’s really just icing on the cake; really I just want to understand how things fit together. Well, to tell you the truth, when I first heard about quantum computing (I think from reading some popular article in the mid-90s about Shor’s algorithm, which had only recently been discovered), my first reaction was: this sounds like obvious hogwash; this sounds like some physicists who just do not understand the first thing about computation, inventing some physics proposal that just tries every possible solution in parallel. None of these things are going to scale, and in computer science there have been decades of experience with that; of people saying: well, why don’t you build a computer using a bunch of mirrors? Or using soap bubbles? Or using folding proteins?
And there’s all kinds of ideas that on paper look like they could evaluate an exponential number of solutions in only a linear amount of time, but they’re always idealizing something. When you examine them carefully enough you find that the amount of energy blows up on you exponentially, or the precision with which you would need to measure becomes exponentially fine, or something else becomes totally unrealistic – and I thought the same must be true of quantum computing. But in order to be sure I had to read something about it.
So while I was working over a summer at Bell Labs, doing work that had nothing to do with quantum computing, my boss was nice enough to let me spend some time learning about and reading up on the basics of quantum computing – and that was really a revelation for me, because I accepted [that] quantum mechanics is the real thing. It is a thing of comparable enormity to the basic principles of computation – you can say the principles of Turing – and it is exactly the kind of thing that could modify some of those principles. But the biggest surprise of all, I think, was that despite not being a physicist, not having any skill with partial differential equations or the other tools of the physicist, I could actually understand something about quantum mechanics.
And ultimately, to learn the basic rules of how a quantum computer would work and start thinking about what they would be good for – quantum algorithms and things like that – it’s enough to be conversant with vectors and matrices. So you need to know a little bit of math, but not that much. You need to know linear algebra, okay, and that’s about it.
And I feel like this is a kind of secret that gets buried in almost all the popular articles; they make it sound like quantum mechanics is just this endless profusion of counterintuitive things. That it’s: particles can be in two places at once, and a cat can be both dead and alive until you look at it – and then why is that not just a fancy way of saying the cat’s either alive or dead and you don’t know which until you look? They never quite explain that part. And particles can have spooky action at a distance and affect each other instantaneously, and particles can tunnel through walls! It all sounds hopelessly obscure, and there’s no hope for anyone who’s not a PhD in physics to understand any of it.
But the truth of the matter is there’s this one counterintuitive hump that you have to get over, which is a certain change to, or generalization of, the rules of probability – and once you’ve gotten that, then all the other things are just different ways of talking about, or different manifestations of, that one change. And a quantum computer in particular is just a computer that tries to take advantage of this one change to the rules of probability that the physicists discovered in the 1920s was needed to account for our world. And so that was really a revelation for me – that even computer scientists and math people, people who are not physicists, can actually learn this and start contributing to it – yeah!
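(To make that one change concrete – and to back up the claim that vectors and matrices suffice – here is a minimal numpy sketch of my own, not code from the interview: a qubit’s state is a vector of complex amplitudes, a gate is a matrix, and amplitudes, unlike probabilities, can cancel.)

```python
# A minimal sketch (my illustration, not Aaronson's) of the "one change
# to the rules of probability": states carry complex amplitudes, which
# can cancel out, rather than non-negative probabilities.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                       # the state |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

once = H @ ket0        # amplitudes [0.707, 0.707]: a fair quantum "coin flip"
twice = H @ H @ ket0   # flip it again: the amplitudes interfere and cancel

print(np.abs(once) ** 2)   # measurement probabilities [0.5 0.5]
print(np.abs(twice) ** 2)  # [1. 0.] -- back to |0> with certainty,
                           # whereas a classical coin flipped twice stays random
```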

Adam Ford: So it’s interesting that often when you try to pursue an idea, the practical gets in the way – we try to get to the ideal without actually considering the practical – and they feel like enemies. Should we be letting the ideal be the enemy of the practical?

Scott Aaronson: Well, I think that from the very beginning it was clear that there is a theoretical branch of quantum computing, which is where you just assume you have as many of these quantum bits (qubits) as you could possibly need, and they’re perfect; they stay perfectly isolated from their environment, and you can do whatever local operations on them you might like, and then you just study how many operations you would need to factor a number, or solve some other problem of practical importance. And the theoretical branch is really the branch where I started out in this field and where I’ve mostly been ever since.
And then there’s the practical branch, which asks: what will it take to actually build a device that instantiates this theory – where we have qubits that are actually the energy levels of an electron, or the spin states of an atomic nucleus, or are otherwise somehow instantiated in the physical world? And they will be noisy, they will be interacting with their environment – we will have to make heroic efforts to keep them sufficiently isolated from their environments, which is needed in order to maintain their superposition states. How do we do that?
Well, we’re gonna need some kind of fancy error-correcting codes to do that, and there are theoretical questions there as well: how do you design those error-correcting codes?
But there are also practical questions: how do you engineer a system where the error rates are low enough that these codes can even be used at all – so that if you try to apply them, you won’t simply be creating more errors than you’re fixing? What should be the physical basis for qubits? Should it be superconducting coils? Should it be ions trapped in a magnetic field? Should it be photons? Should it be some new topological state of matter? Actually, all four of those proposals – and many others – are being pursued right now!
So I would say that until fairly recently in the field – like five years ago or so – the theoretical and the practical branches were pretty disjoint from each other; they were never enemies, so to speak. I mean, we might poke fun at each other sometimes, but we were never enemies. The field always sort of rose or fell as a whole, and we all knew that. But we just didn’t have a whole lot to say to each other scientifically, because the experimentalists were just trying to get one or two qubits to work well – and they couldn’t even do that much – and we theorists were thinking about: well, suppose you’ve got a billion qubits, or some arbitrary number, what could you do? And what would still be hard to do even then?
A lot of my work has actually been about the limitations of quantum computers – or, as I like to say, the study of what you can’t do even with computers that you don’t have. And only recently have the experimentalists finally gotten the qubits to work pretty well in isolation, so that now it finally makes sense to start to scale things up – not yet to a million qubits, but maybe to 50 qubits, maybe to 60, maybe to a hundred. This, as it happens, is what Google and IBM and Intel and a bunch of startup companies are trying to do right now. And some of them are hoping to have devices within the next year or two that might or might not do anything useful, but if all goes well we hope will at least be able to do something interesting – in the sense of something that would be challenging for a classical computer to simulate, and that at least proves the point that we can do something this way that is beyond what classical computers can do.
And so as a result the most nitty-gritty experimentalists are now actually talking to us theorists, because now they need to know – not just as a matter of intellectual curiosity, but as a fairly pressing practical matter – once we get 50 or 100 qubits working, what do we do with them? What do we do with them, first of all, that is hard to simulate classically? How sure are you that there’s no fast classical method to do the same thing? How do we verify that we’ve really done it, and is it useful for anything?
And ideally they would like us to come up with proposals that actually fit the constraints of the hardware that they’re building. You could say that eventually none of this should matter – eventually a quantum programmer should be able to pay as little attention to the hardware as a classical programmer today has to pay to the details of the transistors.
But in the near future, when we only have 50 or 100 qubits, you’re gonna have to make the maximum use of each and every qubit that you’ve got, and the actual details of the hardware are going to matter – and the result is that even we theorists have had to learn about these details in a way that we didn’t before.
There’s been a sort of coming together of the theoretical and practical branches of the field just in the last few years, and to me that has been pretty exciting.

Adam Ford: So you think we will have something equivalent to functional programming for quantum computing in the near future?

Scott Aaronson: Well, there actually has been a fair amount of work on the design of quantum programming languages. There are a bunch of them out there now that you can download and try out if you’d like. There’s one called Quipper, there’s another called Q# from Microsoft, and there are several others. Of course we don’t yet have very good hardware to run the programs on; mostly you can just run them in classical simulation, which naturally only works well for up to about 30 or 40 qubits, and then it becomes too slow. But if you would like to get some experience with quantum programming you can try these things out today, and many of them do try to provide higher-level functionalities, so that you’re not just doing the quantum analog of assembly-language programming, but can think in higher-level modules, or program functionally. I would say that in quantum algorithms we’ve mostly just been doing theory and we haven’t been implementing anything, but we have had to learn to think that way. If we had to think in terms of each individual qubit, each individual operation on one or two qubits, well, we would never get very far, right? So we have to think in higher-level terms, like certain modules that we know can be done. One of them is called the Quantum Fourier Transform, and that’s actually the heart of Shor’s famous algorithm for factoring numbers (it has other applications as well). Another is called Amplitude Amplification, and that’s the heart of Grover’s famous algorithm for searching long lists in about the square root of the number of steps that you would need classically – that’s also a quantum algorithm design primitive that we can just plug in as a black box, and it has many applications.
So we do think in these higher-level terms, but there’s a different set of higher-level abstractions than there would be for classical computing – and so you have to learn those. But the basic idea of decomposing a complicated problem by breaking it down into its subcomponents – that’s exactly the same in quantum computing as it is in classical computing.
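(As a concrete taste of one of those modules – a sketch of my own under simplifying assumptions, not code from the interview – here is Amplitude Amplification, the primitive behind Grover’s algorithm, simulated with a plain statevector:)

```python
# Statevector sketch (my illustration) of Amplitude Amplification, the
# primitive behind Grover search: ~sqrt(N) oracle calls instead of ~N.
import numpy as np

N, marked = 16, 11                    # 16-item search space, one marked item
state = np.full(N, 1 / np.sqrt(N))    # uniform superposition over all items

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):  # ~3 iterations for N=16
    state[marked] *= -1               # oracle: flip the marked item's sign
    state = 2 * state.mean() - state  # diffusion: reflect amplitudes about the mean

print(np.argmax(state ** 2))          # -> 11: the marked item
print(round((state ** 2)[marked], 2)) # -> ~0.96 probability of measuring it
```

Run on a classical machine this is of course no faster than checking all 16 items directly; the point is only to show the amplitude bookkeeping that a quantum computer would perform natively, and why the iteration count scales as √N.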

Adam Ford: Are you optimistic with regards to quantum computing in the short to medium term?

Scott Aaronson: You’re asking what I am optimistic about – I feel like the field has made amazing progress, both on the theory side and on the experimental side. We’re not there yet, but we know a lot more than we did a decade ago. Some of what were my favorite open problems as a theorist a decade ago have now been resolved – some of them within the last year, actually. And on the hardware side, the qubits are not yet good enough to build a scalable quantum computer – in that sense the skeptics can legitimately say we’re not there yet – well, no duh, we’re not – okay, but: if you look at the coherence times of the qubits, you look at what you can do with them, and you compare that to where they were 10 years ago or 20 years ago, there’s been orders-of-magnitude progress. So the analogy that I like to make: Charles Babbage laid down the basic principles of classical computing in the 1820s, right? I mean, not with as much mathematical rigor as Turing would do later, but the basic ideas were there. He had what today we would call a design for a universal computer.
So now imagine someone then saying, ‘Well, so when is this Analytical Engine gonna get built? Will it be in the 1830s, or will it take all the way until the 1840s?’ Well, in this case it took more than a hundred years for a technology to be invented – namely the transistor – that really fully realized Babbage’s vision. I mean, the vacuum tube came along earlier, and you could say it partially realized that vision, but it was just not reliable enough to really be scalable in the way that the transistor was. And optimistically, now we’re in the very, very early vacuum-tube era of quantum computing. We don’t yet have the quantum computing analog of the transistor, and people don’t even agree about which technology is the right one to scale up yet. Is it superconducting? Is it trapped ions? Is it photonics? Is it topological matter? So they’re pursuing all these different approaches in parallel. The partisans of each approach have what sound like compelling arguments as to why none of the other approaches could possibly scale. I hope that they’re not all correct. People have only just recently gotten to the stage where one or two qubits work well in isolation, and where it makes sense to try to scale up to 50 or 100 of them and see if you can get them working well together at that kind of scale.
And so I think the big thing to watch for in the next five to ten years is what’s been saddled with the somewhat unfortunate name of ‘Quantum Supremacy’ (and this was coined before Trump, I hasten to say). This is just a term for doing something with a quantum computer that’s not necessarily useful, but that at least is classically hard – that, as I was saying earlier, proves the point that you can do something that would take a lot longer to simulate with a classical computer. And this is the thing that Google and some others are going to take their best shot at within the next couple of years. What puts that in the realm of possibility is that a mere 50 or 100 qubits, if they work well enough, should already be enough to get us this. In principle you may be able to do this without needing error correction – once you need error correction, that enormously multiplies the resources you need: even the simplest of what’s called ‘Fault-Tolerant Computing’ might take many thousands of physical qubits. Though everyone agrees that ultimately, if you want to scale to realize the true promise of quantum computing – or let’s say to threaten our existing methods of cryptography – then you’re going to need this fault tolerance. But that I expect we’re not gonna see in the next five to ten years.
If we do see it, that will be a huge shock – as big a shock as it would be if you told someone in 1939 that there was going to be a nuclear weapon in six years. In that case there was a world war that sort of accelerated the timeline, you could say, from what it otherwise would have been. In this case I hope there won’t be a world war that accelerates this timeline. But my guess would be that if all goes well, then quantum supremacy might be achievable within the next decade, and I hope that after that we could start to see some initial applications of quantum computing – which will probably be some very, very specialized ones; some things that we can already get with a hundred or so non-error-corrected qubits. And by necessity these are going to be very special things – they might mostly be physics simulations, or simulations of some simple chemistry problems.
I actually have a proposed application for near-term quantum computers, which is to generate cryptographically secure random numbers – random numbers that you could prove to a skeptic really were generated randomly. It turns out that even a 50- or 60-qubit quantum computer should already be enough to give us that. But true scalable quantum computing – the kind that could threaten cryptography, and that could also speed up optimization problems and things like that – will probably require error correction. I could be pleasantly surprised; I’m not optimistic about that part becoming real in the next five to ten years, but since everyone likes an optimist, I guess I’ll try to be optimistic that we will take big steps in that direction and maybe even get there within my lifetime.
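(A back-of-envelope aside of my own, not from the interview, on why 30–40 qubits is roughly where the classical simulation Scott mentioned gives out, and 50-plus is where supremacy experiments live: brute-force classical simulation of n qubits means storing 2^n complex amplitudes.)

```python
# Back-of-envelope sketch (my numbers, not Aaronson's): memory needed to
# brute-force simulate n qubits, at 16 bytes per complex128 amplitude.
for n in (10, 30, 40, 50, 100):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:>3} qubits: 2^{n} = {amplitudes:.3g} amplitudes (~{gigabytes:.3g} GB)")
```

At 30 qubits that is ~17 GB (a beefy desktop); at 50 it is ~18 million GB, which is why even a modest, noisy 50-qubit device can plausibly do something no classical computer can directly simulate.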

Also see this and this from an interview with Mike Johnson conducted by Andrés Gómez Emilson and me. This interview with Christof Koch will likely be of interest too.

Cognitive Biases & In-Group Convergences with Joscha Bach

True & false vs right & wrong – people converge their views to a set of rights and wrongs relative to the in-group biases of their peer group.
As a survival mechanism, converging with the group is sometimes healthier than being right – so it can pay to optimize for convergence even at the cost of getting things wrong – and humans probably have an evolved propensity to favor convergence over truth.
However, optimizing for convergence may result in the group mind being more stupid than the smartest people in the group.

Joscha highlights the controversy around James Damore being fired from Google for circulating a memo arguing that biological differences between men and women affect their abilities as engineers – where some of the memo’s factual claims may or may not be correct – yet regardless of what the facts are about how biological differences affect differences in ability between men and women, Google fired him because they thought supporting these arguments would make for a worse social environment.

This sort of thing leads to an interesting difference in discourse, where:
* ‘nerds’ tend to focus on ‘content’ – on imparting ideas and facts that everyone can judge autonomously to form their own opinions – in the view that in order to craft the best solutions we need the best facts
* for most people, the purpose of communication is ‘coordination’ between individuals and groups (society, nations, etc.) – where the value of a ‘fact’ is its effect on the coordination between people

So is Google’s response to the memo controversy about getting the facts right, or about how Google at this point should be organised?

What’s also really interesting is that different types of people read this ‘memo’ very differently – making it very difficult to form agreement about its content. How can one agree on what’s valuable about communication – whether it’s more about imparting ideas and facts, or more about coordination?

More recently there has been a lot of talk about #FakeNews – where it’s very difficult to get people to agree to things that are not in their own interests – including, as Joscha points out, the idea that truth matters.

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

The Generative Universe Hypothesis

Remembering Lee Smolin’s theory of the dynamical evolution of the universe – where, through a form of natural selection, black holes spawn new universes – I thought that if a superintelligent civilization understood its mechanics, they might try to control it, and engineer or bias the physics in the spawned universe – and possibly migrate to this new universe. Say that they found out how to communicate along the parent/child relations between universes; it may be an energy-efficient way to achieve some of the outcomes of simulations (as described in Nick Bostrom’s Simulation Hypothesis).

The idea of moving to a more hospitable universe could be such a strong attractor to post-singularity civs that, once discovered, it may be an obvious choice for a variety of reasons:
* A) Better computation through faster/easier networking – say, for instance, that the speed of light were a lot faster, and information could travel over longer distances than in this universe – then network speed may not be as much of a hindrance to developing larger civs, distributed computation, and mega-scale galactic brains.
* B) As a means of escape – if it so happened that neighbouring alien civs were close enough to pose a threat, then escaping this universe to a newly generated universe could be ideal – especially if one could close the door behind oneself, or lay a trap at the opening to the generated universe to capture probes or ships that weren’t one’s own.
* C) Mere curiosity – it may not be full-blown utility maximization that is the lone object of the endeavour; it could be simple curiosity about how (stable) universes might operate if fine-tuned differently. (How far can you take simulations in this universe to test how hypothetical universes could operate, without actually generating and testing those universes?)
* D) To escape the ultimate fate of this universe – according to the most popular current estimates, we have about 10^100 years until the heat death of this universe.
* E) Better computation in a ‘cooler’ environment – a colder yet stable universe to compute in, similar to the previous point and the first point. Some hypothesise that civs may sleep until the universe gets colder, when computation can be done far more efficiently – such civs long for the heat death so that they can really get started on whatever projects they have in mind that require the computing power only made possible by the extremely low temperatures abundantly available at or near the heat death. Well, what if you could engineer a universe that achieves temperatures far lower than those available in this universe, while also keeping that universe relatively steady (say that’s something that’s needed)? If this could be achieved sooner by a generative-universe solution than by waiting around for this universe’s heat death, then why not?
* F) Fault tolerance – distributing a civ across (generated) universes may preserve the civ against the risk of the current one going unexpectedly pear-shaped – the more fault tolerance the merrier.
* G) Load balancing – if it’s possible to communicate across parent/child relationships, then civs may generate universes merely to act as containers for computation – helping solve really, really big problems far faster, or scaffolding extremely detailed virtual realities far more efficiently – less lag, less jitter, deeper immersion!

If this is right, perhaps we will find evidence of alien civs around black holes, generating and testing new universes before taking the leap to ‘transcend’, so to speak.

Why leave the future evolution of universes up to blind natural selection? Advanced post-singularity alien civs might hypothesize an extremely strict set of criteria to allow for the formation of the right kinds of matter and energy in child universes – to either mirror our own universe or, more likely, take it up a notch or two, to new levels of interestingness. While computational capacity is limited if constrained by the laws of this containing universe, it may be that spawning a new universe could allow for more interesting and efficient computation.

It may also be a great way to escape the heat death of the universe 🙂

I spoke about the idea with Andrew Arnel a while ago while out for a drink, where I came up with a really cool name for this idea – though I can’t remember what it was 🙂  perhaps it only sounds good after a few beers – perhaps it was something like the ‘generative’, spawnulation or ‘genulation’ hypothesis…


Update: more recently I commented on this idea on a FB post by Mike Johnson:
I may have a similar idea relating to Smolin’s Darwinian black-hole universe generation. Why build simulations where it would be more efficient to actually generate new universes, not computationally bounded by or contained within the originating universe – nudging the physics that would emerge in the new universe to be more able to support flourishing life, more computation, and wider novelty possibility spaces.


Furthermore, I spoke to Sundance Bilson-Thompson (a physicist in Australia who was supervised by Lee Smolin) about whether the physics in the child universes is influenced by local phenomena surrounding the black hole in the parent universe, or by global phenomena of the parent universe. He said it was global phenomena, based on something to do with the way stars are formed. This might lower my credence in the Generative Universe hypothesis as it pertains to Lee Smolin’s idea – though I still need to find out whether the nature of the generated child universes could be nudged or engineered.

Why Technology Favors a Singleton over a Tyranny

Is democracy losing its credibility? Will it cede to dictatorship? Will AI out-compete us in all areas of economic usefulness – making us the future useless class?

It’s difficult to get around the bottlenecks of networking and coordination in distributed democracies. In the past, naturally distributed systems – being scattered – were more redundant, and in many ways fault-tolerant and adaptive; though these payoffs may dwindle for most of us if humans become less and less able to compete with Ex Machina. If the relative efficiency of democracies to dictatorships tips towards the latter, nudging a transition to centralized dictatorships, then while this solves some distribution and coordination problems, the concentration of resource allocation may be exaggerated beyond historical examples of tyranny. Where the once-proletariat – the new ‘useless class’ – has little to no utility to the concentration of power (the top 0.001%), the would-be tyrants will likely give up on ruling and tyrannizing, and instead find it easier to cull the resource-hungry, rights-demanding horde – more efficient that way. Ethics is fundamental to fair progress – ethics is philosophy with a deadline creeping closer – what can we do to increase the odds of a future where the value of life is evaluated beyond its economic usefulness?
I found ‘Why Technology Favors Tyranny’ by Yuval Noah Harari a good read – I enjoy his writing, and it provokes me to think. About 5 years ago I did the ‘A Brief History of Humankind’ course via Coursera – urging my friends to join me. Since then Yuval has taken the world by storm.
The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. […] We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. […] – Why Technology Favors Tyranny
I assume AI superintelligence is highly probable if we don’t go extinct first. For the same reason that the proletariat becomes useless, I think the AI–human combination will ultimately become useless too, and cede to superintelligent AI – so all humans become useless. The bourgeois elite may initially feel safe in the idea that they don’t need to be useful, they just need to maintain control of power. Though the sliding relative dumbness of the bourgeoisie next to superintelligence will worry them… perhaps not long after wiping out the useless class, the elite bourgeoisie will see the importance of the AI control problem, and that their days are numbered too – at which point, will they see ethics, and the value of life beyond economic usefulness, as important?
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century. – Why Technology Favors Tyranny
Yuval Noah Harari believes that we could be heading for a technologically enabled tyranny as AI automates all jobs away – and we become the useless class. Though if superintelligence is likely, then humans will likely be a bottleneck in any AI/human hybrid use case – so if tyranny happens, it won’t last for long – what use is a useless class to the elite?

Technology without ethics favors singleton utility monsters – not a tyranny – what use is it to tyrannize over a useless class?