Vernor Vinge on the Turing Test, Artificial Intelligence

Preface

On the coat-tails of the blockbuster film “The Imitation Game”, I saw quite a bit of buzz on the internet about Alan Turing and the Turing Test. The title of the movie refers to Turing’s idea that an imitation game might someday show machines to be (at least in controlled circumstances) indistinguishable from humans.
Vernor Vinge is a mathematician and science fiction author who is well known for many Hugo Award-winning novels and novellas*, and for his 1993 essay “The Coming Technological Singularity”, in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended”, such that no current models of reality are sufficient to predict beyond it.

 

Alan Turing and the Computability of Intelligence

Adam Ford: Alan Turing is considered the “Father of Theoretical Computer Science and Artificial Intelligence” – his view about the potential of AI contrasts with much of the skepticism that has subsequently arisen. What is at the root of this skepticism?

Vernor Vinge: The emotional source of the skepticism is the ineffable feeling that many (most?) people have against the possibility that self-awareness could arise from simple, constructed devices.

 

AF: Many theorists feel that the combined talents of pure machines and humans will always produce more creative and therefore useful output – what are your thoughts?

VV: When it comes to intelligence, biology just doesn’t have legs. _However_, in the near term, teams of people plus machines can be much smarter than either — and this is one of the strongest reasons for being optimistic that we can manage the new era safely, and project that safety into the farther future.

 

AF: Is the human brain essentially a computer?

VV: Probably yes, but if not, the lack can very likely be made up for with machine improvements that we humans can devise.

 

AF: Even AI critics John Searle and Hubert Dreyfus (i.e. “What Computers (Still) Can’t Do”) agree that a brain simulation is possible in theory, though they argue that merely mimicking the functioning brain would in itself be an admission of ignorance (concerning intelligence) – what are your thoughts?

VV: The question of whether there is self-awareness behind a mimic may be the most profound issue, but for almost all practical purposes it isn’t relevant: in a few years, I think we will be able to make machines that can run circles around any human mind by all externally measured criteria. So what if no one is really home inside that machine?

Offhand, I can think of only one practical import to the answer, but that _is_ something important: If such minds are self-aware in the human sense, then uploads suddenly become very important to us mortality-challenged beings.

For reductionists interested in _that_ issue, some confidence might be achieved with superintelligence architectures that model those structures in our minds that reductionists come to associate with self-awareness. (I can imagine this argument being carried on by the uploaded supermind children of Searle and Moravec — a trillion years from now when there might not be any biological minds around whatsoever.)

 

AF: Do you think Alan Turing’s reasons for believing in the potential of AI are different from your own and other modern day theorists?  If so in what ways?

VV: My guess is there is not much difference.

 

AF: Have Alan Turing and his work influenced your writing? If it has, how so?

VV: I’m not aware of direct influence. As a child, what chiefly influenced me was the science fiction I was reading! Of course, those folks were often influenced by what was going on in science and math and engineering of the time.

Alan Turing has had a multitude of incarnations in science fiction…   I think that besides being a broadly based math and science genius, Turing created accessible connections between classic philosophical questions and current issues.

 

AF: How do you think Alan Turing would respond to the specific concept of the Technological Singularity as described by you in your paper “The Coming Technological Singularity: How to Survive in the Post-Human Era”?

VV: I’d bet that Turing (and many AI pioneers) had extreme ideas about the consequences of superhuman machine intelligence. I’m not sure if Turing and I would agree about the potential for Intelligence Amplification and human/machine group minds.

I’d be _very_ interested in his reaction to modern analysis such as surveyed in Bostrom’s recent _Superintelligence_ book.

 

AF: In True Names, agents seek to protect their true identity. The guardian of the Coven’s castle is named ‘Alan Turing’ – what was the reason behind this?

VV: It was a tip of the hat in Turing’s direction. By the time I wrote this story I had become quite aware of Alan Turing (contrasting with my childhood ignorance that I mentioned earlier).

 

AF: Your first novella Bookworm Run! was themed around brute forcing simpler-than-human-intelligence to super-intelligence (in it a chimpanzee’s intelligence is amplified).  You also explore the area of intelligence amplification in Marooned in Realtime.
Do you think it is possible for a Singularity to bootstrap from brute forcing simple cognitive models? If so do you think Super-Intelligence will be achieved through brute-forcing simple algorithms?

VV: I view “Intelligence Amplification” (IA) as a finessing of the hardest questions by building on top of what already exists. Thus even UI design lies on the path to the Singularity. One could argue that Intelligence Amplification is the surest way of ensuring humanity in the super-intelligence (though some find that a very scary possibility in itself).

 

The Turing Test and Beyond

AF: Is the Turing Test important? If so, why, and how does its importance match up to tracking progress in Strong AI?

VV: In its general form, I regard the Turing Test as a marvelous, zen-like bridge between reductionism and the inner feelings most people have about their own self-awareness. Bravo Dr. Turing!

 

AF: Is a text conversation ever a valid test for intelligence? Is blackbox testing enough for a valid test of intelligence?

VV: “Passing the Turing Test” depends very much on the setup:
a) The examining human (child? adult? fixated or afflicted adult? – see Sherry Turkle’s examples of college students who passed a chatbot).
b) The duration of the test.
c) The number of human examiners participating.
d) Restrictions on the examination domain.
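These four setup axes can be captured as a small configuration sketch (all names and defaults here are illustrative, not something from the interview):

```python
from dataclasses import dataclass, field

@dataclass
class TuringTestSetup:
    """Hypothetical knobs along the four axes above; names/defaults are illustrative."""
    examiner_profile: str = "adult"          # (a) child, adult, domain expert, ...
    duration_minutes: int = 5                # (b) how long the conversation runs
    num_examiners: int = 1                   # (c) independent human judges
    domain_restrictions: list = field(default_factory=list)  # (d) empty = unrestricted

# A very broad, years-long, multi-judge variant raises the bar far above
# the short chatbot contests mentioned later in the interview.
broad_version = TuringTestSetup(
    examiner_profile="skeptical expert",
    duration_minutes=2 * 365 * 24 * 60,  # roughly two years of conversation
    num_examiners=10,
)
print(broad_version.num_examiners)
```

The point of the sketch is simply that “passing the Turing Test” is not one claim but a family of claims, one per point in this parameter space.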

In _The Emperor’s New Mind_, Penrose has a (mostly negative) critique of the Turing Test. But at the end he says that if the test were very broad, lasting years, and convincing to him (Penrose), then it might be meaningful to talk about a “pass grade”.

 

AF: The essence of Roger Penrose’s argument (in The Emperor’s New Mind):
–  It is impossible for a Turing machine to enumerate all possible Gödel sentences. Such a program will always have a Gödel sentence derivable from its program which it can never discover.
–  Humans have no problem discovering these sentences and seeing the truth of them.
He concludes that humans are not reducible to Turing machines. Do you agree with Penrose’s assessment – are humans not reducible to Turing machines?

VV: This argument depends on comparing a mathematical object (the Turing Machine) with whatever kind of object the speaker considers a “human mind” to be.  As a logical argument, it leaves me dubious.
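For readers who want the formal statement behind this exchange, the standard construction is the following (this is textbook first-incompleteness material, not specific to Penrose’s presentation):

```latex
% For any consistent, recursively axiomatizable theory F extending arithmetic,
% the diagonal lemma yields a sentence G_F ("I am not provable in F") with
\[
  F \vdash G_F \leftrightarrow \neg \mathrm{Prov}_F(\ulcorner G_F \urcorner)
\]
% Goedel's first incompleteness theorem: if F is consistent, then F does not
% prove G_F. Penrose's claim is that a human mathematician can nevertheless
% "see" that G_F is true, which a machine running F supposedly cannot.
```

As Vinge notes, the force of the argument rests on identifying the human mind with some fixed formal system F, which is precisely the point in dispute.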

 

AF: Are there any existing interpretations of the Turing Test that you favour?

VV: I think Penrose’s version (described above) is the most important.

In conversation, the most important thing is that all sides know which flavor of the test they are talking about 🙂

 

AF: You mentioned it has been fun tracking Turing Test contests, what are your thoughts on attempts at passing the Turing Test so far?

VV: So far, it seems to me that the philosophically important thresholds are still far away. Fooling certain people, or fooling people for short periods of time seems to have been accomplished.

 

AF: Is there any specific type of intelligence we should be testing machines for?

VV: There are intelligence tests that would be very interesting to me, but I’d rather not call them versions of the Turing Test. For instance, I think we’re already in the territory where more and more sorts of superhuman creativity and “intuition” are possible.

I think there will also be performance tests for IA and group mind projects.

 

AF: Some argue that testing for ‘machine consciousness’ is more interesting – what are your thoughts?

VV: Again, I’d keep this possibility separate from Turing Test issues, though I do think that a being that could swiftly duplicate itself and ramp intellect up or down per resource and latency constraints would have a vastly different view of reality compared to the severe and static time/space/mortality restrictions that we humans live with.

 

AF: The Turing Test seems like a competitive sport, though some interpretations of the Turing Test set conditions that seem quite low. The competitive nature of how the Turing Test is staged seems to me to select for the cheapest and least sophisticated methods of fooling judges on a Turing Test panel.

VV: Yes.

 

AF: Should we be focusing on developing more complex and adaptive Turing style tests (more complex measurement criteria? more complex assessment)? What alternatives to a Turing Test competition (if any) would you suggest to motivate regular testing for machine intelligence?

VV: The answers to these questions may grow out of hard engineering necessity more than from the sport metaphor. Going forward, I imagine that different engineering requirements will call for various tests, but they may look more like classical benchmark tests.

 

Tracking Progress in Artificial Intelligence

AF: Why is tracking progress towards AI important?

VV: Up to a point it could be important for the sort of safety reasons Bostrom discusses in _Superintelligence_. Such tracking could also provide some guidance for machine/human/society teams that might have the power to guide events along safe paths.

 

AF: What do you see as the most useful mechanisms for tracking progress towards a) human equivalence in AI, b) a Technological Singularity?

VV: The approach to human equivalence might be tracked with a broad range of tests. Such would also apply to the Singularity, but for a soft takeoff, I imagine there would be a lot of economic effects that could be tracked. For example:
–  trends in employment of classic humans, augmented humans, and computer/human teams;
–  trends in what particular jobs still have good employment;
–  changes in the personal characteristics of the most successful CEOs.

Direct tests of automation performance (such as we’ve discussed above) are also important, but as we approach the Singularity, the center of gravity shifts from the programmers to the programs and how the programs are gaming the systems.

 

AF: If you had a TARDIS and you could bring Alan Turing forward into the 21st century, would he be surprised at progress in AI? What kinds of progress do you think he would be most interested in?

VV: I don’t have any special knowledge of Turing, but my guess is he would be pleased — and he would want to _understand_ by becoming a super himself.

 

AF: If and when the Singularity becomes imminent – is it likely that the majority of people will be surprised?

VV: A hard takeoff would probably be a surprise to most people. I suspect that a soft takeoff would be widely recognized.

 

Implications

AF: What opportunities could we miss if we are not well prepared (This includes opportunities for risk mitigation)?

VV: Really, the risk mitigation is the serious issue. Other categories of missed opportunities will probably be quickly fixed by the improving tech.  For pure AI, some risk mitigation is the sort of thing MIRI is researching.

For pure AI, IA, and group minds, I think risk mitigation involves making use of the human-equivalent minds that already exist in great numbers (namely, the human race). If these teams and early enhancements recognize the issues, they can form a bridge across to the more powerful beings to come.

 

AF: You spoke about an AI Hard Takeoff as being potentially very bad – can you elaborate here?

VV: A hard takeoff is too fast for normal humans to react to and accommodate. To me, a Hard Takeoff would be more like an explosion than like technological progress. Any failure in mitigation planning is suddenly beyond the ability of normal humans to fix.

 

AF: What stood out for you after reading Nick Bostrom’s book ‘Superintelligence – paths, dangers, strategies’?

VV: Yes. I think it’s an excellent discussion especially of the pure AI path to superintelligence. Even people who have no intense interest in these issues would find the first few chapters interesting, as they sketch out the problematic issues of pure AI superintelligence — including some points that may have been missed back in the twentieth century. The book then proceeds to a fascinating analysis of how to cope with these issues.

My only difference with the analysis presented is that while pure AI is likely the long term important issue, there could well be a period (especially in the case of a Soft Takeoff) where the IA and groupmind trajectories are crucial.


Vernor Vinge at Los Con 2012

Notes:
* Hugo Award-winning novels & novellas include: A Fire Upon the Deep (1992), A Deepness in the Sky (1999), Rainbows End (2006), Fast Times at Fairmont High (2002), The Cookie Monster (2004), and The Peace War (1984).

Also see video interview with Vernor Vinge on the Technological Singularity.

The Point of View of the Universe – Peter Singer

Peter Singer discusses the new book ‘The Point Of View Of The Universe – Sidgwick & Contemporary Ethics’ (by Katarzyna de Lazari-Radek and Peter Singer). He also discusses his reasons for changing his mind about preference utilitarianism.

 

Buy the book here: http://ukcatalogue.oup.com/product/97…
Bart Schultz’s (University of Chicago) review of the book: http://ndpr.nd.edu/news/49215-he-poin…
“Restoring Sidgwick to his rightful place of philosophical honor and cogently defending his central positions are obviously no small tasks, but the authors are remarkably successful in pulling them off, in a defense that, in the case of Singer at least, means candidly acknowledging that previous defenses of Hare’s universal prescriptivism and of a desire or preference satisfaction theory of the good were not in the end advances on the hedonistic utilitarianism set out by Sidgwick. But if struggles with Singer’s earlier selves run throughout the book, they are intertwined with struggles to come to terms with the work of Derek Parfit, both Reasons and Persons (Oxford, 1984) and On What Matters (Oxford, 2011), works that have virtually defined the field of analytical rehabilitations of Sidgwick’s arguments. The real task of The Point of View of the Universe — the title being an expression that Sidgwick used to refer to the impartial moral point of view — is to defend the effort to be even more Sidgwickian than Parfit, and, intriguingly enough, even more Sidgwickian than Sidgwick himself.”

One Big Misconception About Consciousness – Christof Koch

Christof Koch (Allen Institute for Brain Science) discusses Shannon information and its theoretical limitations in explaining consciousness.

“Information Theory misses a critical aspect of consciousness” – Christof Koch

Christof argues that we don’t need observers (other people, gods, etc.) in order to have conscious experiences. Traditional information theory assumes Shannon information, and a big misconception about the structure of consciousness stems from this idea: the assumption that Shannon information is enough to explain consciousness. Shannon information is about “sending information from a channel to a receiver – consciousness isn’t about sending anything to anybody.” So what other kind of information is there?
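To make the contrast concrete: Shannon information measures the reduction of a receiver’s uncertainty, exactly the sender/receiver framing Koch says consciousness lacks. A minimal sketch of the quantity involved:

```python
import math

def shannon_entropy(probs):
    """Average information, in bits, a receiver gains per symbol
    drawn from a source with the given probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit per toss
print(shannon_entropy([0.9, 0.1]))  # biased coin: less than 1 bit
```

Note that the number only has meaning relative to a receiver decoding the source, which is the framing Koch rejects for consciousness.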

The ‘information’ in Integrated Information Theory (IIT) does not refer to Shannon information. Etymologically, the word ‘information’ derives from ‘informare’ – “it refers to information in the original sense of the word ‘informare’ – to give form to” – that is, to give form to a high-dimensional structure.

 

 

It’s worth noting that many disagree with Integrated Information Theory – including Scott Aaronson – see here, here and here.

 

See interview below:

“It’s a theory that proceeds from phenomenology to, as it were, mechanisms in physics.”

IIT is also described in Christof Koch’s ‘Consciousness: Confessions of a Romantic Reductionist’.

Axioms and postulates of integrated information theory

Five axioms / essential properties of conscious experience form the foundation of IIT – the intent is to capture the essential aspects of all conscious experience. Each axiom should apply to every possible experience.

  • Intrinsic existence: Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).
  • Composition: Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
  • Information: Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
  • Integration: Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.
  • Exclusion: Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure. Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.
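The “integration” axiom is the one IIT quantifies with Φ. The real Φ calculus over cause–effect structures is far more involved, but a loose toy illustration of a whole carrying information beyond its parts is the mutual information between two subsystems (an analogy only, not IIT’s actual measure):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mutual_information(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): information the joint state of two
    nodes carries beyond the two nodes taken independently."""
    xs = Counter(x for x, _ in pairs)
    ys = Counter(y for _, y in pairs)
    xy = Counter(pairs)
    return entropy(xs) + entropy(ys) - entropy(xy)

# Two perfectly correlated binary nodes: the whole exceeds its parts by 1 bit.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent nodes: the whole adds nothing beyond the parts.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(mutual_information(coupled))      # 1.0
print(mutual_information(independent))  # 0.0
```

IIT’s Φ additionally minimizes over all partitions of the system and works on cause–effect repertoires rather than observed state statistics, so this sketch only gestures at the irreducibility idea in the axiom above.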

So, does IIT solve what David Chalmers calls the “Hard Problem of consciousness”?

Christof Koch  is an American neuroscientist best known for his work on the neural bases of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at the California Institute of Technology.

This interview is a short section of a larger interview which will be released at a later date.

Amazing Progress in Artificial Intelligence – Ben Goertzel

At a recent conference in Beijing (the Global Innovators Conference) I did yet another video interview with the legendary AGI guru Ben Goertzel. This is the first part of the interview, where he talks about some of the ‘amazing’ progress in AI over recent years, including DeepMind’s AlphaGo sealing a 4-1 victory over Go grandmaster Lee Sedol, progress in hybrid architectures in AI (Deep Learning, Reinforcement Learning, etc.), and interesting academic research in AI being taken up by tech giants, before finally providing some sobering remarks on the limitations of deep neural networks.

The future of neuroscience and understanding the complexity of the human mind – Brains and Computers

Two of the world’s leading brain researchers will come together to discuss some of the latest international efforts to understand the brain. They will discuss two massive initiatives – the US based Allen Institute for Brain Science and European Human Brain Project. By combining neuroscience with the power of computing both projects are harnessing the efforts of hundreds of neuroscientists in unprecedented collaborations aimed at unravelling the mysteries of the human brain.

This unique FREE public event, hosted by ABC Radio and TV personality Bernie Hobbs, will feature presentations by each of the two brain researchers, followed by an interactive discussion with the audience.

This is your chance to ask the big brain questions.

[Event Registration Page] | [Meetup Event Page]

ARC Centre of Excellence for Integrative Brain Function

Monday, 3 April 2017 from 6:00 pm to 7:30 pm (AEST)

Melbourne Convention and Exhibition Centre
2 Clarendon Street
enter via the main Exhibition Centre entrance, opposite Crown Casino
South Wharf, VIC 3006 Australia

Professor Christof Koch
President and Chief Scientific Officer, Allen Institute for Brain Science, USA

Professor Koch leads a large scale, 10-year effort to build brain observatories to map, analyse and understand the mouse and human cerebral cortex. His work integrates theoretical, computational and experimental neuroscience. Professor Koch pioneered the scientific study of consciousness with his long-time collaborator, the late Nobel laureate Francis Crick. Learn more about the Allen Institute for Brain Science and Christof Koch.

Professor Karlheinz Meier
Co-Director and Vice Chair of the Human Brain Project
Professor of Physics, University of Heidelberg, Germany

Professor Meier is a physicist working on unravelling theoretical principles of brain information processing and transferring them to novel computer architectures. He has led major European initiatives that combine neuroscience with information science. Professor Meier is a co-founder of the European Human Brain Project where he leads the research to create brain-inspired computing paradigms. Learn more about the Human Brain Project and Karlheinz Meier.

 

 

This event is brought to you by the Australian Research Council Centre of Excellence for Integrative Brain Function.

Discovering how the brain interacts with the world.

The ARC Centre of Excellence for Integrative Brain Function is supported by the Australian Research Council.

Building Brains – How to build physical models of brain circuits in silicon

Event Description: The brain is a universe of 100 billion cells interacting through a constantly changing network of 1000 trillion synapses. It runs on a power budget of 20 Watts and holds an internal model of the world.   Understanding our brain is among the key challenges for science, on equal footing with understanding genesis and the fate of our universe. The lecture will describe how to build physical, neuromorphic models of brain circuits in silicon. Neuromorphic systems can be used to gain understanding of learning and development in biological brains and as artificial neural systems for cognitive computing.

Event Page Here | Meetup Event Page Here

Date: Wednesday 5 April 2017 6-7pm

Venue:  Monash Biomedical Imaging 770 Blackburn Road Clayton

Karlheinz Meier

Karlheinz Meier (* 1955) received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25 years of experience in experimental particle physics with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. After fellowships and scientific staff positions at CERN and DESY he was appointed full professor of physics at Heidelberg University in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 10-year effort to design and build a large-scale electronic data processing system providing on-the-fly data reduction by 3 orders of magnitude enabling among other achievements the discovery of the Higgs Boson in 2012. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA).
Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field like the conception of a platform-independent description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems and wafer-scale integration for their implementation. He led 2 major European initiatives, FACETS and BrainScaleS, that both demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP) that was approved in 2013. In the HBP he leads the subproject on neuromorphic computing with the goal of establishing brain-inspired computing paradigms as research tools for neuroscience and generic hardware systems for cognitive computing, a new way of processing and interpreting the spatio-temporal structure of large data volumes. In the HBP he is a member of the project directorate and vice-chair of the science and infrastructure board.
Karlheinz Meier engages in public dissemination of science. His YouTube channel with physics movies has received more than a million hits, and he delivers regular lectures to the public about his research and general science topics.

 

Consciousness in Biological and Artificial Brains – Prof Christof Koch

Event Description: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and the neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments. I will introduce the Integrated Information Theory (IIT) that explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that deep convolutional networks and von Neumann computers would experience next to nothing, even if they perform tasks that in humans would be associated with conscious experience and even if they were to run software faithfully simulating the human brain.

[Meetup Event Page]

Supported by The Florey Institute of Neuroscience & Mental Health, the University of Melbourne and the ARC Centre of Excellence for Integrative Brain Function.

 

 

Who: Prof Christof Koch, President and Chief Scientific Officer, Allen Institute for Brain Sciences, Seattle, USA

Venue: Melbourne Brain Centre, Ian Potter Auditorium, Ground Floor, Kenneth Myer Building (Building 144), Genetics Lane, 30 Royal Parade, University of Melbourne, Parkville

This will be of particular interest to those who know of David Pearce, Andreas Gomez, Mike Johnson and Brian Tomasik’s works – see this online panel:

David Brin on Marching for Science and the Future

March Fourth – on March 4th for Science and the Future!
Interview: https://www.youtube.com/watch?v=zwW3nIPQYwc

A discussion on science advocacy & the future! David discussed how to think about strategic foresight (because it was kind of Future Day being March fourth) and science advocacy (especially in relation to the global science march). We also covered the kinds of social systems and attractor states – what we can do to wittingly steer away from a return to feudalism – and hopefully towards a brighter future.

Points Covered in the Interview:

– David Brin’s futurist advisory role at NASA
– Future Day – paying close attention to the future (especially politics)
– Self-Preventing Prophecies (as opposed to self-fulfilling ones) http://www.davidbrin.com/nonfiction/tomorrowsworld.html
– AI and other dangers
– Our feudalistic history, likely a strong ‘attractor state’ – and how to get unstuck from feudalism
– Feudalism as one of the 110 explanations for the Fermi Paradox
– Athenian Democracy – and it being toppled by feudalism – https://en.wikipedia.org/wiki/Athenian_democracy
– The March for Science

David Brin: https://en.wikipedia.org/wiki/David_Brin
Future Day: http://future-day.org #FutureDay


p.s. Future Day is sometimes celebrated on the 1st of March, sometimes on the 4th (‘March Fourth…’ get it??), and sometimes for the whole month.

March for Science: http://marchforscience.com #ScienceMarch #MarchForScience

 

David Brin earned a Master of Science in applied physics and a Doctor of Philosophy degree in space science. He currently serves on the advisory board of NASA’s Innovative and Advanced Concepts group. He has also been a participant in discussions at the Philanthropy Roundtable and other groups seeking innovative problem solving approaches.
He has won numerous awards for his science fiction – one of his novels, The Postman, was turned into a motion picture.

“The March for Science is a celebration of our passion for science and a call to support and safeguard the scientific community. Recent policy changes have caused heightened worry among scientists, and the incredible and immediate outpouring of support has made clear that these concerns are also shared by hundreds of thousands of people around the world. The mischaracterization of science as a partisan issue, which has given policymakers permission to reject overwhelming evidence, is a critical and urgent matter. It is time for people who support scientific research and evidence-based policies to take a public stand and be counted.

ON APRIL 22, 2017, WE WALK OUT OF THE LAB AND INTO THE STREETS.

We are scientists and science enthusiasts. We come from all races, all religions, all gender identities, all sexual orientations, all abilities, all socioeconomic backgrounds, all political perspectives, and all nationalities. Our diversity is our greatest strength: a wealth of opinions, perspectives, and ideas is critical for the scientific process. What unites us is a love of science, and an insatiable curiosity. We all recognize that science is everywhere and affects everyone.

Science is often an arduous process, but it is also thrilling. A universal human curiosity and dogged persistence is the greatest hope for the future. This movement cannot and will not end with a march. Our plans for policy change and community outreach will start with marches worldwide and a teach-in at the National Mall, but it is imperative that we continue to celebrate and defend science at all levels – from local schools to federal agencies – throughout the world.”

Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel:
b) Donating via Patreon:
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

Future Day Melbourne 2017

WHERE: The Bull & Bear Tavern – 347 Flinders Lane (between Queen and Elizabeth streets), Melbourne
WHEN: Wednesday, March 1st 2017
See the Facebook event, and the Meetup Event.

SCHEDULE

* Noushin Shabab ‘The Evolution of Cybersecurity – Looking Towards 2045’ (Senior Security Researcher at Kaspersky Lab) – 20 mins
* Luke James (Science Party Melbourne) – a (nonpartisan) talk about the promises and pitfalls of government and future technology – 20 mins
* Dushan Phillips – To be what one is.. (spoken word) – 20 mins
* Patrick Poke – The Future of Finance – 20 mins
* There will also be discussion of the upcoming March for Science in Melbourne (April 22nd) – 10–15 mins

Abstracts/Synopsis:

Promises and Pitfalls of Government and Future Technology

By Luke James

My talk focuses on the interaction between technological developments (future tech) and government – both from the point of view of government and from the point of view of those developing and trying to use new tech. I have a couple of scenarios to go over in which government has reacted poorly and well to new technologies, and in which new tech has integrated poorly and well with governments. Then I’ll speak about the policies and systems governments can utilise to encourage and take advantage of new tech, which will lead me into my final topic: a few minutes about the March for Science. I’ll leave a few minutes for questions at the end as well.
Throughout the speech I’ll be speaking about government purely from a systematic standpoint.

The Evolution of Cybersecurity – Looking Towards 2045

By Noushin Shabab

“Journey through the top cybersecurity criminal cases caught by the Global Research And Analysis Team (GReAT) at Kaspersky Lab, and find out about current and future trends in cybercriminal activity.”

The Future of Finance

By Patrick Poke

 

  • I’ll start off with a bit of an introduction on what the finance industry is really about and where we are now.
  • I’ll then discuss some of the problems/opportunities that we face now (as these will form the basis for future changes).
  • I’ll go through some expectations over the short-term, medium-term, and long-term.
  • Finally, look at some of the over-hyped areas where I don’t think we’ll see as much change as people expect.

 

To be what one is..

By Dushan Phillips

TBA

 

About Future Day

“Humanity is on the edge of understanding that our future will be astoundingly different from the world we’ve lived in these last several generations. Accelerating technological change is all around us, and transformative solutions are near at hand for all our problems, if only we have the courage to see them. Future Day helps us to foresee our personal potentials, and acknowledge that we have the power to pull together and push our global system to a whole new level of collective intelligence, resiliency, diversity, creativity, and adventure. Want to help build a more foresighted culture? Don’t wait for permission, start celebrating it now!” – John Smart

Future Day is a global day of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

The Future & You

We all have aspirations, yet we are all too often sidetracked in this age of distraction. To firmly ritualize our commitment to the future, each year we celebrate it, seeking to address the glorious problems involved in arriving at a future that we want. Lurking behind every unfolding minute is the potential for a random tangent with no real benefit for our future selves – so it is Future Day to the rescue! A day to remind us to include more of the future in our attention economies, and to help us put off being distracted by the usual gauntlet of noise we run every other day. We take seriously the premise that our future is very important – the notion that *accelerating technological progress will change the world* deserves far more attention than it gets from most other days of celebration. So, let us remind ourselves to remember the future – an editable history of a time to come – a future that, without our conscious deliberation and positive action, may not be the future we intended.

Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety. While fun to think about, we are unable to “marshal an appropriate emotional response” to improvements in AI and automation and the prospect of dangerous AI – it’s a failure of intuition to respond to it as we would to a sci-fi doom scenario.

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Marching for Science with John Wilkins – a perspective from Philosophy of Science

Recent video interview with John Wilkins!

  • What should marchers for science advocate for (if anything)? How would you try to bias the economy of attention toward science?
  • Should scientists (as individuals) be advocates for particular causes – and should the scientific enterprise advocate for particular causes?
  • The popular hashtag #AlternativeFacts and Epistemic Relativism – How about an #AlternativeHypotheses hashtag (#AltHype for short 😀 ?)
  • Some scientists have concerns about being directly involved; others say scientists should have a voice and be heard on issues that matter, and should stand up and complain when public policy is based on erroneous logic, faulty assumptions, or bad science. What’s your view? What are the risks?

John Wilkins is a historian and philosopher of science, especially biology. Apple tragic. Pratchett fan. Curmudgeon.

We will cover scientific realism vs structuralism in another video in the near future!
Topics will include:

  • Scientific Realism vs Scientific Structuralism (or Structuralism for short)
  • Ontic (OSR) vs Epistemic (ESR)
  • Does the claim that one can know only the abstract structure of the world trivialize scientific knowledge? (Epistemic Structural Realism and Ontic Structural Realism)
  • If we are in principle happy to accept scientific models (especially those that have graduated from hypothesis to theory) as structurally real, does this give us reason never to be overconfident about our assumptions?

Come to the Science March in Melbourne on April 22nd 2017 – bring your friends too 😀