Future Day Melbourne 2017

WHERE: The Bull & Bear Tavern – 347 Flinders Lane (between Queen and Elizabeth Streets), Melbourne
WHEN: Wednesday, March 1st 2017
See the Facebook event, and the Meetup Event.


* Noushin Shabab (Senior Security Researcher at Kaspersky Lab) – ‘The Evolution of Cybersecurity – Looking Towards 2045’ – 20 mins
* Luke James (Science Party Melbourne) – a nonpartisan talk on the promises and pitfalls of government and future technology – 20 mins
* Dushan Phillips – ‘To be what one is…’ (spoken word) – 20 mins
* Patrick Poke – ‘The Future of Finance’ – 20 mins
* Discussion of the upcoming March for Science in Melbourne (April 22nd) – 10–15 mins


Promises and Pitfalls of Government and Future Technology

By Luke James

My talk focuses on the interaction between technological developments (future tech) and government, from the point of view of government and from the point of view of those developing and trying to use new tech. I have a couple of scenarios to go over in which government has reacted poorly and well to new technologies, and in which new tech has integrated poorly and well with governments. Then I’ll speak about the policies and systems governments can use to encourage and take advantage of new tech, which will lead into my final topic: a few minutes about the March for Science. I’ll leave a few minutes for questions at the end as well.
Throughout the speech I’ll be speaking about government purely from a systematic standpoint.

The Evolution of Cybersecurity – Looking Towards 2045

By Noushin Shabab

“Journey through the top cybersecurity criminal cases caught by the Global Research And Analysis Team (GReAT) from Kaspersky Lab, and find out about current and future trends in cybercriminal activity.”

The Future of Finance

By Patrick Poke


  • I’ll start off with a bit of an introduction on what the finance industry is really about and where we are now.
  • I’ll then discuss some of the problems and opportunities we face now (as these will form the basis for future changes).
  • I’ll go through some expectations over the short term, medium term, and long term.
  • Finally, I’ll look at some of the over-hyped areas where I don’t think we’ll see as much change as people expect.


To be what one is..

By Dushan Phillips



About Future Day

“Humanity is on the edge of understanding that our future will be astoundingly different from the world we’ve lived in these last several generations. Accelerating technological change is all around us, and transformative solutions are near at hand for all our problems, if only we have the courage to see them. Future Day helps us to foresee our personal potentials, and acknowledge that we have the power to pull together and push our global system to a whole new level of collective intelligence, resiliency, diversity, creativity, and adventure. Want to help build a more foresighted culture? Don’t wait for permission, start celebrating it now!” – John Smart

Future Day is a global day of focusing and celebrating the energy that more and more people around the world are directing toward creating a radically better future.

The Future & You

We all have aspirations, yet we are all too often sidetracked in this age of distraction – however, to firmly ritualize our commitment to the future, each year we celebrate the future seeking to address the glorious problems involved in arriving at a future that we want. Lurking behind every unfolding minute is the potential for a random tangent with no real benefit for our future selves – so it is Future Day to the rescue! A day to remind us to include more of the future in our attention economies, and help us to procrastinate being distracted by the usual gauntlet of noise we run every other day. We take seriously the premise that our future is very important – the notion that *accelerating technological progress will change the world* deserves a lot more attention than that which can be gleaned from most other days of celebration. So, let us remind ourselves to remember the future – an editable history of a time to come – a future, that without our conscious deliberation and positive action, may not be the future that we intended.

Australian Humanist Convention 2017

Ethics In An Uncertain World

After an incredibly successful convention in Brisbane in May 2016, the Humanist Society of Victoria, together with the Council of Australian Humanist Societies, will host Australian Humanists at the start of April to discuss and learn about some of the most pressing issues facing society today, and how Humanists and the world view we hold can help shape a better future for all of society.

Official Conference Link | Get Tickets Here | Gala Dinner | FAQs | Meetup Link | Google Map Link


AC Grayling – Humanism, the individual and society
Peter Singer – Public Ethics in the Trump Era
Clive Hamilton – Humanism and the Anthropocene
Meredith Doig – Interbelief presentations in schools
Monica Bini – World-views in the school curriculum
James Fodor – ???
Adam Ford – Humanism & Population Axiology

SciFuture supports and endorses the Humanist Convention in 2017 in efforts to explore ethics foundational in enlightenment values, march against prejudice, and help make sense of the world. SciFuture affirms that human beings (and indeed many other nonhuman animals) have the right to flourish, be happy, and give meaning and shape to their own lives.

Peter Singer wrote about Taking Humanism Beyond Speciesism – Free Inquiry, 24, no. 6 (Oct/Nov 2004), pp. 19-21

AC Grayling’s talk on Humanism at the British Humanists Association:


Zombie Rights

Andrew Dun provides an interesting discussion on the rights of sentient entities. Drawing inspiration from quantum complementarity, he defends a complementary notion of ontological dualism, countering zombie hypotheses. Sans zombie concerns, ethical discussions should therefore focus on assessing consciousness purely in terms of the physical-functional properties of any putatively conscious entity.

Below is the video of the presentation:

At the 12:17 mark, Andrew introduces the notion of supervenience (where high-level properties supervene on low-level properties) – do zombies have supervenience? Is consciousness merely a supervenient property that supervenes on characteristics of brain states? If so, we should be able to compute whether a system is conscious (given its full physical characterization). The zombie hypothesis suggests that consciousness does not logically supervene on the physical.

Slides for the presentation can be found on SlideShare!

Andrew Dun spoke at the Singularity Summit. Talk title : “Zombie Rights”.

Andrew’s research interest relates to both the ontology and ethics of consciousness. Andrew is interested in the ethical significance of consciousness, including the way in which our understanding of consciousness impacts our treatment of other humans, non-human animals, and artifacts. Andrew defends the view that the relationship between physical and conscious properties is one of symmetrical representation, rather than supervenience. Andrew argues that on this basis we can confidently approach ethical questions about consciousness from the perspective of ‘common-sense’ materialism.

Andrew also composes and performs original music.

Extending Life is Not Enough

Dr Randal Koene covers the motivation for human technological augmentation and reasons to go beyond biological life extension.

“Competition is an inescapable occurrence in the animate and even in the inanimate universe. To give our minds the flexibility to transfer and to operate in different substrates bestows upon our species the most important competitive advantage.” I am a neuroscientist and neuroengineer, currently the Science Director at Foundation 2045 and the Lead Scientist at Kernel. I head carboncopies.org, the outreach and roadmapping organization for the development of substrate-independent minds (SIM), and previously participated in the ambitious and fascinating efforts of the nanotechnology startup Halcyon Molecular in Silicon Valley.

Slides of talk online here
Video of Talk:

Points discussed in the talk:
  • Biological Life-Extension is Not Enough – Randal A. Koene, Carboncopies.org
  • No one wants to live longer just to live longer. Motivation informs method.
  • Having an objective, a goal, requires that you have some notion of success.
  • Creating (intelligent) machines that have the capabilities we do not is not as good as being able to experience them ourselves… Imagine creating/playing music. Imagine being the kayak. Imagine perceiving the background radiation of the universe.
  • Is being out of the loop really your goal?
  • Near-term goals: extended lives without expanded minds are in conflict with creative development.
  • Social
  • Gene survival is extremely dependent on an environment – it is unlikely to survive many changes. Worse, gene replication does not sustain that which we care most about!
  • Is CTGGAGTAC better than GTTGACTGAC? We are vessels for that game – but for the last 10,000 years something has been happening!
  • Certain future experiences are desirable, others are not – these are your perspectives, the memes you champion… Death keeps stealing our champions, our experts.
  • Too early to do uploading? No! The big perspective is relevant now. We don’t like myopic thinking in our politicians; let’s not be myopic about world issues ourselves.
  • Life-extension in biology may increase the fragility of our species and civilization… More people? Resources. Fewer births? Fewer novel perspectives. Expansion? Environmental limitation.
  • Biological life-extension within the same evolutionary niche = further specialization to the same performance – “over-training” in conflict with generalization.
  • Aubrey de Grey: ultimately desires “uploading”.
  • Significant biological life-extension is incredibly difficult and beset by threats. Reality vs. popular perception.
  • Life-extension and substrate-independence are two different objectives.
  • Developing out of a “catchment area” (S. Gildert) may demand iterations of exploration – and exploration involves risk. Hard-wired delusions and drives. What would an AGI do? Which types of AGI would exist in the long run?
  • “Uploading” is just one step of many – but a necessary step – for a truly advanced species.
  • Thank you – carboncopies.org – randal.a.koene@carboncopies.org


There is a short promo-interview for the Singularity Summit AU 2012 conference that Adam Ford did with Dr. Koene, though unfortunately the connection was a bit unreliable, which is noticeable in the video:

Most of those videos are available through the SciFuture YouTube channel: http://www.youtube.com/user/TheRationalFuture


Lawrence Krauss, Ben Goertzel and Steve Omohundro on the Perils of Prediction

Panel on the Perils of Prediction, where Lawrence Krauss, Steve Omohundro and Ben Goertzel set sail on an epic adventure, careening through the perilous waves of prediction! And the seas are angry, my friends! Our future stands upon the prow; our past drowns in the wake. Our most foolish sailors leave the shore without a compass and an eyeglass. We need to stretch our forecasting abilities further than our intuitions and evolved biases allow.

Video of the panel

Filmed at the Singularity Summit Australia 2011 http://2011.singularitysummit.com.au

Lawrence Maxwell Krauss (born May 27, 1954) is a Canadian-American theoretical physicist who is a professor of physics, Foundation Professor of the School of Earth and Space Exploration, and director of the Origins Project at Arizona State University. He is the author of several bestselling books, including The Physics of Star Trek and A Universe from Nothing. He is an advocate of scientific skepticism, science education, and the science of morality.


Ben Goertzel (born December 8, 1966 in Rio de Janeiro, Brazil), is an American author and researcher in the field of artificial intelligence. He currently leads Novamente LLC, a privately held software company that attempts to develop a form of strong AI, which he calls “Artificial General Intelligence”. He is also the CEO of Biomind LLC, a company that markets a software product for the AI-supported analysis of biological microarray data; and he is an advisor to the Singularity Institute for Artificial Intelligence, and formerly its Director of Research.

Steve Omohundro is an American scientist known for his research on Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.

Conference: Thinking Machines in the Physical World

“Thinking Machines in the Physical World” invites cross-disciplinary conversations about the opportunities and threats presented by advances in cognitive computing:
  – What concrete, real-world possibilities does intelligence-focused technology open up?
  – What potential effects will “smart computers” exert on labor and jobs around the globe?
  – What are the broader social implications of these changes?

When: Wednesday, July 13, 2016 8:30 AM until Friday ~6pm (then dinner)
Where: Melbourne Uni Law School Building, Level 10 185 Pelham Street, Carlton

Keynotes (see details here):

Prof Brian Anderson – Distinguished Professor at ANU College of Engineering and Computer Science.

Dr James Hughes – Executive Director of the Institute for Ethics and Emerging Technologies.

Prof M. Vidyasagar – Cecil & Ida Green Chair in Systems Biology Science

Prof Judy Wajcman – Anthony Giddens Professor of Sociology, London School of Economics

Dr. Juerg von Kaenel, IBM Research – Cognitive Computing – IBM Watson

Register here | Main website | Program

Professor Graeme Clark AC, Laureate Professor Emeritus, says “It gives me great pleasure to have the opportunity to welcome your interest in the work of Norbert Wiener and invite you to Melbourne to participate in this important conference.”

Official Website: http://21stcenturywiener.org/
Video: https://www.youtube.com/watch?v=etBMY6Orj50
Meetup: http://www.meetup.com/Science-Technology-and-the-Future/events/228816058/
Google+: https://plus.google.com/events/chcmpbupi30ffps4kf94gtn2rpc
Facebook Event: https://www.facebook.com/events/625367860953411/

The long-term future of AI (and what we can do about it) : Daniel Dewey at TEDxVienna

This has been one of my favourite simple talks on AI impacts – simple, clear and straight to the point. Recommended as an introduction to the ideas referred to in the title.

I couldn’t find the audio of this talk at TED – it has been added to archive.org:


Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.



Can Intelligence Explode? – Marcus Hutter at Singularity Summit Australia 2012

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.



Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, “Can Intelligence Explode?”, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pp. 143–166.

Also see:

Marcus Hutter (born 1967) is a German computer scientist and professor at the Australian National University. Hutter was born and educated in Munich, where he studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber’s group at the Swiss Artificial Intelligence lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff’s theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

Hutter’s notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter’s only assumption is that the reactions of the environment in response to the agent’s actions follow some unknown but computable probability distribution.
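The finite-horizon expected-reward maximization Hutter describes can be illustrated with a small sketch. A real AIXI-style agent mixes over all computable environments weighted by their Kolmogorov complexity, which is incomputable; the toy below instead assumes a single known, hypothetical environment model (the `ENV` dictionary, states and rewards invented purely for illustration) and picks the action maximizing expected reward up to a fixed horizon.

```python
# Toy sketch of finite-horizon expected-reward maximisation.
# AIXI proper weighs ALL computable environments by 2^-K(env), which is
# incomputable; here a single known toy model stands in (illustrative only).

# Hypothetical environment: state -> action -> list of (prob, reward, next_state)
ENV = {
    "s0": {"a": [(1.0, 0.0, "s1")], "b": [(1.0, 1.0, "s0")]},
    "s1": {"a": [(1.0, 5.0, "s0")], "b": [(1.0, 0.0, "s1")]},
}

def expected_value(state, horizon):
    """Maximum expected total reward reachable from `state` within `horizon` steps."""
    if horizon == 0:
        return 0.0
    return max(
        sum(p * (r + expected_value(nxt, horizon - 1)) for p, r, nxt in outcomes)
        for outcomes in ENV[state].values()
    )

def best_action(state, horizon):
    """Action maximising expected reward up to the fixed future horizon."""
    return max(
        ENV[state],
        key=lambda a: sum(p * (r + expected_value(nxt, horizon - 1))
                          for p, r, nxt in ENV[state][a]),
    )

# With two steps to go, taking the unrewarded move to s1 first pays off:
print(best_action("s0", 2), expected_value("s0", 2))
```

The point of the sketch is only the structure of the problem: the agent plans against a probability model of the environment and maximizes expected reward to a fixed horizon; Hutter's contribution is doing this without assuming the model is known.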


Professor Marcus Hutter

Research interests:

Artificial intelligence, Bayesian statistics, theoretical computer science, machine learning, sequential decision theory, universal forecasting, algorithmic information theory, adaptive control, MDL, image processing, particle physics, philosophy of science.


Marcus Hutter is Professor in the RSCS at the Australian National University in Canberra, Australia. He received his PhD and BSc in physics from the LMU in Munich and a Habilitation, MSc, and BSc in informatics from the TU Munich. Since 2000, his research at IDSIA and now ANU is centered around the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in 100+ publications and several awards. His book “Universal Artificial Intelligence” (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50’000€ H-prize).

Should We Fear or Welcome the Singularity? Nobel Week Dialogue 2015 – The Future of Intelligence

Should science and society welcome ‘the singularity’ – the idea of the hypothetical moment in time when artificial intelligence surpasses human intelligence?
The discussion has been growing over decades, institutes dedicated to solving AI friendliness have popped up, and more recently the ideas have found popular advocates. Certainly super-intelligent machines could help solve classes of problems that humans struggle with, but if not designed well they may cause more problems than they solve.

Is the question of fear or hope in AI a false dichotomy?

Ray Kurzweil

While Kurzweil agrees that AI risks are real, he argues that we already face risks involving biotechnology – I think Kurzweil believes we can solve the biotech threat and other risks through building superintelligence.

Stuart Russell believes that a) we should be exactly sure what we want before we let the AI genie out of the bottle, and b) it’s a technological problem in much the same way as the containment of nuclear fusion is a technological problem.

Max Tegmark says we should both welcome and fear the Technological Singularity. We shouldn’t just bumble into it unprepared. All technologies have been double-edged swords – in the past we learned from mistakes (e.g. with out-of-control fires), but with AI we may only get one chance.

Harry Shum says we should be focussing on what we believe we can develop with AI in the next few decades. We find it difficult to talk about AGI. Most of the social fears are around killer robots.

Maggie Boden

Maggie Boden poses an audience question: how will AI cope with our lack of development in ethical and moral norms?

Stuart Russell answers that machines have to come to understand what human values are. If the first pseudo-general-purpose AIs don’t get human values well enough, one may end up cooking its owner’s cat – this could irreparably tarnish the AI and home-robot industry.

Kurzweil adds that human society is getting more ethical – it seems that statistically we are making ethical progress.

Max Tegmark

Max Tegmark brings up that intelligence is defined by the degree of ability to achieve goals – so we can’t ignore the question of what goals to give the system if we are building highly intelligent AI. We need to make AI systems understand what humans really want, not what they say they want.

Harry Shum says that the important ethical questions for AI systems concern data and user privacy.

Panelists: Harry Shum (Microsoft Research EVP of Tech), Max Tegmark (Cosmologist, MIT) Stuart Russell (Prof. of Computer Science, UC Berkeley) and Ray Kurzweil (Futurist, Google Director of Engineering). Moderator: Margaret Boden (Prof. of Cognitive Science, Uni. of Sussex).

This debate is from the 2015 edition of the meeting, held in Gothenburg, Sweden on 9 Dec.

Rationality & Moral Judgement – Simon Laham

Rationality & Moral Judgement – A view from Moral Psychology. Talk given at EA Global Melbourne 2015. Slides here.

What have we learned from an empirical approach to moral psychology – especially about the role of rationality in everyday morality?
What are some lessons that the EA movement can take from moral psychology?

Various moral theorists over the years have placed different emphases on the roles that the head and the heart play in moral judgement. Early conceptions of the role of the head in morality held that it drives moral judgement. A Kantian might say that the head – reasoning – drives moral judgement: when presented with a dilemma of some kind, the person engages ‘system 2’-like processes in a controlled, rational manner. An advocate of a Humean model may favour the idea that emotion, the heart (‘system 1’ thinking), plays the dominant role in moral judgement. Modern psychologists often take a hybrid model in which both system 1 and system 2 styles of thinking contribute to the way we judge right from wrong.

Moral Judgement & Decision making is driven by a variety of factors:

  • Emotions (e.g., Valdesolo & DeSteno, 2006)
  • Values (e.g., Crone & Laham, 2015)
  • Relational and group membership concerns (e.g., Cikara et al., 2010)

Across a wide range of studies, a majority of people do not consistently apply abstract moral principles – Moral judgments are not decontextualized, depersonalized and asocial (i.e., not System 2)
Not only do people inconsistently apply rationality in moral judgments, many reject the idea that consequentialist rationality should have any place in the moral domain.

  • Appeals to consequentialist logic may backfire (Kreps and Monin, 2014)
  • People who give consequentialist justifications for their moral positions are viewed as less committed and less authentic

Is trying to change people’s minds the best way to expand the EA movement?
Moral judgment is subject to a variety of contextual effects. Knowledge of such effects can be used to ‘nudge’ people towards utilitarianism (see Thaler & Sunstein, 2008).

‘Practical’ take-home
Things beside rationality matter in morality and people believe that things beside rationality should matter.
(a) present EA in a manner that does not trade utilitarian options off against deeply held values, identities, or emotions
(b) use decision framing techniques to ‘nudge’ people towards utilitarian choices


Consider watching Simon’s talk at the festival of dangerous ideas about his book ‘The Joy of Sin‘.

Also Simon wrote an article for Huffington post where he says : “I confess it, I am a sinner. I begin most days in a haze of sloth and lust (which, coincidentally, is also how I end most days); gluttony takes hold over breakfast and before I know it I’m well on my way to hell and it’s not yet 9 a.m. Pride, lust, gluttony, greed, envy, sloth and anger, the seven deadly sins, these are my daily companions.

And you? Are you a sinner?

The simple fact is that we all sin (or rather ‘sin’), and we do it all the time. But fear not: the seven deadly sins aren’t as bad for you as you might think.”

Simon Laham is a senior lecturer in the psychology department at Melbourne University. He has worked over the last 8 years on the psychology of morality from the point of view of experimental social psychology.
Key research questions : How do we make moral judgments? How do others influence what we do?

Many thanks for watching!
– Support SciFuture via Patreon
– Please subscribe to the SciFuture YouTube channel
Science, Technology & the Future