Future Day 2026

Talks for Future Day 2026 will likely span several days around the beginning of March!

Details (including the agenda) for the Future Day 2026 event: scifuture.org/events/future-day-2026

Why are nearly all our holidays focused on celebrating the past, or the cyclical processes of nature? 

Why not celebrate the amazing future we are collectively creating?

Participants include: Joscha Bach, Anders Sandberg, Ben Goertzel, Roman Yampolskiy, Robin Hanson, Christine Peterson, Aubrey de Grey, Lev Lafayette, Adam Ford, James Hughes, Eyal Aharoni, Danica Dillion, Hugo de Garis, Andrés Gómez-Emilsson and Angela Livingstone.

Some of the session details – see the agenda for more:

  • James Hughes – How Billionaires Ruined Futurism
    • For decades, futurism promised a horizon of radical possibility—a Star Trek civilization of post-scarcity, longevity, and universal emancipation. But today, that horizon has been enclosed by a caste of tech-billionaires who have curdled the public’s hope for tomorrow into dread. From Musk’s erratic dismantling of the digital public sphere to Thiel’s surveillance capitalism and the generalized retreat into bunkers and seasteads, the oligarchs have rebranded “the future” as an escape hatch for the wealthy and a panopticon for the rest.
      This talk argues that the concentration of technological power in the hands of a few erratic plutocrats is not just a moral failure, but an epistemological crisis; we can no longer predict the future because the “trends” of history have been hijacked by the whims of a few dozen men. Techno-optimism is dead so long as it remains a mascot for neo-feudalism. To save the transhumanist promise of liberation, we must move beyond asking billionaires for charity and proceed to the necessary conclusion: we must expropriate their platforms, democratize their firms, and reclaim the machinery of the future for the public good.
  • Robin Hanson – Futarchy: Competent Governance Soon?!
    • The biggest reason that our world is messed up in so many identifiable ways is that we use pretty broken systems of governance. With a competent governance, we could instead point our systems to the solvable problems we see, and they’d actually solve them. You might think we’ve tried all possible systems, but in fact we’ve hardly tried any of them. I invented a particular promising approach that is now undergoing successful trials. 
  • Aubrey de Grey – How close are we to robust mouse rejuvenation, and why does that matter?
    • The “damage repair” approach to bringing aging under medical control has made huge strides since I first proposed it 25 years ago. However, since it is a divide-and-conquer strategy, we should not be surprised at the absence of progress in the “bottom line” of life extension, even in mice. Can we realistically expect that to change any time soon? I will present reasons to believe that we can, in the form of accelerating progress in proofs of efficacy of individual treatments, together with initial proof of concept that combining damage repair modalities will give additive benefits.
  • Christine Peterson – Top 10 Longevity Strategies Today that you may not already know
    • A choose-your-own adventure tour.
      Longevity research is making progress! But how do we stay alive long enough to reach Longevity Escape Velocity? We’ll explore a few less-discussed strategies accessible to non-billionaires.
  • Debate – Ben Goertzel & Hugo de Garis (mod: Adam Ford) – Should Humanity Become the Number Two Species?
  • Anders Sandberg – Living inside the cyborg leviathan: artificial intelligence from the 17th century to the posthuman future
    • Being human is hard: we are stupid and somewhat selfish, yet need to work together with other stupid and selfish people with their own goals. We survive by building societies, filled with institutions and habits that help us solve these tough coordination problems. These institutions often act as extended cognition, allowing us to go far beyond individual power. We are to some extent living inside artificial intelligence systems, and they have enabled us to take control over the planet… as well as caused the worst disasters in history. As we build AI, we are also making something that can slip inside our extended cognitive systems and enhance them into literal cyborg systems. We need not just enough of “first order alignment” – getting AI to do things we want safely, but also “second order alignment” – AI that plays well with our societies and structures. Otherwise there is a real risk we may lose our own ecological niche and find ourselves in a world that may be safe and prosperous, yet unfit for human flourishing. If we play it right, however, we might become part of something far grander: a cyborg civilization able to reach full autonomy.
  • Adam Ford – More Moral than Us
    • We’ve built machines that can out-calculate, out-game and out-predict us – and may soon generally out-reason us. But could they ever out-care us – or will they, with all that power, remain indifferent?
      Encoding human ethics is hard – there are so many clashes of attitudes, and so much incoherence between nations, cultures, people within cultures, and even within individuals. The challenge of coherent preference/value aggregation across populations of humans is hard enough – but ideally other sentience should be considered too, which makes it harder still. What happens when some groups push to have their interests (often steeply) over-represented at the cost of others? Is morality something that can be discovered rather than invented, like good epistemics? There are already implicitly assumed real-world features to which stance-independent principles apply when people earnestly argue that ‘this way is better’ or ‘x is more important’ – assuming that these principles support their position – and sometimes people do this without realising they are flirting with objectivity.
      As AI becomes more and more intelligent, I hope it will be able to make more sense of ethics than humans can – i.e. reason about it in a human-legible way – such that the epistemic surface areas are rendered easy to inspect (what is being asserted, what the dependencies are, what would change the conclusions, and where the evidence lives), so that productive disagreement becomes cheap rather than a bespoke archaeological expedition every time. We ought to be wary of blind epistemic deference to AI, and remain in a position to make sense of ethics ourselves – to understand what, to the best approximation under uncertainty, is permissible, i.e. stance-independently accurate. Instead of AI aligning to human values, AI may align to something like moral realism – and so can humans. I call this indirect alignment.
      Then there is the moral motivation problem: whether AI (and humans) will actually care, even if they have high-fidelity understandings of ethics.
  • Angela Livingstone –

Don’t miss this special Future Day 2026 event – where the future isn’t just discussed, but actively shaped.

Note that the details, including the time of this event, may change, though it will be online – via Zoom or streamed.

“Celebrating and honoring the past and the cyclical processes of nature is a valuable thing,” says Goertzel. “But in these days of rapid technological acceleration, it is our future that needs more attention, not our past.

“My hope is that Future Day can serve as a tool for helping humanity focus its attention on figuring out what kind of future it wants, and striving to bring these visions to reality.”

“The past is over; the present is fleeting; we live in the future.” — Ray Kurzweil re Future Day

“Future Day is designed to center the impossible in the public mind once a year as a temptation too delicious to resist,” says Howard Bloom, author of Global Brain.

Previous years' events are listed on scifuture.org.
