Why should we prioritize improving the long-term future?
Longtermism is an ethical stance that motivates the reduction of existential risks such as nuclear war, engineered pandemics, and risks arising from emerging technologies like AI and nanotechnology.
Sigal Samuel summarizes the key argument for longtermism as follows: “future people matter morally just as much as people alive today; (…) there may well be more people alive in the future than there are in the present or have been in the past; and (…) we can positively affect future peoples’ lives.”
What kinds of events can we influence in the near-term which will likely have very long-lasting, predictable future effects?
How can we make better (useful, accurate) predictions of the effects of our actions over very long time horizons?
How can we balance near-term needs with long-term ones? Does prioritizing the long term mean deprioritizing more immediate issues?
How can we know what to strive for in the long term? Can we safely specify a concrete end-goal of human and technological development? How might something like a long reflection help?
Panelists include Stuart Armstrong, John Smart and Anders Sandberg.
This panel is part of the ‘Stepping Into the Future’ conference.