Posts

The Great Filter, a possible explanation for the Fermi Paradox – interview with Robin Hanson

I grew up wondering about the nature of alien life – what it might look like, what it might do, and whether we will discover any of it soon. Aside from a number of conspiracy theories and some conjecture on Tabby’s Star, so far we have not discovered any signs of life out there in the cosmos. Why is this so?
Given the Drake Equation (which attempts to quantify the likelihood and detectability of extraterrestrial civilizations), it seems as though the universe should be teeming with life. So where are all those alien civilizations?

The ‘L’ in the Drake equation (the length of time a civilization emits detectable signals out into space) could be very long for a technologically advanced civilization – so why haven’t we detected any?

There are many alternative explanations for why we have not yet detected evidence of an advanced alien civilization, such as:
– The Rare Earth hypothesis – astrophysicist Michael H. Hart argues for a very narrow habitable zone based on climate studies.
– John Smart’s STEM theory
– Some form of transcendence

The universe is a pretty big place. If it’s just us, seems like an awful waste of space.
– Carl Sagan, ‘Contact’


Our observable universe being seemingly dead implies that expansionist civilizations are extremely rare: the vast majority of stuff that starts on the path towards life never makes it, so there must be at least one ‘great filter’ that stops most of it from evolving into an expansionist civilization.

Peering into the history of biological evolution on Earth, we see various convergences in evolution – ‘good tricks’ like the transition from single-cellular to multi-cellular life (at least 14 times), eyes, wings, etc. If we can see convergences both in evolution and in the types of tools various human colonies created after being geographically dispersed, then deducing something about the directions complex life could take – especially life that becomes technologically adept – could inform us about our own future.

The ‘Great Filter’ – should we worry?

The theory is that, given estimates (including the likes of the Drake Equation), it’s not unreasonable to argue that there should have been more than enough time and space for cosmic expansionist civilizations (Kardashev Type I, II, III and beyond) at least a billion years old to arise – and that at least one of their light cones should have intersected with ours.  Somehow, they have been filtered out.  Somehow, planets with life on them make it some distance towards becoming spacefaring expansionist civilizations, but get stopped along the way. While we don’t know specifically what that great filter is, there have been many theories – though if the filter is real, it seems to have been very effective.

The argument in Robin’s piece ‘The Great Filter – Are We Almost Past It?’ is somewhat complex; here are some points I found interesting:

  • Life Will Colonize – taking hints from evolution and the behaviour of our human ancestors, it is feasible that our descendants will colonize the cosmos.
    • Looking at Earth’s ecosystem, we see that life has consistently evolved to fill almost every ecological niche – in the seas, on land and below. Humans as a single species have migrated from the African savannah to colonize most of the planet, filling new geographic and economic niches as the requisite technological reach is achieved to take advantage of reproductively useful resources.
    • We should expect humanity to expand to other parts of the solar system, then out into the galaxy, insofar as there exists the motivation and freedom to do so.  Even if most of society becomes wireheads or VR-addicted ‘navel gazers’, they will want more and more resources to fuel more and more powerful computers, and may also want to distribute civilization to avoid local disasters.
    • This indicates that alien life will attempt to do the same, and eventually, absent great filters, expand their civilization through the cosmos.
  • The Data Point – future technological advances will likely enable civilization to expand ‘explosively’ fast (relative to cosmological timescales) throughout the cosmos – however, we as yet have no evidence of this happening, and if there were available evidence, we would likely have detected it by now – much of the argument for the great filter follows from this.
    • Within at most the next million years (absent filters) it is foreseeable that our civilization may reach an “explosive point”, rapidly expanding outwards to utilize more and more available mass and energy resources.
    • Civilization will ‘scatter & adapt’ to expand well beyond the reach of any one large catastrophe (e.g. a supernova) to avoid total annihilation.
    • Civilization will recognisably disturb the places it colonizes, adapting the environment into ideal structures (e.g. creating orbiting solar collectors, Dyson spheres or Matrioshka brains, thereby substantially changing the star’s spectral output and appearance.  Truly advanced civs may even attempt wholesale reconstruction of galaxies).
    • But we haven’t detected an alien takeover on our planet, or seen anything in the sky reflecting expansionist civs – even if Earth or our solar system were kept in a ‘nature preserve’ (look up the “Zoo Hypothesis”), we should be able to see evidence in the sky of aggressive colonization of other star systems.  Despite great success stories in explaining how natural phenomena in the cosmos work (mostly “dead” physical processes), we see no convincing evidence of alien life.
  • The Great Filter – ‘The Great Silence’ implies that at least one of the 9 steps to achieving an advanced expansionist civilization (outlined below) is very improbable; somewhere between dead matter and explosive growth lies The Great Filter.
    1. The right star system (including organics)
    2. Reproductive something (e.g. RNA)
    3. Simple (prokaryotic) single-cell life
    4. Complex (archaeatic & eukaryotic) single-cell life
    5. Sexual reproduction
    6. Multi-cell life
    7. Tool-using animals with big brains
    8. Where we are now
    9. Colonization explosion
  • Someone’s Story is Wrong / It Matters Who’s Wrong – the great silence, as mentioned above, seems to indicate that one or more of the plausible-sounding stories we have about the transitions through each of the 9 steps above are less probable than they look, or just plain wrong. To the extent that the evolutionary steps to achieve our civilization were easy, our future success in achieving a technologically advanced / superintelligent / explosively expansionist civilization is highly improbable.  Realising this may help inform how we strategize our future. (A toy numerical sketch of this ‘one hard step dominates’ point appears after this list.)
    • Some scientists think that the transition from prokaryotic (simple single-celled) life to archaeatic or eukaryotic life is rare – though it seems to have happened at least 42 times
    • Even if most of society wants to stagnate or slow down to stable speeds of expansion, it’s not infeasible that some part of our civ will escape and rapidly expand
    • Optimism about our future opposes optimism about the ease with which life can evolve to where we are now.
    • Being aware of the Great Filter may at least help us improve our chances
  • Reconsidering Biology – there are several potentially hard trial-and-error steps between dead matter and modern humans (life, complexity, sex, society, cradle and language, etc.) – the harder they were, the more likely they are to account for the great silence
  • Reconsidering Astrophysics – physical phenomena which might reduce the likelihood that we would see evidence of an expansionist civ:
    • Fast space travel may be more difficult than expected, even for a superintelligence – the lower the maximum speed, the more this could account for the great silence.
    • The universe could be smaller than we think, containing fewer stars
    • There could be natural ‘baby universes’ which erupt with huge amounts of matter/energy which keep expansionist civs occupied, or effectively trapped
    • Harvesting energy on a large scale may be impossible, or the way in which it is done always preserves natural spectra
    • Advanced life may consistently colonize dark matter
  • Rethinking Social Theories – perhaps in order for advanced civs to be achieved, they must first lose ‘predispositions to territoriality and aggression’, making them ‘less likely to engage in galactic imperialism’
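
To make the ‘at least one step is very improbable’ logic concrete, here is a minimal Python sketch (not from Hanson’s paper – the per-step probabilities are purely illustrative assumptions) showing how a single hard step dominates the overall chance of a planet reaching a colonization explosion:

```python
# A toy sketch (not from Hanson's paper) of why a single very improbable
# step dominates the overall chance of reaching a colonization explosion.
# The per-step probabilities below are purely illustrative assumptions.

steps = {
    "right star system": 0.5,
    "reproductive molecules (e.g. RNA)": 0.1,
    "simple single-cell life": 0.1,
    "complex single-cell life": 1e-6,   # hypothetical 'hard step'
    "sexual reproduction": 0.5,
    "multi-cell life": 0.5,
    "big-brained tool users": 0.01,
    "where we are now": 0.5,
    "colonization explosion": 0.5,
}

total = 1.0
for name, p in steps.items():
    total *= p

print(f"chance a candidate planet passes every step: {total:.2e}")
# With these made-up numbers the single 1e-6 step accounts for almost all
# of the filtering: remove it and the product jumps by six orders of magnitude.
```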

We can’t detect expansionist civs, and our default assumption is that there has been plenty of time and hospitable space for advanced enough life to arise – especially if you agree with panspermia, the idea that life could be seeded by precursors on roaming cosmic bodies (e.g. comets), resulting in more life-bearing planets.  We can posit plausible reasons for a series of filters which slow down or halt the evolutionary progress that would otherwise finally arrive at technologically savvy life capable of expansionist civs – but why would every candidate, everywhere, be stopped?

It seems that we, as a technologically capable species, are on the verge of having our civilization escape Earth’s gravity well and go spacefaring – so how far along the great filter are we?

Though it’s been thought to be less accurate than some of its predecessors, and to be more of a rallying point than a predictive tool – let us revisit the Drake Equation anyway, because it’s a good tool for understanding the apparent contradiction between high probability estimates for the existence of extraterrestrial civilizations and the complete lack of evidence that such civilizations exist.

The number of active, communicative extraterrestrial civilizations in the Milky Way galaxy, N, is assumed to be equal to the mathematical product of:

  1. R, the average rate of star formation in our galaxy,
  2. fp, the fraction of formed stars that have planets,
  3. ne, the average number of planets that can potentially support life, per star that has planets,
  4. fl, the fraction of those planets that actually develop life,
  5. fi, the fraction of planets bearing life on which intelligent, civilized life has developed,
  6. fc, the fraction of these civilizations that have developed communications, i.e., technologies that release detectable signs into space, and
  7. L, the length of time over which such civilizations release detectable signals,


Which of the values on the right side of the equation (1 to 7 above) are the biggest reasons (or most significant filters) for the ‘N’ value (the estimated number of alien civilizations in our galaxy capable of communication) being so small?  If a substantial amount of the great filter is explained by ‘L’, then we are in trouble, because the length of time expansionist civs emit signals likely correlates with how long they survive before disappearing (which we can assume likely means going extinct, though there are other possible explanations for going silent).  If other civs don’t seem to last long, then we can infer statistically that ours might not either.  The larger the remaining filter we have ahead of us, the more cautious and careful we ought to be to avoid potential show stoppers.
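
For readers who want to play with the numbers: the product described above is simply N = R × fp × ne × fl × fi × fc × L. Below is a minimal Python sketch of it – all parameter values are illustrative assumptions, not endorsed estimates – showing how strongly N depends on ‘L’:

```python
# A minimal sketch of the Drake equation, N = R * fp * ne * fl * fi * fc * L.
# All parameter values below are illustrative assumptions only.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable, communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# A 'long L' guess: civilizations stay detectable for a million years.
print(drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.1, f_c=0.1, L=1e6))  # ~3000

# A 'short L' guess: the filter sits in L and civilizations go quiet after ~100 years.
print(drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.1, f_c=0.1, L=100))  # ~0.3
```

With these made-up numbers, shrinking L from a million years to a century takes N from thousands of civilizations down to effectively none – which is the sense in which a filter located at ‘L’ would be bad news for us.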

So let’s hope that the great filter is behind us, or that a substantial proportion of it is – meaning that the seemingly rare occurrence of expansionist civs is because the emergence of intelligent life is rare, rather than because the time for which expansionist civs exist is short.

The more we develop our theories about the potential behaviours of expansionist civs, the more we may expand upon or adapt the ‘L’ term of the Drake Equation.

Many of the parameters in the Drake Equation are really hard to quantify – exoplanet data from the Kepler space telescope has already been used to adapt the Drake Equation, and based on this data it seems there are far more potentially Earth-like habitable planets within our galaxy than previously thought – which both excites me, because news about alien life is exciting, and frustrates me, because it decreases the odds that the larger portion of the great filter is behind us.

Only by doing the best we can with the very best that an era offers, do we find the way to do better in the future.
– Frank Drake, ‘A Reminiscence of Project Ozma’, Cosmic Search Vol. 1, No. 1, January 1979

Interview

…we should remember that the Great Filter is so very large that it is not enough to just find some improbable steps; they must be improbable enough. Even if life only evolves once per galaxy, that still leaves the problem of explaining the rest of the filter: why we haven’t seen an explosion arriving here from any other galaxies in our past universe? And if we can’t find the Great Filter in our past, we’ll have to fear it in our future.
– Robin Hanson, The ‘Great Filter’ – should we worry?

As stated on the Overcoming Bias blog:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.

“What’s the worst that could happen?” – in 1996 (revised in 1998) Robin Hanson wrote:

Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
– Robin Hanson, ‘The Great Filter – Are We Almost Past It?’
If the ‘Great Filter’ is ahead of us, we could fatalistically resign ourselves to the view that human priorities are too skewed to coordinate towards avoiding being ‘filtered’, or we can try to do something to decrease the odds of being filtered. To coordinate our way around a great filter we need to have some idea of plausible filters.
How may a future great filter manifest?
– Reapers (Mass Effect)?
– Berserker probes sent out to destroy any up-and-coming civilization that reaches a certain point? (A malevolent alien teenager in their basement could have seeded self-replicating berserker probes as a ‘practical joke’.)
– A robot takeover? (If this has been the cause of great filters in the past, then why don’t we see evidence of expansionist robot civilizations? See here.  Also, if the two major end states of life are either dead or a genocidal intelligence explosion, and we aren’t the first, then it is speculated that we should live in a young universe.)

Robin Hanson gave a TedX talk on the Great Filter:

Bio

Robin Hanson is an associate professor of economics at George Mason University, a research associate at the Future of Humanity Institute of Oxford University, and chief scientist at Consensus Point. After receiving his Ph.D. in social science from the California Institute of Technology in 1997, Robin was a Robert Wood Johnson Foundation health policy scholar at the University of California at Berkeley. In 1984, Robin received a masters in physics and a masters in the philosophy of science from the University of Chicago, and afterward spent nine years researching artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and independently.

Robin has over 70 publications, and has pioneered prediction markets since 1988, being a principal architect of the first internal corporate markets, at Xanadu in 1990, of the first web markets, the Foresight Exchange since 1994, of DARPA’s Policy Analysis Market, from 2001 to 2003, and of Daggre/Scicast, since 2010.

Links

Robin Hanson’s 1998 revision on the paper he wrote on the Great Filter in 1996
– The Drake Equation at connormorency (where I got the Drake equation image – thanks)
Slate Star Codex – Don’t Fear the Filter
Ask Ethan: How Fast Could Life Have Arisen In The Universe?
Keith Wiley – The Fermi Paradox, Self-Replicating Probes, Interstellar Transport Bandwidth

Can we build AI without losing control over it? – Sam Harris

Sam Harris (author of The Moral Landscape and host of the Waking Up podcast) discusses the need for AI safety – while fun to think about, we are unable to “marshal an appropriate emotional response” to improvements in AI and automation and to the prospect of dangerous AI – responding to it as we would to a far-off sci-fi doom scenario is a failure of intuition.

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Can We Improve the Science of Solving Global Coordination Problems? Anders Sandberg

Anders Sandberg discusses solving coordination problems:

Includes discussion on game theory, including the prisoner’s dilemma (and its iterated form), the tit-for-tat strategy, and reciprocal altruism. He then discusses politics, and why he considers himself a ‘heretical libertarian’ – then contrasts the benefits and risks of centralized planning vs distributed trial & error, and links this in with discussion on Existential Risk – centralizing very risky projects at the risk of disastrous coordination failures. He discusses groupthink and what forms of coordination work best. Finally, he emphasises the need for a science of coordination – a multidisciplinary approach including:

  1. Philosophy
  2. Political Science
  3. Economics
  4. Game Theory

Also see the tutorial on the Prisoner’s Dilemma:
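
For a flavour of what that tutorial covers, here is a minimal Python sketch (illustrative only, not code from the tutorial) of the iterated prisoner’s dilemma with the standard payoff matrix, pitting tit-for-tat against an always-defect strategy:

```python
# A minimal sketch of the iterated prisoner's dilemma with the classic
# tit-for-tat strategy. Payoffs are the standard textbook values; this is
# illustrative only, not code from the tutorial linked above.

PAYOFFS = {  # (my move, their move) -> my score; 'C' cooperate, 'D' defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []  # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Against itself, tit-for-tat sustains cooperation; against a pure defector it loses only the first round before retaliating – part of why it performs so well in iterated tournaments.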

And Anders’ paper on AGI models.

A metasystem transition is the evolutionary emergence of a higher level of organisation or control in a system. A number of systems become integrated into a higher-order system, producing a multi-level hierarchy of control. Within biology such evolutionary transitions have occurred through the evolution of self-replication, multicellularity, sexual reproduction, societies etc. where smaller subsystems merge without losing differentiation yet often become dependent on the larger entity. At the beginning of the process the control mechanism is rudimentary, mainly coordinating the subsystems. As the whole system develops further the subsystems specialize and the control systems become more effective. While metasystem transitions in biology are seen as caused by biological evolution, other systems might exhibit other forms of evolution (e.g. social change or deliberate organisation) to cause metasystem transitions. Extrapolated to humans, future transitions might involve parts or the whole of the human species becoming a super-organism.
– Anders Sandberg

Anders discusses similar issues in ‘The thermodynamics of advanced civilizations‘ – Is the current era the only chance at setting up the game rules for our future light cone? (Also see here)


Further reading
The Coordination Game: https://en.wikipedia.org/wiki/Coordination_game

Heavy-Tailed Distributions: What Lurks Beyond Our Intuitions?

Understanding heavy-tailed distributions is important for assessing likelihoods and impact scales when thinking about possible disasters – especially relevant to xRisk and Global Catastrophic Risk analysis. How likely is civilization to be devastated by a large-scale disaster, or even to go extinct?
In this video, Anders Sandberg discusses (with the aid of a whiteboard) how heavy-tailed distributions account for more than our intuitions tell us.

Considering large-scale disasters may be far more important than we intuit.

Transcript of dialogue

So typically when people talk about probability they think about nice probability distributions like the bell curve, or Gaussian. So this means that it’s most likely that you get something close to zero, and then less and less likely that you get very positive or very negative things – and this is a rather nice-looking curve.

However, many things in the world turn out to have much nastier probability distributions. A lot of disasters, for example, have a power law distribution. So if this is the size of a disaster and this is the probability, they fall off like this. This doesn’t look very dangerous from the start. Most disasters are fairly small – there’s a high probability of something close to zero and a low probability of something large. But it turns out that the probability of getting a really large one can become quite big.

So suppose this one has alpha equal to 1 – that means the chance of getting a disaster of size 10 is proportional to 1 in 10, a disaster 10 times as large has just a 10th of that probability, and a disaster 10 times as large again has a 10th of that.

That means that we have quite a lot of probability of getting very, very large disasters – so in this case [the Gaussian] getting something that is very far out here is exceedingly unlikely, but in the case of power laws you can actually expect to see some very, very large outbreaks.

So if you think about the times at which various disasters happen – they happen irregularly, and occasionally one is through the roof, and then another one, and you can’t of course tell when they happen – that’s random. And you can’t really tell how big they are going to be, except that they’re going to be distributed in this way.

The real problem is that when something is bigger than any threshold you imagine... well, it’s not just going to be a little bit taller, it’s going to be a whole lot taller.

So if we’re going to see a war, for example, as large as or larger than even the Second World War, we shouldn’t expect it to kill a million people more. We could expect it to kill tens or, most likely, hundreds of millions, or even billions of people more – which is a rather scary prospect.

So the problem here is that disasters seem to have these heavy tails. A heavy tail, in probability slang, means that the probability mass over here – the chance that something very large happens – falls off very slowly. And this is of course a big problem, because we tend to think in terms of normal distributions.

Normal distributions are nice. We say they’re normal because a lot of the things in our everyday life are distributed like this. The tallness of people, for example – very rarely do we meet somebody who’s a kilometre tall. However, when we meet people and think about how much they’re making, or how much money they have – well, Bill Gates. He is far, far richer than just ten times you and me; he’s actually way out here.

So when we get to the land where we have these fat, heavy tails, both the richest (if we are talking about rich people) and the dangers (if we are talking about disasters) tend to be much bigger than we can normally think about.

Adam: Hmm, yes – definitely unintuitive.

Anders: Mmm, and the problem is of course that our intuitions are all shaped by what’s going on here in the normal realm. We have this experience of what has happened so far in our lives, and once we venture out here and talk about very big events, our intuitions suddenly become very bad. We make mistakes. We don’t really understand the consequences, cognitive biases take over, and this can of course completely mess up our planning.

So we invest far too little in handling the really big disasters and we’re far too uninterested in going for the big wins in technology and science.

We should pay more attention to probability theory (especially heavy-tailed distributions) in order to discover and avoid disasters that lurk beyond our intuitions.
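
To put a rough number on the contrast Anders draws, here is a small Python sketch (the distributions and parameters are illustrative assumptions) comparing how quickly the tail probability falls off for a power law versus a normal distribution:

```python
# A rough sketch of the point above: how much probability lives in the far
# tail of a power law (Pareto, alpha = 1) versus a normal distribution.
# The specific parameters are illustrative assumptions.

import math

def pareto_tail(x, alpha=1.0, x_min=1.0):
    """P(X > x) for a Pareto distribution: falls off like a power of x."""
    return (x_min / x) ** alpha

def normal_tail(x, mu=1.0, sigma=1.0):
    """P(X > x) for a normal distribution: falls off like exp(-x^2)."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

for size in (10, 100, 1000):
    print(f"disaster {size:>4}x the typical size: "
          f"power law {pareto_tail(size):.0e}  vs  normal {normal_tail(size):.0e}")

# With alpha = 1, a disaster 10x as large is only 10x less likely, and 100x as
# large only 100x less likely - still very much worth planning for. Under the
# normal distribution the same events are so unlikely they effectively never happen.
```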


Also see –
– Anders Sandberg: The Survival Curve of Our Species: Handling Global Catastrophic and Existential Risks

Anders Sandberg on Wikipedia: https://en.wikipedia.org/wiki/Anders_Sandberg


Many thanks for watching!

Consider supporting me by:
a) Subscribing to my YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media I create

Kind regards,
Adam Ford
– Science, Technology & the Future: http://scifuture.org

Nick Bostrom: Why Focus on Existential Risk related to Machine Intelligence?

One can think of Existential Risk as a subcategory of Global Catastrophic Risk – while GCRs are really bad, civilization has the potential to recover from a global catastrophic disaster.
An Existential Risk is one from which there is no chance of recovery. An example of the sort of disaster that fits the category of existential risk is human extinction, which eliminates the possibility of future [human] lives worth living. Theories of value which imply that even relatively small reductions in net existential risk have enormous expected value mostly fall under population ethics, taking an average or total utilitarian view of the well-being of the future of life in the universe.  Since we haven’t seen any convincing evidence of life outside Earth’s gravity well, it may be that there is no advanced, technologically capable life elsewhere in the observable universe.  If we value lives worth living, and lots of lives worth living, we might also value filling the uninhabited parts of the universe with lives worth living – and arguably we need an advanced, technologically able civilization to achieve this.  Hence, if humans become extinct, it may be that evolution will never again produce a life form capable of escaping the gravity well and colonizing the universe with valuable life.
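
As a back-of-the-envelope illustration of the ‘enormous expected value’ point – the numbers below are purely hypothetical assumptions, not figures from Bostrom:

```python
# A back-of-the-envelope sketch of the 'enormous expected value' claim.
# Every number here is an illustrative assumption, not a figure from
# Bostrom's paper.

future_lives = 1e16          # hypothetical count of worthwhile future lives
                             # if civilization survives and spreads
risk_reduction = 1e-6        # a 'relatively small' cut in net existential risk

expected_lives_saved = future_lives * risk_reduction
print(f"expected future lives preserved: {expected_lives_saved:.0e}")  # 1e+10

# Under a total utilitarian view, even a one-in-a-million reduction in
# existential risk is worth (in expectation) ten billion lives - which is
# why such theories of value treat xrisk reduction as enormously valuable.
```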

Here we look at the reasons to focus on Existential Risk related to machine intelligence.

Say machine intelligence is created with a theory of value outside of, contradictory to, or simply different enough from one which values human existence, or the existence of valuable life in the universe.  Also imagine that this machine intelligence could act on its values in an exacting manner – it may cause humanity to become extinct on purpose, or as a side effect of implementing its values.

The paper ‘Existential Risk Prevention as Global Priority‘ by Nick Bostrom clarifies the concept of existential risk further:

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this paper, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability. http://www.existential-risk.org

Interview with Nick Bostrom on Machine Intelligence and XRisk

I had the pleasure of doing an interview with Oxford philosopher Nick Bostrom on XRisk:

Transcription of interview:

In the last couple of years we’ve been focusing quite heavily on machine intelligence, partly because it seems to raise some significant existential risk down the road, and partly because relatively little attention has been given to this risk. So when we are prioritizing what we want to spend our time researching, one variable that we take into account is how important is this topic that we could research? But another is how many other people are there who are already studying it? Because the more people who are already studying it, the smaller the difference that having a few extra minds focusing on that topic will make.
So, say, the topic of peace and war and how you can try to avoid international conflict is a very important topic – and many existential risks would be reduced if there were more global cooperation.
However, it is also hard to see how a very small group of people could make a substantial difference to today’s risk of arms races and wars. There are big interests involved in this, and so many people already working either on disarmament and peace and/or on military strength, that it’s an area where it would be great to make a change – but it’s hard to make a change with a small number of people, by contrast with something like the risk from machine intelligence and the risk of superintelligence.
Only a relatively small number of people have been thinking about this, and there might be some low-hanging fruit there – some insights that might make a big difference. So that’s one of the criteria.
Now we are also looking at other existential risks, and we are also looking at things other than existential risk – like trying to get a better understanding of what humanity’s situation in the world is. So we have been thinking some about the Fermi Paradox, for example, and about some methodological tools that you need, like observation selection theory and how you can reason about these things. And to some extent also more near-term impacts of technology, and of course the opportunities involved in all of this – it is always worth reminding oneself that although enormous technological powers will pose great new dangers, including existential risks, they also of course make it possible to achieve an enormous amount of good.
So one should bear this in mind – the opportunities as well that are unleashed with technological advance.

About Professor Nick Bostrom

Director & James Martin Research Fellow

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.

In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP 100 Global Thinkers list, the Foreign Policy Magazine’s list of the world’s top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.

CV: http://www.nickbostrom.com/cv.pdf

Personal Web: http://www.nickbostrom.com

FHI Bio: https://www.fhi.ox.ac.uk/about/the-team/

Also consider joining the Facebook Group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk