An existential risk is a risk that poses permanent consequences to humanity, consequences that can never be undone. In his 2002 article, Nick Bostrom defined an existential risk as:

"One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."
In the same article, Bostrom proposed four classes of existential risk:
- Bangs - Earthly intelligent life is extinguished relatively suddenly, either accidentally or by deliberate destruction
- Crunches - Any potential humanity had to enhance itself indefinitely is forever eliminated, although humanity survives
- Shrieks - Humanity enhances itself, but explores only a narrow portion of its desirable possibilities
- Whimpers - Humanity enhances itself, evolving in a gradual and irrevocable manner to the point that none of our present values are maintained, or enhances itself only infinitesimally
Examples of potential existential risks to all of humanity include molecular nanotechnology weapons; climate chaos that triggers social collapse leading to general nuclear or biological warfare; a perfectly engineered plague; a sufficiently large asteroid impact; an Unfriendly AI; or a Friendly AI that, in deciding how best to preserve life, makes an error of logic or priority that is significant from humanity's point of view. As with lesser-scoped catastrophic risks, most such scenarios allow that tiny numbers of humans might survive by escaping into space or underground. But the threat of ending civilization as we know it satisfies most definitions of existential risk: humans have never lived under such circumstances, and might have to become something quite different, emotionally and physically, to survive at all.
Existential risks present a unique challenge because of their irreversible nature. Unlike with lesser risks, we do not have the option of learning from past disasters: no such disaster has occurred, or there would be no "we" at present to analyze it.
Even using past experience to predict the probability of future existential risks raises difficult problems in anthropic reasoning. As Milan M. Ćirković put it, "[W]e cannot [...] expect to find traces of a large catastrophe that occurred yesterday, since it would have preempted our existence today. [...] Very destructive events destroy predictability!"
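This observation-selection effect can be sketched with a small Monte Carlo simulation. All numbers here are illustrative assumptions, not estimates of any real risk: catastrophes strike each period with some true probability, and each one has some chance of being existential. Worlds where an existential catastrophe struck leave no observers, so the surviving observers' historical records systematically under-count the true catastrophe rate.

```python
import random

random.seed(0)

TRUE_P = 0.10    # assumed per-period probability of a catastrophe (illustrative)
FATALITY = 0.5   # assumed chance a given catastrophe is existential (illustrative)
PERIODS = 100    # length of each world's history
WORLDS = 20000   # number of simulated worlds

surviving_rates = []
for _ in range(WORLDS):
    recorded_events = 0
    alive = True
    for _ in range(PERIODS):
        if random.random() < TRUE_P:            # a catastrophe occurs...
            if random.random() < FATALITY:      # ...and it is existential:
                alive = False                   # no observers remain to record it
                break
            recorded_events += 1                # survivable: it enters the record
    if alive:
        surviving_rates.append(recorded_events / PERIODS)

# Average catastrophe rate as inferred by observers in surviving worlds.
observed = sum(surviving_rates) / len(surviving_rates)
print(f"true per-period catastrophe rate: {TRUE_P:.3f}")
print(f"rate inferred by survivors:       {observed:.3f}")
```

Survivors should infer a rate near 0.05/0.95 ≈ 0.053, roughly half the true rate of 0.10, because every history containing an existential catastrophe has been filtered out of the observable record.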
Since existential disasters cannot be recovered from, under many moral systems they matter for the rest of time: their cost is not just the people who die in the disaster, but all of their possible future descendants.
Blog posts:
- Intelligence enhancement as existential risk mitigation by Roko
- Our society lacks good self-preservation mechanisms by Roko
- Disambiguating doom by steven0461
- Existential Risk by lukeprog
Organizations:
- Singularity Institute
- The Future of Humanity Institute
- The Oxford Martin Programme on the Impacts of Future Technology
- Global Catastrophic Risk Institute
- Saving Humanity from Homo Sapiens
- Skoll Global Threats Fund (To Safeguard Humanity from Global Threats)
- Foresight Institute
- Defusing the Nuclear Threat
- Leverage Research
- The Lifeboat Foundation
References:
- Nick Bostrom (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology 9. http://www.nickbostrom.com/existential/risks.html. (PDF)
- Nick Bostrom and Milan M. Ćirković (eds.) (2008). Global Catastrophic Risks. Oxford University Press.
- Milan M. Ćirković (2008). "Observation Selection Effects and Global Catastrophic Risks". In Global Catastrophic Risks. Oxford University Press. http://books.google.com/books?id=-Jxc88RuJhgC&lpg=PP1&pg=PA120#v=onepage&q=&f=false.
- Eliezer S. Yudkowsky (2008). "Cognitive Biases Potentially Affecting Judgment of Global Risks". Global Catastrophic Risks. Oxford University Press. http://yudkowsky.net/rational/cognitive-biases. (PDF)