Existential risk
An existential risk is a risk whose consequences for humanity would be permanent and could never be undone. Nick Bostrom, in his 2002 article, defined an existential risk as:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
Bostrom (2002) classifies existential risks into four categories:
- Bangs - Earthly intelligent life is extinguished relatively suddenly by any cause; the prototypical end of humanity. Examples of bangs include deliberate or accidental misuse of nanotechnology, nuclear holocaust, the end of our simulation, or an unfriendly AI.
- Crunches - Humanity's potential to enhance itself indefinitely is forever eliminated, although humanity itself continues. Possible crunches include the exhaustion of resources, social or governmental pressure that halts technological development, or future technological development proving an insurmountable challenge before the creation of a superintelligence.
- Shrieks - Humanity enhances itself, but explores only a narrow portion of its desirable possibilities. Because the criteria for desirability have not yet been defined, this category is only loosely specified. However, a flawed friendly AI that misinterprets our values, a superhuman upload that chooses its own values and imposes them on the rest of humanity, or an intolerant government outlawing social progress would certainly qualify.
- Whimpers - Humanity endures, but only a fraction of our potential is ever achieved. Spread across the galaxy and expanding at near light-speed, we might find ourselves doomed by our own or another civilization's catastrophic physics experimentation, destroying reality at light-speed. A prolonged galactic war leading to our extinction or severe limitation would also be a whimper. More darkly, humanity might develop until its values became entirely disjoint from our present ones, making that future civilization worthless by today's standards.
Existential risks present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an existential risk, and so cannot learn from our mistakes. Because we have no past experience with existential risks, they are black swans. Since existential disasters cannot be recovered from, their cost is not just the dead, but every descendant who would have been born.
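To make the scale of that cost concrete, here is a minimal toy calculation comparing an extinction event's immediate death toll with the future lives it forecloses. All figures (current population, number of future generations, population per generation) are illustrative assumptions chosen for this sketch, not estimates drawn from the article or from Bostrom.

```python
# Toy illustration: the cost of an existential catastrophe is not just the
# people alive at the time, but every descendant who would have been born.
# All figures below are illustrative assumptions, not serious estimates.

current_population = 8e9           # assumed number of people alive today
generations_foregone = 10_000      # assumed future generations humanity might otherwise have
population_per_generation = 10e9   # assumed average population of each future generation

immediate_deaths = current_population
descendants_never_born = generations_foregone * population_per_generation

print(f"Immediate deaths:       {immediate_deaths:.2e}")
print(f"Descendants never born: {descendants_never_born:.2e}")
print(f"Ratio:                  {descendants_never_born / immediate_deaths:,.0f}x")
```

Under these assumed figures the foregone future lives outnumber the immediate deaths by a factor of more than ten thousand, which is the intuition behind treating existential risks as categorically different from other catastrophes.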
Blog posts
- Intelligence enhancement as existential risk mitigation by Roko
- Our society lacks good self-preservation mechanisms by Roko
- Disambiguating doom by steven0461
- Existential Risk by lukeprog
Organizations
- Singularity Institute
- The Future of Humanity Institute
- The Oxford Martin Programme on the Impacts of Future Technology
- Global Catastrophic Risk Institute
- Saving Humanity from Homo Sapiens
- Skoll Global Threats Fund (To Safeguard Humanity from Global Threats)
- Foresight Institute
- Defusing the Nuclear Threat
- Leverage Research
- The Lifeboat Foundation
Resources
See also
References
- Nick Bostrom (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology 9. http://www.nickbostrom.com/existential/risks.html. (PDF)
- Nick Bostrom and Milan M. Ćirković, eds. (2008). Global Catastrophic Risks. Oxford University Press.
- Milan M. Ćirković (2008). "Observation Selection Effects and Global Catastrophic Risks". In Global Catastrophic Risks. Oxford University Press. http://books.google.com/books?id=-Jxc88RuJhgC&lpg=PP1&pg=PA120#v=onepage&q=&f=false.
- Eliezer S. Yudkowsky (2008). "Cognitive Biases Potentially Affecting Judgment of Global Risks". In Global Catastrophic Risks. Oxford University Press. http://yudkowsky.net/rational/cognitive-biases. (PDF)
- Richard A. Posner (2004). Catastrophe: Risk and Response. Oxford University Press. http://books.google.ca/books?id=SDe59lXSrY8C. (DOC)