Existential risk

From Lesswrongwiki
The following is a brief list of existential risks:  Asteroids, supervolcanoes, ecological disasters, extreme global warming, nuclear holocaust, pandemics, engineered bioweapons, strangelets, self-replicating nanomachines, [[Unfriendly AI]], the [[Simulation hypothesis|termination of our program]], resource depletion preventing humanity from recovering from a minor disaster, a social or political movement preventing scientific progress, the gradual loss of our core values, extermination or domination by extraterrestrials, and any number of other threats.  
 
  
 

Revision as of 06:45, 18 July 2012


An existential risk is a risk whose consequences for humanity would be permanent and could never be undone. In his 2002 article, Nick Bostrom defined an existential risk as:

One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Bostrom (2002) proposes a series of classifications for existential risks:

  • Bangs - Earthly intelligent life is extinguished relatively suddenly, by any cause; the prototypical end of humanity. Examples of bangs include the deliberate or accidental misuse of nanotechnology, nuclear holocaust, [[Simulation argument|the end of our simulation]], or an [[unfriendly AI]].
  • Crunches - The potential humanity had to enhance itself indefinitely is forever eliminated, although humanity itself continues. Possible crunches include the exhaustion of resources, social or governmental pressure ending technological development, or even future technological development proving an unsurpassable challenge before the creation of a [[superintelligence]].
  • Shrieks - Humanity enhances itself, but explores only a narrow portion of its desirable possibilities. As the [[Complexity of value|criteria for desirability have not yet been defined]], this category is largely open. However, a flawed [[friendly AI]] incorrectly interpreting our values, a superhuman [[WBE|upload]] choosing its own values and imposing them on the rest of humanity, or an intolerant government outlawing social progress would certainly qualify.
  • Whimpers - Though humanity endures, only a fraction of our potential is ever achieved. Spread across the galaxy and expanding at near light speed, we might be doomed by our own or another civilization's catastrophic physics experiment, destroying reality at light speed. A prolonged galactic war leading to our extinction or severe limitation would also be a whimper. More darkly, humanity might develop until its [[Complexity of value|values]] became disjoint with ours today, making that civilization worthless by present values.

Existential risks present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an existential risk, and so we cannot learn from our mistakes. Because we have no past experience with them, existential risks are [[black swans]]. Since existential disasters cannot be recovered from, their cost is not just the dead, but every descendant who would have been born.

Blog posts

Organizations

Resources

See also

References