{{wikilink}}

An '''existential risk''' is a risk whose consequences for humanity would be permanent and impossible to undo. In his 2002 article, [[Nick Bostrom]] defined an existential risk as:
  
 
<blockquote>One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.</blockquote>
  
Bostrom (2002) proposes a series of classifications for existential risks:
 
* Bangs - Earthly intelligent life is extinguished relatively suddenly, either accidentally or by deliberate destruction
* Crunches - Any potential humanity had to enhance itself indefinitely is forever eliminated, although humanity survives
* Shrieks - Humanity enhances itself, but explores only a narrow portion of its desirable possibilities
* Whimpers - Humanity enhances itself, evolving in a gradual and irrevocable manner to the point that none of our present values are maintained, or enhances itself only infinitesimally

The following is a brief list of potential existential risks: asteroids, supervolcanoes, ecological disasters, extreme global warming, nuclear holocaust, pandemics, engineered bioweapons, strangelets, self-replicating nanomachines, [[Unfriendly AI]], the [[Simulation hypothesis|termination of our program]], resource depletion preventing humanity from recovering from a minor disaster, a social or political movement preventing scientific progress, the gradual loss of our core values, extermination or domination by extraterrestrials, and any number of other threats.
  
Existential risks present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an existential risk, so we cannot learn from our mistakes. Because we have no past experience with them, existential risks are [[Black swan|black swans]]. Since an existential disaster could never be recovered from, under many moral systems its cost is not just the people who die in it, but every descendant who would otherwise have been born.
  
Even using past experience to predict the probability of future existential risks raises difficult problems in anthropic reasoning. As Milan M. Ćirković put it: "[W]e cannot [...] expect to find traces of a large catastrophe that occurred yesterday, since it would have preempted our existence today. [...] ''Very destructive events destroy predictability!''"

==Blog posts==
 
*[http://lesswrong.com/lw/10l/intelligence_enhancement_as_existential_risk/ Intelligence enhancement as existential risk mitigation] by [[Roko]]
*[http://lesswrong.com/lw/8f0/existential_risk/ Existential Risk] by [[lukeprog]]
  
==Organizations==

* [http://intelligence.org/ Singularity Institute]
* [http://lifeboat.com/ The Lifeboat Foundation]
  
==Resources==

* [http://www.existential-risk.org/ existential-risk.org]
* [http://www.global-catastrophic-risks.com/ Global Catastrophic Risks]
  
==See also==

*[[Black swan]]
*[[Future]]
  
==References==

*{{Cite journal
|author=Nick Bostrom
|year=2002
|title=Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards
|journal=Journal of Evolution and Technology
|volume=9
|url=http://www.nickbostrom.com/existential/risks.html}}

*{{Cite journal
|author=Eliezer Yudkowsky
|year=2008
|title=Cognitive Biases Potentially Affecting Judgment of Global Risks
|journal=Global Catastrophic Risks
|publisher=Oxford University Press
|url=http://yudkowsky.net/rational/cognitive-biases}} ([http://intelligence.org/Biases.pdf PDF])

*{{cite book
|author=Richard A. Posner
|year=2004
|title=Catastrophe: Risk and Response
|publisher=Oxford University Press
|url=http://books.google.ca/books?id=SDe59lXSrY8C}} ([http://www.avturchin.narod.ru/posner.doc DOC])
  
 
[[Category:Concepts]]
[[Category:Future]]
[[Category:Existential risk]]
