{{wikilink}}
An '''existential risk''' is a risk that poses permanent, large-scale negative consequences to humanity, consequences which could never be undone. In his seminal paper on the subject,<ref name="exist1">Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". ''Journal of Evolution and Technology'', Vol. 9, March 2002. Available at: http://www.nickbostrom.com/existential/risks.pdf</ref> [[Nick Bostrom]] defined an existential risk as:
  
 
<blockquote>One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.</blockquote>  
 
  
The total negative impact of an existential risk is among the greatest negative expected utilities known. Such an event would not only annihilate the life we value on Earth, but would also severely damage the potential of all Earth-originating intelligent life. The negative [[expected utility]] could therefore amount to the total of potential future lives that would never be realized. A rough and conservative calculation<ref name="exist2">Bostrom, Nick (2012). "Existential Risk Reduction as the Most Important Task for Humanity". ''Global Policy'', forthcoming, 2012. Available at: http://www.existential-risk.org/concept.pdf</ref> gives a total of 10<sup>54</sup> potential future human lives, smarter, happier and kinder than we are. Hence, almost no other task would produce as much positive [[expected utility]] as existential risk reduction.
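
As a back-of-the-envelope illustration of that claim, the sketch below multiplies a hypothetical, purely illustrative reduction in extinction probability by the 10<sup>54</sup> estimate and compares the result with an equally hypothetical benchmark of saving a billion people alive today; only the 10<sup>54</sup> figure comes from the cited paper.

<pre>
# Toy expected-value comparison. Only the 10**54 figure comes from
# Bostrom's estimate; every other number is an illustrative assumption.

potential_future_lives = 10 ** 54    # Bostrom's conservative estimate of potential future lives
risk_reduction = 1e-9                # hypothetical: lower extinction probability by one part in a billion
benchmark_lives_saved_today = 1e9    # hypothetical benchmark: saving a billion people alive today

expected_future_lives = risk_reduction * potential_future_lives

print(f"Expected future lives from the risk reduction: {expected_future_lives:.1e}")
print(f"Benchmark, lives saved today:                  {benchmark_lives_saved_today:.1e}")
print(f"Ratio:                                         {expected_future_lives / benchmark_lives_saved_today:.1e}")
</pre>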

Bostrom<ref name="exist1" /> proposes a series of classifications for existential risks:
 
* Bangs - Earthly intelligent life is extinguished relatively suddenly by any cause; the prototypical end of humanity. Examples of bangs include deliberate or accidental misuse of nanotechnology, nuclear holocaust, [[Simulation argument|the end of our simulation]], or an [[unfriendly AI]].
* Crunches - The potential humanity had to enhance itself indefinitely is forever eliminated, although humanity continues. Possible crunches include the exhaustion of resources, social or governmental pressure ending technological development, and even future technological development proving an unsurpassable challenge before the creation of a [[superintelligence]].
* Shrieks - Humanity enhances itself, but explores only a narrow portion of its desirable possibilities. As the criteria for desirability haven't been defined yet, this category is mainly undefined. However, a flawed friendly AI incorrectly interpreting our values, a superhuman upload deciding its own values and imposing them on the rest of humanity, and an intolerant government outlawing social progress would certainly qualify.
* Whimpers - Though humanity endures, only a fraction of our potential is ever achieved. Spread across the galaxy and expanding at near light-speed, we might find ourselves doomed by our own or another being's catastrophic physics experimentation, destroying reality at light-speed. A prolonged galactic war leading to our extinction or severe limitation would also be a whimper. More darkly, humanity might develop until its [[Complexity of value|values]] were disjoint with ours today, making their civilization worthless by present values.
  
Existential risks present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an existential risk, so we cannot learn from our mistakes. They are also subject to strong [[Observation selection effect|observational selection effects]]<ref name="exist3">Bostrom, Nick; Sandberg, Anders & Ćirković, Milan (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks". ''Risk Analysis'', Vol. 30, No. 10: 1495-1506.</ref>. One cannot estimate their future probability from the historical record: in [[Bayesian probability|Bayesian]] terms, the conditional probability of observing a past existential catastrophe given our present existence is always 0, no matter how high the underlying risk really is. Indirect estimates have to be used instead, such as evidence about possible existential catastrophes elsewhere. A high probability of existential risk could also act as a [[Great Filter]], explaining why there is no evidence of space colonization.
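
The point about past evidence can be made concrete with a small Monte Carlo sketch (a toy model with made-up numbers, not taken from the cited paper): among the worlds whose observers survive to examine their own history, the observed number of past extinction-level catastrophes is zero no matter how high the true hazard rate is.

<pre>
import random

# Toy Monte Carlo model of the observation selection effect (illustrative
# numbers, not taken from the cited paper). Each "world" faces a fixed
# per-century chance of an extinction-level catastrophe; only worlds with
# no catastrophe still have observers who can inspect their own history.

random.seed(0)
TRUE_RISK = 0.10      # assumed true per-century extinction probability
CENTURIES = 20
WORLDS = 100_000

survivors = 0
catastrophes_in_survivor_records = 0
for _ in range(WORLDS):
    catastrophes = sum(random.random() < TRUE_RISK for _ in range(CENTURIES))
    if catastrophes == 0:            # observers exist only if none occurred
        survivors += 1
        catastrophes_in_survivor_records += catastrophes   # necessarily adds 0

naive_estimate = catastrophes_in_survivor_records / (survivors * CENTURIES)
print(f"true per-century risk:       {TRUE_RISK}")
print(f"surviving worlds:            {survivors} / {WORLDS}")
print(f"survivors' naive estimate:   {naive_estimate}")    # always 0.0
</pre>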
  
 
==Blog posts==
 
*[http://lesswrong.com/lw/10l/intelligence_enhancement_as_existential_risk/ Intelligence enhancement as existential risk mitigation] by [[Roko]]
*[http://lesswrong.com/lw/12h/our_society_lacks_good_selfpreservation_mechanisms/ Our society lacks good self-preservation mechanisms] by [[Roko]]
  
 
==Organizations==
 
* [http://intelligence.org/ Singularity Institute]
* [http://www.fhi.ox.ac.uk/ The Future of Humanity Institute]
* [http://www.leverageresearch.org/ Leverage Research]
* [http://lifeboat.com/ The Lifeboat Foundation]
 
==Resources==
 
 
* [http://www.existential-risk.org/ existential-risk.org]
* [http://sethbaum.com/research/gcr/ Global Catastrophic Risk Research Page]
* [http://www.global-catastrophic-risks.com/ Global Catastrophic Risks]
 
 
==References==
 
{{Reflist|2}}

*{{Cite book
|title=Global Catastrophic Risks
|editor=Nick Bostrom, Milan M. Ćirković
|year=2008
|publisher=Oxford University Press
|url=http://books.google.ca/books?id=SDe59lXSrY8C}} ([http://www.avturchin.narod.ru/posner.doc DOC])

==See also==
*[[Great Filter]]
*[[Technological singularity]], [[intelligence explosion]]
*[[Unfriendly AI]]
*[[Absurdity bias]]
*[[Future]]
  
 
[[Category:Concepts]]
[[Category:Future]]
[[Category:Existential risk]]
