Existential risk

From Lesswrongwiki
Revision as of 05:31, 24 January 2012


An existential risk is the risk of a disaster that would destroy or permanently cripple the entire entity doing the analysis or making the decision, as distinguished from risks of lesser scope or intensity. The entity could be an organization, religion, civilization, culture, ecosystem, or all of humanity, but the term is most commonly used for these latter, larger risks. The human species, or an ecosystem such as the biosphere or a watershed, objectively exists, while the destruction of an organization, religion, civilization, or culture may be considered a change of perceptions: "we" cease to exist, but the individual living things making up "we" continue to exist and form new affiliations.

Examples of potential existential risks to all of humanity include molecular nanotechnology weapons, climate chaos that causes social chaos leading to general nuclear or biological warfare, a perfectly engineered plague, a sufficiently large asteroid impact, and an unFriendly AI (or a Friendly AI that makes a significant error of logic or priority, from humanity's point of view, in how best to preserve life). As with lesser-scoped existential risks, most such scenarios contemplate the survival of tiny numbers of humans who might escape into space or underground, but the eradication of human "civilization as we know it" satisfies most definitions of existential risk, since humans have never lived under such circumstances and might have to become something quite different, emotionally and physically, to survive at all.

Existential risks present a unique challenge because of their irreversible nature. Unlike with lesser risks, we do not have the option of learning from the experience of past disasters: there has been no such disaster, or else there could be no "we" at present to analyze it.

Even using past experience to predict the probability of future existential risks raises difficult problems in anthropic reasoning. As Milan M. Ćirković put it: "[W]e cannot [...] expect to find traces of a large catastrophe that occurred yesterday, since it would have preempted our existence today. [...] Very destructive events destroy predictability!"
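The selection effect Ćirković describes can be illustrated with a toy simulation (the model and all its numbers are illustrative assumptions, not from this article): among many possible world histories, observers only ever arise in the histories where the existential catastrophe never happened, so their own historical record always shows a catastrophe frequency of zero, no matter how high the true risk is.

```python
import random

def surviving_histories(n_worlds, n_periods, p_catastrophe, seed=0):
    """Simulate n_worlds independent histories. A world 'survives' only if
    no existential catastrophe occurs in any period; observers exist only
    in surviving worlds. Return the number of surviving worlds and the
    per-period catastrophe frequency those observers would record."""
    rng = random.Random(seed)
    survivors = 0
    observed_events = 0
    for _ in range(n_worlds):
        events = sum(rng.random() < p_catastrophe for _ in range(n_periods))
        if events == 0:  # observers exist only where the catastrophe never struck
            survivors += 1
            observed_events += events  # always adds 0, by construction
    naive_estimate = observed_events / (survivors * n_periods) if survivors else None
    return survivors, naive_estimate

survivors, estimate = surviving_histories(100_000, 100, 0.01)
print(survivors)  # roughly 100_000 * 0.99**100, i.e. around 36,600 worlds
print(estimate)   # 0.0: the survivors' record contains no catastrophes at all
```

Even though the true per-period risk here is 1%, every observer's naive frequency estimate is exactly zero, which is one concrete sense in which "very destructive events destroy predictability".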

Since existential disasters cannot be recovered from, under many moral systems they matter for the rest of time: their cost is not just the lives of the people who die in the disaster, but also the lives of all their possible future descendants.
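The scale of that claim can be made concrete with a back-of-the-envelope calculation (every figure below is an illustrative assumption, not a claim from this article): if humanity could otherwise persist for many more generations at roughly its current population, the forgone future lives dwarf the lives lost directly.

```python
# Illustrative arithmetic only; all inputs are assumptions chosen for round numbers.
current_population = 8e9          # rough number of people alive today
generations_remaining = 50_000    # assume humanity could otherwise last this long
people_per_generation = 8e9       # assume population stays near today's level

lives_lost_directly = current_population
future_lives_forgone = generations_remaining * people_per_generation

ratio = future_lives_forgone / lives_lost_directly
print(ratio)  # 50000.0: each direct death stands in for ~50,000 forgone future lives
```

Under these (deliberately conservative, Earth-bound) assumptions the forgone future accounts for tens of thousands of times more lives than the disaster itself; more expansive assumptions about humanity's possible future make the ratio far larger still.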

Blog posts

Organisations

A list of organisations and charities concerned with existential risk research.

Resources

See also

References