Suffering risk

'''Suffering risks''' (also known as '''s-risks''') are risks of the creation of suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In this sense, many s-risks can be considered a form of [[existential risk]] according to Bostrom's original definition, as they threaten to "curtail [humanity's] potential". However, it is often useful to distinguish between risks that threaten to prevent future populations from coming into existence (extinction risks) and those which would create a large amount of suffering (s-risks).
 
 
Although the [[Machine Intelligence Research Institute]] and [[Future of Humanity Institute]] have investigated strategies to prevent s-risks, the only [[EA]] organization with s-risk prevention research as its primary focus is the [[Foundational Research Institute]]. Much of FRI's work is on suffering-focused [[AI safety]] and [[crucial considerations]]. Another approach to reducing s-risk is to "expand the moral circle", so that future (post)human civilizations and AI are less likely to [[instrumental value|instrumentally]] cause suffering to non-human minds such as animals or digital sentience. [http://www.sentienceinstitute.org/ Sentience Institute] works on this value-spreading problem.
  
 
== See also ==

* [[Foundational Research Institute]]
* [[Existential risk]]
* [[Abolitionism]]
* [[Mind crime]]
* [[Utilitarianism]], [[Hedonism]]
  
 
== External links ==

* [https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-global-priority/ Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)]
* [https://foundational-research.org/s-risks-talk-eag-boston-2017/ Introductory talk on s-risks (FRI)]
* [https://foundational-research.org/risks-of-astronomical-future-suffering/ Risks of Astronomical Future Suffering (FRI)]
* [https://foundational-research.org/files/suffering-focused-ai-safety.pdf Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention (PDF, FRI)]
* [https://foundational-research.org/artificial-intelligence-and-its-implications-for-future-suffering Artificial Intelligence and Its Implications for Future Suffering (FRI)]
* [https://sentience-politics.org/expanding-moral-circle-reduce-suffering-far-future/ Expanding our moral circle to reduce suffering in the far future (Sentience Politics)]
* [https://sentience-politics.org/philosophy/the-importance-of-the-future/ The Importance of the Far Future (Sentience Politics)]
