Suffering risk

Suffering risks (also known as s-risks) are risks of the creation of suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In this sense, many s-risks can be considered a form of existential risk according to Bostrom's original definition, as they threaten to "curtail [humanity's] potential". However, it is often useful to distinguish between risks that threaten to prevent future populations from coming into existence (standard x-risks) and those which would create a large amount of suffering (s-risks).

Although the Machine Intelligence Research Institute and Future of Humanity Institute have investigated strategies to prevent s-risks, the only EA organization with s-risk prevention research as its primary focus is the Foundational Research Institute. Another approach to reducing s-risk is to "expand the moral circle", so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.
== External links ==
 
* [https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-global-priority/ Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)]
* [https://foundational-research.org/s-risks-talk-eag-boston-2017/ Introductory talk on s-risks (FRI)]
* [https://foundational-research.org/risks-of-astronomical-future-suffering/ Risks of Astronomical Future Suffering (FRI)]
* [https://foundational-research.org/files/suffering-focused-ai-safety.pdf Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention (PDF) (FRI)]
* [https://foundational-research.org/artificial-intelligence-and-its-implications-for-future-suffering Artificial Intelligence and Its Implications for Future Suffering (FRI)]
* [https://sentience-politics.org/expanding-moral-circle-reduce-suffering-far-future/ Expanding our moral circle to reduce suffering in the far future (Sentience Politics)]
* [https://sentience-politics.org/philosophy/the-importance-of-the-future/ The Importance of the Far Future (Sentience Politics)]
