Difference between revisions of "Suffering risk"
Revision as of 23:09, 20 June 2017
Suffering risks (also known as s-risks) are risks of the creation of suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In this sense, many s-risks can be considered a form of existential risk according to Bostrom's original definition, as they threaten to "curtail [humanity's] potential". However, it is often useful to distinguish between risks that threaten to prevent future populations from coming into existence (standard x-risks) and risks that would create a large amount of suffering (s-risks).
Although the Machine Intelligence Research Institute and the Future of Humanity Institute have investigated strategies to prevent s-risks, the only EA organization with s-risk prevention research as its primary focus is the Foundational Research Institute. Another approach to reducing s-risk is to "expand the moral circle", so that future (post)human civilizations and AI systems are less likely to instrumentally cause suffering to non-human minds, such as animals or digital minds. The [http://www.sentienceinstitute.org/ Sentience Institute] works on this value-spreading problem.

== External links ==
* [https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-global-priority/ Reducing Risks of Astronomical Suffering: A Neglected Global Priority]
* [https://foundational-research.org/s-risks-talk-eag-boston-2017/ S-risks talk (EA Global Boston 2017)]