Suffering risk
 
'''Suffering risks''' (also known as '''s-risks''') are risks of the creation of suffering on an astronomical scale in the far future, vastly exceeding all suffering that has existed on Earth so far. In this sense, many s-risks can be considered a form of [[existential risk]] under Bostrom's original definition, as they threaten to "curtail [humanity's] potential". However, it is often useful to distinguish between risks that threaten to prevent future populations from coming into existence (extinction risks) and risks that would create a large amount of suffering (s-risks).
  
Although the [[Machine Intelligence Research Institute]] and the [[Future of Humanity Institute]] have investigated strategies to prevent s-risks, the only [[EA]] organization with s-risk prevention research as its primary focus is the [[Foundational Research Institute]] (FRI). Much of FRI's work is on suffering-focused [[AI safety]] and [[crucial considerations]]. Another approach to reducing s-risks is to "expand the moral circle", so that future (post)human civilizations and AIs are less likely to [[instrumental value|instrumentally]] cause suffering to non-human minds, such as animals or digital minds. The [http://www.sentienceinstitute.org/ Sentience Institute] works on this value-spreading problem.
  
 
== See also ==
 
== External links ==