Value extrapolation

'''Value extrapolation''' is the process of determining the true values of a person. This task is quite challenging due to the [[Complexity of value|complexity of human values]].
 
[[Coherent Extrapolated Volition]] is one proposed method of determining the values of humanity as a whole for a [[friendly AI]] to respect. Others have argued that the [[biases]] of the human mind are so great that there is no meaningful difference between a human value and a human error.
  
Paul Christiano has suggested that a set of extrapolated values may be created using [[WBE]] by running emulated people at high speeds until they settle on a set of values. These people, not having to worry about existential threats, would make better decisions than us. He argues that the threat of [[existential risks]] merits using less-than-perfect value extrapolation. However, this has been criticized as simply passing the problem on.
 
== See Also ==
 
* [[Coherent Aggregated Volition]]
* [[Complexity of value]]
* [[Fragility of value]]
  
== Blog posts ==
 
*[http://lesswrong.com/lw/c1x/extrapolating_values_without_outsourcing/ Extrapolating values without outsourcing]
*[http://lesswrong.com/lw/y3/value_is_fragile/ Value is Fragile]
*[http://lesswrong.com/lw/55n Human errors, human values]
  
== External Links ==
 
*[http://intelligence.org/files/CEV.pdf Coherent Extrapolated Volition]