'''Value extrapolation''' can be defined as an account of what human values, morals, and desires would be under “ideal circumstances”, that is, with full access to information about our motivations, their origins, and their goals. Determining a person's true values in this way is challenging, owing to the [[Complexity of value|complexity of human values]], but such extrapolated values have been proposed as the model on top of which machine ethics should be developed.
  
It is well known that the true origins of our moral evaluations and motivations lie outside our conscious reach. Their development has produced desires we wish did not exist or could suppress, revealing our capacity for “second-order desires”, such as wishing not to wish to eat so much cake. A developed society should therefore try to become aware of, and informed about, the roots and paths that lead to its current values. Understanding the unconscious cognitive processes that give rise to them could help us shift to a set of values intentionally chosen through a state of “reflective equilibrium”.
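As an illustration only, the reflective process just described can be caricatured as iterating a revision step until the value set stops changing. The Python sketch below is a toy model built on loud assumptions: values are bare strings, a <code>not:</code> prefix marks a hypothetical second-order desire, and <code>reflect()</code> stands in for the unknown process of revising values in light of full information about their origins; nothing here is an actual proposal from the literature.

<pre>
# Toy model: values are strings; "not:X" encodes the second-order
# desire not to desire X. reflect() is a hypothetical stand-in for
# revising values in light of full information about their origins.

def reflect(values: frozenset) -> frozenset:
    """One revision step: drop any desire that a second-order
    desire ("not:<desire>") disavows."""
    disavowed = {v[len("not:"):] for v in values if v.startswith("not:")}
    return frozenset(v for v in values if v not in disavowed)

def extrapolate(values: frozenset, max_steps: int = 100) -> frozenset:
    """Iterate reflection to a fixed point, a toy stand-in for
    reaching "reflective equilibrium"."""
    for _ in range(max_steps):
        revised = reflect(values)
        if revised == values:  # no further revisions: equilibrium
            return values
        values = revised
    return values              # give up after max_steps

# A first-order desire for cake, the second-order desire not to
# desire cake, and an uncontested value:
print(extrapolate(frozenset({"cake", "not:cake", "health"})))
# e.g. frozenset({'not:cake', 'health'})
</pre>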
  
Paul Christiano has suggested that a set of extrapolated values could be created using [[WBE]], running emulated people at high speeds until they settle on a set of values. These people, not having to worry about existential threats, would make better decisions than we do. He argues that the threat of [[existential risks]] merits using a less than perfect method of value extrapolation. This proposal has been criticized as simply passing the problem on, however.
Yudkowsky, through [[Coherent Extrapolated Volition]], has proposed extrapolating the values of humanity as a whole for a [[friendly AI]] to respect, arguing that such an extrapolation of our motivations and goals would have advantages when developing the first [[seed AI]]. In a complementary way, value extrapolation seems useful when thinking about machine ethics, offering:
::* the use of real human values after the reflective process;
::* faster AI moral progress;
::* the dissolution of contradictions between preferences;
::* the simplification of human values through the elimination of artifacts;
::* a possible solution for integrating human goals into AI systems;
::* the convergence of different people's values (a toy sketch follows this list).
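The last point can be given a similarly toy illustration: extrapolate each person's values separately, then keep only what survives reflection for everyone. The intersection rule below is an assumption chosen for brevity, not Yudkowsky's or Goertzel's actual aggregation method; <code>extrapolate()</code> is the sketch from the previous section.

<pre>
from functools import reduce

# Assumes reflect() and extrapolate() from the sketch above.

def converge(people: list) -> frozenset:
    """Intersect everyone's extrapolated values: the shared core
    that survives reflection for every individual (a naive, purely
    illustrative notion of "convergence")."""
    extrapolated = [extrapolate(values) for values in people]
    return reduce(frozenset.intersection, extrapolated)

alice = frozenset({"cake", "not:cake", "health", "fairness"})
bob = frozenset({"health", "fairness", "novelty"})
print(converge([alice, bob]))  # e.g. frozenset({'health', 'fairness'})
</pre>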
  
== Further Reading & References ==

*[http://intelligence.org/files/CEV.pdf Coherent Extrapolated Volition] by Eliezer Yudkowsky
*[http://intelligence.org/files/SaME.pdf The Singularity and Machine Ethics] by Luke Muehlhauser and Louie Helm
*[http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/ “Indirect Normativity” Write-up] by Paul Christiano ([http://lesswrong.com/lw/c0k/formalizing_value_extrapolation/ LW post and comments])
*[http://multiverseaccordingtoben.blogspot.ca/2010/03/coherent-aggregated-volition-toward.html Coherent Aggregated Volition: A Method for Deriving Goal System Content for Advanced, Beneficial AGIs] by Ben Goertzel
*[http://lesswrong.com/lw/c1x/extrapolating_values_without_outsourcing/ Extrapolating values without outsourcing]
*[http://lesswrong.com/lw/y3/value_is_fragile/ Value is Fragile] by Eliezer Yudkowsky
*[http://lesswrong.com/lw/55n Human errors, human values]

== See Also ==

* [[Coherent Extrapolated Volition]]
* [[Coherent Aggregated Volition]]
* [[Complexity of value]]
* [[Fragility of value]]
