Value extrapolation

From Lesswrongwiki
Revision as of 12:59, 21 July 2012 by TerminalAwareness (talk | contribs) (Bit on a couple suggestions)

Value extrapolation is the process of determining the true values of a person. This task is quite challenging, due to the complexity of human values.

Coherent Extrapolated Volition is one proposed method of determining the values of humanity as a whole for a friendly AI to respect. Coherent Aggregated Volition, a proposal by Ben Goertzel, is similar to CEV but aggregates and averages the values humans hold today rather than extrapolating them. It is intended to be simpler and less controversial than CEV, but it would resolve none of the inconsistencies in our current values, instead cementing them permanently.

Phil Goetz has argued that the biases of the human mind are so great that there is no meaningful difference between a human value and a human error, though he believes this can be partially avoided by accepting an arbitrary ethical base. Mitchell Porter argues more optimistically that as AGI skepticism fades, the field of machine ethics will experience huge growth, and that the public will engage with the problem and develop an aggregate set of values with which to program an AGI.

Paul Christiano has suggested that a set of extrapolated values could be created using WBE, by running emulated people at high speeds until they settle on a set of values. These people, freed from worrying about existential threats, would make better decisions than we would. He argues that the threat of existential risks justifies using less-than-perfect value extrapolation. This has been criticized, however, as simply passing the problem on.
