Coherent Aggregated Volition

Coherent Aggregated Volition (CAV) is Ben Goertzel's proposed alternative to Eliezer Yudkowsky's Coherent Extrapolated Volition (CEV). Rather than extrapolating what humanity would want, CAV would combine the goals and beliefs of humanity as they are at the present time.

Goertzel regards the "extrapolation" step of CEV as both a distortion of the concept of volition and a source of great uncertainty. If the person whose volition is being extrapolated holds inconsistent views (as humans typically do), there is a wide variety of possible extrapolations. The problem then becomes which of these extrapolated versions to choose, or how to aggregate them, which would be very difficult to do.

Coherent Aggregated Volition is presented as the simpler proposal, intended to be easier to formalize and prototype in the foreseeable future. CAV is not, however, put forward as a solution to the problem of Friendly AI, though Goertzel argues that CEV may not be one either.
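
As a purely illustrative sketch (not Goertzel's actual formalization), one naive way to "aggregate" present-time volitions is to treat each person's goals as a weighted list and average the normalized weights across the population. The Python below assumes this simple averaging scheme, with invented goal names used only for demonstration.

 from collections import defaultdict
 
 def aggregate_volition(individual_goal_weights):
     """Average per-person goal weights into one normalized distribution.
 
     individual_goal_weights: list of dicts mapping goal -> nonnegative weight.
     Returns a dict mapping goal -> aggregated weight that sums to 1.
     """
     totals = defaultdict(float)
     for weights in individual_goal_weights:
         person_total = sum(weights.values())
         if person_total == 0:
             continue  # skip anyone who expressed no preferences
         for goal, weight in weights.items():
             totals[goal] += weight / person_total  # each person counts equally
     grand_total = sum(totals.values())
     if grand_total == 0:
         return {}
     return {goal: w / grand_total for goal, w in totals.items()}
 
 # Hypothetical example: three people with partially overlapping goals.
 population = [
     {"reduce suffering": 3.0, "scientific progress": 1.0},
     {"reduce suffering": 1.0, "personal freedom": 2.0},
     {"scientific progress": 2.0, "personal freedom": 2.0},
 ]
 print(aggregate_volition(population))

This sketch only combines stated present-time preferences; it involves no extrapolation of what anyone "would want if they knew more", which is exactly the step CAV sets aside.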
