Complexity of value

The thesis that human values have high Kolmogorov complexity - our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules.
 
In general, human choices do compress; someone who wants to survive will, over the course of a lifetime, take many different actions, and pursue many different goals, in order to survive.  In this sense, human choices definitely compress beyond the raw list of actions.
 
But people don't ''just'' want to survive - although you can compress many human activities to that desire, you cannot compress all of human existence into it.  The human equivalents of a utility function, our [[terminal values]], contain many different elements that are not strictly reducible to one another.  William Frankena offered [http://plato.stanford.edu/entries/value-intrinsic-extrinsic/#WhaHasIntVal this list] of things which many cultures and people seem to value (for their own sake rather than strictly for their external consequences):
 
:"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."
 
:"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."
  
The complexity of value is a major theme of [[Eliezer Yudkowsky]]'s writing for two reasons:
  
 
*[[Hollywood Rationality|Caricatures]] of rationalists often have them moved by artificially simplified values - for example, only caring about personal pleasure.  This becomes a template for arguing against rationality:  X is valuable, but rationality says to only care about Y, in which case we could not value X, therefore do not be rational.
*Underestimating the complexity of value leads to underestimating the difficulty of Friendly AI; and there are notable cognitive biases and fallacies which lead people to underestimate this complexity.

==Major posts==
* [http://lesswrong.com/lw/lq/fake_utility_functions/ Fake Utility Functions] describes the seeming fascination that many have with trying to compress morality down to a single principle.  The [http://lesswrong.com/lw/lp/fake_fake_utility_functions/ sequence leading up] to this post tries to explain the cognitive twists whereby people [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ smuggle] all of their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to ''[http://lesswrong.com/lw/kq/fake_justification/ justify using]'' their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].
 
==Other posts==
  
*[http://lesswrong.com/lw/ky/fake_morality/ Fake Morality] by [[Eliezer Yudkowsky]]
*[http://lesswrong.com/lw/1o9/welcome_to_heaven/ Welcome to Heaven] by denisbider
*[http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/ Complexity of Value ≠ Complexity of Outcome] by [http://weidai.com/ Wei Dai]
*[http://lesswrong.com/lw/65w/not_for_the_sake_of_pleasure_alone/ Not for the Sake of Pleasure Alone] by [http://lukeprog.com/ lukeprog]
  
 
==See also==
 
*[[Fun theory]]
*[[Magical categories]]
*[[Friendly Artificial Intelligence]]
*[[Preference]]
  
 
{{featured article}}
 
[[Category:Theses]]
[[Category:Positions]]
[[Category:Evolution]]
[[Category:Values]]
