Preference

{{wikilink}}
 
'''Preference''' is the normative side of [[optimization]]. Preference is roughly equivalent to goals and values, but the concept refers to the sum total of all of an agent's goals and dispositions rather than to the individual components. For example, for an agent that runs a [[decision theory]] based on [[expected utility]] maximization, preference must specify both a [[prior]] and a [[utility function]].
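
As a rough sketch of why both components are needed (the notation here is illustrative, not part of the original article): given a set of actions <math>A</math>, a set of possible states <math>S</math>, a prior <math>P(s)</math>, and a utility function <math>U</math>, an expected utility maximizer chooses

<math>a^* = \arg\max_{a \in A} \sum_{s \in S} P(s)\, U(a, s).</math>

Changing either <math>P</math> or <math>U</math> can change <math>a^*</math>, so the agent's preference is underspecified until both are fixed.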
 
==Blog posts==
*[http://lesswrong.com/lw/15c/would_your_real_preferences_please_stand_up Would Your Real Preferences Please Stand Up?] by [[Yvain]]
*[http://lesswrong.com/lw/6oo/to_what_degree_do_we_have_goals/ To What Degree Do We Have Goals?] by Yvain
*[http://lesswrong.com/lw/6r6/tendencies_in_reflective_equilibrium/ Tendencies in Reflective Equilibrium] by Yvain
*[http://lesswrong.com/r/lesswrong/lw/8q8/urges_vs_goals_how_to_use_human_hardware_to/ Urges vs. Goals: The analogy to anticipation and belief] by [[Anna Salamon]]
  
 
==See also==
 
*[[Utility function]], [[Decision theory]]
*[[Optimization process]]
*[[Akrasia]]
*[[Corrupted hardware]]
  
 
{{stub}}
 
[[Category:Decision theory]]
[[Category:Psychology]]
