Difference between revisions of "Preference"

{{wikilink}}
'''Preference''' is the normative side of [[optimization]]. Preference is roughly equivalent to goals and values, but the concept refers more to the sum total of all of an agent's goals and dispositions than to the individual components. For example, for an agent that runs a [[decision theory]] based on [[expected utility]] maximization, preference should specify both a [[prior]] and a [[utility function]].
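As a minimal sketch of this last point (all names and numbers below are hypothetical illustrations, not from this article), an expected utility maximizer's preference can be specified as a prior over world states together with a utility function over action-state pairs; the agent then prefers whichever action maximizes expected utility:

```python
# Sketch: preference = prior + utility function (illustrative values only).

# Prior: the agent's probability for each world state.
prior = {"rain": 0.3, "sun": 0.7}

# Utility function: payoff for each (action, state) pair.
utility = {
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.4,
    ("no_umbrella", "rain"): 0.0, ("no_umbrella", "sun"): 1.0,
}

def expected_utility(action):
    # Sum utility over states, weighted by the prior.
    return sum(prior[s] * utility[(action, s)] for s in prior)

# The agent's preference between actions is fixed by these two components.
best = max(["umbrella", "no_umbrella"], key=expected_utility)
```

Here the prior and the utility function jointly determine the preference: changing either one can reverse which action the agent prefers.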
'''Preference''' is usually conceptualized as a set of attitudes or evaluations made by a subject or agent towards a specific object or group of objects. These attitudes can vary in their intensity and valence, directly influencing the decision-making process, both implicitly and explicitly.
  
Although preferences are typically studied by the social sciences, it has been proposed that AI offers a more robust set of methods for dealing with them. These methods can be divided into several steps:

*Preference acquisition: extraction of preferences from a user through an interactive learning system, e.g. a question-answer process.
*Preference modeling: after extraction, the goal is to create a mathematical model expressing the preferences, taking into account their properties, such as the transitivity of relations.
*Preference representation: with a robust model of preferences, it becomes necessary to develop a symbolic system to represent them - a preference representation language.
*Preference reasoning: finally, having represented a user's or agent's preferences, it is possible to mine the data for new insights and knowledge. This could be used, for instance, to aggregate users based on preferences or as biases in decision processes and game theory scenarios.
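The steps above can be sketched end to end in miniature; all names and data here are hypothetical illustrations, not part of any real system:

```python
# Toy sketch of the acquisition -> modeling -> reasoning pipeline.

# Acquisition: answers from an interactive question-answer session,
# each pair (a, b) meaning "a is preferred to b".
answers = [("tea", "coffee"), ("coffee", "water")]

# Modeling/representation: store the preferences as a binary relation
# and close it under transitivity (a > b and b > c implies a > c).
prefers = set(answers)
changed = True
while changed:
    changed = False
    for a, b in list(prefers):
        for c, d in list(prefers):
            if b == c and (a, d) not in prefers:
                prefers.add((a, d))
                changed = True

# Reasoning: the relation now contains knowledge the user was never
# asked about directly, e.g. that tea is preferred to water.
derived = ("tea", "water") in prefers
```

Real systems use richer models (e.g. utility functions or CP-nets rather than a bare relation), but the division of labor between the four steps is the same.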
This sequential chain of steps can be particularly useful when dealing with [[Coherent Extrapolated Volition]], as a way of systematically exploring an agent's goals and motivations.
==Further Reading & References==
  
 
*[http://lesswrong.com/lw/15c/would_your_real_preferences_please_stand_up Would Your Real Preferences Please Stand Up?] by [[Yvain]]
*[http://lesswrong.com/lw/2tq/notion_of_preference_in_ambient_control/ Notion of Preference in Ambient Control] by [[Vladimir Nesov]]
*[http://lesswrong.com/lw/6oo/to_what_degree_do_we_have_goals/ To What Degree Do We Have Goals?] by Yvain
*[http://lesswrong.com/lw/6r6/tendencies_in_reflective_equilibrium/ Tendencies in Reflective Equilibrium] by Yvain
*[http://lesswrong.com/lw/a73/a_brief_tutorial_on_preferences_in_ai/ A brief tutorial on preferences in AI] by Luke Muehlhauser
*[http://lesswrong.com/r/lesswrong/lw/8q8/urges_vs_goals_how_to_use_human_hardware_to/ Urges vs. Goals: The analogy to anticipation and belief] by [[Anna Salamon]]
 
  
 
==See also==

Revision as of 01:41, 17 September 2012
