Utility

'''Utility''' is a generic term used to quantify how well an action’s results satisfy an agent’s preferences. Its unit – the '''util''' or '''utilon''' – is an abstract, arbitrary measure that takes on a concrete value only once the agent’s preferences have been captured by a [[utility function]].
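
For illustration, a utility function over a small set of outcomes can be written down explicitly, and preferences then follow from comparing utilities. The following is a minimal sketch; the outcomes and the util values assigned to them are hypothetical:

<pre>
# Hypothetical utility function: an explicit mapping from outcomes to utils.
# The outcomes and values here are illustrative assumptions, not measured data.
utility = {
    "apple": 1.0,
    "banana": 2.5,
    "nothing": 0.0,
}

def prefers(a, b):
    """The agent prefers outcome a to outcome b iff a is assigned more utils."""
    return utility[a] > utility[b]

print(prefers("banana", "apple"))  # True: 2.5 utils > 1.0 util
</pre>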
  
Utility is a concept rooted in economics and [[game theory]], where it measures how much a certain commodity increases welfare. One of the clearest examples is money: the price a person is willing to pay for something can be taken as a measure of the strength of their preference for it. A willingness to pay a high sum for something thus implies a strong desire for it, i.e. it has high utility for that person.
  
[[Utilitarianism]] is a moral philosophy advocating actions that bring the greatest welfare to the greatest number of agents (generally humans) involved.

Although it has been argued that utility is hard to quantify when dealing with humans, mainly because of the complexity of the preferences and motivations involved, utility-based agents are quite common in AI systems. Examples include [http://u.cs.biu.ac.il/~meshulr1/meshulam05.pdf navigation systems] and [http://www.diee.unica.it/biomed05/pdf/W22-104.pdf automated resource allocation models], where the agent has to choose the action with the highest expected utility.
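
The decision rule behind such agents can be sketched directly: the expected utility of an action is the probability-weighted sum of the utilities of its possible outcomes, EU(a) = Σ P(s|a)·U(s), and the agent picks whichever action maximizes it. The sketch below is illustrative only; the actions, probabilities, and utilities are hypothetical assumptions, not taken from the linked papers:

<pre>
# Expected-utility action selection: EU(a) = sum over outcomes s of P(s|a) * U(s).
# The actions, probabilities, and utilities below are hypothetical.
actions = {
    # action: list of (probability, utility of outcome) pairs
    "take_highway": [(0.8, 10.0), (0.2, -5.0)],  # usually fast, small risk of a jam
    "take_backroad": [(1.0, 6.0)],               # slower but reliable
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# The agent chooses the action that maximizes expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take_highway: 0.8*10.0 + 0.2*(-5.0) = 7.0 > 6.0
</pre>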
  
 
==Further Reading & References==

*Köszegi, Botond; Rabin, Matthew. "Mistakes in Choice-Based Welfare Analysis".
*Russell, Stuart J.; Norvig, Peter (2003). ''Artificial Intelligence: A Modern Approach'' (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.

==Blog posts==

==See also==