Difference between revisions of "Utility"

From Lesswrongwiki
{{wikilink}}
'''Utility''' is a generic term used to specify how well an action's results accord with an agent's preferences. Its unit – the '''util''' or '''utilon''' – is an abstract, arbitrary measure that assumes a concrete value only when the agent's preferences have been determined through a [[utility function]]. Utils are not to be confused with [[hedons]], units of subjective pleasure, or [[fuzzies]], the warm feelings associated with the belief that one has accomplished good.
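As a toy illustration (the actions, outcomes, and numbers below are hypothetical, chosen only for the example), a utility function can be written as ordinary code: it assigns utils to outcomes, and an agent prefers whichever action has the highest expected utility.

```python
def utility(outcome):
    """Assign an arbitrary but consistent number of utils to each outcome."""
    utils = {"win": 10.0, "draw": 2.0, "lose": -5.0}
    return utils[outcome]

def expected_utility(action, outcome_probs):
    """Expected utils of an action, weighting each outcome by its probability."""
    return sum(p * utility(o) for o, p in outcome_probs[action].items())

# Two hypothetical actions with different outcome distributions.
outcome_probs = {
    "safe":  {"win": 0.2, "draw": 0.7, "lose": 0.1},
    "risky": {"win": 0.5, "draw": 0.0, "lose": 0.5},
}

# The agent's choice: the action maximizing expected utility.
best = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs))
```

Note that the util values are arbitrary as numbers; only the ordering and relative spacing they induce over actions matters, which is why utility becomes concrete only once such a function is fixed.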
  
It is a concept rooted in economics and [[game theory]], where it measures how much a certain commodity increases welfare. One common denominator for utility, and the clearest example, especially in economics, is money: the price a person is willing to pay for the satisfaction of a preference; in most contexts money can be seen as [http://lesswrong.com/lw/65/money_the_unit_of_caring/ the unit of caring]. Although it has been argued that utility is hard to quantify when dealing with human agents, it is widely used when designing an AI capable of planning.
[[Utilitarianism]] is an ethical theory proposing that the appropriate and morally correct course of action in any given situation is the one leading to the maximum utility.
==Further Reading & References==
*[http://elsa.berkeley.edu/~botond/mistakeschicago.pdf Mistakes in Choice-Based Welfare Analysis] by Botond Köszegi and Matthew Rabin
*Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2
  
 
==Blog posts==
 
*[http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/ Purchase Fuzzies and Utilons Separately]
 
*[http://lesswrong.com/lw/zv/post_your_utility_function/ Post your Utility Function]
 
==See also==
 
*[[The utility function is not up for grabs]]
*[[Preference]]
*[[Shut up and multiply]]
*[[Game theory]]
*[[Fuzzies]]
*[[Hedon]]
  
 
[[Category:Concepts]]
[[Category:Jargon]]
[[Category:Values]]

Revision as of 00:48, 20 September 2012
