{{wikilink|Rational agent}}
 
A '''rational agent''' is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its expected utility. Such agents are also referred to as goal-seeking. The concept of a '''rational agent''' is used in economics, [[game theory]], [[decision theory]], and artificial intelligence.
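
In decision-theoretic terms this is expected utility maximization. As a sketch (standard notation, not anything defined in this article): if the agent's beliefs assign probability <math>P(o \mid a)</math> to outcome <math>o</math> when it takes action <math>a</math>, and <math>U</math> is its utility function, then it chooses

:<math>a^* = \arg\max_{a \in A} \sum_{o \in O} P(o \mid a) \, U(o),</math>

where <math>A</math> is its set of available actions and <math>O</math> the set of possible outcomes.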
  
More generally, an '''agent''' is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.<ref>Russell, S. & Norvig, P. (2003). ''Artificial Intelligence: A Modern Approach''. Second Edition. Page 32.</ref>
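
To make the sensors-and-actuators picture concrete, here is a minimal Python sketch of a table-driven agent function, one of the simple agent designs Russell & Norvig describe. The percept and action strings and the toy vacuum-cleaner example are illustrative assumptions, not part of this article.
<pre>
# Minimal sketch of "an agent perceives through sensors and acts through actuators".
# Percepts and actions are plain strings here; a real agent would use richer types.

from typing import Callable, Dict, List

Percept = str
Action = str


def table_driven_agent(table: Dict[Percept, Action], default: Action) -> Callable[[Percept], Action]:
    """Build an agent function that looks up an action for each percept."""
    def agent(percept: Percept) -> Action:
        # Sensors deliver a percept; the returned action is what the actuators carry out.
        return table.get(percept, default)
    return agent


def run(agent: Callable[[Percept], Action], percepts: List[Percept]) -> List[Action]:
    """Feed a fixed percept sequence to the agent and collect the chosen actions."""
    return [agent(p) for p in percepts]


if __name__ == "__main__":
    # Toy vacuum-cleaner world: suck when the square is dirty, otherwise move on.
    vacuum = table_driven_agent({"dirty": "suck", "clean": "move"}, default="move")
    print(run(vacuum, ["dirty", "clean", "dirty"]))  # ['suck', 'move', 'suck']
</pre>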
  
==Humans as agents==
The first use of the concept of an agent was to model humans in economics. While humans undoubtedly model their surroundings, consider multiple actions, et cetera, they often do not do so in the most rational way: many documented biases compromise the human process of reasoning. For a thorough review of these, see ''Thinking, Fast and Slow'' by Daniel Kahneman.

==AIs as agents==
There has been much discussion on LessWrong as to whether certain [[AGI]] designs, such as [[Oracle AI|oracles]] and [[Tool AI|tool AIs]], can be made into mere tools or whether they will necessarily be agents which attempt to actively carry out their goals. Any mind that actively engages in goal-directed behavior is [[Unfriendly AI|potentially dangerous]], since considerations such as [[Basic AI drives|basic AI drives]] may cause behavior which conflicts with humanity's values. Because AIs which are agents will likely dramatically alter the world, finding non-agent designs is a potential way to achieve the [[Singularity]] without encountering an Unfriendly AI.

In [http://lesswrong.com/lw/tj/dreams_of_friendliness/ Dreams of Friendliness] and in [http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/ Reply to Holden on Tool AI], [[Eliezer Yudkowsky]] argues that, since all intelligences select correct beliefs from the much larger space of incorrect beliefs, they are necessarily agents.

==References==
<references />
 
==See also==
* [[Tool AI]]
* [[Oracle AI]]
  
 
==Blog posts==
 
*[http://lesswrong.com/lw/5i8/the_power_of_agency/ The Power of Agency]
 