Agent

From Lesswrongwiki
Revision as of 20:05, 28 June 2012 by Alex Altair (talk | contribs) (Created page with "{{wikilink|Rational agent}} An '''agent''' is an entity which has preferences, forms beliefs about its environment, evaluates the consequences of possible actions, and then take...")

An agent is an entity which has preferences, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which best satisfies its preferences. Agents are also described as goal-seeking. The concept of a rational agent is used in economics, game theory, decision theory, and artificial intelligence.
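The decision loop described above can be sketched in a few lines. This is a minimal illustration, not a definition from the literature; the function names, the toy environment, and the utility function are all assumptions made for the example.

```python
def choose_action(actions, predict, utility):
    """Pick the action whose predicted outcome maximizes utility.

    actions  -- iterable of available actions
    predict  -- the agent's belief: maps an action to its expected outcome
    utility  -- the agent's preferences: scores an outcome numerically
    """
    return max(actions, key=lambda action: utility(predict(action)))

# Toy environment (hypothetical): each action leads to a known payoff,
# and the agent simply prefers larger payoffs.
payoffs = {"wait": 0, "work": 10, "play": 5}
best = choose_action(payoffs,
                     predict=lambda action: payoffs[action],
                     utility=lambda outcome: outcome)
# best == "work"
```

Separating beliefs (`predict`) from preferences (`utility`) mirrors the decomposition used in decision theory: the same preferences combined with different beliefs can select different actions.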

Humans as agents

The concept of an agent was first used to model humans in economics. While humans undoubtedly model their surroundings, consider multiple actions, et cetera, they often do not do so in the most rational way. Many documented cognitive biases compromise human reasoning. For a thorough review of these, see Thinking, Fast and Slow by Daniel Kahneman.

AIs as agents

There is much discussion on LessWrong as to whether certain AI designs, such as oracles and tool AI, will be agents. In Dreams of Friendliness, Eliezer Yudkowsky argues that, since any intelligence must select correct beliefs from the much larger space of incorrect beliefs, it effectively has goals. AIs which are agents are likely to dramatically alter the world, and agents whose goals are not aligned with human values are likely to be Unfriendly AIs. Designing non-agent AIs is therefore a potential way to achieve the Singularity without encountering UFAI.

Blog posts