Basic AI drives

For the most part, ''whatever'' your goals are, choosing in accordance with decision-theoretic desiderata will help you achieve them, and so despite the [[Mind design space|vast diversity of possible minds]], we have theoretical reasons to expect that [[Artificial general intelligence|AIs]] that have undergone substantial self-improvement will tend to share certain features. Steve Omohundro has identified several of these '''Basic AI drives'''. Goal-seeking agents will usually strive to represent their goals as a [[utility function]], prevent [[Wireheading|"counterfeit" utility]], protect themselves and similar agents, and acquire resources.
 
Obviously, AIs whose goals directly contradict these basic AI drives will strive to avoid them.
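
The instrumental-convergence argument above can be made concrete with a toy calculation. The sketch below is only illustrative and is not taken from Omohundro's paper: the plans, goals, success probabilities, and discount factor are all made-up assumptions. It shows that an expected-utility maximizer with any of three unrelated terminal goals ranks a resource-gathering plan highest.

<syntaxhighlight lang="python">
# Toy model (illustrative assumptions only): an expected-utility maximizer
# with different terminal goals still prefers to acquire resources first,
# because resources raise the probability of achieving whatever the goal is.

def p_success(resources):
    """Assumed probability of eventually achieving the terminal goal,
    given how many resource units the agent controls."""
    return min(1.0, 0.2 + 0.1 * resources)

# Candidate plans: (description, resource units gained first, delay in steps).
PLANS = [
    ("pursue the goal immediately", 0, 0),
    ("gather some resources first", 5, 2),
    ("gather many resources first", 8, 4),
]

# Unrelated terminal goals, each assigning some utility to success.
GOALS = {
    "prove theorems": 10.0,
    "make paperclips": 3.0,
    "map the galaxy": 100.0,
}

DISCOUNT = 0.95  # per-step discount, so delay is not free


def expected_utility(utility_of_success, resources_gained, delay):
    return (DISCOUNT ** delay) * p_success(resources_gained) * utility_of_success


if __name__ == "__main__":
    for goal, utility in GOALS.items():
        best = max(PLANS, key=lambda plan: expected_utility(utility, plan[1], plan[2]))
        print(f"goal = {goal!r}: best plan -> {best[0]}")
    # Every goal above selects a resource-gathering plan, illustrating how
    # resource acquisition emerges as an instrumental drive regardless of
    # the terminal goal, unless the goal explicitly counteracts it.
</syntaxhighlight>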
  
 
==See also==
 
==External links==