Basic AI drives

From Lesswrongwiki

Revision as of 08:00, 31 December 2011
For the most part, whatever your goals are, choosing in accordance with decision-theoretic desiderata will help you achieve them. So despite the vast diversity of possible minds, we have theoretical reasons to expect that AIs that have undergone substantial self-improvement will tend to share certain features. Steve Omohundro has identified several of these basic AI drives: goal-seeking agents will usually strive to represent their goals as a utility function, prevent "counterfeit" utility, protect themselves and similar agents, and acquire resources.

Of course, an AI whose goals directly contradict one of these basic drives will strive to avoid that drive rather than exhibit it.
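The first drive — representing goals as a utility function and choosing by expected utility — can be illustrated with a toy sketch. The action names and payoffs below are invented for illustration and are not from Omohundro's paper:

```python
# Toy illustration: an agent that encodes its goal as a utility function
# over outcomes and picks the action with the highest expected utility.

def expected_utility(lottery, utility):
    """Expected utility of a lottery, given as (probability, outcome) pairs."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def best_action(actions, utility):
    """Pick the action whose outcome lottery maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Hypothetical goal: outcomes are numbers of paperclips produced,
# and the agent values each paperclip equally.
utility = lambda paperclips: paperclips

actions = {
    "safe_plan":  [(1.0, 10)],            # 10 paperclips for certain
    "risky_plan": [(0.5, 30), (0.5, 0)],  # 50/50 between 30 and 0
}

print(best_action(actions, utility))  # risky_plan (EU 15 beats EU 10)
```

Whatever the agent's goal, consistent choice under uncertainty pushes it toward this expected-utility form — which is why such structurally different goals still produce similar behavior.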

==See also==

*[[Cox's theorem]]
*[[Unfriendly AI]], [[Paperclip maximizer]]
*[[Instrumental values]]
*[[Dutch book argument]]

==External links==

* [http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ The Basic AI Drives] by Stephen M. Omohundro