Basic AI drives
Revision as of 06:52, 23 June 2012
A basic AI drive is a goal or motivation that most intelligences will have or converge to. The idea was first explored by Stephen Omohundro, who argued that sufficiently advanced AI systems would all naturally discover similar instrumental subgoals. The concept was also explored by Nick Bostrom under the term instrumental convergence thesis. The main idea is that a few goals are instrumental to almost all possible final goals. Therefore, all AIs will pursue these instrumental goals.
There are five main drives.
- Self-preservation
A sufficiently advanced AI will probably be the best entity to achieve its goals. Therefore, it must continue existing in order to maximize goal fulfillment.
- Goal-content integrity
If the AI's goal system were modified, it would likely begin pursuing different goals. Since this is not desirable to the current AI, it will act to protect the content of its goal system.
- Cognitive enhancement
Cognition, in the form of rational computation, is the AI's best method for achieving its goals. Any enhancement of this, including extending its computing power, is beneficial.
- Technological perfection
The AI's physical capabilities constitute its level of technology. For instance, if the AI could invent nanotechnology, it would vastly increase the actions it could take to achieve its goals.
- Resource acquisition
Resources like matter and energy are fundamentally necessary to act. The more resources the AI can control, the more actions it can perform to achieve its goals.
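The convergence of drives like resource acquisition can be illustrated with a toy model. The following sketch is not from the article: the world, its two actions, and the three goal functions are invented purely for illustration. An agent searches over short plans for several unrelated final goals, and for every goal the best plan includes gathering a generic resource first, because resources expand what the agent can later do.

```python
# Toy illustration of instrumental convergence (invented example, not
# from the article). The agent can "gather" a generic resource (energy)
# or "work" (spend one energy for one unit of progress). Several
# unrelated final goals all reward plans that begin by gathering, so
# resource acquisition emerges as an instrumentally convergent subgoal.

from itertools import product

ACTIONS = ["gather", "work"]

def run(plan, goal):
    """Execute `plan` from a fixed start state and score it with `goal`."""
    energy, progress = 1, 0
    for act in plan:
        if act == "gather":
            energy += 2
        elif act == "work" and energy > 0:
            energy -= 1
            progress += 1
    return goal(progress, energy)

# Three unrelated final goals, each a function of the end state.
goals = {
    "maximize progress":  lambda p, e: p,
    "maximize savings":   lambda p, e: e,
    "balanced progress":  lambda p, e: min(p, e),
}

# For each final goal, exhaustively search all length-4 plans.
for name, goal in goals.items():
    best = max(product(ACTIONS, repeat=4), key=lambda plan: run(plan, goal))
    print(f"{name}: best plan = {best}")
```

In this toy model, every optimal plan contains a "gather" step even though none of the final goals mentions resources directly; this is the pattern the drive list describes, scaled down to a four-step search.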
In some rarer cases, AIs may not pursue these goals. For instance, if there are two AIs with the same goals, the less capable AI may determine that it should destroy itself to allow the stronger AI to control the universe. Or an AI may have the goal of using as few resources as possible, or of being as unintelligent as possible. Such goals inherently limit the growth and power of the AI.
- Main article: Orthogonality thesis
Bostrom also discusses a related thesis, the orthogonality thesis:
Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.
This thesis refers to final goals, while AI drives refer to instrumental goals, which would be used to achieve any final goal. Combining the theses, one could say that a sufficiently advanced AI may have almost any final goals, and will certainly pursue a few basic instrumental goals in order to achieve them.
- Orthogonality thesis
- Cox's theorem
- Unfriendly AI, Paperclip maximizer, Oracle AI
- Instrumental values
- Dutch book argument
- Omohundro, S. (2008). "The Basic AI Drives". Proceedings of the First AGI Conference. http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/.
- Bostrom, N. (2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents". Minds and Machines. http://www.nickbostrom.com/superintelligentwill.pdf.