FAI-complete

A problem is Friendly AI-complete if solving it is equivalent to creating Friendly AI.

The Machine Intelligence Research Institute argues that any safe superintelligence architecture is FAI-complete. For example, the following have been proposed as hypothetical safe AGI designs:

  • Oracle AI - an AGI which takes no action besides answering questions.
  • Tool AI - an AGI which isn't an independent decision-maker at all, but is rather "just a calculator".
  • Nanny AI - an AGI of limited superintelligence, restricted to preventing more advanced AGIs from arising until safer AGIs are developed.

In "Dreams of Friendliness" Eliezer Yudkowsky argues that if you have an Oracle AI, then you can ask it, "What should I do?" If it can answer this question correctly, then it is FAI-complete.

Similarly, a tool AI must make extremely complex decisions about how many resources it may use, how to display answers in a form that is both understandable to humans and accurate, et cetera. The many ways in which those choices could go catastrophically wrong make the problem of building a safe tool AI FAI-complete. Note that this does not imply that an agent-like, fully autonomous FAI is easier to create than any of the other proposals.
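
As a rough, hypothetical illustration of the same point (not from the article): even a "just a calculator" tool AI has to be configured with choices such as a compute budget and a criterion for presenting answers, and those parameters quietly encode judgments about what counts as an acceptable answer. The ToolConfig and run_tool names below are invented for illustration.

    # Hypothetical sketch: a "calculator-like" tool AI still exposes
    # parameters whose settings are themselves consequential decisions.

    from dataclasses import dataclass

    @dataclass
    class ToolConfig:
        compute_budget_seconds: float  # how many resources may it consume?
        max_answer_length: int         # how much detail counts as understandable?
        omit_low_confidence: bool      # does hiding uncertain results stay accurate?

    def run_tool(query: str, config: ToolConfig) -> str:
        """Placeholder for a hypothetical tool AI; the point is the configuration
        decisions, not the computation itself."""
        raise NotImplementedError("stand-in for a tool AI")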

Goertzel proposed a "Nanny AI" (Should humanity build a global AI nanny to delay the singularity until it’s better understood?) with moderately superhuman intelligence, able to forestall the Singularity indefinitely or at least delay it. However, Luke Muehlhauser and Anna Salamon argue (Intelligence Explosion: Evidence and Import) that a Nanny AI is FAI-complete: building it could require solving all of the problems involved in building Friendly Artificial Intelligence.

Blog posts

References

See also