FAI-complete

From Lesswrongwiki
Revision as of 07:18, 26 June 2012

A problem is FAI-complete if solving it is equivalent to creating Friendly AI (FAI). Friendliness theory is one proposal among many for ushering in safe AGI technology. The Singularity Institute argues that any safe superintelligence architecture is FAI-complete. For example:

In "Dreams of Friendliness", Eliezer Yudkowsky argues that if you have an Oracle AI, you can ask it "What should I do?" If the Oracle can answer this question correctly, then it is FAI-complete. Similarly, a tool AI must make extremely complex decisions: how many resources it may use, how to display answers in a form that is both human-understandable and accurate, et cetera. The many ways in which it could choose catastrophically make it FAI-complete as well. Note that this does not imply that an agent-like, fully autonomous FAI is easier to create than any of the other proposals.

==Blog posts==

*[http://lesswrong.com/lw/tj/dreams_of_friendliness/ Dreams of Friendliness]
*[http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/ Reply to Holden on 'Tool AI']
*[http://lesswrong.com/lw/any/a_taxonomy_of_oracle_ais/ A Taxonomy of Oracle AIs]

==See also==