Difference between revisions of "AGI Chaining"

From Lesswrongwiki
Revision as of 11:20, 20 June 2012

Chaining God is Stuart Armstrong's term for his proposed method of maintaining control over a superhuman AGI. If an agent we create is vastly more intelligent than ourselves, we would have difficulty comprehending its thought processes and plans, making it hard for us to trust it. However, we may be able to meaningfully communicate with a mildly superhuman intelligence. When that AGI produces a more capable successor, it would itself remain at its current level so that it can continue to communicate with its human creators. As this step is iterated toward the level of a "godlike" AGI, the entity at each level could understand and trust the entity directly above itself.

Some major potential issues of this approach include breakdown of the chain of trust, and opportunity costs due to the inefficiency of requiring consent from the bottom of the chain before the top of the chain may act.
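The consent requirement above can be pictured as approval propagating through every link of the chain before the top may act. The following toy model is purely illustrative and not part of Armstrong's proposal; all class and function names (`ChainLink`, `execute`) are hypothetical, and the per-link vetting step is a stub:

```python
# Toy model of the chained-consent idea: each link trusts only the link
# directly below it, and an action proposed at the top executes only once
# every level, down to the human at the bottom, has approved it.

class ChainLink:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below  # the less-capable entity this link reports to

    def approves(self, action):
        # Placeholder trust check: each link vets the action itself, then
        # defers to the link below. A real scheme would need genuine
        # verification rather than a stub that always passes.
        locally_ok = True
        if self.below is None:
            return locally_ok  # the human at the bottom of the chain
        return locally_ok and self.below.approves(action)

def execute(top, action):
    # The top entity may act only after consent has propagated up
    # from the bottom of the chain.
    if top.approves(action):
        return "executed: " + action
    return "blocked: " + action

# Build a chain: human <- mildly superhuman AGI <- stronger AGI <- "godlike" AGI
human = ChainLink("human")
agi_1 = ChainLink("AGI level 1", below=human)
agi_2 = ChainLink("AGI level 2", below=agi_1)
god_agi = ChainLink("godlike AGI", below=agi_2)

print(execute(god_agi, "deploy plan X"))
```

The recursion also makes the opportunity-cost problem concrete: every action at the top incurs a full traversal of the chain before anything happens.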

==See also==

==References==