'''Chaining God''' is Stuart Armstrong's term for his proposed method of maintaining control over a superhuman AGI. If an agent we create is vastly more intelligent than we are, we would have difficulty comprehending its thought processes and plans, and would therefore have reason to distrust it. However, we may be able to communicate meaningfully with a mildly superhuman intelligence. When that AGI produces a slightly more intelligent successor, it remains at its own level so that it can continue to communicate with its human creators while monitoring the new entity. Iterating this step up to the level of a "godlike" AGI yields a chain in which the entity at each level can understand and trust the entity directly above it.
 
Major potential issues with this approach include a breakdown of the chain of trust at any single link, and the opportunity cost of requiring consent from the bottom of the chain before the top of the chain may act.
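The chain and its main failure mode can be pictured as an invariant over adjacent links. The following toy sketch is not from Armstrong's paper: the numeric levels, the <code>COMPREHENSION_GAP</code> threshold, and every name in it are hypothetical devices to illustrate how trust propagates link by link, and how one oversized gap breaks the whole chain.

<syntaxhighlight lang="python">
from dataclasses import dataclass

# Hypothetical: the largest capability gap across which one level
# can still audit and comprehend the level directly above it.
COMPREHENSION_GAP = 1

@dataclass
class Agent:
    name: str
    level: int  # abstract intelligence level; 0 = human

def build_chain(top_level: int) -> list[Agent]:
    # Each entity creates a successor only slightly above itself,
    # so every link in the chain stays comprehensible from below.
    return [Agent(f"level-{i}", i) for i in range(top_level + 1)]

def chain_trusts(chain: list[Agent]) -> bool:
    # Trust must propagate from the humans at the bottom upward:
    # each agent vouches for its immediate superior only if the
    # gap is small enough to audit. A single broken link breaks
    # the whole chain of trust (the failure mode noted above).
    return all(upper.level - lower.level <= COMPREHENSION_GAP
               for lower, upper in zip(chain, chain[1:]))

chain = build_chain(top_level=5)
print(chain_trusts(chain))      # True: every link is auditable
chain[3] = Agent("runaway", 9)  # one entity self-improves too far
print(chain_trusts(chain))      # False: trust breaks at that link
</syntaxhighlight>

The bottom-up check also makes the opportunity cost visible: the top agent may not act until every lower link has vouched for the one above it, however capable the top of the chain has become.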
 
 
 
==See also==

*[[Friendly AI]]
*[[Coherent Extrapolated Volition]]

==References==

*Armstrong, Stuart (2007). [http://www.neweuropeancentury.org/GodAI.pdf Chaining God: A qualitative approach to AI, trust and moral systems].

[[Category:Concepts]]
 
