Nanny AI

From Lesswrongwiki
Revision as of 01:40, 11 July 2012

Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity while protecting and nurturing humanity. A Nanny AI has been proposed as a way to reduce the risks associated with the Singularity by delaying it, either until a predetermined time has passed, until predetermined conditions are met, or permanently. Delaying the Singularity would allow further research and reflection on our values.

Ben Goertzel has suggested a number of preliminary components for building a Nanny AI:

  • A somewhat smarter-than-human Artificial General Intelligence
  • A global surveillance network tied to the Nanny AI
  • Control of all robots given to the Nanny AI
  • A reluctance to change its goals, increase its intelligence, or act against humanity's extrapolated desires
  • The ability to reinterpret its goals at human prompting
  • A mandate to prevent any technological development that would hinder it
  • A mandate to yield control to another AI at a predetermined time
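The behavioral constraints in the list above can be caricatured in a short Python sketch. Everything here is an invented illustration (the class name, the approval threshold, the timestamps are all hypothetical stand-ins), not part of Goertzel's proposal; it only shows, in toy form, what "reluctance to change goals" and "yielding control at a predetermined time" might mean as explicit rules:

```python
import time

class NannyStub:
    """Toy stand-in for a Nanny AI controller. Purely illustrative."""

    def __init__(self, goals, handoff_time):
        self._goals = list(goals)          # fixed goal set
        self._handoff_time = handoff_time  # predetermined handoff (epoch seconds)
        self._in_control = True

    def request_goal_change(self, new_goals, human_approvals):
        # "Reluctant to change its goals" / "reinterpret its goals at human
        # prompting": only accept a change with broad human sign-off.
        # The threshold of 3 approvals is an arbitrary stand-in.
        if len(human_approvals) >= 3:
            self._goals = list(new_goals)
            return True
        return False

    def tick(self, now=None):
        # "Yield control to another AI at a predetermined time":
        # relinquish control once the handoff time is reached.
        now = time.time() if now is None else now
        if now >= self._handoff_time:
            self._in_control = False
        return self._in_control
```

A real Nanny AI would of course not reduce to a handful of `if` statements; the sketch only makes the point that the listed components are constraints on behavior, not capabilities.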

In a paper to be published in The Singularity Hypothesis, Luke Muehlhauser and Anna Salamon suggest that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a Friendly Artificial Intelligence.

Ben Goertzel suggests that a Nanny AI may be a necessary evil to prevent disasters arising from the development of advanced technology. He acknowledges that constructing a Nanny AI would be risky, but argues that it may be better than the alternative.
