Nanny AI

Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity while protecting and nurturing humanity. It has been proposed either as a way to permanently prevent the dangers associated with the Singularity, or as a way to delay the Singularity until a set amount of time has passed or certain conditions are met. Delaying the Singularity would allow time for further research and reflection on our values.

Ben Goertzel has suggested a number of preliminary components for building a Nanny AI (an illustrative sketch follows the list):

  • A somewhat smarter-than-human Artificial General Intelligence
  • A global surveillance network tied to the Nanny AI
  • Control of all robots given to the Nanny AI
  • Strong inhibitions against changing its own goals, increasing its own intelligence, or acting against humanity's extrapolated desires
  • The ability to reinterpret its goals at human prompting
  • Prevention of any technological development that would hinder it
  • Yielding control to another AI at a predetermined time
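As a purely illustrative sketch, and not part of Goertzel's proposal, the constraints above can be pictured as explicit checks the system enforces on its own actions. Every name in the Python below (NannyConstraints, handover_time, and so on) is hypothetical:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NannyConstraints:
    """Toy encoding of the components listed above; all names are hypothetical."""
    handover_time: datetime                 # predetermined time to yield control
    goals: list = field(default_factory=list)

    def may_modify_goals(self, requested_by_human: bool) -> bool:
        # Goals may be reinterpreted only at explicit human prompting.
        return requested_by_human

    def may_increase_intelligence(self) -> bool:
        # Inhibition against rapidly increasing its own intelligence.
        return False

    def should_yield_control(self, now: datetime) -> bool:
        # Cede control to a successor AI at the predetermined time.
        return now >= self.handover_time

# Example: the nanny refuses unprompted goal changes and is not yet due to hand over.
nanny = NannyConstraints(
    handover_time=datetime(2100, 1, 1, tzinfo=timezone.utc),
    goals=["protect humanity"],
)
assert not nanny.may_modify_goals(requested_by_human=False)
assert not nanny.may_increase_intelligence()
assert not nanny.should_yield_control(datetime.now(timezone.utc))

The sketch only illustrates that each listed component amounts to a constraint the system checks before acting; specifying such checks correctly is exactly the difficulty raised below.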

In a paper to be published in The Singularity Hypothesis, Luke Muehlhauser and Anna Salamon suggest that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a Friendly Artificial Intelligence.

Ben Goertzel suggests that a Nanny AI may be a necessary evil to prevent disasters arising from developing technologies. He acknowledges that constructing a Nanny AI would be risky, but holds that it is better than the alternative.
