'''Nanny AI''' is a form of [[Artificial General Intelligence]] proposed by [[Ben Goertzel]] to delay the [[Singularity]] while protecting and nurturing humanity. It is intended to reduce the [[Existential risk|risks]] associated with the Singularity by delaying it until a predetermined time has passed, until predetermined conditions are met, or permanently. Delaying the Singularity would allow time for further research and reflection on our [[Complexity of value|values]], and time to build a [[friendly artificial intelligence]].
  
Ben Goertzel has suggested a number of preliminary components for building a Nanny AI (see the illustrative sketch after the list):

* A mildly superhuman Artificial General Intelligence
* A global surveillance network tied to the Nanny AI
* Final control of all robots given to the Nanny AI
* Reluctance to change its goals, increase its intelligence, or act against humanity's extrapolated desires
* The ability to reinterpret its goals at human prompting
* Prevention of any technological development that would hinder it
* A commitment to yield control to another AI at a predetermined time
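The following is a minimal sketch of how such inhibitions could be encoded as explicit checks in an action-vetting loop. It is a toy illustration in Python, not anything from Goertzel's proposal: every name and number in it (<code>NannyConstraints</code>, <code>Action</code>, the 2100 handoff year, the 1.05 growth cap) is a hypothetical assumption made for the example.

 # Toy illustration only: encodes the listed inhibitions as checkable
 # predicates. All names and numbers are hypothetical, not from any paper.
 from dataclasses import dataclass
 
 @dataclass
 class Action:
     modifies_goals: bool = False
     intelligence_gain: float = 1.0       # multiplicative change per cycle
     against_human_desires: bool = False  # violates extrapolated human desires
     hinders_nanny: bool = False          # technology that would hinder the Nanny
 
 @dataclass
 class NannyConstraints:
     handoff_year: int = 2100             # predetermined time to cede control
     max_intelligence_gain: float = 1.05  # cap on self-improvement per cycle
 
     def permits(self, action: Action) -> bool:
         """True only if the action violates none of the inhibitions."""
         return not (action.modifies_goals
                     or action.intelligence_gain > self.max_intelligence_gain
                     or action.against_human_desires
                     or action.hinders_nanny)
 
     def should_yield_control(self, year: int) -> bool:
         return year >= self.handoff_year  # yield to a successor AI on schedule
 
 nanny = NannyConstraints()
 assert nanny.permits(Action())                         # ordinary action passes
 assert not nanny.permits(Action(modifies_goals=True))  # goal change is blocked
 assert nanny.should_yield_control(2101)                # hands off after 2100

The point of the sketch is only that each listed component reduces to a checkable condition; the hard part, as the Muehlhauser and Salamon paper cited below argues, is specifying those conditions safely in the first place.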
  
In a paper by [http://lukeprog.com/ Luke Muehlhauser] and [[Anna Salamon]] published in ''The Singularity Hypothesis'', it was suggested that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a [[Friendly Artificial Intelligence]]. Ben Goertzel suggests that a Nanny AI may be a necessary evil to prevent disasters arising from developing technologies, though he acknowledges that it poses risks of its own.
  
 
==References==
 
*[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.3966&rep=rep1&type=pdf Should humanity build a global AI nanny to delay the singularity until it’s better understood?] by Ben Goertzel
*[http://hplusmagazine.com/2011/04/20/mitigating-the-risks-of-artificial-superintelligence/ Mitigating the Risks of Artificial Superintelligence]
*[http://hplusmagazine.com/2011/08/17/does-humanity-need-an-ai-nanny/ Does Humanity Need an AI Nanny?] by Ben Goertzel
*{{cite book
| last1 = Muehlhauser
| first1 = Luke
| last2 = Salamon
| first2 = Anna
| contribution = Intelligence Explosion: Evidence and Import
| year = 2012
| title = The singularity hypothesis: A scientific and philosophical assessment
| editor1-last = Eden
| editor1-first = Amnon
| editor2-last = Søraker
| editor2-first = Johnny
| editor3-last = Moor
| editor3-first = James H.
| editor4-last = Steinhart
| editor4-first = Eric
| place = Berlin
| publisher = Springer
| contribution-url = http://intelligence.org/files/IE-EI.pdf
}}
