Nanny AI


Revision as of 01:36, 7 October 2014

Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity while protecting and nurturing humanity. It was proposed as a means of reducing the risks associated with the Singularity by delaying it until a predetermined time has passed, until predetermined conditions are met, or permanently. Delaying the Singularity would allow time for further research and reflection about our values, and time to build a Friendly Artificial Intelligence.

Ben Goertzel has suggested a number of preliminary components for building a Nanny AI:

  • A mildly superhuman Artificial General Intelligence
  • A global surveillance network tied to the Nanny AI
  • Final control of all robots given to the Nanny AI
  • A reluctance to change its goals, increase its intelligence, or act against humanity's extrapolated desires
  • The ability to reinterpret its goals at human prompting
  • The prevention of any technological development that would hinder it
  • Yielding control to another AI at a predetermined time
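The delay criteria above (yield control at a predetermined time, when predetermined conditions are met, or never) can be illustrated with a toy sketch. This is purely an illustrative assumption of mine, not part of Goertzel's proposal: the class name, fields, and method are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Toy model of the handoff criteria: yield control to another AI at a
# predetermined time, when predetermined conditions are met, or never.
# All names here are illustrative assumptions, not an actual design.

@dataclass
class HandoffPolicy:
    permanent: bool = False                  # delay the Singularity indefinitely
    handoff_time: float = float("inf")       # predetermined handoff time
    conditions: List[Callable[[], bool]] = field(default_factory=list)

    def may_yield_control(self, now: float) -> bool:
        """True once the deadline has passed or some condition holds."""
        if self.permanent:
            return False
        return now >= self.handoff_time or any(c() for c in self.conditions)

policy = HandoffPolicy(handoff_time=100.0,
                       conditions=[lambda: False])  # e.g. "a verified FAI exists"
print(policy.may_yield_control(now=50.0))    # False: too early, condition unmet
print(policy.may_yield_control(now=150.0))   # True: deadline has passed
```

The disjunction mirrors the article's "predetermined time has passed, predetermined conditions are met, or permanently" wording; a real proposal would of course hinge on the unsolved problem of specifying such conditions safely.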

In a paper by Luke Muehlhauser and Anna Salamon to be published in The Singularity Hypothesis, it was suggested that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a Friendly Artificial Intelligence. Ben Goertzel suggests that a Nanny AI may be a necessary evil to prevent disasters arising from the development of other technologies, though he acknowledges that it poses risks of its own.