Nanny AI

From Lesswrongwiki
Revision as of 07:39, 30 June 2012 by TerminalAwareness (talk | contribs) (Article Creation; First draft)

Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity and protect humanity from its dangers. It has been proposed either as a permanent safeguard against the risks associated with the Singularity, or as a way to postpone the Singularity until a specified time has passed or certain conditions are met. Delaying the Singularity would allow more time for research and for reflection on our moral values.

Ben Goertzel has suggested a number of preliminary components for building a Nanny AI. He proposes a somewhat-smarter-than-human Artificial General Intelligence tied into a global surveillance network and placed in control of humanity's robots. It would be preprogrammed with inhibitions against modifying its own goals, rapidly increasing its intelligence, or acting against humanity's wishes. It would remain open to the possibility that it had misinterpreted its initial goals, and would cede control to another AI at a specified time. Until then, it would block the development of any technology that could stop it from carrying out its goals.

In a paper by Luke Muehlhauser and Anna Salamon to be published in The Singularity Hypothesis, it is suggested that building a safe Nanny AI would require solving most, if not all, of the problems that must be solved in building a Friendly Artificial Intelligence.