Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity while protecting and nurturing humanity. It was proposed as a means of reducing the risks associated with the Singularity by postponing it until a predetermined time has passed, until predetermined conditions are met, or indefinitely. Delaying the Singularity would allow further research and reflection about our values, and time to build a Friendly Artificial Intelligence.
Ben Goertzel has suggested a number of preliminary components for building a Nanny AI (a toy sketch of how some of these might be represented follows the list):
- A mildly superhuman Artificial General Intelligence
- A global surveillance network tied to the Nanny AI
- Final control of all robots given to the Nanny AI
- Reluctance to change its goals, increase its intelligence, or act against humanity's extrapolated desires
- The ability to reinterpret its goals at human prompting
- Prevention of any technological development that would hinder it
- Yielding control to another AI at a predetermined time
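The items above are design desiderata rather than a specification, but the constraint-like ones can be pictured as an explicit, human-auditable policy that the system consults before acting. The toy Python sketch below is only an illustration of that framing; the class, field, and method names (NannyPolicy, handover_date, should_yield_control, and so on) are hypothetical and are not taken from Goertzel's proposal.

```python
from dataclasses import dataclass
from datetime import date


# Purely illustrative: the listed desiderata encoded as an explicit,
# human-auditable policy object. All names here are hypothetical and
# are not drawn from Goertzel's proposal.
@dataclass(frozen=True)  # frozen mirrors the "reluctant to change its goals" item
class NannyPolicy:
    capability_ceiling: str = "mildly superhuman"    # no open-ended self-improvement
    goal_changes_need_human_prompt: bool = True      # goals reinterpreted only at human prompting
    may_oppose_extrapolated_desires: bool = False    # must not act against humanity's extrapolated desires
    blocks_tech_that_could_disable_it: bool = True   # prevents development that would hinder it
    handover_date: date = date(2500, 1, 1)           # predetermined time to yield control

    def should_yield_control(self, today: date) -> bool:
        """Cede control to a successor AI once the predetermined date has passed."""
        return today >= self.handover_date

    def self_modification_permitted(self, prompted_by_human: bool) -> bool:
        """Allow goal reinterpretation or capability increases only at human prompting."""
        if self.goal_changes_need_human_prompt:
            return prompted_by_human
        return True


if __name__ == "__main__":
    policy = NannyPolicy()
    print(policy.should_yield_control(date(2100, 1, 1)))                # False: handover date not reached
    print(policy.self_modification_permitted(prompted_by_human=False))  # False: no human prompt
```

The point of the sketch is only that several of the desiderata, such as the handover date and the human-prompting requirement, are in principle expressible as checkable conditions; the hard part, as the next paragraph notes, is getting a powerful system to actually respect such a policy.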
In a paper published in The Singularity Hypothesis, Luke Muehlhauser and Anna Salamon suggest that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a Friendly Artificial Intelligence. Ben Goertzel argues that a Nanny AI may nevertheless be a necessary evil for preventing disasters arising from the development of advanced technologies, though he acknowledges that it poses risks of its own.
References
- [Should humanity build a global AI nanny to delay the singularity until it’s better understood?](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.3966&rep=rep1&type=pdf) by Ben Goertzel
- [Mitigating the Risks of Artificial Superintelligence](http://hplusmagazine.com/2011/04/20/mitigating-the-risks-of-artificial-superintelligence/)
- [Does Humanity Need an AI Nanny?](http://hplusmagazine.com/2011/08/17/does-humanity-need-an-ai-nanny/) by Ben Goertzel
- Muehlhauser, Luke; Salamon, Anna (2012). "[Intelligence Explosion: Evidence and Import](http://intelligence.org/files/IE-EI.pdf)". In Eden, Amnon; Søraker, Johnny; Moor, James H. et al. The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.