Intelligence explosion

{{quote|Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind.|I.J. Good|[http://www.aeiveos.com/~bradbury/Authors/Computing/Good-IJ/SCtFUM.html "Speculations Concerning the First Ultraintelligent Machine"]}}
An '''intelligence explosion''' is a hypothetical strong positive feedback loop in which a recursively self-improving intelligence makes itself smarter at making itself smarter, resulting in a very fast, very dramatic leap in capability. An AI undergoing a [[hard takeoff]] intelligence explosion might (to pick just one possibility for illustrative purposes) invent molecular nanotechnology, use the internet to gain physical manipulators, deploy the nanotech, and reach [[singleton]] status within a matter of weeks. Recursive self-improvement would be a genuinely new phenomenon on Earth. Humans study, and human societies accumulate new technologies and ways of doing things, but we don't directly redesign our brains. A cleanly-designed [[seed AI]] could redesign itself, and reap the benefits of recursive self-improvement.
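The "smarter at making itself smarter" dynamic can be made concrete with a toy model. The sketch below is purely illustrative and not part of the article's claims: the capability variable <code>I</code>, the returns parameter <code>k</code>, and the quadratic growth law are all assumptions, chosen only to show how a loop in which gains accelerate further gains outruns ordinary exponential growth.

<source lang="python">
# Toy model of recursive self-improvement (illustrative assumptions only).
# Assume capability I improves at a rate proportional to I squared: each
# gain in intelligence also speeds up the search for the next gain. The
# continuous version, dI/dt = k * I^2, grows hyperbolically and diverges
# in finite time, unlike the merely exponential dI/dt = k * I.

I = 1.0   # hypothetical starting capability (arbitrary units)
k = 0.1   # hypothetical returns per unit of self-improvement effort

for generation in range(12):
    I += k * I * I  # improvement rate scales with capability squared
    print(f"generation {generation + 1}: capability = {I:.2f}")
</source>

Under these assumptions capability takes six generations just to double, then reaches roughly twenty times its starting value by generation twelve. The point of the sketch is only that "rate of improvement grows with current capability" yields runaway dynamics, not that any particular growth law is correct.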
[[Eliezer Yudkowsky]] calls the intelligence explosion one of [http://yudkowsky.net/singularity/schools three major Singularity schools].
  
 
==Blog posts==
 
Posts by Eliezer Yudkowsky:

==See also==