Intelligence explosion

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind.
(I. J. Good, "Speculations Concerning the First Ultraintelligent Machine", 1965)

An intelligence explosion is a hypothetical strong positive feedback loop in which a recursively self-improving intelligence makes itself smarter at making itself smarter, resulting in a very fast, very dramatic leap in capability. An AI undergoing a hard takeoff intelligence explosion might (to pick just one possibility for illustrative purposes) invent molecular nanotechnology, use the internet to gain access to physical manipulators, deploy the nanotech, and reach singleton status within a matter of weeks.

Recursive self-improvement would be a genuinely new phenomenon on Earth. Humans study, and human societies accumulate new technologies and ways of doing things, but we don't directly redesign our brains. A cleanly designed seed AI could redesign itself, and reap the benefits of recursive self-improvement.
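
The difference between accumulating improvements at a fixed rate and improvements that feed back into the rate of further improvement can be made concrete with a toy numerical sketch. The Python below is purely illustrative: the growth law dC/dt = rate * C^p and every parameter in it are assumptions chosen only to show the shape of the curves, not a model of actual AI development.

    # A deliberately simple toy model of the feedback-loop intuition, not a claim
    # about how real AI development behaves. Capability C grows by Euler-stepping
    # dC/dt = rate * C**growth_exponent; every parameter is an arbitrary
    # illustrative assumption.

    def simulate(growth_exponent, rate=0.05, capability=1.0, dt=1.0,
                 max_steps=200, ceiling=1e12):
        """Return (trajectory, step at which capability first exceeded ceiling, or None)."""
        trajectory = [capability]
        for step in range(1, max_steps + 1):
            capability += rate * capability ** growth_exponent * dt
            trajectory.append(capability)
            if capability > ceiling:
                return trajectory, step
        return trajectory, None

    # growth_exponent = 0: improvement arrives at a fixed external rate, roughly
    #   analogous to accumulating tools without redesigning the designer.
    # growth_exponent = 1: ordinary exponential growth.
    # growth_exponent > 1: each gain speeds up further gains, and capability
    #   diverges after a finite number of steps (the "explosion" shape).
    for exponent in (0.0, 1.0, 1.5):
        trajectory, blow_up = simulate(growth_exponent=exponent)
        if blow_up is None:
            print(f"exponent {exponent}: capability {trajectory[-1]:,.1f} after 200 steps")
        else:
            print(f"exponent {exponent}: capability passed 1e12 at step {blow_up}")

With an exponent of 0 capability grows linearly, with an exponent of 1 it grows exponentially, and only with an exponent above 1 does the curve run away after finitely many steps; that last case is the shape the intelligence-explosion argument points at.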

Eliezer Yudkowsky identifies the intelligence explosion as one of the three major Singularity schools.

Blog posts

Posts by Eliezer Yudkowsky:

See also